QoS Issues and Architectures

With VoIP, QoS must address issues such as bandwidth, delay, jitter, and packet loss in order to provide an acceptable level of service. Cisco QoS features provide the following advantages:

  • Efficient use and control of network resources

  • QoS that can be adapted to customers and their applications

  • The ability for VoIP and mission-critical traffic to coexist side by side

To address QoS issues, you first need to classify your traffic based on its bandwidth, delay, jitter, and loss requirements. Based on this information, you then implement one or more QoS solutions so that important traffic gets its necessary level of service. As you'll see throughout the rest of this chapter, Catalyst switches and routers support many QoS features that allow for traffic prioritization.

Exam Alert

Some QoS features prioritize traffic for you dynamically, whereas others require manual configuration, giving you the ability to create your own QoS policies. If you have three kinds of traffic, such as voice/video, transactional applications, and data transfers, they would typically be prioritized in that order.


Problems

QoS needs to deal with four basic problems: amount of bandwidth, delay, jitter, and packet loss. Bandwidth is the amount of throughput a connection needs to support its level of service. However, delay, jitter, and packet loss can also affect a connection's level of service. The next sections cover the last three items.

Delay

Delay is the amount of time it takes for a packet to travel from the source to the destination. Two general types of delay contribute to this total: fixed delay and variable delay. Fixed delay covers the time it takes to encapsulate and de-encapsulate information and to physically transfer the bits across the wire. Variable delay is introduced by the devices handling the traffic, where conditions such as congestion change over time. Here are the factors, for both types of delay, that your traffic is subject to:

  • Packetization The time it takes to segment information, sample and encode any signals, process the traffic, and then encapsulate the data in packets

  • Serialization The time it takes to encapsulate a packet in a frame and put the bits of a frame on a wire

  • Propagation The time it takes to transmit the bits of a frame across a wire to the next networking device

  • Processing The time it takes for a networking device to receive a frame, place the frame in the input queue, and take the frame from the input queue and place it in the output queue of the outbound interface

  • Queuing The time a packet stays in the output queue before being forwarded on the outbound interface to the next device

With VoIP traffic, it's important to minimize delay to prevent echo problems.
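
To get a feel for how the fixed-delay components add up, here is a rough sketch of the serialization and propagation arithmetic in Python. The frame size, link speed, and circuit distance are made-up values used only for illustration; this is not a Cisco tool or formula from the exam.

```python
# Rough sketch (illustrative only): serialization and propagation delay for one hop.

def serialization_delay_ms(frame_bytes: int, link_bps: float) -> float:
    """Time needed to clock every bit of the frame onto the wire."""
    return (frame_bytes * 8) / link_bps * 1000

def propagation_delay_ms(distance_km: float, speed_km_per_s: float = 200_000) -> float:
    """Time for the signal to travel the medium (roughly 2/3 the speed of light)."""
    return distance_km / speed_km_per_s * 1000

# A hypothetical 1500-byte frame on a 768 kbps link over a 100 km circuit
print(f"serialization: {serialization_delay_ms(1500, 768_000):.2f} ms")  # about 15.6 ms
print(f"propagation:   {propagation_delay_ms(100):.2f} ms")              # 0.50 ms
```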

Exam Alert

Remember the different types of delay in the preceding bulleted list, especially packetization, serialization, and propagation delays.


Jitter

Jitter, or delay variation, is the variation in delay between successively received packets. For example, if one packet takes 135ms to arrive and the next takes 128ms, the delay variation is 7ms. Cisco devices use a jitter buffer to reduce this problem: the buffer smooths out the differences before forwarding the traffic to the application, so the packets appear to arrive at a constant rate. The jitter buffer can dynamically adjust itself to changing delay variation. This kind of buffering is very important for voice and video traffic; otherwise, the conversation or picture appears choppy. If the jitter buffer has trouble handling incoming packets, one of the following problems is occurring:

  • Overrun The jitter buffer cannot resize itself quickly enough to absorb a change in delay variation, so arriving packets are dropped.

  • Underrun The variation in delay between packets becomes so large that the jitter buffer runs empty and cannot smooth out the delay variation, causing choppiness.

Either of these situations degrades the quality of a voice or video connection.
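
The delay-variation arithmetic from the example above is simple enough to show directly. This is a minimal sketch; the per-packet delays are hypothetical measurements, not output from any Cisco feature.

```python
# Minimal sketch: delay variation (jitter) between consecutive packets.

delays_ms = [135, 128, 131, 140]  # hypothetical one-way delays per packet

jitter_ms = [abs(later - earlier) for earlier, later in zip(delays_ms, delays_ms[1:])]
print(jitter_ms)  # [7, 3, 9] -- the 7 matches the 135 ms vs. 128 ms example
```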

Packet Loss

Packet loss occurs when a networking device has to drop packets, typically because of a queuing problem. Queuing occurs on both the ingress (entering an interface) and egress (leaving an interface) of a networking device, but most queuing problems occur on the egress because of congestion. With egress queuing and congestion, tail drop is the most common form of packet loss: packets are queued as they arrive, but once the queue fills up, any additional arriving packets are dropped. Specialized queuing and congestion avoidance methods should be implemented to protect loss-sensitive traffic, such as voice and video.
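
To make the tail-drop behavior concrete, here is a small illustrative sketch of an egress queue with a fixed depth. It is a conceptual model only, not how IOS implements its queues; the queue depth and packet count are arbitrary.

```python
from collections import deque

class EgressQueue:
    """Conceptual fixed-depth FIFO queue that tail-drops when full."""

    def __init__(self, depth: int):
        self.packets = deque()
        self.depth = depth
        self.tail_drops = 0

    def enqueue(self, packet) -> bool:
        if len(self.packets) >= self.depth:
            self.tail_drops += 1  # queue is full: the arriving packet is dropped
            return False
        self.packets.append(packet)
        return True

    def dequeue(self):
        return self.packets.popleft() if self.packets else None

queue = EgressQueue(depth=3)
for number in range(5):           # burst of 5 packets into a 3-packet queue
    queue.enqueue(number)
print(len(queue.packets), queue.tail_drops)  # 3 queued, 2 tail-dropped
```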

If you're experiencing ingress packet loss from ignore, input, no buffer, or overrun errors, you probably need to upgrade your hardware to deal with these problems.

Exam Alert

When dealing with VoIP, packet loss should be less than 1%, one-way delay should be less than 60ms per call leg, and jitter should be less than 20ms in order to provide a good-quality voice connection.


QoS Solutions

Deploying QoS solutions in your network addresses the three problems just discussed: delay, jitter, and packet loss. For delay-sensitive applications, QoS should be able to predict the amount of time it takes to transmit information between two devices and should ensure that delay and jitter are minimized. A prioritization scheme is typically used to prioritize time-sensitive traffic (voice and video) over traffic that isn't time-sensitive (data).

Likewise, for certain applications, such as data transfers, data loss is not acceptable because dropped information must be re-sent. With video and voice, some packets can be dropped without affecting the quality of the connection. Therefore, a QoS solution should provide enough bandwidth for applications and should balance packet loss based on the type of application being used.

A well-designed QoS solution should be able to deal with all of these issues. At best, it should avoid congestion altogether; at worst, it should manage congestion without affecting how applications function. When providing a solution, QoS typically has to deal with the following components:

  • Classification Sorts, or classifies, traffic into different distinct groups.

  • Marking Places information in a packet or frame indicating the priority (or class) of the information.

  • Forwarding Switches traffic from one interface to another (process switching, fast switching, and CEF).

  • Policing Compares received packets and determines whether they're following expected patterns (amount of bandwidth, jitter, delay, packet loss, and so on) or are breaking them. Packets breaking policing policies are typically dropped.

  • Queuing Examines the classification of traffic to determine how it should be placed in the egress queue.

  • Scheduling Determines how traffic should be processed from the egress queue.

  • Shaping Sends traffic out at a constant, even pace (essentially removing the jitter from a traffic stream and enforcing a bandwidth limit).

  • Dropping Drops packets in an intelligent way to reduce congestion, yet not cause a major problem with the connection.

Please note that all of these components have to be dealt with not just within a single networking device, but across all network devices between the beginning and end points of a connection.
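
The following sketch ties a few of these components together in Python: it classifies a packet, marks it with a DSCP value, and polices it with a token bucket. The class names, port numbers, DSCP values, and rates are illustrative assumptions, not a Cisco policy or algorithm.

```python
import time

# Hypothetical class-to-DSCP mapping (46 = voice, 26 = transactional, 0 = default)
DSCP_BY_CLASS = {"voice": 46, "transactional": 26, "bulk-data": 0}

def classify(packet: dict) -> str:
    """Sort a packet into one of the hypothetical traffic classes above."""
    if packet.get("udp_port") in range(16384, 32768):  # assumed RTP port range
        return "voice"
    if packet.get("tcp_port") == 1521:                  # assumed database application
        return "transactional"
    return "bulk-data"

class TokenBucket:
    """Simple policer: packets that conform to the rate pass; excess is dropped."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.byte_rate = rate_bps / 8.0
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, size_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.byte_rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False

policer = TokenBucket(rate_bps=128_000, burst_bytes=8_000)
packet = {"udp_port": 16384, "size": 200}

traffic_class = classify(packet)                # classification
packet["dscp"] = DSCP_BY_CLASS[traffic_class]   # marking
action = "forward" if policer.conforms(packet["size"]) else "drop"  # policing
print(traffic_class, packet["dscp"], action)    # voice 46 forward
```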

QoS Architectures

QoS architectures fall under one of three service models, as listed in Table 9.1. Best Effort service should be used only in environments where QoS is not needed. If you have voice and/or video traffic, you'll probably have to implement QoS solutions, especially if you experience temporary congestion problems in your network.

Table 9.1. QoS Architectures

  • Best Effort: Lacks QoS; uses first-in, first-out (FIFO) queuing. When to use: when QoS is not necessary.

  • Integrated Services (IntServ, or hard QoS): Reserves resources end to end for each connection via the Resource Reservation Protocol (RSVP). When to use: when absolute guarantees for traffic are required.

  • Differentiated Services (DiffServ, or soft QoS): Reserves resources on a hop-by-hop basis for traffic classifications through queuing and congestion avoidance techniques. When to use: for optimal (but not absolute) guarantees; costs less than IntServ and is easier to implement.

Exam Alert

Remember the Best Effort, IntServ, and DiffServ information in Table 9.1.


Best Effort

Best Effort delivery tries its best to get information to the destination in a timely fashion but doesn't provide any guarantees. It typically uses first-in, first-out (FIFO) queuing. FIFO doesn't provide any type of QoS: the first packet or frame received is the first one queued and forwarded. It is typically used for connections that don't require QoS, such as data transfers. FIFO is discussed in more depth later in this chapter.

IntServ

IntServ, defined in RFC 1633, provides a QoS guarantee for each application connection. This is different from DiffServ, which provides QoS based on traffic classifications rather than specific connections. IntServ is implemented using RSVP on all devices handling the connection, including the source and destination. RSVP uses signaling to set up the connection and to maintain its QoS. When a new connection is being established, RSVP has to determine which paths and devices will be used to support the connection. The Common Open Policy Service (COPS) protocol can be used to centralize the policy decisions involved in setting up and maintaining these reservations.

The two main problems with IntServ are that it is not very scalable (you must enable RSVP on every device in the path) and that each connection requires extra bandwidth for RSVP signaling. Its main advantage, however, is that it provides a guarantee for a data connection that DiffServ cannot. For example, if you have a hospital application that sets up connections between devices that transmit data in real time, and this data is monitoring someone's vital signs in an intensive care unit, you absolutely need to guarantee that each connection for this critical application is serviced so as not to cause any disruption of the data.
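
As a conceptual illustration of the reservation idea behind IntServ, the toy sketch below admits a new flow only if every hop along the path still has unreserved bandwidth for it. The hop names and capacities are made up, and real RSVP signaling (including tearing down partial reservations when admission fails) is not modeled.

```python
class Hop:
    """One device along the path with a fixed amount of reservable bandwidth."""

    def __init__(self, name: str, capacity_kbps: int):
        self.name = name
        self.capacity = capacity_kbps
        self.reserved = 0

    def admit(self, kbps: int) -> bool:
        if self.reserved + kbps > self.capacity:
            return False
        self.reserved += kbps
        return True

def reserve_path(path, kbps: int) -> bool:
    """A reservation succeeds only if every hop can admit the flow.
    (A real implementation would release partial reservations on failure.)"""
    return all(hop.admit(kbps) for hop in path)

path = [Hop("access", 1544), Hop("wan", 768), Hop("remote", 1544)]
print(reserve_path(path, 256))  # True: every hop admits the 256 kbps flow
print(reserve_path(path, 640))  # False: the 768 kbps WAN hop would be oversubscribed
```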

DiffServ

DiffServ uses a multiple-service model to implement QoS. With DiffServ, applications do not signal their QoS requirements before sending their data. Instead, DiffServ is implemented within your network infrastructure: routers and switches. This provides an advantage over IntServ because you don't need to modify any end stations.

DiffServ marks the Type of Service (ToS) field in the IP packet as well as the three Class of Service (CoS) bits in the tag field of an IEEE 802.1Q/p frame. When performing its marking, DiffServ can assign up to 64 traffic classifications, called Differentiated Services Code Points (DSCPs), which are used to prioritize traffic. In the ToS field, the six high-order bits carry the DSCP value and the two low-order bits are used to indicate congestion.

Each networking device along the way to the destination uses this information to handle the packet or frame, providing a hop-by-hop QoS implementation. This is different from IntServ, which implements QoS on a connection-by-connection basis. DiffServ is preferred in the campus backbone environment because it typically deals with types of traffic, versus the complex management of QoS on a connection-by-connection basis.
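
Here is a small sketch of the bit layout just described: the six high-order bits of the ToS byte carry the DSCP, and the two low-order bits indicate congestion. The DSCP value 46 used in the example is the codepoint commonly used for voice (Expedited Forwarding).

```python
# Pack and unpack the ToS byte: DSCP in the six high-order bits,
# congestion indication (ECN) in the two low-order bits.

def build_tos(dscp: int, ecn: int = 0) -> int:
    assert 0 <= dscp <= 63 and 0 <= ecn <= 3
    return (dscp << 2) | ecn

def extract_dscp(tos: int) -> int:
    return tos >> 2

tos = build_tos(dscp=46)       # 46 is commonly used for voice (EF)
print(tos, extract_dscp(tos))  # 184 46
```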

Assured Forwarding (AF), defined in RFC 2597, implements QoS per-hop behaviors. AF defines four classes, AF1x through AF4x, and a DSCP number is associated with each class. Within each class, a drop probability is assigned: low, medium, or high. For example, AF1 has three drop levels: AF11, AF12, and AF13. Based on a traffic classification's jitter, bandwidth, throughput, delay, and loss requirements, traffic is assigned to a particular class with an associated drop precedence. Networking devices then examine frames and packets for the DSCP numbers and know the class the traffic belongs to, along with the likelihood of that traffic being dropped during congestion.
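
The AF codepoints follow a simple pattern: the DSCP value for AFxy is 8 times the class plus 2 times the drop precedence (per RFC 2597). The short loop below just prints that arithmetic; no Cisco-specific behavior is modeled.

```python
# Assured Forwarding codepoints: DSCP(AFxy) = 8 * class + 2 * drop precedence.

DROP_NAMES = {1: "low", 2: "medium", 3: "high"}

for af_class in range(1, 5):
    for drop in range(1, 4):
        dscp = 8 * af_class + 2 * drop
        print(f"AF{af_class}{drop}  drop={DROP_NAMES[drop]:<6} DSCP={dscp}")
# AF11 -> 10, AF21 -> 18, AF31 -> 26, AF41 -> 34, and so on
```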

Expedited Forwarding (EF), defined in RFC 2598, describes how to use DiffServ to construct a premium, end-to-end QoS service that provides guaranteed bandwidth, low latency, low jitter, and low loss.


