Network QoS Techniques
The normal behavior of the TCP/IP family of protocols is to give all traffic best-effort delivery service. This works well for the transfer of computer data when the network is moderately loaded, but new applications have their own respective requirements for bandwidth, delay, jitter, and packet loss. QoS mechanisms help meet specific application requirements as network load increases, improving on TCP/IP's best-effort delivery.
QoS techniques comprise a mix of classification and handling mechanisms. For the purposes of discussion, these have been grouped into several categories:
Link-layer QoS techniques
— Link-layer, or Layer 2, QoS schemes influence the traffic handling on individual data links. For example, ATM has QoS incorporated into its architecture. In IEEE 802.1p/Q, bytes are inserted into each Ethernet frame to indicate the frame's priority. Ethernet switches can use this priority to decide which frames get switched ahead of others. Both of these schemes help at lower layers, but without some correlation to a higher-layer QoS mechanism, they may provide little value to application users, whose traffic needs to be handled consistently across all the data links in a connection. Cisco's Link Fragmentation and Interleaving (LFI) breaks up large packets so that a small voice packet does not get stuck behind a file-transfer packet on a WAN link.
Apply link-layer techniques to the "weakest links" in the network. Traffic can't flow across a network faster than the slowest or most congested link in the path.
IP QoS techniques
— The Layer 3 QoS schemes, RSVP, DiffServ, and MPLS, work to meet applications' network requirements from end to end. RSVP reserves resources to meet requirements for bandwidth, jitter, and delay for a particular connection through a series of routers. RSVP works best when connections are long (such as those used by streaming video) and when only a few connections at a time require reserved resources. DiffServ marks a relative priority in each IP packet, to be honored by each router that handles the packet. Still needed are ways to ensure the consistent handling of the priorities. Multiprotocol Label Switching (MPLS) sets up a virtual circuit through an IP network by prefixing each frame with 4 bytes that tell how to get to the next router in the path. MPLS is getting a lot of attention today from equipment vendors and service providers for the core of large networks.
IP QoS schemes treat different classes of traffic differently. They don't actually make one class of traffic move faster than another. Rather, they increase the likelihood that traffic in a premium class gets a better guarantee of bandwidth, a better priority within routers, or a better route through a network than traffic in a lower class. APIs are starting to enable applications to do their own prioritizing; for example, recent versions of Windows offer TCP/IP applications an application program interface (API) for requesting the QoS they need. However, these APIs are likely to be little used or ignored, because applications cannot necessarily be trusted, and network managers will want to look across the aggregate needs of all the applications on their networks when determining QoS schemes.
Queuing techniques
— In addition to QoS at Layers 2 and 3, routers and switches offer ways to prioritize traffic and handle congestion better. Examples include WFQ, CBWFQ, LLQ, and WRED. Weighted fair queuing (WFQ) works to improve the handling of low-volume connections in the midst of high-volume traffic. WFQ can be beneficial when VoIP traffic is mixed with heavy file transfers. Class-based weighted fair queuing (CBWFQ) and low-latency queuing (LLQ) work together to give priority to delay-sensitive VoIP traffic. Weighted random early detection (WRED) works during congestion to avoid the mass slowdown of all the TCP connections passing through a router. Options like these are quickly effective in a small network, but are hard to administer consistently across many devices.
Traffic shapers
— This new category of devices (also known as bandwidth managers) stands at the ingress and egress points in a network. These are the first network devices to begin implementing QoS policies for the traffic they handle. Although they were initially developed as proprietary solutions, they are evolving into local enforcement points for a broad set of rules implemented by policy servers. (Policy servers are introduced in the section "QoS Management" in Chapter 6.)
QoS mechanisms are discussed in more detail in the sections that follow.
Link-Layer QoS Techniques
Some data links provide built-in QoS mechanisms. These Layer 2 mechanisms provide ways of classifying and handling traffic for different kinds of links. The most popular Layer 2 QoS mechanisms are discussed in the following sections.
IEEE 802.1p/Q
The Ethernet transmission medium is the LAN standard upon which most IP networks are built today. Ethernet has nearly reached ubiquity—most new computers come equipped with Ethernet card hardware. A couple of IEEE standards, 802.1p and 802.1Q, are used together to specify the built-in QoS mechanism for Ethernet networks. 802.1Q adds a 4-byte tag to each Ethernet Media Access Control (MAC) header. Sometimes referred to as LAN QoS, 802.1p defines, within this tag, 3 bits that make up the Priority field. The three Priority bits provide eight different classes of service (CoS). Figure 5-1 shows the Priority field in the Ethernet header.
Figure 5-1. 802.1p/Q Fields in the Ethernet Header
Ethernet switches that support the 802.1p/Q standard can prioritize Ethernet traffic based on the Priority field bit settings. You can run into interoperability problems if you are using older switches that don't understand what to do with these extra bits in the header. It is modern, "802.1p/Q-enabled" network interface cards (NICs) that are responsible for setting the Priority field bits. These enabled NICs are found in some IP phones and in softphone computers. Most non-VoIP data traffic sets the Priority field to binary 000, which means "best effort," or no prioritization. VoIP RTP call traffic should usually have these 3 bits set to binary 101 (which is decimal value 5). The call-setup traffic for VoIP uses a Priority field setting of binary 011 (decimal value 3).
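As an illustration of where these bits live, the following minimal Python sketch (illustrative, not production frame parsing) extracts the 3-bit Priority field from the Tag Control Information word that follows the 802.1Q EtherType in a tagged frame:

```python
import struct

TPID_8021Q = 0x8100  # Tag Protocol Identifier that marks an 802.1Q tag

def vlan_priority(frame: bytes) -> int:
    """Return the 3-bit 802.1p Priority (CoS) from a tagged Ethernet frame.

    Layout after the 6-byte destination and 6-byte source MAC addresses:
      bytes 12-13: TPID (0x8100)
      bytes 14-15: Tag Control Information = Priority(3) | CFI(1) | VLAN ID(12)
    """
    tpid, tci = struct.unpack("!HH", frame[12:16])
    if tpid != TPID_8021Q:
        raise ValueError("frame is not 802.1Q tagged")
    return tci >> 13  # the top 3 bits are the Priority field

# Build a minimal tagged frame: dummy MACs, TPID, then TCI with priority 5, VLAN 100
tci = (5 << 13) | 100
frame = b"\x00" * 12 + struct.pack("!HH", TPID_8021Q, tci)
print(vlan_priority(frame))  # 5, the CoS value used for VoIP media
```

A frame carrying call-setup traffic would yield 3 from the same function; anything untagged raises an error rather than guessing.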
ATM Links
Asynchronous Transfer Mode (ATM) is another example of a link-layer protocol that provides built-in QoS. Many carrier network backbones use ATM networks because of the services that can be guaranteed. ATM transfers data in 53-byte cells: a 5-byte header and a 48-byte payload. An ATM logical connection is set up through the ATM switches, which then negotiate a type of service for the duration of the connection. The Virtual Path Identifier (VPI) and Virtual Channel Identifier (VCI) fields in the ATM header identify the ATM connection. Figure 5-2 shows the ATM header fields used for QoS, with connection identifiers that are used to classify which type of service should be given to the cells in a connection.
Figure 5-2. ATM Header Fields with Built-In QoS
When an ATM connection is established, QoS parameters are negotiated and established. A type of service is requested, with specific QoS parameters. ATM defines QoS parameters such as cell delay variation (CDV), maximum cell transfer delay (max CTD), and cell loss ratio (CLR). Resources and queues within the ATM switches are reserved to meet the service level requested. You can interpret CDV as jitter and max CTD as latency.
ATM defines four different types of service:
Constant bit rate (CBR)
— Data that has a fixed data rate, sent in a steady stream. CBR traffic is usually low bandwidth and sensitive to delay and packet loss. Provides guaranteed delay, jitter, and cell loss.
Variable bit rate (VBR)
— Data that is bursty in nature, but is guaranteed a certain level of throughput. Provides guaranteed delay, jitter, and cell loss.
Unspecified bit rate (UBR)
— No guarantee of throughput. A "best-effort" service.
Available bit rate (ABR)
— A minimum capacity is guaranteed and, depending on network usage, bursts of traffic are allowed that exceed the minimum capacity. Provides guaranteed delay, but no guaranteed jitter and cell loss.
On ATM, VoIP traffic is best served by CBR connections with guaranteed delay, jitter, and cell loss. Figure 5-3 shows an ATM switch with different queues for different service classes.
Figure 5-3. ATM Switches with Queues for Four Types of Service
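The fixed cell size makes ATM's overhead easy to quantify. The following back-of-the-envelope Python sketch (which ignores AAL trailer bytes for simplicity) shows the "cell tax" a small VoIP packet pays when it is carved into 53-byte cells:

```python
import math

CELL, PAYLOAD = 53, 48  # bytes per ATM cell: 5-byte header + 48-byte payload

def atm_bytes_on_wire(packet_bytes: int) -> int:
    """Bytes actually transmitted when a packet is carved into ATM cells
    (the last cell is padded out to a full 48-byte payload)."""
    return math.ceil(packet_bytes / PAYLOAD) * CELL

# A 60-byte VoIP packet needs two cells: 106 bytes on the wire
print(atm_bytes_on_wire(60))  # 106
```

For that 60-byte packet, (106 - 60) / 60 is roughly 77 percent overhead, which is why small-packet traffic feels the ATM cell tax most.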
Frame Relay Links
Frame Relay is a data link layer protocol that is often used for WAN connections. In most cases, a Frame Relay link is set up as a permanent virtual connection, which means that a dedicated logical connection is established through the underlying network. A built-in form of QoS, the committed information rate (CIR), shapes the traffic that is transferred on Frame Relay links. CIR is often specified in bits per second, and the rates may be different (asymmetric) for each direction of the link. Figure 5-4 shows CIR applied to traffic in different queues.
Figure 5-4. Traffic Is Shaped by the CIR on Frame Relay Links
If network congestion occurs, traffic that exceeds the CIR on certain links is discarded. For this reason, you should configure the CIR to be slightly above the average traffic volume for the link. As with any network link, a Frame Relay link may be oversubscribed. However, a CIR helps prevent ill-mannered applications from consuming too much bandwidth and thus starving VoIP traffic.
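A CIR is commonly enforced with a token-bucket algorithm. The following minimal Python sketch is illustrative only (the parameter names `cir_bps` and `bc_bits` are my own, not any vendor's configuration keywords): tokens accrue at the committed rate, a burst allowance is permitted, and frames that exceed the bucket are marked for discard.

```python
class TokenBucket:
    """Minimal token bucket: tokens accrue at `cir_bps` bits per second,
    up to a burst size of `bc_bits` bits; a frame conforms if enough
    tokens are available when it arrives."""

    def __init__(self, cir_bps: float, bc_bits: float):
        self.cir = cir_bps
        self.bc = bc_bits
        self.tokens = bc_bits   # start with a full bucket
        self.last = 0.0

    def conforms(self, now: float, frame_bits: int) -> bool:
        # Refill tokens for the time elapsed, capped at the burst size
        self.tokens = min(self.bc, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if frame_bits <= self.tokens:
            self.tokens -= frame_bits
            return True
        return False  # excess traffic: discard (or mark discard-eligible)

bucket = TokenBucket(cir_bps=64_000, bc_bits=8_000)
print(bucket.conforms(0.0, 8_000))  # True: the burst allowance covers it
print(bucket.conforms(0.0, 8_000))  # False: the bucket is now empty
print(bucket.conforms(1.0, 8_000))  # True: one second of refill at 64 kbps
```

The same mechanism underlies most traffic shapers, not just Frame Relay CIR enforcement.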
RTP Header Compression
The combined size of the IP, UDP, and RTP headers, 40 bytes, adds a significant amount of overhead to VoIP transmissions. The combined header size can be larger than the VoIP payload itself, depending on the codec and the packet delay. This header overhead can consume precious bandwidth on lower-capacity WAN links, especially when you consider that VoIP traffic flows in both directions.
Closer inspection of the contents of the IP, UDP, and RTP headers reveals that many of the field values do not change during the course of a VoIP transmission. Router vendors have taken advantage of this fact to provide a feature known as RTP header compression (cRTP). When cRTP is activated, the combined headers are compressed to between 2 and 5 bytes, as shown in Figure 5-5. The bandwidth savings can make room for more VoIP calls on a given link. However, cRTP exacts a trade-off: the compression consumes more CPU and adds delay. Because it incurs this extra CPU utilization and delay, cRTP should be used only for link speeds of 512 kbps or less.
Figure 5-5. Compression of IP, UDP, and RTP Headers Using cRTP
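The savings are easy to quantify. Assuming a G.729 codec that produces a 20-byte payload every 20 ms (50 packets per second) and ignoring link-layer framing, a quick Python sketch compares the per-direction IP bandwidth with and without cRTP:

```python
def voip_ip_bandwidth_bps(payload_bytes: int, header_bytes: int,
                          packets_per_second: int) -> int:
    """IP-layer bandwidth of one direction of a call
    (link-layer framing overhead is ignored for simplicity)."""
    return (payload_bytes + header_bytes) * 8 * packets_per_second

# G.729 example: 20-byte payload every 20 ms (50 packets per second)
print(voip_ip_bandwidth_bps(20, 40, 50))  # 24000 bps with full IP/UDP/RTP headers
print(voip_ip_bandwidth_bps(20, 2, 50))   # 8800 bps with cRTP's 2-byte header
```

In this scenario the headers cost more than the voice itself, and cRTP cuts the stream to roughly a third of its uncompressed size.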
Link Fragmentation and Interleaving
LFI is a router technique for alleviating delay on slow links. Packets of all different sizes arrive at routers; some packets may be 64 bytes long, whereas others are 1500 bytes long. VoIP packets are among the small ones. On a slow link, you don't want a VoIP packet to get stuck behind a 1500-byte packet that is part of a file transfer, because the delay of the VoIP packets can reduce the call quality. Activating LFI in a router means that the router cuts the big packets into fragments and interleaves smaller packets in between the larger fragments, which are reassembled at the other end. This interleaving avoids excessive delay for any small packet. Figure 5-6 shows how voice packets can be delayed behind larger data packets on a slow link.
Figure 5-6. Link Data Flow Before and After LFI (Source: Cisco Systems)
Consider LFI if you are experiencing high queuing delay due to serialization on slower-speed WAN links. The symptoms of excessive queuing delay are high end-to-end delay or high jitter for VoIP packets. For link speeds greater than 768 kbps, LFI is usually not needed.
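Serialization delay is simple arithmetic: packet size in bits divided by link speed. The sketch below shows why the 768-kbps guideline exists and how a fragment size might be chosen; the 10-ms-per-fragment target is a common rule of thumb, not a standard.

```python
def serialization_delay_ms(packet_bytes: int, link_bps: int) -> float:
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# A 1500-byte data packet monopolizes a 128-kbps link for almost 94 ms...
print(serialization_delay_ms(1500, 128_000))  # 93.75
# ...but at 768 kbps the wait drops to about 15.6 ms, so LFI adds little
print(serialization_delay_ms(1500, 768_000))  # 15.625

def lfi_fragment_bytes(link_bps: int, target_ms: float = 10.0) -> int:
    """Fragment size that keeps serialization delay near a target."""
    return int(link_bps * target_ms / 1000 / 8)

print(lfi_fragment_bytes(128_000))  # 160-byte fragments for a 128-kbps link
```

A 94-ms wait behind a single file-transfer packet would, on its own, consume much of a typical end-to-end delay budget for voice, which is why LFI matters on slow links.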
IP QoS Techniques
An increasingly popular set of QoS mechanisms is found at Layer 3 in the TCP/IP protocol stack. These techniques—DiffServ, RSVP, and MPLS—are referred to as IP QoS techniques because they take advantage of specific features of the IP protocol.
DiffServ
There is a 1-byte field in the header of every IP packet that has generally been unused for the past generation. That means every IP packet has a byte that is simply set to zero. A widely used QoS technique involves setting the bits in this byte to a nonzero value. In the IP version 4 header specification, this field is called the Type of Service (TOS) byte. Most TCP/IP stacks have always set the TOS byte to zero, and consequently most network devices have ignored this byte. In recent years, this same byte has been redefined as the Differentiated Services field, described later in this section.
TOS and IP Precedence were a first attempt to provide IP QoS. Here, 4 of the bits in the TOS byte were designated type-of-service bits in RFC 1349. These 4 bits create four service classes: minimize delay, maximize throughput, maximize reliability, and minimize monetary cost. In addition, RFC 791 and RFC 1812 define a QoS mechanism known as IP Precedence. IP Precedence uses the first 3 bits of the TOS byte. Routers can interpret these 3 bits as 2³, or 8, different classes of service. Figure 5-7 shows the TOS byte in the IP header.
Figure 5-7. The TOS and IP Precedence Bits, with Their Original Definitions
DiffServ is the latest attempt to provide QoS using the TOS byte. Defined by RFC 2474, DiffServ uses the first 6 bits of the TOS byte, known as the differentiated services code point (DSCP). The 6 bits of the DSCP allow for 2⁶, or 64, different classes of service. Most routers understand DiffServ, and there is little overhead involved with DiffServ classification, because looking at bits in the IP header is something that routers do all the time. Figure 5-8 shows the DiffServ field in the IP header.
Figure 5-8. DiffServ Field, Present in Every IP Packet
Applications can set the DSCP, but it is rarely done this way. It can also be set by a traffic shaper, which looks at something else in the frame, such as the port number, to decide how to set it. VoIP gateways commonly set this byte as they generate VoIP packets for calls based in the PSTN.
Most IP phones and VoIP gateways set the TOS byte to a nonzero value to denote the priority needed for VoIP. As discussed earlier, in the section "IEEE 802.1p/Q," VoIP call traffic commonly has this byte set to binary 10100000. Sometimes this setting is referred to as 5 because the first 3 bits represent a decimal value of 5. VoIP call-setup traffic generally uses a different value: binary 01100000. The DSCP field creates an efficient scheme for classifying different types of traffic. However, it is only as good as the weakest network link. If a single segment in the path from one codec to the other does not support DSCP handling, the entire path can only be considered best-effort.
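Setting this byte from an application is a one-line socket option on most stacks. The Python sketch below marks a UDP socket with the value described above (binary 10100000, decimal 160); note that some operating systems restrict or silently ignore the IP_TOS option, so treat this as illustrative.

```python
import socket

TOS_VOIP = 0b10100000  # decimal 160: first 3 bits = 5, the voice priority

# Mark outgoing datagrams on this socket with the VoIP TOS value
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VOIP)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 160
sock.close()
```

As the text notes, marking at the application is uncommon in practice; gateways and shapers usually do it instead.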
Resource Reservation Protocol
RSVP reserves resources to meet requirements for bandwidth, jitter, and delay on a particular network path through a series of routers. Defined in RFC 2205, RSVP is sometimes called by the name of the architecture it supports, Integrated Services (IntServ). RSVP sends IP control flows from one end of the network to the other. These IP packets instruct intermediate routers to reserve a portion of their resources (bandwidth, queues, and so on) for forthcoming TCP/IP application traffic.
Applications use RSVP by making additional calls to their underlying TCP/IP stacks. The TCP/IP stacks communicate with the first router on their path, which, in turn, communicates with the other routers on the path. RSVP can work in tandem with other QoS techniques, such as WFQ, to enforce the resource reservations.
Two main RSVP messages are exchanged between routers and hosts:
Reservation request (RESV)
— This message is sent from the receiver to the sender along the reverse data path. Each router along the way must accept or reject the reservation request (see Figure 5-9).
Figure 5-9. RSVP Messages Flow Between Routers and Hosts
Path message (PATH)
— This message is sent to routers in between the sender and receiver. The path message helps the routers maintain state information in order to send RESV messages.
A drawback of RSVP is that it requires ongoing bandwidth and router resources. A good rule of thumb is that the extra IP flows add approximately 100 bps per connection in extra bandwidth usage and require approximately 1 KB of RAM per router per connection. In addition, it takes several seconds to set up the separate control flows. And RSVP is one of the most challenging QoS techniques to configure correctly.
On the other hand, RSVP can provide a guaranteed level of service for application traffic, which may be especially important for delay-sensitive traffic such as VoIP. Yet RSVP is another end-to-end QoS technique that is easily hindered by the weakest network link. If a single link along the data path does not accept the reservation request, the path is considered best effort. RSVP works best when used within an enterprise network, on a campus, or within a privately owned WAN. It works well when network connections are long in duration (such as streaming video) and when only a few connections at a time require reserved resources. It is probably not the right technique to use for VoIP.
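The rule-of-thumb costs quoted above scale linearly with the number of reservations; a trivial Python sketch makes the arithmetic concrete:

```python
def rsvp_overhead(connections: int) -> tuple:
    """Apply the rule of thumb from the text: roughly 100 bps of extra
    signaling bandwidth and roughly 1 KB of router RAM per reserved
    connection."""
    return connections * 100, connections * 1  # (bps, KB per router)

bw_bps, ram_kb = rsvp_overhead(500)
print(bw_bps, ram_kb)  # 50000 500: 50 kbps of signaling, 500 KB in each router
```

For a handful of long-lived video streams this cost is negligible; for thousands of short VoIP calls it adds up quickly, which supports the text's conclusion.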
Multiprotocol Label Switching
MPLS is much more than just a QoS technique; it also provides network operators with a way to offer different classes of service. When packets enter an MPLS-aware network, they are "tagged" with a label that can contain a variety of information. MPLS-aware routers, known as label switching routers (LSRs), can forward the packet through the network using the label instead of the traditional address fields in the IP header. Different paths through the network, known as label switched paths (LSPs), can be configured for different label values. Figure 5-10 shows the MPLS tags that are added to the front of an IP packet.
Figure 5-10. MPLS Label Fields Prefixed to Front of IP Packet
By using different LSPs, network operators can set up routes for different classes of data traffic from different users. For instance, users who pay for premium network service may be given a less-congested path through the network. MPLS, a handling technique, can be used in tandem with other QoS techniques, such as DiffServ (a classifying technique). The MPLS labels could be assigned based on the bit settings of the DSCP so that the MPLS-enabled network would provide different paths for traffic with different bit settings.
A QoS technique that is better suited for very large network backbones with many routers, MPLS is often used by network operators and Internet service providers. The complexity of MPLS makes it impractical for most enterprise networks.
Queuing Techniques
Another category of QoS techniques deals with queuing within network devices. Queuing techniques generally provide different queue levels and handling for different classes of traffic.
Weighted Fair Queuing
WFQ is a commonly used, flow-based queuing algorithm. Different traffic flows are queued to prevent bandwidth starvation—that is the "fair" part. A flow consists of all packets with the same source address/port and destination address/port combination. A weight is assigned to flows to grant those flows priority queuing according to some scheme, usually another QoS mechanism. Different queue levels are provided for the weighted flows. Low-bandwidth streams, such as VoIP, are given priority over larger-bandwidth consumers such as file transfers. Figure 5-11 shows different queues using the DSCP field to assign a weight.
Figure 5-11. Different Queue Levels Using DSCP to Assign a Weight
WFQ may use IP Precedence or DiffServ bits to determine the weight of a particular flow. If all weights are equal, then the available bandwidth is divided equally.
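The bandwidth division can be sketched directly: when every queue is continuously busy, each class receives link bandwidth in proportion to its weight. The weights below are illustrative only, not taken from any router's actual weighting formula.

```python
def wfq_shares(link_bps: int, weights: dict) -> dict:
    """Proportional bandwidth split under weighted fair queuing when
    every queue is continuously backlogged."""
    total = sum(weights.values())
    return {name: link_bps * w // total for name, w in weights.items()}

# A 1.536-Mbps link with three traffic classes and made-up weights
print(wfq_shares(1_536_000, {"voip": 5, "signaling": 3, "data": 2}))
# {'voip': 768000, 'signaling': 460800, 'data': 307200}
```

With equal weights the same function divides the link evenly, matching the statement above.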
Class-Based Weighted Fair Queuing
CBWFQ is an enhancement to the WFQ algorithm that includes user-defined traffic classes. Traffic classes can be defined based on protocol, port, access control lists, input queues, or DiffServ bits. Each traffic class gets its own queue. Traffic classes can have bandwidth and queue limits assigned to them. The bandwidth is provided to the class when congestion occurs. The queue limit is the maximum number of packets that are allowed in a class-based queue. If the queue fills up, packets are dropped. Figure 5-12 shows the LLQ reserved for VoIP traffic.
Figure 5-12. CBWFQ with a Low-Latency Queue for VoIP Traffic
CBWFQ may be used with a feature called low-latency queuing (LLQ). LLQ offers delay-sensitive data, such as VoIP, priority handling over other types of traffic. With LLQ, VoIP traffic gets its own queue, and as packets are diverted to the low-latency queue, they are dequeued and sent ahead of any other queues.
Weighted Random Early Detection
WRED is a little different from other queuing schemes. Instead of trying to deal with congestion after it occurs, WRED attempts to detect congestion before it happens and then avoid it. According to Tom Lancaster's Networking Tips, "The problem it solves is called 'tail drop,' which happens when a burst of packets fills up a switch or router's buffer and the last few packets in the burst get dropped because there's no more room in the buffer."
Figure 5-13 shows the congestion-avoidance scheme used by WRED. As a link becomes congested, WRED randomly selects packets to discard—rather than dropping all packets that arrive after the queue is 100 percent full.
Figure 5-13. WRED Selects Packets to Discard Before Congestion Occurs
WRED tries to avoid congestion by randomly dropping selected packets before the queues fill up. Ideally, TCP packets are dropped; when a TCP packet is dropped, the protocol slows down and retransmits. For UDP or RTP flows that do not retransmit, such as high-throughput video or VoIP, WRED is not as effective. WRED usually relies on IP Precedence bits to provide a weighting scheme to help decide which packets to drop. The higher the priority, the lower a packet's chance of being dropped.
Traffic Shapers
Traffic shapers (sometimes known as bandwidth managers) are devices or software that can classify and prioritize traffic based on a predefined policy. Some operating systems, including Windows XP and Linux, have traffic shapers built in to their TCP/IP stacks. Other hardware devices provide traffic shaping and bandwidth management as well. These techniques classify traffic based on common methods and prioritize based on rules that you provide. The goal of traffic shaping is to prescribe the bandwidth consumed by different types of traffic.
Traffic shaping is useful in the following situations:
Speed mismatches
— You have a fast network or link, such as a LAN, feeding into a slower-speed WAN link.
Oversubscription of links
— You have too many users and not enough bandwidth on a particular link.
Traffic patterns are too bursty
— You have traffic that comes in bursts from time to time. Traffic shaping can help smooth out the bursts and provide more consistent bandwidth requirements.
With a traffic shaper, you create rules that give certain kinds of traffic a specified amount of bandwidth. For example, you may give VoIP traffic most of the bandwidth available in a link and give music-download traffic only a small amount of the overall bandwidth. Figure 5-14 shows an example of 18 different classes of traffic before and after traffic shaping. In this case, the streams are classified by port number. After the shaping is applied, each stream receives a specified amount of bandwidth.
Figure 5-14. Traffic Shaping Applied to Different Traffic Classes