Congestion Management


With congestion management, you can control the throughput of packets in your network when it is unable to accommodate the aggregate traffic without dropping packets. You can soften the effects of network congestion by configuring your routers and switches to intelligently queue flows of packets throughout your network. To do so, you can use the following congestion management techniques:

  • Layer 3 Router Packet Queuing

  • Layer 2 Switch Ethernet Frame QoS

Understanding Layer 3 Router Packet Queuing

First-In, First-Out (FIFO) queuing is an unintelligent queuing method that forwards packets in the order in which they arrive at the router. FIFO queuing causes high-bandwidth, delay-insensitive flows, such as file transfers, to take precedence over low-bandwidth, real-time flows, such as voice and video streaming. Packets of a file transfer normally occur in your network in the form of traffic trains; that is, as collective groups of packets flowing through the network at roughly the same time. In contrast, real-time applications generate packets individually and send them as discrete entities through the network. The following congestion management features are available to maintain the quality of real-time flows in your network:

  • Priority Queuing

  • Custom Queuing

  • Weighted Fair Queuing with IP RTP Priority

  • Class-Based WFQ with Low Latency Queuing

Configuring Priority Queuing

Priority queuing (PQ) uses four FIFO queues of different priority to transmit data: high, medium, normal, and low PQ queues. You can configure your router to queue packets into the four queues using criteria, such as incoming router interfaces, source IP addresses, IP protocols, and packet sizes.

A queue at a higher priority receives absolute preferential service over the lower-priority queues. Not until the high queues are empty do low queues receive service from the packet dequeuing (removal) mechanism. As a result, lower-priority queues may starve as the router services high-priority queues. Figure 6-1 shows how a router services the four PQ queues.
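A minimal Python sketch of this strict dequeuing rule follows; the queue names match the four PQ queues, but the packet labels are made up for illustration and nothing here reflects IOS internals:

```python
from collections import deque

# Strict priority dequeuing: always serve the highest-priority
# non-empty queue first, so lower queues get service only when
# every higher queue is empty.
QUEUE_ORDER = ["high", "medium", "normal", "low"]

def dequeue(queues):
    """Return the next packet to transmit, or None if all queues are empty."""
    for name in QUEUE_ORDER:
        if queues[name]:
            return queues[name].popleft()
    return None

queues = {name: deque() for name in QUEUE_ORDER}
queues["low"].extend(["ftp-1", "ftp-2"])
queues["high"].append("voice-1")

# The high queue is drained before the low queue sees any service.
print(dequeue(queues))  # voice-1
print(dequeue(queues))  # ftp-1
```

Note that if packets keep arriving in the high queue, the loop never reaches the low queue, which is exactly the starvation behavior described above.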

Figure 6-1. The Priority Queuing Mechanism


To decrease the probability of queue starvation, you can increase the maximum number of packets allowed in the lower queues, decrease the maximum number of packets allowed in the higher queues, or both. The default values are 20, 40, 60, and 80 packets for the high, medium, normal, and low queues, respectively. To enable priority queuing on an interface, you can use the configuration in Example 6-4.

Example 6-4. Configuring Priority Queuing

 priority-list 1 interface fastethernet 0/0 high
 priority-list 1 interface fastethernet 0/1 medium
 priority-list 1 interface fastethernet 0/2 normal
 priority-list 1 interface fastethernet 0/3 low
 priority-list 1 queue-limit 5 30 60 90
 interface fastethernet 0/4
  priority-group 1

In Example 6-4, the router queues packets from four different Fast Ethernet interfaces into the four PQs. The example lowers the queue limits for the high and medium queues and raises the limit for the low queue, so that the queue associated with interface FastEthernet 0/0 does not starve the remaining queues.

Note

With any of the congestion management techniques discussed in this section, if the queues become full, the router drops packets from the tail of the queue. To avoid tail drop, see the section "Configuring Weighted Random Early Detection" in this chapter.


Configuring Custom Queuing

Custom queuing (CQ) provides the fair treatment of queues that PQ lacks. With CQ, you can configure up to 16 FIFO queues to which you can assign different priorities of traffic. The router allocates a separate queue for system traffic, such as routing updates and link keepalives, and services it until it is empty. The router then services the 16 user queues in round-robin fashion, transmitting a configurable number of bytes from each queue during every round-robin cycle. Because a router cannot transmit partial packets, it sends the entire packet even if doing so exceeds the byte count for the queue. To make up for the overuse, the router subtracts the excess from that queue's byte count for the next round-robin cycle. The default byte count for each queue is 1500 bytes. You can change the default byte count by using the following command:

 queue-list list-num queue queue-num byte-count count 


Figure 6-2 illustrates how CQ works.

Figure 6-2. The Custom Queuing Mechanism
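The byte-count and carry-over behavior described above can be sketched in Python as follows; this is a simplification for illustration, not IOS source, and the queue names and packet sizes are invented:

```python
from collections import deque

# One CQ round-robin cycle: each queue may send up to its configured
# byte count, whole packets only; any overshoot is deducted from the
# queue's budget on the next cycle.
def custom_queue_cycle(queues, byte_counts, deficits):
    """Run one cycle; return a list of (queue_name, packet_size) sent."""
    sent = []
    for name, q in queues.items():
        budget = byte_counts[name] - deficits[name]
        used = 0
        while q and used < budget:
            size = q.popleft()           # packet size in bytes
            used += size                 # the whole packet goes out, even if
            sent.append((name, size))    # it pushes us past the byte count
        deficits[name] = max(0, used - budget)  # overshoot carried forward
    return sent
```

For example, a queue with a 1500-byte budget holding 1000-byte packets sends two packets (2000 bytes) in the first cycle, then carries a 500-byte deficit so only one packet fits in the next cycle.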


By default, each queue accepts a maximum of 20 packets. You can classify packets by protocol, access list, and router interface. You can change the default maximum number of packets using the command:

 queue-list list-num queue queue-num limit limit-number 


Example 6-5 illustrates how to enable CQ with two queues on a Fast Ethernet interface. This example decreases the byte count for Queue 2 to ensure that packets from Queue 1 receive more service per round-robin cycle.

Example 6-5. Configuring Custom Queuing

 queue-list 1 protocol ip 1 tcp 6001
 queue-list 1 protocol ip 2 udp 5001
 queue-list 1 queue 1 byte-count 1400
 queue-list 1 queue 2 byte-count 570
 interface fastethernet 0/0
  custom-queue-list 1

Configuring Weighted Fair Queuing and IP RTP Priority Queuing

WFQ is a flow-based scheduling algorithm that gives low-bandwidth, interactive flows priority over high-demand flows. The fairness of the algorithm comes from the ability to avoid starvation of high-demand flows while fulfilling the network demands of applications with lower bandwidth, smaller packets, and intermittent access requirements.

Fair queuing automatically categorizes traffic into flows with low and high bandwidth demands. WFQ assumes that low-demand, interactive flows have small packet sizes and that high-demand flows use large packets. WFQ sorts flows in terms of their demand and services them fairly with the packet removal mechanism. WFQ gives the low-bandwidth flows priority, while the high-demand flows share the remaining available bandwidth.

WFQ creates a dynamic set of queues for traffic flows; you can manually set the number of queues. WFQ assigns each individual flow to a queue and services the queues with a bit-wise round-robin algorithm, which takes the size of the packets into consideration when deciding the order of transmission. Otherwise, flows with larger packets could starve flows with smaller packets; a packet-based round-robin that ignores packet sizes, for example, gives preference to flows with larger packets. WFQ identifies flows by hashing TCP/IP information, such as source/destination IP addresses, TCP/UDP ports, protocol, and IP Precedence. The hashed value provides an index into the individual queue housing the respective flow.
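The flow-to-queue indexing idea can be sketched as below. The actual IOS hash function is not documented here, so the use of CRC32 modulo the queue count is purely an assumption for illustration; the point is only that the same flow tuple always lands in the same queue:

```python
import zlib

# Assumed queue count for illustration; IOS derives the real default
# from the interface bandwidth.
NUM_DYNAMIC_QUEUES = 256

def flow_queue_index(src_ip, dst_ip, proto, src_port, dst_port, precedence):
    """Hash a flow's identifying fields into a dynamic-queue index."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}|{precedence}".encode()
    return zlib.crc32(key) % NUM_DYNAMIC_QUEUES
```

Because the hash is deterministic, every packet of a given flow maps to the same queue, while distinct flows are spread across the dynamic queues.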

WFQ gives flows with higher precedence a lower weight and therefore a greater allocation of overall bandwidth. The router uses the IP Precedence field to calculate the ratio of the overall link bandwidth with the equation:

Link Proportion = Precedence_current / Precedence_sum

For example, three flows that concurrently traverse a 1.544 Mbps T1 serial link have IP Precedences 3, 4, and 5, respectively. The sum of the IP Precedences equals 12. Therefore, the proportions of bandwidth for the three flows are 3/12 (25 percent), 4/12 (33 percent), and 5/12 (42 percent), respectively. Figure 6-3 illustrates how WFQ treats these three flows within the WFQ queues.
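The proportions from the T1 example can be checked with a few lines of arithmetic:

```python
# Work the Link Proportion equation for the three flows on a T1 link.
precedences = [3, 4, 5]
total = sum(precedences)      # Precedence_sum = 12
link_bps = 1_544_000          # T1 line rate

for p in precedences:
    share = p / total         # Precedence_current / Precedence_sum
    print(f"Precedence {p}: {share:.0%} of link = "
          f"{share * link_bps / 1000:.0f} kbps")
# Shares print as 25%, 33%, and 42%, matching the text.
```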

Figure 6-3. The WFQ Queuing Mechanism


Although WFQ differentiates flows with weights based on IP Precedence, its drawback is that it cannot guarantee bandwidth to any of the flows. WFQ is fair to all traffic flows, including high-demand applications, which may cause real-time applications to suffer. Furthermore, WFQ does not scale to higher-speed links. Because the router assigns each flow an individual queue, the number of queues grows substantially on high-bandwidth links. WFQ works well at speeds of 2 Mbps or less and is therefore enabled by default on serial links at those speeds or lower. To enable WFQ on a link, use the interface configuration command:

 fair-queue [congestive-discard-threshold [dynamic-queues [reservable-queues]]] 


When the number of messages in a queue reaches the congestive discard threshold, the router randomly drops packets (see the section "Congestion Avoidance" for more information on the benefits of randomly dropping packets). The number of dynamic queues is the number of queues for regular flows, and the reservable queues are the number of queues you can configure for Resource Reservation Protocol (RSVP) flows. The default value for the congestive discard threshold is 64 messages. The default number of dynamic queues depends on the bandwidth of the interface, and the default number of RSVP queues is 0.

Compared to standard WFQ, PQ provides much better service to higher-priority traffic and is preferable for real-time applications using the Real-Time Transport Protocol (RTP), where timely delivery is essential. As a result, you should use IP RTP Priority, also called PQ/WFQ, instead of standard WFQ for such traffic. With PQ/WFQ, the router assigns a single strict-priority FIFO queue to RTP traffic, which preempts traffic in the other queues. The router services the PQ until it is empty and then services the remaining traffic using standard WFQ. The drawback of PQ/WFQ is that the PQ can starve the standard WFQ queues. Figure 6-4 illustrates the operation of PQ/WFQ.

Figure 6-4. Priority Queuing/Weighted Fair Queuing


To enable IP RTP priority queuing, use the interface configuration command:

 ip rtp priority starting-rtp-port-number port-number-range bandwidth 


Another drawback to PQ/WFQ is that the PQ serves only RTP traffic. Other real-time traffic that requires preferential treatment, including RTCP traffic, is served in the standard WFQ queues, enabling it to be delayed or starved by the PQ. To introduce a strict PQ for real-time traffic of any kind, you should instead use class-based WFQ with Low Latency Queuing (LLQ).

Note

For more information on RTP and RTCP, see Chapter 9, "Introducing Streaming Media."


Configuring Class-Based WFQ with Low Latency Queuing

Class-Based Weighted Fair Queuing (CBWFQ) provides more granularity and scalability than WFQ. With CBWFQ, you configure different classes and specify the ratio of the available bandwidth that each class receives, so CBWFQ no longer classifies traffic into individual flows (see Figure 6-5). CBWFQ is therefore more scalable, because each flow does not require an individual queue.

Figure 6-5. Class-Based Weighted Fair Queuing with Low Latency Queuing


Low Latency Queuing (LLQ) adds a strict PQ to CBWFQ, similar to IP RTP Priority. However, the PQ in CBWFQ is monitored (policed) to ensure that the fair queues are not starved, which differs from the starvation behavior of the IP RTP Priority mechanism.

Not only can you assign RTP traffic to the PQ using the RTP UDP port range; you can classify any type of traffic as highest priority. For example, you can use access lists to classify traffic based on various IP header fields, such as IP addresses, TCP/UDP port ranges, IP DSCP/Precedence fields, and IP protocols, as well as on input interfaces. To enable CBWFQ with LLQ, you can use the configuration in Example 6-6.

Example 6-6. Configuring Class-Based WFQ with LLQ

 access-list 101 permit udp any any range 16384 65535
 access-list 102 permit tcp any any eq telnet
 class-map video
  match access-group 101
 class-map telnet
  match access-group 102
 policy-map diffapps
  class video
   priority percent 60
  class telnet
   bandwidth percent 10
  class class-default
   fair-queue
 interface serial 0
  service-policy output diffapps

In Example 6-6, the router assigns video traffic matching the class "video" to the PQ. The UDP port range 16,384 to 65,535 classifies RTP streaming media traffic in this example. CBWFQ does not allocate the video traffic more than 60 percent of the available bandwidth on the serial interface. The router also creates a standard queue for Telnet traffic, within the telnet class, and allocates 10 percent of the overall bandwidth to it. All other traffic is assigned to WFQ queues within the class-default class.
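As a back-of-the-envelope check, the division of a congested link under the diffapps policy can be computed as follows. The percentages come from Example 6-6; the link speed is an assumed value (T1), since the example does not state one:

```python
# How Example 6-6 divides a congested link: the "video" priority class
# is policed to 60 percent, "telnet" is guaranteed 10 percent, and
# class-default fair-queues whatever remains.
link_kbps = 1544  # assumed T1 serial link speed

video_kbps = link_kbps * 60 // 100    # policed ceiling under congestion
telnet_kbps = link_kbps * 10 // 100   # guaranteed minimum
default_kbps = link_kbps - video_kbps - telnet_kbps

print(video_kbps, telnet_kbps, default_kbps)  # 926 154 464
```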

Understanding Layer 2 Switch Ethernet Frame QoS

You can configure your Layer 2 switches to classify, mark, police, and intelligently queue traffic flowing through switch ports. For traffic that you tag with 802.1P, the switch can trust the existing CoS field in the frame tag to classify the frame at incoming (or ingress) switch ports; you learned about the 802.1P field in Chapter 3. For untagged traffic, the switch can inspect the IP header and trust either the IP Precedence or IP DSCP value to classify the frame at ingress ports. Otherwise, if you decide that the switch should not trust the existing values, you can configure default CoS values on the individual ingress switch ports. Based on the priority that you give the frames at the ingress port, the switch queues the frame accordingly at the outgoing (or egress) port.

Table 6-7 gives the default association switches use between CoS and DSCP values. Alternatively, you can re-configure these mappings on your switch.

Table 6-7. DSCP Values to IP Precedence to CoS Value Mappings

  IP DSCP   IP Precedence   CoS   Purpose
  0         0               0     Best effort
  8, 10     1               1     Class 1
  16, 18    2               2     Class 2
  24, 26    3               3     Class 3
  32, 34    4               4     Class 4
  40, 46    5               5     Expedited forwarding
  48        6               6     Control
  56        7               7     Control


The egress ports on Catalyst 29xx/35xx/37xx/4xxx series switches have four queues for the switch to choose from. The switch places traffic in egress queues based on the CoS, IP Precedence, or DSCP value of the frame. Queue 4 is a strict priority queue, and the other three queues are standard queues. The switch services Queue 4 until empty before it services the other queues. The three standard queues are subject to Weighted Round-Robin (WRR) queuing. You can assign the weights to the three standard queues or use the default queuing values, as indicated in Table 6-8.

Table 6-8. Weights Assigned to 29xx/35xx/37xx/4xxx Transmit Queues

  Queue                CoS Values   Queue Weight
  4 (priority queue)   5            1
  3                    3, 6, 7      70
  2                    2, 4         20
  1                    0, 1         10


The ingress ports on 29xx/35xx/37xx/4xxx switches have only a single queue. You can configure the switch to classify, mark, and police traffic at the ingress port, but not at the egress switch port.
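The WRR scheduling over the three standard egress queues in Table 6-8 can be sketched as follows. Queue 4 is strict priority and drained first, so it is omitted here, and treating the weights as packets per cycle is a simplification of what the switch hardware actually does:

```python
from collections import deque

# Default WRR weights for the three standard queues from Table 6-8.
WEIGHTS = {3: 70, 2: 20, 1: 10}

def wrr_cycle(queues):
    """One WRR pass: send up to `weight` packets from each standard queue."""
    sent = []
    for qnum, weight in WEIGHTS.items():
        for _ in range(weight):
            if not queues[qnum]:
                break
            sent.append(queues[qnum].popleft())
    return sent
```

Unlike the strict priority queue, WRR guarantees that every standard queue gets some service each cycle in proportion to its weight, so low-weight queues cannot be starved.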

On the Catalyst 6000, the queuing architecture of a switch port is described using codes. Enter the show port capabilities command to see the queue architecture of your switch ports. For example, the code tx-(1p2q2t) indicates that the egress port has one strict-priority queue (1p) and two standard queues (2q) with two drop thresholds (2t) each. The code rx-(1q8t) indicates that the ingress port has one standard queue with eight drop thresholds. You can associate CoS values with the queue thresholds, so that low-priority traffic in your network is WRED-dropped before high-priority traffic. Refer to the next section for more information on WRED.

Your switch assigns frames to the queues based on their CoS values. If your switch ports have the 1p2q2t/1p1q4t (Tx/Rx) queue structure, the default CoS assignments and thresholds for your queuing architecture are listed in Tables 6-9 and 6-10.

Table 6-9. Default Threshold Settings for the 1p2q2t Transmit Queuing Structure

  Transmit 1p2q2t Values          Strict Queue   Queue 1/Threshold 1   Queue 1/Threshold 2   Queue 2/Threshold 1   Queue 2/Threshold 2
  CoS Values                      5              0, 1                  2, 3                  4                     6, 7
  WRED-Drop (Minimum) Threshold   -              40%                   70%                   40%                   70%
  Tail-Drop (Maximum) Threshold   -              70%                   100%                  70%                   100%


Table 6-10. Default Threshold Settings for the 1p1q4t Receive Queuing Structure

  Receive 1p1q4t Values           Strict Queue   Queue 1/Threshold 1   Queue 1/Threshold 2   Queue 1/Threshold 3   Queue 1/Threshold 4
  CoS Values                      5              0, 1                  2, 3                  4                     6, 7
  Tail-Drop (Maximum) Threshold   -              50%                   60%                   80%                   100%


To change the CoS assignments for the transmit queue structure in Table 6-9, you can use the command:

 set qos map 1p2q2t tx queue# threshold# cos coslist 


For example, the command set qos map 1p2q2t tx 2 2 cos 5 6 7 assigns the CoS values 5, 6, and 7 to Queue #2/Threshold #2.

To change the WRED-drop and tail-drop thresholds, you can use the command:

 set qos wred 1p2q2t tx queue queue# threshold-list 


For example, the command set qos wred 1p2q2t tx queue 1 30:60 60:100 sets the Queue #1/ Threshold #1 values to 30%:60% and Queue #1/Threshold #2 values to 60%:100%.
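The drop decision implied by the 1p2q2t defaults in Table 6-9 can be sketched as a small lookup. The CoS-to-queue/threshold mapping and the percentages mirror that table; the decision function itself is a simplified model for illustration, not switch firmware:

```python
# CoS -> (queue, threshold) per the default 1p2q2t assignments
# (CoS 5 goes to the strict priority queue and is handled separately).
COS_TO_THRESHOLD = {
    0: (1, 1), 1: (1, 1),
    2: (1, 2), 3: (1, 2),
    4: (2, 1),
    6: (2, 2), 7: (2, 2),
}
# (queue, threshold) -> (WRED-drop minimum %, tail-drop maximum %)
THRESHOLDS = {
    (1, 1): (40, 70), (1, 2): (70, 100),
    (2, 1): (40, 70), (2, 2): (70, 100),
}

def drop_action(cos, queue_fill_pct):
    """Return 'forward', 'wred-eligible', or 'tail-drop' for a frame."""
    if cos == 5:
        return "forward"  # strict priority queue, no WRED thresholds
    wred_min, tail_max = THRESHOLDS[COS_TO_THRESHOLD[cos]]
    if queue_fill_pct >= tail_max:
        return "tail-drop"
    if queue_fill_pct >= wred_min:
        return "wred-eligible"
    return "forward"
```

Because CoS 0 and 1 use the lower thresholds (40%/70%), best-effort frames become drop candidates while CoS 6 and 7 frames in the same queue structure are still forwarded.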

Table 6-10 gives the default thresholds for the 1p1q4t receive queuing structure.

Note

Notice that the receive queue structure in Table 6-10 does not have WRED-drop thresholds. Therefore, Catalyst 6500 series switches cannot WRED-drop packets at the ingress interface.


To change the tail-drop thresholds for the 1p1q4t receive structure, you can use the command:

 set qos drop-threshold 1p1q4t rx queue 1 threshold-list 


For example, the command set qos drop-threshold 1p1q4t rx queue 1 30 40 60 100 sets the Threshold #1 value to 30%, the Threshold #2 value to 40%, the Threshold #3 value to 60%, and the Threshold #4 value to 100%.



Content Networking Fundamentals
ISBN: 1587052407
Pages: 178