Dropping Packets and Congestion Avoidance

Imagine a queue that holds packets as they enter a network bottleneck. These packets carry data for many different applications to many different destinations. If the amount of traffic arriving is less than the available bandwidth in the bottleneck, then the queue just holds the packets long enough to transmit them downstream. Queues become much more important if there is not enough bandwidth in the bottleneck to carry all of the incoming traffic.

If the excess is a short burst, the queue will attempt to smooth the flow rate, delivering the first packets as they are received and delaying the later ones briefly before transmitting them. However, if the burst is longer, or if it is actually more like a continuous stream, the queue will have to stop accepting new packets while it deals with the backlog. The queue simply discards the overflowing inbound packets. This is called a tail drop.
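The tail-drop behavior described above amounts to nothing more than a bounded FIFO queue. Here is a minimal Python sketch of the idea; the class and its names are purely illustrative, not part of any router implementation:

```python
from collections import deque

class TailDropQueue:
    """A bounded FIFO: packets arriving when the queue is full are discarded (tail drop)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.drops = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.drops += 1   # queue is full: the new arrival is simply lost
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        # Deliver the oldest packet downstream, if any
        return self.queue.popleft() if self.queue else None
```

A burst of five packets into a three-packet queue delivers the first three and silently drops the last two, which is exactly the behavior that congestion-avoidance schemes try to prevent.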

Some applications and some protocols deal with dropped packets more gracefully than others. For example, if an application doesn't have the ability to re-send the lost information, then a dropped packet could be devastating. On the other hand, some real-time applications don't want their packets delayed. For these applications, it is better to drop the data than to delay it.

From the network's point of view, some protocols are better behaved than others. Applications that use TCP are able to adapt to dropping an occasional packet by backing off and sending data at a slower rate. However, many UDP-based protocols will simply send as many packets as they can stuff into the network. These applications will keep sending packets even if the network can't deliver them.

Even if all applications were TCP-based, however, there would still be some applications that take more than their fair share of network resources. If the only way to tell them to back off and send data more slowly is to wait until the queue fills up and starts to tail drop new packets, then it is quite likely that the wrong traffic flows will be instructed to slow down. However, an even worse problem, called global synchronization, can occur in an all-TCP network with a lot of tail drops.

Global synchronization happens when several different TCP flows all suffer packet drops simultaneously. Because the applications all use the same TCP mechanisms to control their flow rate, they will all back off in unison. TCP then starts to automatically increase the data rate until it suffers from more packet drops. Since all of the applications use the same algorithm for this process, they will all increase in unison until the tail drops start again. This whole wave-like oscillation of traffic rates will repeat as long as there is congestion.

Random Early Detection (RED) and its cousin, Weighted Random Early Detection (WRED), are two mechanisms that help avoid this type of problem, while at the same time keeping one flow from dominating. These algorithms assume that all of the traffic is TCP-based. This is important because UDP applications get absolutely no benefit from RED or WRED.

RED and WRED try to prevent tail drops by preemptively dropping packets before the queue is full. If the link is not congested, then the queue is always more or less empty, so these algorithms don't do anything. However, when the queue depth reaches a minimum threshold, RED and WRED start to drop packets at random. The idea is to take advantage of the fact that TCP applications will back off their sending rate if they drop a packet. By randomly thinning out the queue before it becomes completely full, RED and WRED keep the TCP applications from overwhelming the queue and causing tail drops.

The packets to be dropped are selected at random. This has a couple of important advantages. First, the busiest flow is likely to be the one with the most packets in the queue, and therefore the most likely to suffer packet drops and be forced to back off. Second, by dropping packets at random, the algorithm effectively eliminates the global synchronization problems discussed earlier.

The probability of dropping a packet rises linearly with the queue depth, starting from some specified minimum threshold up to a maximum value. A simple example should help to explain how this works. Suppose the minimum threshold happens when there are 5 packets in the queue, and the maximum when there are 15 packets. If there are fewer than 5 packets in the queue, RED will not drop anything. When the queue depth reaches the maximum threshold, RED will drop one packet in 10. If there are 10 packets in the queue, then it is exactly halfway between the minimum and maximum thresholds. So, at this depth, RED will drop half as many packets as it will at the maximum threshold: one packet in 20. Similarly, if there are 7 packets in the queue, that is 20 percent of the distance between the minimum and maximum thresholds, so the drop probability will be 20 percent of the maximum: one packet in 50.
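The linear interpolation in this example is easy to express directly. The following Python sketch uses the hypothetical numbers from the text (minimum threshold of 5 packets, maximum threshold of 15, and a maximum drop probability of 1 in 10):

```python
def red_drop_probability(queue_depth, min_th=5, max_th=15, max_p=1/10):
    """RED drop probability, rising linearly from 0 at min_th to max_p at max_th.

    The threshold values are the illustrative numbers from the text,
    not defaults of any particular router.
    """
    if queue_depth < min_th:
        return 0.0            # below the minimum threshold, never drop
    if queue_depth >= max_th:
        return max_p          # at or above the maximum threshold, drop 1 in (1/max_p)
    fraction = (queue_depth - min_th) / (max_th - min_th)
    return max_p * fraction
```

With a queue depth of 10, this returns 0.05 (one packet in 20), and with a depth of 7 it returns 0.02 (one packet in 50), matching the arithmetic above.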

If the queue fills up despite the random drops, then the router has no choice but to resort to tail drops, just as if there were no congestion avoidance at all. Because they need to be much more aggressive with persistent congestion than with momentary bursts, RED and WRED have a particularly clever way of telling the difference between a short burst and longer-term heavy traffic volume.

Instead of using a constant queue depth threshold value, these algorithms base the decision to drop packets on an exponential moving time averaged queue depth. If the queue fills because of a momentary burst of packets, RED will not start to drop packets immediately. However, if the queue continues to be busy for a longer period of time, the algorithm will be increasingly aggressive about dropping packets. In this way, the algorithm doesn't disrupt short bursts, but it will have a strong effect on applications that routinely overuse the network resources.
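The moving average itself is a one-line calculation. This sketch shows why it distinguishes bursts from sustained congestion; the weight value here is illustrative (on Cisco routers, the weight is 2^-n, where n is set with the random-detect exponential-weighting-constant command):

```python
def update_average_depth(avg, instantaneous, weight=1/512):
    """One step of the exponential weighted moving average of queue depth.

    A small weight means a single burst barely moves the average,
    while sustained congestion gradually pulls the average up toward
    the instantaneous depth. The weight of 1/512 is an assumption
    for illustration, not a recommended setting.
    """
    return avg + weight * (instantaneous - avg)
```

Feeding this function one sample of a deep queue leaves the average almost unchanged, so RED stays quiet during a burst; feeding it thousands of consecutive deep samples drives the average up toward the instantaneous depth, so persistent congestion triggers increasingly aggressive drops.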

The WRED algorithm is similar to RED, except that it selectively prefers to drop packets that have lower IP Precedence values. Cisco routers achieve this by simply having a lower minimum threshold for lower precedence traffic. So, as the congestion increases, the router will tend to preferentially drop packets with lower precedence values. This tends to protect the important traffic at the expense of less important applications. However, it is also important to bear in mind that this works best when the amount of high precedence traffic is relatively small.
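On Cisco routers, this per-precedence behavior is configured with the random-detect precedence command, which takes a minimum threshold, a maximum threshold, and a mark-probability denominator (the "1 in N" drop rate at the maximum threshold). The following is only a sketch: the interface and the threshold values are illustrative, not recommendations.

```
interface Serial0/0
 random-detect
 ! Precedence 0 starts to suffer random drops at an average queue
 ! depth of 20 packets, while precedence 5 is untouched until 35.
 ! Both drop at most 1 packet in 10 at the maximum threshold.
 random-detect precedence 0 20 40 10
 random-detect precedence 5 35 40 10
```

Because the low-precedence traffic hits its minimum threshold first, it absorbs most of the random drops as congestion builds, which is what protects the higher-precedence traffic.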

If there is a lot of high priority traffic in the queue, it will not tend to benefit much from the efficiency improvements typically offered by WRED. In this case, you will likely see only a slight improvement over the characteristics of ordinary tail drops. This is yet another reason for being careful in your traffic categorization, and not being too generous with the high precedence values.

Flow-based WRED is an interesting variant on WRED. In this case, the router makes an effort to separate out the individual flows in the router and penalize only the ones that are using more than their share of the bandwidth. The router does this by maintaining a separate drop probability for each flow based on their individual moving averages. The heaviest flows with the lowest precedence values tend to have the most dropped packets. However, it is important to note that the queue is congested by all the traffic, not just the heaviest flows. So the lighter flows will also have a finite drop probability in this situation. But the fact that the heavy flow will have more packets in the queue, combined with the higher drop probability for these heavier flows, means that you should expect them to contribute most of the dropped packets.
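Flow-based WRED is enabled on Cisco routers with the random-detect flow command. Again, this is only a sketch with illustrative values:

```
interface Serial0/0
 random-detect
 random-detect flow
 ! A flow is considered to be hogging the queue, and is penalized,
 ! when its own depth exceeds 4 times the average per-flow depth.
 random-detect flow average-depth-factor 4
 ! Track up to 64 separate flows in the queue.
 random-detect flow count 64
```

The average-depth-factor controls how far above its fair share a flow may go before it starts attracting extra drops, which is the mechanism that concentrates the packet loss on the heaviest flows.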


Colophon

Our look is the result of reader comments, our own experimentation, and feedback from distribution channels. Distinctive covers complement our distinctive approach to technical topics, breathing personality and life into potentially dry subjects.

The animal on the cover of Cisco IOS Cookbook, Second Edition is a black jaguar (Panthera onca), sometimes called a black panther. While the color of black (melanistic) jaguars differs from that of the more common golden-yellow variety, they are of the same species. Jaguars of all types are native to the tropics, swamps, and grasslands of Central and South America (and are rumored to still exist in parts of the southwestern U.S.), but the black jaguar is usually found only in dense forests. They are between 4 and 6 feet long, with a tail that is usually about 30 inches. Males can weigh up to 250 pounds, while females are considerably smaller and rarely grow to more than 150 pounds. Although black jaguars often appear to be solid black in artistic renditions and photography, their coats still have the dark rings containing even darker spots that are a distinguishing feature of all jaguars. Also notable are their eyes, which are a shiny, reflective yellow.

Jaguars will eat almost any animal, including sloths, pigs, deer, monkeys, and cattle. Their hooked claws allow them to catch fish, frogs, turtles, and even small alligators. Although jaguars sit at the top of the rain forest food chain, humans are a major threat: it's estimated that only 15,000 jaguars are left in the wild, and the species is listed as near threatened. They are hunted for their coats (the black coat is greatly prized), and deforestation threatens their survival.

The black jaguar plays a large role in many South American religions and is often considered a wise and divine animal that is associated with the worlds of magic and spirit. The Aztecs believed that the jaguar was the earthbound representative of their deity, and both the Mayans and Toltecs believed that their sun god became a black jaguar at night in order to pass unseen through the underworld.

The cover image is a 19th-century engraving from the Dover Pictorial Archive. The cover font is Adobe ITC Garamond. The text font is Linotype Birka; the heading font is Adobe Myriad Condensed; and the code font is LucasFont's TheSans Mono Condensed.

About the Authors

Kevin Dooley is Director of Enterprise Networking at CNG Solutions. He has been designing and implementing networks for longer than he'd like to admit. In that time, he has built large-scale local and wide area networks for several of Canada's largest companies. He holds a Ph.D. in physics from the University of Toronto, numerous Cisco certifications, and is the author of Designing Large-Scale LANs (O'Reilly).

Ian J. Brown, CCIE #3372, is a managing consultant at Bell Nexxia in Toronto with more than 12 years of experience in the networking industry. His areas of expertise include TCP/IP and IP routing, as well as management, security, design, and troubleshooting for large-scale networks. He has had the privilege of working on some of Canada's largest and most complex networks. In his spare time, Ian enjoys scuba diving, working out, and traveling.

Router Configuration and File Management

Router Management

User Access and Privilege Levels

IP Routing

Frame Relay

Handling Queuing and Congestion

Tunnels and VPNs

Dial Backup

NTP and Time

Router Interfaces and Media

Simple Network Management Protocol

First Hop Redundancy Protocols

IP Multicast

IP Mobility

Appendix 1. External Software Packages

Appendix 2. IP Precedence, TOS, and DSCP Classifications

Cisco IOS Cookbook
ISBN: 0596527225