Hack #70: Traffic Shaping on FreeBSD



Allocate bandwidth for crucial services.

If you're familiar with your network traffic, you know that it's possible for some systems or services to use more than their fair share of bandwidth, which can lead to network congestion. After all, you have only so much bandwidth to work with.

FreeBSD's dummynet may provide a viable method of getting the most out of your network, by sharing bandwidth between departments or users or by preventing some services from using up all your bandwidth. It does so by limiting the speed of certain transfers on your network, a technique also called traffic shaping.

7.3.1 Configuring Your Kernel for Traffic Shaping

To take advantage of the traffic shaping functionality of your FreeBSD system, you need a kernel with the following options:

options IPFIREWALL
options DUMMYNET
options HZ=1000

dummynet does not require the HZ option, but its manpage strongly recommends it. See [Hack #69] for more about HZ and [Hack #54] for detailed instructions about compiling a custom kernel.

The traffic-shaping mechanism delays packets so as not to exceed the transfer speed limit. The delayed packets are stored and sent later. The kernel timer triggers sending, so setting the frequency to a higher value will smooth out the traffic by providing smaller delays. The default value of 100 Hz will trigger sends every 10 milliseconds, producing bursty traffic. Setting HZ=1000 will cause the trigger to happen every millisecond, resulting in less packet delay.
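
Once you have rebooted into the new kernel, a quick sysctl query confirms the timer frequency; the kern.clockrate output shown here is abbreviated and will vary between releases:

# sysctl kern.clockrate
kern.clockrate: { hz = 1000, tick = 1000, ... }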

7.3.2 Creating Pipes and Queues

Traffic shaping occurs in three stages:

  1. Configuring the pipes

  2. Configuring the queues

  3. Diverting traffic through the queues and/or pipes

Pipes are the basic elements of the traffic shaper. A pipe emulates a network link with a certain bandwidth, delay, and packet loss rate.

Queues implement weighted fair queuing and cannot be used without a pipe. All queues connected to a pipe share the bandwidth of that pipe in a certain configurable proportion.

The most important parameter of a pipe configuration is its bandwidth. Set the bandwidth with this command:

# ipfw pipe 1 config bw 120kbit/s

This is a sample command run at the command prompt. However, as the hack progresses, we'll write the actual dummynet policy as rules within an ipfw rulebase.


This command creates pipe 1 if it does not already exist, assigning it 120 kilobits per second of bandwidth. If the pipe already exists, its bandwidth will be changed to 120 Kbps.
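
To confirm that the pipe exists and carries the bandwidth you intended, you can list the current pipe configuration (the exact output format differs between FreeBSD releases):

# ipfw pipe show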

When configuring a queue, the two most important parameters are the pipe number it will connect to and the weight of the queue. The weight must be in the range 1 to 100, and it defaults to 1. A single pipe can connect to multiple queues.

# ipfw queue 5 config pipe 1 weight 20

This command instructs dummynet to configure queue 5 to use pipe 1, with a weight of 20. The weight parameter allows you to specify the ratios of bandwidth the queues will use. Queues with higher weights will use more bandwidth.
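
Similarly, listing the queues lets you verify each queue's weight and pipe assignment:

# ipfw queue show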

To calculate the bandwidth for each queue, divide the total bandwidth of the pipe by the total weights, and then multiply each weight by the result. For example, if a 120 Kbps pipe sees active traffic (called flows) from three queues with weights 3, 2, and 1, the flows will receive 60 Kbps, 40 Kbps, and 20 Kbps, respectively.

If the flow from the queue with weight 2 disappears, leaving only the flows with weights 3 and 1, those will receive 90 Kbps and 30 Kbps, respectively. (120 / (3+1) = 30, so multiply each weight by 30.)
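
As a sketch, the three-queue scenario above could be configured as follows; the queue numbers are arbitrary choices for illustration:

ipfw pipe 1 config bw 120kbit/s
ipfw queue 1 config pipe 1 weight 3
ipfw queue 2 config pipe 1 weight 2
ipfw queue 3 config pipe 1 weight 1

With all three queues active, dummynet divides the 120 Kbps in a 3:2:1 ratio, producing the 60/40/20 Kbps split described above.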

The weight concept may seem strange, but it is rather simple. Queues with equal weights receive the same amount of bandwidth, and if queue 2 has double the weight of queue 1, it gets twice as much bandwidth. Queues with no traffic are not taken into account when dividing bandwidth. Consider a configuration with two queues, one with weight 1 (for unimportant traffic) and the other with weight 99 (for important business traffic): when both queues are active, they split the bandwidth 1%/99%, but if the weight-99 queue falls silent, the unimportant traffic will use all of the bandwidth.
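
A concrete sketch of that two-queue setup might look like this, where the 1000 kbit/s pipe bandwidth is an assumed value for illustration:

ipfw pipe 1 config bw 1000kbit/s
# unimportant traffic: 1% of the pipe when both queues are busy
ipfw queue 1 config pipe 1 weight 1
# important business traffic: 99% when both queues are busy
ipfw queue 2 config pipe 1 weight 99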

7.3.3 Using Masks

Another very useful option is to create a mask by adding mask mask-specifier at the end of your config line. Masks allow you to turn one flow into several flows; the mask distinguishes the different flows.

The default mask is empty, meaning all packets fall into the same flow. Using mask all would make all connections significant, meaning that every TCP or UDP connection would appear as a separate flow.

When you apply a mask to a pipe, each of that pipe's flows acts as a separate pipe. Yet each of those flows is an exact clone of the original pipe, sharing all of its parameters. This means that three active flows through our example pipe can use a total of 360 Kbps, or 120 Kbps each.

For a queue, the flows will act as several queues, each with the same weight as the original one. This means you can use the mask to share a certain bandwidth equally. For our example with three flows and the 120 Kbps pipe, each flow will get a third of that bandwidth, or 40 Kbps.
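
To make the contrast concrete, here is a sketch assuming you want one flow per destination host; the pipe numbers and 120 kbit/s bandwidth are carried over from the earlier examples:

# masked pipe: each destination host gets its own 120 Kbps clone of pipe 1
ipfw pipe 1 config bw 120kbit/s mask dst-ip 0xffffffff
# masked queue: all destination hosts share pipe 2's 120 Kbps equally
ipfw pipe 2 config bw 120kbit/s
ipfw queue 2 config pipe 2 weight 1 mask dst-ip 0xffffffff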

This hack assumes that you will integrate these rules in your firewall configuration or that you are using ipfw only for traffic shaping. In the latter case, having the IPFIREWALL_DEFAULT_TO_ACCEPT option in the kernel will greatly simplify your task.

In this hack, we sometimes limit only incoming or outgoing bandwidth. Without this option, we would have to add explicit rules to pass traffic in the unshaped direction, traffic through the loopback interface, and traffic through any interfaces we do not limit.

However, you should consider building your policy without IPFIREWALL_DEFAULT_TO_ACCEPT: a default-deny ruleset drops any packets your policy does not specifically allow, whereas a default-accept ruleset may pass potentially malicious traffic you hadn't considered. The example configurations in this hack were tested alongside an ipf-based firewall that had an explicit deny rule at the end.

When integrating traffic shaping into an existing ipfw firewall, keep in mind that an ipfw pipe or ipfw queue rule is equivalent to "accept after slowing down . . ." when the sysctl net.inet.ip.fw.one_pass is set to 1 (the default). If that sysctl is set to 0, such a rule is just a delay in a packet's path to the next rule, which may well be a deny or another round of shaping. This hack assumes the default behavior, where a matching pipe or queue rule accepts the packet (or performs an equivalent action).
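
You can check and change this behavior at runtime with stock sysctl invocations; the second command makes a matched pipe or queue rule re-inject the packet at the next rule instead of accepting it:

# sysctl net.inet.ip.fw.one_pass
net.inet.ip.fw.one_pass: 1
# sysctl net.inet.ip.fw.one_pass=0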

7.3.4 Simple Configurations

There are several ways of limiting bandwidth. Here are some examples that assume an external interface of ed0:

# only outgoing gets limited
ipfw pipe 1 config bw 100kbit/s
ipfw add 1 pipe 1 ip from any to any out xmit ed0

To limit both incoming and outgoing to 100 and 50 Kbps, respectively:

ipfw pipe 1 config bw 100kbit/s
ipfw pipe 2 config bw 50kbit/s
ipfw add 100 pipe 1 ip from any to any in recv ed0
ipfw add 100 pipe 2 ip from any to any out xmit ed0

To set a limitation on total bandwidth (incoming plus outgoing):

ipfw pipe 1 config bw 100kbit/s
ipfw add 100 pipe 1 ip from any to any in recv ed0
ipfw add 100 pipe 1 ip from any to any out xmit ed0

In this example, each host gets 16 Kbps of incoming bandwidth (outgoing is not limited):

ipfw pipe 1 config bw 16kbit/s mask dst-ip 0xffffffff
ipfw add 100 pipe 1 ip from any to any in recv ed0

7.3.5 Complex Configurations

Here are a couple of real-life examples. Let's start by limiting a web server's outgoing traffic speed, which is a configuration I have used on one of my servers. The server had some FreeBSD ISO files, and I did not want it to hog all the outgoing bandwidth. I also wanted to prevent people from gaining an unfair advantage by using download accelerators, so I chose to share the total outgoing bandwidth equally among 24-bit networks.

# pipe configuration, 2000 kilobits maximum
ipfw pipe 1 config bw 2000kbit/s
# the queue will be used to enforce the /24 limit mentioned above;
# with this mask, only the first 24 bits of the destination IP
# address are taken into consideration when generating the flow ID
ipfw queue 1 config pipe 1 mask dst-ip 0xffffff00
# divert outgoing traffic from the web server (at 1.1.1.1)
ipfw add queue 1 tcp from 1.1.1.1 80 to any out

Another real-life example involves limiting incoming traffic by department. This configuration limits the incoming bandwidth for a small company behind a 1 Mbps connection. Before this was applied, some users were using peer-to-peer clients and download accelerators, and they were hogging almost all the bandwidth. The solution was to implement some weighted sharing between departments and let the departments take care of their own hogs.

# Variables we will use
# External interface
EXTIF=fxp0
# My IP address
ME=192.168.1.1

# configure the pipe, 95% of total incoming capacity
ipfw pipe 1 config bw 950kbit/s

# configure the queues for the departments
# departments 1 and 2, heavy net users
ipfw queue 1 config pipe 1 weight 40
ipfw queue 2 config pipe 1 weight 40
# accounting, they shouldn't use the network a lot
ipfw queue 3 config pipe 1 weight 5
# medium usage for others
ipfw queue 4 config pipe 1 weight 20
# incoming mail (SMTP) to this server, HIGH priority
ipfw queue 10 config pipe 1 weight 100
# not caught by the previous categories - VERY LOW bandwidth
ipfw queue 11 config pipe 1 weight 1

# classify the traffic
# only incoming traffic is limited; outgoing is not affected
ipfw add 10 allow ip from any to any out xmit $EXTIF
# department 1
ipfw add 100 queue 1 ip from any to 192.168.0.16/28 in via $EXTIF
# department 2
ipfw add 200 queue 2 ip from any to 192.168.0.32/28 in via $EXTIF
# accounting
ipfw add 300 queue 3 ip from any to 192.168.0.48/28 in via $EXTIF
# mail
ipfw add 1000 queue 10 tcp from any to $ME 25 in via $EXTIF
# others
ipfw add 1100 queue 11 ip from any to any in via $EXTIF

The incoming limit is set to 95% of the true available bandwidth, which allows the shaper to delay some packets. If the pipe instead had the same bandwidth as the physical link, the link itself would be the bottleneck: the pipe's delay queues would remain empty and the weights would have no effect. The extra 5% of bandwidth on the physical link is what keeps the queues filled. The shaper chooses packets from those queues based on weight, passing packets from queues with higher weights before packets from queues with lower weights.

dummynet can limit incoming or outgoing bandwidth in multiple ways. Pairing it with well-thought-out ipfw rules can produce good results when your requirements are not extremely complex. However, keep in mind that dummynet cannot guarantee bandwidth or quality of service.


7.3.6 See Also

  • man dummynet

  • man ipfw

  • man ipf

  • "Using Dummynet for Traffic Shaping on FreeBSD" (http://www.bsdnews.org/02/dummynet.php)


