How to Use Linux Traffic Control

Traffic control (tc) is a very useful Linux utility that lets you configure the kernel packet scheduler. If you are looking for reasons to mess with the kernel scheduler, here are a few: first, it’s fun to play with the different options and become familiar with all of Linux’s features. In addition, you can use these tools to simulate packet delay and loss for UDP or TCP applications, or to limit the bandwidth usage of a particular service to simulate different Internet connections (DSL, cable, T1, etc.).

On Debian Linux, tc comes bundled with iproute, so in order to install it you have to run:
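apt-get install iproute

(On newer Debian releases the package is named iproute2.)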

Delay

The first example shows how to add a constant delay to an interface. The syntax is as follows (run this as root):
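tc qdisc add dev eth0 root netem delay 200ms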

Here is what each option means:

qdisc: modify the scheduler (aka queuing discipline)
add: add a new rule
dev eth0: rules will be applied on device eth0
root: modify the outbound traffic scheduler (also known as the egress qdisc)
netem: use the network emulator to emulate a WAN property
delay: the network property that is modified
200ms: introduce delay of 200 ms

Note: this adds a delay of 200 ms to the egress scheduler only. If the delay were applied to both the ingress and egress schedulers, the total delay would be 400 ms. In general, all of these traffic control rules are applied to the egress scheduler only.

You can see the effect by comparing what ping looks like before and after applying this rule.
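As a quick check, assuming a reachable host at 192.168.1.1 (a placeholder; substitute any host on your network):

ping -c 4 192.168.1.1

With the rule in place, each round-trip time should be roughly 200 ms higher than before.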

In order to display the active rules use:
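tc qdisc show dev eth0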

You can see the details of the existing rule, which adds 200.0 ms of latency.

To delete all rules use the following command:
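tc qdisc del dev eth0 root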

Now we can see the default rules of the Linux scheduler:
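tc qdisc show dev eth0

On most systems the output looks similar to the following (the exact format varies by kernel version):

qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1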

Without going into too much detail, we see that the scheduler works under a First In, First Out (FIFO) rule, which is the most basic and fair policy if you don’t want to set priorities on specific packets. You can think of it like the line at a bank: customers are taken care of in the order they arrive.

Note that if you have an existing rule you can modify it with “tc qdisc change”, and if you don’t have any rules yet you create them with “tc qdisc add”.

Here are some other examples:

- Delay of 100ms with a random ±10ms uniform variation:
tc qdisc change dev eth0 root netem delay 100ms 10ms
- Delay of 100ms with a random ±10ms uniform variation and a correlation value of 25% (since network delays are not completely random):
tc qdisc change dev eth0 root netem delay 100ms 10ms 25%
- Delay of 100ms with a random ±20ms variation following a normal distribution (other distribution options are pareto and paretonormal):
tc qdisc change dev eth0 root netem delay 100ms 20ms distribution normal

Packet Loss and Packet Corruption

Without explaining the syntax in detail, here is how to introduce a packet loss of 10%:
tc qdisc add dev eth0 root netem loss 10%

We can test this by running a ping test with 100 ICMP packets. Here is what the aggregate statistics look like:
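Using the same placeholder host as before:

ping -c 100 192.168.1.1

The summary line at the end of the output reports the percentage of packets lost.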

As you can see, there was 11% packet loss, which is very close to the value that was set. Note that if you are SSH’ed into the Linux box on which you are running these commands, you might lose your connection if you set the packet loss too high.

The following rule corrupts 5% of the packets by introducing a single-bit error at a random offset in the packet:
tc qdisc change dev eth0 root netem corrupt 5%

This one duplicates 1% of the sent packets:
tc qdisc change dev eth0 root netem duplicate 1%
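Note that netem options can be combined in a single rule. For example, this rule introduces both delay and loss at once:

tc qdisc change dev eth0 root netem delay 100ms loss 5%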

Bandwidth Limit

In order to limit the egress bandwidth we can use the following command:

tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

tbf: use the token bucket filter to manipulate traffic rates
rate: sustained maximum rate
burst: maximum allowed burst size
latency: the maximum time a packet can sit in the queue before being dropped

The best way to demonstrate this is with an iPerf test. In my lab I get 95 Mbps of performance before applying any bandwidth rules:
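To reproduce a test like this, run iperf in server mode on one machine and connect to it from another (192.168.1.1 is a placeholder for the server’s address):

On the server: iperf -s
On the client: iperf -c 192.168.1.1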

And here is the performance after applying the 1 Mbps limit:

As you can see, the measured bandwidth of 1.14 Mbps is very close to the configured limit.

Traffic control (tc), in combination with the network emulator (netem) and the token bucket filter (tbf), can handle much more advanced configurations than the ones shown here. A few examples of advanced configurations are maximizing TCP throughput on an asymmetric link, prioritizing latency-sensitive traffic, or managing oversubscribed bandwidth. Some of these tasks can be performed effectively with other tools or services, but tc is a great utility to have in your arsenal when the need arises.
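As a taste of what such combinations look like, here is a sketch that chains tbf under netem so traffic is both delayed and rate-limited (the handles and values are illustrative):

tc qdisc add dev eth0 root handle 1:0 netem delay 100ms
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000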

  • Rick Steiner

    Can you provide examples of when this might be useful to use in testing?

    • Panos Vouzis

      Hi Rick, a customer had a high-capacity but high-latency link (if I recall correctly, >100 ms). They wanted to use iperf to monitor the bandwidth, but the high latency required a larger TCP window size than the default that iperf uses. I used traffic control to simulate their high-latency link in my lab and find the appropriate window size they should use. Here is a related post: https://netbeez.net/blog/how-to-adjust-the-tcp-window-size-limit-on-linux/

      I use traffic control mostly to simulate different network conditions in my lab and test protocols and software under those conditions. For more use cases take a look at section 2.2: http://tldp.org/HOWTO/Traffic-Control-HOWTO/overview.html#o-what-is

      (Sorry for the late response. I didn’t get a notification about your comment)