Iperf is an open-source network performance measurement tool that tests throughput. It sends traffic from one host to another and measures how much of it is transferred. Beyond throughput, it can also report metrics such as packet loss and jitter. Iperf works with both TCP and UDP traffic, with nuances specific to each protocol. When in UDP mode, iperf can also generate multicast traffic.
This guide will review the following topics:
- Iperf versions
- Iperf usage for TCP and UDP
- Common iperf options
- Advanced iperf capabilities
- NetBeez and iperf
Iperf Versions
There are two versions of iperf being developed in parallel: version 2 and version 3. They are incompatible with each other and maintained by different teams, but they offer more or less the same set of features.
Iperf version 2 was released in 2003, building on the original iperf, and is currently at version 2.0.13. Iperf version 3 was released in 2014, and its current version is 3.7. Iperf3 is a rewrite of the tool intended to produce a simpler and smaller code base. The team leading iperf2 development focuses mainly on WiFi testing, while iperf3 targets research networks. However, most of their functionality overlaps, and both can be used for general network performance testing.
Both versions support a wide variety of platforms, including Linux, Windows, and macOS. There is also a Java-based GUI for iperf2, called jperf. One subtle difference between the two versions is the way they use the reverse option flag and how traffic is exchanged; I invite you to read the blog post I linked to learn more about this.
Iperf Usage
In iperf, the host that sends the traffic is called the client and the host that receives traffic is called the server. Here is how the command line output looks for the two versions, for both TCP and UDP tests, in their basic forms without any advanced options. Note that in version 2 the server listens on port 5001 by default for both TCP and UDP, while in version 3 it listens on port 5201 for both protocols.
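Both versions accept the -p flag to override the default port. As a quick sketch, here is an iperf3 server running on port 5301 and a client pointed at it (the port number is an arbitrary example):
$ iperf3 -s -p 5301
$ iperf3 -c 172.31.0.25 -p 5301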
TCP
To run the iperf2 server using the Transmission Control Protocol, use the flag -s (iperf -s):
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 172.31.0.25 port 5001 connected with 172.31.0.17 port 55082
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
To run the iperf2 client, use the flag -c followed by the server’s IP address (iperf -c <server_IP>):
$ iperf -c 172.31.0.25
------------------------------------------------------------
Client connecting to 172.31.0.25, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 172.31.0.17 port 55082 connected with 172.31.0.25 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.09 GBytes 940 Mbits/sec
Similarly, to run the iperf3 server use the flag -s (iperf3 -s):
$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.31.0.17, port 56342
[ 5] local 172.31.0.25 port 5201 connected to 172.31.0.17 port 56344
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 108 MBytes 907 Mbits/sec
[ 5] 1.00-2.00 sec 112 MBytes 941 Mbits/sec
...
[ 5] 9.00-10.00 sec 112 MBytes 941 Mbits/sec
[ 5] 10.00-10.04 sec 4.21 MBytes 934 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 5] 0.00-10.04 sec 1.10 GBytes 938 Mbits/sec 0 sender
[ 5] 0.00-10.04 sec 1.10 GBytes 938 Mbits/sec receiver
To run the iperf3 client, use the flag -c followed by the server’s IP address (iperf3 -c <server_IP>):
$ iperf3 -c 172.31.0.25
Connecting to host 172.31.0.25, port 5201
[ 4] local 172.31.0.17 port 56344 connected to 172.31.0.25 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 112 MBytes 943 Mbits/sec 0 139 KBytes
[ 4] 1.00-2.00 sec 112 MBytes 941 Mbits/sec 0 139 KBytes
...
[ 4] 8.00-9.00 sec 112 MBytes 941 Mbits/sec 0 223 KBytes
[ 4] 9.00-10.00 sec 112 MBytes 941 Mbits/sec 0 223 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec 0 sender
[ 4] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec receiver
iperf Done.
UDP
To run the iperf2 server using the User Datagram Protocol you must add the flag -u that stands for UDP (iperf -s -u):
$ iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 172.31.0.25 port 5001 connected with 172.31.0.17 port 54581
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec 0.022 ms 0/ 893 (0%)
The same applies to the iperf2 client, which needs the flag -u to specify a UDP test (iperf -c <server_IP> -u):
$ iperf -c 172.31.0.25 -u
------------------------------------------------------------
Client connecting to 172.31.0.25, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 172.31.0.17 port 54581 connected with 172.31.0.25 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec
[ 3] Sent 893 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec 0.022 ms 0/ 893 (0%)
In iperf3, the server only needs the flag -s, since the protocol is selected by the client (iperf3 -s):
$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.31.0.17, port 56346
[ 5] local 172.31.0.25 port 5201 connected to 172.31.0.17 port 51171
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 120 KBytes 983 Kbits/sec 1882.559 ms 0/15 (0%)
[ 5] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 670.381 ms 0/16 (0%)
...
[ 5] 9.00-10.00 sec 128 KBytes 1.05 Mbits/sec 0.258 ms 0/16 (0%)
[ 5] 10.00-10.04 sec 0.00 Bytes 0.00 bits/sec 0.258 ms 0/0 (-nan%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-10.04 sec 1.25 MBytes 1.04 Mbits/sec 0.258 ms 0/159 (0%)
The iperf3 client needs the flag -u to select UDP as the testing protocol (iperf3 -c <server_IP> -u):
$ iperf3 -c 172.31.0.25 -u
Connecting to host 172.31.0.25, port 5201
[ 4] local 172.31.0.17 port 51171 connected to 172.31.0.25 port 5201
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 4] 0.00-1.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 16
...
[ 4] 8.00-9.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 9.00-10.00 sec 128 KBytes 1.05 Mbits/sec 16
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-10.00 sec 1.25 MBytes 1.05 Mbits/sec 0.258 ms 0/159 (0%)
[ 4] Sent 159 datagrams
iperf Done.
As you can see, the output format is very similar, and both versions reported the same measurements in this example. However, in TCP mode iperf tries to achieve the maximum TCP throughput available, while in UDP mode it targets a default rate of 1 Mbps. In the TCP tests above, iperf almost saturated the gigabit link that connects the two hosts; in the UDP tests, the traffic stayed at 1 Mbps because no target bandwidth was specified.
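To target a higher UDP rate, pass the -b flag with the desired bandwidth (covered in the options table below). For example, this client command targets 100 Mbps against the same server; the value is just an illustration:
$ iperf3 -c 172.31.0.25 -u -b 100M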
Common Iperf Options
Option | Description
---|---
-p, --port n | The server port for the server to listen on and the client to connect to.
-f, --format [kmKM] | A letter specifying the format to print bandwidth numbers in. Supported formats: ‘k’ = Kbits/sec, ‘K’ = KBytes/sec, ‘m’ = Mbits/sec, ‘M’ = MBytes/sec.
-i, --interval n | Sets the interval time in seconds between periodic bandwidth reports. If this value is set to zero, no interval reports are printed. Default is zero.
-B, --bind host | Bind to host, one of this machine’s addresses. For the client this sets the outbound interface. For a server this sets the incoming interface.
-v, --version | Show version information and quit.
-D, --daemon | Run the server in the background as a daemon.
-b, --bandwidth n[KM] | Set target bandwidth to n bits/sec (default 1 Mbit/sec for UDP, unlimited for TCP). If there are multiple streams (-P flag), the bandwidth limit is applied separately to each stream.
-t, --time n | The time in seconds to transmit for. Iperf normally works by repeatedly sending an array of len bytes for time seconds. Default is 10 seconds. See also the -l, -k and -n options.
-n, --num n[KM] | The number of buffers to transmit. Normally, iperf sends for 10 seconds. The -n option overrides this and sends an array of len bytes num times, no matter how long that takes. See also the -l, -k and -t options.
-P, --parallel n | Number of parallel client streams to run.
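To illustrate how these options combine, here is a sketch of a client command that runs a 30-second TCP test against a server listening on port 5301, printing a report every 5 seconds in Mbits/sec (the port and durations are arbitrary examples):
$ iperf3 -c 172.31.0.25 -p 5301 -t 30 -i 5 -f m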
Advanced Iperf Capabilities
Iperf offers several advanced configuration options to manipulate the test traffic generated. For example, users can run multiple simultaneous connections, change the default TCP window size, or even run multicast traffic when in UDP mode.
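As a sketch of a multicast test in iperf2, the server binds to a multicast group address with -B, and the client sends UDP traffic to that group, with -T setting the multicast TTL (the group address below is an arbitrary example from the administratively scoped 239.0.0.0/8 range):
$ iperf -s -u -B 239.255.1.2
$ iperf -c 239.255.1.2 -u -T 3 -t 10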
Multiple simultaneous connections
iperf2 can handle multiple connections concurrently, making it a preferred choice for assessing aggregate throughput from multiple clients to a single server. With iperf2, users can establish multiple client-server connections simultaneously, enabling comprehensive testing scenarios that closely simulate real-world network conditions. This ability to handle multiple connections efficiently lets users evaluate network performance under diverse loads, such as concurrent data transfers or simultaneous user activities.
Parallel connections
As an alternative to simultaneous connections from different clients, iperf also supports parallel streams from the same client. By using the -P option followed by the number of parallel streams, network testers can maximize the throughput across a single link.
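For example, this client command opens four parallel streams to the server (the stream count is arbitrary):
$ iperf3 -c 172.31.0.25 -P 4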
Manipulating the TCP window size
Understanding the TCP window size is crucial for accurate iperf measurements. This setting, adjustable via the -w flag, controls the amount of data that can be sent before waiting for an acknowledgment. Larger windows can boost reported bandwidth, especially on high-latency networks. However, limitations exist: operating systems restrict window size, and overly large settings can overwhelm the receiver, leading to inaccurate results. Finding the optimal window size involves balancing performance and resource usage, and experimenting with different -w values helps reveal your network’s true performance capabilities.
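As an example, the following client command requests a 512 KByte window; keep in mind that the operating system may clamp the value it actually grants (the size is an arbitrary example):
$ iperf3 -c 172.31.0.25 -w 512K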
NetBeez and Iperf
NetBeez is a network monitoring solution that runs TCP and UDP iperf tests and supports both versions 2 and 3. The iperf tests are configured on the NetBeez dashboard and then pushed to the agents. The NetBeez agents can be hardware or virtual appliances as well as cloud instances and Linux machines. NetBeez agents can run tests between agents or between a NetBeez agent and an iperf server.
NetBeez supports two ways to run iperf tests: ad-hoc when needed, and scheduled for continuous monitoring and testing. If you want to test NetBeez, request a demo or start a trial.
Ad-Hoc Tests
Ad-hoc testing enables you to run spontaneous one-off tests between two selected agents or between an agent and an iperf server. You can view results from multiple tests and multiple types of tests, allowing for easy viewing, comparison, and troubleshooting. The following screenshot shows an ad-hoc iperf test running between two wired agents based on Raspberry Pi.
Scheduled Tests
NetBeez provides a simple way to run iperf. Tests can be scheduled to run at user-defined intervals. During the configuration of a test, the user can pick different options, such as:
- Support for iperf2 and iperf3 tests
- Support for TCP or UDP tests, including multicast
- Mode: agent to agent or agents to server
- Definition of number of parallel flows, bandwidth, and TCP maximum segment size
- Type of Service marking
- TCP/UDP port used
- Test duration
- Run schedule
- Conditions upon which the test results are marked
The user can then review the historical data of the configured tests, including any alert conditions that were crossed. NetBeez not only runs iperf tests automatically, but also helps develop a performance baseline, highlights when throughput is reduced, and generates reports.
In the following screenshot, you can see the results of an iperf test that reports the UDP bandwidth, jitter, and packet loss.
If you want to automatically orchestrate iperf tests at scale, check out NetBeez and request a demo. NetBeez supports both iperf versions and different options.
Iperf References
This is just an introduction to the basics of iperf. If you want to read more about the topic, here are other articles we wrote:
iPerf Performance Testing on Single Board Computers
Speedtest Comparison: Ookla, NDT, NetFlix, HTML, iPerf
Iperf WiFi: Raspberry Pi 3, ASUS, Hawking, LinkSys & TP-LINK