Testing Multi-Path TCP (MPTCP) with iPerf3

We’ve covered many aspects of iPerf on our blog, and recently I found that iPerf3 version 3.19 added native Multi-Path TCP (MPTCP) support. In this post we’ll explain what MPTCP is, why it matters, and walk through a hands-on demo using two Raspberry Pis to show its resilience in action.

What is MPTCP?

Multi-Path TCP is an extension to standard TCP (defined in RFC 8684) that allows a single TCP connection to use multiple network paths at the same time.

With regular TCP, a connection is tied to a single pair of IP addresses. If that path degrades or fails, the connection drops. MPTCP solves this by splitting one logical connection across multiple physical paths, each as a separate “subflow.”

The main benefits are:

  • Resilience – if one path fails, traffic shifts to another without dropping the connection
  • Throughput aggregation – on truly independent paths, bandwidth can be combined
  • Better resource utilization – traffic shifts dynamically toward less congested paths

The protocol was developed by Olivier Bonaventure and his team at UCLouvain in Belgium, and its first major real-world deployment came from Apple. Many people use Siri while walking or driving. As they move farther away from a WiFi access point, the TCP connection used by Siri to stream voice eventually fails, resulting in error messages. To address this, Apple has been using MPTCP since iOS 7: when a user issues a Siri voice command, iOS establishes a connection over both WiFi and cellular, so if WiFi drops, the connection hands over to cellular seamlessly, as described in this video about MPTCP at Apple.

MPTCP is now used on all iPhones to provide seamless handovers and improve performance for Siri, Apple Music, and other applications. This deployment has also encouraged 3GPP to adopt MPTCP for the ATSSS service, which will allow future 5G smartphones to seamlessly switch between WiFi and cellular networks. Cloudflare has also written about how it is changing connectivity more broadly. On the infrastructure side, the Linux kernel has supported MPTCPv1 natively since kernel 5.6 (2020).

Installing iPerf3 3.19 or Later from Source

One way to demonstrate MPTCP is by using iPerf3 3.19 or later. Most Linux distributions lag behind on this. Debian Bookworm’s repository ships iPerf3 3.12, so you need to build from source.

First install the kernel headers matching your running kernel. This step is required for MPTCP to be detected at compile time:

apt install linux-headers-$(uname -r) git build-essential
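To sanity-check that the headers actually match the running kernel, you can look for the build symlink (the path below is the Debian convention; other distributions may differ):

```shell
# The 'build' symlink under /lib/modules should resolve to the
# installed headers for the running kernel
ls -l /lib/modules/$(uname -r)/build 2>/dev/null || echo "headers missing"
```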

Then clone and build:

git clone https://github.com/esnet/iperf.git
cd iperf
git checkout 3.21
./configure
make
make install
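On Debian-based systems, `make install` places libiperf under /usr/local/lib, which may not yet be in the linker cache; if iperf3 then complains about a missing libiperf.so, refreshing the cache usually fixes it:

```shell
# Rebuild the shared-library cache so the freshly installed
# libiperf in /usr/local/lib is found at runtime
ldconfig
```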

Verify MPTCP support was compiled in:

iperf3 --help | grep mptcp
  -m, --mptcp               use MPTCP rather than plain TCP

If the ‘--mptcp’ flag appears, you’re good.

You also need a kernel with MPTCP support enabled:

sysctl net.mptcp.enabled
  net.mptcp.enabled = 1

Raspberry Pi OS Bookworm (kernel 6.6+) has MPTCP enabled by default. Older Raspbian kernels do not.
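On kernels where MPTCP is compiled in but switched off, it can be enabled at runtime; to make the setting survive reboots, drop it into a sysctl.d snippet (the file name below is our own choice, any name in that directory works):

```shell
# Turn MPTCP on for the current boot
sysctl -w net.mptcp.enabled=1

# Persist across reboots via the usual sysctl.d convention
echo 'net.mptcp.enabled=1' > /etc/sysctl.d/90-mptcp.conf
```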

Lab Setup

For this demo we used two Raspberry Pi 4s running Raspberry Pi OS Bookworm, each with a wired (eth0) and wireless (wlan0) interface on the same 172.31.0.0/24 network:

Device    eth0            wlan0
Client    172.31.0.133    172.31.0.173
Server    172.31.0.218    172.31.0.237

Configuring MPTCP Endpoints

MPTCP needs to know which local interfaces to use as subflow endpoints. These settings are not persistent across reboots, so a reboot will cleanly reset them if something goes wrong.

On the client (172.31.0.133):

ip mptcp endpoint flush
ip mptcp endpoint add 172.31.0.133 dev eth0 subflow
ip mptcp endpoint add 172.31.0.173 dev wlan0 subflow
ip mptcp limits set subflows 2 add_addr_accepted 4
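Before running the test, it’s worth confirming the endpoints and limits were accepted (the exact output formatting varies with the iproute2 version):

```shell
# List the configured subflow endpoints and the path-manager limits
ip mptcp endpoint show
ip mptcp limits show
```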

On the server (172.31.0.218):

ip mptcp endpoint flush
ip mptcp endpoint add 172.31.0.237 dev wlan0 signal
ip mptcp limits set subflows 2 add_addr_accepted 4

The ‘signal’ flag on the server tells MPTCP to advertise the server’s additional address to the client via the ADD_ADDR option. This allows the client to open a subflow to the server’s second interface.
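Since these settings vanish on reboot, one way to reapply them automatically is a small boot script; the path and the idea of running it from a systemd oneshot unit (or /etc/rc.local) are our own convention, not something `ip mptcp` provides:

```shell
# Hypothetical helper script to restore the client's MPTCP endpoints
# at boot; hook it into a systemd oneshot unit or /etc/rc.local
cat <<'EOF' > /usr/local/sbin/mptcp-endpoints.sh
#!/bin/sh
ip mptcp endpoint flush
ip mptcp endpoint add 172.31.0.133 dev eth0 subflow
ip mptcp endpoint add 172.31.0.173 dev wlan0 subflow
ip mptcp limits set subflows 2 add_addr_accepted 4
EOF
chmod +x /usr/local/sbin/mptcp-endpoints.sh
```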

Running the Test

Start the server with an explicit IPv4 bind address. Without ‘-B’, iPerf3 defaults to a dual-stack IPv6 listener, and in our testing MPTCP only negotiated over IPv4, so force an IPv4 listener:

iperf3 -s -B 0.0.0.0

On the client, open two terminals. In the first, monitor MPTCP subflow events:

ip mptcp monitor

In the second, run the test:

iperf3 -c 172.31.0.218 -t 10 --mptcp
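While the test runs, you can confirm from a third terminal that the kernel really negotiated MPTCP; on recent iproute2, `ss -M` lists MPTCP sockets, and nstat exposes the kernel’s MPTCP counters:

```shell
# Show MPTCP sockets (-M) with numeric addresses (-n) and internals (-i)
ss -Mni

# MPTCP-specific SNMP counters (MPCapable handshakes, ADD_ADDR, etc.)
nstat -az | grep -i mptcp
```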

Watching MPTCP Negotiate Subflows

As soon as the connection starts, ‘ip mptcp monitor’ shows the subflow negotiation in real time:

$ ip mptcp monitor
[       CREATED] token=26d472c7 remid=0 locid=0 saddr4=172.31.0.133 daddr4=172.31.0.218 sport=56420 dport=5201
[   ESTABLISHED] token=26d472c7 remid=0 locid=0 saddr4=172.31.0.133 daddr4=172.31.0.218 sport=56420 dport=5201
[     ANNOUNCED] token=26d472c7 remid=2 daddr4=172.31.0.237 dport=5201
[SF_ESTABLISHED] token=26d472c7 remid=0 locid=2 saddr4=172.31.0.173 daddr4=172.31.0.218 sport=58871 dport=5201 backup=0 ifindex=3

Step by step:

  1. The main subflow is created and established over eth0: 172.31.0.133 to 172.31.0.218
  2. The server advertises its wlan0 address (172.31.0.237) via ADD_ADDR
  3. A second subflow is established over the client’s wlan0: 172.31.0.173 to 172.31.0.218

Two independent paths are now active for a single TCP connection.

The Failover Demo

With the test running, we brought down wlan0 on the client at around the 3-second mark:

ip link set wlan0 down

Here is the iPerf3 output:

$ iperf3 -c 172.31.0.218 -t 10 --mptcp
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   113 MBytes   947 Mbits/sec   93    208 KBytes
[  5]   1.00-2.00   sec   110 MBytes   924 Mbits/sec   66    212 KBytes
[  5]   2.00-3.00   sec   108 MBytes   904 Mbits/sec   75    209 KBytes
[  5]   3.00-4.00   sec  34.4 MBytes   288 Mbits/sec   22    192 KBytes  <-- wlan0 down
[  5]   4.00-5.00   sec  85.2 MBytes   715 Mbits/sec    0    342 KBytes  <-- recovering
[  5]   5.00-6.00   sec   110 MBytes   925 Mbits/sec    0    407 KBytes  <-- fully recovered
[  5]   6.00-7.00   sec   110 MBytes   919 Mbits/sec    0    421 KBytes
[  5]   7.00-8.00   sec   109 MBytes   914 Mbits/sec    0    427 KBytes
[  5]   8.00-9.00   sec   109 MBytes   918 Mbits/sec    0    430 KBytes
[  5]   9.00-10.00  sec   110 MBytes   919 Mbits/sec    0    431 KBytes

- - - - - - - - - - - - - - - - - - - - - - - - -

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   998 MBytes   837 Mbits/sec  256            sender

At the 3-second mark, throughput dropped from around 900 Mbits/sec to 288 Mbits/sec. MPTCP detected the subflow loss, retransmitted in-flight data, and shifted all traffic to the surviving eth0 subflow. Within two seconds the connection was back to full speed, without dropping.

With regular TCP, taking down the active interface would have killed the connection outright.
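You can verify that contrast yourself by repeating the run without the --mptcp flag and pulling the active interface mid-test; instead of recovering, the transfer stalls and eventually errors out (interface names as in the lab setup above):

```shell
# Plain TCP baseline: same test, no --mptcp
iperf3 -c 172.31.0.218 -t 10

# In another terminal, a few seconds in, take down the path in use:
ip link set eth0 down
```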

A Note on Bandwidth Aggregation

You might expect MPTCP to show higher throughput than regular TCP since it uses two interfaces. In our lab it did not. MPTCP averaged 837 Mbits/sec compared to 937 Mbits/sec for regular TCP. Both interfaces share the same upstream switch, so there are no truly independent paths to aggregate. MPTCP also adds some overhead for managing subflows and resequencing data.

Bandwidth aggregation with MPTCP requires genuinely independent paths, for example a wired connection and a cellular link on separate ISPs. The resilience benefit, however, works even on a shared LAN, as shown above.

Conclusion

MPTCP support in iPerf3 3.19 makes it straightforward to test and validate MPTCP deployments on Linux. The setup requires a kernel with MPTCP enabled (Linux 5.6+), kernel headers installed before building iPerf3 from source, and endpoint configuration via `ip mptcp`.

The main takeaway from this demo is that MPTCP’s value in most deployments is not bandwidth aggregation but connection resilience. A two-second dip and full recovery is a very different outcome from a dropped connection, which is why Apple, Cloudflare, and a growing number of network operators are deploying it.
