Network Performance and End-User Experience

What Impacts the End-User Experience?

In this article, we’ll analyze how network performance impacts the end-user experience. Network metrics such as round-trip time, packet loss, and throughput affect the response time of a web application and how users interact with it. Factors related to an application’s back end (such as web server or database performance) will not be covered – primarily because they are outside a network engineer’s control.

End-User Experience of Web Applications

A study performed by the Stanford Linear Accelerator Center showed a clear correlation between round-trip time and HTTP response time. In the following figure you can see that, for each data point, the lower bound of an HTTP GET response time is twice the round-trip time.

Figure 1 – Correlation between HTTP GET response time and ping RTT

This is because a minimal TCP transaction involves two round trips: one to establish the connection between the client and the server, and a second to send the HTTP GET request and receive the response.
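This lower bound is simple to compute. The sketch below (function name and values are illustrative, not from the study) adds one round trip for the TCP three-way handshake and one for the request/response exchange:

```python
# Back-of-the-envelope lower bound on an HTTP GET over a fresh TCP
# connection: one RTT for the three-way handshake, one RTT for the
# request/response exchange.

def min_http_get_time(rtt_ms: float) -> float:
    """Lower bound (ms) for an HTTP GET on a new TCP connection."""
    handshake = rtt_ms         # SYN -> SYN/ACK -> ACK; client may send right after
    request_response = rtt_ms  # GET goes out, first response byte comes back
    return handshake + request_response

print(min_http_get_time(50.0))  # 100.0 ms on a 50 ms RTT path
```

Real transfers add server processing time and, for anything larger than a few segments, further round trips while the congestion window grows.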

Round-Trip Time

Round-trip time (RTT) is the time it takes for a data packet to travel from a source to a specific destination and back. This value is tightly correlated with the distance between the two hosts, plus the delay introduced at each intermediate hop. Delay is the time a packet spends being processed and queued at intermediate hops between source and destination, while latency is the time a packet takes to travel across a single network link.

The following table shows typical RTT values by network type. The round-trip time of a local area network is typically between 1 and 5 milliseconds, while that of a metropolitan area network is between 3 and 10 milliseconds.

Link Type                   RTT range
Local Area Network          1 – 5 ms
Metropolitan Area Network   3 – 10 ms
Regional                    10 – 20 ms
                            70 – 80 ms
International               100+ ms

Packet Loss

Packet loss is a good measure of link quality. It is calculated as the percentage of packets that arrive malformed at the destination host, or do not arrive at all. A malformed packet means that some part of it was corrupted in transit, so the TCP checksum computed at the destination does not match the value calculated at the source.

So how does packet loss affect the end-user experience? TCP’s congestion control and avoidance algorithms govern the throughput that can be achieved between a source and a destination host. When packet loss occurs during a TCP transfer, the sender shrinks its congestion window, so throughput drops and then stabilizes around the rate the network can sustain.

Figure 2 – Correlation between TCP congestion window size and round trips.
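The sawtooth behavior of the congestion window can be sketched with a toy additive-increase, multiplicative-decrease (AIMD) loop. This is a deliberately simplified model, not a faithful TCP implementation; the loss interval and window sizes are illustrative:

```python
# Toy AIMD sketch of TCP congestion avoidance: the window grows by one
# segment per round trip and halves when a loss is detected, so the
# average settles around what the path can sustain.

def aimd(rounds: int, loss_every: int, cwnd: float = 1.0) -> list[float]:
    """Return the congestion window (in segments) after each round trip."""
    history = []
    for r in range(1, rounds + 1):
        if r % loss_every == 0:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease on loss
        else:
            cwnd += 1.0                # additive increase per RTT
        history.append(cwnd)
    return history

window = aimd(rounds=50, loss_every=10)
print(max(window), window[-1])  # peak and final window sizes
```

Because throughput per round trip is roughly the window size divided by the RTT, the oscillating window translates directly into the throughput behavior described above.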

Throughput

Throughput is the amount of data successfully transferred between a source and a destination per unit of time. This parameter should not be confused with bandwidth or data rate. Bandwidth, like data rate, is the maximum throughput achievable on a network or link; it is generally determined by the media and encoding technologies. Throughput is the actual transfer rate achieved between two hosts.

Since TCP is the protocol used to browse web pages and download files, it’s important to be familiar with the network metrics that affect web browsing and download speeds. Three core parameters affect throughput in a TCP connection: the Maximum Segment Size (MSS), round-trip time, and packet loss.

Bandwidth Delay Product

Bandwidth delay product is an important parameter of a TCP connection. TCP (Transmission Control Protocol) is a connection-oriented protocol in which every packet sent by the sender must be acknowledged by the receiver. Unlike UDP, which doesn’t require acknowledgements, TCP ensures that all packets are successfully received by the destination host.

In TCP, if the sender doesn’t receive an acknowledgement for a packet within a certain time window, it retransmits the packet until it is successfully received and acknowledged by the receiver.
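The retransmission logic can be sketched as a simple loop. This is a toy simulation, not real TCP: the 30% loss rate and the seed are arbitrary, and real TCP also adapts its timeout between attempts:

```python
import random

# Toy retransmission loop: the sender resends a packet until the
# (simulated) receiver acknowledges it.

def send_until_acked(loss_rate: float = 0.3, seed: int = 42) -> int:
    """Return the number of transmission attempts until an ACK arrives."""
    rng = random.Random(seed)  # seeded for reproducibility
    attempts = 0
    while True:
        attempts += 1
        lost = rng.random() < loss_rate  # packet (or its ACK) dropped?
        if not lost:
            return attempts              # ACK received, stop retransmitting

print(send_until_acked())
```

Each extra attempt costs at least one more round trip, which is one of the ways packet loss inflates response times seen by the user.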

The maximum amount of data that can be in flight on the circuit (sent by the source but not yet acknowledged) is called the bandwidth delay product. This value can be easily calculated by multiplying the data rate of a link by its round-trip time.
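The calculation is a one-liner; the link speed and RTT below are illustrative:

```python
# Bandwidth-delay product: link data rate multiplied by round-trip time
# gives the amount of data that can be "in flight" before the first
# acknowledgement returns.

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes."""
    return rate_bps * rtt_s / 8  # bits in flight, converted to bytes

# A 1 Gbit/s link with an 80 ms RTT keeps about 10 MB in flight:
print(bdp_bytes(1e9, 0.080))  # 10000000.0 bytes
```

This is why long, fast paths need large TCP windows: if the window is smaller than the bandwidth delay product, the sender idles while waiting for acknowledgements and never fills the link.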

Throughput as a function of latency and packet loss

In the image below, you can see a real example of the impact of packet loss and round-trip time on TCP throughput. This chart was created by the staff of ESnet when they were dealing with a dirty optic on one of their routers. The optic was causing 0.0046% packet loss. That corresponds to roughly one lost packet in every 22,000: a very small number! Yet, if you follow the trend of the blue, red, and green dotted lines, you can see how throughput falls off quickly as the round-trip time increases.

Figure 3 – Impact of latency and packet loss on TCP throughput (source: ESnet)


Do end users care about packet loss, round-trip time, and throughput? Not really. What end users care about is that the websites and cloud applications they use load quickly, that video streams and voice calls are clear, and that they can download files in the shortest time possible. By monitoring key network performance metrics, network engineers can find and correct problems quickly, so that the best end-user experience is delivered seamlessly.