How fast can this network actually go?

Not "what does the spec sheet say" or "what did the provider promise"—but right now, between these two machines, how much data can actually flow?

iperf answers this by becoming the traffic itself. It floods the connection with controlled test data and measures what arrives. No sampling, no estimation. The definitive answer.

The Client-Server Model

iperf needs two endpoints: a server waiting for connections and a client generating traffic. On one machine:

iperf -s

This listens on TCP port 5001. On the other:

iperf -c server.example.com

The client blasts data at the server for 10 seconds and reports what got through. That's it. Two commands, one answer.
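
If port 5001 is blocked or already taken, both ends can agree on another port with -p (5002 here is an arbitrary choice):

iperf -s -p 5002
iperf -c server.example.com -p 5002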

To test longer (more stable measurements on variable networks):

iperf -c server.example.com -t 60
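
On long runs it also helps to see interim numbers rather than one final average; -i prints a report at a fixed interval (5-second windows here, purely as an example):

iperf -c server.example.com -t 60 -i 5

Each line then covers one 5-second window, so a mid-test dip shows up instead of vanishing into the overall average.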

Both Directions Matter

The basic test measures upload: client to server. But networks are often asymmetric—your download speed might differ dramatically from upload.

Test both directions sequentially:

iperf -c server.example.com -r

Or simultaneously, which reveals whether the path can sustain full throughput in both directions at once:

iperf -c server.example.com -d

UDP: When Packets Can't Wait

TCP retransmits lost packets. That's fine for file transfers but fatal for video calls—by the time the retransmission arrives, the moment has passed.

UDP tests reveal what real-time applications will experience:

iperf -c server.example.com -u -b 100M

This attempts to send 100 Mbps of UDP traffic. The server reports what actually arrived:

[  3] 0.0-10.0 sec  117 MBytes  98.0 Mbits/sec  0.028 ms  150/85000 (0.18%)

Those 150 lost packets out of 85,000? That's a 0.18% loss rate. In a video call, that's the frozen frames, the audio glitches, the "can you repeat that?" The 0.028 ms jitter tells you how consistently packets arrive—high jitter means choppy playback even without loss.
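
Flooding at 100 Mbps is a stress test. To approximate a specific real-time application, set -b near its actual bitrate; as a rough stand-in for an HD video stream (the 5 Mbps figure is illustrative, not a standard):

iperf -c server.example.com -u -b 5M -t 30 -i 5

Loss and jitter measured at the rate you actually intend to send are more representative than numbers taken while deliberately saturating the link.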

Parallel Streams

A single TCP stream might not saturate a fast connection, especially over long distances. TCP's congestion control can be conservative.

iperf -c server.example.com -P 4

Four parallel streams often achieve higher aggregate throughput than one. If quadrupling streams doubles your throughput, you've found a TCP tuning issue, not a capacity limit.
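
A simple way to check is to run the two tests back to back and compare totals (stream count and duration here are arbitrary):

iperf -c server.example.com -t 30
iperf -c server.example.com -t 30 -P 4

With -P, iperf prints one line per stream plus a [SUM] line; that [SUM] figure is the one to compare against the single-stream result.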

Reading the Output

TCP tests show throughput over time:

[  3] 0.0- 1.0 sec  112 MBytes  941 Mbits/sec
[  3] 1.0- 2.0 sec  113 MBytes  945 Mbits/sec
[  3] 2.0- 3.0 sec  112 MBytes  940 Mbits/sec
...
[  3] 0.0-10.0 sec  1.10 GBytes  942 Mbits/sec

Steady numbers mean a stable connection. Wild swings mean congestion, interference, or competing traffic.

A 1 Gbps Ethernet link won't hit 1000 Mbps—protocol overhead consumes ~5%. Expect 940-950 Mbps for TCP.
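
Roughly where that figure comes from, assuming a standard 1500-byte MTU and no TCP options:

TCP payload per frame:  1500 - 20 (IP header) - 20 (TCP header) = 1460 bytes
bytes on the wire:      1500 + 38 (Ethernet header, FCS, preamble, gap) = 1538 bytes
efficiency:             1460 / 1538 ≈ 94.9%, or about 949 Mbits/sec of a 1 Gbps link

TCP options such as timestamps take a few more bytes per segment, which is why real measurements usually land closer to 940 Mbits/sec.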

TCP Tuning

On high-bandwidth, high-latency paths (think: transcontinental links), TCP window size matters:

iperf -c server.example.com -w 256K

Larger windows keep more data in flight while waiting for acknowledgments. The optimal size depends on bandwidth × latency (the bandwidth-delay product).
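
A quick worked example, assuming a 1 Gbps path with a 60 ms round-trip time (both numbers illustrative):

BDP = 1,000,000,000 bits/s × 0.060 s = 60,000,000 bits ≈ 7.5 MBytes
256 KB window → 256 KB × 8 / 0.060 s ≈ 35 Mbits/sec maximum

On that path the 256 KB window above, not the link, would be the ceiling; you would want a window closer to the full 7.5 MB, within whatever the operating system allows.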

For detailed TCP internals:

iperf -c server.example.com -e

This shows retransmissions, congestion window dynamics, and other TCP internals, which helps when diagnosing why throughput is lower than expected.

iperf3 vs. iperf2

iperf3 is a complete rewrite with cleaner code and JSON output for automation:

iperf3 -c server.example.com --json

The two versions are incompatible—an iperf3 client cannot talk to an iperf2 server. Ensure both ends match.
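
The defaults and flags differ too. iperf3 listens on port 5201 rather than 5001, and the -r/-d options are gone; -R reverses the test so the server sends to the client:

iperf3 -s
iperf3 -c server.example.com -R

Newer iperf3 releases also offer --bidir for simultaneous two-way testing, roughly equivalent to -d above.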

Output Formats

iperf adapts its units by default. Force specific formats:

  • -f b: bits per second
  • -f m: megabits per second
  • -f M: megabytes per second
  • -f g: gigabits per second

Choose what makes comparison easiest.
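
For example, to report in megabytes per second so the figures map directly onto file-transfer expectations:

iperf -c server.example.com -f M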

Security Note

An iperf server accepts connections from any client and will consume as much bandwidth as that client asks for. Don't leave one running on production systems. If you must, restrict it to a trusted source address:

sudo iptables -A INPUT -p tcp --dport 5001 -s 192.168.1.100 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5001 -j DROP
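
When testing is done, stop the server and delete the rules again; -D removes a rule matching the same specification:

sudo iptables -D INPUT -p tcp --dport 5001 -s 192.168.1.100 -j ACCEPT
sudo iptables -D INPUT -p tcp --dport 5001 -j DROP

If you run iperf3 instead, adjust the port to its default of 5201.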

When Results Seem Wrong

iperf can saturate CPU on fast connections—check with top during tests. If CPU hits 100%, you're measuring compute limits, not network limits.

On Linux, check TCP buffer sizes:

sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem

Small buffers throttle high-bandwidth paths.
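
If the maximums are small, raising them lets TCP grow its window on long, fast paths. The values below are illustrative only; size them to your bandwidth-delay product:

sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 67108864"
sudo sysctl -w net.ipv4.tcp_wmem="4096 131072 67108864"

The three numbers are minimum, default, and maximum in bytes; persist the change under /etc/sysctl.d/ if it should survive a reboot.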

What iperf Tells You

iperf measures what the network path can deliver under ideal conditions: tuned parameters, dedicated traffic, no application overhead. Real applications may see less due to their own inefficiencies.

But when you need to know whether the network is the bottleneck or something else is—iperf gives you the answer.
