How fast can this network actually go?
Not "what does the spec sheet say" or "what did the provider promise"—but right now, between these two machines, how much data can actually flow?
iperf answers this by becoming the traffic itself. It floods the connection with controlled test data and measures what arrives. No sampling, no estimation. The definitive answer.
The Client-Server Model
iperf needs two endpoints: a server waiting for connections and a client generating traffic. On one machine:
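A minimal sketch, assuming the classic iperf 2 command line (which matches the port 5001 default mentioned next):

```
# Start the server; it waits for incoming test connections
iperf -s
```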
This listens on TCP port 5001. On the other:
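Something like the following, where 192.168.1.100 is a placeholder for the server's address:

```
# Run the default 10-second TCP test against the server
iperf -c 192.168.1.100
```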
The client blasts data at the server for 10 seconds and reports what got through. That's it. Two commands, one answer.
To test longer (more stable measurements on variable networks):
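For example, a 60-second run (same assumed syntax and placeholder address):

```
# -t sets the test duration in seconds
iperf -c 192.168.1.100 -t 60
```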
Both Directions Matter
The basic test measures upload: client to server. But networks are often asymmetric—your download speed might differ dramatically from upload.
Test both directions sequentially:
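One way, assuming iperf 2, is the tradeoff flag; simply swapping the client and server roles works too:

```
# -r runs a second test in the reverse direction after the first finishes
iperf -c 192.168.1.100 -r
```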
Or simultaneously, which reveals whether the path can sustain full throughput in both directions at once:
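With iperf 2 that is the dual-test flag (recent iperf3 releases offer a similar --bidir option):

```
# -d sends and receives at the same time and reports both directions
iperf -c 192.168.1.100 -d
```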
UDP: When Packets Can't Wait
TCP retransmits lost packets. That's fine for file transfers but fatal for video calls—by the time the retransmission arrives, the moment has passed.
UDP tests reveal what real-time applications will experience:
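A sketch using iperf 2's UDP mode; note the server must also be started with -u:

```
# Server: accept UDP test traffic
iperf -s -u

# Client: send UDP at a target rate of 100 Mbps
iperf -c 192.168.1.100 -u -b 100M
```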
This attempts to send 100 Mbps of UDP traffic. The server reports what actually arrived:
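The exact layout varies by version, but an iperf 2 UDP server report consistent with the figures discussed below looks roughly like this:

```
[  3]  0.0-10.0 sec   119 MBytes  99.9 Mbits/sec   0.028 ms  150/85000 (0.18%)
```

The columns are interval, transfer, bandwidth, jitter, and lost/total datagrams.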
Those 150 lost packets out of 85,000? That's a 0.18% loss rate. In a video call, that's the frozen frames, the audio glitches, the "can you repeat that?" The 0.028 ms jitter tells you how consistently packets arrive—high jitter means choppy playback even without loss.
Parallel Streams
A single TCP stream might not saturate a fast connection, especially over long distances. TCP's congestion control can be conservative.
Four parallel streams often achieve higher aggregate throughput than one. If quadrupling streams doubles your throughput, you've found a TCP tuning issue, not a capacity limit.
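A sketch of a four-stream run, using the same assumed client syntax as above:

```
# -P runs multiple TCP streams in parallel and reports the aggregate
iperf -c 192.168.1.100 -P 4
```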
Reading the Output
TCP tests show throughput over time:
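Illustrative per-interval output (enabled with -i 1 in either version); the figures are hypothetical but in line with the gigabit expectations below:

```
[  3]  0.0- 1.0 sec   112 MBytes   941 Mbits/sec
[  3]  1.0- 2.0 sec   112 MBytes   943 Mbits/sec
[  3]  2.0- 3.0 sec   112 MBytes   940 Mbits/sec
```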
Steady numbers mean a stable connection. Wild swings mean congestion, interference, or competing traffic.
A 1 Gbps Ethernet link won't hit 1000 Mbps—protocol overhead consumes ~5%. Expect 940-950 Mbps for TCP.
TCP Tuning
On high-bandwidth, high-latency paths (think: transcontinental links), TCP window size matters:
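For example (the 4M value is a guess; see the sizing note that follows):

```
# -w requests a larger TCP window (socket buffer);
# with iperf 2, set it on the server side as well
iperf -c 192.168.1.100 -w 4M
```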
Larger windows keep more data in flight while waiting for acknowledgments. The optimal size depends on bandwidth × latency (the bandwidth-delay product).
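A quick worked example under assumed numbers (a 1 Gbps path with 80 ms round-trip time):

```
# Bandwidth-delay product in bytes: bits/s x RTT(ms) / 1000 / 8
echo $(( 1000000000 * 80 / 1000 / 8 ))   # -> 10000000, about 10 MB
# A window much smaller than ~10 MB caps throughput on this hypothetical path
```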
For detailed TCP internals:
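One option, assuming iperf3: its client report already includes per-interval Retr and Cwnd columns, and -V adds verbose detail:

```
# Retr = retransmitted segments, Cwnd = sender congestion window
iperf3 -c 192.168.1.100 -V
```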
This shows retransmissions, congestion window dynamics—useful when diagnosing why throughput is lower than expected.
iperf3 vs. iperf2
iperf3 is a complete rewrite with cleaner code and JSON output for automation:
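A sketch of the iperf3 equivalents (note iperf3 listens on port 5201 by default, not 5001):

```
# Server
iperf3 -s

# Client, with machine-readable JSON output
iperf3 -c 192.168.1.100 -J
```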
The two versions are incompatible—an iperf3 client cannot talk to an iperf2 server. Ensure both ends match.
Output Formats
iperf adapts its units by default. Force specific formats:
-f b: bits per second
-f m: megabits per second
-f M: megabytes per second
-f g: gigabits per second
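For example, to report in megabits per second regardless of magnitude (placeholder address again):

```
iperf -c 192.168.1.100 -f m
```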
Choose what makes comparison easiest.
Security Note
An iperf server accepts connections from anyone who can reach it, and every test it serves consumes real bandwidth. Don't leave one running on production systems. If you must:
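One possibility, assuming iperf3: bind the server to an internal address (10.0.0.5 here is a placeholder) and have it exit after a single test:

```
# -B binds to a specific local address; -1 (--one-off) exits after one client
iperf3 -s -B 10.0.0.5 -1
```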
When Results Seem Wrong
iperf can saturate CPU on fast connections—check with top during tests. If CPU hits 100%, you're measuring compute limits, not network limits.
On Linux, check TCP buffer sizes:
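For instance, via sysctl (the usual Linux keys; defaults vary by distribution):

```
# Maximum socket buffer sizes
sysctl net.core.rmem_max net.core.wmem_max

# TCP autotuning ranges: min, default, max (bytes)
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
```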
Small buffers throttle high-bandwidth paths.
What iperf Tells You
iperf measures what the network path can deliver under ideal conditions: tuned parameters, dedicated traffic, no application overhead. Real applications may see less due to their own inefficiencies.
But when you need to know whether the network is the bottleneck or something else is—iperf gives you the answer.