Your ISP advertises 100 Mbps. You run a speed test and get 72 Mbps. Are they lying?
No. They're speaking a different language. They're talking about bandwidth. You're measuring throughput. These are fundamentally different things, and the gap between them explains most network performance confusion.
Bandwidth: The Size of the Pipe
Bandwidth is capacity—the maximum amount of data a connection could theoretically carry. When your ISP says "100 Mbps," they mean: under perfect conditions, with no overhead, no congestion, no latency, this link could move 100 megabits every second.
It's a ceiling, not a promise.
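Even the ceiling gets misread, because links are rated in megabits per second while files are sized in megabytes. A quick sketch of the best-case arithmetic in Python:

```python
# Links are rated in megabits per second (Mb); files are sized in
# megabytes (MB). Divide by 8 before estimating a download's floor.
link_mbps = 100                       # advertised bandwidth
link_megabytes_per_s = link_mbps / 8  # 12.5 MB/s, the absolute ceiling

file_mb = 100                         # a 100 MB download
print(f"Best case: {file_mb / link_megabytes_per_s:.0f} s")  # 8 s, before any overhead
```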
Bandwidth is determined by physics and infrastructure. A fiber-optic cable supports different bandwidth than a copper wire; a 5 GHz WiFi channel supports different bandwidth than a 2.4 GHz one. These are fixed characteristics of the medium itself.
Throughput: What Actually Flows
Throughput is reality—the actual amount of data successfully transferred. It's what you measure when you run a speed test. It's what determines how long your download takes.
Throughput is always less than bandwidth. Always. The question is how much less, and why.
The Gap: Where Your Speed Goes
Protocol overhead consumes bandwidth before your data even starts moving. Every packet needs headers—TCP adds addressing and sequencing information, IP adds routing information, Ethernet adds framing. A typical packet carries about 1,460 bytes of your data wrapped in 40+ bytes of headers. That's 3% gone immediately, and we haven't even left your computer yet.
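Here's that overhead made concrete, using the common minimum header sizes (TCP and IP options can add more):

```python
# Rough per-packet overhead for a typical TCP segment over Ethernet.
# Header sizes below are the common minimums, not guarantees.
payload = 1460        # bytes of your data per packet (typical MSS)
tcp_header = 20       # ports, sequencing, acknowledgments
ip_header = 20        # routing
ethernet_frame = 18   # framing (header + trailer)

total_on_wire = payload + tcp_header + ip_header + ethernet_frame
overhead = 1 - payload / total_on_wire
print(f"Overhead: {overhead:.1%}")  # ~3.8% of the wire is not your data
```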
Latency creates waiting. TCP, the protocol underlying most Internet traffic, requires acknowledgment. Your computer sends data, then waits for confirmation it arrived before sending more. If that confirmation takes 100 milliseconds (common for cross-country connections), you spend 100 milliseconds doing nothing productive. High bandwidth with high latency is like a wide pipe that's very long—capacity exists, but filling it takes time.
This is captured in a concept called the bandwidth-delay product. A 100 Mbps connection with 100 ms of round-trip latency can only have 10 megabits "in flight" at any moment. If your application can't keep that much data moving continuously, you'll never reach your bandwidth ceiling. Distance matters, even when your connection is "fast."
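The math is simple enough to sketch, using those same assumed values of 100 Mbps and a 100 ms round trip:

```python
# Bandwidth-delay product: how much data must be "in flight" to keep
# a link busy. If the TCP window holds less than this, throughput is
# capped below bandwidth no matter how wide the pipe is.
bandwidth_bps = 100_000_000   # 100 Mbps link
rtt_seconds = 0.100           # 100 ms round trip

bdp_bits = bandwidth_bps * rtt_seconds
print(f"In flight: {bdp_bits / 1e6:.0f} Mb ({bdp_bits / 8 / 1e6:.2f} MB)")
# -> 10 Mb, i.e. 1.25 MB of window needed just to fill the pipe
```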
Packet loss forces retransmission. If 2% of packets vanish and must be re-sent, you've lost 2% of your throughput just to stay even. Worse, TCP interprets packet loss as congestion and deliberately slows down. A small amount of loss can crater throughput.
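How badly? A widely cited rule of thumb, the Mathis model, bounds steady-state TCP throughput by the loss rate. A sketch, treating the segment size, round-trip time, and constant as typical assumptions rather than measurements:

```python
import math

# Mathis et al. approximation for steady-state TCP throughput:
#   throughput <= (MSS / RTT) * (C / sqrt(loss)),  C ~ 1.22
# MSS and RTT values here are illustrative assumptions.
def mathis_throughput_mbps(mss_bytes=1460, rtt_s=0.050, loss=0.005):
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss)) / 1e6

print(f"{mathis_throughput_mbps(loss=0.0001):.0f} Mbps")  # 0.01% loss -> ~28 Mbps
print(f"{mathis_throughput_mbps(loss=0.005):.1f} Mbps")   # 0.5% loss  -> ~4 Mbps
print(f"{mathis_throughput_mbps(loss=0.02):.1f} Mbps")    # 2% loss    -> ~2 Mbps
```

Notice that even a fraction of a percent of loss caps this connection far below a 100 Mbps link's bandwidth.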
Congestion divides capacity. Your 100 Mbps connection shares infrastructure with your neighbors. Your home network shares bandwidth among your devices. When everyone streams video at 8 PM, each stream gets a fraction of the total.
Processing limits create bottlenecks. Your router has a CPU. If it can't process packets as fast as they arrive, throughput drops regardless of what the network itself supports. Many routers advertise gigabit ports but achieve 600-700 Mbps in practice.
The Third Metric: Goodput
There's an even more honest number: goodput. This is the useful data transferred—excluding headers, excluding retransmissions, excluding everything that isn't your actual content.
When you download a 100 MB file, goodput determines how long it takes. Not bandwidth. Not even throughput. The data that actually constitutes your file, arriving intact, ready to use.
For most purposes, goodput is what you actually care about. The rest is accounting.
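Goodput is also the easiest of the three to compute: useful bytes delivered, divided by wall-clock time. A sketch with an assumed transfer time:

```python
# Goodput: useful bytes delivered per second, after headers and
# retransmissions. The transfer time here is an assumption.
file_bytes = 100 * 1_000_000   # the 100 MB file you asked for
transfer_seconds = 12.0        # measured wall-clock time (assumed)

goodput_mbps = file_bytes * 8 / transfer_seconds / 1e6
print(f"Goodput: {goodput_mbps:.1f} Mbps")  # ~66.7 Mbps of useful data
# Throughput on the wire during the same transfer is higher, because
# it also counts headers and any retransmitted bytes.
```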
Same Bandwidth, Different Throughput
Consider a 1 Gbps fiber connection—genuine, verified, 1,000 Mbps of bandwidth. Actual throughput might be:
- 950 Mbps downloading from a nearby server with minimal overhead
- 600 Mbps downloading from across the country, where latency limits TCP's ability to fill the pipe
- 300 Mbps downloading from a server that simply can't send faster
- 100 Mbps during peak evening hours when the ISP's upstream links are saturated
- 50 Mbps over WiFi from two rooms away, where signal degradation is the bottleneck
Same connection. Same bandwidth. Five completely different experiences.
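You can observe this variability yourself, because a throughput measurement is just bytes over time. A minimal sketch; the URL is a placeholder, not a real test file:

```python
import time
import urllib.request

# Minimal throughput measurement: time a real download and divide.
url = "https://example.com/testfile.bin"  # placeholder; use any large file you trust

start = time.monotonic()
data = urllib.request.urlopen(url).read()
elapsed = time.monotonic() - start

print(f"{len(data) * 8 / elapsed / 1e6:.1f} Mbps over {elapsed:.1f} s")
```

Run it against a nearby server and a distant one, at noon and at 8 PM, and you'll see the list above play out on your own connection.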
Closing the Gap
You can't change bandwidth without changing your plan or infrastructure. But you can improve throughput:
Reduce latency by choosing closer servers, using wired instead of wireless connections, and ensuring your router isn't adding unnecessary delay.
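One rough way to compare candidate servers is to time a TCP handshake, which is dominated by the round trip (plus a DNS lookup). A sketch, with example.com standing in for whatever host you're testing:

```python
import socket
import time

# Rough latency probe: time DNS resolution plus the TCP handshake.
# Closer servers show smaller numbers; compare a few candidates.
def connect_time_ms(host, port=443):
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=5):
        return (time.monotonic() - start) * 1000

print(f"{connect_time_ms('example.com'):.0f} ms")
```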
Eliminate packet loss with quality equipment, proper cable management, and adequate WiFi coverage. Even 0.5% packet loss visibly degrades TCP performance.
Manage congestion with Quality of Service rules that prioritize important traffic. If video calls matter more than background downloads, tell your router.
Upgrade bottlenecks when your router or modem can't keep up. Hardware from five years ago may not handle today's speeds.
Why This Matters
When your network feels slow, the problem could be:
- Insufficient bandwidth — the pipe is too narrow
- Poor throughput — the pipe is wide but something's blocking the flow
- High latency — the pipe is long and responses take time to return
- Packet loss — data is vanishing and must be re-sent
Each problem has different solutions. Adding bandwidth doesn't help if latency is the bottleneck. Reducing latency doesn't help if you genuinely need a bigger pipe.
When you measure 72 Mbps on your "100 Mbps" connection, you're not being cheated. You're seeing the difference between theoretical capacity and practical reality. The 28 Mbps went to protocol overhead, latency delays, and the accumulated friction of moving data across a network.
Understanding where it went is the first step to getting more of it back.