When a packet takes 300 milliseconds to arrive, that delay isn't one thing—it's a dozen things stacked on top of each other. Understanding which type of latency is hurting you tells you whether you can fix it, or whether you're fighting physics.
Propagation Latency: The Speed of Light Tax
Propagation latency is the time light takes to travel the physical distance between two points. This is the tax physics charges on every packet, and there's no negotiating with it.
In fiber optic cables, light travels at about 200,000 kilometers per second—two-thirds the speed of light in a vacuum. A 1,000-kilometer cable imposes at least 5 milliseconds of delay, purely from photons traversing distance.
For intercontinental connections, this tax becomes substantial. The undersea cable from New York to London spans roughly 5,500 kilometers. That's 28 milliseconds minimum, one way. No optimization, no caching, no clever engineering can reduce this. Light simply takes that long.
Satellite connections reveal how brutal propagation latency can be. Geostationary satellites orbit 35,786 kilometers above Earth. The one-way path, up to the satellite and back down to the ground, covers about 72,000 kilometers. That's roughly 240 milliseconds one way, 480 milliseconds round trip, just from the time radio waves spend traveling. Your packet literally goes to space and back.
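The arithmetic is simple enough to sanity-check yourself. Here's a rough sketch in Python using the approximate signal speeds above (about 200,000 km/s in fiber, about 300,000 km/s for radio waves in vacuum); the distances are the same round figures quoted in this section.

```python
# Back-of-envelope propagation delay: distance divided by signal speed.
FIBER_KM_PER_S = 200_000   # light in fiber, roughly two-thirds of c
RADIO_KM_PER_S = 300_000   # radio waves in vacuum, roughly c

def propagation_ms(distance_km: float, speed_km_per_s: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_per_s * 1000

print(f"1,000 km fiber run     : {propagation_ms(1_000, FIBER_KM_PER_S):.1f} ms")
print(f"New York to London     : {propagation_ms(5_500, FIBER_KM_PER_S):.1f} ms")
print(f"GEO satellite, one way : {propagation_ms(71_572, RADIO_KM_PER_S):.1f} ms")
```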
What you can do: Move endpoints closer together. Use CDNs. Accept that some distances impose unavoidable delay.
Transmission Latency: Pushing Bits Onto the Wire
Transmission latency is how long it takes to push all the bits of a packet onto the network link. It's packet size divided by link bandwidth—pure arithmetic.
A 1,500-byte packet on a 100 Mbps link: 0.12 milliseconds. On a 1 Gbps link: 0.012 milliseconds. On modern high-speed connections, this is usually negligible.
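As a minimal sketch of that arithmetic (packet size in bits divided by link rate in bits per second):

```python
def transmission_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to serialize one packet onto the link, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

print(transmission_ms(1500, 100e6))  # 0.12 ms on a 100 Mbps link
print(transmission_ms(1500, 1e9))    # 0.012 ms on a 1 Gbps link
```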
But it explains something important: bandwidth matters more for bulk transfers than for interactive applications. Downloading a 1 GB file benefits enormously from higher bandwidth. Loading a web page with many small requests? Bandwidth barely registers—other latency types dominate.
What you can do: Increase bandwidth for bulk transfers. For interactive traffic, focus elsewhere.
Processing Latency: Every Hop Takes Its Cut
Every network device between you and your destination—routers, switches, firewalls, load balancers—spends time examining and forwarding your packet. Each one takes its cut.
Modern routers process packets in microseconds. But a packet might traverse 15-20 hops, and the delays accumulate. More complex processing takes longer:
- Switches examine MAC addresses: a few microseconds
- Routers consult routing tables and potentially fragment packets: microseconds to low milliseconds
- Firewalls inspect packets against security rules: potentially several milliseconds for deep packet inspection
- Load balancers make distribution decisions: variable based on algorithm complexity
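Taking the rough per-device figures above and a typical 15-20 hop path, a quick tally shows how the microseconds add up. The numbers below are illustrative assumptions, not measurements:

```python
# Rough tally of per-hop processing delay on an 18-hop path,
# using illustrative per-device figures (microseconds).
switch_hops   = [3] * 6       # MAC lookups
router_hops   = [20] * 10     # routing table lookups
firewall_hops = [2_000] * 2   # deep packet inspection can cost milliseconds

all_hops = switch_hops + router_hops + firewall_hops
total_us = sum(all_hops)
print(f"{len(all_hops)} hops, ~{total_us / 1000:.1f} ms of processing delay")
```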
What you can do: Reduce hop count. Use faster hardware. Simplify firewall rules. Accept that some processing is unavoidable.
Queuing Latency: Where Chaos Lives
This is the one that ruins everything.
Queuing latency happens when packets arrive faster than a device can process or forward them. They wait in buffers. And the wait time varies wildly based on current congestion.
On an uncongested network, queuing latency is near zero. During congestion, packets might wait hundreds of milliseconds—or get dropped entirely when queues overflow.
Queuing latency is why your video call turns to mud at 7 PM when your neighbor starts streaming. A large file transfer fills the queues, and your small, latency-sensitive packets get stuck behind megabytes of bulk data. This problem has a name: bufferbloat.
Unlike propagation latency (fixed by physics) or transmission latency (fixed by bandwidth), queuing latency is chaotic. It changes moment to moment. It's the primary cause of intermittent performance problems.
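A toy single-queue simulation makes the chaos visible: as the link approaches full utilization, average waiting time climbs sharply. This is an illustrative sketch with Poisson arrivals and fixed-size packets, not a model of any real device:

```python
import random

def simulate_queue(utilization: float, n_packets: int = 100_000,
                   service_ms: float = 0.12) -> float:
    """Average queuing delay (ms) for a FIFO link at a given utilization."""
    random.seed(1)
    clock = 0.0          # arrival clock
    free_at = 0.0        # when the link finishes the packet in service
    waits = []
    mean_gap = service_ms / utilization   # average spacing between arrivals
    for _ in range(n_packets):
        clock += random.expovariate(1 / mean_gap)
        waits.append(max(0.0, free_at - clock))   # time spent in the buffer
        free_at = max(free_at, clock) + service_ms
    return sum(waits) / len(waits)

for u in (0.5, 0.8, 0.95, 0.99):
    print(f"{u:.0%} utilization: avg queuing delay {simulate_queue(u):.2f} ms")
```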
What you can do: Implement Quality of Service (QoS) to prioritize latency-sensitive traffic. Use congestion control algorithms that don't fill buffers. Reduce network utilization.
DNS Latency: The Hidden First Step
Before your browser can connect to www.example.com, it must discover the IP address. This requires querying DNS servers—potentially root nameservers, top-level domain servers, and authoritative nameservers in sequence.
An uncached DNS lookup typically takes 10-100 milliseconds. That sounds small until you realize modern web pages reference dozens of different domains. Each unique domain pays this tax.
DNS caching helps enormously for repeat visits. But the first access to any domain pays the full lookup cost.
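You can get a rough feel for this with a timed lookup. The sketch below uses Python's socket.getaddrinfo; the result depends on your operating system's resolver and any caches in the path, so treat it as a ballpark probe rather than a benchmark:

```python
import socket
import time

def dns_ms(hostname: str) -> float:
    """Wall-clock time for one name resolution, in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 443)
    return (time.perf_counter() - start) * 1000

print(f"first lookup : {dns_ms('www.example.com'):.1f} ms")
print(f"second lookup: {dns_ms('www.example.com'):.1f} ms")  # often cached, far faster
```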
What you can do: Use DNS prefetching. Reduce the number of domains your pages reference. Use fast DNS resolvers.
TLS Handshake Latency: The Security Tax
Establishing a secure HTTPS connection requires a TLS handshake before any application data can flow. TLS 1.2 needs two round trips. TLS 1.3 improved this to one.
With 50ms of one-way network latency (a 100ms round trip), the TLS handshake adds 100-200ms of delay before you can request the first byte of actual content. This is why connection reuse matters: you pay this cost once per connection, not once per request.
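One way to see the cost is to time the TCP connect and the TLS handshake separately. The sketch below probes a public host with Python's ssl module; actual numbers will vary with distance, congestion, and negotiated TLS version:

```python
import socket
import ssl
import time

# Timing the TCP connect and TLS handshake separately. Rough probe, not a benchmark.
host = "www.example.com"
ctx = ssl.create_default_context()

t0 = time.perf_counter()
raw = socket.create_connection((host, 443), timeout=5)
t1 = time.perf_counter()
tls = ctx.wrap_socket(raw, server_hostname=host)   # handshake happens here
t2 = time.perf_counter()

print(f"TCP connect  : {(t1 - t0) * 1000:.1f} ms  (~1 round trip)")
print(f"TLS handshake: {(t2 - t1) * 1000:.1f} ms  ({tls.version()})")
tls.close()
```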
What you can do: Use TLS 1.3. Enable connection reuse and session resumption. Use 0-RTT resumption where appropriate.
Server and Application Latency: After the Network
Strictly speaking, these aren't network latency—but they're part of what users experience.
Server processing latency is how long the server spends generating a response. A static file: microseconds. A complex database query: hundreds of milliseconds.
Application latency includes parsing, rendering, and executing code in the browser. Modern web applications might make dozens of sequential API calls to render a single page, each one accumulating network plus server latency.
What you can do: Optimize server code and database queries. Parallelize API calls. Reduce the number of round trips required.
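As a sketch of the parallelization point: independent requests issued concurrently cost roughly the latency of the slowest one rather than the sum of all of them. The endpoints below are hypothetical placeholders, not real APIs:

```python
import concurrent.futures
import urllib.request

# Hypothetical endpoints for illustration only.
urls = [
    "https://api.example.com/user",
    "https://api.example.com/orders",
    "https://api.example.com/recommendations",
]

def fetch(url: str) -> int:
    """Issue one request and return its HTTP status code."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status

# Issued in parallel, total time is roughly the slowest call, not the sum.
with concurrent.futures.ThreadPoolExecutor() as pool:
    statuses = list(pool.map(fetch, urls))
print(statuses)
```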
Jitter: Latency's Evil Twin
Jitter is variation in latency between packets. Some packets arrive in 50ms, others in 150ms, even though they're part of the same stream.
For bulk transfers, jitter barely matters. For real-time applications like video calls, it's often worse than high absolute latency. You can compensate for consistent 200ms delay with buffering. You can't compensate for unpredictable variation without introducing even more delay.
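There's no single official jitter metric, but two common summaries are the standard deviation of latency samples and the smoothed interarrival jitter estimator from RFC 3550 (used for RTP). A sketch with made-up samples:

```python
import statistics

# Latency samples in milliseconds, invented for illustration.
latencies_ms = [52, 49, 148, 51, 50, 131, 53, 48, 55, 142]

mean = statistics.mean(latencies_ms)
stdev = statistics.stdev(latencies_ms)
print(f"mean latency {mean:.0f} ms, jitter (std dev) {stdev:.0f} ms")

# RFC 3550 keeps a running estimate smoothed over successive differences.
j = 0.0
for prev, cur in zip(latencies_ms, latencies_ms[1:]):
    j += (abs(cur - prev) - j) / 16
print(f"RTP-style interarrival jitter estimate: {j:.1f} ms")
```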
What you can do: Use jitter buffers for real-time applications. Prioritize real-time traffic with QoS. Choose network paths with more consistent latency.
The Compound Effect
A typical web request accumulates all these delays:
| Latency type | Example delay |
|---|---|
| DNS lookup | 20ms |
| Propagation (outbound) | 40ms |
| Processing (across hops) | 5ms |
| Queuing (variable) | 10ms |
| TLS handshake | 80ms |
| Server processing | 100ms |
| Propagation (return) | 40ms |
| Total | 295ms |
This is for a relatively fast, uncongested connection. Understanding which types dominate tells you where to focus:
- High propagation latency? Move closer (CDN) or accept it
- High queuing latency? Reduce congestion or implement QoS
- High DNS latency? Cache more aggressively or reduce domains
- High server latency? Optimize backend code
You can't outrun light, but you can stop waiting in line.