You can adapt to delay. If every word in a phone conversation arrives 200 milliseconds late, you learn to wait before responding. The rhythm feels different, but you find it. The conversation works.
But what if the delay keeps changing? One word arrives in 50ms, the next in 400ms, the next in 150ms. Now there's no rhythm to find. You interrupt each other. Awkward silences appear where none should exist. The conversation falls apart—not because of the delay, but because of the inconsistency.
That inconsistency is jitter.
What Jitter Actually Is
Jitter is variation in latency over time. While latency measures how long packets take to arrive, jitter measures how much that timing varies.
In a perfect network, if you send packets every 20 milliseconds, they arrive every 20 milliseconds. In reality, some take 18ms, others 25ms, occasionally one takes 50ms. Jitter captures this irregularity—typically expressed as the average variation from expected timing.
Network engineers track several flavors, illustrated in the sketch after this list:
Average jitter gives a general sense of timing consistency but can hide occasional large spikes.
Peak jitter captures the worst-case variation—the outliers that matter most for real-time applications.
Jitter distribution reveals the pattern. Consistent small variations are easier to compensate for than rare but extreme spikes.
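To make these metrics concrete, here's a minimal Python sketch, assuming packets sent on a fixed 20 ms schedule (the arrival times are invented for illustration). It computes average and peak jitter from the inter-arrival gaps, plus the smoothed running estimate that RTP endpoints compute per RFC 3550:

```python
from statistics import mean

# Illustrative arrival times (ms) for packets sent on a fixed 20 ms schedule.
arrivals = [0, 21, 39, 64, 80, 101, 118, 145]
EXPECTED_GAP = 20

gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
deviations = [abs(g - EXPECTED_GAP) for g in gaps]

print(f"average jitter: {mean(deviations):.1f} ms")   # general consistency
print(f"peak jitter:    {max(deviations)} ms")        # worst-case outlier

# RFC 3550's running estimator (used by RTP) folds each new deviation in
# with gain 1/16, so a single spike moves the estimate only slightly.
j = 0.0
for d in deviations:
    j += (d - j) / 16
print(f"smoothed (RFC 3550) jitter: {j:.2f} ms")
```

The same average can come from steady small deviations or from one large spike, which is why peak jitter and the distribution are tracked separately.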
Where Jitter Comes From
Jitter emerges from variability in the same factors that cause latency:
Queuing delays are the primary culprit. Network devices have buffers. When traffic arrives faster than it can be forwarded, packets wait. One packet might sail through empty queues while the next waits behind a burst of traffic. The queue depth is constantly changing, so the wait time is constantly changing (a toy model after this list shows the effect).
Route changes create sudden shifts. Internet paths are dynamic—when routing changes, packets suddenly take longer or shorter paths. Until routing stabilizes, latency fluctuates.
Wireless networks are inherently jittery. Devices must wait for a clear channel before transmitting. The wait varies depending on interference, competing networks, and how many other devices want to talk. WiFi produces more timing variation than wired connections simply because the medium is shared and contested.
Processing variations occur when routers experience varying CPU loads. A router handling a surge of routing updates might process some packets more slowly than others.
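The queuing effect is easy to see in miniature. This sketch (service time and arrival pattern are invented) pushes packets through a single queue that forwards one packet per millisecond; the wait each packet sees depends entirely on how deep the queue happens to be at the moment it arrives:

```python
SERVICE_MS = 1.0                      # time to forward one packet

# Arrival times (ms): a burst, a lull, another burst, another lull.
arrivals = [0, 0, 0, 0, 5, 10, 10, 10, 10, 10, 30]

free_at = 0.0                          # when the link is next idle
for t in arrivals:
    start = max(t, free_at)            # wait if the link is busy
    wait = start - t
    free_at = start + SERVICE_MS
    print(f"arrive {t:>2} ms  queue wait {wait:.1f} ms")
```

The waits swing between 0 ms and 4 ms purely from burst timing: no route changes, no interference, just a queue whose depth keeps changing.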
Why Jitter Breaks Things
Different applications respond to jitter very differently:
Voice calls encode audio in small chunks, typically every 20-30 milliseconds. Those chunks need to arrive at roughly the same rate they were sent. When timing varies, you hear choppy speech, robotic artifacts, or awkward pauses as the system struggles to reconstruct smooth audio.
VoIP systems use jitter buffers to smooth variations—but buffers add latency. If jitter is 50ms, the buffer might need to be 100-150ms, adding noticeable delay to compensate for the unpredictability.
Video conferencing compounds the problem because audio and video must stay synchronized. Jitter in one stream relative to the other creates that uncanny effect of lips moving out of sync with words.
Online gaming demands both low latency and low jitter. Players can adapt to consistent delay—they learn to lead their shots, to account for the lag. But jitter makes the game feel random. Your character responds instantly to one command, then lags on the next. Precise control becomes impossible.
Streaming video cares less because buffers are large. Netflix might hold 30 seconds of video, which easily absorbs typical network variations. Live streaming with low-latency requirements is another story.
File transfers and web browsing are largely immune. TCP handles reordering and retransmission, presenting clean data to applications regardless of packet-level chaos.
What Good Looks Like
VoIP needs jitter below 30ms for acceptable quality. Under 20ms is good. Under 10ms is excellent. Above 30ms, degradation becomes obvious.
Video conferencing has similar requirements for audio, though video tolerates slightly more variation because frames are larger and buffering is more practical.
Online gaming varies by type. Competitive shooters need very low jitter—under 10ms. Turn-based games can tolerate far more.
General browsing functions fine with 100ms or more, though things might feel slightly less snappy.
Reducing Jitter
Quality of Service (QoS) prioritizes real-time traffic so it doesn't queue behind bulk transfers. VoIP packets skip the line. This doesn't reduce jitter in the network—it shields priority traffic from experiencing it (see the priority-queue sketch after this list).
Bandwidth headroom reduces congestion. A link running at 50% capacity has less queue variation than one running at 90%.
Traffic shaping smooths bursts into steady flows, reducing the queue spikes that create jitter (also sketched after this list).
Better wireless helps: less-congested channels, optimal access point placement, 5 GHz over 2.4 GHz. All reduce the contention that creates wireless timing variations.
Stable routing prevents the latency swings from path changes.
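To illustrate the first two techniques, here are two toy Python sketches; the traffic classes, rates, and packet names are invented for illustration. First, strict-priority queuing: real-time packets always leave before bulk packets, so they never wait behind a file-transfer burst:

```python
import heapq

# Two traffic classes: priority 0 (real-time) always dequeues before
# priority 1 (bulk). A sequence number preserves FIFO within each class.
queue = []
seq = 0

def enqueue(priority, packet):
    global seq
    heapq.heappush(queue, (priority, seq, packet))
    seq += 1

enqueue(1, "bulk-1")
enqueue(1, "bulk-2")
enqueue(0, "voip-1")                   # arrives after the bulk burst, leaves first
enqueue(1, "bulk-3")
enqueue(0, "voip-2")

while queue:
    _, _, packet = heapq.heappop(queue)
    print("send", packet)
```

Second, a token-bucket shaper: tokens accrue at a steady rate, each departing packet consumes one, and a small bucket tolerates brief bursts while smoothing anything longer:

```python
RATE = 1.0        # tokens (packets) per ms: the steady-state output rate
BURST = 3.0       # bucket depth: how many packets may leave back-to-back

tokens = BURST
last = 0.0        # time the token count was last updated
for t in [0, 0, 0, 0, 0, 10]:          # five-packet burst, then a straggler
    now = max(t, last)                  # FIFO: can't leave before the queue drains
    tokens = min(BURST, tokens + (now - last) * RATE)
    last = now
    if tokens >= 1:
        tokens -= 1
        print(f"arrived {t} ms: departs {now:.0f} ms")
    else:
        wait = (1 - tokens) / RATE      # wait for the next whole token
        tokens = 0.0
        last = now + wait
        print(f"arrived {t} ms: departs {now + wait:.0f} ms (shaped)")
```

The five-packet burst leaves as three back-to-back packets followed by one per millisecond, which is the smoothing that keeps downstream queues from spiking.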
Jitter Buffers: Trading Delay for Consistency
Since jitter can't be eliminated entirely, real-time applications buffer packets before processing them. Playback runs a fixed offset behind real time: early packets wait their turn, and late packets can still make their deadline because a small reserve is already queued ahead of them.
The buffer must be sized to accommodate expected jitter. If variation is typically 20ms with occasional 40ms peaks, a 50-60ms buffer ensures packets arrive before they're needed.
Adaptive buffers adjust dynamically—shrinking during calm periods to minimize latency, expanding during turbulent periods to maintain smoothness.
The tradeoff is always latency versus consistency. A larger buffer handles more jitter but adds more delay. The right balance depends on what you're doing.
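Here's a concrete sketch of the fixed-buffer case, using the 20 ms typical / 40 ms peak figures above (the timestamps are invented). Each packet is played at its send time plus a constant offset, and anything that misses its deadline is discarded, a point the next section returns to:

```python
BUFFER_MS = 60    # sized for ~20 ms typical jitter with ~40 ms peaks

# (send_time, arrival_time) pairs in ms; values are illustrative.
packets = [(0, 35), (20, 35), (40, 110), (60, 75), (80, 95)]

for sent, arrived in packets:
    deadline = sent + BUFFER_MS        # fixed playout point for this packet
    if arrived <= deadline:
        print(f"sent {sent:>2} ms: held {deadline - arrived} ms, played on time")
    else:
        print(f"sent {sent:>2} ms: missed playout by {arrived - deadline} ms, discarded")
```

An adaptive buffer would do the same thing but derive BUFFER_MS from a running jitter estimate instead of fixing it.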
When Jitter Becomes Loss
Jitter and packet loss often travel together—congestion and interference cause both—but they're distinct problems. Jitter means irregular timing. Loss means packets never arrive.
Here's the strange part: extreme jitter can become loss. If a packet arrives after the jitter buffer has given up waiting for it, the application discards it as "too late." Functionally, it's gone. The packet made the journey but arrived after it mattered.
This is why jitter thresholds exist for real-time applications. It's not just about smoothness—packets that arrive outside the acceptable window are packets that don't count.