Imagine trying to drink from a fire hose. That's what happens when a gigabit server sends data to a smartphone on congested Wi-Fi without any coordination. The phone's buffer overflows, packets get dropped, and both sides waste time on retransmissions.

TCP's sliding window protocol exists to prevent this. It gives the receiver a simple, continuous way to say: "Here's how much I can handle right now." The sender must respect that number.

The Core Idea: Advertised Window

Every TCP segment the receiver sends back includes a 16-bit window field. This number—called the receive window (rwnd)—tells the sender exactly how many bytes of buffer space are available. The sender can transmit up to that many bytes beyond what's already been acknowledged, but no more.
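
A minimal sketch of the sender-side arithmetic (the names here are illustrative, not any real stack's API):

```python
# Sketch of the sender-side rule: at most rwnd bytes may be outstanding
# beyond the last acknowledged byte. Names are illustrative.

def usable_window(rwnd: int, last_byte_sent: int, last_byte_acked: int) -> int:
    """Bytes the sender may still transmit right now."""
    bytes_in_flight = last_byte_sent - last_byte_acked
    return max(0, rwnd - bytes_in_flight)

# Example: receiver advertised 64 KB; 24 KB is sent but not yet acked.
print(usable_window(rwnd=65536, last_byte_sent=24576, last_byte_acked=0))  # 40960
```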

The receiver doesn't ask the sender to slow down. It simply reports how much room is left, and the sender stays within it.

As the receiver's application reads data from the buffer, space opens up. The next acknowledgment advertises a larger window. As unprocessed data accumulates, the window shrinks. This creates a continuous feedback loop: the sender always knows the receiver's current capacity.

How the Window Slides

The "sliding" in sliding window refers to how the transmission boundaries move forward as data flows.

Suppose the receiver has a 64 KB buffer and advertises rwnd=65536. The sender transmits 32 KB. The receiver's application processes 16 KB and sends an acknowledgment covering all 32 KB with rwnd=49152 (64 KB minus the 16 KB still waiting in the buffer). Measured from the acknowledged byte, the sender may now have up to 48 KB in flight, and the window's right edge has slid 16 KB past the original 64 KB mark.
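
To make the bookkeeping concrete, here is the same example as straight-line arithmetic (a toy model; real stacks track this per segment):

```python
# Toy walkthrough of the example above; all quantities in bytes.
BUF = 64 * 1024          # receiver's buffer

sent   = 32 * 1024       # sender transmits 32 KB
read   = 16 * 1024       # application consumes 16 KB of it
queued = sent - read     # 16 KB still waiting in the buffer

rwnd = BUF - queued
print(rwnd)              # 49152, as advertised in the ACK

# The ACK covers all 32 KB, so the window's right edge is now:
right_edge = sent + rwnd
print(right_edge)        # 81920 -- 16 KB past the original 65536 limit
```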

As acknowledgments arrive and data gets processed, the window slides forward. Multiple segments stay in flight simultaneously—far more efficient than sending one segment and waiting for its acknowledgment before sending the next.

The 64 KB Problem and Window Scaling

That 16-bit window field creates a hard limit: 65,535 bytes maximum. In 1981, this was generous. On modern networks, it's crippling.

Consider a connection with 100 milliseconds of round-trip latency. With a 64 KB window, you can never send more than 64 KB per round trip—about 5 Mbps, no matter how fast your actual connection is. The bandwidth sits unused because the protocol won't let you fill the pipe.
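
The ceiling is simple arithmetic: at most one window of data per round trip, no matter the link speed. A quick check:

```python
# Maximum throughput with a fixed window: one window per round trip.
window_bytes = 65535      # unscaled 16-bit maximum
rtt_seconds  = 0.100      # 100 ms round-trip time

throughput_bps = window_bytes * 8 / rtt_seconds
print(f"{throughput_bps / 1e6:.2f} Mbps")   # ~5.24 Mbps, regardless of link speed
```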

Window scaling (RFC 1323) fixes this by negotiating a multiplier during the handshake. Both sides exchange a scale factor (0-14), and all subsequent window values get left-shifted by that amount. A scale factor of 7 means window advertisements are multiplied by 128. Suddenly that 65,535-byte limit becomes 8 MB—enough to saturate even fast links.
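
The mechanics are a pure bit shift. A minimal sketch of the RFC 1323 rule:

```python
# The advertised 16-bit value is left-shifted by the negotiated scale factor.
def scaled_window(advertised: int, scale: int) -> int:
    assert 0 <= scale <= 14, "RFC 1323 caps the shift at 14"
    return advertised << scale

print(scaled_window(65535, 0))    # 65,535 bytes        (no scaling)
print(scaled_window(65535, 7))    # 8,388,480 bytes     (~8 MB)
print(scaled_window(65535, 14))   # 1,073,725,440 bytes (~1 GB ceiling)
```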

The catch: window scaling must be negotiated during the initial three-way handshake. You can't enable it mid-connection, and both sides must support it.

Zero Windows and the Deadlock Problem

Sometimes the receiver's buffer fills completely. The application isn't reading fast enough, and there's no room for new data. The receiver advertises rwnd=0: stop sending.

This is flow control working exactly as designed. But it creates a subtle problem.

Imagine the receiver's application catches up and buffer space opens. The receiver sends a window update: rwnd=16384. But that packet gets lost. Now both sides are waiting. The sender waits for a non-zero window. The receiver waits for data that will never come. Deadlock.

TCP prevents this with window probes. When the sender sees rwnd=0, it starts a persist timer. When the timer fires, it sends a tiny probe—one byte of data—just to force a response. The receiver acknowledges the probe and re-advertises its current window. If it's still zero, the sender backs off exponentially and probes again later. If space has opened, normal transmission resumes.

The probe is TCP saying "Hello? Still there?" with a single byte, just to break the silence.
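
A simplified sketch of the persist-timer loop (the send_probe callback and the timeout values are illustrative; real stacks fold this into their timer machinery):

```python
import time

def persist_probe(send_probe, initial_timeout=5.0, max_timeout=60.0):
    """Probe a zero-window peer until it advertises space again.

    send_probe() transmits one byte past the window and returns the
    rwnd value carried by the peer's acknowledgment.
    """
    timeout = initial_timeout
    while True:
        time.sleep(timeout)
        rwnd = send_probe()
        if rwnd > 0:
            return rwnd                          # window reopened; resume sending
        timeout = min(timeout * 2, max_timeout)  # exponential backoff, capped
```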

Flow Control vs. Congestion Control

These sound similar but solve different problems.

Flow control is receiver-driven. The receiver explicitly tells the sender its capacity through window advertisements. The constraint is local: how fast can this specific receiver process data?

Congestion control is sender-driven. The sender infers network capacity by watching for packet loss and latency spikes. The constraint is the path: how much can the network between these two endpoints handle?

TCP uses both simultaneously. The sender's actual transmission limit is whichever is smaller: the receiver's advertised window (rwnd) or the sender's congestion window (cwnd). If your connection is slow because rwnd is tiny, you have a receiver-side problem—the application isn't reading fast enough, or you need window scaling. If cwnd is the bottleneck, the network is congested.
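
In code, the sender's budget is just a minimum (a sketch, with made-up numbers):

```python
# The sender's actual budget is the smaller of the two windows.
def transmit_limit(rwnd: int, cwnd: int) -> int:
    return min(rwnd, cwnd)

print(transmit_limit(rwnd=8192,  cwnd=65536))  # 8192 -> receiver-limited
print(transmit_limit(rwnd=65536, cwnd=4380))   # 4380 -> network-limited
```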

Different problems, different solutions. Knowing which one limits your connection is the first step to fixing it.
