
Your web browser doesn't know how to talk to the Internet. Neither does your email client, your video player, or any other application on your machine. They know how to talk to the transport layer—and the transport layer handles everything else.

This is Layer 4 of the OSI model. It solves a problem that seems impossible: how do two applications on different machines, separated by unknown distances and unpredictable networks, have a coherent conversation?

The Problem the Transport Layer Solves

When you load a webpage, your browser wants to request a file. That request must travel across networks your browser knows nothing about—through routers it can't see, over links with varying speeds, alongside traffic from millions of other conversations.

The network layer (Layer 3) can move packets from your machine to a server. But packets can arrive out of order. They can get lost. They can arrive faster than the receiver can process them. The network layer doesn't care—its job is routing packets, not ensuring they form a coherent conversation.

The transport layer bridges this gap. It takes an application's desire to communicate and translates it into something the network can deliver. Then it takes whatever the network delivers—out of order, with gaps, in bursts—and reconstructs what the application expected to receive.

The network says "I'll try." The transport layer decides whether to say "I promise" or "good luck."

Port Numbers: How Applications Share a Connection

Your device has one IP address, but dozens of applications want to use the network simultaneously. How does incoming data find the right application?

Port numbers. Every transport layer connection is identified by four things: source IP, source port, destination IP, destination port. When data arrives, the transport layer checks the destination port and delivers the data to whichever application is listening there.

Port numbers are 16-bit integers—0 through 65535. The ranges matter:

  • 0–1023: Well-known ports. HTTP lives on 80, HTTPS on 443, SSH on 22. These are reserved for standard services.
  • 1024–49151: Registered ports. Vendors and developers can register these with IANA for their own services.
  • 49152–65535: Ephemeral ports. When your browser connects to a web server, it picks a random port from this range as its return address.

When you connect to a website, the conversation might be: your machine port 52847 talking to the server's port 443. The transport layer on both ends uses these four numbers to route data to the correct application.
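You can watch the four-tuple from ordinary code. Below is a minimal sketch using Python's standard socket module; it assumes outbound network access, and example.com is just a placeholder host:

    import socket

    # Open a TCP connection; the OS picks our ephemeral source port.
    with socket.create_connection(("example.com", 443)) as sock:
        print("local :", sock.getsockname())   # (our IP, ephemeral port)
        print("remote:", sock.getpeername())   # (server IP, port 443)

Run it twice and the local port changes; the remote side stays fixed at 443. Those two pairs together are the connection's identity.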

TCP: The Promise of Reliable Delivery

TCP makes a promise. When an application sends data over TCP, the protocol guarantees: every byte will arrive, in the correct order, exactly once, or the sender will learn that the connection has failed.

Keeping this promise requires machinery:

Establish a connection first. Before any data flows, TCP performs a three-way handshake. Your machine sends a SYN (synchronize). The server responds with SYN-ACK (synchronize-acknowledge). Your machine replies with ACK (acknowledge).

Why three steps? Because both sides need to confirm the other can both send and receive. The first SYN proves you can send. The SYN-ACK proves the server can send and received your message. The final ACK proves you received their message. Two steps would leave one direction unconfirmed.
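The bookkeeping behind those three packets is small enough to model directly. The sketch below is a pure simulation, not real network traffic; the dictionaries stand in for TCP headers:

    import random

    # Toy model of the three-way handshake. Each side picks a random
    # initial sequence number (ISN) and must see it acknowledged.
    client_isn = random.randrange(2**32)
    syn = {"flags": "SYN", "seq": client_isn}          # step 1: client can send

    server_isn = random.randrange(2**32)
    syn_ack = {"flags": "SYN-ACK", "seq": server_isn,  # step 2: server can send
               "ack": syn["seq"] + 1}                  #         and can receive

    ack = {"flags": "ACK", "seq": client_isn + 1,      # step 3: client received
           "ack": syn_ack["seq"] + 1}                  #         the SYN-ACK

    assert syn_ack["ack"] == client_isn + 1   # client's ISN is confirmed
    assert ack["ack"] == server_isn + 1       # server's ISN is confirmed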

Number every byte. TCP assigns sequence numbers to track exactly where each byte belongs in the stream. When data arrives, the receiver knows precisely where to put it, even if packets arrived out of order.

Acknowledge receipt. The receiver tells the sender what it got. If acknowledgments don't come back, the sender assumes the data was lost and retransmits.
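Those two mechanisms are easiest to see together. The toy receiver below is an illustration of the idea, not TCP's real implementation: it buffers whatever arrives, delivers only the contiguous prefix, and its cumulative acknowledgment names the next byte it expects:

    # Toy receiver: reassemble segments by sequence number and report a
    # cumulative ACK (the next byte we expect from the sender).
    def receive(segments, isn=0):
        pending = {seq: data for seq, data in segments}
        stream, next_expected = b"", isn
        while next_expected in pending:          # deliver contiguous bytes
            data = pending.pop(next_expected)
            stream += data
            next_expected += len(data)
        return stream, next_expected             # (in-order data, cumulative ACK)

    # Bytes 3-5 were lost in transit; only the prefix can be delivered.
    data, ack = receive([(6, b"ld!"), (0, b"hel")])
    print(data, ack)   # b'hel' 3 -- "I have everything up to byte 3"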

Control the pace. TCP uses two mechanisms to avoid overwhelming the receiver or the network:

  • Flow control: The receiver advertises a window—how much buffer space it has available. The sender won't exceed this. The receiver is saying "I can handle this much, no more."

  • Congestion control: The sender monitors for signs of network congestion (lost packets, increasing delays) and backs off when the network struggles. This keeps TCP from making a bad situation worse.
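Classic TCP congestion control follows an additive-increase, multiplicative-decrease pattern: grow the congestion window by one segment per round trip while all is well, halve it when loss signals trouble. A deliberately simplified sketch; real TCP adds slow start, timeouts, and fast retransmit:

    # Simplified AIMD loop: cwnd caps how many segments may be in flight.
    cwnd = 10   # congestion window, in segments

    def on_round_trip(loss_detected):
        global cwnd
        if loss_detected:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease: back off
        else:
            cwnd += 1                  # additive increase: probe for capacity

    for loss in [False, False, False, True, False]:
        on_round_trip(loss)
        print("cwnd =", cwnd)          # 11, 12, 13, 6, 7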

The cost of these guarantees is overhead. Connection setup takes time. Acknowledgments add latency. Retransmissions delay delivery. For applications that need reliability, this cost is worth paying.

UDP: Speed Over Safety

UDP sends your data and moves on.

No connection setup. No acknowledgments. No retransmission. No ordering guarantees. No congestion control. UDP wraps your data in a minimal header (source port, destination port, length, checksum) and hands it to the network layer.
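That header is small enough to build by hand. A sketch using Python's struct module; leaving the checksum at zero is legal for UDP over IPv4, where zero means "not computed":

    import struct

    # The entire UDP header: four 16-bit fields in network byte order.
    def udp_header(src_port, dst_port, payload):
        length = 8 + len(payload)   # header (8 bytes) plus data
        checksum = 0                # 0 = "not computed" (IPv4 only)
        return struct.pack("!HHHH", src_port, dst_port, length, checksum)

    header = udp_header(52847, 53, b"hello")
    print(len(header), len(header) + 5)   # 8-byte header, 13-byte datagram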

This sounds reckless until you consider what some applications actually need.

Video conferencing doesn't want TCP's reliability. If a packet carrying 20 milliseconds of video gets lost, the last thing you want is to pause the conversation while TCP retransmits. By the time the retransmitted packet arrives, that moment has passed. Better to lose the frame and keep the conversation flowing.

Online gaming faces the same constraint. If a packet describing a player's position gets lost, you don't want the old position—you want the next update. TCP would hold everything waiting for the lost packet. UDP lets you drop it and move on.

DNS uses UDP because queries are tiny and fast. The overhead of TCP's connection setup would dominate the actual work. If a DNS query gets lost, send another one.

Applications that use UDP either accept occasional loss or implement their own reliability mechanisms tailored to their specific needs—keeping what helps, discarding what doesn't.
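"Send another one" is itself a reliability mechanism, and it fits in a few lines. A sketch of retry-on-timeout over a plain UDP socket; the server address is a placeholder, and a real DNS client would add query IDs and backoff:

    import socket

    SERVER = ("198.51.100.1", 53)   # placeholder address

    def query_with_retry(payload, attempts=3, timeout=1.0):
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            for _ in range(attempts):
                sock.sendto(payload, SERVER)       # fire the datagram
                try:
                    reply, _ = sock.recvfrom(512)  # wait for a response
                    return reply
                except socket.timeout:
                    continue                       # lost? send another one
        raise TimeoutError("no reply after retries")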

Segmentation: Breaking Data Into Pieces

Networks have size limits. Ethernet typically caps packets at 1500 bytes. But applications work with much larger data—images, files, video streams.

The transport layer handles this through segmentation. TCP divides application data into segments small enough for the network to carry. Each segment gets a header with sequence numbers so the receiving TCP can reassemble them in order.

UDP handles this differently: each datagram is independent. If an application sends more data than fits in one packet, the network layer fragments it. UDP doesn't track the pieces or guarantee they all arrive.
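The TCP side is easy to picture as code. A toy segmenter that chops a byte stream into MSS-sized pieces, each tagged with the sequence number of its first byte (1460 bytes is the usual maximum segment size on a 1500-byte Ethernet link, after 20 bytes each of IP and TCP headers):

    # Toy segmentation: split application data into MSS-sized segments.
    def segment(data, isn=0, mss=1460):
        return [(isn + off, data[off:off + mss])
                for off in range(0, len(data), mss)]

    for seq, chunk in segment(b"x" * 4000):
        print("seq =", seq, "len =", len(chunk))
    # seq = 0 len = 1460; seq = 1460 len = 1460; seq = 2920 len = 1080

These are the segments the toy receiver sketched earlier would reassemble by sequence number.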

The Sliding Window

TCP's flow control is elegant. The receiver maintains a buffer for incoming data and tells the sender: "My window is X bytes." The sender can have up to X bytes in flight—sent but not yet acknowledged.

As the receiving application reads data from the buffer, space opens up. The receiver advertises a larger window. The sender sends more.

If the receiver's buffer fills—the application isn't reading fast enough—the window shrinks to zero. The sender stops. When space opens, transmission resumes.

Two machines coordinating their pace with no external referee. The receiver states what it can handle. The sender respects the limit. The conversation flows as fast as the slower party can manage.
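The whole dance can be simulated in a dozen lines. A toy model with made-up numbers: the sender never exceeds the advertised window, and the receiving application drains only 1,000 bytes per tick:

    # Toy sliding-window flow control.
    buffer_size, buffered = 4096, 0     # receiver's buffer
    sent, total = 0, 20_000             # sender's progress

    while sent < total or buffered > 0:
        window = buffer_size - buffered          # receiver advertises free space
        chunk = min(window, 1460, total - sent)  # sender respects the window
        sent += chunk
        buffered += chunk
        buffered -= min(buffered, 1000)          # app reads 1000 bytes per tick

    print("delivered all", total, "bytes without overflowing the buffer")

As the buffer nears full, the shrinking window pins the sender's chunks down to the receiver's reading rate; if the application stopped reading entirely, the window would hit zero and the sender would stall.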

Why This Layer Exists

The transport layer embodies a principle that shaped the Internet: keep the network simple, push complexity to the edges.

Routers don't track connections. They don't retransmit lost packets. They don't manage flow control. They forward packets based on destination addresses. This keeps them fast and simple.

The sophisticated behavior—reliable delivery, congestion management, multiplexing—happens at the endpoints, in the transport layer. Your machine and the server coordinate their conversation, recover from losses, and adapt to network conditions. The routers between them remain blissfully unaware.

The network provides best-effort delivery. The transport layer builds whatever guarantees applications need on top of that foundation.

