Your web browser doesn't know how to talk to the Internet. Neither does your email client, your video player, or any other application on your machine. They know how to talk to the transport layer—and the transport layer handles everything else.
This is Layer 4 of the OSI model, and it solves a problem that seems impossible when you first consider it: how do two applications on different machines, separated by unknown distances and unpredictable networks, have a coherent conversation?
The Problem the Transport Layer Solves
Consider what happens when you load a webpage. Your browser wants to request a file. That request needs to travel across networks your browser knows nothing about—through routers it can't see, over links with varying speeds, alongside traffic from millions of other conversations happening simultaneously.
The network layer (Layer 3) can move packets from your machine to a server. But packets can arrive out of order. They can get lost. They can arrive faster than the server can process them. The network layer doesn't care—its job is just to route packets, not to ensure they form a coherent conversation.
The transport layer bridges this gap. It takes an application's desire to communicate and translates it into something the network can actually deliver. Then it takes whatever the network delivers and reconstructs what the application expected to receive.
Port Numbers: How Applications Share a Connection
Your device has one IP address, but you're running dozens of applications that all want to use the network. How does incoming data find the right application?
Port numbers. Every transport layer connection is identified by four things: source IP, source port, destination IP, destination port. When data arrives, the transport layer looks at the destination port and delivers the data to whichever application is listening there.
Port numbers are 16-bit integers—0 through 65535. The ranges matter:
- 0-1023: Well-known ports. HTTP lives on 80, HTTPS on 443, SSH on 22. These are reserved for standard services.
- 1024-49151: Registered ports. Organizations can register these with IANA for their own services.
- 49152-65535: Ephemeral ports. When your browser connects to a web server, the operating system picks a port from this range as the connection's "return address."
So when you connect to a website, the conversation might be: your machine port 52847 talking to the server's port 443. The transport layer on both ends uses these numbers to route data to the correct application.
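You can watch those four numbers get assigned. Here's a minimal Python sketch: it opens a TCP connection to a web server and prints the four-tuple that identifies the conversation. The endpoint example.com:443 is just an illustration (any reachable server works), and the printed addresses in the comment are invented examples.

```python
import socket

# Open a TCP connection and inspect the four numbers that identify it.
# The operating system, not the application, picks the ephemeral
# source port.
sock = socket.create_connection(("example.com", 443))
src_ip, src_port = sock.getsockname()
dst_ip, dst_port = sock.getpeername()
print(f"{src_ip}:{src_port} -> {dst_ip}:{dst_port}")
# e.g. 192.168.1.7:52847 -> 93.184.216.34:443
sock.close()
```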
TCP: The Promise of Reliable Delivery
TCP is a promise. When an application sends data over TCP, the protocol guarantees: every byte will arrive, in the correct order, exactly once. And if the network makes that impossible, TCP at least tells the sender that the connection has failed.
This is harder than it sounds. To keep this promise, TCP must:
Establish a connection first. Before any data flows, TCP performs a three-way handshake: both sides agree they're ready to talk and exchange the initial sequence numbers that will order the stream. This seems wasteful until you realize the alternative—sending data into the void and hoping someone's there.
Number every byte. TCP assigns sequence numbers to track exactly where each byte belongs in the stream. When data arrives, the receiver knows precisely where to put it, even if packets arrived out of order.
Acknowledge receipt. The receiver tells the sender what it got. If acknowledgments don't come back, the sender assumes the data was lost and retransmits.
Control the pace. TCP uses two mechanisms to avoid overwhelming the receiver or the network:
- Flow control: The receiver advertises a "window"—how much buffer space it has available. The sender won't send more than this. It's the receiver saying "I can handle this much, no more."
- Congestion control: The sender monitors for signs of network congestion (lost packets, increasing delays) and backs off when the network is struggling. This keeps TCP from making a bad situation worse.
The cost of these guarantees is overhead. Connection setup takes time. Acknowledgments add latency. Retransmissions delay delivery. For applications that need reliability, this cost is worth paying.
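All of that machinery hides behind a very small API. A minimal sketch in Python, assuming example.com is reachable: the three-way handshake happens inside create_connection(), and sequence numbering, acknowledgments, and retransmission all run invisibly beneath sendall() and recv().

```python
import socket

# create_connection() returns only after the three-way handshake
# completes; both sides have agreed on initial sequence numbers by then.
sock = socket.create_connection(("example.com", 80), timeout=5)

# sendall() hands bytes to TCP, which numbers, paces, and (if needed)
# retransmits them. The application never sees any of that work.
sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:  # peer closed: everything it sent arrived, in order
        break
    response += chunk
sock.close()
print(response.decode(errors="replace"))
```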
UDP: The Shrug
UDP is a shrug. It sends your data and hopes for the best.
No connection setup. No acknowledgments. No retransmission. No ordering guarantees. No congestion control. UDP just wraps your data in a header (source port, destination port, length, checksum) and hands it to the network layer.
This sounds irresponsible until you consider what some applications actually need. Video conferencing doesn't want TCP's reliability. If a packet carrying 20 milliseconds of video gets lost, the last thing you want is to pause the conversation while TCP retransmits it. By the time it arrives, it's useless—the moment has passed. Better to lose that frame and keep the conversation flowing.
Online gaming has the same constraint. If a packet describing a player's position gets lost, you don't want the old position—you want the next update. TCP would hold everything waiting for the lost packet. UDP lets you drop it and move on.
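A UDP sender, by contrast, is a few lines with no handshake at all. A sketch in the spirit of those position updates; the address 127.0.0.1:9999 is made up, and nothing needs to be listening there for sendto() to succeed.

```python
import socket

# Fire-and-forget datagrams: no connection, no acknowledgment.
# A lost update doesn't matter; the next one supersedes it.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for tick, (x, y) in enumerate([(10, 20), (11, 20), (12, 21)]):
    sock.sendto(f"{tick},{x},{y}".encode(), ("127.0.0.1", 9999))
sock.close()
```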
DNS uses UDP because queries are tiny and the overhead of TCP connection setup would dominate the actual work. If a DNS query gets lost, just send another one.
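"Just send another one" translates directly into code. Here's a hand-rolled sketch of a DNS query over UDP with a timeout-and-retry loop; the header and question layouts follow RFC 1035, and 8.8.8.8 (Google's public resolver) is just one choice of server.

```python
import socket
import struct

def build_query(hostname, qid=0x1234):
    # Header: ID, flags (recursion desired), 1 question, 0 answer/
    # authority/additional records.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
for attempt in range(3):
    sock.sendto(build_query("example.com"), ("8.8.8.8", 53))
    try:
        reply, _ = sock.recvfrom(512)
        print(f"got a {len(reply)}-byte reply on attempt {attempt + 1}")
        break
    except socket.timeout:
        continue  # query or reply lost: just send another one
sock.close()
```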
The applications that use UDP either don't care about reliability or implement their own reliability mechanisms tailored to their specific needs.
Segmentation: Breaking Data Into Pieces
Networks have limits. Ethernet typically caps packets at 1500 bytes (the MTU). But applications work with much larger data—images, files, video streams.
The transport layer handles this mismatch through segmentation. TCP divides application data into segments small enough for the network to carry. Each segment gets a header with sequence numbers so the receiving TCP can reassemble them in order.
UDP handles this more simply: each datagram is independent. If an application sends more data than fits in one packet, the network layer fragments it. UDP doesn't track the pieces or guarantee they all arrive.
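One practical consequence: TCP delivers a byte stream, not messages. A single send on one side can arrive split across several reads on the other, so applications must reassemble their own message boundaries. A small helper sketch:

```python
def recv_exact(sock, n):
    """Read exactly n bytes from a TCP socket.

    One sendall() on the far side may arrive split across several
    recv() calls, or several sends may arrive glued together, because
    TCP segments the stream however it likes. Message boundaries are
    the application's job, not TCP's.
    """
    chunks = []
    remaining = n
    while remaining:
        chunk = sock.recv(remaining)
        if not chunk:
            raise ConnectionError("peer closed before the full message arrived")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```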
The Sliding Window
TCP's flow control mechanism is elegant once you see it. The receiver maintains a buffer for incoming data. It tells the sender: "My window is X bytes." The sender can have up to X bytes in flight—sent but not yet acknowledged.
As the receiving application reads data from the buffer, space opens up. The receiver advertises a larger window. The sender can send more.
If the receiver's buffer fills up—the application isn't reading fast enough—the window shrinks to zero. The sender stops. When space opens up, the sender resumes.
This is genuinely how two machines coordinate their pace without any external referee. The receiver says what it can handle. The sender respects the limit. The conversation flows as fast as the slower party can manage.
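A toy simulation makes the bookkeeping concrete. This isn't real TCP, just the window arithmetic: the sender may have at most the advertised window outstanding, stalls when the window hits zero, and resumes when the application drains the buffer. All the sizes here are invented for illustration.

```python
class Receiver:
    """Toy receiver: a fixed buffer whose free space is the window."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.buffered = 0  # bytes waiting for the application to read

    def window(self):
        return self.buffer_size - self.buffered  # advertised window

    def deliver(self, nbytes):
        self.buffered += nbytes

    def app_read(self, nbytes):
        self.buffered = max(0, self.buffered - nbytes)

receiver = Receiver(buffer_size=8)
remaining = 20  # bytes the sender wants to transmit
while remaining:
    window = receiver.window()
    if window == 0:
        receiver.app_read(4)  # sender stalls until the app drains the buffer
        continue
    chunk = min(window, remaining)
    receiver.deliver(chunk)
    remaining -= chunk
    print(f"sent {chunk} bytes; window is now {receiver.window()}")
```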
Why This Layer Exists
The transport layer embodies a principle that shaped the Internet: keep the network simple, push complexity to the edges.
Routers don't track connections. They don't retransmit lost packets. They don't manage flow control. They just forward packets based on destination addresses. This keeps them fast and simple.
The sophisticated behavior—reliable delivery, congestion management, multiplexing—happens at the endpoints, in the transport layer. This is where your machine and the server coordinate their conversation, recover from losses, and adapt to network conditions.
The network provides best-effort delivery. The transport layer builds whatever guarantees applications need on top of that foundation.