TCP vs. UDP: Reliable vs. Fast

When your browser loads this page, something invisible happens: two computers agree to have a conversation. They shake hands. They confirm receipt of every sentence. They wait for each other.

When you're in a video call, something different happens: one computer just starts talking. No handshake. No confirmation. If words get lost, they're gone.

These are TCP and UDP—the two protocols that carry almost all Internet traffic. They're not competitors. They're answers to different questions.

Two Philosophies of Trust

TCP (Transmission Control Protocol) asks: "Did you get that?"

UDP (User Datagram Protocol) says: "I hope you got that."

That's the fundamental difference. TCP establishes a connection, tracks every packet, demands acknowledgment, and retransmits anything that goes missing. UDP just sends data and moves on.

Think of it this way: TCP is a phone call. You establish a connection, you know when the other person is listening, and if they miss something, they ask you to repeat it. UDP is shouting across a crowded room. You say your piece and hope it arrives. If it doesn't, you've already moved on to the next thing.

How TCP Guarantees Delivery

Before TCP sends a single byte of your data, it performs a ritual called the three-way handshake:

  1. SYN: "I want to talk to you."
  2. SYN-ACK: "I hear you. I want to talk too."
  3. ACK: "Great. Let's begin."

Only after this exchange does data start flowing. And once it does, TCP tracks everything obsessively.

Every chunk of data gets a sequence number. The receiver sends back acknowledgments: "Got bytes 1-1000. Got bytes 1001-2000." If an acknowledgment doesn't arrive within a timeout window, TCP assumes the data was lost and sends it again.

This creates guarantees that feel almost magical:

  • Reliability: Lost packets get retransmitted automatically
  • Ordering: Even if packets arrive scrambled, TCP reassembles them correctly
  • Flow control: A fast sender won't overwhelm a slow receiver
  • Congestion control: TCP detects network congestion and backs off

The cost? Overhead. Latency. Acknowledgments cost round trips. Every lost packet means waiting and retrying. The three-way handshake alone adds a full round trip before any data moves.
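
All of that machinery lives in the operating system, not in your application. A minimal sketch in Python shows how little the application sees of it (the address 127.0.0.1:8080 is an assumed example server, not anything specific to this article):

import socket

# connect() performs the three-way handshake; after that, the OS handles
# sequence numbers, acknowledgments, retransmission, and ordering for you.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCK_STREAM = TCP
sock.connect(("127.0.0.1", 8080))           # SYN, SYN-ACK, ACK happen here
sock.sendall(b"GET / HTTP/1.0\r\n\r\n")     # keeps sending until every byte is handed off
reply = sock.recv(4096)                     # data arrives intact and in order, or the call fails
sock.close()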

How UDP Achieves Speed

UDP has almost no features. That's the point.

No connection setup. No acknowledgments. No retransmission. No ordering guarantees. No flow control. An application using UDP can just start sending data immediately.

The UDP header is eight bytes; TCP's is at least twenty. It contains exactly four fields: source port, destination port, length, and a checksum. That's it.
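
As a rough illustration, here is that entire header built by hand in Python (the port numbers are made-up example values; in IPv4 a checksum of zero means "not computed"):

import struct

payload = b"hello"
src_port, dst_port = 50000, 9999        # example values, nothing standard
length = 8 + len(payload)               # header (8 bytes) + data, in bytes
checksum = 0                            # 0 = "no checksum" in IPv4
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(len(header))                      # 8 -- the whole UDP header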

Each UDP datagram is independent—fire and forget. The sender has no idea whether it arrived. The receiver has no mechanism to request missing data. If packets arrive out of order, that's the application's problem.

This minimalism translates directly into performance:

  • No handshake delay before sending
  • No round-trip waits for acknowledgments
  • No retransmission delays when packets vanish
  • No connection state to maintain

UDP is as fast as your network allows.
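
A minimal sketch of that fire-and-forget style in Python (127.0.0.1:9999 is an arbitrary example address; nothing needs to be listening for the send to succeed):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)    # SOCK_DGRAM = UDP
sock.sendto(b"player position: x=42 y=17", ("127.0.0.1", 9999))   # no handshake, no ack
sock.close()                                # the sender never learns whether it arrived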

When You Need TCP

Choose TCP when the data must arrive intact and in order—when loss or corruption would break something.

Web pages: A missing byte in an HTML file can break the entire page structure, and a chunk of markup that arrives out of order is just as useless.

Email: "I love you" with a dropped packet might become "I ove you." Or worse.

File downloads: When you download software, you need every byte. A corrupted executable doesn't run.

Financial transactions: When your banking app transfers money, you need absolute certainty that the request arrived and the confirmation returned.

API requests: Applications need to know definitively: did my request succeed or fail?

TCP handles all of this automatically. The application just sends data and trusts that it arrives.

When You Need UDP

Choose UDP when speed matters more than perfection—when stale data is worse than missing data.

Online gaming: If a packet containing your player's position is lost, the next packet (arriving milliseconds later) contains your new position anyway. Pausing the game to retransmit old coordinates would make it unplayable. Players tolerate occasional glitches; they can't tolerate input lag.

Video calls: A lost video frame causes a brief glitch. But retransmitting it is pointless: by the time the retransmitted frame arrived, the call would already be hundreds of milliseconds past it. Better to skip the bad frame and keep the conversation flowing.

Live streaming: Same principle. Smooth playback of current content beats perfect playback of stale content.

DNS lookups: When your computer looks up "google.com," the query and response each fit in a single packet. UDP's speed means faster lookups. If a packet is lost, just retry—still faster than TCP's handshake would have been.

Real-time sensors: If a temperature reading is lost, the next reading (seconds later) is more current anyway. Guaranteed delivery of stale sensor data is guaranteed delivery of useless information.

The pattern: UDP wins when the newest data is the only data that matters.
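
Of the cases above, DNS is the easiest to see on the wire. The sketch below hand-builds a minimal query and sends it as a single datagram; 8.8.8.8 is used as an example resolver, and the timeout-and-retry is the application's own answer to UDP's lack of guarantees:

import socket
import struct

# Minimal DNS query for example.com, type A, class IN.
header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)   # ID, RD flag, 1 question
qname = b"".join(bytes([len(p)]) + p for p in b"example.com".split(b".")) + b"\x00"
query = header + qname + struct.pack("!HH", 1, 1)              # QTYPE=A, QCLASS=IN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)                        # if the datagram is lost, just time out and retry
sock.sendto(query, ("8.8.8.8", 53))         # one packet out...
response, _ = sock.recvfrom(512)            # ...one packet back
print(f"sent {len(query)} bytes, received {len(response)} bytes")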

The Hybrid Reality

Real systems often use both.

Video call and live-streaming systems often use TCP (or another reliable channel) for control messages ("pause," "skip," "change quality") because those commands must arrive. But they use UDP for the actual media frames, where smooth playback beats perfect frames.

Modern protocols sometimes build custom reliability on top of UDP. QUIC—the protocol powering much of the modern web—uses UDP as its foundation but adds its own reliability, stream multiplexing, and congestion control. It can optimize for web traffic patterns in ways TCP's fixed behavior cannot.

This reveals something important: TCP vs. UDP isn't binary. Sometimes the answer is "UDP, plus exactly the reliability features we actually need."
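
As a toy illustration of that idea (nothing like real QUIC, just the shape of it), here is a stop-and-wait sender that adds sequence numbers and retransmission on top of UDP and nothing else. It assumes a receiver that echoes back the 4-byte sequence number as its acknowledgment:

import socket

def send_reliably(sock, addr, payloads, retries=5, timeout=0.5):
    """Number each datagram and retransmit until the receiver echoes the number back."""
    sock.settimeout(timeout)
    for seq, payload in enumerate(payloads):
        packet = seq.to_bytes(4, "big") + payload
        for _ in range(retries):
            sock.sendto(packet, addr)
            try:
                ack, _ = sock.recvfrom(16)
                if int.from_bytes(ack[:4], "big") == seq:
                    break                    # acknowledged; move on to the next payload
            except socket.timeout:
                continue                     # datagram or ack lost; send it again
        else:
            raise ConnectionError(f"no acknowledgment for datagram {seq}")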

Performance Under Pressure

On a perfect network—low latency, no packet loss—TCP and UDP perform similarly. The overhead difference is negligible.

But networks aren't perfect. As latency increases, TCP's acknowledgment round trips accumulate. On a satellite link with a 600 ms round-trip time, every lost packet triggers a retransmission that adds roughly another 600 ms of delay. TCP's congestion control may also throttle throughput dramatically.
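
A rough back-of-the-envelope, using that assumed 600 ms figure:

rtt = 0.600                       # seconds per round trip on the satellite link
handshake = rtt                   # SYN / SYN-ACK / ACK before any data can flow
request_reply = rtt               # one request out, one reply back
print(handshake + request_reply)              # 1.2 s before the first useful byte
print(handshake + request_reply + rtt)        # ~1.8 s if a single packet must be retransmitted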

UDP's behavior doesn't change with network conditions. It doesn't wait. It doesn't back off. It just keeps sending, and whatever gets lost stays lost.

The flip side: applications using UDP must handle packet loss themselves. That's complexity TCP would have absorbed. Whether that trade-off makes sense depends entirely on what you're building.

The Essential Trade-off

TCP trades speed for certainty. UDP trades certainty for speed.

TCP guarantees your data arrives intact and in order. The cost is overhead, latency, and complexity in the protocol. The benefit is simplicity in your application—you just send data and trust the protocol.

UDP guarantees almost nothing. The cost is complexity in your application—you must handle loss gracefully. The benefit is raw speed and the freedom to implement exactly the reliability you need, no more.

Neither is better. They're answers to different questions.

When you're downloading a file, the question is: "Did every byte arrive?" TCP answers that.

When you're in a video call, the question is: "What's happening right now?" UDP answers that.

Knowing which question your application is really asking—that's the whole decision.
