

HTTP Versions: 0.9, 1.0, 1.1, 2, and 3

Every time you click a link, photons race through fiber—sometimes across oceans. A packet from New York to Tokyo takes at least 140 milliseconds round-trip, and that's on optimized financial networks[1]. Standard connections take longer. HTTP's entire history is humanity's refusal to accept that delay.

The protocol has been rewritten five times. Not because engineers were bored, but because each version hit a wall. Understanding those walls—and the clever hacks that got around them—explains why modern websites feel fast even when the physics hasn't changed.

HTTP/0.9: The One-Line Protocol

In 1991, Tim Berners-Lee needed a way to fetch documents. He built the simplest possible thing: connect to a server, send GET /index.html, receive HTML, disconnect. No headers. No error codes. No way to send images or know if something went wrong. Just text in, text out.

This was HTTP/0.9—sometimes called the "one-line protocol" because that's literally all it was. One line of request, one stream of response, connection closed.

The limitation was obvious: every single resource required a brand new connection. A page with ten images meant ten separate TCP handshakes—each one adding round trips of latency before any data could flow. But it worked. It proved the concept. The Web existed.
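The entire protocol fits in a few lines of Python. This is a sketch, not Berners-Lee's implementation, but it captures everything HTTP/0.9 specified:

```python
def http09_request(path):
    # HTTP/0.9: the request is literally one line -- no version string,
    # no headers, no body. GET was the only method.
    return f"GET {path}\r\n"

# The server replies with raw HTML: no status line, no headers.
# Closing the connection is the only end-of-response signal.
request = http09_request("/index.html")  # "GET /index.html\r\n"
```

Because the response carried no metadata at all, the client had no way to distinguish an error page from a real document—both were just bytes until the connection closed.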

HTTP/1.0: Headers Change Everything

By 1996, people wanted to send more than just HTML. They wanted images, different document types, error messages that made sense. HTTP/1.0 added the machinery to make this possible: headers.

Headers are metadata. Content-Type: image/png tells the browser what it's receiving. User-Agent identifies who's asking. Status codes—200 OK, 404 Not Found, 500 Internal Server Error—gave servers a vocabulary to explain what happened.

HTTP/1.0 also added POST, so forms could submit data, and HEAD, so you could check if a resource existed without downloading it.

But the fundamental problem remained: one connection, one request. A page with thirty resources meant thirty TCP handshakes. The protocol itself had become the bottleneck.
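The new request shape is easy to see side by side with 0.9. A minimal sketch of building an HTTP/1.0 request (the header values here are placeholders):

```python
def build_request(method, path, headers):
    # HTTP/1.0: a request line with a version, then one "Name: value"
    # header per line, then a blank line marking the end of the headers.
    lines = [f"{method} {path} HTTP/1.0"]
    lines += [f"{name}: {value}" for name, value in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

req = build_request("GET", "/logo.png", {"User-Agent": "demo/1.0",
                                         "Accept": "image/png"})
```

The response gained the same structure in reverse: a status line like `HTTP/1.0 200 OK`, headers such as `Content-Type: image/png`, a blank line, then the body.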

HTTP/1.1: Keep the Line Open

HTTP/1.1, released in 1997, had one big idea: stop hanging up.

Persistent connections—also called keep-alive—let browsers reuse the same TCP connection for multiple requests. Request the HTML, get it, then immediately request the CSS over the same connection. No new handshake. The server just... waits for the next request.

This simple change cut latency dramatically. But it introduced a new problem: head-of-line blocking.

Picture a single-file line at a coffee shop. The person in front orders something complicated. Everyone behind them waits, even if they just want a black coffee. HTTP/1.1 worked the same way—requests processed in order, one at a time. A slow response blocked everything behind it.

Browsers worked around this by opening multiple connections in parallel—typically six per domain. But this was a hack, not a solution. Six lines at the coffee shop is better than one, but you're still waiting behind complicated orders.
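The coffee-shop math is easy to simulate. This sketch (illustrative numbers, not real measurements) assigns each request to whichever connection frees up first, which is roughly what a browser does with its connection pool:

```python
import heapq

def finish_times(durations, connections=1):
    # Each connection processes requests one at a time, in order.
    # Greedily hand the next request to the first free connection.
    free = [0.0] * connections
    heapq.heapify(free)
    finished = []
    for d in durations:
        start = heapq.heappop(free)
        done = start + d
        finished.append(done)
        heapq.heappush(free, done)
    return finished

# One slow response (5s) followed by three fast ones (0.1s each).
slow_first = [5.0, 0.1, 0.1, 0.1]
one_line = max(finish_times(slow_first, connections=1))  # everyone waits behind the slow order
six_lines = max(finish_times(slow_first, connections=6))  # fast requests finish immediately
```

With one connection the fast requests queue behind the slow one and the page finishes at 5.3s; with six, they run in parallel and only the slow response itself (5.0s) matters.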

HTTP/1.1 also made the Host header mandatory, which enabled virtual hosting—multiple websites sharing one IP address. Before this, every website needed its own IP. After this, a single server could host thousands of domains by checking which one you asked for.
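A complete HTTP/1.1 request showing both changes (the domain is a placeholder):

```python
# The Host header tells a shared server which of its sites you want.
# An HTTP/1.1 request without it must be rejected with 400 Bad Request.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.org\r\n"        # mandatory in 1.1: selects the virtual host
    "Connection: keep-alive\r\n"   # reuse this connection for later requests
    "\r\n"
)
```

In HTTP/1.1, `keep-alive` is actually the default; clients send `Connection: close` when they want the old hang-up behavior instead.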

For nearly two decades, HTTP/1.1 powered the Internet. It was good enough. Until it wasn't.

HTTP/2: Stop Waiting in Line

HTTP/2, standardized in 2015, attacked head-of-line blocking directly.

The insight was this: why process requests in order at all? If you're waiting for a large image, why should a tiny CSS file wait behind it? HTTP/2 introduced multiplexing—multiple requests and responses interleaved on a single connection, delivered as they're ready.

To make this work, HTTP/2 switched from text to binary framing. Messages get split into small frames, tagged with stream identifiers, and interleaved freely. The browser reassembles them on arrival. It's like having multiple conversations over one phone line, each sentence tagged with which conversation it belongs to.
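The frame header itself is only nine bytes: a 24-bit payload length, a type, flags, and a 31-bit stream identifier. A sketch of packing one (the type constant and payload are illustrative):

```python
import struct

def h2_frame(frame_type, flags, stream_id, payload):
    # HTTP/2 frame header: 3-byte length, 1-byte type, 1-byte flags,
    # 4-byte stream identifier (the high bit is reserved).
    header = struct.pack(">I", len(payload))[1:]  # keep low 3 bytes of length
    header += struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF)
    return header + payload

DATA_FRAME = 0x0
frame = h2_frame(DATA_FRAME, 0, 1, b"hello")
# Frames from different streams can be interleaved on the wire in any
# order; the stream id tells the receiver which response each belongs to.
```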

Header compression via HPACK reduced another source of waste. HTTP headers are repetitive—the same cookies, the same user agent, the same host, request after request. HPACK maintains a table of previously sent headers and references them by index instead of retransmitting.
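The core idea can be illustrated with a toy dynamic table. Real HPACK also includes a predefined static table and Huffman coding, neither of which this sketch models:

```python
class ToyHeaderTable:
    """Simplified illustration of HPACK's dynamic table: once a header
    has been sent in full, later requests reference it by index."""

    def __init__(self):
        self.table = []

    def encode(self, header):
        if header in self.table:
            # Already sent before: emit just a small index.
            return ("indexed", self.table.index(header))
        # First time: send the full name/value and remember it.
        self.table.append(header)
        return ("literal", header)

enc = ToyHeaderTable()
first = enc.encode(("user-agent", "demo/1.0"))   # full literal on the wire
second = enc.encode(("user-agent", "demo/1.0"))  # tiny index reference
```

For a request with kilobytes of repeated cookies, that difference compounds on every single request.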

HTTP/2 also introduced server push: if the server knows you'll need style.css right after index.html, why wait for you to ask? Just send it. In practice, this proved harder to get right than expected—servers often guessed wrong—and adoption remained limited[5].

But HTTP/2 had a deeper problem it couldn't solve. The multiplexing happened at the HTTP layer, but underneath, everything still ran on TCP. And TCP has its own head-of-line blocking.

When a TCP packet gets lost, the protocol stalls the entire connection until that packet is retransmitted. Every stream, every resource, every frame—all waiting for one lost packet that might only affect one resource. HTTP/2 fixed the line at the coffee shop, but everyone was still stuck in the same revolving door.

HTTP/3: Change the Door

HTTP/3, standardized in June 2022[2], does something radical: it abandons TCP entirely.

The replacement is QUIC, a transport protocol built on UDP and standardized by the IETF in 2021[3]. QUIC implements reliability per-stream rather than per-connection. If a packet for one stream goes missing, only that stream stalls. Everything else keeps flowing.

This is true multiplexing with no head-of-line blocking at any layer. The problem HTTP/2 couldn't solve is simply... gone.

QUIC also combines the transport and encryption handshakes. Traditional HTTPS requires a TCP handshake, then a TLS handshake—multiple round trips before any application data can flow. QUIC does both at once, often establishing connections in a single round trip. For returning visitors, it can resume with zero round trips.
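The round-trip arithmetic, using the New York–Tokyo figure from the introduction (a back-of-the-envelope sketch assuming TLS 1.3; older TLS versions need extra round trips):

```python
RTT_MS = 140  # New York <-> Tokyo round trip, best case

# Round trips before the first byte of application data can flow:
tcp_tls      = (1 + 1) * RTT_MS  # TCP handshake, then TLS 1.3 handshake
quic_new     = 1 * RTT_MS        # transport + crypto combined in one exchange
quic_resumed = 0 * RTT_MS        # 0-RTT resumption for a returning visitor
```

On this link, QUIC saves 140ms before a single byte of the page has arrived—and a returning visitor can skip the wait entirely.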

Because QUIC runs over UDP, it can evolve in application space. TCP is baked into operating system kernels—changing it requires coordinated updates across every device on the Internet. QUIC is just software. Updates deploy in weeks, not years.

QUIC also handles network changes gracefully. When your phone switches from WiFi to cellular, TCP connections break—the IP address changed, and TCP identifies connections by IP. QUIC uses connection IDs instead. The connection migrates seamlessly. You don't notice. Your video call doesn't drop.

The transition has been gradual but steady. As of late 2024, HTTP/3 is supported by over 95% of major browsers and roughly a third of the top 10 million websites[4]. The improvements matter most where conditions are worst—lossy mobile networks, high-latency connections, exactly the situations where every millisecond of wasted waiting hurts.

The Pattern

Each version maintained backward compatibility in meaning—the same methods, the same headers, the same semantics. What changed was the wire format, the transport, the mechanics of getting bits from here to there faster.

| Version  | Year | Key Change                  | Wall Hit                              |
|----------|------|-----------------------------|---------------------------------------|
| HTTP/0.9 | 1991 | One request, one connection | Every resource needs a new connection |
| HTTP/1.0 | 1996 | Headers and status codes    | Still one request per connection      |
| HTTP/1.1 | 1997 | Persistent connections      | Requests wait in line                 |
| HTTP/2   | 2015 | Multiplexed streams         | TCP blocks everything on packet loss  |
| HTTP/3   | 2022 | QUIC replaces TCP           | No more blocking at any layer         |

The speed of light hasn't changed. Physics still wins. But HTTP has gotten remarkably clever at not wasting the time physics gives us.


Sources

  1. BSO Upgrades London – Tokyo – New York FX Circuit Speeds

  2. RFC 9114 - HTTP/3

  3. QUIC is now RFC 9000 - Fastly

  4. HTTP/3 - Wikipedia

  5. Remove HTTP/2 Server Push from Chrome - Chrome for Developers
