In the early web, each website needed its own IP address. As IPv4 addresses grew scarce, this became impossible. Virtual hosting solved the problem: one IP address, one port, hundreds of websites.

The trick is simple. A TCP connection tells the server where you're going—an IP address and port. Virtual hosting adds who you're looking for—a domain name. The server uses that name to route you to the right website.

Name-Based Virtual Hosting

When your browser requests http://example.com/page.html, it doesn't just connect to an IP address. It includes a Host header: Host: example.com. The web server reads this header and routes your request to the right site.

This is why DNS records for blog.example.com, shop.example.com, and api.example.com can all point to the same IP address. The server distinguishes them by the Host header, serving each from different directories or proxying to different backends.
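As a sketch, here is name-based routing in a few lines of Go; the hostnames and the port are illustrative, and real servers read the same Host value behind far more configuration:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// r.Host carries the HTTP/1.1 Host header (or the :authority
		// pseudo-header in HTTP/2); strip any :port suffix first.
		host := strings.ToLower(r.Host)
		if h, _, ok := strings.Cut(host, ":"); ok {
			host = h
		}
		switch host {
		case "blog.example.com":
			fmt.Fprintln(w, "blog site")
		case "shop.example.com":
			fmt.Fprintln(w, "shop site")
		case "api.example.com":
			fmt.Fprintln(w, "api backend")
		default:
			http.Error(w, "unknown host", http.StatusNotFound)
		}
	})
	// One listener on one IP and port serves all three names.
	http.ListenAndServe(":8080", nil)
}
```

Locally, curl -H "Host: blog.example.com" http://127.0.0.1:8080/ exercises the routing without touching DNS.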

The Host header became mandatory in HTTP/1.1 specifically for this purpose. HTTP/1.0 had no standard way to name the site you wanted; if multiple sites shared an IP, the server had to guess.

Web servers like Nginx and Apache configure this with virtual host blocks. Each block specifies a server name and what to do when requests arrive for that name. Wildcards work too: *.example.com catches all subdomains with one rule.
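To illustrate the matching itself, here is a rough Go sketch of checking an incoming host against a wildcard rule; the pattern syntax mirrors the *.example.com form above rather than any particular server's implementation:

```go
package vhost

import "strings"

// matchesWildcard reports whether host matches a pattern such as
// "*.example.com". Non-wildcard patterns must match exactly.
func matchesWildcard(pattern, host string) bool {
	suffix, ok := strings.CutPrefix(pattern, "*.")
	if !ok {
		return pattern == host
	}
	rest, ok := strings.CutSuffix(host, "."+suffix)
	// Accept any non-empty subdomain; real servers differ on whether
	// deeper names like a.b.example.com should also match.
	return ok && rest != ""
}
```

Here matchesWildcard("*.example.com", "blog.example.com") returns true, so a single rule covers every subdomain.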

The HTTPS Problem

HTTPS breaks this model. The TLS handshake happens before any HTTP headers are exchanged. The server needs to present an SSL certificate, but which one? It can't read the Host header yet: the HTTP request, Host header included, won't arrive until after the handshake completes.

This is a genuine chicken-and-egg problem. The server needs to know the domain to pick the right certificate. But it can't know the domain until after the encrypted connection is established. And it can't establish the encrypted connection without picking a certificate.

Server Name Indication (SNI) solves this by bending the rules. During the TLS handshake, before encryption begins, the client whispers the domain name in plaintext. The server hears it, picks the right certificate, and proceeds with encryption. Once the secure tunnel exists, HTTP continues normally with Host header routing.
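Go's standard library exposes exactly this hook: a TLS config can pick a certificate per handshake from the client's SNI value. A minimal sketch, assuming two placeholder certificate files on disk:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Placeholder certificate files for two sites sharing one IP.
	blogCert, err := tls.LoadX509KeyPair("blog.crt", "blog.key")
	if err != nil {
		panic(err)
	}
	shopCert, err := tls.LoadX509KeyPair("shop.crt", "shop.key")
	if err != nil {
		panic(err)
	}

	cfg := &tls.Config{
		// GetCertificate runs during the handshake, before encryption:
		// hello.ServerName is the plaintext SNI value sent by the client.
		GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
			switch hello.ServerName {
			case "blog.example.com":
				return &blogCert, nil
			case "shop.example.com":
				return &shopCert, nil
			}
			// No SNI, or an unknown name: fall back to a default.
			return &blogCert, nil
		},
	}

	srv := &http.Server{
		Addr:      ":443",
		TLSConfig: cfg,
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Once the tunnel exists, routing continues by Host header.
			fmt.Fprintf(w, "served %s\n", r.Host)
		}),
	}
	// Empty cert/key paths: GetCertificate supplies certificates instead.
	srv.ListenAndServeTLS("", "")
}
```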

SNI is now universal in browsers and servers. For legacy clients that don't send it, administrators fall back to wildcard certificates or certificates listing multiple domains (Subject Alternative Names). Otherwise, those clients receive the server's default certificate and see a security warning when it doesn't match the domain they requested.

IP-Based Virtual Hosting

The alternative is giving each website its own IP address. The server binds multiple IPs to one network interface, all listening on the same port. Routing happens by destination IP, no headers required.
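In Go terms, that means one listener per address; the 192.0.2.x addresses below are documentation placeholders standing in for IPs actually bound to the interface:

```go
package main

import (
	"fmt"
	"net/http"
)

func site(name string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "%s\n", name)
	})
}

func main() {
	// Each site binds its own IP; both use port 80. The kernel delivers
	// each connection to the listener matching the destination address,
	// so no Host header or SNI is needed.
	go http.ListenAndServe("192.0.2.10:80", site("site A"))
	http.ListenAndServe("192.0.2.11:80", site("site B"))
}
```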

This works with any protocol, not just HTTP. It supports ancient clients that don't send Host headers or SNI. Some compliance requirements mandate dedicated IPs for isolation.

The cost is obvious: IPv4 addresses are exhausted. Getting multiple addresses is expensive or impossible in many regions. IPv6 changes this equation—addresses are effectively unlimited—but IPv4 still dominates most networks.

Reverse Proxies and Load Balancers

Modern infrastructure adds another layer. Reverse proxies like Nginx, HAProxy, or Traefik sit in front of application servers, receiving all traffic and routing it based on rules far more sophisticated than simple Host header matching.

These systems route by URL path (/api/* goes to API servers, /static/* goes to a CDN), by HTTP method, by custom headers, even by request body content. A single load balancer on port 443 can direct traffic to dozens of different backend services.
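As a sketch of path-based routing, Go's httputil.ReverseProxy can stand in for a full proxy; the backend addresses here are invented placeholders on a private network:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func proxyTo(rawURL string) *httputil.ReverseProxy {
	target, err := url.Parse(rawURL)
	if err != nil {
		panic(err)
	}
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	// Placeholder backends on the private network.
	api := proxyTo("http://10.0.0.5:3000")
	static := proxyTo("http://10.0.0.6:8080")
	app := proxyTo("http://10.0.0.7:9000")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		switch {
		case strings.HasPrefix(r.URL.Path, "/api/"):
			api.ServeHTTP(w, r)
		case strings.HasPrefix(r.URL.Path, "/static/"):
			static.ServeHTTP(w, r)
		default:
			app.ServeHTTP(w, r)
		}
	})
	// One port, many backends. (TLS termination is covered next.)
	http.ListenAndServe(":8080", nil)
}
```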

They also handle SSL termination—decrypting HTTPS at the edge and forwarding plain HTTP to backends on the private network. This centralizes certificate management and reduces load on application servers. The proxy reads SNI to pick the right certificate, then passes the decrypted request (Host header intact) to whatever backend should handle it.
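Combining the two previous sketches gives a toy TLS-terminating proxy: decrypt at the edge, forward plain HTTP inward. Go's NewSingleHostReverseProxy rewrites the destination but does not rewrite the Host header, so the backend still sees the original name. The certificate files and backend address are assumptions:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Plain-HTTP backend on the private network (placeholder address).
	backend, _ := url.Parse("http://10.0.0.5:3000")
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// TLS ends here: the proxy decrypts, then forwards unencrypted HTTP
	// with the client's Host header intact, so the backend can still do
	// name-based routing.
	http.ListenAndServeTLS(":443", "edge.crt", "edge.key", proxy)
}
```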

In containerized environments, this routing updates automatically. As services scale up or down, orchestrators like Kubernetes publish the changes through service discovery, and the load balancer configuration follows. No manual changes required.
