502 Bad Gateway

A 502 Bad Gateway is the Internet's way of saying: "Don't blame me—blame the server behind me."

When you see a 502, the server you reached (the gateway) is working perfectly. It received your request, tried to forward it to another server, and got back garbage—or nothing usable. The gateway is just the messenger delivering bad news about someone else.

What's Actually Happening

Modern web infrastructure rarely involves a single server. Your request typically passes through layers:

You → Load Balancer → Application Server → Database

A 502 occurs when one of these middle servers (the gateway) can't get a valid response from the server behind it (the upstream). The gateway tried. It failed. It's telling you.

HTTP/1.1 502 Bad Gateway

That's it. No elaborate explanation—just an admission that the chain broke somewhere downstream.

Why Upstreams Fail

The upstream crashed. The application server was running, received the forwarded request, and died mid-response. The gateway got half an answer or a connection reset.

The upstream isn't running. Someone deployed broken code, the process ran out of memory, or the server rebooted. The gateway reaches out and nobody's home.

The upstream is speaking gibberish. The server responded, but not with valid HTTP. Maybe it dumped a stack trace directly to the socket before writing any headers. Maybe a misconfigured process sent malformed or truncated headers. The gateway can't parse the response as HTTP.

The connection was refused. The upstream server exists but isn't listening on the expected port. The gateway knocks on door 3000, but nobody's listening there.

Network problems. Firewalls, DNS failures, or network partitions prevent the gateway from reaching the upstream at all.

502 vs. Other 5xx Errors

These get confused constantly. Here's the distinction:

500 Internal Server Error: The server you reached broke. It's the server's own fault.

502 Bad Gateway: The server you reached is fine, but the server behind it broke or responded nonsensically.

503 Service Unavailable: The server is intentionally refusing requests—usually because it's overloaded or in maintenance mode. This is deliberate, not a failure.

504 Gateway Timeout: The server behind the gateway didn't respond in time. A 502 means it responded with garbage; a 504 means it responded with silence.

The difference between 502 and 504 is the difference between someone speaking nonsense and someone not speaking at all.
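The distinctions above condense into a small decision table. A toy sketch (the outcome labels are invented for illustration; the status codes follow the definitions above):

```javascript
// Toy lookup: which 5xx code fits which failure. The outcome names are
// invented labels, not real API values.
function statusFor(outcome) {
  switch (outcome) {
    case 'connection-refused':
    case 'connection-reset':
    case 'invalid-response':
      return 502; // the server behind the gateway broke or spoke garbage
    case 'upstream-silent':
      return 504; // the server behind the gateway never answered in time
    case 'shedding-load':
      return 503; // the server itself is deliberately refusing requests
    default:
      return 500; // the server's own unexpected failure
  }
}

console.log(statusFor('invalid-response')); // 502
console.log(statusFor('upstream-silent'));  // 504
```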

Finding the Problem

When you see a 502, the gateway is fine. Look behind it.

Is the upstream running?

systemctl status your-app-service
netstat -tulpn | grep :3000

If the service is down or the port isn't listening, you've found your problem.

Can you reach the upstream directly?

curl -v http://localhost:3000/health

If this works but the gateway returns 502, the problem is in the gateway configuration—wrong port, wrong hostname, wrong protocol.

What do the logs say?

Gateway logs tell you what went wrong:

tail -f /var/log/nginx/error.log

Common messages:

  • upstream prematurely closed connection — The upstream crashed mid-response
  • no live upstreams — All backend servers are marked as failed
  • upstream sent invalid header — The response wasn't valid HTTP

Application logs tell you why the upstream failed:

tail -f /var/log/your-app/app.log

Look for crashes, out-of-memory errors, or unhandled exceptions.

Is something blocking the connection?

telnet backend-server 3000
nc -zv backend-server 3000

If these fail, check firewalls and network configuration.

Preventing 502 Errors

Run multiple upstreams

A single backend server is a single point of failure:

upstream backend {
    server backend1.example.com:3000;
    server backend2.example.com:3000;
    server backend3.example.com:3000;
}

If one fails, the gateway routes to another.

Configure health checks

Don't send traffic to dead servers:

upstream backend {
    server backend1.example.com:3000 max_fails=3 fail_timeout=30s;
    server backend2.example.com:3000 max_fails=3 fail_timeout=30s;
}

After 3 failed attempts within a 30-second window, nginx skips that server for the next 30 seconds. Note these are passive checks: nginx only notices failures on real traffic, so some requests still hit a dying server before it's marked down.

Enable automatic failover

location /api {
    proxy_pass http://backend;
    proxy_next_upstream error timeout http_502;
    proxy_next_upstream_tries 2;
}

If the first upstream returns a 502, try the next one automatically.

Set reasonable timeouts

proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;

Too short and you'll get false 502s during slow responses. Too long and users wait forever.

Handling 502s as a Client

502 errors are often transient—a server restarting, a brief network hiccup. Retry with exponential backoff:

async function fetchWithRetry(url, maxRetries = 3) {
    for (let attempt = 0; attempt < maxRetries; attempt++) {
        const response = await fetch(url);

        // Retry only on 502, and only while attempts remain.
        if (response.status === 502 && attempt < maxRetries - 1) {
            const waitTime = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s, ...
            await new Promise(resolve => setTimeout(resolve, waitTime));
            continue;
        }

        // Anything else -- including a 502 on the final attempt -- is returned as-is.
        return response;
    }
}

Wait 1 second, then 2, then 4. Often the problem resolves itself.

The Essential Point

A 502 Bad Gateway is not the gateway's fault. The gateway is working correctly—it's telling you that something behind it isn't. When you see a 502, look past the messenger to find the real problem: a crashed application server, a network partition, or a backend that's returning something other than valid HTTP.

The gateway is just doing its job: faithfully reporting that the server behind it has failed.
