

When you click a button, something has to think about what to do next. The question is: where does that thinking happen?

For most of the Internet's history, the answer was simple—in a data center, probably in Virginia or Oregon. Your click travels there, a server thinks, and the answer travels back. This works. It's how we built the modern web.

But here's the thing: light has a speed limit, and your users can feel it.

The Speed of Light Problem

A round-trip from San Francisco to Virginia takes 60-80 milliseconds—and that's just the travel time, before any actual computation. For users in Sydney accessing American servers, latency can exceed 200 milliseconds.
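Those numbers are grounded in physics. Light in optical fiber travels at roughly two-thirds the speed of light in vacuum, about 200,000 km/s, which puts a hard floor under any round trip. A back-of-the-envelope sketch (the distances are approximate great-circle figures):

```typescript
// Minimum round-trip time over fiber, ignoring routing detours, queuing,
// and processing delay. Light in fiber covers ~200,000 km/s = 200 km/ms.
const FIBER_KM_PER_MS = 200;

function minRoundTripMs(oneWayKm: number): number {
  return (2 * oneWayKm) / FIBER_KM_PER_MS;
}

console.log(minRoundTripMs(3900));  // San Francisco -> Virginia: 39 ms floor
console.log(minRoundTripMs(15000)); // Sydney -> Virginia: 150 ms floor
```

Real cables don't follow great circles, and routers add delay at every hop, which is why observed latencies sit well above these theoretical minimums.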

200 milliseconds doesn't sound like much. But humans start to perceive delays above roughly 100 milliseconds as lag. Video calls feel unnatural. Games become unplayable. Even ordinary browsing feels sluggish when every interaction waits on a distant server.

This isn't a software problem you can optimize away. It's physics. The only solution is to move the computation closer to the user.

That's edge computing: instead of centralizing all your thinking in one place, you distribute it to locations around the world, near the people who need it.

What Actually Happens at the Edge

Edge computing isn't one thing—it's a spectrum of computation happening outside centralized data centers.

Content delivery is the oldest form. CDNs have cached static files at edge locations for decades. When you load an image, it comes from a server nearby rather than crossing oceans. Simple, effective, limited to static content.

Code execution is the modern edge. Platforms like Cloudflare Workers, Vercel Edge Functions, and Fastly Compute let you run actual code—JavaScript, TypeScript, WebAssembly—at edge locations worldwide. Your function executes in whatever data center is closest to the user making the request.
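Concretely, an edge function is usually just a request handler. A minimal sketch in the Workers-style module shape (the route and messages are illustrative; in a real Worker this object would be the module's default export):

```typescript
// A minimal edge function: inspect the request, respond directly from
// the edge without ever contacting an origin server.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      return new Response("hello from the edge", { status: 200 });
    }
    return new Response("not found", { status: 404 });
  },
};
```

The platform runs this same handler in every edge location; which copy executes depends only on where the request arrives.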

Request processing happens before traffic reaches your origin servers. Authentication, rate limiting, URL routing, bot detection—all can run at the edge, protecting and optimizing traffic before it travels to your main infrastructure.

Data filtering reduces what needs to travel. An IoT deployment with thousands of sensors can aggregate data at edge nodes, sending only relevant summaries to the cloud instead of raw streams.
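As a sketch of that aggregation step, an edge node might collapse a batch of raw readings into one summary per sensor before anything crosses the wide-area network (the sensor fields here are hypothetical):

```typescript
interface Reading {
  sensorId: string;
  celsius: number;
}

interface Summary {
  count: number;
  min: number;
  max: number;
  mean: number;
}

// Reduce a raw batch to one summary per sensor, so only the summaries
// travel to the cloud instead of the full stream.
function summarize(readings: Reading[]): Map<string, Summary> {
  const acc = new Map<string, { count: number; min: number; max: number; sum: number }>();
  for (const r of readings) {
    const s = acc.get(r.sensorId) ?? { count: 0, min: Infinity, max: -Infinity, sum: 0 };
    s.count += 1;
    s.min = Math.min(s.min, r.celsius);
    s.max = Math.max(s.max, r.celsius);
    s.sum += r.celsius;
    acc.set(r.sensorId, s);
  }
  const out = new Map<string, Summary>();
  for (const [id, s] of acc) {
    out.set(id, { count: s.count, min: s.min, max: s.max, mean: s.sum / s.count });
  }
  return out;
}
```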

When Edge Makes Sense

Edge computing shines in specific scenarios:

Personalization without round-trips. An edge function reads a cookie, determines the user's language or location, and serves customized content—all in single-digit milliseconds, without touching your origin servers.
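A sketch of that cookie-then-header decision, with the cookie name and supported languages as illustrative assumptions:

```typescript
// Choose a response language: an explicit "lang" cookie wins, then the
// Accept-Language header, then a default. All of this runs at the edge
// before the request would ever reach an origin.
function pickLanguage(
  cookieHeader: string,
  acceptLanguage: string,
  supported: string[] = ["en", "de", "fr"],
): string {
  const m = /(?:^|;\s*)lang=([a-z]{2})/.exec(cookieHeader);
  if (m && supported.includes(m[1])) return m[1];
  for (const part of acceptLanguage.split(",")) {
    const tag = part.trim().slice(0, 2).toLowerCase();
    if (supported.includes(tag)) return tag;
  }
  return "en";
}

pickLanguage("theme=dark; lang=de", "fr-FR,fr;q=0.9"); // → "de"
pickLanguage("", "fr-FR,fr;q=0.9");                    // → "fr"
```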

Authentication at the perimeter. Verify JWT tokens, check API keys, enforce rate limits at the edge. Invalid requests never reach your backend.
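Rate limiting at the perimeter is often a token bucket per caller. The in-memory version below is a sketch of the policy only; in production the counters would live in edge storage, since each request may land on a different node:

```typescript
// Per-key token bucket: each key holds up to `capacity` tokens, refilled
// continuously at `refillPerSec`. A request is allowed if a token remains.
class RateLimiter {
  private buckets = new Map<string, { tokens: number; last: number }>();

  constructor(private capacity: number, private refillPerSec: number) {}

  allow(key: string, nowMs: number): boolean {
    const b = this.buckets.get(key) ?? { tokens: this.capacity, last: nowMs };
    // Refill proportionally to elapsed time, capped at capacity.
    b.tokens = Math.min(this.capacity, b.tokens + ((nowMs - b.last) / 1000) * this.refillPerSec);
    b.last = nowMs;
    const allowed = b.tokens >= 1;
    if (allowed) b.tokens -= 1;
    this.buckets.set(key, b);
    return allowed;
  }
}
```

A rejected request costs the edge a few microseconds and never consumes backend capacity.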

Image optimization on demand. Request an image, and edge code resizes it for the requesting device, converts formats, applies compression—generated instantly near the user.

A/B testing without infrastructure changes. Edge functions route users to different experiences based on experiments, transparently, without modifying your origin.
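The usual trick is to hash a stable user ID into a variant, so every edge node makes the same assignment without any coordination. A sketch using FNV-1a (the variant names are illustrative):

```typescript
// Deterministically map a user ID to an experiment variant. The same ID
// always hashes to the same variant, on every edge node.
function variantFor(userId: string, variants: string[] = ["control", "treatment"]): string {
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < userId.length; i++) {
    h ^= userId.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept as unsigned 32-bit
  }
  return variants[h % variants.length];
}
```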

Geofencing. Restrict content by location, enforced by edge nodes that already know where the request originated.

The pattern: anything that benefits from low latency, doesn't require heavy computation, and can work with cached or minimal data is a candidate for the edge.

The Tradeoffs

Edge computing isn't free, and it's not universally better.

Statelessness is required. Each request might hit a different edge node. You can't rely on in-memory state. Edge-friendly applications are stateless or use distributed storage designed for eventual consistency.

Computation is constrained. Edge functions run with limited CPU time and memory. Heavy processing—machine learning training, complex data transformations, video encoding—belongs in the cloud where resources are abundant.

Database access defeats the purpose. If your edge function queries a database in Virginia, you've added latency instead of removing it. Edge computing works best when data is cached locally or doesn't require database access at all.

Consistency is eventual. Edge storage systems prioritize availability and speed over strong consistency. Data replicated globally may be slightly out of sync. For many use cases this is fine; for others it's disqualifying.

Debugging is harder. Your code runs in dozens of locations simultaneously. Logs are distributed. Reproducing issues that only occur at specific edge nodes requires different tooling and thinking.

Edge and Cloud Together

The best architectures use both.

Edge handles the immediate: request validation, authentication, content delivery, personalization, lightweight transformations. These run in milliseconds, close to users.

Cloud handles the complex: database operations, business logic, heavy computation, long-running processes. These have the resources they need, even if latency is higher.

The edge becomes a smart layer between users and your infrastructure—fast, distributed, handling what it can and forwarding what it can't.

The Storage Question

Computation at the edge often needs data. Several patterns have emerged:

Key-value stores replicate data globally. Cloudflare Workers KV, for example, provides fast reads everywhere with eventual consistency. Good for configuration, user preferences, session data.

Caching stores computed results at edge nodes. Database query results, API responses, rendered HTML—anything expensive to generate but safe to serve slightly stale.
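The policy behind "expensive to generate, safe to serve slightly stale" is just a time-to-live. Edge platforms provide this for you (for example via an HTTP cache API); the sketch below only illustrates the idea:

```typescript
// A tiny TTL cache: entries are served until their deadline passes,
// then treated as misses and regenerated.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, nowMs: number): V | undefined {
    const e = this.entries.get(key);
    if (!e || e.expiresAt <= nowMs) return undefined; // miss or expired
    return e.value;
  }

  set(key: string, value: V, nowMs: number): void {
    this.entries.set(key, { value, expiresAt: nowMs + this.ttlMs });
  }
}
```

The TTL is the knob that trades freshness for origin load: a 60-second TTL means your origin renders each page at most once a minute per edge location, at the cost of users seeing content up to a minute old.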

Edge databases are emerging. Globally distributed, with read replicas at edge locations, they offer low-latency reads everywhere while routing writes to maintain consistency.

The common constraint: you're trading consistency for speed. Edge storage systems accept that data might be milliseconds or seconds out of date in exchange for being everywhere, fast.

Security at the Edge

Edge nodes are inherently exposed—they're the first thing Internet traffic touches. This requires careful thinking:

Input validation is critical. Edge code processes untrusted input directly. Validate everything.

Secrets need protection. API keys and credentials must be encrypted and access-controlled. Edge platforms provide secure environment variables, but you must use them correctly.

Simplicity reduces risk. The less code running at the edge, the smaller the attack surface. Keep edge functions focused and minimal.

Data residency matters. Some data can't leave certain jurisdictions. Edge computing must respect these constraints—not all edge locations may be valid for all data.

What's Coming

Edge computing is still maturing. Several trends are shaping its future:

WebAssembly lets edge platforms run any language that compiles to Wasm, not just JavaScript.

Edge ML is making inference models viable at edge locations—low-latency AI features without cloud round-trips.

Stateful edge is emerging through distributed databases and state management systems designed for global deployment.

5G convergence pairs ultra-low-latency networks with nearby edge compute, enabling applications—autonomous vehicles, industrial automation—that need single-digit millisecond responses.

The trajectory is clear: more computation moving closer to where it's needed, with fewer constraints on what's possible at the edge.

The Core Insight

Edge computing is ultimately about a simple truth: proximity matters.

No amount of optimization can overcome the speed of light. If your server is far away, there's a floor on how fast you can respond. The only way through is moving the computation.

The question isn't whether to use edge computing—it's which parts of your application benefit from being everywhere at once.

