
Serverless computing is a deal: you give up control over how your code runs in exchange for not having to care.

You write a function. You deploy it. When something triggers it—an HTTP request, a file upload, a scheduled time—the cloud provider wakes up a computer somewhere, runs your code, and bills you for the milliseconds it took. Then that computer goes back to doing whatever cloud computers do when you're not looking.

Despite the name, servers absolutely exist. You just never see them, never patch them, never wake up at 3am because one of them caught fire. That's the deal.

The Evolution of Not Caring

Serverless is the latest step in a long retreat from infrastructure:

Physical servers: You bought them, racked them, replaced failed hard drives, and hoped the cooling held.

Virtual machines: Someone else owned the hardware. You still managed the operating system, patched security vulnerabilities, and provisioned capacity.

Containers: You packaged your application and its dependencies together. Still had to orchestrate where they ran and how many to run.

Serverless: You write functions. Everything else is someone else's problem.

Each step trades control for simplicity. Serverless takes this to its logical conclusion.

How It Actually Works

You write a function in a language your platform supports—JavaScript, Python, Go, Java. You deploy it. Then you wait.
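A minimal sketch of what "a function" means here, in Python using AWS Lambda's handler shape (the query-string handling assumes an API Gateway proxy event):

```python
# A minimal serverless function: the platform calls lambda_handler with
# the triggering event and a context object; the return value becomes the
# HTTP response when an API Gateway proxy integration is the trigger.
import json

def lambda_handler(event, context):
    # Query parameters arrive inside the event; default if none were sent.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```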

When an event occurs—someone hits your API endpoint, a file lands in storage, a timer fires—the platform:

  1. Finds or creates a compute environment
  2. Loads your code
  3. Runs it
  4. Captures the response
  5. Tears everything down (eventually)

You pay for step 3: the milliseconds your code actually runs. Not for the waiting. Not for the infrastructure sitting idle. Just the work.

The math: If your function runs for 200 milliseconds and uses 512MB of memory, you're billed for 0.1 GB-seconds. A million invocations in a month comes to 100,000 GB-seconds, a quarter of AWS Lambda's 400,000 GB-second monthly free tier.
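The same arithmetic as a sketch (the free-tier figure is AWS's published 400,000 GB-seconds per month; check current pricing):

```python
# Back-of-the-envelope billing for the example above.
memory_gb = 512 / 1024            # 0.5 GB allocated
duration_s = 0.200                # 200 ms per invocation
invocations = 1_000_000           # per month

gb_seconds = memory_gb * duration_s * invocations
print(gb_seconds)                 # 100000.0 GB-seconds

free_tier = 400_000               # AWS Lambda's monthly free GB-seconds
print(gb_seconds < free_tier)     # True: a quarter of the free tier
```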

When Serverless Shines

Sporadic workloads are serverless's sweet spot. A function that processes uploaded images might run a thousand times during a product launch, then twice a day for weeks. With servers, you pay for 24/7 availability. With serverless, you pay for the actual processing.

Event-driven architectures map naturally to serverless. File uploaded? Function runs. Payment processed? Function runs. Database row changed? Function runs. The trigger-and-respond pattern is what serverless was built for.
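A sketch of the "file uploaded, function runs" case, assuming AWS's standard S3 event format:

```python
# Triggered whenever an object lands in a watched bucket. The event lists
# one or more records describing the bucket and the object key.
import urllib.parse

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Keys arrive URL-encoded (spaces become '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"processing s3://{bucket}/{key}")
        # ...resize the image, parse the CSV, etc.
```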

API backends without traffic predictions work well. Your endpoint might get ten requests today and ten thousand tomorrow. Serverless scales from zero to whatever, automatically.

Glue code connecting services doesn't deserve its own server. A function that reformats data from one API before sending it to another is a perfect serverless citizen.
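A glue-code sketch (both URLs are hypothetical): fetch from one API, reshape the data, forward it to another. That's the function's entire job.

```python
# Pull JSON from one service, reshape it, and hand it to another.
import json
import urllib.request

def lambda_handler(event, context):
    # Fetch from the source API (hypothetical endpoint).
    with urllib.request.urlopen("https://api.example.com/orders") as resp:
        orders = json.load(resp)

    # Reformat into the shape the destination expects.
    payload = [{"id": o["id"], "total": o["amount"]} for o in orders]

    # Forward to the destination API (also hypothetical).
    req = urllib.request.Request(
        "https://hooks.example.com/ingest",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
    return {"forwarded": len(payload)}
```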

Scheduled tasks replace cron jobs running on servers you have to maintain. The function wakes up, does its work, and disappears.

The Cold Start Problem

Here's what the deal costs you: cold starts.

Your function sits frozen in digital amber until someone needs it. Then it scrambles awake—loading the runtime, initializing your code, establishing connections. This takes time. Hundreds of milliseconds for lightweight functions. Seconds for heavier ones.

For many applications, this latency is invisible. For latency-sensitive applications—real-time gaming, financial trading, interactive UIs expecting instant response—it's a dealbreaker.

Platforms offer workarounds. Provisioned concurrency keeps functions warm (but you pay for the warmth). Smaller deployment packages initialize faster. But the fundamental tension remains: serverless optimizes for cost efficiency, not consistent latency.
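One mitigation you control directly: do expensive setup at module scope, where it runs once per cold start and gets reused by every warm invocation in that environment. A sketch, assuming boto3 and a hypothetical DynamoDB table:

```python
import boto3

# Module scope runs once per cold start; warm invocations reuse it.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")          # hypothetical table name

def lambda_handler(event, context):
    # Warm invocations skip straight to the work, no reconnecting.
    item = table.get_item(Key={"id": event["id"]}).get("Item")
    return item or {"error": "not found"}
```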

What You Give Up

Long-running processes: Most platforms cap execution time; AWS Lambda, for example, stops at 15 minutes. If your job takes longer, serverless isn't your model.

State: Functions are stateless by design. They wake up, run, forget everything. Persistent state lives in external databases and storage.

Control: You can't tune the operating system, install custom software, or optimize the hardware. You get what the platform gives you.

Portability: Serverless platforms use proprietary APIs, deployment models, and event formats. Code written for AWS Lambda doesn't run on Azure Functions without changes. You're not locked in—but migration isn't free.
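The lock-in shows up in the handler shape itself. Here's the same trivial endpoint written for two platforms (a sketch; the Azure version uses the v1 Python programming model and needs the azure-functions package):

```python
# AWS Lambda behind API Gateway:
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "hello"}

# Azure Functions (v1 Python programming model):
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse("hello", status_code=200)
```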

Debuggability: Distributed functions are harder to trace than monolithic applications. When something fails, the problem might span multiple functions, event queues, and external services.

The Major Platforms

AWS Lambda dominates. It integrates with everything AWS offers—API Gateway, S3, DynamoDB, EventBridge—and supports the most languages. If you're in AWS, Lambda is the default choice.

Google Cloud Functions offers similar capabilities in Google's ecosystem. Tight BigQuery and Firebase integration.

Azure Functions serves Microsoft shops with .NET-first support and Visual Studio integration.

Cloudflare Workers runs at the edge—your code executes in data centers near your users rather than in a central region. Lower latency, different programming model.

Vercel and Netlify Functions focus on web developers. Deploy a Next.js app or a static site and add backend functions without leaving the platform.

Serverless vs. Containers

This isn't either/or. Many teams use both.

Containers give you control. You define the environment, manage scaling (or let Kubernetes do it), and pay for running time whether requests arrive or not. Better for sustained workloads, complex applications, and teams that need specific configurations.

Serverless gives you simplicity. You define the function, let the platform handle everything else, and pay only for execution. Better for event-driven workloads, variable traffic, and teams that want to focus purely on code.

The decision point: predictability. If you know your workload runs constantly and can estimate capacity, containers often cost less. If traffic is spiky, sporadic, or unpredictable, serverless wins.
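A rough break-even sketch under assumed prices (illustrative figures only, and both drift; the per-request workload parameters come from the earlier billing example):

```python
# Serverless vs. always-on: where the lines cross depends on volume.
LAMBDA_GB_SECOND = 0.0000166667     # USD per GB-second, illustrative
LAMBDA_REQUEST = 0.20 / 1_000_000   # USD per request, illustrative
VM_HOURLY = 0.021                   # USD, a small always-on instance

def lambda_cost(requests, duration_s=0.2, memory_gb=0.5):
    compute = requests * duration_s * memory_gb * LAMBDA_GB_SECOND
    return compute + requests * LAMBDA_REQUEST

vm_cost = VM_HOURLY * 730           # hours in a month

for n in (100_000, 1_000_000, 10_000_000, 100_000_000):
    print(f"{n:>11,} req/mo: lambda ${lambda_cost(n):8.2f}  vm ${vm_cost:.2f}")
```

At low volumes serverless is nearly free; at sustained high volumes the always-on machine pulls ahead.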

Writing Good Serverless Functions

Keep functions small. One function, one job. This makes them easier to test, faster to deploy, and cheaper to run.

Make functions idempotent. Running the same function twice with the same input should produce the same result. Cloud platforms retry failed invocations—your code needs to handle this gracefully.
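One common idempotency pattern, sketched with DynamoDB and boto3 (the table name and event shape are hypothetical): record each event ID with a conditional write, and short-circuit when the write reveals a retry.

```python
import boto3
from botocore.exceptions import ClientError

# Module-scope client, reused across warm invocations.
table = boto3.resource("dynamodb").Table("processed-events")  # hypothetical

def lambda_handler(event, context):
    try:
        # Succeeds only the first time this event ID is seen.
        table.put_item(
            Item={"event_id": event["id"]},
            ConditionExpression="attribute_not_exists(event_id)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return {"status": "duplicate, skipped"}   # a platform retry
        raise
    # First delivery: safe to do the real work.
    return {"status": "processed"}
```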

Minimize dependencies. Every library you include extends cold start time. Include what you need, nothing more.

Externalize state. Use managed databases, object storage, and caches. A warm environment may happen to retain memory between invocations, but the platform can discard it at any moment; never rely on it.

Instrument everything. Distributed systems fail in distributed ways. Logging, tracing, and monitoring are essential, not optional.
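A minimal instrumentation sketch: log structured JSON keyed by the platform's request ID, so one invocation's lines can be correlated across functions and services.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # aws_request_id is supplied by the Lambda runtime's context object.
    logger.info(json.dumps({
        "request_id": context.aws_request_id,
        "event_keys": sorted(event.keys()),
    }))
    # ...the actual work...
```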

The Real Question

Serverless isn't about servers. It's about what you want to think about.

If infrastructure is interesting to you—if you want to optimize, tune, and control—serverless will frustrate you. It hides the machine.

If infrastructure is a distraction—if you want to write code that solves problems and let someone else worry about keeping it running—serverless might be exactly the deal you want.

The servers are still there. You just don't have to care.
