
Functions as a Service flips the traditional model of computing on its head.

Traditionally, you provision servers first, then deploy code onto them. The servers sit there—running, costing money—whether anyone's using them or not. FaaS inverts this: you write code first, and infrastructure materializes around it exactly when needed, then vanishes.

Your code exists in a superposition—potentially everywhere, actually nowhere—until an event collapses it into execution.

What FaaS Actually Is

You write small, focused functions. Each does one thing: process an image, validate data, send an email, update a record. You deploy these functions to a platform—AWS Lambda, Google Cloud Functions, Azure Functions, Cloudflare Workers.

Then you wait.

When an event occurs—an HTTP request, a file upload, a database change, a scheduled time—the platform receives it and invokes your function. The platform provisions resources, loads your code, executes it, returns the result, and tears everything down. Between invocations, nothing runs. Nothing costs money.

One event triggers one function instance. A thousand simultaneous events trigger a thousand instances. You don't configure this. You don't think about it. It happens.
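
To make that concrete, here is a minimal sketch of what one such function can look like in Python, written in the style of AWS Lambda's handler convention (an entry point that receives an event and a context object). The event field used here is an illustrative assumption, not a real platform schema.

    import json

    def handler(event, context):
        # The platform calls this once per event; 'event' carries the trigger's
        # payload: an HTTP request, a queue message, a file notification.
        name = event.get("name", "world")   # hypothetical field

        # Do the one focused thing this function exists for.
        message = f"Hello, {name}"

        # Whatever you return goes back to the caller or the triggering service.
        return {"statusCode": 200, "body": json.dumps({"message": message})}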

The Cost of Not Existing

This model has an interesting trade-off called cold starts.

When your function hasn't run recently, the platform must conjure it from nothing. It provisions a container, loads your runtime, initializes your code, and only then executes your logic. This takes time—100 milliseconds to several seconds depending on the language and dependencies.

If your function runs frequently, the platform keeps it "warm"—ready to execute immediately. But if traffic is sporadic, each request might hit a cold start.
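
One practical consequence: code at module level runs only during a cold start, and whatever it sets up is reused for as long as the instance stays warm. A rough Python sketch of that pattern, with hypothetical configuration values:

    import time

    # Module-level code runs once, during the cold start, when the platform
    # loads this file into a fresh instance.
    print("cold start: loading config and opening connections")
    CONFIG = {"table": "orders"}      # hypothetical configuration
    STARTED_AT = time.time()

    def handler(event, context):
        # On a warm invocation only this body runs; CONFIG and STARTED_AT
        # already exist from the earlier cold start, so reuse is free.
        return {
            "instance_age_seconds": round(time.time() - STARTED_AT, 1),
            "table": CONFIG["table"],
        }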

You're paying in latency for the efficiency of not paying for idle servers. For most workloads, this trade-off is overwhelmingly favorable. For real-time applications where every millisecond matters, it's a problem.

What Functions Can and Cannot Do

FaaS functions have constraints that shape what you can build:

Execution time limits. Most platforms cap runtime at 5-15 minutes. Your function must complete its work within that window. Long-running processes—video encoding, large data migrations, complex ML training—don't fit.

Statelessness. Functions don't remember anything between invocations. No in-memory state persists. Any data you need to keep lives in external services: databases, caches, storage buckets.

Resource boundaries. Memory typically ranges from 128MB to 10GB. CPU scales proportionally with memory. Local disk is ephemeral and limited. Heavy computation may not fit.

These constraints aren't bugs—they're features. They force you to write small, focused, composable functions. They push state to purpose-built storage systems. They make your code naturally scalable.
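
As a small illustration of the statelessness constraint: a counter cannot live in a variable inside the function, because the next event may land on a fresh instance. It has to live in an external store. A sketch using DynamoDB through boto3 (the AWS SDK for Python), assuming a hypothetical table named "counters" already exists:

    import boto3

    # State lives in DynamoDB, not in the function instance.
    table = boto3.resource("dynamodb").Table("counters")   # hypothetical table

    def handler(event, context):
        # Atomically increment a counter stored outside the function, so the
        # count survives no matter which instance handles the next event.
        result = table.update_item(
            Key={"name": "page_views"},
            UpdateExpression="ADD hits :one",
            ExpressionAttributeValues={":one": 1},
            ReturnValues="UPDATED_NEW",
        )
        return {"views": int(result["Attributes"]["hits"])}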

Where FaaS Shines

HTTP APIs. Functions behind API gateways handle web requests (a sketch of such a handler appears at the end of this section). Traffic spikes? More instances spawn. Traffic dies? Everything scales to zero. You pay for actual requests, not potential capacity.

Event processing. A file lands in storage—a function processes it. A message arrives in a queue—a function handles it. A database record changes—a function reacts. The event-driven model maps naturally to FaaS.

Scheduled tasks. Replace cron jobs running on servers you have to maintain. A function can run every hour, every day, every minute—the platform handles scheduling.

Integration glue. Systems need to talk to each other. FaaS functions are perfect for translating between APIs, transforming data formats, routing events to destinations.

Experimentation. Want to try an idea? Write a function, deploy it, see if it works. No infrastructure investment. If it fails, delete it. If it succeeds, it scales automatically.
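
To illustrate the HTTP case above, here is a rough sketch of a handler sitting behind an API gateway, shaped roughly like the event and response of AWS API Gateway's Lambda proxy integration; treat the exact field names as assumptions to check against your platform's documentation.

    import json

    def handler(event, context):
        # The gateway hands the HTTP request to the function as a structured event.
        params = event.get("queryStringParameters") or {}
        user_id = params.get("user_id")

        if user_id is None:
            return {"statusCode": 400,
                    "body": json.dumps({"error": "user_id is required"})}

        # A real API would look the user up in a database; hard-coded here.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"user_id": user_id, "plan": "free"}),
        }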

Where FaaS Struggles

Long-running processes. Anything exceeding time limits needs a different approach—containers, VMs, or breaking work into chunks that functions can process incrementally.

Latency-sensitive applications. If cold starts are unacceptable, you need to keep functions warm (which costs money and complexity) or use a different model.

Consistently high traffic. When you're running at scale 24/7, the per-execution pricing model can exceed the cost of dedicated resources. At some volume, it's cheaper to pay for servers that run constantly (a rough back-of-envelope comparison appears at the end of this section).

Complex applications with many moving parts. A hundred functions calling each other becomes hard to reason about, debug, and monitor. The architectural complexity can outweigh the operational simplicity.
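
To make the break-even point concrete, here is a back-of-envelope sketch. Every figure in it is a made-up placeholder rather than real provider pricing, so substitute your own request volume, per-request and per-GB-second rates, and server costs.

    # All figures below are placeholders, not actual provider pricing.
    requests_per_month = 1_000_000_000        # sustained high traffic
    avg_duration_seconds = 0.15
    memory_gb = 0.5

    price_per_million_requests = 0.20         # hypothetical
    price_per_gb_second = 0.0000167           # hypothetical
    dedicated_servers_per_month = 600.00      # hypothetical always-on capacity

    faas = (requests_per_month / 1_000_000) * price_per_million_requests \
         + requests_per_month * avg_duration_seconds * memory_gb * price_per_gb_second

    print(f"FaaS:      ${faas:,.0f}/month")   # roughly $1,450 with these numbers
    print(f"Dedicated: ${dedicated_servers_per_month:,.0f}/month")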

The Platforms

AWS Lambda is the dominant player—mature, extensively integrated with AWS services, large community, well-documented.

Google Cloud Functions integrates tightly with Google's ecosystem and offers competitive pricing.

Azure Functions fits naturally in Microsoft environments with strong enterprise tooling.

Cloudflare Workers runs at edge locations globally, offering ultra-low latency by executing code close to users.

Self-hosted options like OpenFaaS and Apache OpenWhisk let you run FaaS on your own infrastructure, useful for compliance requirements or avoiding vendor lock-in.

Developing for FaaS

Local development requires emulators or frameworks that mimic cloud platform behavior. You can't truly test FaaS code without simulating the event-driven, ephemeral execution model.

Deployment happens through platform-specific tools or frameworks like the Serverless Framework that abstract across providers.

Debugging is harder than traditional applications. Your code runs in ephemeral containers you can't SSH into. You rely on logs, distributed tracing, and careful error handling. When something fails, you're reconstructing state from telemetry, not attaching a debugger.
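
In practice that means instrumenting the function itself, because the logs are all you will have afterwards. A minimal sketch using Python's standard logging module; do_work is a hypothetical stand-in for whatever the function actually does.

    import json
    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    def handler(event, context):
        # One structured log line per invocation: this is what you will be
        # reading later instead of attaching a debugger.
        logger.info(json.dumps({"received_event": event}))
        try:
            result = do_work(event)
            logger.info(json.dumps({"status": "ok"}))
            return result
        except Exception:
            # Record the full traceback before re-raising, so the platform
            # still marks the invocation as failed.
            logger.exception("unhandled error while processing event")
            raise

    def do_work(event):
        # Placeholder for the function's real logic.
        return {"processed": True}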

Monitoring matters more because you're paying per execution. An infinite loop in traditional hosting burns CPU you've already paid for. In FaaS, a runaway invocation runs until the timeout and you pay for every millisecond of it; if it retries or retriggers itself, the bill compounds.

The Mental Model

FaaS works best when you think in events and reactions, not processes and state.

Something happens → Your code runs → Something else happens

Each function is a pure transformation: event in, action out. State lives elsewhere. Coordination happens through events. The platform handles everything about running code; you handle everything about what the code does.

This constraint is liberating. You stop thinking about servers, capacity, availability zones, load balancers, auto-scaling groups. You start thinking purely about logic: when this happens, do that.

For the right problems, FaaS is the closest we've come to code that just runs—infrastructure that appears when needed and vanishes when done, leaving behind only the work it accomplished.

