You don't run Kubernetes. You negotiate with it.

You state your terms: "I want three copies of this application, always running, spread across my servers." Kubernetes accepts the contract. From that moment on, it spends every waking moment—which is every moment, because it never sleeps—trying to honor that agreement.

A container crashes? Kubernetes notices within seconds and starts a replacement. A server dies? Kubernetes reschedules those containers elsewhere. Traffic spikes? If you've set the terms right, Kubernetes spins up more copies. You declared what you wanted. Kubernetes handles the how.

This is the heart of it: Kubernetes is a system for making reality match your declarations.

The Problem It Solves

Running one container is easy. Running a hundred containers across twenty servers while handling failures, updates, scaling, and networking—that's where things fall apart.

Without orchestration, you're manually tracking which containers run where, restarting crashed processes at 3 AM, coordinating deployments across servers, and hoping nothing breaks while you're updating. You become the orchestration layer. You become the thing that never sleeps.

Kubernetes takes that job. It's an operations team that never forgets, never gets tired, and checks on your applications every few seconds to make sure they're still running the way you specified.

The Core Concepts

Pods are Kubernetes' smallest deployable unit—one or more containers that are scheduled together and live and die together. Usually one container per pod. Think of a pod as "the thing that runs."

Nodes are the machines running your pods. Physical servers, VMs, cloud instances—Kubernetes doesn't care. It just needs somewhere to schedule work.

Deployments are your declarations. "Run three replicas of this container image." The deployment is the contract; Kubernetes is the enforcer.

Services solve a tricky problem: pods come and go (they're ephemeral by design), but other parts of your system need a stable way to find them. A service provides a consistent address that routes to whatever pods are currently healthy.
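
As a sketch, a minimal Service might select pods by label and give them a stable port. The name and port numbers here are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

Any pod carrying the label app: web and passing its health checks becomes a backend for this address; pods that crash or fail checks drop out automatically.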

Namespaces let multiple teams or applications share one cluster without stepping on each other. Logical separation within physical infrastructure.
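
A namespace is itself just a declared object. A minimal manifest (the name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a

Resources then opt into it by setting metadata.namespace: team-a, and kubectl commands can be scoped with the -n team-a flag.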

How It Actually Works

You write YAML files describing what you want:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myapp:1.0
        ports:
        - containerPort: 8080

This says: "I want three copies of myapp:1.0, listening on port 8080, labeled as 'web'."

You apply it:

kubectl apply -f deployment.yaml

Kubernetes reads your declaration, compares it to current reality, and takes action. No pods running? Start three. Only two healthy? Start one more. Five running but you asked for three? Terminate two.

This loop—compare desired state to actual state, take corrective action—runs continuously. Kubernetes is constantly paranoid, constantly checking, constantly adjusting.
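
In simplified pseudocode, the loop looks something like this (the real controllers are event-driven and handle far more cases, but the shape is the same):

loop forever:
    desired = read declared state from etcd
    actual  = observe currently running pods
    if actual < desired: start (desired - actual) pods
    if actual > desired: terminate (actual - desired) pods
    wait briefly, then repeat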

What You Get

Self-healing. Crashed containers restart. Dead nodes trigger rescheduling. Failed health checks remove pods from service. You declare "always three," and Kubernetes fights to maintain that.
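
Health checks are declared per container. As a sketch (the paths and timings are illustrative), a liveness probe restarts a container that stops responding, while a readiness probe removes a pod from service routing until it recovers:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5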

Scaling. Change replicas: 3 to replicas: 10 and apply. Kubernetes starts seven more pods. Or configure automatic scaling based on CPU, memory, or custom metrics—Kubernetes adjusts replica count as load changes.
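
Automatic scaling is declared with a HorizontalPodAutoscaler. A hedged sketch targeting a deployment named web-app (the thresholds are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Kubernetes then adjusts the replica count between 3 and 10 to keep average CPU utilization near 70 percent.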

Rolling updates. Change your container image and apply. Kubernetes gradually replaces old pods with new ones, maintaining availability throughout. If the new version fails health checks, it stops the rollout and can automatically roll back.
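
The rollout behavior is itself declarative. A sketch of a deployment's update strategy (values are illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1

This tells Kubernetes it may take at most one old pod down and start at most one extra pod at a time during an update; kubectl rollout undo reverts to the previous revision if something goes wrong.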

Load balancing. Services distribute traffic across healthy pods. No separate load balancer configuration—it's built into the abstraction.

Configuration management. ConfigMaps hold configuration data. Secrets hold sensitive data. Both inject into containers without baking values into images.
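
A minimal sketch of the pattern (names and keys are illustrative): a ConfigMap declares the values, and a container references them as environment variables.

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  LOG_LEVEL: info

Then, in the container spec:

envFrom:
- configMapRef:
    name: web-config

Secrets follow the same pattern with kind: Secret, with values stored base64-encoded and access controlled separately.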

The Architecture

A Kubernetes cluster has two parts:

Control plane makes decisions. The API server receives your declarations. The scheduler decides which nodes run which pods. Controllers watch for drift between desired and actual state. etcd stores all cluster state.

Worker nodes do the work. Each runs kubelet (the Kubernetes agent) and a container runtime. Kubelet receives instructions from the control plane and ensures the right containers are running.

You interact with the cluster through kubectl, which talks to the API server. You never SSH into nodes to manage containers—you declare intent, and the system handles execution.

When Kubernetes Makes Sense

Kubernetes earns its complexity when you have:

Many services that need to find each other, scale independently, and update without coordination.

High availability requirements where automatic failover matters more than operational simplicity.

Variable load where scaling up and down saves money or maintains performance.

Multiple environments where consistent deployment across dev, staging, and production reduces errors.

When It Doesn't

Kubernetes is overkill for:

Single applications on one or a few servers. Docker Compose handles this with far less complexity.

Stable workloads that don't change often. If you deploy monthly and scale yearly, the orchestration overhead isn't worth it.

Teams without investment in learning. Kubernetes has a steep learning curve. Half-understood Kubernetes causes more problems than it solves.

The honest advice: start with Docker. Run containers. Feel the pain of managing them manually as you grow. Add Kubernetes when that pain becomes real, not when it seems cool.

Managed Services

Running your own Kubernetes control plane is operational overhead most teams don't need. Cloud providers offer managed Kubernetes:

  • Amazon EKS (Elastic Kubernetes Service)
  • Google GKE (Google Kubernetes Engine)
  • Azure AKS (Azure Kubernetes Service)

They run the control plane. You run your applications. You still need to understand Kubernetes—but you don't need to keep the control plane alive at 3 AM.

The Mental Model

Forget the YAML syntax. Forget the component names. Remember this:

Kubernetes is a system that takes your declarations and relentlessly tries to make them true. You say what you want. It figures out how to deliver it. When reality drifts from your declaration—and it always does, because servers fail and containers crash and networks partition—Kubernetes notices and corrects.

You're not managing containers. You're managing contracts. Kubernetes is the enforcement mechanism.

That's the whole thing.
