Port 2379 carries client traffic to etcd, a distributed key-value store that serves as the memory of modern cloud infrastructure. If you run Kubernetes, your entire cluster state lives here: every pod, every service, every secret, every configuration change. When you run kubectl apply, you're ultimately writing to port 2379.
What Port 2379 Does
Port 2379 is where etcd listens for client requests.1 When applications need to store or retrieve data from etcd, they connect here using gRPC over HTTP/2.2 The protocol is efficient: binary rather than textual, multiplexed over a single TCP connection, with header compression to minimize overhead.3
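To make that concrete, here is a minimal sketch of a client talking to port 2379 using etcd's official Go client, go.etcd.io/etcd/client/v3, which speaks gRPC over HTTP/2 under the hood. The localhost endpoint and the /demo/greeting key are illustrative only, and this assumes a local, unsecured etcd; a real deployment would add TLS and authentication.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to etcd's client port. The client dials with gRPC over HTTP/2.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// A write is acknowledged only after a majority of the cluster commits it.
	if _, err := cli.Put(ctx, "/demo/greeting", "hello"); err != nil {
		panic(err)
	}

	// Reads flow through the same port and return the committed value.
	resp, err := cli.Get(ctx, "/demo/greeting")
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```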
The traffic on port 2379 follows a simple pattern: clients send requests to read or write key-value pairs, and etcd responds. But beneath that simplicity lies something remarkable. Every write must be agreed upon by a majority of nodes in the etcd cluster before it's committed. This is the Raft consensus algorithm at work, ensuring that the data on port 2379 is always consistent, even when servers fail.4
Port 2379 has a sibling: port 2380. While 2379 handles client traffic, port 2380 carries server-to-server communication, the internal chatter between etcd nodes as they elect leaders and replicate data.5
The Raft Consensus Protocol
etcd's superpower is the Raft consensus algorithm, developed by Diego Ongaro and John Ousterhout at Stanford University. The paper, "In Search of an Understandable Consensus Algorithm," won Best Paper at USENIX ATC 2014.6 The title tells you everything: Raft was designed specifically to be understandable, because its predecessor, Paxos, was notoriously difficult to implement correctly.
Raft works through leader election. At any moment, at most one node in the etcd cluster is the leader. All writes go through the leader, which replicates them to followers. Once a majority of nodes acknowledge a write, it's committed. If the leader fails, the remaining nodes hold an election: each node votes at most once per term, and the first candidate to collect votes from a majority becomes the new leader.7
This is why etcd clusters typically run 3 or 5 nodes. With 3 nodes, you can tolerate 1 failure. With 5, you can tolerate 2. An odd number is recommended because adding a node to make the count even raises the quorum without raising the number of failures the cluster can survive: 4 nodes tolerate only 1 failure, just like 3, while consuming more resources.8
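The arithmetic behind that sizing rule is simply a strict majority: a cluster of N members needs ⌊N/2⌋ + 1 of them to agree. A small Go sketch (the helper names here are just for illustration) shows why an even-sized cluster buys no extra fault tolerance:

```go
package main

import "fmt"

// quorum returns how many members must agree before a write commits:
// a strict majority of the cluster.
func quorum(members int) int {
	return members/2 + 1
}

// faultTolerance returns how many members can fail while the cluster
// can still reach quorum.
func faultTolerance(members int) int {
	return members - quorum(members)
}

func main() {
	for _, n := range []int{3, 4, 5} {
		fmt.Printf("%d members: quorum %d, tolerates %d failure(s)\n",
			n, quorum(n), faultTolerance(n))
	}
	// Output:
	// 3 members: quorum 2, tolerates 1 failure(s)
	// 4 members: quorum 3, tolerates 1 failure(s)
	// 5 members: quorum 3, tolerates 2 failure(s)
}
```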
The History
In July 2013, Brandon Philips, Alex Polvi, and their team at CoreOS needed to solve a problem: how do you safely coordinate unattended automatic software updates across a cluster of Linux nodes?9 They needed a way to store cluster configuration reliably, consistently, and in a way that could survive node failures.
The name "etcd" is, in Brandon Philips' words, "trying to be really cute." On Unix systems, /etc is where you store configuration for a single machine. But this was a daemon (a background service) that stores configuration for an entire cluster. Hence: etc-d. The configuration daemon.10
CoreOS, the company, was founded by Philips and Polvi through Y Combinator in 2013. They'd been friends since meeting at the Open Source Lab at Oregon State University, where they helped run infrastructure for projects like Kernel.org and Apache.org.11 Philips was a Linux kernel developer at SUSE before starting CoreOS.
etcd was written in Go and open sourced in 2013. CoreOS shipped the first stable version in 2015. Red Hat acquired CoreOS in 2018, and later that year donated etcd to the Cloud Native Computing Foundation (CNCF), where it was accepted as an incubating project. In November 2020, etcd graduated to full project status.12
"Etcd has become successful beyond our wildest aspirations when we started the project five years ago," Philips said in 2018.13
Why Kubernetes Depends on Port 2379
Kubernetes uses etcd as its backing store for all cluster data.14 This isn't incidental. Kubernetes stores two things in etcd: the desired state of the cluster (what you want) and the actual state (what exists). Then it uses etcd's watch functionality to monitor changes to both.15
When you create a pod definition, the Kubernetes API server writes it to etcd through port 2379. Controllers watch for these changes and work to make reality match desire. When a pod actually starts, Kubernetes updates the actual state in etcd. This reconciliation loop is the heartbeat of Kubernetes, and it all flows through port 2379.
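Here is a rough sketch of that watch pattern using the Go client. The /demo/pods/ prefix is made up for illustration; Kubernetes keeps its own data under /registry and controllers go through the API server rather than watching etcd directly, so this shows the mechanism, not how a real controller is written.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Stream every change under a key prefix. The loop runs until the
	// context is cancelled or the connection drops.
	watchCh := cli.Watch(context.Background(), "/demo/pods/", clientv3.WithPrefix())
	for resp := range watchCh {
		for _, ev := range resp.Events {
			// ev.Type is PUT or DELETE; a controller would reconcile here.
			fmt.Printf("%s %s -> %s\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```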
The data in etcd includes:
- Pod definitions: The desired and current state of every running pod
- Service configurations: Information about services, endpoints, and load balancing
- Cluster metadata: Data about nodes, namespaces, and roles
- Secrets: Passwords, tokens, and keys that applications need16
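Because these objects are ordinary etcd keys, you can list them directly over port 2379 on a cluster you control. Below is a hedged sketch with the Go client, assuming the default Kubernetes key prefix /registry and a test cluster without TLS; a production kube-managed etcd requires client certificates, and the stored values are binary protobuf written by the API server, not human-readable JSON.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Fetch key names only under the Kubernetes prefix; skip the binary values.
	resp, err := cli.Get(ctx, "/registry/", clientv3.WithPrefix(), clientv3.WithKeysOnly())
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Println(string(kv.Key))
	}
}
```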
etcd has a storage limit, configurable but defaulting to 2 GiB, with a recommended maximum of 8 GiB. If the database grows past this quota, etcd raises a cluster-wide alarm and rejects further writes, effectively becoming read-only and preventing Kubernetes from creating or updating any objects.17 This is why backing up etcd is critical. If you lose etcd, you lose your cluster's memory.
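One way to keep an eye on that limit is the maintenance API exposed on the same client port. The sketch below uses the Go client's Status call; the 2 GiB figure is the assumed default quota, which the --quota-backend-bytes flag can change, and the endpoint is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Ask one member for its status; DbSize is the on-disk database size
	// that counts against the backend quota.
	status, err := cli.Status(ctx, "localhost:2379")
	if err != nil {
		panic(err)
	}

	const quotaBytes = 2 * 1024 * 1024 * 1024 // assumed default quota of 2 GiB
	used := float64(status.DbSize) / float64(quotaBytes) * 100
	fmt.Printf("db size: %d bytes (%.1f%% of quota)\n", status.DbSize, used)
}
```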
The Port Number History
etcd didn't always use port 2379. Earlier versions used ports 4001 for client traffic and 7001 for peer communication. The transition to 2379/2380 came when etcd received official IANA port assignments.18
IANA now lists:
- 2379/tcp: etcd-client (Client facing communication)
- 2380/tcp: etcd-server (Server to server communication)19
For backward compatibility, some etcd documentation still references the legacy ports, but all new deployments should use the IANA-assigned ports.
Security Considerations
etcd has had its share of security vulnerabilities, and many of them are serious because of what etcd stores.
CVE-2018-16886 (CVSS 8.1, High): When role-based access control (RBAC) is enabled alongside client certificate authentication, an attacker could authenticate as any valid RBAC user if their client certificate's Common Name matched that username. This affected etcd 3.2.x prior to 3.2.26 and 3.3.x prior to 3.3.11.20
CVE-2020-15136: The gateway TLS authentication only applied to endpoints discovered via DNS SRV records, not endpoints specified with the --endpoints flag. Fixed in versions 3.4.10 and 3.3.23.21
CVE-2020-15115: etcd didn't enforce password length requirements, allowing single-character passwords that could be trivially brute-forced.22
Unauthenticated API Access: By default, etcd accepts client API requests without any authentication. This has a severity rating of 9 (Critical).23 If your etcd is exposed to untrusted networks without authentication, anyone can read and write your cluster's entire state.
The lesson: always run etcd with TLS enabled, use strong authentication, and never expose port 2379 to the public Internet. Your Kubernetes secrets flow through this port.
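As a sketch of what "TLS enabled" looks like from the client side, here is a Go client configured for mutual TLS. The certificate paths and the https://etcd.internal:2379 endpoint are placeholders for wherever your PKI actually lives, and the server must be started with matching certificate flags.

```go
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Trust only the CA that signed the etcd server certificate.
	caCert, err := os.ReadFile("/etc/etcd/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caCert)

	// Present a client certificate so etcd can authenticate this client.
	clientCert, err := tls.LoadX509KeyPair("/etc/etcd/pki/client.crt", "/etc/etcd/pki/client.key")
	if err != nil {
		panic(err)
	}

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd.internal:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
		TLS: &tls.Config{
			RootCAs:      caPool,
			Certificates: []tls.Certificate{clientCert},
			MinVersion:   tls.VersionTLS12,
		},
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "healthcheck"); err != nil {
		panic(err)
	}
	fmt.Println("connected to etcd over mutual TLS")
}
```

The same Config struct also accepts Username and Password fields if you pair TLS with etcd's role-based authentication.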
Related Ports
| Port | Service | Relationship |
|---|---|---|
| 2380 | etcd-server | Server-to-server communication for Raft consensus |
| 4001 | etcd (legacy) | Old client port, deprecated in favor of 2379 |
| 7001 | etcd (legacy) | Old peer port, deprecated in favor of 2380 |
| 6443 | Kubernetes API | The API server that talks to etcd on your behalf |