Every packet that crosses the Internet faces the same interrogation at every router: Where are you going? The router examines the destination IP address, consults its routing table, performs longest-prefix matching, and finally decides where to send the packet. Then the next router does the same thing. And the next. Every hop, the same questions.
MPLS asks once and moves on.
Multiprotocol Label Switching assigns a simple label to packets when they enter the network. From that point forward, routers don't examine IP addresses—they just read the label, swap it for a new one, and forward. The packet stops being interrogated and starts being moved.
The Problem MPLS Solves
IP routing is like asking for directions at every intersection. You stop, explain where you're going, wait while someone thinks about it, then follow their advice to the next intersection—where you do it all again.
MPLS is like getting on a highway with clearly marked exits. Once you're on, you just follow the signs. No thinking required at each junction. The thinking happened once, when you got on.
This matters because routers are fast, but thinking takes time. Examining a 32-bit IP address and finding the best match in a routing table with hundreds of thousands of entries—that's computationally expensive. Looking up a 20-bit label in a small, fixed table? Trivial. Hardware can do it at line rate without breaking a sweat.
But speed isn't even the main benefit. The real power is control.
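The cost gap can be sketched with toy data structures. These tables, prefixes, and interface names are invented for illustration, and real routers do both lookups in specialized hardware, but the shape of the work is the same: longest-prefix matching must consider every candidate prefix, while a label lookup is one exact match.

```python
import ipaddress

# Toy longest-prefix-match table: every lookup scans all prefixes
# and keeps the most specific match.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("10.1.2.0/24"): "eth2",
}

def lpm_lookup(dst: str) -> str:
    """Scan all prefixes containing dst; return the most specific match."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in ROUTES if addr in net),
               key=lambda net: net.prefixlen)
    return ROUTES[best]

# Toy MPLS forwarding table: a single exact-match lookup.
LABEL_TABLE = {100: ("eth2", 200)}  # in-label -> (out-interface, out-label)

print(lpm_lookup("10.1.2.5"))  # matches all three prefixes; /24 wins: eth2
print(LABEL_TABLE[100])        # one hash lookup: ('eth2', 200)
```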
How Labels Flow
When a packet enters an MPLS network, the first router—called the ingress Label Switching Router—makes all the important decisions. It looks at the destination, considers quality of service requirements, checks traffic engineering policies, and assigns a label. This label gets inserted between the Layer 2 and Layer 3 headers, which is why people call MPLS "Layer 2.5." It doesn't fit neatly into the OSI model because it's solving a problem the model didn't anticipate.
The packet now travels along a predetermined path called a Label Switched Path (LSP). Each router along the way performs the same simple operation: look up the incoming label, swap it for an outgoing label, send the packet out the appropriate interface. No routing table consultation. No longest-prefix matching. Just swap and forward.
At the network's edge, the egress router strips the MPLS label and delivers the original packet to its destination. The packet never knew it was on a highway—it just arrived faster.
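The push-swap-pop lifecycle above can be sketched as three tiny functions. The label values and packet fields are invented for the sketch; the point is that only the ingress ever looks at the destination.

```python
def ingress(packet: dict, label: int) -> dict:
    packet["mpls"] = [label]  # push: the label rides between L2 and L3
    return packet

def transit(packet: dict, swap_table: dict) -> dict:
    packet["mpls"][0] = swap_table[packet["mpls"][0]]  # swap the top label
    return packet

def egress(packet: dict) -> dict:
    packet["mpls"].pop(0)  # pop: the original IP packet continues on
    return packet

pkt = {"dst": "203.0.113.7", "mpls": []}
pkt = ingress(pkt, 100)           # ingress LSR decides everything once
pkt = transit(pkt, {100: 200})    # transit hops only touch the label
pkt = transit(pkt, {200: 300})
pkt = egress(pkt)
print(pkt)  # label gone, destination untouched: {'dst': '203.0.113.7', 'mpls': []}
```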
The Label Itself
An MPLS label is 32 bits, and every bit earns its place:
The Label Value (20 bits) identifies which LSP this packet belongs to and determines forwarding.
The Traffic Class (3 bits) indicates priority for quality of service—which packets get preferential treatment when links are congested.
The Bottom of Stack bit (1 bit) matters because labels can stack. A packet can carry multiple labels, peeled off one by one as it traverses different network segments. This single bit says "I'm the last one."
The TTL (8 bits) works like IP's Time To Live—decrementing at each hop, killing packets that loop forever.
This simplicity is the point. Twenty bits gives you over a million possible labels, but each router only cares about the labels it assigned. The lookup table stays small. The forwarding stays fast.
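The four fields pack into one 32-bit word exactly as described: 20 bits of label, 3 of traffic class, the bottom-of-stack bit, and 8 bits of TTL. A minimal encoder/decoder makes the layout concrete; the example values are arbitrary.

```python
def pack_label(value: int, tc: int, bos: int, ttl: int) -> int:
    """Pack the four MPLS header fields into one 32-bit word."""
    assert value < 2**20 and tc < 8 and bos < 2 and ttl < 256
    return (value << 12) | (tc << 9) | (bos << 8) | ttl

def unpack_label(word: int) -> tuple:
    """Recover (label value, traffic class, bottom-of-stack, ttl)."""
    return (word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF)

word = pack_label(value=16001, tc=5, bos=1, ttl=64)
print(unpack_label(word))  # (16001, 5, 1, 64)
```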
Why "Multiprotocol"?
MPLS doesn't care what's inside the packet. IP, Ethernet, ATM, whatever—once the label is attached, the payload is irrelevant. Routers forward based on labels, not contents.
This lets service providers build unified networks. The same MPLS infrastructure can carry enterprise IP traffic, carrier Ethernet services, and legacy protocols. One network, many services. The economics are compelling.
Traffic Engineering: The Real Power
Traditional IP routing finds the shortest path. That sounds good until you realize that "shortest" often means "most congested" because everyone else found the same path. Meanwhile, alternate routes sit underutilized.
MPLS Traffic Engineering breaks free from shortest-path tyranny. Network operators can establish LSPs along specific paths based on actual requirements:
Bandwidth guarantees. This LSP needs 100 Mbps. Route it along links that can provide that capacity, even if they're not the shortest path.
Latency constraints. This LSP carries voice traffic that can't tolerate delay. Avoid the satellite link, even though it's technically fewer hops.
Redundancy. This backup LSP must not share any links or routers with the primary. If one fails, the other survives.
Load distribution. Spread traffic across multiple paths to actually use the network capacity we paid for.
This control is impossible with pure IP routing, where packets are autonomous agents making independent decisions at each hop. MPLS packets are obedient—they follow the path they're assigned.
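A bandwidth-constrained path computation can be sketched as Dijkstra with thin links pruned, which is the core idea behind constrained shortest-path first (CSPF) in TE implementations. The topology and capacities here are invented: when the constraint is loose the shortest path wins, and when it tightens the computation routes around it.

```python
import heapq

# Hypothetical topology: node -> [(neighbor, cost, available_mbps)].
GRAPH = {
    "A": [("B", 1, 50), ("C", 2, 200)],
    "B": [("D", 1, 50)],
    "C": [("D", 2, 200)],
    "D": [],
}

def constrained_path(src, dst, need_mbps):
    """Dijkstra, but links without enough capacity are pruned first."""
    dist = {src: 0}
    heap = [(0, src, [src])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for nbr, c, bw in GRAPH[node]:
            if bw < need_mbps:
                continue  # the TE constraint: skip links that are too thin
            if nbr not in dist or cost + c < dist[nbr]:
                dist[nbr] = cost + c
                heapq.heappush(heap, (cost + c, nbr, path + [nbr]))
    return None

print(constrained_path("A", "D", 10))   # shortest path: ['A', 'B', 'D']
print(constrained_path("A", "D", 100))  # constrained:   ['A', 'C', 'D']
```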
MPLS VPNs
Here's a common problem: You're a service provider. You have a hundred enterprise customers who all want private WANs connecting their offices. They all use private IP addresses. Many use the same addresses—10.0.0.0/8 is popular. How do you carry all this traffic on shared infrastructure without it mixing?
MPLS VPNs solve this elegantly.
Each customer gets a separate routing table at the provider's edge routers—called a VRF (Virtual Routing and Forwarding instance). Customer A's routes never see Customer B's routes. They might both have a route to 10.1.1.0/24, but those routes exist in completely separate universes.
When a packet arrives from Customer A's office, the provider router looks it up in Customer A's VRF, finds the appropriate LSP, and attaches labels that identify both the destination and the customer. The packet travels across the provider network—potentially crossing paths with Customer B's packets—but the labels keep everything separated.
This gives customers the isolation of private networks at the cost of shared infrastructure. They don't know or care that their packets travel alongside competitors'. The labels keep them apart.
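The separate-universes idea reduces to per-customer tables keyed by VRF. The customer names, LSP identifiers, and sites below are hypothetical; what matters is that the same prefix resolves differently depending on which VRF the packet arrived in.

```python
# Hypothetical per-customer VRFs: identical prefixes, separate tables.
VRFS = {
    "customer_a": {"10.1.1.0/24": ("lsp_17", "site_a_chicago")},
    "customer_b": {"10.1.1.0/24": ("lsp_42", "site_b_dallas")},
}

def pe_lookup(vrf: str, prefix: str):
    """Provider-edge lookup: the VRF, not just the prefix, picks the answer."""
    return VRFS[vrf][prefix]

# Same prefix, different universes.
print(pe_lookup("customer_a", "10.1.1.0/24"))  # ('lsp_17', 'site_a_chicago')
print(pe_lookup("customer_b", "10.1.1.0/24"))  # ('lsp_42', 'site_b_dallas')
```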
Quality of Service
Not all packets are created equal. A dropped video frame causes a brief glitch. A dropped voice packet causes noticeable audio distortion. A dropped database query causes a retry. The consequences differ, so treatment should differ.
MPLS QoS uses the Traffic Class bits and differentiated LSPs to provide guarantees:
Real-time traffic (voice, video conferencing) gets low-latency LSPs with strict priority queuing. These packets jump the line.
Business-critical traffic (database replication, financial transactions) gets bandwidth guarantees. The capacity is reserved.
Best-effort traffic (email, web browsing, backups) gets whatever's left. No complaints when it's slow.
Providers can sell these service levels at different prices. Customers pay for the guarantees they need. The Traffic Class bits tell every router along the path how to treat each packet.
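Strict priority queuing driven by the Traffic Class bits can be sketched with a priority queue. The mapping from TC values to priorities is an invented example (real deployments define their own), but the effect is the one described: voice jumps the line.

```python
import heapq

# Hypothetical mapping from Traffic Class bits to queue priority
# (lower number dequeues first).
TC_PRIORITY = {5: 0, 3: 1, 0: 2}  # voice, business-critical, best-effort

def enqueue(queue, seq, tc, name):
    # seq breaks ties so equal-priority packets stay in arrival order
    heapq.heappush(queue, (TC_PRIORITY[tc], seq, name))

q = []
enqueue(q, 0, 0, "email")     # arrives first, but best-effort
enqueue(q, 1, 5, "voice")     # arrives second, highest priority
enqueue(q, 2, 3, "db_sync")
order = [heapq.heappop(q)[2] for _ in range(3)]
print(order)  # ['voice', 'db_sync', 'email']
```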
Fast Reroute: Surviving Failure
Links fail. Routers crash. In traditional IP networks, recovering from failure means waiting for routing protocols to converge—detecting the failure, propagating the information, recalculating paths, updating forwarding tables. This takes seconds. Sometimes tens of seconds.
For voice calls, that's an eternity. For financial transactions, it's a disaster.
MPLS Fast Reroute (FRR) pre-plans for failure. Backup LSPs are established in advance, ready to carry traffic the instant a failure is detected. The switchover happens in under 50 milliseconds—fast enough that voice calls don't drop and TCP connections don't time out.
Two approaches exist:
Link protection establishes backups around specific links. If Link A fails, traffic instantly shifts to a pre-computed path around it.
Node protection goes further, establishing backups around entire routers. If Router B fails (taking all its links with it), traffic routes around the whole thing.
The backup paths aren't necessarily optimal. They're available. When milliseconds matter, available beats optimal.
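The "available beats optimal" switchover is just a lookup against pre-installed state. The link names and labels below are hypothetical; the key property is that failover requires no recomputation, only a different table entry.

```python
# Hypothetical forwarding state with a pre-installed bypass per link.
PRIMARY = {"to_D": ("link_AB", 200)}      # destination -> (link, out-label)
BACKUP = {"link_AB": ("link_AC", 900)}    # protected link -> bypass
LINK_UP = {"link_AB": True, "link_AC": True}

def forward(dest: str):
    link, label = PRIMARY[dest]
    if not LINK_UP[link]:            # failure already detected locally
        link, label = BACKUP[link]   # instant switch, no path recomputation
    return link, label

print(forward("to_D"))       # healthy: ('link_AB', 200)
LINK_UP["link_AB"] = False   # the link dies
print(forward("to_D"))       # pre-computed bypass: ('link_AC', 900)
```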
Segment Routing: MPLS Simplified
Traditional MPLS requires establishing LSPs before traffic can flow. Protocols like LDP and RSVP-TE distribute label information, routers maintain state for every LSP, and scaling means managing more state.
Segment Routing flips this model. Instead of pre-establishing paths, source routers encode the entire path as a stack of labels (segments) in the packet header. Each segment represents a forwarding instruction: "go to this router" or "use this link."
Intermediate routers don't maintain per-path state. They just execute the instructions in the segment stack. The complexity moves to the network edge, where it's manageable. The core stays simple and scales.
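The edge-encodes, core-executes model can be sketched as a packet carrying its own instruction stack. The node names are invented, and real segment routing forwards hop by hop via the IGP rather than teleporting between segments, but the state model is the point: the path lives in the packet, not in the routers.

```python
# Sketch: the ingress pushes a stack of node segments; transit nodes
# hold no per-path state and just consume the top instruction.
def route(packet: dict) -> list:
    """Follow the segment stack until it is empty; return the waypoints."""
    path = [packet["at"]]
    while packet["segments"]:
        nxt = packet["segments"].pop(0)  # top instruction: go to this router
        packet["at"] = nxt
        path.append(nxt)
    return path

pkt = {"at": "A", "segments": ["C", "F", "Z"]}  # steer via C and F to Z
print(route(pkt))  # ['A', 'C', 'F', 'Z']
```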
Segment Routing can use MPLS labels (SR-MPLS) or IPv6 extension headers (SRv6). Many providers are migrating from traditional MPLS TE to Segment Routing, keeping the benefits while shedding the complexity.
Where MPLS Lives
Service provider cores. The traffic engineering and fast failover capabilities are essential when you're carrying millions of customers' traffic.
Enterprise WANs. Connecting dozens of offices with guaranteed performance and VPN isolation. MPLS VPNs remain the gold standard.
Data center interconnects. When microseconds of latency matter and five-nines availability is the baseline, MPLS delivers.
Mobile backhaul. Cell towers need reliable, low-latency connections to the core network. MPLS provides the guarantees.
In each case, the common thread is control. MPLS lets operators dictate how traffic flows rather than leaving it to autonomous routing decisions.
The Evolution
MPLS isn't disappearing, but it is transforming. SD-WAN offers simpler enterprise connectivity for many use cases. Internet performance improvements reduce MPLS's relative advantage. Segment Routing provides MPLS benefits with less operational burden.
What persists is the fundamental insight: sometimes you want traffic to stop thinking for itself and follow instructions. Sometimes the network should be a highway, not a maze of intersections. That insight remains valuable wherever control, predictability, and performance matter.
The labels change. The need for them doesn't.