MPLS Fundamentals

Traditional IP routing makes a completely independent forwarding decision at every router a packet traverses — each hop performs a routing table lookup, selects a next-hop, and forwards the packet. This works, but it is computationally expensive, does not easily support traffic engineering, and makes it difficult to build scalable VPN services across a shared network infrastructure. Multiprotocol Label Switching (MPLS) solves all three problems by replacing per-hop IP lookups with fast label swapping: the ingress router classifies the packet once, assigns a short fixed-length label, and every subsequent router in the MPLS domain simply swaps or pops that label and forwards — no IP lookup required in the core.

The result is a forwarding plane that is faster, more scalable, and far more versatile than pure IP routing. MPLS is the foundation of modern service-provider networks and enterprise WAN designs: it enables Layer 3 VPNs (L3VPN/BGP-MPLS VPN) that carry hundreds of customer VRFs over a shared backbone, Traffic Engineering (MPLS-TE) that routes traffic along explicitly controlled paths regardless of the IGP shortest path, and QoS-aware forwarding through the EXP (experimental/TC) bits in the label header. For a conceptual overview before this lab, see MPLS Overview.

This lab covers the foundational layer that all of these advanced services depend on: how labels are structured, how the Label Distribution Protocol (LDP) discovers neighbours and distributes label bindings, how the Label Forwarding Information Base (LFIB) is built and used to forward labelled packets, the roles of PE, P, and CE routers in a service-provider architecture, and how to configure and verify basic MPLS label switching on a Cisco IOS backbone using the essential verification commands show mpls ldp neighbor and show mpls forwarding-table.

Before starting this lab, ensure you are comfortable with IGP routing fundamentals at OSPF Single-Area Configuration and OSPF Multi-Area Configuration — MPLS requires a working IGP in the core before LDP can distribute labels. For VRF concepts that underpin MPLS VPNs, see VRF-Lite Configuration. For BGP that carries VPN routing information over MPLS, see BGP Basics & eBGP.

1. MPLS Core Concepts

The MPLS Label

An MPLS label is a 32-bit value inserted between the Layer 2 header and the Layer 3 IP header on frame-based media such as Ethernet, PPP, and Frame Relay (cell-based ATM instead carries the label in the VPI/VCI fields). It is often called the "shim header" because it is shimmed between L2 and L3. Multiple labels can be stacked; the stack is processed from the top (outermost) label downward.

  MPLS Label Format (32 bits per label):
  ┌────────────────────────────┬─────┬───┬──────────┐
  │  Label Value (20 bits)     │ TC  │ S │  TTL     │
  │  (0 – 1,048,575)           │(3b) │(1b)│ (8 bits) │
  └────────────────────────────┴─────┴───┴──────────┘

  Label Value : 20-bit forwarding identifier. Values 0–15 are reserved.
                Common reserved labels:
                  0  = IPv4 Explicit NULL (pop label, forward as IPv4)
                  2  = IPv6 Explicit NULL
                  3  = Implicit NULL (signals PHP — pop the label)
  TC (3 bits) : Traffic Class — formerly called EXP (experimental).
                Carries QoS/DSCP markings for MPLS-aware queuing.
  S  (1 bit)  : Bottom-of-Stack flag. Set to 1 on the LAST (innermost)
                label in the stack. Zero on all other labels.
                After popping a label where S=1, the next header is IP.
  TTL (8 bits): Time-to-Live — decremented at each label swap hop,
                same function as IP TTL. Prevents forwarding loops.

  Packet structure with one MPLS label:
  ┌─────────────┬──────────────────┬───────────────────────────┐
  │  L2 Header  │  MPLS Label(s)   │  IP Header + Payload      │
  │ (Ethernet)  │  [Label|TC|S|TTL]│                           │
  └─────────────┴──────────────────┴───────────────────────────┘

  Packet structure with a two-label stack (e.g., MPLS VPN):
  ┌─────────────┬──────────────────┬──────────────────┬────────┐
  │  L2 Header  │  Outer Label     │  Inner Label     │  IP    │
  │             │  [transport lbl] │  [VPN lbl] S=1   │  Pkt   │
  └─────────────┴──────────────────┴──────────────────┴────────┘
                  ↑ swapped by P routers   ↑ used by egress PE
  
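The bit layout above can be sketched with Python's struct module. This is an illustrative model of the 32-bit label stack entry, not router code; encode_label_entry and decode_label_entry are made-up names:

```python
import struct

def encode_label_entry(label: int, tc: int, s: int, ttl: int) -> bytes:
    """Pack one 32-bit MPLS label stack entry: 20-bit label value,
    3-bit Traffic Class, 1-bit bottom-of-stack flag, 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= tc < 8 and s in (0, 1) and 0 <= ttl < 256
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return struct.pack("!I", word)      # network byte order, 4 bytes

def decode_label_entry(data: bytes) -> dict:
    """Unpack a 4-byte label stack entry back into its fields."""
    (word,) = struct.unpack("!I", data)
    return {
        "label": word >> 12,
        "tc": (word >> 9) & 0x7,
        "s": (word >> 8) & 0x1,
        "ttl": word & 0xFF,
    }

# A label of 17 at the bottom of the stack (S=1) with TTL 64:
entry = encode_label_entry(17, 0, 1, 64)
print(entry.hex())                 # 00011140
print(decode_label_entry(entry))
```

The shifts mirror the diagram directly: 12 bits of TC/S/TTL sit below the 20-bit label value.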

MPLS Forwarding Operations

  PUSH (impose)
    Performed by : Ingress LSR (typically the ingress PE router)
    Description  : Classify an incoming unlabelled IP packet, look up
                   the FEC, push one or more MPLS labels onto the
                   packet, and forward it into the MPLS domain
    Analogy      : Adding a mailing label to a parcel at the post
                   office entrance

  SWAP
    Performed by : Transit LSR (P routers in the core)
    Description  : Replace the incoming top label with a new outgoing
                   label and forward out the correct interface. No IP
                   lookup — the label swap is the entire forwarding
                   decision
    Analogy      : Swapping the routing sticker at a sorting depot

  POP (dispose)
    Performed by : Egress LSR or penultimate LSR (PHP)
    Description  : Remove the top label from the stack. If S=1 (bottom
                   of stack), the next header is IP and the router
                   forwards on the IP header. If S=0, the next label
                   becomes the new top and is processed
    Analogy      : Removing the routing sticker at the final sorting
                   depot before delivery
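The three operations can be sketched on a label stack modelled as a Python list with the top label at index 0. The label values are illustrative, chosen to echo a two-label VPN flow:

```python
def push(stack, label):
    """Ingress LSR: impose a new top label."""
    return [label] + stack

def swap(stack, new_label):
    """Transit LSR: replace the top label; no IP lookup involved."""
    return [new_label] + stack[1:]

def pop(stack):
    """Remove the top label; an empty stack means the next header is IP."""
    return stack[1:]

stack = push([], 24)       # ingress PE pushes a VPN label
stack = push(stack, 18)    # then a transport label on top -> [18, 24]
stack = swap(stack, 17)    # P router swaps the top label  -> [17, 24]
stack = pop(stack)         # PHP pops the transport label  -> [24]
print(stack)               # [24]: only the VPN label reaches the egress PE
```

Note that swap and pop only ever touch index 0; the inner label rides along untouched until the egress PE.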

Forwarding Equivalence Class (FEC)

A Forwarding Equivalence Class (FEC) is a group of packets that receive identical MPLS forwarding treatment — they are all assigned the same label and follow the same Label Switched Path (LSP). In basic MPLS, each IGP prefix in the routing table corresponds to one FEC. The ingress router maps an incoming IP packet to a FEC by looking up the destination IP in the routing table, then assigns the label associated with that FEC. All packets destined for the same prefix share the same label and the same LSP through the MPLS core.
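FEC classification at the ingress LSR is essentially a longest-prefix match against the FIB. A minimal sketch using Python's ipaddress module; the fib dictionary and its label values are illustrative, not taken from the lab:

```python
import ipaddress

# Illustrative ingress FIB: prefix -> label to push
fib = {
    "10.0.0.0/8": 16,
    "10.1.0.0/16": 17,
    "192.168.2.0/30": 18,
}

def classify_fec(dst_ip: str):
    """Map a destination IP to its FEC by longest-prefix match.
    All packets matching the same prefix get the same label and LSP."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [ipaddress.ip_network(p) for p in fib
               if dst in ipaddress.ip_network(p)]
    if not matches:
        return None, None
    best = max(matches, key=lambda n: n.prefixlen)   # most specific wins
    return str(best), fib[str(best)]

print(classify_fec("10.1.2.3"))    # ('10.1.0.0/16', 17)
print(classify_fec("10.9.9.9"))    # ('10.0.0.0/8', 16)
```

The more specific /16 wins over the covering /8, exactly as in an IP routing lookup; the difference is that the result is a label to push rather than only a next hop.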

Label Switched Path (LSP)

An LSP is the end-to-end path through the MPLS network that packets belonging to a particular FEC follow. LSPs are unidirectional — forward and reverse traffic use separate LSPs. In LDP-based MPLS, LSPs are built hop-by-hop as each router receives label bindings from its downstream neighbours and advertises its own bindings to its upstream neighbours. The ingress router never sees the complete path; it knows only the first label to push. The LSP itself emerges from the label bindings at each successive hop, following the IGP's shortest path.
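The hop-by-hop construction can be illustrated by tracing per-router bindings for one prefix. The bindings dictionary below is a simplified model (router name to next hop and outgoing label), not router output:

```python
# Per-router state for one prefix, e.g. the egress PE's loopback.
# Each entry gives (next_hop, outgoing_label); label 3 is Implicit
# NULL, meaning the downstream router asked for PHP.
bindings = {
    "PE1": ("P1", 17),
    "P1":  ("PE2", 3),    # PE2 advertised Implicit NULL -> PHP at P1
    "PE2": (None, None),  # egress: the prefix is local
}

def trace_lsp(ingress):
    """Follow the hop-by-hop bindings, returning (router, label) pairs
    along the LSP. No single router holds this full list in reality."""
    path, router = [], ingress
    while router is not None:
        next_hop, label = bindings[router]
        path.append((router, label))
        router = next_hop
    return path

print(trace_lsp("PE1"))   # [('PE1', 17), ('P1', 3), ('PE2', None)]
```

Only this trace function sees the whole path; each real router knows just its own row, which is precisely the point of the paragraph above.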

MPLS Tables — LIB, LFIB, and FIB

  LIB — Label Information Base
    Contents     : All label bindings received from all LDP neighbours
                   for all prefixes — including bindings that are not
                   currently used for forwarding (non-best-path
                   neighbours)
    Show command : show mpls ldp bindings
    Used for     : Control-plane database. Holds every binding from
                   every LDP peer; the router selects the best binding
                   (from the next-hop peer according to the IGP) to
                   install into the LFIB

  LFIB — Label Forwarding Information Base
    Contents     : The active forwarding entries for label-switched
                   traffic — incoming label, operation (swap/pop),
                   outgoing label, and outgoing interface. Only
                   best-path entries from the LIB are installed here
    Show command : show mpls forwarding-table
    Used for     : Data-plane forwarding table. Used by the router's
                   hardware/software to forward labelled packets
                   without any IP lookup — the direct equivalent of
                   the IP routing table for labelled traffic

  FIB — Forwarding Information Base
    Contents     : Derived from the IP routing table — IP prefixes
                   with their next hop and the outgoing label to push
                   for each prefix (for unlabelled packets arriving
                   at the ingress PE)
    Show command : show ip cef / show mpls forwarding-table
    Used for     : Classifying unlabelled IP packets into FECs at the
                   ingress LSR and pushing the correct label before
                   forwarding into the MPLS domain

Penultimate Hop Popping (PHP)

PHP is an MPLS optimisation where the penultimate (second-to-last) router in the LSP — the P router immediately before the egress PE — pops the transport label rather than the egress PE itself. This saves the egress PE from having to perform two lookups: a label lookup (LFIB) to pop the transport label and then an IP lookup (or VPN label lookup) to forward the packet. By popping the transport label one hop early, the egress PE receives an already-unlabelled IP packet (or a packet with only the VPN/inner label remaining) and performs only a single lookup. PHP is signalled by the egress PE advertising label value 3 (Implicit NULL) for its own prefixes — when the penultimate router sees label 3 as the next-hop label, it pops the top label rather than swapping it.
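The PHP decision at the penultimate LSR reduces to one check on the outgoing label. A minimal sketch; the function name is illustrative:

```python
IMPLICIT_NULL = 3   # reserved label value signalling PHP

def forward_labelled(top_label, outgoing_label):
    """Decide the label operation at a transit LSR. If the downstream
    router advertised Implicit NULL (3) for this FEC, pop the top
    label instead of swapping it (Penultimate Hop Popping)."""
    if outgoing_label == IMPLICIT_NULL:
        return ("pop", None)            # strip the transport label early
    return ("swap", outgoing_label)     # normal transit behaviour

print(forward_labelled(17, 3))    # ('pop', None): PHP for the egress PE
print(forward_labelled(18, 17))   # ('swap', 17): ordinary mid-path hop
```

Label 3 never appears on the wire; it exists only in the control plane as an instruction, which is why the reserved value is called "implicit".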

2. Label Distribution Protocol (LDP)

LDP Overview

LDP (RFC 5036) is the most common label distribution protocol for basic MPLS forwarding. It runs between directly connected MPLS-enabled routers and distributes label bindings for the IGP prefixes in the routing table. Each router independently generates a local label for every prefix and advertises those bindings to all LDP neighbours without being asked — this is Downstream Unsolicited distribution. Cisco IOS also uses Liberal Label Retention: bindings received from all peers are stored in the LIB, even though only the binding from the IGP next-hop peer is installed in the LFIB.

LDP Session Establishment Process

  Phase 1 — LDP Discovery (Hello):
  ─────────────────────────────────────────────────────────────
  ● Router sends UDP Hello messages to 224.0.0.2 (All Routers
    multicast) on port 646 every 5 seconds (hello interval)
  ● Hello messages contain the router's LDP Router ID
    (highest IP on a loopback, or highest interface IP if no loopback)
  ● Adjacent routers on the same subnet receive the Hello and
    learn each other's LDP Router IDs → become LDP "basic discovery" peers

  Phase 2 — TCP Session Setup:
  ─────────────────────────────────────────────────────────────
  ● The router with the higher transport address (on IOS this is
    normally the LDP Router ID) opens the TCP connection to the
    other router on port 646; the other side listens
  ● Both routers must be able to reach each other's LDP Router ID
    (typically the loopback) — if there is no route to the peer's
    LDP Router ID, the TCP session fails even if the Hellos succeeded
  ● TCP session establishes → LDP Initialisation messages exchanged
    (protocol version, label space, keepalive timer)

  Phase 3 — Label Binding Distribution:
  ─────────────────────────────────────────────────────────────
  ● After session UP, each router sends Label Mapping messages
    for every prefix in its IP routing table
  ● Each Label Mapping message contains: prefix, local label
  ● Peer stores all received bindings in its LIB
  ● The binding from the next-hop peer (per IGP) for each prefix
    is installed in the LFIB as the outgoing label

  Ongoing:
  ─────────────────────────────────────────────────────────────
  ● Keepalive messages sent every 60 seconds (default hold time 180s)
  ● New prefixes added to the IGP trigger new Label Mapping messages
  ● Withdrawn prefixes trigger Label Withdraw messages
  ● Session failure → all bindings from that peer removed from LIB
  
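The active/passive decision in Phase 2 can be sketched as a simple address comparison. Per RFC 5036, the peer with the numerically greater transport address plays the active role and opens the TCP connection; the function name below is illustrative:

```python
import ipaddress

def ldp_session_roles(transport_a: str, transport_b: str):
    """Return the (role of A, role of B) for an LDP session.
    The peer with the greater transport address is active: it opens
    the TCP connection to port 646. The other peer listens."""
    a = ipaddress.ip_address(transport_a)
    b = ipaddress.ip_address(transport_b)
    return ("active", "passive") if a > b else ("passive", "active")

# PE1 (1.1.1.1) vs P1 (2.2.2.2): P1 has the greater address, so P1 connects
print(ldp_session_roles("1.1.1.1", "2.2.2.2"))   # ('passive', 'active')
```

This matches the neighbour output shown later in the lab, where the TCP connection runs between the two loopback addresses.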

LDP Router ID Selection

  Priority 1 (highest) : Manually configured — mpls ldp router-id [intf] force
                         Best practice — always pin to the loopback
                         interface to ensure stability. Without force,
                         the change takes effect only when the current
                         LDP sessions reset
  Priority 2           : Highest IP address on an active loopback interface
                         Default selection if nothing is configured;
                         stable as long as the loopback stays up
  Priority 3 (lowest)  : Highest IP address on any active non-loopback
                         interface. Avoid — physical interface
                         addresses can change and flap, and an LDP
                         Router ID change resets all LDP sessions
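The selection order can be sketched as a small function. This mirrors the IOS behaviour described above; ldp_router_id is an illustrative name, IOS performs this internally:

```python
def ldp_router_id(configured=None, loopbacks=(), interfaces=()):
    """Pick the LDP Router ID: manual configuration wins, then the
    highest IP on an active loopback, then the highest IP on any
    active physical interface. Addresses are dotted-quad strings."""
    numeric = lambda ip: tuple(int(octet) for octet in ip.split("."))
    if configured:
        return configured
    if loopbacks:
        return max(loopbacks, key=numeric)
    return max(interfaces, key=numeric) if interfaces else None

print(ldp_router_id(loopbacks=["1.1.1.1", "9.9.9.9"]))             # 9.9.9.9
print(ldp_router_id(configured="1.1.1.1", loopbacks=["9.9.9.9"]))  # 1.1.1.1
```

Comparing octet tuples rather than raw strings avoids the classic trap where "9.9.9.9" sorts above "10.0.0.1" lexically.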

3. MPLS Network Roles — PE, P, and CE Routers

  MPLS Service-Provider Network Architecture:

  Customer A Site 1              SP MPLS Core              Customer A Site 2
  ┌───────────┐   ┌──────────────────────────────────────┐  ┌───────────┐
  │  CE-A1    │───│  PE1   ──────  P1  ──────  PE2       │──│  CE-A2    │
  │(IOS router│   │(Gi0/0) (Gi0/1)(Gi0/0)(Gi0/1)(Gi0/0) │  │(IOS router│
  │ no MPLS)  │   │                                       │  │ no MPLS)  │
  └───────────┘   │        ──────  P2  ──────             │  └───────────┘
                  └──────────────────────────────────────┘

  ┌────────┬──────────────────────────────────────────────────────────────┐
  │  Role  │  Description                                                 │
  ├────────┼──────────────────────────────────────────────────────────────┤
  │  CE    │  Customer Edge router. Owned by the customer or SP.          │
  │        │  Connects to the PE router using standard IP routing         │
  │        │  (static, OSPF, EIGRP, or BGP). Has NO knowledge of MPLS —  │
  │        │  it sends and receives plain IP packets. Does not run LDP.   │
  │        │  The CE-PE link is outside the MPLS domain.                  │
  ├────────┼──────────────────────────────────────────────────────────────┤
  │  PE    │  Provider Edge router. The boundary between the customer     │
  │        │  network and the MPLS core. Runs BOTH standard IP routing    │
  │        │  (toward CE) and MPLS label switching (toward P routers).   │
  │        │  Performs PUSH on ingress (imposes labels on CE traffic)     │
  │        │  and POP on egress (removes labels, forwards IP to CE).      │
  │        │  In L3VPN, PE routers run MP-BGP to exchange VPN routes      │
  │        │  and maintain per-VRF routing tables. LDP runs on all        │
  │        │  interfaces facing the MPLS core (not the CE-facing links).  │
  ├────────┼──────────────────────────────────────────────────────────────┤
  │  P     │  Provider (core) router. Fully inside the MPLS domain.       │
  │        │  Performs only SWAP operations — receives a labelled packet, │
  │        │  swaps the top label, and forwards. Has no knowledge of      │
  │        │  customer VRFs, BGP VPN routes, or CE prefixes. Only         │
  │        │  needs to know how to swap transport labels — the IGP and    │
  │        │  LDP provide all necessary information. P routers have a     │
  │        │  significantly smaller routing table than PE routers         │
  │        │  (no VPN/customer routes) which is one of MPLS's key         │
  │        │  scalability benefits.                                        │
  └────────┴──────────────────────────────────────────────────────────────┘

  Label operations across the topology for traffic from CE-A1 to CE-A2:

  CE-A1      PE1           P1            PE2           CE-A2
  ──────────────────────────────────────────────────────────────
  IP pkt →  PUSH labels → SWAP top lbl → POP top lbl → IP pkt
            [VPN][Trans]   [Trans']      [VPN only]
                ↑ two-label stack          ↑ PHP: P1 already
                (transport + VPN)            popped transport
  

Label Operations Summary by Router Role

  CE : MPLS operation: none — plain IP. Runs LDP: no. Runs IGP:
       toward the PE only. Runs MP-BGP: no (optionally plain eBGP to
       the PE). Knows customer routes: its own routes only.

  PE : MPLS operation: PUSH on ingress, POP on egress. Runs LDP: yes,
       on core-facing interfaces. Runs IGP: yes, full SP IGP. Runs
       MP-BGP: yes, for VPN route exchange. Knows customer routes:
       yes, per-VRF tables.

  P  : MPLS operation: SWAP. Runs LDP: yes, on all core interfaces.
       Runs IGP: yes, full SP IGP. Runs MP-BGP: no. Knows customer
       routes: no — only SP loopbacks and links.

4. Lab Topology

  CE1                          MPLS Core                        CE2

  ┌──────┐ 192.168.1.0/30 ┌──────┐  10.0.12.0/30  ┌──────┐
  │ CE1  │────────────────│ PE1  │────────────────│  P1  │
  └──────┘ .1          .2 └──┬───┘ .1          .2 └──┬───┘
           Gi0/0    Gi0/0    │     Gi0/1    Gi0/0    │
                    Gi0/2 .1 │                       │ Gi0/1 .1
                      10.0.14.0/30            192.168.2.0/30
                    Gi0/0 .2 │                       │ Gi0/0 .2
                          ┌──┴───┐  10.0.24.0/30  ┌──┴───┐  192.168.3.0/30
                          │  P2  │────────────────│ PE2  │──────────────── CE2
                          └──────┘ .1          .2 └──────┘ Gi0/2 .1    .2 Gi0/0
                                   Gi0/1    Gi0/1

  Loopback addresses (used as LDP Router IDs and BGP update sources):
  ┌────────┬───────────────────┬────────────────┐
  │ Router │ Loopback0         │  Role          │
  ├────────┼───────────────────┼────────────────┤
  │ PE1    │ 1.1.1.1/32        │  PE router     │
  │ P1     │ 2.2.2.2/32        │  P router      │
  │ P2     │ 3.3.3.3/32        │  P router      │
  │ PE2    │ 4.4.4.4/32        │  PE router     │
  │ CE1    │ 10.10.10.10/32    │  CE (no MPLS)  │
  │ CE2    │ 10.20.20.20/32    │  CE (no MPLS)  │
  └────────┴───────────────────┴────────────────┘

  IGP: OSPF Area 0 on all SP routers (PE1, P1, P2, PE2)
       OSPF advertises all loopback and link addresses
  LDP: Enabled on all core-facing interfaces (PE1-P1, PE1-P2,
       P1-PE2, P2-PE2 links)
  CE routing: Static routes on CE1/CE2 pointing to PE routers
  

5. Step 1 — Base IP and OSPF Configuration

MPLS requires a fully working IGP before LDP can build LSPs. Configure OSPF Area 0 on all SP routers, advertise all loopback and link addresses, and verify full IP reachability before enabling MPLS. The CE routers use only static routes — they have no OSPF or MPLS.

PE1 Base Configuration

PE1>en
PE1#conf t
PE1(config)#hostname PE1

! ── Loopback (LDP Router ID) ──────────────────────────────
PE1(config)#interface Loopback0
PE1(config-if)# ip address 1.1.1.1 255.255.255.255
PE1(config-if)# no shutdown
PE1(config-if)#exit

! ── CE1-facing interface (NOT in OSPF, NOT MPLS enabled) ──
PE1(config)#interface gi0/0
PE1(config-if)# ip address 192.168.1.2 255.255.255.252
PE1(config-if)# description To-CE1
PE1(config-if)# no shutdown
PE1(config-if)#exit

! ── Core-facing interface toward P1 ──────────────────────
PE1(config)#interface gi0/1
PE1(config-if)# ip address 10.0.12.1 255.255.255.252
PE1(config-if)# description To-P1
PE1(config-if)# no shutdown
PE1(config-if)#exit

! ── Core-facing interface toward P2 ──────────────────────
PE1(config)#interface gi0/2
PE1(config-if)# ip address 10.0.14.1 255.255.255.252
PE1(config-if)# description To-P2
PE1(config-if)# no shutdown
PE1(config-if)#exit

! ── OSPF — advertise loopback and core links only ─────────
! ── CE-facing Gi0/0 is passive (not in the SP OSPF domain) ─
PE1(config)#router ospf 1
PE1(config-router)# router-id 1.1.1.1
PE1(config-router)# network 1.1.1.1 0.0.0.0 area 0
PE1(config-router)# network 10.0.12.0 0.0.0.3 area 0
PE1(config-router)# network 10.0.14.0 0.0.0.3 area 0
PE1(config-router)# passive-interface gi0/0
PE1(config-router)#exit
PE1(config)#end
  

P1 Base Configuration

P1>en
P1#conf t
P1(config)#hostname P1

P1(config)#interface Loopback0
P1(config-if)# ip address 2.2.2.2 255.255.255.255
P1(config-if)# no shutdown
P1(config-if)#exit

P1(config)#interface gi0/0
P1(config-if)# ip address 10.0.12.2 255.255.255.252
P1(config-if)# description To-PE1
P1(config-if)# no shutdown
P1(config-if)#exit

P1(config)#interface gi0/1
P1(config-if)# ip address 192.168.2.1 255.255.255.252
P1(config-if)# description To-PE2
P1(config-if)# no shutdown
P1(config-if)#exit

P1(config)#router ospf 1
P1(config-router)# router-id 2.2.2.2
P1(config-router)# network 2.2.2.2 0.0.0.0 area 0
P1(config-router)# network 10.0.12.0 0.0.0.3 area 0
P1(config-router)# network 192.168.2.0 0.0.0.3 area 0
P1(config-router)#exit
P1(config)#end
  

PE2 Base Configuration

PE2>en
PE2#conf t
PE2(config)#hostname PE2

PE2(config)#interface Loopback0
PE2(config-if)# ip address 4.4.4.4 255.255.255.255
PE2(config-if)# no shutdown
PE2(config-if)#exit

PE2(config)#interface gi0/0
PE2(config-if)# ip address 192.168.2.2 255.255.255.252
PE2(config-if)# description To-P1
PE2(config-if)# no shutdown
PE2(config-if)#exit

PE2(config)#interface gi0/1
PE2(config-if)# ip address 10.0.24.2 255.255.255.252
PE2(config-if)# description To-P2
PE2(config-if)# no shutdown
PE2(config-if)#exit

PE2(config)#interface gi0/2
PE2(config-if)# ip address 192.168.3.1 255.255.255.252
PE2(config-if)# description To-CE2
PE2(config-if)# no shutdown
PE2(config-if)#exit

PE2(config)#router ospf 1
PE2(config-router)# router-id 4.4.4.4
PE2(config-router)# network 4.4.4.4 0.0.0.0 area 0
PE2(config-router)# network 192.168.2.0 0.0.0.3 area 0
PE2(config-router)# network 10.0.24.0 0.0.0.3 area 0
PE2(config-router)# passive-interface gi0/2
PE2(config-router)#exit
PE2(config)#end
  
P2 configuration follows the same pattern as P1 — configure the loopback (3.3.3.3/32), the two core-facing interfaces (Gi0/0 toward PE1 on 10.0.14.0/30 and Gi0/1 toward PE2 on 10.0.24.0/30), and add all networks to OSPF Area 0. After configuring all four routers, verify full IP reachability before enabling MPLS. Every SP router must be able to ping every other SP router's loopback address — these loopbacks become the LDP Router IDs and must be reachable for LDP TCP sessions to establish. See OSPF Single-Area Configuration and show ip route for OSPF verification.

Verify OSPF Before Enabling MPLS

! ── All four SP routers should see all loopbacks in OSPF ──
PE1#show ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface
2.2.2.2           1   FULL/BDR        00:00:34    10.0.12.2       Gi0/1
3.3.3.3           1   FULL/BDR        00:00:31    10.0.14.2       Gi0/2

PE1#show ip route ospf

O     2.2.2.2/32       [110/2] via 10.0.12.2, Gi0/1
O     3.3.3.3/32       [110/2] via 10.0.14.2, Gi0/2
O     4.4.4.4/32       [110/3] via 10.0.12.2, Gi0/1
                       [110/3] via 10.0.14.2, Gi0/2
O     192.168.2.0/30   [110/2] via 10.0.12.2, Gi0/1
O     10.0.24.0/30     [110/2] via 10.0.14.2, Gi0/2

! ── Critical: ping PE2's loopback from PE1 ───────────────
PE1#ping 4.4.4.4 source lo0

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 4.4.4.4, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1
!!!!!
Success rate is 100 percent (5/5)
  

6. Step 2 — Enable MPLS and LDP

Enabling MPLS on IOS requires the global mpls ip command (enabled by default on modern IOS) plus mpls ip on each interface where MPLS forwarding and LDP should run. The LDP Router ID should be manually pinned to the stable loopback interface. The mpls label protocol ldp command is also the default on modern IOS — it is shown here for clarity.

Enable MPLS on PE1

PE1#conf t

! ── Step 1: Set LDP Router ID to Loopback0 ───────────────
! ── "force" resets the LDP session immediately ────────────
PE1(config)#mpls ldp router-id Loopback0 force

! ── Step 2: Set label protocol to LDP (default, explicit) ─
PE1(config)#mpls label protocol ldp

! ── Step 3: Enable MPLS on each CORE-FACING interface ─────
! ── Do NOT enable mpls ip on the CE-facing interface Gi0/0
PE1(config)#interface gi0/1
PE1(config-if)# mpls ip
PE1(config-if)#exit

PE1(config)#interface gi0/2
PE1(config-if)# mpls ip
PE1(config-if)#exit

PE1(config)#end
  
The mpls ip command on an interface does two things simultaneously: it enables MPLS forwarding on the interface (the router will accept and forward labelled packets on this interface) and it enables LDP on the interface (the router will send LDP Hello messages and attempt to establish LDP sessions with neighbours discovered on this interface). The CE-facing interface (Gi0/0) must not have mpls ip — CE routers do not run MPLS and cannot process labelled packets. Traffic from CE1 arrives as plain IP on Gi0/0, PE1 performs the PUSH operation and forwards the labelled packet out Gi0/1 or Gi0/2 into the core.

Enable MPLS on P1, P2, and PE2

! ── P1: enable MPLS on all interfaces (pure core router) ──
P1#conf t
P1(config)#mpls ldp router-id Loopback0 force
P1(config)#mpls label protocol ldp
P1(config)#interface gi0/0
P1(config-if)# mpls ip
P1(config-if)#exit
P1(config)#interface gi0/1
P1(config-if)# mpls ip
P1(config-if)#exit
P1(config)#end

! ── PE2: same as PE1 — only core-facing interfaces ────────
PE2#conf t
PE2(config)#mpls ldp router-id Loopback0 force
PE2(config)#mpls label protocol ldp
PE2(config)#interface gi0/0
PE2(config-if)# mpls ip
PE2(config-if)#exit
PE2(config)#interface gi0/1
PE2(config-if)# mpls ip
PE2(config-if)#exit
! ── gi0/2 faces CE2 — do NOT enable mpls ip ──────────────
PE2(config)#end

! ── P2: similar to P1, enable on all core interfaces ──────
P2#conf t
P2(config)#mpls ldp router-id Loopback0 force
P2(config)#mpls label protocol ldp
P2(config)#interface gi0/0
P2(config-if)# mpls ip
P2(config-if)#exit
P2(config)#interface gi0/1
P2(config-if)# mpls ip
P2(config-if)#exit
P2(config)#end
  

7. Step 3 — Verify LDP Neighbour Relationships

show mpls ldp neighbor

PE1#show mpls ldp neighbor

    Peer LDP Ident: 2.2.2.2:0; Local LDP Ident 1.1.1.1:0
        TCP connection: 2.2.2.2.646 - 1.1.1.1.37221
        State: Oper; Msgs sent/rcvd: 47/46; Downstream
        Up time: 00:12:33
        LDP discovery sources:
          GigabitEthernet0/1, Src IP addr: 10.0.12.2
        Addresses bound to peer LDP Ident:
          10.0.12.2       2.2.2.2         192.168.2.1

    Peer LDP Ident: 3.3.3.3:0; Local LDP Ident 1.1.1.1:0
        TCP connection: 3.3.3.3.646 - 1.1.1.1.41803
        State: Oper; Msgs sent/rcvd: 45/44; Downstream
        Up time: 00:12:29
        LDP discovery sources:
          GigabitEthernet0/2, Src IP addr: 10.0.14.2
        Addresses bound to peer LDP Ident:
          10.0.14.2       3.3.3.3         10.0.24.1
  
Each LDP neighbour entry shows the critical fields needed for verification. Peer LDP Ident confirms the peer's LDP Router ID and label space (0 = per-platform label space, the standard for most IOS deployments). TCP connection shows the source and destination of the underlying TCP session on port 646 — the session is always established between the two routers' LDP Router IDs (loopback addresses). State: Oper is the key indicator — the session is fully operational and label bindings are being exchanged. Other states include Non-existent (session never formed), Initialized (TCP connected, negotiating), and OpenRec/OpenSent (initialisation in progress). Discovery sources confirms which interface the LDP Hello that initiated the session was received on. Addresses bound to peer lists all interface addresses the peer has advertised — used to map prefixes to the correct LDP session when building the LFIB.
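When verifying several routers at once, the peer list can be scraped from the CLI text. A minimal sketch against the output format shown above; the regexes assume this IOS layout and may need adjusting for other releases:

```python
import re

# Abbreviated 'show mpls ldp neighbor' text in the format shown above
sample = """\
    Peer LDP Ident: 2.2.2.2:0; Local LDP Ident 1.1.1.1:0
        State: Oper; Msgs sent/rcvd: 47/46; Downstream
    Peer LDP Ident: 3.3.3.3:0; Local LDP Ident 1.1.1.1:0
        State: Oper; Msgs sent/rcvd: 45/44; Downstream
"""

def ldp_peers(output: str):
    """Extract (peer Router ID, session state) pairs from the
    'show mpls ldp neighbor' text. Anything not 'Oper' needs a look."""
    peers = re.findall(r"Peer LDP Ident: (\S+?):\d+;", output)
    states = re.findall(r"State: (\w+);", output)
    return list(zip(peers, states))

print(ldp_peers(sample))   # [('2.2.2.2', 'Oper'), ('3.3.3.3', 'Oper')]
```

A loop over this function's result gives a quick per-router health check: every expected peer present, every state Oper.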

show mpls ldp neighbor detail

PE1#show mpls ldp neighbor detail

    Peer LDP Ident: 2.2.2.2:0; Local LDP Ident 1.1.1.1:0
        TCP connection: 2.2.2.2.646 - 1.1.1.1.37221
        Password: not required, none, in use
        State: Oper; Msgs sent/rcvd: 47/46; Downstream; Last TIB rev sent 18
        Up time: 00:12:33; UID: 3; Peer Id 0;
        LDP discovery sources:
          GigabitEthernet0/1, Src IP addr: 10.0.12.2
            holdtime: 15000 ms, hello interval: 5000 ms
        Addresses bound to peer LDP Ident:
          10.0.12.2       2.2.2.2         192.168.2.1
        Peer holdtime: 180000 ms; KA interval: 60000 ms; Peer state: estab
        Capabilities Sent:
          [ICCP (type 0x0405) MajVer 1 MinVer 0]
          [Dynamic Announcement (0x0B)]
          [mLDP Point-to-Multipoint (0x0507)]
          [mLDP Multipoint-to-Multipoint (0x0508)]
        Capabilities Received:
          [ICCP (type 0x0405) MajVer 1 MinVer 0]
          [Dynamic Announcement (0x0B)]
  
The detail view adds keepalive and hello timer values, the negotiated capabilities, and the session UID. The hello holdtime (15,000 ms) and the session keepalive interval/holdtime (60,000/180,000 ms) shown here are the IOS defaults. If the peers propose different holdtimes, LDP uses the smaller of the two values, so mismatched timers shorten the negotiated holdtime rather than break the session. The Downstream label distribution mode indicates standard Downstream Unsolicited (DU) mode — the peer sends label bindings for all prefixes it knows about without being explicitly requested.

show mpls ldp discovery

PE1#show mpls ldp discovery

 Local LDP Identifier:
    1.1.1.1:0
    Discovery Sources:
    Interfaces:
        GigabitEthernet0/1 (ldp): xmit/recv
            LDP Id: 2.2.2.2:0
        GigabitEthernet0/2 (ldp): xmit/recv
            LDP Id: 3.3.3.3:0
  
show mpls ldp discovery confirms which interfaces are sending and receiving LDP Hello messages. xmit/recv means the interface is both sending Hellos and receiving them from a peer — a healthy bidirectional discovery. If an interface shows only xmit (transmitting but not receiving), the neighbour is not responding — check whether mpls ip is configured on the remote end's corresponding interface. See also show interfaces to confirm the interface is up/up before troubleshooting LDP.

8. Step 4 — Verify Label Bindings and Forwarding

show mpls ldp bindings — The Label Information Base (LIB)

PE1#show mpls ldp bindings

  lib entry: 1.1.1.1/32, rev 4
        local binding:  label: imp-null    ← PE1 advertises imp-null for itself
        remote binding: lsr: 2.2.2.2:0, label: 16
        remote binding: lsr: 3.3.3.3:0, label: 22

  lib entry: 2.2.2.2/32, rev 6
        local binding:  label: 16
        remote binding: lsr: 2.2.2.2:0, label: imp-null  ← P1 owns 2.2.2.2
        remote binding: lsr: 3.3.3.3:0, label: 20

  lib entry: 3.3.3.3/32, rev 8
        local binding:  label: 17
        remote binding: lsr: 2.2.2.2:0, label: 21
        remote binding: lsr: 3.3.3.3:0, label: imp-null  ← P2 owns 3.3.3.3

  lib entry: 4.4.4.4/32, rev 10
        local binding:  label: 18
        remote binding: lsr: 2.2.2.2:0, label: 17   ← P1's label for PE2
        remote binding: lsr: 3.3.3.3:0, label: 23   ← P2's label for PE2

  lib entry: 10.0.12.0/30, rev 12
        local binding:  label: imp-null
        remote binding: lsr: 2.2.2.2:0, label: imp-null

  lib entry: 10.0.14.0/30, rev 14
        local binding:  label: imp-null
        remote binding: lsr: 3.3.3.3:0, label: imp-null
  
The LIB shows every label binding for every prefix — both the local binding (the label PE1 has allocated for each prefix and advertised to its peers) and all remote bindings (the labels each peer has allocated for each prefix and sent to PE1). Key observations: a router always advertises imp-null (Implicit NULL, value 3) for its own directly connected or locally originated prefixes — this signals PHP to the upstream neighbour. When a neighbour receives imp-null as the label for a prefix, it will POP the label rather than SWAP it when forwarding to that neighbour. For prefix 4.4.4.4/32 (PE2's loopback), PE1 has two remote bindings — label 17 from P1 and label 23 from P2. The binding from the IGP next-hop peer is the one installed in the LFIB; because PE1 has two equal-cost paths to 4.4.4.4/32, both bindings end up installed as ECMP entries.
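The LIB-to-LFIB selection can be sketched as: keep only the binding from the IGP next-hop peer, and translate imp-null into a Pop operation. The labels below echo the output above; lfib_outgoing is an illustrative name:

```python
IMPLICIT_NULL = "imp-null"

# PE1's LIB entry for 4.4.4.4/32: peer Router ID -> advertised label
lib_entry = {"2.2.2.2": 17, "3.3.3.3": 23}
igp_next_hop_peer = "2.2.2.2"   # say the IGP chose P1 as next hop

def lfib_outgoing(lib_entry, next_hop_peer):
    """Install only the next-hop peer's binding in the LFIB.
    imp-null from that peer means the operation is Pop, not Swap."""
    label = lib_entry[next_hop_peer]
    return ("pop", None) if label == IMPLICIT_NULL else ("swap", label)

print(lfib_outgoing(lib_entry, igp_next_hop_peer))            # ('swap', 17)
print(lfib_outgoing({"2.2.2.2": IMPLICIT_NULL}, "2.2.2.2"))   # ('pop', None)
```

With ECMP, the same selection simply runs once per equal-cost next hop, producing one LFIB entry per path.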

show mpls forwarding-table — The LFIB

PE1#show mpls forwarding-table

Local  Outgoing    Prefix              Bytes Label   Outgoing   Next Hop
Label  Label or VC or Tunnel Id        Switched      interface
16     Pop Label   2.2.2.2/32          0             Gi0/1      10.0.12.2
17     Pop Label   3.3.3.3/32          0             Gi0/2      10.0.14.2
18     17          4.4.4.4/32          24680         Gi0/1      10.0.12.2
       23          4.4.4.4/32          0             Gi0/2      10.0.14.2
No Label           192.168.1.0/30      0             Gi0/0      192.168.1.1
  
The LFIB is the active forwarding table for labelled packets. Each row explains the complete label operation at this router. Reading the columns: Local Label — the incoming label value PE1 expects to receive on a labelled packet. Outgoing Label — the label operation to perform: Pop Label means remove the top label (PHP triggered by the downstream router advertising imp-null); a numeric value means swap to this label; No Label at the egress PE means the packet exits unlabelled (plain IP forwarding toward CE). Prefix — the FEC (destination prefix) this entry serves. Bytes Label Switched — traffic counter, useful for confirming specific FECs are being used. Outgoing interface and Next Hop — where to forward after the label operation. For 4.4.4.4/32 (PE2's loopback), PE1 shows two ECMP entries: local label 18 can exit via Gi0/1 (SWAP to label 17, toward P1) or Gi0/2 (SWAP to label 23, toward P2). The bytes counter of 24680 on the first entry and 0 on the second suggests IOS is currently using the P1 path. See show ip route for the underlying IP routing context.

show mpls forwarding-table — Per-Prefix Detail

! ── Query a specific prefix ───────────────────────────────
PE1#show mpls forwarding-table 4.4.4.4/32 detail

Local  Outgoing    Prefix              Bytes Label   Outgoing   Next Hop
Label  Label or VC or Tunnel Id        Switched      interface
18     17          4.4.4.4/32          24680         Gi0/1      10.0.12.2
        MAC/Encaps=14/18, MRU=1496, Label Stack{17}
        00001A2B3C4D00001A2B3C4E8847 00011000
        No output feature configured

       23          4.4.4.4/32          0             Gi0/2      10.0.14.2
        MAC/Encaps=14/18, MRU=1496, Label Stack{23}
        00001A2B3C5500001A2B3C568847 00017000
        No output feature configured
  
The detail view adds the Label Stack showing the exact label value pushed in the MPLS header, the MRU (Maximum Receive Unit: the largest packet this router can label and still fit within the outgoing link MTU, i.e. interface MTU minus 4 bytes per label), and the raw MAC/MPLS encoding that will be prepended to the packet. The 8847 in the hex string is the Ethernet EtherType for MPLS unicast, confirming this is correctly encoded as an MPLS-labelled Ethernet frame.
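The raw encapsulation string can be decoded by hand. A quick sketch using the hex values from the first path above (field offsets follow standard Ethernet framing):

```python
# Decode the MAC/Encaps hex from the first entry above: a 14-byte Ethernet
# header (dst MAC, src MAC, EtherType) followed by the 4-byte MPLS label word.
raw = bytes.fromhex("00001A2B3C4D00001A2B3C4E8847" + "00011000")

dst_mac = raw[0:6].hex(":")                      # 00:00:1a:2b:3c:4d
src_mac = raw[6:12].hex(":")                     # 00:00:1a:2b:3c:4e
ethertype = int.from_bytes(raw[12:14], "big")    # 0x8847 = MPLS unicast
label = int.from_bytes(raw[14:18], "big") >> 12  # top 20 bits of the shim

print(hex(ethertype), label)   # 0x8847 17
print(1500 - (len(raw) - 14))  # 1496: interface MTU minus the 4-byte label = MRU
```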

show mpls ldp bindings — Verify a Specific Prefix

! ── Check bindings for PE2's loopback specifically ────────
PE1#show mpls ldp bindings 4.4.4.4 32

  lib entry: 4.4.4.4/32, rev 10
        local binding:  label: 18
        remote binding: lsr: 2.2.2.2:0, label: 17
        remote binding: lsr: 3.3.3.3:0, label: 23

! ── Verify from P1's perspective (label swap) ─────────────
P1#show mpls forwarding-table 4.4.4.4/32

Local  Outgoing    Prefix              Bytes Label   Outgoing   Next Hop
Label  Label or VC or Tunnel Id        Switched      interface
17     Pop Label   4.4.4.4/32          24680         Gi0/1      192.168.2.2

! ── P1 swaps incoming label 17 to "Pop Label" ─────────────
! ── This means P1 is the PENULTIMATE hop for PE2's loopback
! ── P1 received imp-null from PE2 for 4.4.4.4/32, so P1 pops
! ── PE2 receives unlabelled IP packet for 4.4.4.4/32 ───────

! ── Full LSP trace from PE1 to PE2's loopback: ────────────
! ── PE1: local=18, outgoing=17 via Gi0/1 (SWAP 18→17 to P1)
! ── P1:  local=17, outgoing=Pop via Gi0/1 (POP, forward IP to PE2)
! ── PE2: receives unlabelled IP, delivers to Loopback0 ─────
  

Trace the Complete LSP with traceroute

! ── traceroute from PE1 to PE2 loopback — shows MPLS labels ─
PE1#traceroute 4.4.4.4 source loopback0

Type escape sequence to abort.
Tracing the route to 4.4.4.4
VRF info: (vrf in name/id, vrf out name/id)
  1 10.0.12.2 [MPLS: Label 17 Exp 0] 4 msec 4 msec 4 msec
  2 192.168.2.2 4 msec 4 msec 4 msec

! ── Hop 1: P1 (10.0.12.2) — labelled packet with label 17 ──
! ── Hop 2: PE2 (192.168.2.2) — no label shown (PHP: P1 popped)
! ── Total 2 hops — LSP is working end to end ────────────────

! ── Standard "mpls" traceroute option for more detail ──────
PE1#traceroute mpls ipv4 4.4.4.4/32 source 1.1.1.1

Tracing MPLS Label Switched Path to 4.4.4.4/32, timeout is 2 seconds

Codes: '!' - success, 'Q' - request not sent, '.' - timeout,
  'L' - labeled output, 'B' - unlabeled output,
  'D' - DS Map mismatch, 'F' - no FEC mapping, 'f' - FEC mismatch,
  'M' - malformed request, 'm' - unsupported tlvs, 'N' - no rx label,
  'P' - no rx intf label prot, 'p' - premature termination of LSP,
  'R' - transit router, 'I' - unknown upstream index,
  'X' - unknown return code, 'x' - return code 0

Type escape sequence to abort.

  0 1.1.1.1 MRU 1496 [Labels: 17 Exp: 0]
L 1 10.0.12.2 MRU 1496 [Labels: implicit-null Exp: 0] 4 msec 4 msec 4 msec
! 2 192.168.2.2 3 msec 3 msec 4 msec
  
The MPLS traceroute (traceroute mpls ipv4) is the definitive LSP verification tool. The L code at hop 1 confirms the packet traversed a labelled segment. The ! code at hop 2 (PE2) confirms successful delivery. The [Labels: implicit-null Exp: 0] at P1 confirms PHP is occurring — P1 is popping the label before forwarding to PE2. If the LSP were broken, you would see . (timeout) or B (unlabelled output at a router that should be label-switching) indicating where in the path the label forwarding fails.

9. Step 5 — CE Router Configuration and End-to-End Verification

CE routers connect to PE routers using standard IP routing. They have no MPLS configuration — they are completely unaware of the MPLS core. Traffic from CE1 to CE2 enters the MPLS domain at PE1 (PUSH), is label-switched across the core by P routers (SWAP), and exits at PE2 (POP) as plain IP delivered to CE2. See Static Route Configuration for CE router routing toward the PE.
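The PUSH/SWAP/POP lifecycle just described can be sketched end to end. This is a toy model using the illustrative label values from this lab; in the two-router core here the swap step is collapsed by PHP, but the sketch shows the general three-stage pipeline:

```python
# Toy model of the label lifecycle for CE-to-CE traffic across an MPLS core.
def ingress_push(ip_packet, label):          # PE: FIB lookup imposes a label
    return {"labels": [label], "ip": ip_packet}

def core_swap(frame, new_label):             # P: LFIB swap on the top label only
    return {"labels": [new_label] + frame["labels"][1:], "ip": frame["ip"]}

def penultimate_pop(frame):                  # PHP: last P router strips the label
    return {"labels": frame["labels"][1:], "ip": frame["ip"]}

frame = ingress_push("CE1->CE2", label=18)   # PE1 pushes at the domain edge
frame = core_swap(frame, 17)                 # a transit P router swaps
frame = penultimate_pop(frame)               # penultimate hop pops
print(frame)   # {'labels': [], 'ip': 'CE1->CE2'} — egress PE sees plain IP
```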

CE1 Configuration

CE1>en
CE1#conf t
CE1(config)#hostname CE1

CE1(config)#interface gi0/0
CE1(config-if)# ip address 192.168.1.1 255.255.255.252
CE1(config-if)# description To-PE1
CE1(config-if)# no shutdown
CE1(config-if)#exit

CE1(config)#interface Loopback0
CE1(config-if)# ip address 10.10.10.10 255.255.255.255
CE1(config-if)# no shutdown
CE1(config-if)#exit

! ── Static default route toward PE1 ──────────────────────
CE1(config)#ip route 0.0.0.0 0.0.0.0 192.168.1.2

CE1(config)#end

! ── CE1 has no mpls configuration at all ─────────────────
CE1#show mpls interfaces
! ── (empty output — correct for CE router) ───────────────
  

Configure PE1 to Advertise CE1's Prefix into OSPF

! ── On PE1: configure the CE1-facing interface and ────────
! ── redistribute or statically advertise CE1's prefix ─────

! ── Add a static route for CE1's loopback ────────────────
PE1(config)#ip route 10.10.10.10 255.255.255.255 192.168.1.1

! ── Redistribute into OSPF so the SP core knows about it ──
PE1(config)#router ospf 1
PE1(config-router)# redistribute static subnets
PE1(config-router)#exit
PE1(config)#end

! ── Verify CE1's prefix appears in OSPF on other SP routers
P1#show ip route 10.10.10.10
O E2    10.10.10.10/32 [110/20] via 10.0.12.1, Gi0/0
  

Verify End-to-End Forwarding

! ── Check the forwarding table for CE1's prefix on PE2 ────
PE2#show mpls forwarding-table 10.10.10.10/32

Local  Outgoing    Prefix              Bytes Label   Outgoing   Next Hop
Label  Label or VC or Tunnel Id        Switched      interface
25     16          10.10.10.10/32      0             Gi0/0      192.168.2.1

! ── PE2 has local label 25 for CE1's prefix ───────────────
! ── It will swap incoming label 25 to label 16 toward P1 ──
! ── which eventually reaches PE1 and then CE1 ────────────

! ── Trace from CE2 (if configured) to CE1 loopback ────────
CE2#traceroute 10.10.10.10 source loopback0

  1 192.168.3.1 2 msec         ← PE2 (ingress PE for this direction, plain IP hop)
  2 192.168.2.1 [MPLS: Label 16 Exp 0] 4 msec  ← P1 (MPLS core)
  3 10.0.12.1 4 msec           ← PE1 (egress PE; label already removed)
  4 192.168.1.1 2 msec         ← CE1

! ── The MPLS label appears only on the core hop ──────────
! ── CE routers on both ends see no MPLS labels ───────────
  

10. Complete Verification Command Reference

  • show mpls ldp neighbor
    Shows: all established LDP sessions, with peer LDP ID, TCP connection endpoints, session state, uptime, and discovery interface.
    Check: State: Oper = session fully up and exchanging labels. Peer LDP Ident = peer's LDP Router ID (should be the loopback). Discovery sources = interface where the Hello was received.
  • show mpls ldp neighbor detail
    Shows: hello/keepalive timers, negotiated capabilities, and session statistics in addition to the basic neighbour output.
    Check: Hello holdtime (15 s default) and keepalive interval (60 s default). LDP negotiates session timers down to the lower proposed value, so compare both ends when a session is unstable.
  • show mpls ldp discovery
    Shows: interfaces sending and receiving LDP Hello messages and the LDP Router IDs discovered on each interface.
    Check: xmit/recv = bidirectional, healthy. xmit only = peer not responding. No output for an interface = mpls ip not configured on that interface.
  • show mpls ldp bindings
    Shows: the full Label Information Base: local labels allocated for every prefix, plus all remote bindings received from all LDP peers.
    Check: local binding imp-null = this router signals PHP for its own prefixes. remote binding imp-null from a peer = that peer owns the prefix (the penultimate hop should pop). Numeric labels = normal swap entries.
  • show mpls forwarding-table
    Shows: the active Label Forwarding Information Base: all installed entries with incoming label, outgoing operation, prefix, byte counters, exit interface, and next hop.
    Check: Pop Label = PHP occurring at this router (penultimate hop). No Label = packet exits unlabelled (egress PE toward CE). Numeric outgoing label = label swap. The Bytes Label Switched counter confirms traffic is actually using the entry.
  • show mpls forwarding-table [prefix] detail
    Shows: per-prefix LFIB detail including label stack, MRU, and raw L2 encoding.
    Check: Label Stack{N} confirms the exact outgoing label value. MRU confirms adequate MTU for labelled frames (interface MTU minus 4 bytes per label).
  • show mpls interfaces
    Shows: all interfaces with MPLS enabled: interface name, IP status, MPLS operational status, LDP status.
    Check: Yes in the LDP column = LDP enabled and running on the interface. No = the interface has mpls ip but LDP is not running (check whether LDP is globally enabled).
  • show mpls interfaces detail
    Shows: MTU, label stack depth, and per-interface MPLS feature flags.
    Check: MTU should accommodate at least one label (1504 bytes minimum for standard 1500-byte Ethernet; ideally 1508+ for VPN label stacks).
  • show ip cef [prefix] detail
    Shows: the CEF (Cisco Express Forwarding) entry for a prefix, including the outgoing label that will be pushed onto unlabelled IP packets at the ingress PE.
    Check: nexthop ... GiX/Y label [N] = label N will be pushed when forwarding to this prefix. Confirms the FIB-to-LFIB connection at the ingress PE.
  • traceroute mpls ipv4 [prefix]/[len]
    Shows: MPLS LSP traceroute: sends MPLS echo requests along the LSP to verify each hop and confirm the end-to-end LSP is intact.
    Check: L = labelled output (expected at all core hops). ! = successful delivery. . = timeout (LSP broken at this hop). B = unlabelled output at a hop that should be label-switching (MPLS not enabled on that interface).

MPLS Troubleshooting Quick Reference

  • Symptom: LDP session not forming (show mpls ldp neighbor empty).
    Likely cause: mpls ip not configured on the interface, or no route to the peer's LDP Router ID (loopback).
    Diagnosis & fix: check show mpls ldp discovery (is the Hello being sent and received?) and show ip route [peer-loopback]; the LDP TCP session is established between loopback addresses. If there is no route to the peer loopback, OSPF is not advertising it or the loopback is missing from the OSPF network statements.
  • Symptom: LDP session stuck in a non-Oper state.
    Likely cause: the LDP Router ID is a flapping physical-interface IP, or mismatched LDP parameters (transport address, hello timers).
    Diagnosis & fix: pin the LDP Router ID with mpls ldp router-id Loopback0 force. Check show mpls ldp neighbor detail for parameter mismatches. Verify both ends can reach each other's LDP transport address (TCP connection field).
  • Symptom: prefix missing from show mpls forwarding-table.
    Likely cause: the prefix is not in the IP routing table, or no label binding was received from the next-hop LDP peer for this prefix.
    Diagnosis & fix: verify the prefix is in show ip route first, then check show mpls ldp bindings [prefix] for a remote binding from the next-hop peer. If the LDP session is up but the binding is missing, the prefix may not be in the peer's routing table.
  • Symptom: MPLS traceroute shows B (unlabelled output) at a core hop.
    Likely cause: mpls ip not enabled on the outgoing interface of a P or PE router, so the packet is forwarded as plain IP rather than with a label.
    Diagnosis & fix: on the router where B appears, run show mpls interfaces to find which interface is missing mpls ip, and add mpls ip to that interface.
  • Symptom: MTU/fragmentation issues; traffic fails for large packets only.
    Likely cause: each MPLS label adds 4 bytes to every packet. With a 1500-byte link MTU and one label, any IP packet larger than 1496 bytes exceeds the MTU once labelled and is fragmented or dropped.
    Diagnosis & fix: increase the interface MTU on all MPLS core links to at least 1504 bytes (one label) or 1508+ bytes (VPN label stack). Use ip mtu to set the IP MTU and mpls mtu for the MPLS-specific MTU. Verify with show mpls forwarding-table [prefix] detail; the MRU field shows the largest packet that can be labelled.
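The MTU arithmetic behind the fragmentation symptom is simple enough to check directly (a sketch; real deployments must apply it to every core link):

```python
# Required core-link MTU = IP MTU + 4 bytes per label in the stack.
def required_mtu(ip_mtu, label_depth):
    return ip_mtu + 4 * label_depth

print(required_mtu(1500, 1))   # 1504: single transport label
print(required_mtu(1500, 2))   # 1508: transport + VPN label (L3VPN)
```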

Key Points & Exam Tips

  • MPLS labels are 32-bit shim headers inserted between L2 and L3. Each label has four fields: 20-bit label value, 3-bit TC (QoS), 1-bit S (bottom-of-stack), and 8-bit TTL. The S bit identifies the last label in the stack — after popping an S=1 label, the next header is IP. Reserved labels 0 (IPv4 Explicit NULL) and 3 (Implicit NULL / PHP signal) are critical to understand.
  • The three MPLS forwarding operations: PUSH (ingress PE — impose label on unlabelled IP packet), SWAP (P routers — replace top label, no IP lookup), POP (egress PE or penultimate P router for PHP). The entire forwarding efficiency of MPLS comes from P routers performing only label swaps without any IP routing table lookups.
  • LDP (RFC 5036) uses UDP multicast Hellos to discover neighbours on the same subnet (port 646, multicast 224.0.0.2), then establishes a TCP session between LDP Router IDs for label binding exchange. The TCP session is between loopback addresses — if there is no route to the peer's loopback, the TCP session fails even when the Hello is received. Always pin the LDP Router ID to the loopback with mpls ldp router-id Loopback0 force.
  • The three MPLS tables: LIB (Label Information Base) — all received label bindings from all peers, shown by show mpls ldp bindings; LFIB (Label Forwarding Information Base) — active forwarding entries for labelled packets, shown by show mpls forwarding-table; FIB — IP forwarding table extended with outgoing labels for unlabelled ingress traffic, shown by show ip cef.
  • PHP (Penultimate Hop Popping) is signalled by the egress PE advertising label value 3 (Implicit NULL) for its own prefixes. The penultimate P router sees imp-null as the outgoing label and pops instead of swapping, so the egress PE receives an already-unlabelled IP packet and needs only one forwarding lookup instead of two. In show mpls forwarding-table, PHP shows as "Pop Label" in the outgoing label column.
  • PE routers sit at the boundary between the customer network and the MPLS core — they PUSH labels on ingress and POP labels on egress. They run LDP toward the core, IGP throughout the SP network, and MP-BGP for VPN route exchange. P routers are pure label-switching devices in the core — they only SWAP labels and have no knowledge of customer VRFs or routes. CE routers are outside the MPLS domain entirely — they run plain IP toward their PE with no MPLS configuration.
  • The IOS command to enable MPLS is mpls ip — applied both globally (optional in modern IOS) and on each interface that should participate in MPLS forwarding and LDP. Do not enable mpls ip on CE-facing interfaces — CE routers cannot process labelled packets. OSPF (or another IGP) must be fully functional across the SP core before LDP will build LSPs — LDP builds LSPs following the IGP's shortest paths.
  • On the exam: know the difference between the LIB (all bindings, show mpls ldp bindings) and the LFIB (active forwarding, show mpls forwarding-table); know that "State: Oper" in show mpls ldp neighbor means the session is fully up; know that "Pop Label" in the forwarding table indicates PHP; know what "imp-null" means (a router advertising it for its own prefixes, signalling the upstream peer to perform PHP); and understand why P routers do not need to carry customer/VPN routes.
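The 32-bit shim layout from the first key point can be expressed as a pack/unpack pair. This is a sketch of the field arithmetic only, not any particular router implementation:

```python
# 32-bit MPLS shim: 20-bit label | 3-bit TC | 1-bit S | 8-bit TTL.
def pack_label(label, tc, s, ttl):
    return (label << 12) | (tc << 9) | (s << 8) | ttl

def unpack_label(word):
    return word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

word = pack_label(label=18, tc=0, s=1, ttl=255)   # S=1: bottom of stack
print(hex(word))            # 0x121ff
print(unpack_label(word))   # (18, 0, 1, 255)
```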
Next Steps: This lab covers the MPLS transport layer — the foundation that all MPLS services build on. The natural next step is MPLS L3VPN (BGP-MPLS VPN) where MP-iBGP carries per-customer VPN routes between PE routers and VRFs isolate customer routing tables — see MPLS VPN (L3VPN). For VRF concepts that underpin L3VPN, see VRF-Lite Configuration. For the BGP foundation needed for MP-BGP in MPLS VPNs, see BGP Basics & eBGP. For multi-area OSPF used as the IGP in larger MPLS cores, see OSPF Multi-Area Configuration. For DMVPN as an alternative WAN architecture to MPLS-based VPNs, see DMVPN Configuration.

TEST WHAT YOU LEARNED

1. A packet enters PE1 from CE1 as a plain IP packet. PE1 pushes a two-label stack (transport label 18, VPN label 25, S=1 on the inner label) and forwards it into the core. P1 receives the packet and performs a SWAP, replacing label 18 with label 17. What operation does P1 perform on the inner label 25?

Correct answer is C. This is one of the most fundamental principles of MPLS label stack processing and the key to understanding why P routers can forward VPN traffic without any knowledge of customer routes. MPLS label processing is strictly top-down and one label at a time: a router examines only the top label in the stack, performs the LFIB lookup on that top label value, executes the prescribed operation (swap, pop, or push), and forwards. The inner labels are completely opaque to P routers — they are carried as part of the payload without examination. This design is intentional and is the source of MPLS's scalability advantage: P routers need only one LFIB entry per transport LSP (typically one per PE loopback prefix) rather than one entry per customer VPN route. A large SP network might have hundreds of customer VPNs with thousands of routes each, but the P routers only ever see the much smaller set of transport labels — perhaps one per PE router. The S bit (Bottom of Stack) is an informational field telling the router whether there are more labels below the current one — it does not trigger processing of the inner label at transit hops. Only when the top label is popped and S=1 does the router know that the next header is IP (or another non-MPLS protocol).
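The top-label-only rule can be made concrete with a two-label stack. The values below are the illustrative labels from the question, and P1_LFIB is a hypothetical transit table:

```python
# A P router's LFIB is keyed on the top label alone; inner labels are opaque.
P1_LFIB = {18: 17}   # hypothetical transit entry: incoming 18 -> outgoing 17

def p_router_swap(stack):
    top, *inner = stack
    return [P1_LFIB[top], *inner]   # the VPN label is carried, never examined

print(p_router_swap([18, 25]))   # [17, 25]: inner label 25 is untouched
```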

2. show mpls ldp neighbor shows no output on PE1 even though show mpls ldp discovery shows LDP Hellos being sent and received on Gi0/1. What is the most likely reason the LDP session has not formed?

Correct answer is A. This is the single most common LDP troubleshooting scenario and stems from the two-phase nature of LDP neighbour establishment. Phase 1 (Hello/discovery) uses UDP multicast messages sent to 224.0.0.2 on the directly connected link — these succeed as long as both interfaces are up, MPLS is enabled on them, and they share the same subnet. No routing is required for Phase 1 because multicast UDP on a directly connected segment is link-local. Phase 2 (TCP session) is completely different — it requires the two routers to establish a TCP connection between their LDP Router IDs, which are loopback addresses by best practice. TCP is unicast and requires a route in the routing table to reach the destination. If the peer's loopback (2.2.2.2) is not reachable — most commonly because the OSPF network statement was missed, or because OSPF has not yet converged, or because the loopback subnet was accidentally excluded from OSPF — the TCP SYN never gets a reply, the TCP connection times out, and the LDP session never reaches the Oper state. The diagnostic is simple: ping the peer's LDP Router ID from your own loopback. If the ping fails, fix the routing (OSPF). If the ping succeeds but the session still does not form, the LDP Router ID on one end may be a non-loopback address that is being changed by interface flaps, causing the TCP session to reset continuously. Option D is incorrect — mpls ldp router-id is a best practice configuration but is not mandatory; without it, IOS automatically selects the highest loopback IP as the LDP Router ID.

3. In show mpls ldp bindings, PE2 advertises label value 3 (Implicit NULL) for its own loopback prefix 4.4.4.4/32 to all its LDP peers. What does this tell P1, and what does P1 do with labelled packets destined for 4.4.4.4/32?

Correct answer is D. PHP is a fundamental MPLS optimisation that is enabled by default on all Cisco IOS routers — you will see imp-null in the LIB for every locally originated/directly connected prefix. The mechanism is elegantly simple: the egress PE (the one that will ultimately deliver the packet to the CE or to the locally connected destination) tells its upstream neighbours "don't bother pushing a label for me to then immediately look up and pop — just pop it yourself, one hop early." The penultimate P router's LFIB shows "Pop Label" for that prefix, meaning it will remove the top label before forwarding. The egress PE then receives the packet with one less label, requiring one less lookup. In an L3VPN context, this is even more important: without PHP, the egress PE would receive a packet with two labels (transport + VPN), perform an LFIB lookup on the transport label, pop it, then see the VPN label and perform another LFIB lookup. With PHP, the egress PE receives the packet with only the VPN label, performs one LFIB lookup on the VPN label, identifies the VRF, performs the VRF IP lookup, and forwards to the CE. Two label lookups are reduced to one (and for plain IP traffic, the egress label lookup disappears entirely). The value "3" is specifically the Implicit NULL — it is never actually placed in a packet header (unlike Explicit NULL, value 0, which is placed in the header and discarded by the last hop). Implicit NULL is purely a signalling value in LDP — it means "perform PHP for this FEC."

4. What is the fundamental difference in routing table requirements between a PE router and a P router in an MPLS SP network, and why does this architectural difference represent one of MPLS's key scalability advantages?

Correct answer is B. The routing table separation between PE and P routers is one of the defining architectural advantages of MPLS VPN over alternative technologies. Consider the scale: a large SP might serve 5,000 enterprise customers, each with 200 branch prefixes, giving 1,000,000 customer routes total. In a traditional IP network, every core router would need to carry all 1,000,000 routes plus the Internet BGP table. With MPLS L3VPN, PE routers carry per-VRF customer routes via MP-iBGP (each PE only needs routes for VPNs it serves, but in a full-mesh scenario this could still be large), while P routers carry zero customer routes — only the SP infrastructure routes, typically a few hundred entries for the SP's own loopbacks and point-to-point links. The label stack mechanism is what enables this: when PE1 pushes a two-label stack (transport label for PE2 + VPN label for customer A), the transport label is all P routers need to see. All customer A traffic and all customer B traffic and all customer Z traffic destined for PE2 share the exact same transport label — they are all forwarded along the same LSP to PE2 using a single LFIB entry. The VPN differentiation is entirely in the inner label, which P routers ignore. This is called the "scalable VPN core" property of MPLS and is why P routers can be high-throughput forwarding engines with small routing tables, while PE routers handle the complex per-customer routing with richer (but slower) processing.

5. show mpls forwarding-table on PE1 shows "No Label" for the prefix 192.168.1.0/30 (the PE1-CE1 link). What does this mean and why is it correct?

Correct answer is C. "No Label" in the outgoing label field of the MPLS forwarding table is specifically the egress PE indicator — it marks the point where labelled packets exit the MPLS domain and become plain IP packets. The sequence of events for traffic arriving at PE2 labelled and destined for 192.168.3.0/30 (CE2's subnet in an analogous scenario) is: PE2 receives the packet with a label, looks up the local label in the LFIB, finds the entry that says "No Label, exit Gi0/2, next-hop CE2's IP," strips the label, and forwards the now-unlabelled IP packet out Gi0/2 to CE2. This is the POP operation at the egress PE. The "No Label" is not an error or a filtering action — it is the correct and expected state for any prefix that is reachable via an interface where MPLS is not enabled (CE-facing interfaces). Option D is partially correct in that directly connected prefixes on MPLS-enabled interfaces often also show "No Label" (or "Pop Label" from the other direction), but the specific reason for the CE-facing prefix having "No Label" is the MPLS domain boundary, not the directly-connected nature.

6. Why must OSPF (or another IGP) be fully converged across the SP core before LDP can build end-to-end LSPs, and what happens to label switching if OSPF re-converges due to a link failure?

Correct answer is A. The relationship between LDP and the IGP is one of the most important conceptual points in MPLS fundamentals. LDP-based MPLS (as opposed to RSVP-TE or SR-MPLS) is explicitly "traffic follows the IGP" — there is no independent path computation in LDP. Every LSP follows the IGP shortest path for the corresponding FEC. The label bindings distributed by LDP are keyed to IP prefixes in the routing table — if a prefix is not in the routing table, no label is distributed for it. Liberal label retention (the IOS default) means that LDP keeps label bindings from all peers in the LIB even for non-best-path peers. This is what enables rapid LSP reconvergence after a topology change: when OSPF re-converges and changes the next-hop for a prefix from P1 to P2, the LFIB is updated to use P2's label binding (which was already stored in the LIB from when the LDP session with P2 was first established). This avoids the need to wait for a new LDP label exchange — only the LFIB selection changes, using an already-known binding. The convergence time for MPLS after a topology change is therefore approximately equal to the IGP convergence time (the bottleneck), not IGP convergence time plus LDP re-exchange time. Option D is incorrect — LDP works with any IGP including EIGRP and IS-IS, and also with BGP for specific use cases.

7. A network engineer runs traceroute mpls ipv4 4.4.4.4/32 source 1.1.1.1 from PE1 and sees an "L" code at hop 1 (P1) followed by a "B" code at hop 2 (PE2). What does the "B" code indicate and what is the likely fix?

Correct answer is D. The MPLS traceroute output codes are precise diagnostic indicators: "L" means the router forwarded the packet with a label intact (correct label-switched forwarding); "!" means successful delivery to the destination; "B" means "unlabelled output" — the router at this hop in the LSP forwarded the packet without a label when a label was expected. The "B" code appears one hop downstream of the router where the label was incorrectly stripped. The sequence "L" at hop 1 followed by "B" at hop 2 tells us: PE1 correctly pushed the label and forwarded to P1 (hence "L" at hop 1 — labelled forwarding happened on the PE1→P1 segment); but P1 then forwarded to PE2 without a label, so the "B" code appears at PE2, where the probe arrives unlabelled. The cause: P1's outgoing interface toward PE2 does not have mpls ip. Without it, the interface does not participate in MPLS forwarding, so P1 performs an IP lookup instead of an LFIB lookup and forwards the packet unlabelled. The fix is straightforward: enter the P1-to-PE2 interface configuration and add mpls ip. After enabling it, a new LDP Hello will be sent, the LDP session to PE2 will form, and P1 will build an LFIB entry that forwards labelled packets correctly toward PE2.

8. What is the MPLS EtherType value (0x8847) used for, and what would happen to MPLS traffic on an Ethernet link if a switch in the SP core did not recognise or pass frames with this EtherType?

Correct answer is B. Understanding the Ethernet framing for MPLS is important for real-world deployments, particularly when MPLS routers are connected via Layer 2 infrastructure such as Metro Ethernet or carrier Ethernet switches. The EtherType field in the Ethernet header tells the receiving device what protocol is encapsulated in the frame's payload. Standard IP unicast uses 0x0800 — the receiver knows to parse the payload as an IP header. When a router pushes an MPLS label, the Ethernet frame's EtherType changes to 0x8847 — telling the receiver "the payload starts with an MPLS label stack, not directly with an IP header." This is why MPLS is called a "shim" — it inserts itself between the L2 and L3 headers at the EtherType boundary. A Layer 2 switch (like a Metro Ethernet provider switch) that carries MPLS router traffic does not need to understand MPLS — it just needs to forward frames with EtherType 0x8847 the same way it forwards any other Ethernet frames. However, if the switch has protocol-based filtering, VLAN assignment based on EtherType, or storm control that drops unknown EtherTypes, MPLS traffic will be silently dropped. This is a real deployment issue — some enterprise switches and managed media converters have been known to filter non-IP EtherTypes, breaking MPLS links unexpectedly. The fix is to configure the Layer 2 device to pass all EtherTypes or to explicitly permit 0x8847 and 0x8848.
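The failure mode described above can be sketched as a trivial filter check (a toy model; real switches implement this in hardware policy, not software):

```python
# A frame's EtherType tells the receiver how to parse the payload. A Layer 2
# policy that only permits IPv4 silently drops MPLS-labelled frames.
ETH_IPV4, ETH_MPLS_UC, ETH_MPLS_MC = 0x0800, 0x8847, 0x8848

def switch_forwards(ethertype, permitted):
    return ethertype in permitted

ipv4_only = {ETH_IPV4}
print(switch_forwards(ETH_MPLS_UC, ipv4_only))                               # False: dropped
print(switch_forwards(ETH_MPLS_UC, ipv4_only | {ETH_MPLS_UC, ETH_MPLS_MC}))  # True: passed
```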

9. CE1 sends a ping to CE2 across the MPLS core. Describe the complete label handling at each router in the path PE1 → P1 → PE2, including which table each router consults and what label operation it performs.

Correct answer is C. This question tests a precise understanding of which tables are used at each step of MPLS forwarding — a nuance that many candidates get wrong by conflating the LIB (control-plane) with the LFIB (data-plane). The key distinctions: The FIB (not the LIB or LFIB) is used at the ingress PE for unlabelled IP packets arriving from CE routers. The FIB is the CEF table — it contains the destination prefix, next-hop interface, next-hop MAC address for ARP, and crucially the outgoing MPLS label to impose (the label learned via LDP from the IGP next-hop peer, installed from the LIB). When CEF hardware or software processes an unlabelled IP packet, it does a single FIB lookup that returns both the forwarding action (next-hop interface) and the label operation (push this label value). The LFIB is used at transit P routers and at the egress PE (when the packet arrives still labelled). P routers use ONLY the LFIB — they look up the incoming label and get the outgoing label and interface directly. No IP lookup occurs at P routers. The LIB is a control-plane database — it is never directly consulted during data-plane forwarding. It is used to populate the LFIB (selecting the best binding from the LIB to install into the LFIB) but is not part of the per-packet forwarding path. Confusing the LIB with the LFIB is a very common exam error.

10. What is the purpose of the label space identifier in the LDP Router ID (shown as the ":0" suffix in Peer LDP Ident: 2.2.2.2:0), and what would a non-zero value indicate?

Correct answer is D. The label space concept comes directly from RFC 5036 (LDP specification) and is part of the LDP identifier format: [LDP Router ID]:[Label Space ID]. The label space identifier specifies the scope within which label values are meaningful. Per-platform label space (ID = 0): one global pool of labels for the entire router. A label value of 100 means the same thing regardless of which interface the labelled packet arrives on — the LFIB is keyed on [incoming label value] globally. This is the model used in virtually all modern Ethernet/IP MPLS deployments and all Cisco IOS routers in standard MPLS configurations. Per-interface label space (ID ≠ 0): separate label pools per interface. The LFIB is keyed on [incoming interface, incoming label value] — the same label value arriving on different interfaces can map to different forwarding entries. This was required for legacy cell-based MPLS over ATM, where labels were encoded in the VPI/VCI fields of ATM cells (a per-VC, inherently per-interface concept). Frame Relay MPLS similarly used per-interface label spaces matching the DLCI scope. Since ATM and Frame Relay are obsolete in modern networks, per-interface label spaces are essentially historical artefacts in today's context, but the label space field remains in the LDP specification and in Cisco IOS output for completeness. Understanding that ":0" means per-platform and that this is always the expected value in modern deployments is sufficient for both the exam and practical work.