DMVPN Phase 1, 2 & 3

Traditional hub-and-spoke VPNs require a static tunnel between every spoke and the hub — manageable for five sites, unscalable at fifty. DMVPN (Dynamic Multipoint VPN) replaces the static tunnel mesh with a single mGRE (Multipoint GRE) interface on each router and a registration protocol — NHRP (Next Hop Resolution Protocol) — that resolves which public IP hides behind each tunnel address. Spokes register their public IPs with the hub at startup; the hub maintains an NHRP database. From that foundation, the three DMVPN phases differ only in how far the NHRP intelligence is pushed. Phase 1 routes all traffic through the hub. Phase 2 lets spokes query the hub for each other's public IPs and build direct tunnels on demand. Phase 3 adds NHRP redirect so the hub signals spokes to build shortcuts without needing per-spoke routing entries, enabling route summarisation at the hub.

DMVPN builds on standard GRE tunnel concepts. For a GRE foundation see GRE Tunnel Configuration. IPsec is layered on top of the mGRE interface to encrypt DMVPN traffic — for the IPsec building blocks see IPsec Site-to-Site VPN and IPsec Basics. For the EIGRP routing that runs over the DMVPN overlay see EIGRP Configuration and EIGRP Overview. For alternative routing protocols over DMVPN see OSPF Configuration and BGP Basics & eBGP. For the WAN transport that forms the DMVPN underlay see WAN and MPLS.

1. DMVPN Architecture — Core Concepts

Building Blocks

Component | Role | IOS Object
mGRE | Multipoint GRE — a single tunnel interface that accepts connections from multiple remote peers, each identified by their public IP. Unlike point-to-point GRE, no tunnel destination is needed; destinations are resolved dynamically by NHRP | tunnel mode gre multipoint
NHRP | Next Hop Resolution Protocol — maps tunnel overlay IPs to physical (NBMA) IPs. Spokes register with the hub; spokes query the hub for peer public IPs when building shortcuts | ip nhrp map, ip nhrp registration, ip nhrp network-id
NHS | Next Hop Server — the hub router that holds the NHRP registration database and answers resolution queries from spokes | ip nhrp nhs [hub-tunnel-ip] configured on each spoke
NBMA address | The public routable IP of each router — what NHRP resolves to. Packets are encapsulated in GRE and sent to this address across the underlay (internet/MPLS) | ip nhrp map [overlay] [NBMA] and ip nhrp map multicast [NBMA]
Tunnel key | Optional value that distinguishes multiple DMVPN clouds sharing the same physical interface — must match on all routers in the same cloud | tunnel key [number]
NHRP network-id | Locally significant identifier that groups the NHRP domain — does not need to match between hub and spoke (unlike the tunnel key) | ip nhrp network-id [number]

Phase Comparison

Feature | Phase 1 | Phase 2 | Phase 3
Spoke-to-spoke traffic path | Always through hub (hair-pinned) | Direct spoke-to-spoke tunnel after NHRP resolution | Direct spoke-to-spoke tunnel triggered by NHRP redirect from hub
Hub tunnel mode | mGRE | mGRE | mGRE
Spoke tunnel mode | Point-to-point GRE (hub only) or mGRE | mGRE (required — spokes must accept dynamic peers) | mGRE
Hub routing | Specific spoke prefixes — hub re-advertises between spokes | Specific spoke prefixes — next-hop preserved so spokes can find each other | Summary route — hub advertises aggregate; NHRP redirect triggers shortcut
NHRP redirect on hub | Not needed | Not needed | Required: ip nhrp redirect on hub tunnel
NHRP shortcut on spokes | Not needed | Not needed | Required: ip nhrp shortcut on spoke tunnels
Routing next-hop preservation | Not needed — all traffic goes to hub | Critical: no ip next-hop-self eigrp (or BGP next-hop unchanged) | Not needed — summarization hides spoke prefixes; NHRP handles resolution
Scalability | Limited — hub is bottleneck | Good — direct paths, but hub must hold all spoke routes | Best — hub summarises; spoke routing table stays small

Underlay vs Overlay

  OVERLAY  (tunnel network — DMVPN addresses, routing protocol runs here)
  ─────────────────────────────────────────────────────────────────────
  Tunnel0 addresses:
    HUB:    10.0.0.1/24
    SPOKE1: 10.0.0.2/24
    SPOKE2: 10.0.0.3/24
    SPOKE3: 10.0.0.4/24

  NHRP maps tunnel IP → NBMA (public) IP:
    10.0.0.2 → 203.0.113.10  (Spoke1 public IP)
    10.0.0.3 → 203.0.113.20  (Spoke2 public IP)
    10.0.0.4 → 203.0.113.30  (Spoke3 public IP)

  UNDERLAY (physical network — internet/MPLS — GRE packets travel here)
  ─────────────────────────────────────────────────────────────────────
  Physical interface public IPs:
    HUB:    198.51.100.1   (static — spokes must know this)
    SPOKE1: 203.0.113.10   (can be dynamic — registered via NHRP)
    SPOKE2: 203.0.113.20
    SPOKE3: 203.0.113.30
  

2. Lab Topology

                  ┌─────────────────────┐
                  │       INTERNET       │
                  │  (underlay network)  │
                  └──────────┬──────────┘
           ┌─────────────────┼──────────────────┐
           │                 │                  │
     203.0.113.10      198.51.100.1        203.0.113.20
      Gi0/0                Gi0/0               Gi0/0
    [SPOKE1]             [  HUB  ]           [SPOKE2]
    Tu0: 10.0.0.2        Tu0: 10.0.0.1       Tu0: 10.0.0.3
    LAN: 10.1.0.0/24     LAN: 10.0.1.0/24    LAN: 10.2.0.0/24
      Gi0/1                Gi0/1               Gi0/1

  DMVPN Overlay Subnet: 10.0.0.0/24
  Routing Protocol: EIGRP AS 100 (over the overlay)
  IPsec Profile: DMVPN-IPSEC (applied to Tunnel0)

  Router hostnames: NetsTuts-HUB, NetsTuts-SP1, NetsTuts-SP2
  NHRP Network-ID: 1
  Tunnel Key: 100
  
The hub's public IP (198.51.100.1) must be static and known to all spokes — it is hard-coded in the spoke NHRP configuration. Spoke public IPs can be dynamic; they register with the hub at startup. This asymmetry is a core DMVPN design principle: only the hub needs a fixed public IP.

3. IPsec Profile (Applied to All Phases)

IPsec is configured identically on all routers in all three phases. A protection profile is applied directly to the Tunnel0 interface — no crypto maps needed. Configure this on the hub and all spokes before proceeding to phase-specific tunnel setup:

! ═══ Configure on HUB, SPOKE1, and SPOKE2 ════════════════
!
! ── IKE Phase 1 policy ───────────────────────────────────
crypto isakmp policy 10
 authentication pre-share
 encryption aes 256
 hash sha256
 group 14
 lifetime 86400
!
! ── Pre-shared key — wildcard covers all DMVPN peers ─────
crypto isakmp key NetsTuts-DMVPN address 0.0.0.0 0.0.0.0
!
! ── IPsec transform set ──────────────────────────────────
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha256-hmac
 mode transport
!
! ── IPsec profile — attached to tunnel interface ─────────
crypto ipsec profile DMVPN-IPSEC
 set transform-set DMVPN-TS
!
  
mode transport is used instead of mode tunnel because GRE already provides the outer encapsulation — IPsec only needs to encrypt the GRE payload, not add another IP header. The wildcard pre-shared key (address 0.0.0.0 0.0.0.0) allows any DMVPN peer with the correct key to establish an IKE session — essential since spoke public IPs are dynamic and cannot be enumerated in advance. In production, certificate-based authentication (PKI) is preferred over pre-shared keys for scalability. The IPsec profile is applied to the tunnel interface with tunnel protection ipsec profile DMVPN-IPSEC. After completing the configuration, save it with write memory. Verify physical interface status with show ip interface brief.

4. DMVPN Phase 1 — Hub-Routed

Phase 1 is the simplest DMVPN deployment. Every spoke forms a tunnel only with the hub; spoke-to-spoke communication is always hair-pinned through the hub. Spokes can use either point-to-point GRE or mGRE tunnel interfaces. All NHRP resolution stops at the hub — spokes never query for each other's NBMA addresses.

Phase 1 — Hub Configuration

! ════════════════════════════════════════════════════════════
! NetsTuts-HUB — DMVPN Phase 1
! ════════════════════════════════════════════════════════════
!
interface Tunnel0
 description DMVPN-Phase1-Hub
 ip address 10.0.0.1 255.255.255.0
 !
 ! ── mGRE: single interface accepts all spoke tunnels ─────
 tunnel mode gre multipoint
 tunnel source GigabitEthernet0/0
 tunnel key 100
 !
 ! ── NHRP: hub is the NHS — no nhs command needed on hub ──
 ip nhrp network-id 1
 ip nhrp map multicast dynamic
 !
 ! ── IPsec encryption ─────────────────────────────────────
 tunnel protection ipsec profile DMVPN-IPSEC
 !
 ip mtu 1400
 ip tcp adjust-mss 1360
!
! ── Physical WAN interface ────────────────────────────────
interface GigabitEthernet0/0
 ip address 198.51.100.1 255.255.255.0
 no shutdown
!
interface GigabitEthernet0/1
 ip address 10.0.1.1 255.255.255.0
 no shutdown
!
! ── EIGRP over the DMVPN overlay ─────────────────────────
router eigrp 100
 network 10.0.0.0 0.0.0.255
 network 10.0.1.0 0.0.0.255
 no auto-summary
!
  
ip nhrp map multicast dynamic is the single most important hub command. It tells the hub to dynamically build a multicast replication list from the NHRP registration database — any spoke that registers gets added automatically. Without this, EIGRP Hello packets (which are multicast) would not reach spokes, and routing adjacencies would never form. The hub never needs an ip nhrp nhs command — it IS the NHS. The tunnel key (100) must match on every router in the cloud; mismatches silently drop all GRE packets.
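The registration-driven behaviour can be sketched as a small model (illustrative Python, not IOS internals; all class and variable names are invented for this sketch):

```python
# Conceptual model of the hub's NHRP database and the dynamic multicast
# replication list built by "ip nhrp map multicast dynamic".
# Illustrative only -- not how IOS implements NHRP internally.

class NhrpHub:
    def __init__(self):
        self.cache = {}            # tunnel (overlay) IP -> NBMA (public) IP
        self.multicast_list = []   # NBMA addresses that receive replicated multicast

    def register(self, tunnel_ip, nbma_ip):
        """Spoke registration: store the mapping and, because 'map multicast
        dynamic' is configured, add the spoke to the replication list
        automatically -- no per-spoke hub configuration needed."""
        self.cache[tunnel_ip] = nbma_ip
        if nbma_ip not in self.multicast_list:
            self.multicast_list.append(nbma_ip)

    def replicate_multicast(self, packet):
        """An EIGRP Hello to 224.0.0.10 is unicast-replicated to every
        registered spoke's NBMA address."""
        return [(nbma, packet) for nbma in self.multicast_list]

hub = NhrpHub()
hub.register("10.0.0.2", "203.0.113.10")   # SPOKE1 registers at startup
hub.register("10.0.0.3", "203.0.113.20")   # SPOKE2 registers at startup

print(hub.cache["10.0.0.2"])                        # 203.0.113.10
print(len(hub.replicate_multicast("EIGRP Hello")))  # 2 copies, one per spoke
```

The point of the model: registration alone populates both the NHRP cache and the multicast replication list, which is why the hub never needs per-spoke configuration.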

Phase 1 — Spoke Configuration

! ════════════════════════════════════════════════════════════
! NetsTuts-SP1 — DMVPN Phase 1 Spoke
! (SP2 identical — change addresses accordingly)
! ════════════════════════════════════════════════════════════
!
interface Tunnel0
 description DMVPN-Phase1-Spoke
 ip address 10.0.0.2 255.255.255.0
 !
 ! ── mGRE on spoke — allows future phase upgrade ───────────
 tunnel mode gre multipoint
 tunnel source GigabitEthernet0/0
 tunnel key 100
 !
 ! ── NHRP: register with the hub NHS ──────────────────────
 ip nhrp network-id 1
 !
 ! ── Static NHRP entry: hub's tunnel IP → hub's public IP ─
 ip nhrp map 10.0.0.1 198.51.100.1
 !
 ! ── Multicast replication toward hub (for EIGRP Hellos) ──
 ip nhrp map multicast 198.51.100.1
 !
 ! ── Point to hub as the NHS ──────────────────────────────
 ip nhrp nhs 10.0.0.1
 !
 tunnel protection ipsec profile DMVPN-IPSEC
 !
 ip mtu 1400
 ip tcp adjust-mss 1360
!
interface GigabitEthernet0/0
 ip address 203.0.113.10 255.255.255.0
 no shutdown
!
interface GigabitEthernet0/1
 ip address 10.1.0.1 255.255.255.0
 no shutdown
!
router eigrp 100
 network 10.0.0.0 0.0.0.255
 network 10.1.0.0 0.0.0.255
 no auto-summary
!
  
Three NHRP commands work together on each spoke. ip nhrp map 10.0.0.1 198.51.100.1 is the static bootstrap entry that maps the hub's tunnel IP to its known public IP — without this the spoke cannot send the initial NHRP registration request because it does not know where the hub lives. ip nhrp map multicast 198.51.100.1 ensures EIGRP multicast Hellos are replicated to the hub's public IP (EIGRP uses 224.0.0.10, which must be forwarded as unicast over the GRE tunnel). ip nhrp nhs 10.0.0.1 designates the hub as the NHS for this spoke's NHRP domain. In Phase 1, spokes never send NHRP resolution requests for other spokes — NHRP is used only for registration.

Phase 1 Traffic Flow

  SP1 (10.1.0.x) → SP2 (10.2.0.x) in Phase 1:

  1. SP1 has route to 10.2.0.0/24 via 10.0.0.1 (hub) ← hub re-advertises
  2. SP1 sends packet: src 10.1.0.10, dst 10.2.0.10 → encapsulate in GRE → send to hub NBMA 198.51.100.1
  3. Hub decapsulates → re-encapsulates → sends to SP2 NBMA 203.0.113.20
  4. SP2 delivers to 10.2.0.10

  All inter-spoke traffic traverses the hub twice (once inbound, once outbound).
  Hub bandwidth and CPU are the bottleneck.
  

5. DMVPN Phase 2 — Spoke-to-Spoke Shortcuts

Phase 2 adds the ability for spokes to build direct tunnels to each other on demand. When Spoke1 sends traffic destined for Spoke2's network, it triggers an NHRP resolution request to the hub asking "what is the public IP for tunnel address 10.0.0.3?" The hub replies with Spoke2's NBMA address, and Spoke1 caches that mapping and builds a direct GRE tunnel. All spokes must use mGRE tunnels in Phase 2.
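The resolution-on-demand behaviour can be sketched as a small model (illustrative Python, not IOS internals; names are invented for this sketch):

```python
# Conceptual model of a Phase 2 spoke: a cache miss on the next-hop
# triggers an NHRP Resolution Request to the NHS (hub); the reply is
# cached so later packets bypass the hub. Illustrative only.

class NhrpSpoke:
    def __init__(self, nhs_answers):
        # Static bootstrap entry: hub tunnel IP -> hub public IP
        self.cache = {"10.0.0.1": "198.51.100.1"}
        self.nhs_answers = nhs_answers   # stands in for the hub's NHRP database

    def resolve(self, next_hop_tunnel_ip):
        """Look up the NBMA address for a tunnel next-hop; on a cache
        miss, 'ask the hub' and cache the answer."""
        if next_hop_tunnel_ip not in self.cache:
            self.cache[next_hop_tunnel_ip] = self.nhs_answers[next_hop_tunnel_ip]
        return self.cache[next_hop_tunnel_ip]

sp1 = NhrpSpoke({"10.0.0.3": "203.0.113.20"})   # hub knows SP2's NBMA

print(sp1.resolve("10.0.0.3"))   # 203.0.113.20 -- learned from the NHS
print("10.0.0.3" in sp1.cache)   # True -- cached; no repeat query needed
```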

Phase 2 Changes from Phase 1

The tunnel interface configuration is identical to Phase 1 — no tunnel commands change between phases. The critical Phase 2 requirement is in the routing protocol. EIGRP must preserve the original next-hop when advertising spoke routes across the hub — if the hub changes the next-hop to itself (default EIGRP behaviour), spokes send traffic to the hub rather than directly to the originating spoke, and NHRP resolution is never triggered.

Phase 2 — Hub EIGRP Change (Critical)

! ════════════════════════════════════════════════════════════
! NetsTuts-HUB — Phase 2 EIGRP change
! ════════════════════════════════════════════════════════════
!
router eigrp 100
 network 10.0.0.0 0.0.0.255
 network 10.0.1.0 0.0.0.255
 no auto-summary
!
interface Tunnel0
 ! ── Preserve original next-hop when advertising to spokes ─
 no ip next-hop-self eigrp 100
 !
 ! ── EIGRP split horizon disabled — hub must re-advertise
 ! ── routes learned from spokes back to other spokes ───────
 no ip split-horizon eigrp 100
!
  
These two interface-level EIGRP commands are the entire difference between Phase 1 and Phase 2 behaviour from a routing perspective. no ip split-horizon eigrp 100 allows the hub to re-advertise routes learned from one spoke back out the same Tunnel0 interface to other spokes — split horizon normally blocks this, which would prevent spokes from learning each other's prefixes entirely. no ip next-hop-self eigrp 100 preserves the original spoke's tunnel IP as the EIGRP next-hop when the hub forwards the advertisement. Without this, the hub replaces the next-hop with 10.0.0.1 (itself) and spokes route everything through the hub — exactly Phase 1 behaviour despite being configured as Phase 2.
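The effect of the next-hop rewrite can be illustrated with a minimal sketch (a hypothetical Python model using the lab addresses; not EIGRP internals):

```python
# Illustrative model of the hub re-advertising a spoke-learned route
# with and without next-hop-self. Addresses match the lab topology.

HUB_TUNNEL_IP = "10.0.0.1"

def readvertise(route, next_hop_self):
    """Hub re-advertises a route learned from a spoke. With next-hop-self
    (the EIGRP default) the hub rewrites the next-hop to its own tunnel
    IP; with 'no ip next-hop-self' the originating spoke's tunnel IP is
    preserved."""
    prefix, original_nh = route
    nh = HUB_TUNNEL_IP if next_hop_self else original_nh
    return (prefix, nh)

sp1_route = ("10.1.0.0/24", "10.0.0.2")   # SP1's LAN, next-hop = SP1 tunnel IP

# Default behaviour: SP2 routes via the hub -> Phase 1 hair-pinning.
print(readvertise(sp1_route, next_hop_self=True))   # ('10.1.0.0/24', '10.0.0.1')

# 'no ip next-hop-self eigrp 100': next-hop preserved -> SP2 triggers
# NHRP resolution for 10.0.0.2 and builds a direct tunnel.
print(readvertise(sp1_route, next_hop_self=False))  # ('10.1.0.0/24', '10.0.0.2')
```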

Phase 2 Traffic Flow — NHRP Resolution

  SP1 → SP2 first packet (Phase 2):

  1. SP1 routing table: 10.2.0.0/24 via 10.0.0.3 (SP2 tunnel IP) ← next-hop preserved!
  2. SP1 does not know SP2's NBMA IP — sends NHRP Resolution Request to hub (10.0.0.1)
     "What is the NBMA address for tunnel IP 10.0.0.3?"
  3. First packet is forwarded to hub while NHRP query is outstanding
  4. Hub looks up NHRP cache: 10.0.0.3 → 203.0.113.20
  5. Hub sends NHRP Resolution Reply to SP1: "10.0.0.3 is at NBMA 203.0.113.20"
  6. SP1 caches the mapping; builds direct IPsec/GRE tunnel to 203.0.113.20
  7. All subsequent SP1 → SP2 traffic takes the direct path (bypasses hub)
  8. NHRP cache entry expires after hold time (default 7200 sec) — resolved again if needed
  

Phase 2 — NHRP Hold Timer Tuning

! ── Apply on hub and all spokes ──────────────────────────
interface Tunnel0
 ! ── How long NHRP registrations are valid ────────────────
 ip nhrp holdtime 300
 !
 ! ── How often spokes re-register (should be < holdtime) ──
 ip nhrp registration timeout 60
!
  

6. DMVPN Phase 3 — NHRP Redirect and Summarization

Phase 3 solves the primary Phase 2 scaling limitation: with hundreds of spokes, each spoke's routing table must contain a specific route for every other spoke's LAN — advertised with the original next-hop preserved. Phase 3 moves the intelligence to NHRP. The hub advertises only a summary route covering all spoke networks; spokes install that summary pointing to the hub. When Spoke1 sends traffic matching the summary (destined for Spoke2's LAN), the packet reaches the hub. The hub — instead of forwarding — sends an NHRP Redirect message back to Spoke1 telling it to go directly to Spoke2. Spoke1 sends an NHRP resolution request to Spoke2 directly and builds a shortcut route.

Phase 3 — Hub Configuration Changes

! ════════════════════════════════════════════════════════════
! NetsTuts-HUB — Phase 3 changes
! ════════════════════════════════════════════════════════════
!
interface Tunnel0
 ! ── Enable NHRP redirect — hub sends redirect on transit ──
 ip nhrp redirect
 !
 ! ── Phase 3: next-hop-self and split-horizon can be RE-ENABLED
 ! ── (or left off — no functional difference in Phase 3) ───
 ! ip next-hop-self eigrp 100   ← optional in Phase 3
 ! ip split-horizon eigrp 100   ← optional in Phase 3
!
! ── Hub advertises a summary to spokes instead of specifics ─
router eigrp 100
 network 10.0.0.0 0.0.0.255
 network 10.0.1.0 0.0.0.255
 no auto-summary
!
interface Tunnel0
 ! ── Summarise all spoke LANs into one prefix ─────────────
 ip summary-address eigrp 100 10.0.0.0 255.0.0.0
!
  
ip nhrp redirect on the hub tunnel interface is the single command that defines Phase 3. When the hub receives a packet that will transit the DMVPN cloud (source and destination are both DMVPN tunnel addresses), it forwards the packet normally AND sends an NHRP Traffic Indication (redirect) message to the source spoke, telling it to resolve the destination directly. The summary address (10.0.0.0/8 covering all spoke LANs in this example) means each spoke only needs one route for all remote sites — the hub's tunnel IP — until NHRP builds a shortcut. When the summary is broad enough, spoke routing tables scale to O(1) rather than O(n) for remote prefixes. For route summarisation techniques see Route Summarisation & Aggregation.

Phase 3 — Spoke Configuration Changes

! ════════════════════════════════════════════════════════════
! NetsTuts-SP1 and NetsTuts-SP2 — Phase 3 changes
! ════════════════════════════════════════════════════════════
!
interface Tunnel0
 ! ── Enable NHRP shortcut — spoke installs shortcut routes ─
 ip nhrp shortcut
!
  
ip nhrp shortcut on each spoke tunnel enables the spoke to install NHRP-derived shortcut routes in its routing table when it receives an NHRP resolution reply. Without this, the spoke receives the NHRP redirect from the hub and sends the resolution request, but never installs the resulting shortcut — traffic continues flowing through the hub. The shortcut route is installed as a host route (/32) or specific prefix in the RIB, overriding the summary route from the hub for that specific destination until the NHRP cache entry expires.
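The redirect-plus-shortcut interaction can be sketched as a simple model (illustrative Python; in reality the resolution request travels to the peer spoke, which is simplified here by reading the hub's table directly):

```python
# Conceptual Phase 3 flow: the hub forwards the transit packet AND sends
# an NHRP Traffic Indication (redirect); the spoke then resolves the
# destination and installs a shortcut. Illustrative only.

hub_nhrp = {"10.0.0.2": "203.0.113.10",   # SP1
            "10.0.0.3": "203.0.113.20"}   # SP2

def hub_transit(src_tunnel, dst_tunnel):
    """Hub behaviour with 'ip nhrp redirect': forward the packet normally
    and tell the source spoke to take a shortcut next time."""
    forward_to = hub_nhrp[dst_tunnel]
    redirect = f"redirect to {src_tunnel}: resolve {dst_tunnel} directly"
    return forward_to, redirect

spoke1_cache = {}

def spoke_handle_redirect(dst_tunnel):
    """Spoke behaviour with 'ip nhrp shortcut': resolve the peer and
    install a shortcut entry (simplified: answer taken from the hub's
    table; really the resolution request goes to the peer itself)."""
    spoke1_cache[dst_tunnel] = hub_nhrp[dst_tunnel]

fwd, msg = hub_transit("10.0.0.2", "10.0.0.3")  # first SP1 -> SP2 packet
print(msg)
spoke_handle_redirect("10.0.0.3")
print(spoke1_cache["10.0.0.3"])  # 203.0.113.20 -- later packets go direct
```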

Phase 3 Traffic Flow — NHRP Redirect

  SP1 → SP2 in Phase 3 (first packet):

  1. SP1 routing table: 10.0.0.0/8 via 10.0.0.1 (hub summary) — no specific SP2 route
  2. SP1 sends packet to hub (10.0.0.1 → 198.51.100.1 underlay)
  3. Hub forwards packet to SP2 AND sends NHRP Traffic Indication (redirect) to SP1:
     "You should send directly to 10.0.0.3 — its NBMA is 203.0.113.20"
  4. SP1 sends NHRP Resolution Request directly to SP2 (203.0.113.20)
  5. SP2 sends NHRP Resolution Reply: confirms its own NBMA
  6. SP1 installs shortcut route: 10.2.0.0/24 via 10.0.0.3 (overrides summary)
  7. SP1 builds direct IPsec tunnel to 203.0.113.20
  8. All subsequent SP1 → SP2 traffic takes the direct path

  Key Phase 3 advantage:
  SP1 routing table has 1 remote route (summary) instead of N spoke-specific routes
  Shortcut routes are temporary — installed on demand, removed when idle
  

Complete Phase 3 Spoke Config (All Commands Combined)

! ════════════════════════════════════════════════════════════
! NetsTuts-SP1 — Complete Phase 3 configuration
! ════════════════════════════════════════════════════════════
!
interface Tunnel0
 description DMVPN-Phase3-Spoke
 ip address 10.0.0.2 255.255.255.0
 tunnel mode gre multipoint
 tunnel source GigabitEthernet0/0
 tunnel key 100
 ip nhrp network-id 1
 ip nhrp map 10.0.0.1 198.51.100.1
 ip nhrp map multicast 198.51.100.1
 ip nhrp nhs 10.0.0.1
 ip nhrp shortcut
 ip nhrp holdtime 300
 tunnel protection ipsec profile DMVPN-IPSEC
 ip mtu 1400
 ip tcp adjust-mss 1360
!
interface GigabitEthernet0/0
 ip address 203.0.113.10 255.255.255.0
 no shutdown
!
interface GigabitEthernet0/1
 ip address 10.1.0.1 255.255.255.0
 no shutdown
!
router eigrp 100
 network 10.0.0.0 0.0.0.255
 network 10.1.0.0 0.0.0.255
 no auto-summary
!
  

7. Verification

show dmvpn

NetsTuts-HUB#show dmvpn
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
        N - NATed, L - Local, X - No Socket, T1 - Route Installed, T2 - Nexthop-override
        C - CTS Authenticated, I2 - Temporary

#Ent --> Number of NHRP entries with same NBMA peer
NHS Status: E --> Expecting Replies, R --> Responding, W --> Waiting

Interface: Tunnel0, IPv4 NHRP Details
Type:Hub, NHRP Peers:2,

 # Ent  Peer NBMA Addr  Peer Tunnel Add  State  UpDn Tm  Attrb
 ----- ---------------- ---------------  -----  --------  -----
     1 203.0.113.10          10.0.0.2     UP    00:12:43    D
     1 203.0.113.20          10.0.0.3     UP    00:08:17    D
  
State: UP for both spokes confirms successful NHRP registration and tunnel establishment. The Attrb: D (Dynamic) on both entries confirms they registered dynamically rather than being statically defined on the hub — the key DMVPN scalability feature. The UpDn timer shows how long each spoke has been in UP state. On a spoke, the hub entry will show Attrb: S (Static) because it was configured with a manual ip nhrp map command. Spoke-to-spoke shortcuts (Phase 2/3) appear as additional Dynamic entries.

show dmvpn detail

NetsTuts-HUB#show dmvpn detail
...
 # Ent  Peer NBMA Addr  Peer Tunnel Add  State  UpDn Tm  Attrb
     1 203.0.113.10          10.0.0.2     UP    00:12:43    D
     Tunnel Protect:  DMVPN-IPSEC
     IKE SA status:   Active
     IPsec SA status: Active (ESP)
     NHRP registration expires: 00:04:57
  

show ip nhrp — NHRP Cache on Hub

NetsTuts-HUB#show ip nhrp
10.0.0.2/32 via 10.0.0.2
   Tunnel0 created 00:12:43, expire 00:04:57
   Type: dynamic, Flags: router rib nho
   NBMA address: 203.0.113.10
10.0.0.3/32 via 10.0.0.3
   Tunnel0 created 00:08:17, expire 00:09:03
   Type: dynamic, Flags: router rib nho
   NBMA address: 203.0.113.20
  
The NHRP cache on the hub is the authoritative database for the DMVPN cloud. Each spoke's tunnel IP (/32) is mapped to its NBMA (public) address with a creation timestamp and expiry time. The expire counter counts down — when it reaches zero the entry is removed unless the spoke re-registers before expiry. The nho flag indicates Next-Hop Override is active: an NHRP-installed override of a route's next-hop, the mechanism behind Phase 3 shortcut routes. On a spoke after Phase 3 shortcut installation, run show ip nhrp to see the temporary shortcut entry for the remote spoke.

show ip nhrp — On Spoke After Phase 3 Shortcut

NetsTuts-SP1#show ip nhrp
10.0.0.1/32 via 10.0.0.1
   Tunnel0 created 00:12:43, expire 00:04:57
   Type: static, Flags: used
   NBMA address: 198.51.100.1
10.0.0.3/32 via 10.0.0.3
   Tunnel0 created 00:00:12, expire 00:04:48
   Type: dynamic, Flags: router rib
   NBMA address: 203.0.113.20
  
After a Phase 3 shortcut is established, SP1's NHRP cache shows two entries. The static entry (10.0.0.1) is the manually configured hub mapping. The dynamic entry (10.0.0.3) is the shortcut to SP2 that was installed by NHRP after the hub sent the redirect. The short created time confirms this is fresh. Run show ip route 10.2.0.0 on SP1 to see the specific shortcut route that overrides the hub summary.

show ip route — Verify Shortcut Route (Phase 3)

NetsTuts-SP1#show ip route 10.2.0.0
Routing entry for 10.2.0.0/24
  Known via "nhrp", distance 250, metric 0
  Tag 10.0.0.3, type extern
  Last update from 10.0.0.3 on Tunnel0, 00:00:10 ago
  Routing Descriptor Blocks:
  * 10.0.0.3, from 10.0.0.3, via Tunnel0
      Route metric is 0, traffic share count is 1
  
The routing source "nhrp" and administrative distance of 250 identify this as an NHRP-installed shortcut route — not an EIGRP route. The next-hop 10.0.0.3 (SP2's tunnel IP) allows SP1 to build a direct GRE tunnel to SP2's NBMA address (203.0.113.20). This route overrides the hub summary (10.0.0.0/8 via 10.0.0.1) for the specific SP2 LAN prefix because it is a more specific match, not because of a lower AD — the summary AD (90 for EIGRP internal) is actually lower than the NHRP shortcut AD (250). The longer prefix match takes priority.
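This longest-prefix-match behaviour can be checked with Python's standard ipaddress module (a sketch using the lab prefixes):

```python
# Longest-prefix match: the /24 NHRP shortcut wins over the /8 hub
# summary for destinations inside 10.2.0.0/24, regardless of AD.
import ipaddress

rib = [
    (ipaddress.ip_network("10.0.0.0/8"),  "10.0.0.1", 90),   # EIGRP summary via hub
    (ipaddress.ip_network("10.2.0.0/24"), "10.0.0.3", 250),  # NHRP shortcut via SP2
]

def lookup(dest):
    """Return the next-hop of the most specific matching prefix."""
    matches = [(net, nh) for net, nh, _ad in rib
               if ipaddress.ip_address(dest) in net]
    # Longest prefix (largest prefixlen) wins -- AD is only a tiebreaker
    # between routing sources offering the SAME prefix, never between
    # different prefix lengths.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.2.0.10"))  # 10.0.0.3 -- direct to SP2 via the shortcut
print(lookup("10.3.0.10"))  # 10.0.0.1 -- only the summary matches; via hub
```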

show ip eigrp neighbors — Verify Routing Adjacencies

NetsTuts-HUB#show ip eigrp neighbors
EIGRP-IPv4 Neighbors for AS(100)
H   Address          Interface    Hold  Uptime   SRTT   RTO   Q   Seq
                                  (sec)           (ms)        Cnt  Num
0   10.0.0.2         Tu0           12   00:12:43   15   100   0   42
1   10.0.0.3         Tu0           11   00:08:17   18   108   0   31
  

Verification Command Summary

Command | Shows | Key Field to Check
show dmvpn | NHRP peer state, NBMA addresses, tunnel IPs, up/down time | State: UP for all peers; D (dynamic) for spokes on hub
show dmvpn detail | IPsec SA status per peer, NHRP expiry, IKE status | IPsec SA status: Active; NHRP registration time remaining
show ip nhrp | NHRP cache — tunnel IP → NBMA IP mappings with type and expiry | Hub: dynamic entries for all spokes; Spoke: static hub entry + dynamic shortcuts
show ip nhrp detail | Full NHRP entry detail including flags, requester, and VPN info | Flags: router rib confirms route installed; shortcut confirms Phase 3
show ip eigrp neighbors | EIGRP adjacencies over the DMVPN overlay | All spokes (from hub) or hub (from spoke) showing Hold timer > 0
show ip route | Full routing table — check for NHRP shortcut routes (source: nhrp, AD 250) | Phase 3: specific spoke LAN routes via spoke tunnel IP (not hub); source "nhrp"
show crypto isakmp sa | IKE Phase 1 SA state per peer | State: QM_IDLE (active) for each DMVPN peer; MM_NO_STATE indicates failure
show crypto ipsec sa | IPsec Phase 2 SA — encrypt/decrypt counters, SPI | pkts encaps and pkts decaps incrementing confirms bidirectional encryption

8. Troubleshooting DMVPN

Problem | Symptom | Cause | Fix
No spokes register with hub | show dmvpn on hub shows no peers; show ip nhrp empty | Tunnel key mismatch; tunnel source interface down; NHRP network-id mismatch; underlay routing broken (spoke cannot reach hub public IP) | Verify tunnel key matches on all routers. Confirm show interfaces Tunnel0 shows up/up. Test underlay: ping hub public IP (198.51.100.1) from spoke's physical interface. Check ip nhrp map on spoke points to correct hub public IP
EIGRP adjacency not forming over tunnel | show ip eigrp neighbors shows no neighbours on Tunnel0 | ip nhrp map multicast missing on spokes (EIGRP Hellos not forwarded to hub); split horizon blocking re-advertisement on hub; tunnel interface not in EIGRP network statement | Verify ip nhrp map multicast 198.51.100.1 on each spoke. On hub, confirm ip nhrp map multicast dynamic and no ip split-horizon eigrp 100 on Tunnel0. Confirm Tunnel0's subnet is in the network statement
Phase 2: all traffic still going through hub | Traceroute shows hub as transit hop for spoke-to-spoke traffic; show ip nhrp on spoke shows no dynamic entries for other spokes | no ip next-hop-self eigrp 100 missing on hub — hub changes EIGRP next-hop to itself so spokes route to hub, never triggering NHRP resolution for remote spoke addresses | Add no ip next-hop-self eigrp 100 to hub Tunnel0. Verify with show ip eigrp topology on a spoke — the next-hop for a remote spoke's LAN should be the spoke's tunnel IP (10.0.0.3) not the hub (10.0.0.1)
Phase 3: spoke-to-spoke shortcuts not forming | Traffic still transits hub after Phase 3 config; show ip nhrp on spoke shows no shortcut entries; route source is EIGRP not NHRP | ip nhrp redirect missing on hub Tunnel0; ip nhrp shortcut missing on spoke Tunnel0; summary route too specific (not covering spoke LANs) | Verify ip nhrp redirect on hub and ip nhrp shortcut on all spokes. Confirm summary covers all spoke LAN prefixes. Test by pinging spoke-to-spoke — first ping hits hub, second ping should take shortcut. Run debug ip nhrp to see redirect messages
IPsec not encrypting tunnel traffic | show crypto ipsec sa shows 0 packets; show dmvpn detail shows IPsec SA: Inactive | Pre-shared key mismatch; transform set mismatch; IKE policy mismatch; tunnel protection profile not applied to Tunnel0 | Confirm tunnel protection ipsec profile DMVPN-IPSEC on all Tunnel0 interfaces. Verify IKE policy parameters (encryption, hash, DH group) match exactly. Check pre-shared key is identical. Run debug crypto isakmp to see Phase 1 negotiation failure reason
Large packets failing across DMVPN | Small pings work; large transfers or application sessions fail | ip mtu 1400 or ip tcp adjust-mss 1360 missing on Tunnel0 — GRE (24 bytes) + IPsec (variable, ~50 bytes) overhead reduces usable MTU below 1500 | Add ip mtu 1400 and ip tcp adjust-mss 1360 to all Tunnel0 interfaces. Test with ping [destination] size 1400 df-bit — should succeed. Adjust mtu downward if still failing with IPsec overhead
NHRP registrations expiring before renewal | Spokes intermittently disappear from show dmvpn; connectivity drops then recovers | NHRP hold time too short or registration timer too long — spoke does not re-register before expiry on hub | Ensure ip nhrp registration timeout on spokes is less than ip nhrp holdtime. Standard values: holdtime 300, registration timeout 60. Verify underlay connectivity is stable — registration failures often mask an underlying physical or routing issue

Key Points & Exam Tips

  • DMVPN combines three technologies: mGRE (single multipoint tunnel interface), NHRP (dynamic tunnel IP to public IP mapping), and IPsec (optional encryption). All three must function for a working DMVPN cloud.
  • The hub's public IP must be static — spokes hard-code it in ip nhrp map and ip nhrp nhs. Spoke public IPs can be dynamic; they register via NHRP at startup.
  • ip nhrp map multicast dynamic on the hub is essential — it dynamically adds registered spokes to the multicast replication list so EIGRP Hello packets reach all spokes. Without it, no routing adjacencies form.
  • The tunnel key must match on all routers in the same DMVPN cloud. A mismatch silently drops all GRE packets — the tunnel appears up but passes no traffic.
  • Phase 1 — all traffic through hub; no routing changes required beyond standard EIGRP.
  • Phase 2 — direct spoke-to-spoke tunnels on demand. Requires two hub EIGRP commands: no ip split-horizon eigrp [AS] (so hub re-advertises spoke routes back out Tunnel0) and no ip next-hop-self eigrp [AS] (so spoke routes retain the originating spoke's tunnel IP as next-hop, triggering NHRP resolution).
  • Phase 3 — direct tunnels triggered by hub redirect; spoke routing tables only need a summary. Requires ip nhrp redirect on the hub Tunnel0 and ip nhrp shortcut on all spoke Tunnel0 interfaces. The hub advertises a summary; NHRP shortcut routes (AD 250, source "nhrp") override the summary for active paths.
  • NHRP shortcut routes have an administrative distance of 250 — higher than EIGRP (90/170) or OSPF (110). They override the hub summary through longest prefix match, not lower AD.
  • IPsec uses mode transport (not tunnel) on DMVPN because GRE already provides outer encapsulation. The wildcard pre-shared key (address 0.0.0.0 0.0.0.0) allows any DMVPN peer to authenticate — required for dynamic spokes.
  • MTU on DMVPN tunnels: GRE adds 24 bytes; IPsec adds approximately 50–70 bytes. Set ip mtu 1400 and ip tcp adjust-mss 1360 on Tunnel0 to prevent fragmentation across all DMVPN phases.
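The MTU and MSS figures in this article follow from simple arithmetic (a sketch using the approximate overheads quoted above):

```python
# Why 'ip mtu 1400' and 'ip tcp adjust-mss 1360': reserve headroom for
# GRE + IPsec overhead, then derive the TCP MSS from the tunnel IP MTU.
# Overhead figures are the approximations used in this article.
PHYSICAL_MTU = 1500
GRE_OVERHEAD = 24            # outer IP + GRE header
IPSEC_OVERHEAD_MAX = 70      # ESP transport mode, upper estimate
IP_HDR, TCP_HDR = 20, 20

headroom = PHYSICAL_MTU - GRE_OVERHEAD - IPSEC_OVERHEAD_MAX
print(headroom)              # 1406 -> 1400 is a safe round figure

tunnel_ip_mtu = 1400
mss = tunnel_ip_mtu - IP_HDR - TCP_HDR
print(mss)                   # 1360 -- matches 'ip tcp adjust-mss 1360'
```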
Next Steps: For the IPsec components underlying DMVPN encryption see IPsec Site-to-Site VPN and IPsec Basics. For the EIGRP routing protocol running over the overlay see EIGRP Configuration. For the GRE encapsulation concepts that DMVPN extends see GRE Tunnel Configuration. For route summarisation at the hub see Route Summarisation & Aggregation. For QoS over the DMVPN overlay see LLQ & CBWFQ Configuration. For SD-WAN — the modern evolution of DMVPN-style overlay networking — see Cisco SD-WAN Overview. For end-to-end troubleshooting methodology see End-to-End Troubleshooting.

TEST WHAT YOU LEARNED

1. What is the role of ip nhrp map multicast dynamic on the DMVPN hub, and what breaks if it is missing?

mGRE interfaces have no fixed tunnel destination — by design. When EIGRP sends a Hello to the multicast address 224.0.0.10, IOS needs to know which physical IP addresses to replicate that packet to across the GRE overlay. On a standard point-to-point GRE tunnel the answer is obvious — the single configured destination. On an mGRE interface, there is no static destination list. ip nhrp map multicast dynamic solves this by telling IOS: "maintain a dynamic multicast replication list; add each spoke's NBMA address to it when the spoke registers via NHRP." The result is that as soon as a spoke registers, the hub can forward EIGRP Hellos to it, the adjacency forms, and routes are exchanged. Without this command, spoke NHRP registrations succeed (the hub has NHRP entries) but the EIGRP adjacency never forms because Hellos cannot reach the spokes.

2. In DMVPN Phase 2, why is no ip next-hop-self eigrp 100 on the hub's Tunnel0 interface a critical requirement, and what is the symptom if it is missing?

Correct answer is D. EIGRP's default behaviour on any interface is to set itself as the next-hop for routes it re-advertises. This is called "next-hop-self" and it is correct behaviour for most topologies — the advertising router is the next-hop to the destination from the recipient's perspective. In DMVPN Phase 2, this default breaks the design. When the hub re-advertises Spoke1's 10.1.0.0/24 route to Spoke2 with next-hop 10.0.0.1 (hub), Spoke2 installs the route with the hub as the next-hop. When Spoke2 sends traffic to 10.1.0.0/24, it resolves the next-hop (10.0.0.1) to the hub's NBMA address (198.51.100.1) and sends the packet to the hub — which then forwards it to Spoke1. NHRP never needs to resolve Spoke1's NBMA because the packet never has Spoke1's tunnel IP (10.0.0.2) as a destination. no ip next-hop-self eigrp 100 preserves 10.0.0.2 as the next-hop, causing Spoke2 to trigger NHRP resolution for Spoke1's address and build a direct tunnel.
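A hub-side sketch of the Phase 2 fix (AS number 100 as used throughout this section):

```
interface Tunnel0
 ! Phase 2: preserve the originating spoke's tunnel IP as the
 ! next-hop when re-advertising spoke routes to other spokes,
 ! so recipients trigger NHRP resolution for that spoke
 no ip next-hop-self eigrp 100
```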

3. What is the fundamental difference between how Phase 2 and Phase 3 trigger spoke-to-spoke shortcuts?

Correct answer is A. This is the most important conceptual difference between Phase 2 and Phase 3. In Phase 2, the routing table on each spoke contains specific /24 routes for each remote spoke LAN with the originating spoke's tunnel IP as the next-hop (preserved by no ip next-hop-self). When Spoke2 needs to reach Spoke1's 10.1.0.0/24, it looks up 10.0.0.2 as the next-hop, has no NHRP cache entry for 10.0.0.2, and immediately sends an NHRP resolution request to the hub — the hub is the passive responder. In Phase 3, Spoke2 only has the summary route (e.g. 10.0.0.0/8) pointing to the hub. The first packet goes to the hub. The hub actively inspects the packet, sees that the source (Spoke2) and destination (Spoke1's LAN) are both DMVPN participants, and sends an NHRP Traffic Indication message to Spoke2: "I redirected this packet but you should go directly to 10.0.0.2 next time." The hub is the active initiator of the shortcut process in Phase 3, not a passive responder.
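The division of labour described above maps to one interface command per role (sketch):

```
! Hub Tunnel0 - actively send an NHRP Traffic Indication whenever
! a packet hairpins in and out of the same mGRE interface
interface Tunnel0
 ip nhrp redirect

! Spoke Tunnel0 - act on redirects by resolving the remote spoke
! and installing a shortcut
interface Tunnel0
 ip nhrp shortcut
```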

4. A DMVPN Phase 3 shortcut route appears in a spoke's routing table with source "nhrp" and administrative distance 250. The hub is advertising the same destination via EIGRP with AD 90. Why does the NHRP shortcut take precedence over the EIGRP route?

Correct answer is C. Administrative distance is only used to choose between routes of equal prefix length from different sources. Longest prefix match is applied first and always wins regardless of AD. In Phase 3, the hub advertises a summary such as 10.0.0.0/8 via EIGRP (AD 90). The NHRP shortcut is installed as a specific prefix — 10.2.0.0/24 (the exact spoke LAN) — with AD 250. When the spoke has a packet destined for 10.2.0.5, the routing lookup finds two matches: the /8 summary (AD 90) and the /24 shortcut (AD 250). Longest prefix match selects /24 unconditionally — it is 16 bits more specific than /8. The packet is forwarded via the NHRP shortcut (directly to Spoke2) despite the EIGRP route having a much lower AD. AD comparison would only apply if both routes had the same prefix length — e.g. two /24 routes from different sources — in which case EIGRP (90) would win over NHRP (250).

5. What is the significance of the tunnel key in DMVPN configuration, and what is the symptom of a mismatch between the hub and a spoke?

Correct answer is D. The GRE tunnel key (RFC 2784) is a 32-bit value carried in the GRE header (when the Key bit is set). Its original purpose was to differentiate multiple logical GRE tunnels between the same pair of endpoints. In DMVPN, the tunnel key serves as a cloud identifier — all routers in the same DMVPN cloud must use the same key. When GRE packets arrive at a router with tunnel key 100 configured, IOS checks the incoming GRE header's key field. A packet with key 200 (sent by a misconfigured spoke) does not match key 100 and is discarded without any notification. The discarding is silent — no ICMP error, no log message by default. This makes tunnel key mismatches particularly difficult to diagnose: the physical connectivity works, IPsec may even establish (IKE negotiates before GRE keys are checked), but no NHRP traffic gets through. The diagnostic is to run debug tunnel or check show interfaces Tunnel0 for input drops.
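The configuration itself is a single interface command; the operational point is that the value must match cloud-wide (sketch, key value illustrative):

```
interface Tunnel0
 ! Must be identical on every router in this DMVPN cloud; a
 ! mismatched key causes silent GRE input drops on the receiver
 tunnel key 100
```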

6. Why is IPsec configured with mode transport rather than mode tunnel on DMVPN tunnel interfaces?

Correct answer is B. IPsec has two modes. Tunnel mode wraps the entire original IP packet in a new IP header (new source, new destination) and then encrypts everything. Transport mode encrypts only the payload of the IP packet (the data after the IP header), leaving the IP header visible. In DMVPN, GRE provides the encapsulation: the original packet (from 10.1.0.10 to 10.2.0.10) is wrapped in a GRE header whose outer IP header has source 203.0.113.10 and destination 203.0.113.20 — this outer header is what routes the packet across the internet underlay. IPsec transport mode then encrypts the GRE payload (the GRE header plus the inner IP packet) without adding another IP header. If tunnel mode were used, IOS would stack a fourth IP layer — original IP, GRE header, GRE outer IP, plus a redundant IPsec outer IP — wasteful duplication. The per-packet overhead in transport mode is approximately 50–70 bytes (ESP header, IV, auth tag) versus 70–90 bytes for tunnel mode. For DMVPN, with its already significant encapsulation overhead, transport mode is the correct and standard choice.
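A transform-set sketch showing where the mode is selected (transform names and algorithms are illustrative):

```
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha-hmac
 ! Encrypt the GRE payload only; the GRE outer IP header already
 ! routes the packet, so no second outer IP header is added
 mode transport
```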

7. show dmvpn on the hub shows both spokes in UP state with dynamic entries. A traceroute from Spoke1's LAN to Spoke2's LAN shows the hub as a transit hop after Phase 2 configuration. What should be checked first?

Correct answer is A. The symptom precisely describes the missing no ip next-hop-self eigrp 100 condition. NHRP registration is working (both spokes show UP in show dmvpn) — the issue is that shortcuts are not being built. By far the most common reason a Phase 2 shortcut fails to build despite successful NHRP registration is that NHRP resolution for the remote spoke's tunnel IP is never triggered. NHRP resolution is triggered when a spoke needs to forward a packet to a next-hop address that is not in the NHRP cache. If the EIGRP next-hop for Spoke2's LAN is 10.0.0.1 (hub) — which is already in the NHRP cache as a static entry — no resolution request is ever sent for 10.0.0.3 (Spoke2). show ip eigrp topology is the definitive diagnostic: if the via address for a remote spoke's LAN prefix shows the hub's tunnel IP rather than the originating spoke's tunnel IP, no ip next-hop-self is missing. ip nhrp shortcut is a Phase 3 command, not Phase 2.

8. Why must only the hub's public IP be static in DMVPN, while spoke public IPs can be dynamic — and what NHRP mechanism makes dynamic spoke IPs work?

Correct answer is C. DMVPN's asymmetric IP requirement flows directly from the client-server nature of NHRP. The hub is the NHS (Next Hop Server) — the authoritative database for tunnel-IP-to-NBMA-IP mappings. Spokes are clients. Spokes must always initiate contact with the NHS to register; they cannot register if they do not know the server's address. This is why the hub must have a static, known public IP — it is the bootstrap address every spoke hard-codes in ip nhrp nhs and ip nhrp map. When a spoke comes online (even with a brand-new DHCP-assigned public IP from its ISP), it sends an NHRP Registration Request to the hub's static IP declaring "I am tunnel IP 10.0.0.2 and my current NBMA address is 203.0.113.10." The hub updates its cache. When any other spoke queries for 10.0.0.2, the hub returns the freshly registered 203.0.113.10 — the most recent registration always wins. This elegant mechanism means spokes can have entirely dynamic public IPs with zero configuration change required anywhere in the DMVPN network.
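A spoke-side sketch of the bootstrap configuration described above (addresses match the examples in this section; the only hard-coded NBMA address is the hub's static public IP):

```
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp network-id 1
 ! Static mapping and NHS pointer both reference the hub's fixed
 ! public IP - the one address in the cloud that must never change
 ip nhrp map 10.0.0.1 198.51.100.1
 ip nhrp map multicast 198.51.100.1
 ip nhrp nhs 10.0.0.1
```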

9. What is the purpose of no ip split-horizon eigrp 100 on the hub's Tunnel0 interface, and what is the effect if it is not configured?

Correct answer is D. Split horizon is a loop-prevention mechanism: if a router learned a route via interface X, it will not advertise that route back out interface X. This works well on point-to-point links, where the only neighbour on the interface is the router you learned the route from. On DMVPN, all spokes connect to the hub via the same mGRE Tunnel0 interface. When the hub learns Spoke1's 10.1.0.0/24 route from Spoke1 (via Tunnel0), split horizon prevents it from re-advertising that route out Tunnel0 — the only interface through which Spoke2 is reachable. The result is that Spoke2 never learns Spoke1's LAN prefix and connectivity fails completely. Disabling split horizon on Tunnel0 allows the hub to re-advertise routes between spokes. Note that NHRP multicast replication and split horizon are separate concerns: split horizon affects EIGRP Update packets (unicast), while the multicast mapping affects EIGRP Hello packets. Both must be correctly configured for DMVPN EIGRP to work fully.
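The fix is a single interface command on the hub (sketch, AS 100 as used throughout this section):

```
interface Tunnel0
 ! Allow routes learned from one spoke via Tunnel0 to be
 ! re-advertised out the same mGRE interface to the other spokes
 no ip split-horizon eigrp 100
```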

10. An engineer migrates from Phase 2 to Phase 3 by adding ip nhrp redirect on the hub and ip nhrp shortcut on spokes, and changes the hub to advertise a summary. After migration, spoke-to-spoke traffic uses direct tunnels for the first packet but then reverts to the hub for the second. What is the most likely cause?

Correct answer is C. The question contains a deliberately inverted symptom description — a common exam trick. The correct Phase 3 behaviour is: first packet → hub (summary route matches, goes to hub, hub sends NHRP redirect) → shortcut built → second and subsequent packets → direct to spoke. If the description were accurate and shortcuts genuinely reverted after the first successful direct packet, the most probable cause is ip nhrp shortcut not being applied to the spoke Tunnel0 — the spoke receives the NHRP resolution reply but without ip nhrp shortcut it does not install the shortcut route in the RIB. The NHRP cache entry exists (visible in show ip nhrp) but the corresponding route is not created, so routing continues to use the summary pointing to the hub. The diagnostic is to check show ip route [spoke-lan] — if it shows the hub summary (/8 or similar) rather than an NHRP-specific /24 route, the shortcut RIB entry is not being installed.