Load Balancing Across Links – EtherChannel, ECMP & Beyond

1. What Is Link-Level Load Balancing?

Load balancing in networking distributes traffic across multiple physical or logical links so that no single link becomes a bottleneck, and so that the network continues to function if one link fails. It operates at two distinct layers:

  • Layer 2 (EtherChannel / Link Aggregation): Multiple physical switch ports are bundled into a single logical Port-Channel interface. STP sees one link — all physical links carry traffic simultaneously.
  • Layer 3 (ECMP — Equal-Cost Multipath): A routing protocol installs multiple equal-cost next-hop paths for the same destination prefix. Traffic is distributed across multiple router interfaces.
  Layer 2 EtherChannel:                Layer 3 ECMP:
  Switch A ══════════ Switch B         Router A ──── Router B ──┐
           ════════                                              ▼ 10.0.0.0/24
           ════════                    Router A ──── Router C ──┘
  3 × 1 Gbps = 3 Gbps logical          Two equal-cost OSPF paths
  STP sees ONE 3 Gbps link             Both next-hops in routing table
            

Related pages: EtherChannel Configuration | OSPF Configuration | EIGRP Configuration | Spanning Tree Protocol | STP Overview | VLANs | show ip route | show interfaces

2. Per-Packet vs Per-Flow Load Balancing — The Core Choice

The fundamental design decision in any load balancing implementation is whether individual packets or entire flows are the unit of distribution.

  • Per-Packet — each packet is sent out a different link in round-robin sequence (packet 1 → link 1, packet 2 → link 2, packet 3 → link 1, …). Out-of-order delivery is likely because packets traverse paths with different latencies: TCP retransmits on reordering, VoIP jitters, video stutters. High theoretical utilisation but poor application performance. Rarely used in production; seen in legacy CEF configurations and some WAN load balancers.
  • Per-Flow — all packets belonging to the same flow (identified by a hash of header fields) use the same link for the duration of the flow. Order is guaranteed within each flow while different flows use different links, so TCP, VoIP, and video all work correctly. An individual flow is limited to one link's capacity — a large single flow cannot span multiple links. Default in EtherChannel (LACP/PAgP) and ECMP routing.
  • Per-Destination — all traffic to a specific destination IP uses the same link, regardless of source. Order is preserved per destination, but severe imbalance results when many sources all talk to one server (e.g., a popular web server). Seen in older CEF implementations and some simple load balancers.
Why per-flow is standard: TCP's congestion control and reliability depend on packets arriving in order. Reordering causes the receiving TCP stack to buffer packets, send duplicate ACKs, and eventually trigger unnecessary retransmissions — degrading throughput significantly. Per-flow hashing keeps packets ordered while still distributing load across all available links at the flow granularity.
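The reordering effect can be seen in a toy Python simulation (the 0.5 ms send interval and the 1 ms / 3 ms link delays are illustrative assumptions, not measurements from real hardware):

```python
# Two links with different one-way delays (seconds) — illustrative values.
link_delay = [0.001, 0.003]

def arrival_order(assign_link):
    """Send 8 packets 0.5 ms apart; return packet IDs sorted by arrival time."""
    sends = [(pkt * 0.0005 + link_delay[assign_link(pkt)], pkt) for pkt in range(8)]
    return [pkt for _, pkt in sorted(sends)]

per_packet = arrival_order(lambda pkt: pkt % 2)   # round-robin across both links
per_flow   = arrival_order(lambda pkt: 0)         # whole flow pinned to one link

print("per-packet arrival order:", per_packet)    # [0, 2, 4, 1, 6, 3, 5, 7] — reordered
print("per-flow arrival order:  ", per_flow)      # [0, 1, 2, 3, 4, 5, 6, 7] — in order
```

The per-packet gaps in the sequence are exactly what makes a TCP receiver emit duplicate ACKs; per-flow pinning trades a little balance for guaranteed ordering.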

3. EtherChannel and Link Aggregation Overview

EtherChannel (Cisco's implementation of link aggregation) bundles 2–8 physical Ethernet ports into a single logical Port-Channel interface. From the perspective of Spanning Tree Protocol, the higher-layer protocols, and the MAC address table, only one logical link exists — but all physical member links carry traffic simultaneously.

  Without EtherChannel:                With EtherChannel:
  Switch A ──── Switch B               Switch A ════ Switch B
           ──── (blocked by STP)                ════  (Port-Channel1)
  Only 1 Gbps active,                           ════  (all active)
  1 link wasted by STP                  3 Gbps bandwidth,
                                        3-link redundancy

EtherChannel Negotiation Protocols

  • LACP (Link Aggregation Control Protocol) — IEEE 802.3ad / 802.1AX open standard. Modes: active (sends LACP PDUs) and passive (responds only). Multi-vendor compatible; supports up to 16 member ports (8 active, 8 standby); preferred for all new deployments.
  • PAgP (Port Aggregation Protocol) — Cisco proprietary. Modes: desirable (sends PAgP PDUs) and auto (responds only). Cisco-only, supported on older IOS versions, and being phased out in favour of LACP.
  • Static / Manual (mode on) — no negotiation protocol; on forces the EtherChannel unconditionally. Both sides must be configured manually, so a misconfiguration goes undetected. Useful where negotiation overhead is not desired or when connecting to devices that don't support LACP or PAgP.
LACP active/passive rule: At least one side must be active for LACP to negotiate. Active–Active works. Active–Passive works. Passive–Passive does NOT form an EtherChannel — both sides are waiting for the other to initiate. The equivalent for PAgP: Desirable–Desirable or Desirable–Auto work; Auto–Auto does not.
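The negotiation rule above reduces to "at least one side must initiate"; a minimal Python sketch of just that rule (static mode on is omitted, since it never negotiates):

```python
def channel_forms(protocol, side_a, side_b):
    """Return True if a negotiated EtherChannel forms: at least one side
    must be the initiating mode that actively sends negotiation PDUs."""
    initiator = {"LACP": "active", "PAgP": "desirable"}[protocol]
    return side_a == initiator or side_b == initiator

assert channel_forms("LACP", "active", "active")        # forms
assert channel_forms("LACP", "active", "passive")       # forms
assert not channel_forms("LACP", "passive", "passive")  # both wait forever
assert channel_forms("PAgP", "desirable", "auto")       # forms
assert not channel_forms("PAgP", "auto", "auto")        # both wait forever
```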

4. EtherChannel Hashing — How Traffic Is Distributed

EtherChannel uses a hash algorithm to assign each frame to one of the member links. The switch computes a hash of selected header fields and uses the result to select the outgoing physical port. The same frame fields always produce the same hash result — guaranteeing that all packets in the same flow use the same physical link.

Cisco IOS Hash Method Options

  • src-mac — hashes the source MAC address. Suits environments with many different source devices; all traffic from the same host always uses one link.
  • dst-mac — hashes the destination MAC address. Suits server farms with many server MACs; all traffic to the same destination always uses one link.
  • src-dst-mac — XOR of source and destination MAC. Suits Layer 2 environments with many source/destination pairs; distribution is limited if few unique pairs exist.
  • src-ip — hashes the source IP address. Suits environments with many different source clients; all traffic from one IP uses the same link, causing imbalance with few sources.
  • dst-ip — hashes the destination IP address. Suits environments with many destination servers; all traffic to one IP uses the same link.
  • src-dst-ip — XOR of source and destination IP. Best general-purpose choice for most Layer 3 environments; note that a symmetric flow (src/dst swapped) hashes to the same link.
  • src-dst-ip-l4port (or src-dst-mixed-ip-port) — source IP + destination IP + L4 source and destination ports. Most granular distribution, suited to high-traffic data-centre uplinks; not available on all platforms and requires L4 awareness.
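The XOR-based selection can be sketched in Python (a toy model — real switch ASICs use platform-specific hash functions, but the symmetry property of src-dst-ip XOR carries over):

```python
import ipaddress

def src_dst_ip_hash(src, dst, n_links):
    """Toy src-dst-ip method: XOR the two IPv4 addresses and use the
    low-order bits of the result to select a member link."""
    x = int(ipaddress.ip_address(src)) ^ int(ipaddress.ip_address(dst))
    return x % n_links

# XOR is commutative, so both directions of a conversation pick the
# same member link — the "symmetric flows" behaviour noted above.
a_to_b = src_dst_ip_hash("10.0.0.5", "192.168.1.10", 4)
b_to_a = src_dst_ip_hash("192.168.1.10", "10.0.0.5", 4)
assert a_to_b == b_to_a
```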

Configuring and Verifying the Hash Method

! Configure hash method globally (affects all EtherChannels on switch)
Switch(config)# port-channel load-balance src-dst-ip

! Verify current method
Switch# show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        src-dst-ip

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
  IPv4: Source XOR Destination IP address
  IPv6: Source XOR Destination IPv6 address

! Test which physical port a specific flow would use
Switch# test etherchannel load-balance interface port-channel 1 ip 192.168.1.10 10.0.0.5
Would select Gi0/1 of Po1
Choosing the right hash method: The goal is maximum traffic distribution across all member links. For most enterprise uplinks, src-dst-ip or src-dst-ip-l4port provides the best distribution because it uses both endpoints, creating more unique hash values. Pure source or destination hashing creates lopsided load when many clients talk to a single server (which is common in enterprise networks).

5. Hash Polarisation — The Hidden EtherChannel Problem

Hash polarisation is a serious but frequently overlooked problem that occurs in multi-tier switch networks where EtherChannel exists at multiple layers. When two EtherChannels in the same traffic path use the same hashing algorithm, all traffic from a given source may end up on the same physical links at every tier — instead of being spread across all available bandwidth.

  Without addressing polarisation:

  Access Switch → Distribution Switch → Core Switch
  (src-dst-ip)     (src-dst-ip)        (src-dst-ip)

  PC1 → Server traffic always hashes to same link at every tier.
  Result: 1 of 4 links carries 80% of traffic; others idle.

  Fixing polarisation — use different hash methods at each tier:

  Access Switch   → (src-mac)        — vary by source MAC
  Distribution    → (src-dst-ip)     — vary by IP pair
  Core Switch     → (src-dst-ip-l4port) — vary by port pair

  Result: Traffic spreads differently at each hop → balanced utilisation.
Test for polarisation: Use show interfaces port-channel 1 etherchannel to see per-member load counters. If one member consistently shows 10× more traffic than others, polarisation or hash imbalance is the likely cause. Use test etherchannel load-balance with representative source/destination pairs to diagnose which links are over-selected.
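Polarisation can be reproduced with a toy Python model (md5 stands in for the switch hash; the seed parameter plays the role of a per-device unique ID, which is what varying the hash method per tier effectively provides):

```python
import hashlib
from collections import Counter

def pick(fields, n_links, seed=""):
    """Toy hash-based member-link selection for a flow."""
    digest = hashlib.md5((seed + "|".join(fields)).encode()).digest()
    return digest[0] % n_links

flows = [("10.0.0.%d" % i, "192.168.1.1") for i in range(1, 101)]

# Tier 2 only sees the flows that tier 1 hashed onto its link 0. With the
# identical algorithm, tier 2 maps every one of them back to link 0.
tier1_link0 = [f for f in flows if pick(f, 4) == 0]
same_algo   = Counter(pick(f, 4) for f in tier1_link0)
diff_seed   = Counter(pick(f, 4, seed="switch-B|") for f in tier1_link0)

print("same algorithm at tier 2:", dict(same_algo))  # a single bucket
print("seeded hash at tier 2:  ", dict(diff_seed))   # spread across links again
```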

6. EtherChannel Configuration — Complete Example

! ── Switch A Configuration ─────────────────────────────────────────────
Switch-A(config)# interface range GigabitEthernet0/1 - 4

! Set identical parameters on all member ports BEFORE creating EtherChannel
Switch-A(config-if-range)# switchport mode trunk              ! Must match
Switch-A(config-if-range)# switchport trunk encapsulation dot1q
Switch-A(config-if-range)# channel-group 1 mode active        ! LACP active
Switch-A(config-if-range)# exit

! Configure the Port-Channel interface (logical interface)
Switch-A(config)# interface port-channel 1
Switch-A(config-if)# switchport mode trunk
Switch-A(config-if)# switchport trunk encapsulation dot1q

! Set the hash method
Switch-A(config)# port-channel load-balance src-dst-ip

! ── Switch B Configuration (must match Switch A) ─────────────────────
Switch-B(config)# interface range GigabitEthernet0/1 - 4
Switch-B(config-if-range)# switchport mode trunk
Switch-B(config-if-range)# switchport trunk encapsulation dot1q
Switch-B(config-if-range)# channel-group 1 mode active        ! LACP active

Switch-B(config)# interface port-channel 1
Switch-B(config-if)# switchport mode trunk
Switch-B(config-if)# switchport trunk encapsulation dot1q

Switch-B(config)# port-channel load-balance src-dst-ip
Critical: Physical ports must match before bundling. All member ports in an EtherChannel must have identical configuration: same speed, same duplex, same VLAN membership (or trunk mode), same native VLAN, same spanning-tree settings. Inconsistent port configurations prevent EtherChannel from forming and may cause STP to block the port.

Verification Commands

! Summary of all EtherChannels — most useful first check
Switch# show etherchannel summary
Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        M - not in use, minimum links not met

Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(SU)         LACP      Gi0/1(P) Gi0/2(P) Gi0/3(P) Gi0/4(P)

! (SU) = Layer 2, in use; (P) = bundled and active in port-channel

! Detailed port-channel status including LACP negotiation info
Switch# show etherchannel 1 detail

! Per-member traffic counters — useful for load distribution diagnosis
Switch# show interfaces port-channel 1 etherchannel

! Current hash method
Switch# show etherchannel load-balance

! EtherChannel port-level configuration
Switch# show interfaces Gi0/1 etherchannel

7. ECMP — Equal-Cost Multipath Routing

ECMP (Equal-Cost Multipath) operates at Layer 3: when a routing protocol discovers multiple paths to the same destination with identical metrics, it installs all of them in the routing table simultaneously. The router distributes outbound traffic across all equal-cost next-hops.

  ECMP Routing Table Example (OSPF):

  Router# show ip route 10.0.0.0/24

  O    10.0.0.0/24  [110/2] via 192.168.1.2, GigabitEthernet0/0
                    [110/2] via 192.168.2.2, GigabitEthernet0/1

  Both paths have identical metric [110/2]
  Traffic to 10.0.0.0/24 is distributed across Gi0/0 and Gi0/1
  Each flow consistently uses one path (per-flow hashing by default)

ECMP in Routing Protocols

  • OSPF — installs all equal-cost paths automatically. Default 4 paths, configurable up to 16 or 32 depending on platform (maximum-paths 8 under router ospf 1). Equal-cost only.
  • EIGRP — equal-cost by default, and the only protocol offering unequal-cost load balancing, via the variance N command. Default 4 paths, configurable up to 16 (maximum-paths 8 and variance 2).
  • BGP — disabled by default (1 path); requires explicit configuration with maximum-paths 4 under router bgp. Standard ECMP is equal-cost only; some platforms support unequal-cost sharing via the link-bandwidth (dmzlink-bw) feature.
  • RIP — installs equal hop-count paths. Default 4 paths (maximum-paths 4).
  • Static routes — configure multiple ip route commands for the same prefix; all static routes share the same AD (1), so each is installed, up to the platform's maximum-paths limit.

ECMP Configuration Examples

! OSPF — increase max paths
Router(config)# router ospf 1
Router(config-router)# maximum-paths 8

! EIGRP — equal-cost max paths + unequal-cost via variance
Router(config)# router eigrp 100
Router(config-router)# maximum-paths 8
Router(config-router)# variance 2          ! Allow paths with FD ≤ 2 × best FD

! BGP — enable ECMP (disabled by default)
Router(config)# router bgp 65001
Router(config-router)# maximum-paths 4     ! eBGP ECMP
Router(config-router)# maximum-paths ibgp 4  ! iBGP ECMP

! Static ECMP — two equal-cost static routes
Router(config)# ip route 10.0.0.0 255.0.0.0 192.168.1.2
Router(config)# ip route 10.0.0.0 255.0.0.0 192.168.2.2

! Verify ECMP routes in routing table
Router# show ip route 10.0.0.0
! Look for multiple [metric] via entries for same prefix

8. ECMP Hashing at Layer 3

Like EtherChannel, ECMP routers use a hash algorithm to select which path a given flow uses. On Cisco IOS, ECMP hashing is performed by CEF (Cisco Express Forwarding), which maintains a load-sharing table derived from the hash of packet header fields.

! View CEF load sharing for ECMP paths
Router# show ip cef 10.0.0.0/24
10.0.0.0/24, epoch 0, 2 buckets/paths, flags 0x0
  next hop 192.168.1.2, GigabitEthernet0/0
  next hop 192.168.2.2, GigabitEthernet0/1

! View CEF load-sharing algorithm
Router# show ip cef exact-route 192.168.1.10 10.0.0.5
192.168.1.10 -> 10.0.0.5:
  GigabitEthernet0/0  (next hop: 192.168.1.2)
CEF per-destination vs per-packet: IOS CEF defaults to "per-destination" load sharing for IP ECMP. Despite the name, the hash covers both source and destination addresses, so it behaves as per-flow at source/destination granularity. The default universal algorithm additionally mixes a router-unique ID into the hash so that routers along the same path do not all make identical selections (CEF polarisation):
Router(config)# ip cef load-sharing algorithm universal
! or enable per-packet load sharing on a specific interface (not recommended):
Router(config-if)# ip load-sharing per-packet
Per-packet ECMP is generally not recommended for the same reasons as per-packet EtherChannel — packet reordering damages TCP performance.

9. Limitations and Challenges

Hash Imbalance

Hash algorithms distribute traffic by computing a function of packet header fields. If the traffic profile is dominated by a small number of unique flow identifiers (e.g., a data centre where all traffic is between a few server IPs), many flows hash to the same value and end up on the same link — while other links carry almost nothing.

  Problem scenario — src-dst-ip hash with few unique IPs:

  Replication traffic between a handful of hosts:
  App servers: 10.0.0.1 and 10.0.0.2
  DB servers:  192.168.1.1 and 192.168.1.2

  Only 4 unique (src, dst) pairs → at most 4 hash values, and collisions
  are likely:
  Hash(10.0.0.1, 192.168.1.1) → link 1
  Hash(10.0.0.2, 192.168.1.2) → link 1   (collision)
  Hash(10.0.0.1, 192.168.1.2) → link 2
  Hash(10.0.0.2, 192.168.1.1) → link 2   (collision)
  Links 3 and 4: zero traffic!

  Fix: Add L4 ports to the hash (src-dst-ip-l4port)
  Hash(10.0.0.1, 192.168.1.1, src_port_54321, 443) → link 1
  Hash(10.0.0.1, 192.168.1.1, src_port_54322, 443) → link 3
  Each TCP session hashes independently → all 4 links used
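A quick Python experiment (toy md5-based hash; the IPs and port range are illustrative) shows why adding L4 ports restores distribution:

```python
import hashlib
from collections import Counter

def bucket(fields, n_links=4):
    """Toy flow hash: map header fields onto one of n_links member links."""
    digest = hashlib.md5("|".join(map(str, fields)).encode()).digest()
    return digest[0] % n_links

pairs = [(s, d) for s in ("10.0.0.1", "10.0.0.2")
                for d in ("192.168.1.1", "192.168.1.2")]

# IP-only hashing: at most 4 unique inputs, so at most 4 buckets, often fewer.
ip_only = Counter(bucket((s, d)) for s, d in pairs)

# Add an ephemeral L4 source port per session: hundreds of unique inputs.
with_l4 = Counter(bucket((s, d, sport, 443))
                  for s, d in pairs for sport in range(49152, 49252))

print(len(ip_only), "links used with IP-only hash")
print(len(with_l4), "links used with IP+L4 hash")
```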

Asymmetric Routing

ECMP can cause asymmetric routing — where the forward path and reverse path for a connection travel through different routers. This is normally harmless for stateless packet forwarding, but causes problems with:

  • Stateful firewalls: The firewall only sees one direction of the TCP session, cannot track state, and drops or resets the connection.
  • NAT: Translation state exists on one device; reverse traffic hits a different device with no NAT table entry.
  • Troubleshooting: Packet captures on one router miss half the conversation.
  Asymmetric routing with ECMP:

  Client → Router A → Server   (forward path uses R-A)
  Server → Router B → Client   (return path uses R-B)

  If R-A has a stateful firewall: only sees SYN, not SYN-ACK → drops connection
  Fix: Use symmetric hashing or stateful firewall clustering

Flow Stickiness / Elephant Flows

A single high-bandwidth "elephant flow" (e.g., a bulk backup or VM migration) hashes to one link and saturates it while other links remain underutilised. Per-flow hashing cannot split a single flow across multiple links — that would cause reordering. Solutions include:

  • Use more granular hashing fields (add L4 ports) to separate sub-flows
  • QoS to limit elephant flows and protect latency-sensitive traffic
  • Some advanced data-centre fabrics use flowlet switching — detecting bursts within a flow and occasionally switching them to a different path when there is a gap in the flow
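Flowlet switching can be sketched in a few lines of Python (the 50 ms gap threshold and the class name are illustrative assumptions, not values from any specific fabric):

```python
import hashlib

FLOWLET_GAP = 0.050  # seconds of idle time after which a flow may switch paths

class FlowletBalancer:
    """Toy flowlet switcher: a flow stays pinned to its path, but after an
    idle gap longer than FLOWLET_GAP it may be re-hashed onto a new path —
    in-flight packets have drained, so reordering cannot occur."""
    def __init__(self, n_paths):
        self.n_paths = n_paths
        self.state = {}  # flow -> (path, last_seen_time)

    def path_for(self, flow, now):
        path, last = self.state.get(flow, (None, None))
        if path is None or now - last > FLOWLET_GAP:
            digest = hashlib.md5(f"{flow}|{now}".encode()).digest()
            path = digest[0] % self.n_paths
        self.state[flow] = (path, now)
        return path

lb = FlowletBalancer(4)
flow = ("10.0.0.5", "192.168.1.10", 54321, 443)
p1 = lb.path_for(flow, now=0.000)
p2 = lb.path_for(flow, now=0.010)   # within the gap: stays on the same path
p3 = lb.path_for(flow, now=0.200)   # after an idle gap: may move to a new path
assert p1 == p2
```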

10. EtherChannel vs ECMP — Layer 2 vs Layer 3 Load Balancing

  • OSI layer — EtherChannel: Layer 2 (Data Link). ECMP: Layer 3 (Network).
  • Logical representation — EtherChannel: one Port-Channel interface; STP sees a single link. ECMP: multiple next-hop entries in the routing table for one prefix.
  • Protocols — EtherChannel: LACP (IEEE 802.3ad), PAgP (Cisco), static (on). ECMP: OSPF, EIGRP, BGP, RIP, static routes.
  • Hash basis — EtherChannel: configured globally (src-mac, dst-ip, src-dst-ip, L4 ports, etc.). ECMP: CEF algorithm, typically src/dst IP; configurable per platform.
  • STP interaction — EtherChannel: all links active; STP treats the Port-Channel as a single link, so no member is blocked. ECMP: not relevant; operates at Layer 3 above STP.
  • Failure behaviour — EtherChannel: remaining member ports continue carrying traffic; the Port-Channel stays up. ECMP: the failed path is removed from the routing table; other paths continue.
  • Maximum links — EtherChannel: up to 8 active with LACP, up to 16 configured (8 standby). ECMP: typically 4–16 paths, protocol/platform dependent.
  • Typical use case — EtherChannel: switch-to-switch uplinks, server NIC bonding, campus access/distribution. ECMP: router WAN uplinks, data-centre fabric, ISP peering.

11. Troubleshooting Load Balancing

  • EtherChannel not forming; ports show "I" (stand-alone) — likely mismatched negotiation modes (passive–passive / auto–auto), a protocol mismatch (LACP one side, PAgP the other), or mismatched port configuration. Check flags with show etherchannel summary; ensure one LACP side is active and that speed, duplex, and VLAN config match across all members.
  • One EtherChannel member carries all traffic while others sit near-idle — hash imbalance or polarisation from too few unique hash inputs. Compare per-member counters with show interfaces port-channel 1 etherchannel, test representative flows with test etherchannel load-balance, and switch to src-dst-ip-l4port hashing.
  • EtherChannel formed but some members carry no traffic — hash polarisation from an upstream switch using the same algorithm. Use different hash methods at different tiers; check the upstream setting with show etherchannel load-balance.
  • ECMP routes present but traffic only uses one path — too little flow diversity for the CEF hash. Verify multiple paths with show ip cef <prefix>, test path selection with show ip cef exact-route src dst, and enable ip cef load-sharing algorithm universal for better per-flow distribution.
  • Stateful firewall dropping connections with ECMP enabled — asymmetric routing: forward and return paths traverse different devices. Verify with packet captures on both ECMP paths; use symmetric hashing or firewall clustering that shares state across instances.
  • Port-channel interface shows "D" (down) despite connected members — port config mismatch, failed LACP negotiation, or min-links not met. Run show etherchannel 1 detail and look for "Incompatible" or "Not Bundled" messages; check the port-channel min-links setting.

Key Command Reference

! ── EtherChannel Commands ──────────────────────────────────────────────
show etherchannel summary              ! Overview — all channels, protocol, port status
show etherchannel load-balance         ! Current hashing method
show etherchannel 1 detail             ! Detailed EtherChannel 1 status and port info
show interfaces port-channel 1         ! Port-Channel interface counters
show interfaces port-channel 1 etherchannel  ! Per-member traffic counters
test etherchannel load-balance interface port-channel 1 ip <src> <dst>

! ── ECMP / Routing Commands ─────────────────────────────────────────────
show ip route                          ! Full routing table — check for multiple paths
show ip route 10.0.0.0                 ! Specific prefix — shows all ECMP next-hops
show ip cef 10.0.0.0/24               ! CEF forwarding entry — load-sharing buckets
show ip cef exact-route <src> <dst>  ! Which path a specific flow uses

12. Common Misconceptions

  • "EtherChannel doubles (or quadruples) bandwidth for all flows."
    EtherChannel increases aggregate bandwidth — multiple flows together can use the combined capacity. But any single individual flow is limited to one physical link's bandwidth. A 4-link 1G EtherChannel has 4 Gbps aggregate, but a single TCP session between two specific IPs still maxes out at 1 Gbps because per-flow hashing always puts it on the same link.
  • "LACP timers must match on both sides."
    LACP has fast (1-second) and slow (30-second) PDU rates, but they do not need to match; the default is slow (30 s) on both sides. What does matter is that at least one side is in active mode — with both sides passive, the channel never forms.
  • "Per-packet load balancing gives better throughput than per-flow."
    Theoretical aggregate throughput is similar for both methods. Per-packet causes TCP reordering which triggers duplicate ACKs and reduces actual goodput. Per-flow has better real-world application performance even though individual links may be less perfectly balanced.
  • "Changing the EtherChannel hash method is hitless."
    Changing port-channel load-balance takes effect immediately and re-maps existing flows onto different physical links, which can briefly disrupt established TCP sessions. Schedule hash method changes during maintenance windows.

13. Key Points & Exam Tips

  • EtherChannel bundles 2–8 physical ports into one logical Port-Channel. STP treats it as a single link — no blocking of member ports.
  • EtherChannel protocols: LACP (IEEE 802.3ad, open standard — preferred), PAgP (Cisco only), Static/On (no negotiation).
  • LACP modes: active (sends LACP PDUs) and passive (responds only). At least one side must be active — passive-passive does NOT form.
  • PAgP modes: desirable (sends PAgP PDUs) and auto (responds only). Auto-Auto does NOT form.
  • Hash method configured with: port-channel load-balance <method>. Verify with: show etherchannel load-balance.
  • Best general-purpose hash: src-dst-ip or src-dst-ip-l4port — uses both endpoints for more unique values.
  • Hash polarisation: Using the same hash method at multiple tiers can concentrate traffic on the same links. Fix by using different methods per tier.
  • ECMP = Equal-Cost Multipath — multiple routing table entries for same prefix with identical metrics. All major routing protocols support it.
  • OSPF ECMP: equal-cost only (maximum-paths). EIGRP: equal AND unequal-cost via variance. BGP: disabled by default, requires maximum-paths.
  • Per-flow hashing (default) keeps packet order — required for TCP, VoIP. Per-packet causes reordering — avoid in production.
  • Single large flows are limited to one link's bandwidth — EtherChannel cannot accelerate individual flows beyond one member link's speed.

Related pages: EtherChannel Configuration | OSPF Configuration | EIGRP Configuration | Spanning Tree Protocol | STP Overview | VLANs | show ip route | show interfaces | Trunk Port Configuration Lab

14. Load Balancing Quiz

1. A network engineer configures a 4-link EtherChannel between two switches using src-dst-ip hashing. A monitoring tool shows that link 1 carries 90% of traffic while links 2, 3, and 4 carry almost nothing. What is the most likely cause?

Correct answer is D. Hash imbalance occurs when the traffic profile has very few unique values for the hashed fields. With src-dst-ip hashing and only a handful of server IPs, many flows produce the same hash result and land on the same physical link. The solution is to switch to a more granular hashing method like src-dst-ip-l4port — by including Layer 4 port numbers (which are unique per TCP/UDP session), the hash produces far more unique values, spreading traffic across all four links. Use show interfaces port-channel 1 etherchannel to confirm per-member counters and diagnose the imbalance.

2. Switch A has LACP set to passive on all four member ports. Switch B also has LACP set to passive on all four member ports. Both switches are connected with four cables. What state will the EtherChannel be in?

Correct answer is B. LACP passive mode means the port waits to receive LACP PDUs before sending any. If both sides are passive, neither sends the first PDU — LACP negotiation never starts and the EtherChannel fails to form. The ports show as "I" (stand-alone) in show etherchannel summary. For LACP to work, at least one side must be in active mode (which initiates PDU exchange). The equivalent PAgP scenario: Auto-Auto also fails to form because neither side sends PAgP PDUs.

3. A network runs OSPF with two equal-cost paths to 172.16.0.0/16. A network engineer needs to verify which physical interface a flow from 10.1.1.10 to 172.16.1.50 would use. Which command provides this information?

Correct answer is C. The show ip cef exact-route <source> <destination> command uses the same CEF hash calculation that the router would use for actual forwarding and tells you exactly which next-hop interface a specific source-destination pair would use. show ip route shows all ECMP paths but not which one a specific flow uses. This command is invaluable for debugging ECMP: if a stateful device like a firewall is in the path, you can confirm whether forward and reverse traffic use the same or different paths.

4. An engineer has a 4-link EtherChannel between an access switch and a distribution switch. The distribution switch has another 4-link EtherChannel to the core. Both EtherChannels use src-dst-ip hashing. Traffic monitoring shows severe imbalance at the core level. What is this problem called and how is it fixed?

Correct answer is A. Hash polarisation occurs when multiple tiers of EtherChannel in the same traffic path use the same hashing algorithm. The access-to-distribution hash maps certain flows to specific links. The distribution-to-core hash applies the same algorithm to the same header fields — so the same flows consistently land on the same core links. The result is that 1–2 core links carry all traffic while others idle. The fix is deliberate hash diversification: use src-dst-ip at one layer and src-dst-ip-l4port (which includes Layer 4 ports for more entropy) at the next layer, ensuring different distribution decisions at each hop.

5. A company uses ECMP with OSPF across two equal-cost WAN links (via Router B and Router C) to reach 10.0.0.0/8. A stateful firewall sits on the path. Users report intermittent connection resets. What is the most likely cause?

Correct answer is C. This is the classic stateful firewall + ECMP incompatibility. When a TCP session starts, the SYN packet may be forwarded via Router B (and the firewall creates a state entry for this connection). The SYN-ACK return packet from the server may be ECMP-routed via Router C — which bypasses the stateful firewall entirely (or reaches a different firewall instance with no state). The firewall has no state for this connection and drops the SYN-ACK. The connection resets. Solutions: (1) deploy the firewall in a cluster that shares state, (2) use symmetric hashing to ensure forward and reverse flows use the same path, or (3) redesign the topology to avoid asymmetric paths through stateful devices.

6. A network administrator configures a 4-link EtherChannel between two distribution switches. Each link is 1 Gbps. A single workstation is running a large file transfer to a server. What is the maximum throughput that workstation can achieve through the EtherChannel?

Correct answer is B. This is one of the most important and frequently misunderstood aspects of EtherChannel. The 4 Gbps aggregate bandwidth is only achievable when distributed across multiple different flows. Per-flow hashing assigns a single flow (identified by src/dst IP and/or ports) to exactly one physical link and keeps it there for the duration of the flow. A single large file transfer between one workstation and one server is a single flow — it hashes to one link and is limited to that link's 1 Gbps. EtherChannel benefits: multiple concurrent flows from different sources/destinations aggregate to 4 Gbps total; any single flow is still 1 Gbps.

7. An engineer configures EIGRP with variance 2 and maximum-paths 4. A route to 192.168.100.0/24 has these paths:
Path 1 (Successor): FD = 100,000
Path 2 (FS): FD = 180,000
Path 3: FD = 230,000 (does not satisfy FC)
Which paths are installed in the routing table for load balancing?

Correct answer is D. EIGRP unequal-cost load balancing via variance has two strict requirements — both must be met: (1) The path must be a Feasible Successor (RD < current FD — the Feasibility Condition). Path 3 explicitly does not satisfy FC, so variance cannot include it regardless of its FD. (2) The path's FD must be ≤ variance × successor's FD: Path 2: 180,000 ≤ 2 × 100,000 = 200,000 ✓ passes. Path 1 and Path 2 are installed. Traffic is distributed proportionally — Path 1 carries more traffic because it has a lower (better) metric.
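The variance arithmetic can be checked mechanically. In this Python sketch the reported distances (rd) are assumed values chosen so that Path 2 satisfies the feasibility condition and Path 3 does not, exactly as the question states:

```python
def eigrp_variance_paths(successor_fd, candidates, variance):
    """candidates: (name, fd, rd) tuples for non-successor paths. A path is
    installed iff it satisfies the feasibility condition (rd < successor FD)
    AND its FD <= variance * successor FD."""
    return [name for name, fd, rd in candidates
            if rd < successor_fd and fd <= variance * successor_fd]

# Quiz values; rd figures are illustrative assumptions.
candidates = [("Path 2", 180_000, 90_000),    # FS, and 180k <= 2 * 100k
              ("Path 3", 230_000, 150_000)]   # fails the feasibility condition

print(eigrp_variance_paths(100_000, candidates, variance=2))  # ['Path 2']
```

The successor (Path 1) is always installed; the function only decides which additional paths variance admits.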

8. Why is per-packet load balancing generally avoided in production networks for TCP traffic, even though it provides more even link utilisation?

Correct answer is A. Per-packet load balancing sends consecutive packets across different physical links. Even if all links have the same nominal bandwidth, they have slightly different queuing depths, processing times, and propagation delays. Packets 1, 3, 5 may arrive before packets 2, 4, 6 even though they were sent later. TCP receivers detect gaps in sequence numbers and send duplicate ACKs. After 3 duplicate ACKs, the sender assumes packet loss, halves its congestion window (AIMD algorithm), and retransmits — dramatically reducing goodput. Modern TCP fast retransmit makes this even more pronounced. Per-flow hashing eliminates this by guaranteeing all packets in a session follow the same physical path.

9. Which command tests which physical EtherChannel member link a specific IP flow would be assigned to, without generating actual traffic?

Correct answer is C. The test etherchannel load-balance interface port-channel <N> ip <src> <dst> command simulates the hash calculation and tells you which physical member port a given flow would be assigned to — without sending any actual traffic. This is invaluable for diagnosing hash imbalance, predicting load distribution before deployment, and verifying that a hash method change would improve distribution. show etherchannel load-balance shows the configured method but not the result for specific flows. show interfaces port-channel 1 etherchannel shows historical traffic counters but not future assignments.

10. A router has OSPF ECMP enabled with maximum-paths 4 and four equal-cost paths to 10.100.0.0/16. All four interfaces show traffic, but one interface consistently carries about 70% of all flows while others share the remaining 30%. The traffic mix is predominantly HTTP/HTTPS from many clients to two specific servers. What would best improve the distribution?

Correct answer is B. With only two destination servers, a pure destination-IP hash produces only two unique values — inevitably concentrating traffic on two of the four paths. Even src+dst IP hashing with many clients but only two server IPs will produce limited unique values. Adding the source port (which is unique per TCP/HTTPS session) to the hash calculation dramatically increases the number of unique hash inputs — each HTTPS session from a client has a unique ephemeral source port (1024–65535), so even traffic to the same server IP hashes to different values per session. The ip cef load-sharing algorithm universal command enables a more entropy-aware algorithm that incorporates these additional fields.
