NetFlow Configuration & Traffic Analysis

Knowing that a link is saturated is useful. Knowing exactly which application, host pair, and protocol is consuming the bandwidth — and in what direction — is what NetFlow delivers. NetFlow is a Cisco IOS feature that monitors every IP packet transiting a router interface and groups them into flow records. A flow is defined by a unique combination of source IP, destination IP, source port, destination port, protocol, ToS (type of service), and input interface — the seven-tuple key. These records are cached on the router and periodically exported to a centralised NetFlow collector, where they are stored, aggregated, and visualised to reveal traffic patterns, top talkers, application distribution, and security anomalies.

NetFlow does not capture packet payloads — it records only the flow metadata (who talked to whom, on what protocol and port, how many bytes and packets, for how long). This makes it highly efficient in storage and privacy-safe compared to full packet capture tools like Wireshark or tcpdump, while still providing the visibility needed for capacity planning and security investigation.

This lab covers both Traditional NetFlow v5/v9 (the classic fixed-format approach) and Flexible NetFlow (FnF) (the modern modular framework introduced in IOS 12.4(20)T). Before starting, review NetFlow Monitoring for a conceptual overview, and use show ip route to understand the routing context in which flows are measured. For ACL-based traffic filtering that complements NetFlow, see Extended ACL Configuration.

1. NetFlow — Core Concepts

The NetFlow Seven-Tuple Flow Key

Every unique combination of these seven fields constitutes one flow. Two packets are part of the same flow only if all seven fields match. A web browsing session from PC1 to a web server produces two flows — one for each direction:

  Flow Key Fields (all seven must match to belong to the same flow):

  1. Source IP address        — 192.168.10.10
  2. Destination IP address   — 203.0.113.50
  3. Source port              — 49512  (ephemeral client port)
  4. Destination port         — 80     (HTTP)
  5. IP protocol              — TCP (6)
  6. Type of Service (ToS)    — 0x00
  7. Input interface          — GigabitEthernet0/1

  ─── This combination = one distinct flow entry in the NetFlow cache ───

  The return direction (server → client) is a separate flow:
  Src: 203.0.113.50:80  →  Dst: 192.168.10.10:49512  (ingress on Gi0/0)
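
The seven-tuple grouping above can be sketched in Python: packets whose key fields all match accumulate into one cache entry, while the return direction hashes to a different key and therefore a second flow. The field names and the tuple structure here are illustrative, not the router's internal data layout:

```python
from collections import defaultdict
from typing import NamedTuple

class FlowKey(NamedTuple):
    # The NetFlow seven-tuple: all fields must match for packets to share a flow
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int   # IP protocol number (6 = TCP, 17 = UDP)
    tos: int        # Type of Service byte
    input_if: str   # ingress interface

def aggregate(packets):
    """Group (key, size) packet samples into flows with byte/packet counters."""
    cache = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for key, size in packets:
        cache[key]["packets"] += 1
        cache[key]["bytes"] += size
    return cache

# Client-to-server and server-to-client produce distinct keys, hence two flows
fwd = FlowKey("192.168.10.10", "203.0.113.50", 49512, 80, 6, 0x00, "Gi0/1")
rev = FlowKey("203.0.113.50", "192.168.10.10", 80, 49512, 6, 0x00, "Gi0/0")
cache = aggregate([(fwd, 500), (fwd, 1456), (rev, 1456)])
print(len(cache))           # → 2
print(cache[fwd]["bytes"])  # → 1956
```

Note that changing any single field of the key (a different ToS marking, a different ingress interface) would create a third cache entry, which is why per-field accuracy matters for flow accounting.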
  

NetFlow Architecture — Three Components

Component Location Function
Flow Cache Router RAM Stores active flow records — updated per packet. Limited size; entries expire after active/inactive timers
Flow Exporter Router Packages expired flow records into UDP datagrams (NetFlow v5/v9/IPFIX) and sends them to the collector
Flow Collector Server (e.g., 192.168.30.60) Receives, stores, and indexes exported flow records. Provides reporting, dashboards, and top-talker analysis

Traditional NetFlow vs. Flexible NetFlow

Feature Traditional NetFlow (v5/v9) Flexible NetFlow (FnF)
Configuration model Simple — interface-level commands only. See Basic Interface Configuration. Modular — flow record + exporter + monitor defined separately
Flow key fields Fixed seven-tuple Customisable — add or remove key and non-key fields
Export format v5 (fixed 48 bytes/record) or v9 (template-based) v9 or IPFIX (RFC 7011) — template-based, extensible
Multiple monitors per interface No — a single main flow cache shared by all interfaces Yes — multiple monitors with different records on the same interface
IPv6 support v5: No. v9: Yes Yes — full IPv4 and IPv6 support
IOS support All IOS versions IOS 12.4(20)T and later; IOS XE 2.x and later
CCNA exam relevance Concepts and show commands Awareness — configuration detail is CCNP level

Flow Expiry — When Records Are Exported

Flow entries do not stay in the cache indefinitely. IOS expires and exports flow records based on three conditions:

Expiry Trigger Default Timer Command to Adjust Description
Active timer 30 minutes ip flow-cache timeout active [min] Long-lived flows (e.g., a video stream) are force-exported every 30 minutes even if still active — prevents stale cache entries
Inactive timer 15 seconds ip flow-cache timeout inactive [sec] A flow with no new packets for 15 seconds is considered finished and exported — catches short-lived flows like DNS queries
Cache full N/A ip flow-cache entries [num] When the cache reaches capacity, the oldest entries are forcibly expired and exported to make room
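
The two timer conditions in the table can be sketched as a single expiry check, assuming times in seconds and the defaults from the table (30 minutes active, 15 seconds inactive); the function and its names are illustrative, not router internals:

```python
def expired(now, first_seen, last_seen, active_timeout=1800, inactive_timeout=15):
    """Return which expiry condition (if any) applies to a cached flow."""
    if now - last_seen >= inactive_timeout:
        return "inactive"   # no new packets for 15 s: flow considered finished
    if now - first_seen >= active_timeout:
        return "active"     # long-lived flow force-exported every 30 min
    return None             # flow stays in the cache

print(expired(now=100, first_seen=10, last_seen=99))      # → None
print(expired(now=100, first_seen=10, last_seen=80))      # → inactive
print(expired(now=2000, first_seen=10, last_seen=1999))   # → active
```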

NetFlow Export Versions

Version Format IPv6 BGP AS Info Notes
v5 Fixed 48 bytes per record No Yes (ASN fields) Most widely supported by legacy collectors. IPv4 only. Default on many older IOS versions
v9 Template-based — variable length Yes — see IPv6 Addressing Yes Flexible — collector receives templates first, then data. Required for IPv6 and MPLS flows
IPFIX RFC 7011 — standardised v9 Yes Yes Industry standard — used by Flexible NetFlow and most modern collectors
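
The fixed v5 layout is what makes the format easy for collectors to parse. The sketch below packs and unpacks the 24-byte v5 header with Python's struct module; the field order follows the published v5 header format, and the zero-filled datagram is synthetic, for illustration only:

```python
import struct

# NetFlow v5 header layout (24 bytes, network byte order)
V5_HEADER = "!HHIIIIBBH"  # version, count, sys_uptime, unix_secs, unix_nsecs,
                          # flow_sequence, engine_type, engine_id, sampling

def parse_v5_header(datagram: bytes):
    fields = struct.unpack(V5_HEADER, datagram[:24])
    version, count = fields[0], fields[1]
    assert version == 5, "not a v5 export"
    # each v5 record is a fixed 48 bytes; a datagram carries at most 30
    assert len(datagram) == 24 + 48 * count
    return {"version": version, "records": count}

# Synthetic datagram: header announcing 2 records, plus 2 zero-filled records
dgram = struct.pack(V5_HEADER, 5, 2, 0, 0, 0, 0, 0, 0, 0) + b"\x00" * 96
print(parse_v5_header(dgram))   # → {'version': 5, 'records': 2}
```

v9 and IPFIX drop this fixed layout: the collector cannot decode data packets until it has received the matching template, which is the trade-off for extensibility.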

2. Lab Topology & Scenario

This lab configures NetFlow on NetsTuts_R1 — the edge router sitting between the internal VLANs and the WAN. Flow monitoring is applied on both internal-facing interfaces (to see which hosts are generating traffic) and the WAN interface (to see what is leaving the network). A NetFlow collector at 192.168.30.60 receives all exported records:

    192.168.10.0/24 — Staff VLAN
    [PC1: .10.10]  [PC2: .10.20]
          |
     Gi0/1 (192.168.10.1)  ← NetFlow ingress + egress
          |
     NetsTuts_R1
     Gi0/0: 203.0.113.2 ──── WAN / Internet
          |                 ← NetFlow ingress + egress
     Gi0/2 (192.168.20.1)
          |
    192.168.20.0/24 — Guest VLAN
    [Laptop1: .20.10]

     Gi0/3 (192.168.30.1)
          |
    192.168.30.0/24 — Server VLAN
    [NetFlow Collector: 192.168.30.60  UDP/2055]
    [Syslog Server:     192.168.30.50]
    [NTP Server:        192.168.30.51]

  Lab Goals:
    Part A — Traditional NetFlow v5
      Step 1 — Enable NetFlow ingress on all interfaces
      Step 2 — Configure flow export to collector (UDP 2055)
      Step 3 — Tune timers and cache size
      Step 4 — Verify with show ip cache flow and show ip flow interface

    Part B — Flexible NetFlow (FnF)
      Step 5 — Define flow record (custom key fields)
      Step 6 — Define flow exporter
      Step 7 — Define flow monitor and bind record + exporter
      Step 8 — Apply flow monitor to interfaces
      Step 9 — Verify with show flow monitor and show flow interface
  
Interface Network NetFlow Applied Direction Purpose
Gi0/0 203.0.113.0/30 (WAN) Yes ingress + egress Measure all traffic entering/leaving the internet edge
Gi0/1 192.168.10.0/24 (Staff) Yes ingress Identify top-talker hosts in Staff VLAN
Gi0/2 192.168.20.0/24 (Guest) Yes ingress Monitor Guest VLAN usage patterns
Gi0/3 192.168.30.0/24 (Servers) No Collector is on this segment — no need to monitor management traffic. See NTP Configuration and Syslog Configuration.

3. Part A — Traditional NetFlow v5 Configuration

Step 1 — Enable NetFlow on Interfaces

Traditional NetFlow is enabled directly on each interface using ip flow ingress and/or ip flow egress. Ingress captures packets arriving on the interface; egress captures packets leaving it. For most deployments, ingress-only on all interfaces is sufficient, because every packet enters the router through exactly one interface and is therefore counted exactly once. See Basic Interface Configuration for interface setup fundamentals and use show ip interface brief to confirm interfaces are up before enabling NetFlow:

NetsTuts_R1>en
NetsTuts_R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.

! ── WAN interface — both directions (full edge visibility) ────────
NetsTuts_R1(config)#interface GigabitEthernet0/0
NetsTuts_R1(config-if)#ip flow ingress
NetsTuts_R1(config-if)#ip flow egress
NetsTuts_R1(config-if)#exit

! ── Staff VLAN — ingress only (packets from hosts) ────────────────
NetsTuts_R1(config)#interface GigabitEthernet0/1
NetsTuts_R1(config-if)#ip flow ingress
NetsTuts_R1(config-if)#exit

! ── Guest VLAN — ingress only ─────────────────────────────────────
NetsTuts_R1(config)#interface GigabitEthernet0/2
NetsTuts_R1(config-if)#ip flow ingress
NetsTuts_R1(config-if)#exit
  
ip flow ingress is the modern IOS command — older IOS versions used ip route-cache flow which is now deprecated. Enabling both ingress and egress on the same interface doubles the cache entries for that traffic — on high-volume WAN interfaces this is intentional to capture both directions. On LAN interfaces, ingress alone captures all host-originated traffic as it enters the router, which is sufficient for top-talker analysis. Use show interfaces to check interface status and statistics.

Step 2 — Configure Flow Export to Collector

! ── Specify the NetFlow collector destination ─────────────────────
NetsTuts_R1(config)#ip flow-export destination 192.168.30.60 2055

! ── Set export version — v5 for maximum collector compatibility ───
NetsTuts_R1(config)#ip flow-export version 5

! ── Pin export source to Loopback0 (stable IP, same as syslog) ───
NetsTuts_R1(config)#ip flow-export source Loopback0
  
ip flow-export destination [IP] [UDP-port] — the collector listens on UDP. Common ports are 2055, 9995, or 9996 depending on the collector software (ntopng, Elastic, SolarWinds, PRTG, etc.). Always verify the collector's configured receive port before setting this. ip flow-export version 5 uses the fixed-format v5 which all major collectors support. Use version 9 if the collector supports it and IPv6 or MPLS flow visibility is needed. Pinning the source to Loopback0 ensures the collector always sees exports from a consistent IP — same rationale as syslog source interface.

Step 3 — Tune Cache Timers and Size

! ── Active timer — export long-lived flows every 60 seconds ──────
! ── Default is 30 min — reduce for near-real-time visibility
NetsTuts_R1(config)#ip flow-cache timeout active 1

! ── Inactive timer — export idle flows after 15 seconds ──────────
NetsTuts_R1(config)#ip flow-cache timeout inactive 15

! ── Cache size — default 4096 entries; increase for high-traffic routers
NetsTuts_R1(config)#ip flow-cache entries 8192

NetsTuts_R1(config)#end
NetsTuts_R1#wr
Building configuration...
[OK]
NetsTuts_R1#
  
The active timer is reduced from 30 minutes to 1 minute here — this means active flows are exported to the collector every 60 seconds instead of every 30 minutes, enabling near-real-time traffic dashboards. The trade-off is higher export UDP traffic volume to the collector. For a forensic investigation or capacity planning where real-time is not needed, the default 30-minute active timer is more efficient. Cache size of 8192 entries supports busier routers — each entry consumes approximately 64 bytes of router RAM (8192 entries ≈ 512 KB).

4. Traditional NetFlow — Verification

show ip flow interface — Confirm NetFlow Applied

NetsTuts_R1#show ip flow interface
GigabitEthernet0/0
  ip flow ingress
  ip flow egress
GigabitEthernet0/1
  ip flow ingress
GigabitEthernet0/2
  ip flow ingress
  
This is the first verification step — confirm that ip flow ingress (and egress where configured) appears on the correct interfaces before checking the cache. If an interface is missing here, no flow data will be collected for traffic on that segment regardless of what the export configuration shows. Cross-check with show interfaces to confirm the interfaces are up.

show ip cache flow — View the Flow Cache

NetsTuts_R1#show ip cache flow
IP packet size distribution (124578 total packets):
   1-32   64   96  128  160  192  224  256  288  320  352  384  416  448  480
   .000 .412 .051 .143 .021 .008 .004 .003 .002 .002 .001 .001 .001 .001 .001

   512  544  576 1024 1536 2048 2560 3072 3584 4096 4608
   .001 .001 .015 .012 .321 .000 .000 .000 .000 .000 .000

IP Flow Switching Cache, 557072 bytes
  8192 entries, 4762 used, 3430 free
  163 active, 4599 aged, 121 long, 4 overflow

Protocol         Total    Flows   Packets Bytes  Packets Active(Sec) Idle(Sec)
--------         Flows     /Sec     /Flow  /Pkt     /Sec     /Flow     /Flow
TCP-WWW          1842      0.8        48   1456     36.6      15.4       2.1
TCP-other         531      0.2        12    512      2.6       8.2       3.4
UDP-DNS           298      0.1         3     84      0.9       0.8      12.1
UDP-NTP            14      0.0        12    84       0.1       5.2      14.3
ICMP               47      0.0         5    76       0.2       1.2       8.4
Total:           2732      1.1        32   963     40.4      12.1       3.8

SrcIf         SrcIPaddress    DstIf         DstIPaddress    Pr SrcP DstP  Pkts
Gi0/1         192.168.10.10   Gi0/0         203.0.113.50    06 C108 0050   312
Gi0/1         192.168.10.20   Gi0/0         203.0.113.80    06 BF4A 01BB   148
Gi0/2         192.168.20.10   Gi0/0         8.8.8.8         11 9F3C 0035    12
Gi0/0         203.0.113.50    Gi0/1         192.168.10.10   06 0050 C108   289
Gi0/1         192.168.10.10   Gi0/0         52.96.10.14     06 C10A 01BB    87
  
The output has three sections. The packet size distribution shows what proportion of traffic falls into each size bucket — the large 1536-byte peak (0.321) indicates significant large-frame traffic (file transfers or video). The protocol summary shows TCP-WWW (HTTP, port 80) dominates at 1842 flows — immediately identifying web browsing as the primary traffic type. The flow table shows individual flows: SrcP and DstP are in hexadecimal — 0050 hex = 80 decimal (HTTP), 01BB hex = 443 decimal (HTTPS), 0035 hex = 53 decimal (DNS). Convert port hex to decimal to identify the application.

show ip cache flow — Reading Port Numbers

Hex Port (show output) Decimal Protocol
0050 80 HTTP
01BB 443 HTTPS
0035 53 DNS
0016 22 SSH
0017 23 Telnet
0015 21 FTP Control
007B 123 NTP — see NTP Configuration
00A1 161 SNMP — see SNMP Overview
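
A small Python helper automates the hex-to-decimal conversion; the PORT_NAMES map here covers only the well-known ports listed in the table above:

```python
# Traditional NetFlow shows ports in hex; convert to decimal to name the app
PORT_NAMES = {80: "HTTP", 443: "HTTPS", 53: "DNS", 22: "SSH",
              23: "Telnet", 21: "FTP Control", 123: "NTP", 161: "SNMP"}

def decode_port(hex_port: str) -> str:
    port = int(hex_port, 16)          # e.g. "0050" -> 80
    return f"{port} ({PORT_NAMES.get(port, 'unknown')})"

print(decode_port("0050"))  # → 80 (HTTP)
print(decode_port("01BB"))  # → 443 (HTTPS)
print(decode_port("C108"))  # → 49416 (unknown), an ephemeral client port
```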

show ip cache verbose flow — Detailed Per-Flow View

NetsTuts_R1#show ip cache verbose flow
...
SrcIf       SrcIPaddress  DstIf       DstIPaddress  Pr TOS Flgs  Pkts
Port Msk AS                Port Msk AS  NextHop       B/Pk  Active

Gi0/1       192.168.10.10  Gi0/0       203.0.113.50  06 00  0000   312
C108 /24  0               0050 /0   0  203.0.113.2   1456   18.4
  
The verbose version adds ToS (quality of service marking), TCP flags, bytes per packet (B/Pk — 1456 here confirms large HTTP responses), active duration in seconds (18.4 seconds), and the next-hop IP. ToS 00 means best-effort (no QoS marking) — if traffic were DSCP-marked for VoIP (EF = 0xB8) or video conferencing, the ToS field would show the marking value, enabling QoS policy verification via NetFlow.

show ip flow export — Export Status

NetsTuts_R1#show ip flow export
Flow export v5 is enabled for main cache
  Export source and destination details :
  VRF ID : Default
    Destination(1) 192.168.30.60 (2055)
    Source(1)       1.1.1.1 (Loopback0)
  Version 5 flow records
  4821 flows exported in 243 udp datagrams
  0 flows failed due to lack of export packet
  0 export packets were sent up to process level
  0 export packets were dropped due to no fib
  0 export packets were dropped due to adjacency issues
  0 export packets were dropped due to fragmentation failures
  0 export packets were dropped due to encapsulation fixup failures
  
This confirms the collector is reachable and exports are succeeding. 4821 flows have been exported in 243 UDP datagrams — each datagram can carry up to 30 flow records (v5 format). Any non-zero drop counters indicate a problem: "no fib" means the router has no route to the collector, "adjacency issues" means an ARP resolution failure for the next-hop toward the collector. If drops appear here, check routing to 192.168.30.60 with show ip route 192.168.30.60 and verify ARP with show ip arp.
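
A quick sanity check on those counters, as a Python sketch: the average number of records per datagram should never exceed the 30-record v5 ceiling, and a much lower average simply means many datagrams were flushed before filling up:

```python
# Counters taken from the show ip flow export output above
flows, datagrams = 4821, 243
per_datagram = flows / datagrams
print(round(per_datagram, 1))   # → 19.8 records per datagram on average
assert per_datagram <= 30       # v5 hard limit of 30 records per datagram
```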

5. Part B — Flexible NetFlow (FnF) Configuration

Flexible NetFlow replaces the monolithic ip flow interface command with a three-part modular framework: a flow record defines what to measure, a flow exporter defines where to send it, and a flow monitor binds them together and is applied to interfaces. This allows multiple independent monitoring policies on the same router — for example, a detailed application monitor on the WAN interface and a lightweight bandwidth monitor on LAN interfaces:

Step 5 — Define the Flow Record

NetsTuts_R1(config)#flow record NETSTUTS-RECORD
NetsTuts_R1(config-flow-record)#description IPv4 traffic — src/dst IP, port, protocol
NetsTuts_R1(config-flow-record)#match ipv4 protocol
NetsTuts_R1(config-flow-record)#match ipv4 source address
NetsTuts_R1(config-flow-record)#match ipv4 destination address
NetsTuts_R1(config-flow-record)#match transport source-port
NetsTuts_R1(config-flow-record)#match transport destination-port
NetsTuts_R1(config-flow-record)#match interface input
NetsTuts_R1(config-flow-record)#match ipv4 tos
NetsTuts_R1(config-flow-record)#collect counter bytes long
NetsTuts_R1(config-flow-record)#collect counter packets long
NetsTuts_R1(config-flow-record)#collect timestamp sys-uptime first
NetsTuts_R1(config-flow-record)#collect timestamp sys-uptime last
NetsTuts_R1(config-flow-record)#collect transport tcp flags
NetsTuts_R1(config-flow-record)#exit
  
The flow record has two types of fields: match (key) fields define the flow — two packets must share all match values to belong to the same flow. collect (non-key) fields are counters and metadata accumulated for each flow — bytes, packets, timestamps, TCP flags. The seven match fields here replicate the traditional NetFlow seven-tuple. Adding collect transport tcp flags enables TCP SYN/FIN/RST flag analysis in the exported records — useful for detecting port scans (many SYN, no SYN-ACK) or connection resets. The match ipv4 tos field captures DSCP/QoS markings per flow.

Step 6 — Define the Flow Exporter

NetsTuts_R1(config)#flow exporter NETSTUTS-EXPORTER
NetsTuts_R1(config-flow-exporter)#description Export to NetFlow collector
NetsTuts_R1(config-flow-exporter)#destination 192.168.30.60
NetsTuts_R1(config-flow-exporter)#source Loopback0
NetsTuts_R1(config-flow-exporter)#transport udp 2055
NetsTuts_R1(config-flow-exporter)#export-protocol netflow-v9
NetsTuts_R1(config-flow-exporter)#template data timeout 60
NetsTuts_R1(config-flow-exporter)#exit
  
export-protocol netflow-v9 uses the template-based v9 format — required for Flexible NetFlow because custom flow records cannot be expressed in the fixed v5 format. The collector receives template packets first (describing the field structure), then data packets. template data timeout 60 re-sends the template every 60 seconds — if the collector restarts and loses its template cache, it will re-learn the structure within 60 seconds without requiring a router restart.
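
The template mechanism can be sketched collector-side as follows; receive_template and receive_data are hypothetical helpers for illustration, not a real collector API. Data that arrives before its template cannot be decoded, which is exactly why the periodic template re-send matters:

```python
# Collector-side template cache: template ID -> ordered list of field names
templates = {}

def receive_template(template_id, field_names):
    """A v9/IPFIX template packet describes the record layout."""
    templates[template_id] = field_names

def receive_data(template_id, values):
    """A data packet only references a template ID; without the template
    the values cannot be interpreted and must be discarded."""
    if template_id not in templates:
        return None   # wait for the next periodic template re-send
    return dict(zip(templates[template_id], values))

print(receive_data(256, [80, 6]))               # → None (template not yet seen)
receive_template(256, ["dst_port", "protocol"])
print(receive_data(256, [80, 6]))               # → {'dst_port': 80, 'protocol': 6}
```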

Step 7 — Define the Flow Monitor

NetsTuts_R1(config)#flow monitor NETSTUTS-MONITOR
NetsTuts_R1(config-flow-monitor)#description Main traffic monitor
NetsTuts_R1(config-flow-monitor)#record NETSTUTS-RECORD
NetsTuts_R1(config-flow-monitor)#exporter NETSTUTS-EXPORTER
NetsTuts_R1(config-flow-monitor)#cache timeout active 60
NetsTuts_R1(config-flow-monitor)#cache timeout inactive 15
NetsTuts_R1(config-flow-monitor)#cache entries 8192
NetsTuts_R1(config-flow-monitor)#exit
  
The flow monitor is the binding object — it references the record (what to measure) and the exporter (where to send it), then adds cache management settings. Cache timeout active 60 seconds gives near-real-time exports for dashboards while cache entries 8192 supports high-traffic environments. A second monitor with a different record could be created for IPv6 traffic or application-layer classification and applied to the same interface simultaneously.

Step 8 — Apply Flow Monitor to Interfaces

! ── WAN interface — monitor both directions ───────────────────────
NetsTuts_R1(config)#interface GigabitEthernet0/0
NetsTuts_R1(config-if)#ip flow monitor NETSTUTS-MONITOR input
NetsTuts_R1(config-if)#ip flow monitor NETSTUTS-MONITOR output
NetsTuts_R1(config-if)#exit

! ── Staff VLAN — input only ───────────────────────────────────────
NetsTuts_R1(config)#interface GigabitEthernet0/1
NetsTuts_R1(config-if)#ip flow monitor NETSTUTS-MONITOR input
NetsTuts_R1(config-if)#exit

! ── Guest VLAN — input only ───────────────────────────────────────
NetsTuts_R1(config)#interface GigabitEthernet0/2
NetsTuts_R1(config-if)#ip flow monitor NETSTUTS-MONITOR input
NetsTuts_R1(config-if)#exit

NetsTuts_R1(config)#end
NetsTuts_R1#wr
Building configuration...
[OK]
NetsTuts_R1#
  
FnF uses ip flow monitor [name] input|output at the interface level — replacing the traditional ip flow ingress and ip flow egress commands. The same monitor name is applied to multiple interfaces — they all share the same cache, record definition, and exporter. The input keyword corresponds to ingress and output corresponds to egress.

6. Flexible NetFlow — Verification

show flow interface — Confirm FnF Applied

NetsTuts_R1#show flow interface GigabitEthernet0/0
Interface GigabitEthernet0/0
  FNF:  monitor:         NETSTUTS-MONITOR
        direction:       Input
        traffic(ip):     on
  FNF:  monitor:         NETSTUTS-MONITOR
        direction:       Output
        traffic(ip):     on
  

show flow monitor NETSTUTS-MONITOR — Monitor Status

NetsTuts_R1#show flow monitor NETSTUTS-MONITOR
Flow Monitor NETSTUTS-MONITOR:
  Description:       Main traffic monitor
  Flow Record:       NETSTUTS-RECORD
  Flow Exporter:     NETSTUTS-EXPORTER
  Cache:
    Type:              normal
    Status:            allocated
    Size:              8192 entries / 311316 bytes
    Inactive Timeout:  15 secs
    Active Timeout:    60 secs
  

show flow monitor NETSTUTS-MONITOR cache — View Active Flows

NetsTuts_R1#show flow monitor NETSTUTS-MONITOR cache
Processed 4762 flows
IPV4 SRC ADDR    IPV4 DST ADDR    TRNS SRC PORT  TRNS DST PORT  IP PROT  ip tos   bytes  pkts
-------------    -------------    -------------  -------------  -------  ------  ------  ----
192.168.10.10    203.0.113.50             49416             80        6    0x00  451968   312
192.168.10.10    52.96.10.14              49418            443        6    0x00  127488    87
192.168.20.10    8.8.8.8                  40764             53       17    0x00    1008    12
192.168.10.20    203.0.113.80             49024            443        6    0x00  215040   148
  
The FnF cache output is in a cleaner format than traditional NetFlow — port numbers appear in decimal (no hex conversion needed), bytes and packets are shown directly, and IP protocol is in decimal (6 = TCP, 17 = UDP). From this output: 192.168.10.10 is the top talker with two active flows (HTTP to 203.0.113.50 and HTTPS to 52.96.10.14 totalling 451968 + 127488 = 579456 bytes). 192.168.20.10 made a single DNS query (UDP/53 to 8.8.8.8, 1008 bytes). The data needed for capacity planning, anomaly detection, and billing is all visible here.
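
Ranking top talkers from rows like these is a simple aggregation by source address; the byte counts below are taken from the cache output above, and the list-of-tuples input is just a stand-in for parsed flow records:

```python
from collections import Counter

# (source IP, byte count) pairs from the FnF cache output
flows = [
    ("192.168.10.10", 451968),
    ("192.168.10.10", 127488),
    ("192.168.20.10", 1008),
    ("192.168.10.20", 215040),
]

talkers = Counter()
for src, nbytes in flows:
    talkers[src] += nbytes   # sum bytes across all of a host's flows

for host, total in talkers.most_common(2):
    print(host, total)
# → 192.168.10.10 579456
# → 192.168.10.20 215040
```

This is essentially what a collector's top-talker dashboard does at scale, with time bucketing added.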

show flow exporter NETSTUTS-EXPORTER statistics — Export Health

NetsTuts_R1#show flow exporter NETSTUTS-EXPORTER statistics
Flow Exporter NETSTUTS-EXPORTER:
  Packet send statistics (last cleared 00:14:22 ago):
    Successfully sent:         523              (54178 bytes)
  Client send statistics:
    Client: Flow Monitor NETSTUTS-MONITOR
      Records added:           4762
      Records sent:            4762
      Bytes sent:              476200
      Flow bytes sent:         390400
      Template bytes sent:      85800
  
523 UDP packets sent successfully with zero failures — the collector is reachable and receiving exports. Template bytes (85800) vs flow data bytes (390400) shows a healthy ratio — templates are a small overhead relative to data. If "Records added" significantly exceeds "Records sent," the exporter is dropping records, which may indicate the collector is unreachable or the active timer is too long. Verify connectivity with ping 192.168.30.60 source Loopback0.

Verification Command Summary

Command Traditional / FnF What It Shows
show ip flow interface Traditional Confirms ip flow ingress/egress is applied to each interface
show ip cache flow Traditional Protocol summary, packet size distribution, and per-flow table (ports in hex)
show ip cache verbose flow Traditional Adds ToS, TCP flags, bytes/packet, active duration, and next-hop to flow table
show ip flow export Traditional Export destination, version, packets sent, drop counters
show flow interface [int] FnF FnF monitor name, direction, and traffic type applied to an interface
show flow monitor [name] FnF Monitor status, bound record and exporter, cache configuration
show flow monitor [name] cache FnF Active flow entries — source/destination IP, ports in decimal, bytes, packets
show flow exporter [name] statistics FnF Records added/sent, bytes sent, template bytes vs data bytes
show flow record [name] FnF Full definition of all match (key) and collect (non-key) fields in the record
show ip route [prefix] Both Verify routing to the collector — needed when export drops appear
show running-config | section flow Both Full NetFlow configuration audit — all flow records, exporters, monitors

7. Traffic Analysis — Reading NetFlow Data

Identifying Top Talkers

The flow cache can be sorted on the collector or filtered on the router using show ip cache flow output. The protocol summary section provides the fastest path to identifying dominant traffic types without reading individual flow entries:

NetsTuts_R1#show ip cache flow
...
Protocol         Total    Flows   Packets Bytes  Packets Active(Sec) Idle(Sec)
--------         Flows     /Sec     /Flow  /Pkt     /Sec     /Flow     /Flow
TCP-WWW          1842      0.8        48   1456     36.6      15.4       2.1
TCP-HTTPS         921      0.4        32   1280     12.6      12.1       2.3
TCP-other         531      0.2        12    512      2.6       8.2       3.4
UDP-DNS           298      0.1         3     84      0.9       0.8      12.1
UDP-NTP            14      0.0        12    84       0.1       5.2      14.3
ICMP               47      0.0         5    76       0.2       1.2       8.4
  
From this output: HTTP (TCP-WWW) and HTTPS (TCP-443, shown as TCP-HTTPS in newer IOS) together account for 2763 of 3653 total flows — 75.6% of all flows are web browsing. Bytes/packet of 1456 for HTTP and 1280 for HTTPS confirm these are data-carrying sessions, not just control-plane chatter. The Active(Sec)/Flow column shows how long flows live on average — 15.4 seconds average for HTTP flows suggests a mix of short page loads and some persistent connections.
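
The percentage quoted above is straightforward to reproduce from the flow counts in the protocol summary:

```python
# Flow counts from the protocol summary output
web = 1842 + 921                            # TCP-WWW + TCP-HTTPS flows
total = 1842 + 921 + 531 + 298 + 14 + 47    # all protocol rows
share = round(100 * web / total, 1)
print(share)   # → 75.6 (percent of all flows that are web browsing)
```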

Identifying Security Anomalies with NetFlow

NetFlow reveals several network security events that are invisible to basic monitoring tools:

Anomaly Pattern NetFlow Indicator Investigation Command
Port scan One source IP, many destination IPs and ports, very low bytes/packet, many flows with TCP SYN flags and no SYN-ACK show ip cache verbose flow — filter by source IP, look for TCP flag patterns
DDoS source Single internal host generating disproportionate outbound flow count and byte volume to external IPs show ip cache flow — sort by bytes in collector dashboard, or examine cache for anomalous hosts
Data exfiltration Internal host with large outbound byte count to unusual external IP on non-standard high port, active at unusual hours Collector time-based analysis — look for flows with high byte counts and long active duration to unknown destinations
Rogue DHCP or DNS Unexpected internal host generating UDP/67 (DHCP) or UDP/53 (DNS) flows to multiple internal destinations Filter collector by UDP port 67 or 53, source IP analysis — rogue servers appear as new unexpected sources
Bandwidth hog Single host or flow consuming significantly more bytes per second than peers — visible in top-talker report show ip cache flow protocol summary, or collector top-talker dashboard ranked by bytes/sec
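
The port-scan pattern in the first row can be expressed as a simple heuristic over exported flow records. The thresholds below are illustrative assumptions to tune for your own network, not values from the text, and the dict-based record format is a stand-in for parsed collector data:

```python
def looks_like_scan(flows, min_targets=100, max_pkts_per_flow=3):
    """Flag a source whose flows touch many distinct (dst IP, dst port)
    targets while each flow carries almost no packets."""
    targets = {(f["dst_ip"], f["dst_port"]) for f in flows}
    all_tiny = all(f["packets"] <= max_pkts_per_flow for f in flows)
    return len(targets) >= min_targets and all_tiny

# 200 single-packet probes against one host: classic SYN-scan signature
scan = [{"dst_ip": "192.168.10.5", "dst_port": p, "packets": 1}
        for p in range(1, 201)]
print(looks_like_scan(scan))   # → True

# One large transfer to a single port does not trip the heuristic
bulk = [{"dst_ip": "192.168.10.5", "dst_port": 443, "packets": 300}]
print(looks_like_scan(bulk))   # → False
```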

NetFlow vs. Other Traffic Monitoring Tools

Tool What It Captures Storage Overhead Best Use Case
NetFlow Flow metadata — who, what, when, how much. No payload Very low (~0.5–1% of traffic volume) Capacity planning, top talkers, application distribution, long-term trending
Wireshark / tcpdump Full packet capture — headers + payload Very high (100% of traffic volume) Deep protocol analysis, payload inspection, short-term targeted troubleshooting
SNMP Interface counters — total bytes/packets in/out, error counts. See SNMP Configuration. Negligible Bandwidth utilisation trending, interface health — no per-flow detail
Syslog Event messages — interface state, auth failures, config changes. See show logging. Low Security audit trails, event correlation, alerting on specific conditions

8. Troubleshooting NetFlow Issues

Problem Symptom Cause Fix
No flows in cache show ip cache flow shows 0 flows despite active traffic ip flow ingress not applied to interfaces, or traffic is not passing through the monitored interfaces (routing takes a different path) Verify with show ip flow interface — confirm ingress appears on all target interfaces. Check routing with show ip route [dest] to confirm traffic actually transits the router
Exports not reaching collector show ip flow export shows non-zero drop counters ("no fib" or "adjacency") Router has no route to the collector IP, or ARP for the next-hop toward the collector is failing Run ping 192.168.30.60 source Loopback0 — if ping fails, fix routing. Check show ip arp for the next-hop MAC. Verify the collector is listening on the configured UDP port
Collector receives no data (FnF) show flow exporter statistics shows records added but 0 sent Flow exporter source interface has no IP, or the UDP path to the collector is blocked by an ACL on an intermediate device Verify source Loopback0 has an IP assigned. Check for ACLs blocking UDP 2055 between the router and collector with show running-config | section access-list on intermediate devices
High router CPU after enabling NetFlow CPU spikes to 80–100% after applying ip flow ingress to interfaces NetFlow is applied to a high-PPS interface without hardware acceleration — software-based NetFlow on a busy interface is CPU-intensive Reduce scope: remove NetFlow from the highest-traffic interface and apply only to edge interfaces. Reduce cache size. Consider sampling (NetFlow Sampling) — only monitors 1-in-N packets. Use platform with hardware NetFlow acceleration (ASR, Catalyst 9000)
Flow cache fills rapidly show ip cache flow shows high "overflow" counter — entries being dropped before export Cache too small for the number of concurrent flows on the network, or active timer too long causing stale entries to accumulate Increase cache: ip flow-cache entries 16384. Reduce active timer: ip flow-cache timeout active 1. For FnF: cache entries 16384 and cache timeout active 60 in the flow monitor
FnF monitor shows "not allocated" show flow monitor [name] shows status "not allocated" The flow monitor has been defined but not yet applied to any interface — the cache is not initialised until the monitor is bound to at least one interface Apply the monitor to an interface: ip flow monitor NETSTUTS-MONITOR input. The status changes to "allocated" as soon as the monitor is bound to an interface that is up. Verify with show interfaces that the interface is up.

Key Points & Exam Tips

  • A NetFlow flow is defined by the seven-tuple key: source IP, destination IP, source port, destination port, IP protocol, ToS, and input interface. Two packets with all seven fields matching belong to the same flow.
  • Traditional NetFlow is enabled with ip flow ingress (and optionally ip flow egress) on each interface. The older ip route-cache flow command is deprecated — use ip flow ingress on current IOS. See Basic Interface Configuration.
  • ip flow-export destination [IP] [UDP-port] sends records to the collector. ip flow-export version 5 uses the fixed-format v5 (most compatible). Use version 9 for IPv6 or MPLS flows.
  • Port numbers in show ip cache flow are displayed in hexadecimal — convert to decimal to identify the application (0050 hex = 80 = HTTP, 01BB hex = 443 = HTTPS, 0035 hex = 53 = DNS).
  • Flexible NetFlow (FnF) uses three objects: flow record (defines match/key and collect/non-key fields), flow exporter (destination, port, format), and flow monitor (binds record and exporter, manages cache). The monitor is applied to interfaces with ip flow monitor [name] input|output.
  • The active timer (default 30 min) exports long-lived flows periodically. The inactive timer (default 15 sec) exports flows with no new packets. Reduce the active timer for real-time dashboards; keep defaults for efficient collection.
  • Always pin the export source to a Loopback interface (ip flow-export source Loopback0 in Traditional NetFlow; the source Loopback0 sub-command under the flow exporter in FnF). This prevents the collector from seeing multiple source IPs for the same device.
  • NetFlow captures flow metadata only — no packet payload. Storage overhead is approximately 0.5–1% of traffic volume. This distinguishes it from Wireshark/tcpdump (full capture) and SNMP (aggregate counters only).
  • show ip cache flow is the primary Traditional NetFlow verification command — read the protocol summary section first to identify dominant traffic types, then examine the flow table for individual host-level detail.
  • On the CCNA exam: understand NetFlow's purpose (traffic visibility), the seven-tuple flow key, the three destinations (cache/exporter/collector), and the difference between NetFlow and SNMP. Flexible NetFlow configuration detail is CCNP-level.
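The first and fifth bullets — the seven-tuple key and the match/collect split — can be modelled in a few lines of Python (a conceptual model of the flow cache, not router code; the dict-of-counters structure is an assumption for illustration): packets whose seven key fields all match accumulate into one cache entry, and the non-key counters grow with each packet.

```python
from collections import defaultdict

# Conceptual model of a NetFlow cache: the dict key is the seven-tuple
# flow key (the "match" fields); the value holds non-key ("collect")
# counters that accumulate as packets arrive.
def cache_packets(packets):
    cache = defaultdict(lambda: {"bytes": 0, "packets": 0})
    for p in packets:
        key = (p["src_ip"], p["dst_ip"], p["src_port"], p["dst_port"],
               p["proto"], p["tos"], p["in_if"])  # the seven-tuple
        cache[key]["bytes"] += p["length"]
        cache[key]["packets"] += 1
    return cache

# An HTTP request and its response differ in the key (addresses and ports
# swapped, input interface differs), so they land in two separate entries.
pkts = [
    {"src_ip": "192.168.10.10", "dst_ip": "203.0.113.50", "src_port": 51514,
     "dst_port": 80, "proto": 6, "tos": 0, "in_if": "Gi0/1", "length": 400},
    {"src_ip": "192.168.10.10", "dst_ip": "203.0.113.50", "src_port": 51514,
     "dst_port": 80, "proto": 6, "tos": 0, "in_if": "Gi0/1", "length": 60},
    {"src_ip": "203.0.113.50", "dst_ip": "192.168.10.10", "src_port": 80,
     "dst_port": 51514, "proto": 6, "tos": 0, "in_if": "Gi0/0", "length": 1500},
]
print(len(cache_packets(pkts)))  # 2 — one flow per direction
```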
Next Steps: With NetFlow configured and exporting, the flow data is most valuable when correlated with other monitoring sources. For event-based alerting alongside flow data see Syslog Configuration and show logging. For SNMP-based interface utilisation counters that complement NetFlow bandwidth data, see SNMP Community Strings, SNMP v2c/v3 Configuration, and SNMP Traps. For a conceptual overview of NetFlow monitoring tools and collectors, see NetFlow Monitoring. For access control that filters the traffic NetFlow measures, see Extended ACL Configuration and ACL Overview. For accurate timestamps in exported records, ensure NTP is synchronised.

TEST WHAT YOU LEARNED

1. A network engineer runs show ip cache flow and sees a flow entry with DstP value of 01BB. What application is this flow associated with, and how was the port number determined?

Answer: HTTPS. Traditional NetFlow's show ip cache flow displays all port numbers in hexadecimal, not decimal. 0x01BB converts to decimal as follows: 0×16³ + 1×16² + B×16¹ + B×16⁰ = 0 + 256 + 176 + 11 = 443. Port 443 is HTTPS. This is a frequently tested operational skill — engineers reading the raw NetFlow cache must convert hex ports to identify applications. Flexible NetFlow's show flow monitor cache output displays ports in decimal, which is one practical advantage of FnF over traditional NetFlow for direct router-based analysis.
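The arithmetic above is a one-line conversion in most languages; a quick Python sanity check of the three common ports from this lab:

```python
# Hex-to-decimal port conversion, as needed when reading show ip cache flow.
print(int("01BB", 16))  # 443 -> HTTPS
print(int("0050", 16))  # 80  -> HTTP
print(int("0035", 16))  # 53  -> DNS
```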

2. What are the three required components of a Flexible NetFlow configuration, and what is the role of each?

Answer: Flexible NetFlow separates configuration into three independent objects. The flow record defines "what to measure" — using match commands for key fields (the flow identifier) and collect commands for non-key fields (bytes, packets, timestamps, TCP flags). The flow exporter defines "where to send it" — destination IP, UDP port, export format (v9 or IPFIX), and source interface. The flow monitor is the binding object — it references one record and one or more exporters, configures the cache size and timers, and is applied to interfaces. This modular design allows the same record to be sent to multiple exporters, or multiple records to be applied to the same interface simultaneously.

3. What is the difference between the NetFlow active timer and inactive timer, and how would you configure them for near-real-time traffic dashboards?

Answer: Without the active timer, a persistent TCP session (like a long file transfer or video stream) would remain in the cache for its entire duration — potentially hours — without being exported. The active timer forces periodic exports of still-active flows so the collector has current data. Without the inactive timer, short flows (DNS queries, ping, small HTTP requests) that complete quickly might sit in the cache until the next active timer firing. The inactive timer catches these by exporting any flow that has gone quiet. For real-time dashboards, reduce the active timer from 30 minutes to 1 minute — the collector then receives updated byte/packet counts for all active flows every 60 seconds.
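The two-timer expiry decision can be sketched as a simple check (a conceptual model assuming second-granularity timestamps and the default 30-minute/15-second values — not IOS source code):

```python
def should_export(now, first_seen, last_seen, active=1800, inactive=15):
    """Return True if a flow entry should be expired from the cache and
    exported: either it has gone quiet past the inactive timer, or it has
    been alive past the active timer (periodic checkpoint of a long flow)."""
    if now - last_seen >= inactive:   # flow went quiet
        return True
    if now - first_seen >= active:    # still talking, but export a snapshot
        return True
    return False

# Defaults: a flow idle for 20 s is exported; a busy 10-minute flow is not.
print(should_export(now=1000, first_seen=400, last_seen=980))              # True
print(should_export(now=1000, first_seen=400, last_seen=999))              # False
# With the active timer reduced to 60 s for a real-time dashboard:
print(should_export(now=1000, first_seen=400, last_seen=999, active=60))   # True
```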

4. A router has ip flow ingress on Gi0/1 (LAN) and Gi0/0 (WAN). A host at 192.168.10.10 sends an HTTP request to 203.0.113.50. How many flow cache entries are created, and on which interfaces?

Answer: Two entries, one per interface. With ingress-only NetFlow, each packet is recorded once — at the interface where it enters the router. The HTTP request (192.168.10.10 → 203.0.113.50, TCP destination port 80) arrives on Gi0/1 from the host and is recorded as one flow entry (input interface = Gi0/1). The HTTP response (203.0.113.50 → 192.168.10.10, TCP source port 80) arrives on Gi0/0 from the internet and is recorded as a separate flow entry (input interface = Gi0/0). These are two distinct flows because their seven-tuple keys differ (source and destination are swapped, and the input interface differs). This is why ingress-only on all interfaces provides complete bidirectional flow visibility — every packet enters through exactly one interface, so no packet is counted twice.

5. show ip flow export shows non-zero "flows failed due to no fib" counter. What does this mean and how is it fixed?

Answer: "No FIB" means no Forwarding Information Base entry — the router cannot find a route to forward the export UDP packet toward the collector. IOS builds export packets and then performs a FIB lookup to determine where to send them. If no route exists to 192.168.30.60, the lookup fails and the export packet is dropped, incrementing this counter. The fix is to ensure the router has a route to the collector — either a static route (ip route 192.168.30.0 255.255.255.0 [next-hop]) or a dynamic routing protocol covering the collector's subnet. Always verify with ping from the same source interface used for export (Loopback0 in this lab).

6. A security engineer notices a host at 192.168.10.55 generating 15,000 flows per minute in the NetFlow cache, mostly to different destination IPs on destination port 22. All flows have very low byte counts and TCP SYN flags only. What does this indicate?

Answer: This NetFlow pattern is a classic SSH port scan signature: one source IP generating flows to many different destination IPs, all on the same destination port (22), with extremely low byte counts (only the initial SYN packet — no data transferred because the destination either didn't respond or sent a RST), and TCP SYN-only flags in the verbose flow output (no ACK, no SYN-ACK). A legitimate workstation generating 15,000 flows per minute is anomalous — normal web browsing generates tens to low hundreds of flows per minute. NetFlow is extremely effective at detecting port scans because the scan creates a unique flow entry for every target IP. Investigate with show flow monitor [name] cache filtered by source address, then apply an ACL or notify the security team.
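The detection logic described above is easy to automate against exported flow records. A toy heuristic (the field names and thresholds are illustrative assumptions, not any specific collector's schema): flag a source that touches many distinct destinations on one port with tiny, SYN-only flows.

```python
from collections import defaultdict

# Toy scan-detection heuristic over flow records: group SYN-only, low-byte
# flows by (source IP, destination port) and count distinct targets.
def suspected_scanners(flows, min_targets=100):
    targets = defaultdict(set)
    for f in flows:
        if f["bytes"] <= 60 and f["tcp_flags"] == 0x02:  # SYN flag only
            targets[(f["src_ip"], f["dst_port"])].add(f["dst_ip"])
    return [(src, port, len(dsts)) for (src, port), dsts in targets.items()
            if len(dsts) >= min_targets]

# Simulate the scenario from the question: one host probing port 22.
flows = [{"src_ip": "192.168.10.55", "dst_ip": f"10.0.{i // 256}.{i % 256}",
          "dst_port": 22, "bytes": 44, "tcp_flags": 0x02} for i in range(150)]
print(suspected_scanners(flows))  # [('192.168.10.55', 22, 150)]
```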

7. What is the key architectural difference between NetFlow export version 5 and version 9, and which should be used when IPv6 traffic monitoring is required?

Answer: NetFlow v5 has a completely fixed record format — every export packet contains 48-byte records with exactly the same fields at exactly the same byte offsets. This makes collector parsing simple but limits the format to IPv4 only (no room for 128-bit IPv6 addresses) and prevents adding new fields. v9 introduces templates — the router sends a template flowset first that describes which fields are present, in what order, and how many bytes each occupies, then sends data flowsets that follow that template structure. For any environment running IPv6 (including dual-stack), v5 is insufficient and v9 or IPFIX must be used.
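The fixed layout is exactly why v5 parsers are trivial — one static struct format covers every record — and why there is no room for IPv6 addresses. A sketch following the published v5 flow-record layout (field values below are invented sample data):

```python
import socket
import struct

# NetFlow v5 flow record: 48 bytes, fixed layout per the published v5 format:
# srcaddr, dstaddr, nexthop, input, output, dPkts, dOctets, first, last,
# srcport, dstport, pad1, tcp_flags, prot, tos, src_as, dst_as,
# src_mask, dst_mask, pad2
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")
assert V5_RECORD.size == 48  # the fixed size that makes v5 parsing simple

def parse_v5_record(data: bytes) -> dict:
    (src, dst, _nexthop, _inp, _out, pkts, octets, _first, _last,
     sport, dport, _pad1, flags, proto, _tos, _sas, _das,
     _smask, _dmask, _pad2) = V5_RECORD.unpack(data)
    return {"src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst),
            "sport": sport, "dport": dport, "proto": proto,
            "packets": pkts, "bytes": octets, "tcp_flags": flags}

# Build a sample record and parse it back.
sample = V5_RECORD.pack(socket.inet_aton("192.168.10.10"),
                        socket.inet_aton("203.0.113.50"),
                        socket.inet_aton("0.0.0.0"), 1, 2,
                        10, 4200, 0, 0, 51514, 443, 0, 0x18, 6, 0,
                        0, 0, 24, 24, 0)
print(parse_v5_record(sample)["dport"])  # 443
```

A v9 parser, by contrast, must first cache the template flowset and interpret each data flowset against it — more collector complexity in exchange for extensibility.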

8. In a Flexible NetFlow flow record, what is the difference between a match field and a collect field?

Answer: This is the core FnF design concept. Match (key) fields define flow identity — if two packets have the same source IP, destination IP, protocol, source port, destination port, ToS, and input interface (the seven match fields in the lab's record), they are the same flow and their statistics are accumulated together. Collect (non-key) fields are running totals and metadata attached to each flow entry — byte counters, packet counters, first/last timestamps, TCP flag accumulation. They grow as more packets match the flow, and are exported with the flow record when the entry expires.

9. Why is enabling both ip flow ingress AND ip flow egress on a high-traffic interface potentially problematic, and when is it justified?

Answer: With ingress NetFlow on all router interfaces, every packet is recorded exactly once — as it enters through one interface before being forwarded out another. This gives complete coverage with no duplication. Adding egress on the same interface creates a second flow record for the same packet as it leaves — effectively doubling the cache entries for that traffic and increasing CPU overhead for flow table lookups. Egress accounting is justified when ingress cannot be enabled on every interface, or when traffic must be measured as it leaves a specific interface — for example, after egress QoS policies have shaped or dropped part of it. Use show processes cpu and show interfaces to monitor CPU load and error counters if performance issues are suspected after enabling both directions.

10. A network manager asks: "Which hosts are consuming the most bandwidth to the internet right now?" The router has Traditional NetFlow configured. What is the most direct path to answering this question from the router CLI, and what collector-side capability would provide a better long-term answer?

Answer: The router CLI answer — show ip cache flow — provides the raw data but requires manual analysis: identifying flows with WAN destinations, converting hex ports, and summing bytes per source IP. It is functional but slow for a large cache. The NetFlow collector is the intended tool for operational questions like "top talkers" — it continuously receives exported records, stores them in a time-series database, and provides dashboards that aggregate flows by source IP, destination, protocol, or application over any time window with a single query. debug ip packet would generate enormous output, consume high CPU, and provide no aggregation — it is for short-term per-packet troubleshooting only and should never be used as a bandwidth monitoring tool. See NetFlow Monitoring for collector tool options.
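The collector-side aggregation described above amounts to summing bytes per source IP across flow records and ranking the totals. A minimal sketch (the record schema and sample values are illustrative assumptions):

```python
from collections import Counter

# What a collector does for a "top talkers" query: sum bytes per source IP
# across exported flow records, then rank the totals.
def top_talkers(flows, n=3):
    totals = Counter()
    for f in flows:
        totals[f["src_ip"]] += f["bytes"]
    return totals.most_common(n)

flows = [
    {"src_ip": "192.168.10.10", "bytes": 900_000},
    {"src_ip": "192.168.10.20", "bytes": 50_000},
    {"src_ip": "192.168.10.10", "bytes": 300_000},
    {"src_ip": "192.168.10.30", "bytes": 2_000_000},
]
print(top_talkers(flows))
# [('192.168.10.30', 2000000), ('192.168.10.10', 1200000), ('192.168.10.20', 50000)]
```

A real collector runs this same aggregation over millions of records in a time-series database, which is why it answers "right now" and "last Tuesday" equally quickly while the router cache only ever holds the current moment.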