I’ve spent a lot of time studying link-state routing protocols and BGP because they are more prominent in data center and service provider networks, but EIGRP has its use cases too, particularly for exchanging routes in a DMVPN environment. I decided to build a DMVPN topology to study a few advanced EIGRP features. Here’s the setup:
There are three HUB routers, two of which are locally adjacent to each other. I designed the topology to study HA with DMVPN. The gray rectangle in the middle is a public network which I simulated with a router in the lab. All the routers have a static default route out the interface connecting to the public network. The red IPs are tunnel endpoint IPs. Here is a list of things that I’ve configured:
- All spoke routers (SK.N) are manually configured with the next-hop server (NHS) addresses of the three hub routers.
- I used IKEv1 with an IPsec pre-shared key for the tunnel encryption.
- All the routers participate in EIGRP AS 100.
- HUB1 and HUB2 both advertise the 172.16.1.0/24 network into EIGRP but all neighbors should prefer HUB1’s advertisement.
- Spoke 10 (SK.10) advertises a summary route for the locally connected routes to its EIGRP neighbors.
- Shutting down HUB1’s tunnel will not affect reachability from one spoke to another.
First, the DMVPN. The GRE tunnels are standard multipoint GRE (mGRE) tunnels. I’ve used either GigabitEthernet 1 or GigabitEthernet 0/1 as the NBMA interface that each tunnel interface uses as its source. The MTU and TCP MSS settings aren’t required for the GRE tunnels, but I’ve included them out of habit. There are a lot of different options that can be configured for NHRP, and I’ve decided to keep this configuration at phase 2 DMVPN. Adding the NHRP shortcut command on the spoke routers and the NHRP redirect command on the hub routers would convert this into a phase 3 DMVPN configuration.
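As a sketch, the tunnel configuration might look like the following. The NBMA addresses (drawn from the 192.0.2.0/24 documentation range), the NHRP network-id, and the tunnel key are illustrative assumptions; the tunnel IPs match the addressing described below.

```
! HUB1 mGRE tunnel -- 192.0.2.1 as its NBMA address is illustrative
interface Tunnel0
 ip address 10.1.1.1 255.255.255.0
 ip mtu 1400
 ip tcp adjust-mss 1360
 ip nhrp map multicast dynamic        ! replicate multicast (EIGRP hellos) to registered spokes
 ip nhrp network-id 100
 tunnel source GigabitEthernet1
 tunnel mode gre multipoint
 tunnel key 100
!
! Spoke SK.2 -- static NHS mappings toward all three hubs (only HUB1 shown)
interface Tunnel0
 ip address 10.1.1.101 255.255.255.0
 ip mtu 1400
 ip tcp adjust-mss 1360
 ip nhrp map 10.1.1.1 192.0.2.1       ! hub tunnel IP -> hub NBMA IP
 ip nhrp map multicast 192.0.2.1      ! send EIGRP hellos to the hub
 ip nhrp nhs 10.1.1.1
 ip nhrp network-id 100
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 100
! For phase 3: add "ip nhrp redirect" on hubs and "ip nhrp shortcut" on spokes
```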
The hub routers’ tunnel addresses are 10.1.1.1, 10.1.1.2 and 10.1.1.3. EIGRP uses multicast address 224.0.0.10 to send hello messages, which requires mapping multicast traffic to any router with which you want to form a neighbor relationship. Here are the ISAKMP and IPsec configurations, which are consistent across all routers.
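A sketch of what that might look like; the cipher suite and the pre-shared key name are assumptions, and the wildcard key matches any peer (convenient for a lab, less so in production):

```
crypto isakmp policy 10
 encryption aes 256
 hash sha256
 authentication pre-share
 group 14
crypto isakmp key DMVPN-KEY address 0.0.0.0   ! illustrative PSK, wildcard peer
!
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha256-hmac
 mode transport                               ! transport mode suits GRE-over-IPsec
!
crypto ipsec profile DMVPN-PROF
 set transform-set DMVPN-TS
!
interface Tunnel0
 tunnel protection ipsec profile DMVPN-PROF
```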
I’ve used IKEv1 with an ISAKMP policy configured, but IKEv2 is of course also possible here with minimal changes. Here’s the configuration for HUB1.
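A minimal sketch of the EIGRP side of HUB1’s configuration, assuming classic-mode EIGRP and the addressing above (the wildcard masks are illustrative):

```
router eigrp 100
 network 10.1.1.0 0.0.0.255        ! tunnel subnet
 network 172.16.1.0 0.0.0.255      ! LAN shared with HUB2
!
interface Tunnel0
 no ip split-horizon eigrp 100     ! re-advertise spoke routes out the same interface
 no ip next-hop-self eigrp 100     ! preserve spoke tunnel IPs as next hops (phase 2)
```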
Split horizon needs to be disabled on the hub tunnel interfaces so that the spoke routers can learn each other’s advertised routes. As shown in the diagram above, HUB1 and HUB2 are both attached to the same 172.16.1.0/24 subnet and both advertise the prefix into EIGRP. There are many ways to make HUB1’s route preferred: you can change the bandwidth, delay, or administrative distance, or filter the route at all the spoke routers. I’ve chosen to use an offset list to give HUB2’s prefix a slightly larger metric.
As you can see, there’s nothing fancy about the offset list. It uses an ACL to choose which prefix(es) to attach the offset to. If you didn’t know, there is a quick way to attach an offset to all prefixes. Just use the special ACL 0 in the offset list command! A quick RIB check on one of the spoke routers confirms that HUB2’s prefix is not installed, though it does show up in the EIGRP topology.
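On HUB2, the offset list might look like this; the offset value and the ACL number are arbitrary choices:

```
access-list 10 permit 172.16.1.0 0.0.0.255
!
router eigrp 100
 offset-list 10 out 2000 Tunnel0   ! add 2000 to the metric of matching outbound prefixes
! offset-list 0 out 2000 Tunnel0   ! the ACL-0 shortcut: match every prefix
```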
There’s one line to have spoke 10 advertise a summary route into EIGRP. I’ll show it along with the traffic from spoke 10 to spoke 2 before and after HUB1 is disabled.
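Spoke 10’s local prefixes aren’t listed above, so the summary below is purely illustrative; the one relevant line is the interface-level summary-address command:

```
interface Tunnel0
 ip summary-address eigrp 100 10.10.0.0 255.255.252.0   ! illustrative summary of SK.10's LANs
```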
As the log indicates, the tunnel to HUB1 is torn down and the EIGRP neighbor goes down, but there is still reachability to 10.1.1.101 (SK.2), which indicates that another next-hop server is being used. The same would happen if HUB2 or HUB3 failed. Interestingly, for HUB1/2 to reach HUB3, one of the hubs needs to be configured as a client of the other.
The first time I saw a demonstration of DMVPN I thought it was amazing. Automatically establish encrypted GRE tunnels? A pseudo-mesh topology that can shrink and grow as the traffic from one spoke to another changes? All of that sounds amazing and though I haven’t configured it here, there are a lot of other options involving timers, NHRP flags and NAT considerations.
Though I think DMVPN is interesting, EIGRP itself also warrants study, if only for some of its idiosyncrasies. It’s a distance vector protocol, yet it passes topology information. It’s the only major IGP that supports unequal-cost multi-pathing (through the variance command), and the full metric calculation is a lengthy equation. Though there are multiple K values, I’ve heard that changing the default K configuration is not widely done, since all EIGRP routers must have the same K values to become neighbors. I am interested in studying cases where non-default K values were required and how much more complex traffic steering becomes.
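For reference, that lengthy equation is the classic (32-bit) EIGRP composite metric:

```
metric = 256 * [ K1*BW + (K2*BW)/(256 - load) + K3*delay ] * [ K5/(reliability + K4) ]
```

The final bracketed term applies only when K5 is non-zero. With the defaults (K1 = K3 = 1, K2 = K4 = K5 = 0) this collapses to 256 * (BW + delay), where BW is 10^7 divided by the lowest bandwidth along the path in kbps, and delay is the cumulative delay in tens of microseconds.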