
DMVPN for IPv6


As you probably already know, every DMVPN network consists of multiple GRE tunnels that are established dynamically. At first, every Spoke in the Cloud builds a direct tunnel to the Hub. Then, once the Control Plane converges, the Spokes may also build tunnels to other DMVPN devices, assuming that our DMVPN deployment (aka “Phase”) allows for that.

In most cases DMVPN tunnels are deployed over an IPv4 backbone, interconnecting different sites running IPv4. But since GRE is a multi-protocol tunneling mechanism, we can use it to carry other protocols, for example IPv6. In fact, in newer versions of IOS you can even change the underlying transport from IPv4 to IPv6. This means that you can use an IPv4 OR IPv6 network to tunnel IPv4 OR IPv6 traffic.
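As a side note, on a release that supports IPv6 transport, switching the Underlay is just a matter of the tunnel mode. A minimal sketch (not used in the rest of this article, where the transport stays IPv4):

interface Tunnel256
tunnel source Loopback0
tunnel mode gre multipoint ipv6

Here the tunnel source would have to be an IPv6-enabled interface, since the GRE packets themselves are now carried over IPv6.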

In this particular article I am going to discuss a scenario in which the Transport/NBMA network (“Underlay”) uses IPv4 addresses, but the goal will be to use the DMVPN to interconnect sites enabled only for IPv6.

As you can see from the topology below, our private sites are configured with prefixes starting with 2192:1:X::/64, and the VPN (“Overlay”) subnet used is 2001:256::/64:

[Topology diagram: DMVPN for IPv6]

Note that we are using IPv6 addresses for the VPN, since we will be using an IPv6 routing protocol to exchange the Control Plane information. That’s the most common way of deploying such a DMVPN (another option would be to use MBGP).

Let’s start our configuration. We will first configure our Hub (R6), then the Spokes (R2, R5), and finally enable routing on the Overlay network. Since IPSec is optional, we will not be using it in this example.

R6 (Hub) configuration. Almost everything here is IPv6; the exceptions are Loopback0, which is the NBMA interface (so it uses the IPv4 address 6.6.6.6), and the tunnel mode, which is set to “gre multipoint”, meaning that our transport is IPv4. The link-local address was hard-coded (we will do the same on the Spokes) because it must always be unique on a given Cloud. Those addresses are used to source Control Plane messages (e.g. NHRP), and will also be used by our Routing Protocol (Next-Hop addresses in its updates are link-local):

interface Tunnel256
ipv6 address FE80::6 link-local
ipv6 address 2001:256::6/64
ip mtu 1400
ipv6 mtu 1400
ipv6 nhrp map multicast dynamic
ipv6 nhrp network-id 256
tunnel source Loopback0
tunnel mode gre multipoint

Next comes R2. Note the key thing here: the NHRP mapping binds an IPv6 (Tunnel) address to an IPv4 (NBMA) address. Also, since the Hub’s address is always the logical one, we point to its IPv6 address when defining the NHS; multicast traffic, however, is mapped to the Hub’s NBMA address:

interface Tunnel256
ipv6 address FE80::2 link-local
ipv6 address 2001:256::2/64
ip mtu 1400
ipv6 mtu 1400
ipv6 nhrp map 2001:256::6/128 6.6.6.6
ipv6 nhrp map multicast 6.6.6.6
ipv6 nhrp network-id 256
ipv6 nhrp nhs 2001:256::6
tunnel source Loopback0
tunnel mode gre multipoint

On my particular code (15.1(3)T4) you can use the Context-Sensitive Help to quickly figure out the syntax (just remember that you have to start with “ipv6 nhrp”):

[Screenshot code-DMVPN-1: Context-Sensitive Help on 15.1(3)T4]

But note that this trick is less conclusive on an IOS that supports IPv6 transport for DMVPN (15.2(1)T and above), which would show you both address families as a potential argument:

[Screenshot code-DMVPN-2: Context-Sensitive Help on 15.2(1)T and above]

Pretty much the same commands go on R5; just remember to use the correct addresses and to hard-code the link-local address:

[Screenshot code-DMVPN-3: R5 tunnel configuration]
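For reference, following the R2 template and the addressing in the diagram, R5’s tunnel configuration would look along these lines (a sketch; the addresses are inferred from the topology):

interface Tunnel256
ipv6 address FE80::5 link-local
ipv6 address 2001:256::5/64
ip mtu 1400
ipv6 mtu 1400
ipv6 nhrp map 2001:256::6/128 6.6.6.6
ipv6 nhrp map multicast 6.6.6.6
ipv6 nhrp network-id 256
ipv6 nhrp nhs 2001:256::6
tunnel source Loopback0
tunnel mode gre multipoint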

Let’s now quickly verify our configuration:

[Screenshot code-DMVPN-4: NHRP mappings on the Hub]
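The outputs themselves were screenshots; the commands behind this kind of check would presumably be the IPv6 counterparts of the usual DMVPN verification set:

show ipv6 nhrp
show dmvpn

The first one lists the NHRP cache (logical-to-NBMA mappings), the second summarizes the state of the Cloud.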

We can see that the Hub learned the mappings for both IPv6 addresses (global unicast and link-local) via NHRP. DMVPN is up:

[Screenshot code-DMVPN-5: DMVPN status on the Hub]
How about one of the Spokes:

[Screenshot code-DMVPN-6: verification on one of the Spokes]

All good. Let’s now look at a GRE debug (debug tunnel) and check connectivity within the Cloud:

[Screenshot code-DMVPN-7: tunnel debug and connectivity test]
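To reproduce the test, enabling the debug on a Spoke and pinging the Hub’s Overlay address would be enough (a sketch, using the addresses from our topology):

R2# debug tunnel
R2# ping 2001:256::6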

We do have connectivity within the Cloud, at least between the Spokes and the Hub. We will now deploy one of the IPv6 Routing Protocols to extend the original Control Plane with the VPN information. I am going to use EIGRPv6, and we will run DMVPN Phase II:

All devices (R2, R5, R6):
[Screenshot code-DMVPN-8: EIGRPv6 configuration]
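The screenshot aside, a minimal EIGRPv6 setup for this Cloud would be along the following lines (a sketch; the AS number 256 is my assumption, and the Phase II specific knobs are needed on the Hub only):

ipv6 unicast-routing
!
ipv6 router eigrp 256
no shutdown
!
interface Tunnel256
ipv6 eigrp 256
!
! Hub (R6) only, for Phase II:
interface Tunnel256
no ipv6 split-horizon eigrp 256
no ipv6 next-hop-self eigrp 256

Disabling split horizon allows the Hub to re-advertise Spoke routes back out of the same Tunnel, and disabling next-hop-self preserves the originating Spoke’s link-local address as the Next-Hop, which is exactly what triggers the Spoke-to-Spoke resolution later on.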

EIGRPv6 was also enabled on the interfaces emulating our private sites (2192:1:X::/64). Time to verify:

[Screenshot code-DMVPN-9: Control Plane verification]
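The usual checks at this point (the outputs were in the original screenshots):

show ipv6 eigrp neighbors
show ipv6 route eigrp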

OK, so the Control Plane is fine. Next is the Data Plane. We can see that the first traceroute goes through the Hub, but once R2 learns the NBMA address for FE80::5, a Spoke-to-Spoke tunnel is established.
[Screenshot code-DMVPN-10: first traceroute, via the Hub]

[Screenshot code-DMVPN-11: subsequent traceroute, direct Spoke-to-Spoke]
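To reproduce this test, a traceroute towards R5’s site followed by a second look at the NHRP cache would do (a sketch; 2192:1:5::5 is a hypothetical host in R5’s site):

R2# traceroute 2192:1:5::5
R2# show ipv6 nhrp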

To finish this discussion, let me recap our configuration:

  1. Even though the Tunnel’s address is now IPv6, the encapsulation mode is still set to “tunnel mode gre multipoint.”
  2. Watch out for “ipv6 nhrp” on the Spokes: depending on the argument, this command takes IPv6, IPv4, or both types of addresses as input.
  3. Link-local addresses must be unique within the Cloud. That’s why you should always hard-code them.

Download the Code here.

Learn more about IPv4 & IPv6 with our Video on Demand Course.

