Migrating a Dual-NIC vMotion Network to IPv6

After completing an IPv4 transition in the lab, I wanted to get a bit more stick time with IPv6. vMotion seemed like the logical choice for experimentation: it’s a local traffic type that, when configured incorrectly, simply fails without harming my environment. I’ll leave it to you to figure out when and why you might want to do this, but I had little luck finding any solid documentation on IPv6 out on the web; thus, this post. Kudos to Christian Elsen for his post on vSphere with IPv6 as my starting point.

Enabling IPv6 on the ESXi Hosts

This step was already completed in my new lab network design blog post. Basically, you enable IPv6 support on each host, which requires a reboot. Here’s the VMware pubs doc that describes the process.
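If you’d rather skip the GUI, the same toggle is exposed in the ESXi shell. This is from memory on my 5.5 hosts, so treat it as a sketch rather than gospel:

esxcli network ip get                       # check whether IPv6 is currently enabled
esxcli network ip set --ipv6-enabled=true   # enable IPv6 (a reboot is still required)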

Once completed, all of the VMkernel interfaces will show a link-local unicast address in FE80::/10, which can be listed by typing esxcli network ip interface ipv6 address list in the ESXi shell.

IPv6 Interface List

These don’t really do all that much on their own because each VMkernel interface is on a different VLAN. I tried sending vMotion traffic over the link-local addresses but hit a dead end, along with this error:

2014-10-19T18:56:03.295Z cpu3:156697)Migrate: vm 156700: 3284: Setting VMOTION info: Source ts = 1413744962198089, src ip = <fe80::250:56ff:fe68:2338> dest ip = <fe80::250:56ff:fe67:fdbb> Dest wid = 9871351 using SHARED swap
2014-10-19T18:56:03.295Z cpu3:156697)WARNING: VMotion: 2990: 1413744962198089 S: VMotion init: invalid source IP address specified: <fe80::250:56ff:fe68:2338>

After pondering this for a little bit, I thought it might be a routing issue. I fired an esxcli network ip route ipv6 list at the ESXi shell to view which routes are available for FE80::/10:

IPv6 Routes

Based on this routing table, every interface can get you to the FE80::/10 network, which is really the heart of the problem: a link-local address is only unambiguous when paired with a specific interface (a zone ID), and vMotion doesn’t supply one. Maybe the hypervisor selects the first route on the list, which would be the loopback interface or perhaps vmk0 (management), or maybe it simply refuses link-local addresses outright, as the “invalid source IP address” error suggests. Hard to tell; the VMware documentation for IPv6 is sparse.

vmk0 is on VLAN 20, while my vMotion VMkernel interfaces – vmk1 and vmk2 – are on VLAN 30. Thus, traffic can’t get to the destination. Random interface selection won’t do the trick; I’ll need to ensure that vMotion traffic goes out vmk1 and vmk2.

I suppose at this point I could fuss around with the netstack details on the host, but it seems cleaner to just create an IPv6 network for my vMotion interfaces.

In case you’re wondering, the FF01:: and FF02:: routes are for IPv6 multicast (interface-local and link-local scope, respectively).

IPv6 Network for vMotion

I decided to create a local IPv6 network for vMotion traffic using the FD00::/8 unique local address (ULA) space, with a global ID of 0000000001 and a subnet ID of 0030 to match my VLAN of 30. Thus, my prefix is FD00:0:1:30::/64. RFC 4193 technically wants the global ID to be randomly generated, but for an isolated lab network, I figure this is easy to read and semi-obvious for my brain to correlate to the VLAN. Maybe there are sexier ways to do this.
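For reference, here’s how that prefix breaks down into the RFC 4193 ULA fields:

fd                        8-bit ULA prefix (fc00::/7 with the L bit set)
00 0000 0001              40-bit global ID
0030                      16-bit subnet ID, matching VLAN 30
fd00:0000:0001:0030::/64  full prefix, which compresses to FD00:0:1:30::/64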

My first attempt was to hand out IPv6 addresses using my Windows Server 2008 R2 DHCP server. However, my attempts were not successful: the ESXi interfaces would solicit, the DHCPv6 server would advertise, but no request or reply ever took place. I chalked this up to either ESXi being goofy, my DHCPv6 server being too old, or me just being unskilled with DHCPv6 on Windows. Below is the scope used:

DHCPv6 with Windows Server 2008 R2
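For what it’s worth, the ESXi half of that experiment is just a flag on the VMkernel interface. Again, syntax from memory on 5.5, so consider it a sketch:

esxcli network ip interface ipv6 set -i vmk1 --enable-dhcpv6=true   # ask for an address via DHCPv6
esxcli network ip interface ipv6 set -i vmk2 --enable-dhcpv6=true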

Interestingly enough, when I manually assign a FD00:0:1:30::/64 address to a VMkernel interface and leave the DHCPv6 option enabled, the ESXi server then goes out and picks up an address via DHCPv6 along with my static entry. Weird. Here are a few leases that successfully went through.

DHCPv6 Leases

At this point I’ve proven that the DHCPv6 server works, but perhaps ESXi is the one being goofy. No matter; I just manually assigned IPv6 addresses to my three lab hosts. Since I run dual-NIC vMotion, each host required two addresses. I used the letters A and B at the end to denote which interface is receiving the address.

Assigned IPv6 Addresses
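On the shell, each assignment is a one-liner. Something like this covers one host; the ::1a and ::1b suffixes below are illustrative stand-ins for my actual A/B scheme, and the flag syntax is from memory on 5.5:

esxcli network ip interface ipv6 address add -i vmk1 -I fd00:0:1:30::1a/64
esxcli network ip interface ipv6 address add -i vmk2 -I fd00:0:1:30::1b/64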

And here’s a view from the vSphere Client. It’s not very exciting, sadly. Note that all IPv4 settings are wiped and disabled. There’s also no gateway because this is a local network without a routed interface or SVI.

IPv6 on vmk1
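Before moving on, a quick vmkping sanity check confirms that the hosts can reach each other on the new network; -6 forces IPv6 and -I pins the outgoing VMkernel interface (the destination addresses follow the same illustrative scheme as above):

vmkping -6 -I vmk1 fd00:0:1:30::2a
vmkping -6 -I vmk2 fd00:0:1:30::2b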

vMotion Tests

It’s not worth speed testing my gear (there are plenty of performance tests out there), but I did want to toss a few vMotions back and forth in the lab to ensure it was all operational. I used esxtop to validate that multiple NICs were being used at the same time. Success.
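If you want to repeat the check, it’s quick: kick off a migration and fire up esxtop in the ESXi shell:

esxtop    # then press n to switch to the network view

With dual-NIC vMotion working, the MbTX/s and MbRX/s counters climb on both vMotion vmknics (and both uplinks) simultaneously.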

Dual NIC vMotion Validation