One of the neato-frito features of vSphere 6.0 is the supported ability to use Layer 3 vMotion. It’s even something that Jason Nash and I detailed during our vSphere Distributed Switch 6.0 Deep Dive back at VMworld 2015 US. If you’re interested in setting this up for your environment, I’m going to outline the few steps that are required to do so below.
Before continuing, it’s worth taking a pause to think.
Realize that Layer 3 vMotion does nothing to help the virtual machines running in your cluster(s). When a VM moves from one host to the next, there is an expectation that the necessary network is available at the destination. This could be accomplished by stretching Layer 2 or by using any number of network virtualization technologies. But Layer 3 vMotion, in and of itself, does not address the need for a VM to have the correct network available when it arrives at the target host. I think Tony Bourke did a fair job of expressing this on Twitter in the thread below:
Clarification: No, Layer 3 vMotion is not supported in vSphere 6.0. You still need Layer 2 adjacency between ESXi hosts.
— tbourke (@tbourke) May 23, 2016
The vMotion TCP/IP Stack
In order to leverage Layer 3 vMotion, you’ll need to take advantage of the vMotion TCP/IP stack. This is a separate stack from the default one, meaning you can specify a gateway for vMotion traffic. Each host can have a unique gateway. So long as the gateway can route traffic between the different Layer 3 domains, routed vMotion is a reality.
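If you’d like to see the stacks from the command line, ESXCLI can enumerate them. A quick sanity check from an SSH session (in my experience, the vmotion netstack may not appear until a VMkernel adapter has been assigned to it, which lines up with the notes below):

# List the TCP/IP stacks known to this host.
esxcli network ip netstack list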
A few things to note:
- You’ll need to craft new VMkernel adapter(s) for vMotion. There does not seem to be a way to change the TCP/IP stack on an adapter that has already been created. Boo.
- The vMotion TCP/IP stack cannot be configured until at least one VMkernel adapter is added to the stack.
- Any VMkernel adapters that have vMotion enabled, but are not using the vMotion TCP/IP stack, will have the vMotion feature disabled.
- No other features can be enabled on a VMkernel adapter that uses the vMotion TCP/IP stack. If you want to share an adapter with anything else – such as Fault Tolerance or VSAN – you’ll need to make another adapter. Right or wrong, I bring this up because I see it in the field. 🙂
In short, existing environments will need to nuke the old VMkernel adapters and create new ones. New environments will be fine so long as the vMotion TCP/IP stack is used during initial configuration of vMotion.
VMkernel Adapter Configuration
The actual configuration is really simple. When creating the adapter, make sure to change the TCP/IP stack from default to vMotion. The vMotion service will become enabled, and the other services will grey out and remain disabled. If any other VMkernel adapters have the vMotion service enabled, a popup will state that this functionality will be removed.
I’ve enabled Fault Tolerance on this host to show what the impact would look like when switching to the vMotion TCP/IP stack.
If you wish to create multiple VMkernel adapters for vMotion, which is helpful for multi-NIC vMotion, repeat the process of creating VMkernel adapters and placing them on the vMotion TCP/IP stack for the host. Next, configure the TCP/IP stack.
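If you’d rather script the adapter creation, here’s a rough ESXCLI sketch of the same thing. The port group name (vMotion-PG) and the IP address are placeholders from my lab, and this assumes a standard switch port group (a VDS port uses --dvs-name and --dvport-id instead):

# Create the VMkernel adapter directly on the vmotion netstack; the stack
# must be chosen at creation time since it cannot be changed later.
esxcli network ip interface add --interface-name=vmk4 --portgroup-name=vMotion-PG --netstack=vmotion

# Give the new adapter a static IPv4 address on its /30 network (example values).
esxcli network ip interface ipv4 set --interface-name=vmk4 --ipv4=172.17.8.2 --netmask=255.255.255.252 --type=static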
Configuring the vMotion TCP/IP Stack
The only thing that you really need is the default gateway for the vMotion TCP/IP stack. This can be found by browsing to Host > Manage > Network > TCP/IP configuration and editing the vMotion TCP/IP stack. For this host, the gateway is 172.17.8.1 on a /30 network.
If you’re unable to edit the TCP/IP stack, you do not have any VMkernel adapters using the stack yet. Note that I have one adapter using it in the screenshot above. I’ll repeat the process for a different host using 172.17.12.1 as its /30 gateway.
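For the command line folks, the same gateway can be set by adding a default route scoped to the vmotion netstack. A sketch using this host’s gateway, with a quick verification afterward (I believe the --netstack flag has been on these route commands since the stack was introduced, but treat this as a starting point):

# Set the default gateway for the vmotion netstack only; the default
# TCP/IP stack is left untouched.
esxcli network ip route ipv4 add --gateway=172.17.8.1 --network=default --netstack=vmotion

# Verify the route table for the vmotion netstack.
esxcli network ip route ipv4 list --netstack=vmotion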
Testing Layer 3 vMotion Connectivity
Once configured, I’d suggest testing connectivity with a ping over SSH. In the example below, I’ve logged into the host using the 172.17.8.0/30 network for vMotion and issued a ping to the host using the 172.17.12.0/30 network. Note that this requires specifying the vMotion netstack. A normal vmkping will not work.
ping ++netstack=vmotion -I vmk4 172.17.12.105
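The ++netstack option is what makes this work: it sources the ping from the vMotion TCP/IP stack, which owns vmk4 and the vMotion gateway. A plain vmkping sources from the default stack, which has no route to the remote vMotion network, hence the failure.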
And now to try a vMotion across these two hosts using a victim’s (er, volunteer’s) VM.
If you need to revert this configuration, delete any VMkernel adapters that are using the vMotion TCP/IP stack. Once the last one is removed, the older VMkernel adapters can once again be configured for vMotion (if desired) and the vMotion TCP/IP stack will go dormant.
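If you’re scripting the cleanup, removing an adapter is a one-liner from the shell; vmk4 is the same example adapter referenced above:

# Delete the VMkernel adapter living on the vmotion netstack.
esxcli network ip interface remove --interface-name=vmk4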
This sure beats the old days of having to do the entire set of work via the command line. And I’m sure that you could script the majority of this, if desired. I wanted to show it all “by hand” so that folks would know where all the knobs exist. 🙂