Released with vSphere 5.1, enhanced vMotion is VMware’s answer to the problem of migrating a running workload at both the compute and storage layers simultaneously. An added bonus of this technology is the ability to move between storage platforms that are local to the host or otherwise unshared – hence the “shared-nothing” verbiage.
After giving a vSphere 5.1 storage deep dive presentation at the Chicago VMUG (VMware User Group), I wanted to go deeper on the enhanced vMotion capabilities and really take them for a spin in the lab.
In order to leverage the new enhanced vMotion option, you’ll need an environment running vSphere 5.1. I’m using vCenter 5.1.0 build 799731 with ESXi 5.1.0 build 838463. Additionally, the feature is only available in the vSphere Web Client, sometimes referred to as the “NGC” or “Next Generation Client” – if you try to use the vSphere Client, also known as the C# client or “Thick” client, the option will not be available.
This will continue to be confusing for quite some time, as some features are only available in the vSphere Web Client, while others (like plugins) typically require the vSphere Client. Ultimately, though, the vSphere Web Client will take over.
The option is greyed out in the vSphere Client
Additionally, all of my lab hosts are in a single cluster together and have multi-NIC vMotion configured.
In this experiment, I am going to migrate a Linux VM from one host to another. Both hosts are using discrete storage that is not shared with any other host.
The source datastore is called “OldArray” and is mounted only to the ESX1 host. This is where the VM named “Enhanced-vMotion” is currently living.
And here is a shot of the VM. It is running on ESX1.
Finally, here is the destination datastore named “NewArray” that is available only to ESX2.
Enhanced vMotion in Action
Once I initiate the enhanced, shared-nothing vMotion, I’ll have to answer a few questions. Essentially it’s just like doing a vMotion and Storage vMotion combined – which host to migrate to, which datastore, and what priority level.
Next, I select the host ESX2. All of the datastores that the host can see are presented as options. I select the “NewArray” datastore.
A final review of the migration.
The Networking Magic
Because neither host can see both datastores, vSphere uses the vMotion network to transfer the storage bits. Below is a snapshot from ESXTOP running on the destination host, ESX2. It’s a complicated and busy picture, so I stripped out as much of the unimportant information as possible.
vMotion vmknics (Red Box)
I’ve drawn a red box around the receive rate of both vmk1 and vmk6 – these are the two vmknics used for vMotion on ESX2. Notice that they are receiving roughly 190 Mbps of data across both vmknics – this is the power of multi-NIC vMotion in action.
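For reference, ESXTOP reports network rates in megabits per second, while datastore throughput is usually discussed in megabytes. A quick conversion of the ~190 Mbps figure from my screenshot (your rates will differ):

```python
# Back-of-envelope conversion of the ESXTOP receive rate.
# The ~190 Mbps figure is read off my ESXTOP screenshot; treat it as approximate.
MBIT = 1_000_000  # network rates use decimal megabits

rate_mbps = 190                                    # combined receive rate across vmk1 + vmk6
rate_mb_per_s = rate_mbps * MBIT / 8 / 1_000_000   # megabits/s -> megabytes/s

print(f"{rate_mbps} Mbps ≈ {rate_mb_per_s:.1f} MB/s")  # ≈ 23.8 MB/s
```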
NFS vmnic (Yellow Box)
Additionally, I’ve drawn a yellow box around the transfer rate of the vmnic that my NFS vmknic (vmk2) is using.
This shows that as data comes in (received) over the vMotion network in the red box, it is then written to the NFS datastore (transmitted) over the vmnic in the yellow box.
If you’ll recall from my NFS on vSphere Deep Dive series, NFS can only create one session to the storage array. As such, I’m limited to a single vmnic uplink for my NFS vmknic. vmk2 (NFS) will only use vmnic0 (the assigned uplink) in this scenario.
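That asymmetry is worth spelling out. Assuming standard 1 GbE uplinks (the post doesn’t state link speeds, so treat that as an assumption), the two vMotion vmknics can receive up to ~2 Gbps combined, while NFS writes leave over a single ~1 Gbps uplink. At the ~190 Mbps I observed, though, neither side was anywhere near saturation:

```python
# Theoretical ceilings in this lab topology.
# link_gbps is an assumption (standard 1 GbE NICs); the observed rate is from ESXTOP.
link_gbps = 1.0
vmotion_uplinks = 2   # vmk1 + vmk6 (multi-NIC vMotion)
nfs_uplinks = 1       # vmk2 pinned to vmnic0 (single NFS session)

vmotion_ceiling_gbps = link_gbps * vmotion_uplinks  # up to ~2 Gbps inbound
nfs_ceiling_gbps = link_gbps * nfs_uplinks          # ~1 Gbps outbound to the array

observed_gbps = 0.190
nfs_had_headroom = observed_gbps < nfs_ceiling_gbps

print(vmotion_ceiling_gbps, nfs_ceiling_gbps, nfs_had_headroom)
```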
This rather small Linux VM (about 1 GB) took about 2 minutes to migrate to ESX2. The transfer took a while to get going – roughly a minute seemed to be preparation, as I didn’t see any traffic in ESXTOP. In this case, I’m fairly sure the bottleneck was the source datastore, which was backed by a single 2 TB SATA drive. I migrated this workload back and forth 3 complete times to verify that the process was similar each time.
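As a sanity check on that 2-minute figure: assuming the copy sustained the ~190 Mbps seen in ESXTOP, the raw transfer of ~1 GB works out to roughly 45 seconds, which lines up with what I saw once the minute or so of preparation is added in:

```python
# Sanity check on the observed migration time.
# VM size (~1 GB) and the ~190 Mbps rate come from this lab run; the rest is arithmetic.
vm_size_bytes = 1 * 1024**3       # ~1 GB of VM data to copy
rate_bytes_per_s = 190e6 / 8      # 190 Mbps -> 23.75 MB/s

transfer_s = vm_size_bytes / rate_bytes_per_s
print(f"pure transfer time ≈ {transfer_s:.0f} s")  # ≈ 45 s
```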
I still think this technology can be really handy in a data center migration scenario (moving from old hardware to new hardware in one step) and for other service-related activities. Shared-nothing vMotion is a tool, and like any tool it has times when it is useful and times when it is not. That said, if you are given the opportunity to migrate a workload in an offline state, I would suggest going for it – the migration will be quicker and less stressful to the environment.