Multi-NIC vMotion is a great capability introduced with vSphere 5. Essentially, it allows multiple uplinks to carry vMotion traffic, even when only a single VM is being migrated. It doesn’t require any static-LAG nonsense or other network voodoo; you just configure it properly in vSphere and it works. Because nearly every vSphere admin is comfortable with vMotion and has used it for years, I get a lot of questions about how to enable Multi-NIC vMotion and how it is configured in my lab. Rather than continue to answer that one-on-one, I figured it was worth a blog post. 🙂
Additionally, I’m often asked if Multi-NIC vMotion needs jumbo frames enabled. The simple answer is “no,” and the video featured below walks through my vMotion vmkernel port configuration, highlighting that my vmkernel ports are set to the default 1500 MTU (Maximum Transmission Unit) instead of a jumbo size of 9000. In reality, I don’t encounter many environments that are already tuned for end-to-end jumbo frames, which I assume is why the question comes up so often.
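If you want to verify this on your own hosts, a quick sketch from the ESXi shell looks like the following. The vmkernel interface name (vmk1) and the target address are assumptions from my lab addressing scheme; substitute your own.

```shell
# List vmkernel interfaces and their MTU values (look for 1500 on the
# vMotion vmk ports)
esxcli network ip interface list

# Test the actual path MTU between hosts with vmkping:
# -I picks the vmkernel interface, -d sets "don't fragment", and -s is the
# ICMP payload size (1472 + 28 bytes of headers = a 1500-byte frame)
vmkping -I vmk1 -d -s 1472 10.0.253.52
```

If the vmkping succeeds at 1472 but fails at larger sizes, you know the path is running standard 1500-byte frames end to end.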
Note: Designing for Multi-NIC vMotion gets a little interesting and has a few gotchas. I especially call out converged infrastructure architectures, such as an HP BladeSystem or Cisco UCS, due to the limited number of real uplinks to the upstream network fabric (typically just two). Make sure that you don’t saturate all of the available throughput and starve other types of traffic!
Lab Configuration
It’s not all that difficult to prep your VDS for Multi-NIC vMotion. The basic steps I used in the lab are:
- Create a pair of port groups on a VDS, label them vMotion-A and vMotion-B
- Put them onto the same VLAN (I use 253)
- Set vMotion-A uplinks to Active / Standby, and vMotion-B to Standby / Active
- On the host, create a vmkernel port on each port group using a single subnet (I use 10.0.253.5X and 10.0.253.15X where X = host ID)
- Mark both vmkernel ports for vMotion
- Profit!
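The vmkernel steps above can also be sketched from the CLI. Note that on a VDS the vmkernel ports are easiest to create from the vSphere client; the esxcli commands below assume standard vSwitch port groups named vMotion-A and vMotion-B instead, and the interface names and addresses follow my lab scheme for host ID 1 — adjust all of them for your environment.

```shell
# Create the first vMotion vmkernel port and give it a static address
# (10.0.253.5X scheme, host ID 1)
esxcli network ip interface add -i vmk1 -p vMotion-A
esxcli network ip interface ipv4 set -i vmk1 -I 10.0.253.51 -N 255.255.255.0 -t static

# Create the second vMotion vmkernel port (10.0.253.15X scheme)
esxcli network ip interface add -i vmk2 -p vMotion-B
esxcli network ip interface ipv4 set -i vmk2 -I 10.0.253.151 -N 255.255.255.0 -t static

# Mark both vmkernel ports for vMotion traffic (ESXi 5.1 and later;
# on 5.0, use "vim-cmd hostsvc/vmotion/vnic_set vmkN" instead)
esxcli network ip interface tag add -i vmk1 -t VMotion
esxcli network ip interface tag add -i vmk2 -t VMotion
```

With both vmkernel ports tagged for vMotion and sitting on the same subnet, the host will spread vMotion streams across both uplinks automatically.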
Multi-NIC vMotion Design
And here is a graphical look at the home lab network design for vMotion:
Unused vs Standby Uplinks
I’ve always preferred to use Unused uplinks as opposed to Standby uplinks. My line of thought was around the “double dip” that occurs with a NIC failure, thinking that both streams would end up on the same surviving uplink. However, some very savvy readers (@virtualpedia, @LiorKamrat, and Peter) pointed out in the comments that a vMotion will actually fail outright when using Unused during an uplink failure.
I pulled the uplink from a few servers and tested a vMotion using Unused. The vMotion fails at 14% with an error:
I also tried the other uplink with the same results. The moral of the story: while Unused works fine during normal operation, Standby uplinks are preferred because they allow vMotion traffic to survive an uplink failure.
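For reference, the mirrored Active/Standby teaming can be expressed with esxcli for standard vSwitch port groups (on a VDS, teaming is configured per port group in the vSphere client). The uplink names vmnic2 and vmnic3 are assumptions here; substitute your own.

```shell
# vMotion-A: vmnic2 active, vmnic3 standby
esxcli network vswitch standard portgroup policy failover set -p vMotion-A -a vmnic2 -s vmnic3

# vMotion-B: the mirror image, so each port group prefers a different uplink
esxcli network vswitch standard portgroup policy failover set -p vMotion-B -a vmnic3 -s vmnic2

# Verify the resulting teaming policy
esxcli network vswitch standard portgroup policy failover get -p vMotion-A
```

Because the standby uplink takes over on failure, both vMotion streams briefly share one NIC (the “double dip”) but in-flight migrations complete rather than failing at 14%.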
Video
As always, please make sure to let me know if the video was helpful with a Like, and support my video efforts with your Subscription!