How To Mix Rackmount and Blade Server NICs in a Distributed Switch

I’ve run into a few folks who ask if it’s possible to mix a legacy rackmount design that uses multiple 1Gb NICs (think 4 or more) with a newer converged infrastructure or blade architecture design that supplies a pair of 10Gb NICs inside of a single distributed switch (VDS). The answer is “yes” – and although I’m not a huge fan of doing this, there are times when it’s just necessary, such as when an environment grows into a new blade architecture but still wants to leverage the investment in pizza boxes (slang for 1U rackmount servers).

It’s relatively trivial to mix NIC types when you’re using Fibre Channel HBAs to provide access to the storage array. It becomes a bit stickier when providing network attached storage, either via block with iSCSI or via file with NFS. Here are a few designs that you can reference to move forward.

Mixing NIC Types with iSCSI

This seems to be the most popular configuration in demand for mixed environments. In this example, let’s assume the blades present 2 NICs at 10Gb speed and the rackmount servers have a pair of dual-port 1Gb NICs (4 ports total). If you’re constrained to a single VDS for management or simplicity’s sake, you’d want to create a single VDS with 4 dvUplinks, as shown below.

[Image: mixed-nics-iscsi-design2]
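If you prefer to script the build, here’s a minimal pyVmomi sketch of creating that single VDS with four dvUplinks. The connection handling, network folder object, and switch name are placeholders you’d swap for your own.

```python
from pyVmomi import vim

def create_mixed_vds(network_folder, vds_name="Mixed-VDS"):
    """Create one VDS with four named dvUplinks on the given network folder."""
    config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    config.name = vds_name
    # Blades will only ever connect dvUplink1/2; rackmounts use all four.
    config.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["dvUplink1", "dvUplink2", "dvUplink3", "dvUplink4"]
    )

    spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=config)
    return network_folder.CreateDVS_Task(spec)
```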

The blade hosts would show dvUplink3 and dvUplink4 in a disconnected state, rendering them unusable by any port groups on those hosts. This is by design. As such, I have created a design that leverages two sets of port groups for iSCSI, with the assumption of using iSCSI port binding (as shown here).

The first pair of iSCSI port groups, named iSCSI-A 10Gb and iSCSI-B 10Gb, would be used only by the blade servers with 10Gb NICs when creating the vmkernel ports for iSCSI traffic. Logically, the remaining port groups, named iSCSI-A 1Gb and iSCSI-B 1Gb, would be for the rackmount servers. Make sure to adjust the failover policy accordingly to match the diagram shown (a scripted sketch follows the list below).

  • Management is Active to dvUplink1, Standby to dvUplink2, and Unused to dvUplinks 3 and 4
  • vMotion is Active to dvUplink2, Standby to dvUplink1, and Unused to dvUplinks 3 and 4
  • VM Network(s) are Active to dvUplinks 1 and 2, and Unused to dvUplinks 3 and 4
  • iSCSI-A 10Gb is Active to dvUplink1 and Unused to dvUplinks 2, 3, and 4
  • iSCSI-B 10Gb is Active to dvUplink2 and Unused to dvUplinks 1, 3, and 4
  • iSCSI-A 1Gb is Active to dvUplink3 and Unused to dvUplinks 1, 2, and 4
  • iSCSI-B 1Gb is Active to dvUplink4 and Unused to dvUplinks 1, 2, and 3
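For those scripting the switch, here’s a rough pyVmomi sketch that applies the failover order above to each port group; any uplink left out of the active and standby lists ends up as Unused. The port group names and the lookup helper are assumptions based on the diagram.

```python
from pyVmomi import vim

# Active/standby mapping per port group; anything not listed is Unused.
# "VM Network" stands in for however many VM port groups you actually have.
FAILOVER = {
    "Management":   (["dvUplink1"], ["dvUplink2"]),
    "vMotion":      (["dvUplink2"], ["dvUplink1"]),
    "VM Network":   (["dvUplink1", "dvUplink2"], []),
    "iSCSI-A 10Gb": (["dvUplink1"], []),
    "iSCSI-B 10Gb": (["dvUplink2"], []),
    "iSCSI-A 1Gb":  (["dvUplink3"], []),
    "iSCSI-B 1Gb":  (["dvUplink4"], []),
}

def find_portgroup(vds, name):
    """Look up a distributed port group on the VDS by name."""
    return next(pg for pg in vds.portgroup if pg.name == name)

def set_uplink_order(portgroup, active, standby):
    """Reconfigure one port group's teaming failover order."""
    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy()
    order.inherited = False
    order.activeUplinkPort = active
    order.standbyUplinkPort = standby

    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.inherited = False
    teaming.uplinkPortOrder = order

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.uplinkTeamingPolicy = teaming

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = portgroup.config.configVersion
    spec.defaultPortConfig = port_config
    return portgroup.ReconfigureDVPortgroup_Task(spec)

# Apply to each port group (vds comes from your existing vCenter session):
# for name, (active, standby) in FAILOVER.items():
#     set_uplink_order(find_portgroup(vds, name), active, standby)
```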

Continuing down the rabbit hole, a blade server would have a vmkernel port layout like so:

  • vmkernel port on iSCSI-A 10Gb with an IP on the iSCSI subnet
  • vmkernel port on iSCSI-B 10Gb with a different IP on the iSCSI subnet
  • no other vmkernel ports for iSCSI

And the rackmount servers would have a layout like this (a sketch for creating these vmkernel ports on either host type follows the list):

  • vmkernel port on iSCSI-A 1Gb with an IP on the iSCSI subnet
  • vmkernel port on iSCSI-B 1Gb with a different IP on the iSCSI subnet
  • no other vmkernel ports for iSCSI
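Here’s a hedged pyVmomi sketch of creating one of these vmkernel ports on a distributed port group. The host objects, port group names, and IP addresses are placeholders; a blade would call it once per 10Gb iSCSI port group, and a rackmount once per 1Gb iSCSI port group.

```python
from pyVmomi import vim

def add_vmkernel_port(host, vds, portgroup, ip_address, netmask="255.255.255.0"):
    """Create a vmkernel port bound to a distributed port group."""
    conn = vim.dvs.PortConnection()
    conn.switchUuid = vds.uuid
    conn.portgroupKey = portgroup.key

    vnic_spec = vim.host.VirtualNic.Specification()
    vnic_spec.distributedVirtualPort = conn
    vnic_spec.ip = vim.host.IpConfig(dhcp=False, ipAddress=ip_address,
                                     subnetMask=netmask)

    # The portgroup argument stays empty because the vmknic lands on a
    # distributed port group rather than a standard vSwitch port group.
    return host.configManager.networkSystem.AddVirtualNic("", vnic_spec)

# Example (blade, placeholder IPs on the iSCSI subnet):
# add_vmkernel_port(blade_host, vds, find_portgroup(vds, "iSCSI-A 10Gb"), "10.0.40.21")
# add_vmkernel_port(blade_host, vds, find_portgroup(vds, "iSCSI-B 10Gb"), "10.0.40.22")
```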

Both blade and rackmount servers would be configured with iSCSI port binding so that their iSCSI initiators can have two paths to the iSCSI targets.
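A rough sketch of the binding step is below, using the host’s iSCSI manager in pyVmomi; the software iSCSI adapter name (vmhba33) and vmk device names are placeholders for whatever your hosts actually show, and the esxcli equivalent would be “esxcli iscsi networkportal add”.

```python
def bind_iscsi_vmks(host, hba_name="vmhba33", vmk_devices=("vmk1", "vmk2")):
    """Bind each iSCSI vmkernel port to the software iSCSI adapter."""
    iscsi_mgr = host.configManager.iscsiManager
    for vmk in vmk_devices:
        # Each bound vmknic becomes a separate path to the iSCSI targets.
        iscsi_mgr.BindVnic(iScsiHbaName=hba_name, vnicDevice=vmk)
```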

Mixing NIC Types with NFS

It’s a bit simpler to mix the two types of NICs where NFS is concerned. The version of NFS that vSphere uses (NFS version 3) creates only a single session between the host and an NFS server. Because of this, fewer port groups are needed to isolate the traffic between the 10Gb and 1Gb NICs. Using a pair of uplinks for NFS is really just there to handle a NIC failure.

Interested in more technical goodies on NFS? Check out my NFS on vSphere deep dive series.

[Image: mixed-nics-nfs]

The port groups are configured like so (with a scripted sketch after the list):

  • Management is Active to dvUplink1, Standby to dvUplink2, and Unused to dvUplinks 3 and 4
  • vMotion is Active to dvUplink2, Standby to dvUplink1, and Unused to dvUplinks 3 and 4
  • VM Network(s) are Active to dvUplinks 1 and 2, and Unused to dvUplinks 3 and 4
  • NFS 10Gb is Active to dvUplink1, Standby to dvUplink2, and Unused to dvUplinks 3 and 4
  • NFS 1Gb is Active to dvUplink3, Standby to dvUplink4, and Unused to dvUplinks 1 and 2
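If you scripted the iSCSI failover order earlier, the same set_uplink_order() and find_portgroup() helpers cover the NFS port groups; only the mappings change, and here it’s the Standby uplink (rather than a second Active/Unused pair) that provides the NIC-failure protection.

```python
# NFS port group failover order; uplinks not listed remain Unused.
NFS_FAILOVER = {
    "NFS 10Gb": (["dvUplink1"], ["dvUplink2"]),
    "NFS 1Gb":  (["dvUplink3"], ["dvUplink4"]),
}

for name, (active, standby) in NFS_FAILOVER.items():
    set_uplink_order(find_portgroup(vds, name), active, standby)
```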

The blade server vmkernel port layout would be:

  • single vmkernel port on NFS 10Gb on the NFS subnet
  • no other vmkernel ports for NFS

And the rackmount vmkernel port layout is (see the sketch after this list):

  • single vmkernel port on NFS 1Gb on the NFS subnet
  • no other vmkernel ports for NFS
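Scripted with the add_vmkernel_port() helper from the iSCSI section, that works out to a single call per host; the host objects and IP addresses below are placeholders on an assumed NFS subnet.

```python
# One NFS vmkernel port per host type; hostnames and IPs are placeholders.
for host, pg_name, ip in [
    (blade_host, "NFS 10Gb", "10.0.50.21"),
    (rack_host,  "NFS 1Gb",  "10.0.50.31"),
]:
    add_vmkernel_port(host, vds, find_portgroup(vds, pg_name), ip)
```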

Since there is no such thing as NFS port binding, there’s nothing left to configure.

Thoughts

These designs are by no means the only way to get things done. You could just as easily set the VM Networks port group to Active on all uplinks and let the fact that dvUplinks 3 and 4 are disconnected on the blades handle uplink selection. Or you may have more than 4 uplinks and choose to create virtual NICs on your blades to mirror your rackmount design – there’s a lot of variation. But this should be a good start to get you through the design and implementation of a single VDS with mixed NIC types.

As always, I’m open to feedback, corrections, or better ideas. 🙂