Efficient Virtual Networking Designs for vSphere Home Lab Servers

Although we’d all like to have a plethora of NICs in our physical home lab servers, it’s usually more budget-friendly to go with anywhere from two to four NICs. This makes it a challenge to create a robust set of virtual networks that can be used to study and test a variety of networking scenarios. This post covers a few tricks that can help you build an efficient virtual network for your home lab servers while maximizing your testing and learning opportunities.

Note: These designs are not necessarily best practice in a production environment; they are meant to introduce a beginner to a wide variety of scenarios in a lab setting.

Global Port Groups

Whatever vSwitch configuration you create, a home lab will need at least the following port groups.

Normal Traffic

  • Management – For host management traffic
  • Fault Tolerance – Fault tolerance logging to set up an FT VM
  • vMotion – Migration of virtual machines
  • Virtual Machines – VM traffic

Storage Traffic

  • iSCSI-1 – First portgroup for binding an iSCSI vmknic in the iSCSI Software Adapter
  • iSCSI-2 – Second portgroup for binding an iSCSI vmknic in the iSCSI Software Adapter
  • NFS – NFS storage traffic
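
If you prefer the command line to the vSphere Client, these port groups can be created with esxcli. This is a minimal sketch, assuming a standard vSwitch named vSwitch0 (on a fresh install, the management port group already exists as “Management Network”):

  # Create the minimum set of port groups on an existing standard vSwitch.
  # vSwitch0 is an assumption; substitute your own vSwitch name.
  esxcli network vswitch standard portgroup add -p "Fault Tolerance" -v vSwitch0
  esxcli network vswitch standard portgroup add -p "vMotion" -v vSwitch0
  esxcli network vswitch standard portgroup add -p "Virtual Machines" -v vSwitch0
  esxcli network vswitch standard portgroup add -p "iSCSI-1" -v vSwitch0
  esxcli network vswitch standard portgroup add -p "iSCSI-2" -v vSwitch0
  esxcli network vswitch standard portgroup add -p "NFS" -v vSwitch0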

Two NICs – One vSwitch Configuration

If you’re limited to a pair of NICs, there is still a decent way to set things up for tinkering with vSphere in your home lab. Here is a high-level layout.

Because there are so few NICs in this layout, the connections can be a bit confusing. Here is a list:

Normal Traffic

  • Management – Active on vmnic0, Standby on vmnic1
  • Fault Tolerance – Active on vmnic1, Standby on vmnic0
  • vMotion – Active / Active
  • Virtual Machines – Active / Active

Storage Traffic

  • iSCSI-1 – Active on vmnic0, Unused on vmnic1
  • iSCSI-2 – Active on vmnic1, Unused on vmnic0
  • NFS – Active / Active
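
These teaming overrides can be set per port group in the vSphere Client, or scripted with esxcli, as in this sketch (assuming the single vSwitch is vSwitch0). Uplinks left out of both the active and standby lists end up marked Unused for that port group, which is how the iSCSI port groups pin to a single NIC:

  # Management and Fault Tolerance invert each other's failover order.
  esxcli network vswitch standard portgroup policy failover set -p "Management" -a vmnic0 -s vmnic1
  esxcli network vswitch standard portgroup policy failover set -p "Fault Tolerance" -a vmnic1 -s vmnic0
  # vMotion, Virtual Machines, and NFS ride both uplinks Active / Active.
  esxcli network vswitch standard portgroup policy failover set -p "vMotion" -a vmnic0,vmnic1
  esxcli network vswitch standard portgroup policy failover set -p "Virtual Machines" -a vmnic0,vmnic1
  esxcli network vswitch standard portgroup policy failover set -p "NFS" -a vmnic0,vmnic1
  # Each iSCSI port group pins to one uplink; the other NIC becomes Unused.
  esxcli network vswitch standard portgroup policy failover set -p "iSCSI-1" -a vmnic0
  esxcli network vswitch standard portgroup policy failover set -p "iSCSI-2" -a vmnic1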

From the vSphere Client, the configuration would roughly look like this:

Note the following:

  1. We don’t normally want to mix storage traffic with normal traffic, but with only two uplinks there is no choice.
  2. vMotion is Active / Active so you can experiment with dual vMotion vmkernel ports, a feature introduced with ESXi 5. You might need to create a second port group to force each vmkernel port onto a different uplink, but I have yet to need that in my lab (the configuration as shown works).
  3. iSCSI-1 and iSCSI-2 are configured to allow you to experiment with iSCSI port binding. Yes, you can do iSCSI binding with this layout and still run all of the other traffic scenarios simultaneously (see the sketch after this list)!
  4. You aren’t expected to have two physical switches (A and B) as shown. If you are limited to one physical switch, plug both NICs into it.
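
For the iSCSI port binding in note 3, here is a hedged sketch of the binding step, assuming the software iSCSI adapter is vmhba33 and the iSCSI vmkernel ports are vmk2 and vmk3 (verify yours with "esxcli iscsi adapter list" and "esxcli network ip interface list"):

  # Bind each iSCSI vmkernel port to the software iSCSI adapter.
  # Binding requires each vmknic to have exactly one active uplink and
  # no standby uplinks, which the Active / Unused layout above satisfies.
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
  # Verify the bindings took effect.
  esxcli iscsi networkportal list --adapter=vmhba33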

Four NICs – Two vSwitch Configuration

The four NIC configuration allows for a more proper setup, as many newer environments today are given only four 10 GbE NICs (especially in the converged infrastructure world). VMware also recommends that a server have no fewer than four NICs, and flags any configuration with fewer during a Health Analyzer check.

And here is the list of connections (normal traffic hasn’t changed):

Normal Traffic

  • Management – Active on vmnic0, Standby on vmnic1
  • Fault Tolerance – Active on vmnic1, Standby on vmnic0
  • vMotion – Active / Active
  • Virtual Machines – Active / Active

Storage Traffic

  • iSCSI-1 – Active on vmnic2, Unused on vmnic3
  • iSCSI-2 – Active on vmnic3, Unused on vmnic2
  • NFS – Active / Active
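
Standing up the dedicated storage vSwitch is quick with esxcli. A sketch, assuming the second vSwitch is named vSwitch1:

  # Create the storage vSwitch and attach the two storage uplinks.
  esxcli network vswitch standard add -v vSwitch1
  esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch1
  esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch1
  # Add the storage port groups and pin each iSCSI port group to one uplink.
  esxcli network vswitch standard portgroup add -p "iSCSI-1" -v vSwitch1
  esxcli network vswitch standard portgroup add -p "iSCSI-2" -v vSwitch1
  esxcli network vswitch standard portgroup add -p "NFS" -v vSwitch1
  esxcli network vswitch standard portgroup policy failover set -p "iSCSI-1" -a vmnic2
  esxcli network vswitch standard portgroup policy failover set -p "iSCSI-2" -a vmnic3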

Here’s a screenshot from my lab showing how the switches would look for both normal traffic (which I called Production) and storage traffic. I have added a port group called “Storage Trunk” for trunking VLANs into virtual storage appliances that need multiple virtual interfaces.
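
If you want a similar trunk port group, a standard vSwitch passes all VLAN tags through to the guest (Virtual Guest Tagging) when the port group’s VLAN ID is 4095. A sketch, again assuming the storage vSwitch is vSwitch1:

  # Create the trunk port group and pass all VLANs through to the guest.
  # VLAN ID 4095 on a standard vSwitch enables Virtual Guest Tagging.
  esxcli network vswitch standard portgroup add -p "Storage Trunk" -v vSwitch1
  esxcli network vswitch standard portgroup set -p "Storage Trunk" --vlan-id 4095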

Note the following:

  1. This scenario properly isolates storage traffic.
  2. As with the two-NIC design, vMotion is Active / Active so you can experiment with dual vMotion vmkernel ports; a sketch of adding a second vMotion vmkernel port follows this list. You might need to create a second port group to force each vmkernel port onto a different uplink, but I have yet to need that in my lab (the configuration as shown works).
  3. iSCSI-1 and iSCSI-2 are configured to allow you to experiment with iSCSI port binding.
  4. You aren’t expected to have two physical switches (A and B) as shown. If you are limited to one physical switch, plug all of the NICs into it.
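
For note 2, here is a minimal sketch of adding a second vMotion vmkernel port; the interface name and addressing are placeholders for your own lab values:

  # Create a second vmkernel port on the vMotion port group.
  # vmk4 and the IP address below are placeholders.
  esxcli network ip interface add -i vmk4 -p "vMotion"
  esxcli network ip interface ipv4 set -i vmk4 -t static -I 192.168.50.12 -N 255.255.255.0
  # Tag the new interface for vMotion traffic (ESXi 5.0 style; newer builds
  # can use "esxcli network ip interface tag add" instead).
  vim-cmd hostsvc/vmotion/vnic_set vmk4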

Thoughts

These two configurations should get you started down the path of a well-designed, highly available virtual network in your home lab environment. Just because it’s a home lab doesn’t mean you should create a single vSwitch and leave everything at its defaults – go through the practice of setting everything up properly to reinforce good habits later.