Working with NSX – Fixing VTEPs and Building Logical Switches

In this post, I’ll start setting up Logical Switches. Each Logical Switch is assigned a Segment ID from the Segment ID pool; the concept is similar to VLAN IDs in classical Ethernet, in that each Segment ID becomes one logical layer 2 segment. But first, let’s address an issue that cropped up since I last built out my NSX lab.

Someone Broke my VTEP

In this case, the someone was me. During my lab’s network migration, I deleted a vmkernel interface being used as a VXLAN Tunnel Endpoint (VTEP) on a host. NSX Manager let me know that I broke things when I visited the Networking & Security > Installation > Logical Network Preparation tab.

NSX VTEP Missing
This is why we can’t have nice things.

Fixing this issue is rather simple. Don’t try to re-create the vmkernel interface manually; that’s not going to work. Instead, have NSX fix it for you: it will find an open IP in the pool and craft a replacement vmkernel interface.

Switch over to the Host Preparation tab. There will be an Error listed under the VXLAN column. Click on the Error link, then click on Force-Sync Configuration.

Fixing the VTEP

Two tasks will appear: “Add virtual NIC” followed by “Update network configuration.” Both are performed by your NSX service account.

Tasks to build a new VTEP

After these are finished (and they only take a few seconds), the missing VTEP vmkernel interface will appear.

The VTEP Returns
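If you’d rather verify the fix from a script than a screenshot, the VTEP vmkernel interfaces live on the dedicated “vxlan” TCP/IP stack on each host. Here’s a minimal pyVmomi sketch that lists them; the vCenter address and credentials are lab placeholders, so substitute your own:

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
    import ssl

    # Lab placeholders - use your own vCenter and credentials.
    ctx = ssl._create_unverified_context()  # lab only: self-signed certs
    si = SmartConnect(host="vcenter.lab.local",
                      user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    for host in view.view:
        # VTEPs are the vmkernel NICs bound to the "vxlan" netstack.
        for vnic in host.config.network.vnic:
            if vnic.spec.netStackInstanceKey == "vxlan":
                print(host.name, vnic.device, vnic.spec.ip.ipAddress)

    view.Destroy()
    Disconnect(si)

Each host should report a VTEP (or several, if you deployed multiple per host) with an IP from your VTEP pool.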

That was rather easy, yeah?

Building Logical Switches

As you might have guessed, building new Logical Switches is done in the Logical Switches section of the Networking & Security menu. I’ve created a few of them already.
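Incidentally, if you want to see what already exists without clicking around, NSX Manager will enumerate both the Segment ID pool and the current Logical Switches over its REST API. A minimal sketch, assuming a Manager at nsxmgr.lab.local and admin credentials (both placeholders):

    import requests

    NSX_MGR = "https://nsxmgr.lab.local"  # lab placeholder
    AUTH = ("admin", "VMware1!")          # lab placeholder

    def get(path):
        # NSX-V answers in XML; verify=False is for lab self-signed certs only.
        resp = requests.get(NSX_MGR + path, auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.text

    # The configured Segment ID range(s) backing the pool.
    print(get("/api/2.0/vdn/config/segments"))

    # Existing Logical Switches ("virtual wires"), including their Segment IDs.
    print(get("/api/2.0/vdn/virtualwires"))

The virtualwires output is where you’ll see each switch’s Segment ID, pulled from the pool mentioned earlier.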

Existing Logical Switches

The Server-Templates switch is where I now house all of my VM Templates, while View-Desktops-External hosts my external-facing Windows desktops for remote access.

Let’s build a new one for Desktop-Templates. There are very few options to concern ourselves with:

  1. Name: The name of your switch. I like to use something descriptive and obvious to make administration easier.
  2. Description: Just what it sounds like. I haven’t found much use for these yet, personally.
  3. Transport Zone: Select a transport zone. The replication mode associated with that transport zone is selected automatically.
  4. Enable IP Discovery: Check this box if you want to suppress ARP traffic. As a reminder, an ARP request is generated when a source knows a target’s IP address but not its MAC address; it’s a frame broadcast (destination FF:FF:FF:FF:FF:FF) across the entire layer 2 segment. The NSX Controller cluster can maintain an ARP table for each VXLAN segment, removing the need for most ARPs (there’s a quick illustration after this list).
  5. Enable MAC Learning: A handy feature to enable if your VMs have multiple MAC addresses or are using virtual NICs that trunk VLANs, which is a bit rare to see. Per the official documentation: “Enabling MAC Learning builds a VLAN/MAC pair learning table on each vNic. This table is stored as part of the dvfilter data. During vMotion, dvfilter saves and restores the table at the new location. The switch then issues RARPs for all the VLAN/MAC entries in the table.” (source)
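To make item 4 concrete, this is the kind of frame IP Discovery suppresses. A throwaway Scapy sketch (the addresses are made up):

    from scapy.all import Ether, ARP

    # An ARP request: "who has 10.0.0.50? tell 10.0.0.10"
    # Note the broadcast destination MAC - every VM on the
    # layer 2 segment has to receive and process this frame.
    frame = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(
        op="who-has", psrc="10.0.0.10", pdst="10.0.0.50")
    frame.show()

With IP Discovery enabled, NSX can answer that question from the controller’s ARP table instead of flooding it across the VXLAN segment.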
Creating a new Logical Switch

That’s it. Click OK, and a little bit later you have a new Logical Switch. Note that it appears as a port group in your distributed vSwitch with a long-ish name, but the name of the switch and its Segment ID are embedded towards the end.

The new Logical Switch
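If you’re building more than a handful of these, the whole dialog maps onto a single REST call. A hedged sketch against the NSX-V API; the Manager address, credentials, and transport zone ID (vdnscope-1) are placeholders from my lab:

    import requests

    NSX_MGR = "https://nsxmgr.lab.local"  # lab placeholder
    AUTH = ("admin", "VMware1!")          # lab placeholder
    SCOPE = "vdnscope-1"                  # transport zone ID, from /api/2.0/vdn/scopes

    # The same fields as the UI dialog; a controlPlaneMode element can be
    # added here to override the transport zone's replication mode.
    body = """<virtualWireCreateSpec>
      <name>Desktop-Templates</name>
      <description>Templates for View desktops</description>
      <tenantId>lab</tenantId>
    </virtualWireCreateSpec>"""

    resp = requests.post(
        NSX_MGR + "/api/2.0/vdn/scopes/" + SCOPE + "/virtualwires",
        auth=AUTH, data=body,
        headers={"Content-Type": "application/xml"},
        verify=False,  # lab only: self-signed certificate
    )
    resp.raise_for_status()
    print(resp.text)  # the new virtualwire ID, e.g. "virtualwire-7"

That returned virtualwire ID is the same one you’ll spot embedded in the backing port group’s name.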

As it stands now, the new Logical Switch is mostly worthless because there’s no way to get traffic in or out of the new segment. We could fix this by bridging layer 2 to an existing VLAN, or by creating a Logical Router and new routes to provide layer 3 connectivity. Or, you might actually have a use case for a completely isolated VXLAN segment.

One great example? Disaster Recovery (DR) tests. You could create a number of isolated network segments that have no way of leaving their segment, or that can only talk to a specific set of other segments, for your DR tests. This avoids any need to re-IP things. It’s also a much more powerful method than the “test bubble” networks offered by products like VMware’s Site Recovery Manager (SRM): those networks have no physical uplink, meaning traffic can’t leave a single ESXi host, which makes them mostly worthless for realistic tests.

In the next post, I’ll review layer 2 bridging as a method to help migrate workloads into a new VXLAN segment.