It’s now time to build the logical network and the control plane for NSX between the compute cluster resources and the controllers. To accomplish this, we’ll configure VXLAN on the compute clusters, create VTEP interfaces on the ESXi hosts, define a range of segment IDs, and box it all up into a transport zone. Once this is complete, all of the “one time deployment” tasks for NSX will be done, and we can move on to building out the network and security services that tenants will want to consume.
VXLAN Configuration
Start by navigating to the Web Client > Networking & Security > Installation > Host Preparation. Now that the cluster has been configured for NSX and all hosts show a status of Ready, the option to configure VXLAN becomes available. Click on the Configure link under the VXLAN column for the vSphere cluster.

A wizard will open asking for the VXLAN networking configuration details. Ultimately, this will create a new VMkernel port on each host in the cluster as the VXLAN Tunnel Endpoint (VTEP).
- Switch – The switch used for attaching the new VTEP VMkernel port.
- VLAN – The VLAN ID to use. Enter “0” if you’re not using a VLAN, which will pass along untagged traffic. In my lab environment, I’ve opted to do just that.
- MTU – The Maximum Transmission Unit, or the largest payload size (in bytes) that a frame can carry. The recommended value is 1600, which accommodates the roughly 50 bytes of overhead added by VXLAN encapsulation (outer Ethernet, IP, UDP, and VXLAN headers) on top of a standard 1500 byte payload. The value must be greater than 1550, and the underlying physical network must support it. I have my distributed vSwitch set to 9000 MTU.
- VMKNic IP Addressing – I’d suggest using an IP Pool over a DHCP Client. If you opt to use a pool, select a pool for VTEP interfaces or create a new one. I cover the specifics in the next section.
- VMKNic Teaming Policy – The method used for teaming the vmnics (physical NICs) in the VTEP port group. I’ve opted for Fail Over since I don’t have LACP configured on the vmnics used by this particular vSwitch. Other options are Static EtherChannel, LACP (Active), LACP (Passive), Load Balance by Source ID, Load Balance by Source MAC, and Enhanced LACP.
- VTEP Value – The number of VTEPs per host. This value should be left at default and is not even configurable if you choose Fail Over, Static EtherChannel, or LACP (v1 or v2).

If you do opt to create an IP Pool, the settings are below, along with a sketch of the equivalent API call after the list:
- Name – Whatever you wish to call the pool. I try to go for something descriptive.
- Gateway – The gateway address used to route traffic out of the subnet.
- Prefix Length – The subnet mask in CIDR notation, which is the number of bits to be used for the network portion of the IP address.
- Primary DNS – The primary IP address used for DNS lookups.
- Secondary DNS – The secondary IP address used for DNS lookups.
- DNS Suffix – The suffix to use when looking up host names. Everything in my environment belongs to the glacier.local domain, and so I’ve included it here.
- Static IP Pool – The range of IP addresses that you are granting to NSX for VTEP VMkernel interfaces. Make sure nothing else has control over the range to avoid IP conflicts. I’ve defined a rather small range of 5 addresses since the lab only has 4 hosts, but you’d want to make sure your range is large enough to handle your environment plus any forecasted growth.
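For reference, an IP pool can also be created with an API POST request against the NSX Manager IPAM service. The sketch below uses placeholder addresses and the global scope (globalroot-0); the element names are based on the NSX-v API guide, so verify them against the documentation for your release before using it.
POST https://nsx_mgr_ipaddress/api/2.0/services/ipam/pools/scope/globalroot-0
<ipamAddressPool>
  <name>VTEP-Pool</name>
  <prefixLength>24</prefixLength>
  <gateway>10.20.30.1</gateway>
  <dnsSuffix>glacier.local</dnsSuffix>
  <dnsServer1>10.20.30.5</dnsServer1>
  <dnsServer2>10.20.30.6</dnsServer2>
  <ipRanges>
    <ipRangeDto>
      <startAddress>10.20.30.51</startAddress>
      <endAddress>10.20.30.55</endAddress>
    </ipRangeDto>
  </ipRanges>
</ipamAddressPool>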

It will take a few minutes or so for the configuration to complete. You’ll see the VXLAN status for the cluster change to Enabled. Click on the Logical Network Preparation tab and make sure the VXLAN Transport sub-tab is selected. Expand the cluster to see each host’s new VMkernel interface (which is vmk5 in my lab) and assigned IP address from the pool.
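If you’d rather verify through the API, the network fabric status call can report the same information. The request below uses my lab’s cluster ID (domain-c9) and is only a sketch; confirm the endpoint and resource ID format against the NSX API guide for your version.
GET https://nsx_mgr_ipaddress/api/2.0/nwfabric/status?resource=domain-c9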

You can also cruise on over to your virtual switch to see the new port group that was created. It will have a name similar to vxw-vmknicPg-dvs-UUID.

It’s also possible to configure VXLAN with an API POST request:
POST https://nsx_mgr_ipaddress/api/2.0/nwfabric/configure
<nwFabricFeatureConfig>
  <featureId>com.vmware.vshield.vsm.vxlan</featureId>
  <resourceConfig>
    <resourceId>domain-c9</resourceId>
    <configSpec class="clusterMappingSpec">
      <switch><objectId>dvs-645</objectId></switch>
      <vlanId>0</vlanId>
      <vmknicCount>1</vmknicCount>
      <ipPoolId>ipaddresspool-2</ipPoolId>
    </configSpec>
  </resourceConfig>
  <resourceConfig>
    <resourceId>dvs-645</resourceId>
    <configSpec class="vdsContext">
      <switch><objectId>dvs-645</objectId></switch>
      <mtu>1600</mtu>
      <teaming>FAILOVER_ORDER</teaming>
    </configSpec>
  </resourceConfig>
</nwFabricFeatureConfig>
VXLAN Segment IDs
It’s now time to create Segment IDs. In a way, you can think of these like VLANs for VXLAN … except you can have 16,777,216 of them. Segment IDs will form the basis for how you segment traffic within the virtualized network. Although the 24-bit segment ID field technically allows values all the way up to roughly 16.7 million, VMware has decided to start the count at 5000. This was done to avoid any overlap between VLAN IDs, which range from 1 to 4094, and VXLAN Segment IDs; overlapping numbers would make conversations more troublesome and potentially cause confusion when communicating between teams.
Click on the Segment ID sub-tab and then click on the Edit button. In my lab, I’ve chosen to use the range 5001-5999. This allows me to use 999 different logical networks in my lab, which is overkill. But it’s fun overkill!
- Segment ID Pool – The range of Segment IDs to use for creating network segments.
- Enable multicast addressing – If you want to use Hybrid (Unicast + Multicast) or Multicast mode for your VXLAN network, check this box. I’m only using Unicast mode in the lab, so I’m leaving it unchecked. Keep in mind that multicast is not a requirement for ESXi 5.5 and later, but I tend to prefer Hybrid mode for most production environments because it cleverly uses multicast for replication within a Layer 2 segment and unicast across Layer 3 boundaries.

Here’s the API POST request for creating a new Segment ID pool.
POST https://nsx_mgr_ipaddress/api/2.0/vdn/config/segments
<segmentRange>
  <id>1</id>
  <name>spongebob</name>
  <begin>5001</begin>
  <end>5999</end>
</segmentRange>
Check this out: Interested in comparing the amount of traffic generated in Unicast mode versus Hybrid mode? Check out the NSX BUM (broadcast, unknown unicast, and multicast) traffic calculator by Dmitri Kalintsev! All credit to him.
Transport Zone
Last, but not least, we’ll need to wrap things up with a new Transport Zone. If you’re familiar with the idea of Network Scopes from vCloud Networking and Security (vCNS, formerly vShield), this is roughly the same thing. It’s a way to define which clusters of hosts will be able to see and participate in the virtual network being defined and configured.
Click on the Transport Zones sub-tab, then click the green plus button to add a new transport zone.
- Name – The fancy name that you’ve decided to call this new zone. I am lame and just go with “Transport-Zone” since I only need one.
- Description – Whatever you feel is worth notating about this particular zone. I chose to go with nothing at all. 🙂
- Control Plane Mode – The method that VXLAN will use to distribute information across the control plane. Here are the official details as per the NSX Installation Guide:
- Multicast: Multicast IP addresses on the physical network are used for the control plane. This mode is recommended only when you are upgrading from older VXLAN deployments. Requires PIM/IGMP on the physical network.
- Unicast: The control plane is handled by an NSX Controller. All unicast traffic leverages headend replication. No multicast IP addresses or special network configuration is required.
- Hybrid: An optimized unicast mode. Offloads local traffic replication to the physical network (L2 multicast). This requires IGMP snooping on the first-hop switch, but does not require PIM. The first-hop switch handles traffic replication for the subnet.
- Clusters – Pick the clusters that should be added to the transport zone.

Here’s the API POST request for creating a new Transport Zone (documented in the API as a network scope).
POST https://nsx_mgr_ipaddress/api/2.0/vdn/scopes
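The request body names the scope, lists the member clusters, and sets the control plane mode. Here’s a rough sketch based on the vdnScope schema; the cluster ID (domain-c9) and the UNICAST_MODE value reflect my lab, so verify the element names against the NSX API guide for your version.
<vdnScope>
  <name>Transport-Zone</name>
  <clusters>
    <cluster>
      <cluster>
        <objectId>domain-c9</objectId>
      </cluster>
    </cluster>
  </clusters>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</vdnScope>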
That’s it for this post! At this point, all of the “guts” of NSX are configured and we can begin creating logical switches, routers, firewalls, or whatever else strikes our fancy. The next section focuses on Creating NSX API Calls with PowerShell.