The VMware acquisition of Nicira (July 2012), a company focused on decoupling virtual networking from its physical shackles, caused a serious stir in the community. How would the leader of x86 virtualization use a software defined networking startup to further its business and technological goals? I think the emphasis on the software defined data center (SDDC), a phrase pushed heavily by VMware and its ecosystem, gives the first clue, while the resources dedicated to the NSX project (announced back in March) and the Nicira Network Virtualization Platform (NVP) provide another.
The end result is a synergistic blend of NVP and vCloud Networking and Security (vCNS), formerly known as vShield, that provides some stellar control over the network plumbing in a heterogeneous hypervisor environment with a mix of physical and virtual workloads. The key functions being promoted are decouple, reproduce, and automate. Compare that to the abstract, pool, and automate mantra of SDDC, and you get a familiar vibe. I had the distinct pleasure of having a conversation around NSX with a pair of VMware networking thought leaders, Scott Lowe and Brad Hedlund, so that I could share my thoughts here.
The Vision of Network Virtualization
To set a foundation for the discussion, we first talked about the goals of network virtualization. Namely, the idea is to provide programmatic provisioning that wraps into a policy for changing workloads. There’s still a need to properly construct and plumb in the underlying physical network, but there is a serious reduction in adds/moves/changes (sometimes abbreviated as AMCs or MACs) at the physical layer. Much of the layer 2 – 7 work can now be offloaded to a virtual control and management plane that provides for the networking needs of your virtual workloads within a virtual data center (VDC).
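To make that programmatic provisioning idea a bit more concrete, here’s a minimal sketch of what it looks like to ask a controller for a new logical switch over a REST API. The endpoint path, payload fields, and credentials are placeholders I made up for illustration, not the documented NSX API.

```python
# Minimal sketch of programmatic provisioning: POST a logical switch request
# to a controller's REST API. URL, payload fields, and credentials are
# illustrative placeholders, not the documented NSX API.
import requests

CONTROLLER = "https://nsx-controller.example.com"  # hypothetical address

def create_logical_switch(name, transport_zone_id):
    """Ask the controller to provision a logical switch for a workload."""
    payload = {
        "display_name": name,
        "transport_zone_uuid": transport_zone_id,  # placeholder field name
    }
    resp = requests.post(
        f"{CONTROLLER}/api/logical-switches",  # hypothetical endpoint
        json=payload,
        auth=("admin", "password"),
        verify=False,  # lab-only: skip certificate validation
    )
    resp.raise_for_status()
    return resp.json()
```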
To make a generic comparison, network virtualization shares many of the same goals as compute virtualization:

High Level Architecture
This high level view of the VMware NSX architecture shows the major components: the controller cluster (shown as three nodes), hypervisors – such as vSphere, KVM, and Xen – with virtual switches that can participate in NSX, and an NSX Gateway that acts as a bridge for L2 switching and L3 routing. Here’s a graphic showing the complete solution to whet your appetite, followed by a breakdown of the various components in more detail.

NSX Controllers
The controllers are a net-new, custom solution developed by Nicira and later brought into VMware’s NSX. An NSX controller cluster requires a minimum of three controllers, which avoids any single point of failure and lets the remaining controllers maintain control, by quorum, if one fails. The controllers can be deployed in two form factors: as service virtual machines for a pure vSphere play, or as physical appliances in a multi-hypervisor environment.
Both the virtual and physical offerings run the same code base; they simply address different use cases and are optimized for the delivery option they serve (either vSphere or mixed hypervisor).
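To see why three is the minimum, it helps to look at the quorum math: the cluster needs a strict majority of controllers alive to keep making decisions. The sketch below is just a conceptual illustration of that rule, not NSX’s actual clustering code.

```python
# Conceptual sketch of majority-based quorum, which is why a controller
# cluster starts at three nodes: it can lose one node and still hold a
# majority. This illustrates the math only, not NSX's clustering code.
def has_quorum(cluster_size, live_controllers):
    """Return True if the surviving controllers form a strict majority."""
    return live_controllers > cluster_size // 2

# A three-node cluster tolerates a single failure...
assert has_quorum(cluster_size=3, live_controllers=2)
# ...but a two-node cluster cannot lose anything and keep a majority.
assert not has_quorum(cluster_size=2, live_controllers=1)
```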
NSX Virtual Switches
NSX takes advantage of programmable virtual switches on the hypervisors deployed in the environment. This boils down to two options:
- In a pure vSphere deployment: NSX relies on the vSphere Distributed Switch (VDS) and a Userworld (UW) Agent to communicate with NSX controllers.
- In a mixed hypervisor deployment: NSX relies on the Open vSwitch for KVM and Xen and a new NSX vSwitch for ESXi. The NSX vSwitch is an in-kernel virtual switch that lives in the hypervisor itself.
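As a rough illustration of how a KVM or Xen host’s Open vSwitch gets wired up to talk to the controllers, the sketch below registers the controller cluster as OVS managers. The `ovs-vsctl set-manager` command is real Open vSwitch tooling, but the addresses and port are assumptions on my part, so treat them as placeholders.

```python
# Rough sketch of pointing a KVM/Xen host's Open vSwitch at the controller
# cluster. 'ovs-vsctl set-manager' is standard Open vSwitch tooling, but the
# controller addresses and TCP port below are placeholders -- use the values
# from your own deployment.
import subprocess

CONTROLLERS = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]  # example addresses

def register_with_controllers(controllers, port=6632):
    """Hand the full list of controller targets to ovs-vsctl in one call."""
    targets = [f"ssl:{ip}:{port}" for ip in controllers]
    subprocess.run(["ovs-vsctl", "set-manager", *targets], check=True)

register_with_controllers(CONTROLLERS)
```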
NSX Gateways
In almost all environments there will be a need to get traffic in and out of the NSX environment, as there are always going to be traffic flows that need to chat with other objects such as desktops, the Internet, and legacy infrastructure. This is handled by NSX Gateways. They provide both layer 2 and layer 3 functionality and can move traffic from physical to virtual or virtual to virtual network segments. Similar to the NSX Virtual Switches, there are two types of deployments:
- In a pure vSphere deployment: NSX relies on NSX Edge, which was derived from the vCNS Edge (formerly vShield Edge) appliance.
- In a mixed hypervisor deployment: NSX relies on physical appliances that can be combined for a scale-out architecture.
Multi-Hypervisor Environment Architecture
Now that you’ve learned about the three major components of NSX, let’s look at the architecture in a mixed environment. In this diagram, we see three different hypervisors in play: KVM, Xen, and ESXi. In this scenario, the NSX Gateway Appliances and NSX Controllers would be physical devices.
I’m curious when Hyper-V support will be added (Scott couldn’t release that info). Since Hyper-V has supposedly claimed a good quarter of the hypervisor market and often sits alongside vSphere as a non-production or pre-production counterpart for specific workloads, it would make sense as a future road map item to me.

vSphere-Only Environment Architecture
Also called the “VMware NSX optimized for vSphere” architecture, this layout can leverage virtual appliances for the Controllers and Gateways, and the VDS on the hypervisor. It has a much more complete look to it, as you would expect from a single-vendor approach.
It’s important to note that in today’s architecture, there is a one-to-one relationship between the NSX Manager (a replacement for vCNS Manager) and vCenter Server. This ultimately means that NSX will not span across vCenter Servers.

The little widgets you see on each host are kernel modules:
- Security
- VXLAN
- Distributed Router (DR)
- Distributed Firewall (DFW – functionality basically lifted from vCNS App)
There’s also the Userworld (UW) Agent, which is not shown.
NSX Logical Features
There are five areas of functionality that NSX focuses on providing: switching, routing, firewalling, load balancing, and VPNs. Switching seems to have received the lion’s share of the attention, since it’s mostly net-new functionality that isn’t being forklifted from the vCNS ecosystem. Routing is also rather beefed up. The remaining three items seem to be mostly the same features I’ve worked with in vCNS Edge.
Logical Switching
The meat and potatoes of NSX is the ability to perform layer 2 switching independently from the underlying physical topology. This is sometimes referred to as logical layer 2 adjacency. Using a network overlay (when required), NSX uses VXLAN in a vSphere-Only environment, or a combination of Stateless Transport Tunneling (STT), Generic Routing Encapsulation (GRE), and VXLAN in a multi-hypervisor environment. These are the major heavy hitters in the emulated layer-2 over IP space.
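To give a feel for what an overlay actually adds on the wire, here’s a small sketch of the VXLAN header defined in RFC 7348: eight bytes carrying a 24-bit VXLAN Network Identifier (VNI), wrapped in UDP on port 4789. This is purely illustrative and not tied to any NSX code.

```python
# Minimal sketch of the VXLAN encapsulation header (RFC 7348): 8 bytes
# carrying a 24-bit VNI, carried inside UDP (IANA port 4789).
import struct

VXLAN_FLAGS_VALID_VNI = 0x08  # "I" flag: the VNI field is valid

def build_vxlan_header(vni):
    """Pack flags, reserved bits, and the 24-bit VNI into 8 bytes."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Flags byte plus 3 reserved bytes, then the VNI in the top 24 bits
    # of the second 32-bit word (low byte reserved).
    return struct.pack("!II", VXLAN_FLAGS_VALID_VNI << 24, vni << 8)

def parse_vni(header):
    """Recover the VNI from an 8-byte VXLAN header."""
    _, second_word = struct.unpack("!II", header)
    return second_word >> 8

header = build_vxlan_header(5001)
assert parse_vni(header) == 5001
```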
One big evolution with NSX is the removal of a requirement for multicast with VXLAN. The use of multicast, with IGMP snooping and IGMP querier, is typically a tough nut to crack for most shops due to legacy equipment, legacy runtime code, or a general distrust or fear of multicast traffic. Additionally, NSX can do VXLAN-to-VLAN layer 2 bridging, which is very handy if you need to provide layer 2 adjacency from your network virtualized data center stack into physical gateways, physical workloads, or virtual machines running in another physical segment outside of your NSX environment.
An example is provided below:

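As a purely conceptual complement, the sketch below shows the kind of VNI-to-VLAN mapping a layer 2 bridge has to maintain as frames cross the physical/virtual boundary; the identifiers are invented for illustration and this is not NSX gateway code.

```python
# Conceptual sketch of VXLAN-to-VLAN bridging: a gateway keeps a mapping
# between a logical switch's VNI and a physical VLAN ID, and rewrites the
# encapsulation as frames cross the boundary. Identifiers are made up.
BRIDGE_TABLE = {
    5001: 100,  # VNI 5001 <-> VLAN 100 (e.g., a legacy database segment)
    5002: 200,  # VNI 5002 <-> VLAN 200
}

def vlan_for_vni(vni):
    """Find the physical VLAN a logical segment is bridged to, if any."""
    return BRIDGE_TABLE.get(vni)

def vni_for_vlan(vlan_id):
    """Reverse lookup: which logical segment does a VLAN map back into?"""
    for vni, vlan in BRIDGE_TABLE.items():
        if vlan == vlan_id:
            return vni
    return None

assert vlan_for_vni(5001) == 100
assert vni_for_vlan(200) == 5002
```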
Other improvements for layer 2 switching include:
- Data plane enhancements
  - Support for multiple VXLAN VMkernel NICs
  - Dedicated TCP/IP stack for VXLAN
  - Ready for VXLAN hardware offloads (in the future)
- Control plane enhancements
  - Leverages NSX controllers for a highly available and secure control plane
  - Eliminates the dependency on multicast in the physical network
  - Provides ARP suppression in VXLAN networks
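The ARP suppression item is worth a quick illustration. Because the controllers learn which IP address sits behind which virtual machine, a host can answer ARP requests from a local table instead of flooding them across the overlay. The sketch below is conceptual only; the table contents are invented.

```python
# Conceptual sketch of ARP suppression: answer ARP requests from a
# controller-provided table on the local host, flooding only on a miss.
ARP_TABLE = {
    "10.0.0.10": "00:50:56:aa:bb:01",
    "10.0.0.20": "00:50:56:aa:bb:02",
}

def handle_arp_request(target_ip):
    """Answer from the local table; fall back to flooding on a miss."""
    mac = ARP_TABLE.get(target_ip)
    if mac is not None:
        return f"reply locally with {mac}"
    return "miss: flood the request into the overlay"

print(handle_arp_request("10.0.0.10"))  # answered locally, no flooding
print(handle_arp_request("10.0.0.99"))  # unknown host, falls back to flooding
```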
Hardware VXLAN Tunnel Endpoints (VTEPs) Announced
Upon the initial release of NSX, the following vendors offer hardware VTEPs, which are necessary for a multi-hypervisor environment: Arista, HP, Brocade, Dell, Juniper, and Cumulus Networks.
Logical Routing
NSX has the ability to provide layer 3 routing while again being independent of the underlying physical topology. This includes centralized north-south routing (in and out of the environment) and distributed east-west routing (across the environment).
- East-west routing is provided by the in-kernel distributed routing (DR) module in a pure vSphere deployment along with routing protocol support for BGP and OSPF.
- In a multi-hypervisor environment, the Open vSwitch and NSX vSwitch provide east-west routing.
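To picture what distributed east-west routing buys you, the sketch below shows the longest-prefix-match lookup a hypervisor-resident router performs against locally programmed routes, so traffic between two logical networks never has to hairpin through a central appliance. The routes and next hops are invented for illustration.

```python
# Conceptual sketch of an east-west lookup in a distributed router:
# longest-prefix match against locally programmed routes. Routes and
# next hops are invented for illustration.
import ipaddress

ROUTES = {
    ipaddress.ip_network("10.1.0.0/24"): "logical-switch-web",
    ipaddress.ip_network("10.2.0.0/24"): "logical-switch-app",
    ipaddress.ip_network("0.0.0.0/0"): "edge-uplink",  # north-south default
}

def lookup(dst_ip):
    """Return the next hop for the most specific matching prefix."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in ROUTES if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

assert lookup("10.2.0.15") == "logical-switch-app"  # stays east-west
assert lookup("8.8.8.8") == "edge-uplink"           # heads north-south
```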
Logical Firewalling
Much like the routing story, NSX offers centralized north-south firewalling (in and out of the environment) and distributed east-west firewalling (across the environment). Supposedly this functionality is a direct evolution of the vCNS App (vShield App) appliance, which fills a very similar role.
- East-west firewalling is provided by the in-kernel distributed firewall (DFW) module in a pure vSphere deployment and is optimized for roughly 18 Gbps of throughput.
- In a multi-hypervisor environment, ACLs and security groups are used instead.
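For a rough feel of how distributed firewalling behaves, the sketch below walks an ordered rule list with first-match-wins semantics and a default deny. The rules and security group names are invented for illustration and are not the actual DFW rule syntax.

```python
# Conceptual sketch of distributed firewall evaluation: each hypervisor
# checks a flow against an ordered rule list, first match wins, and
# anything unmatched is dropped. Rules and group names are invented.
RULES = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "action": "allow"},
    {"src": "app-tier", "dst": "db-tier",  "port": 3306, "action": "allow"},
]

def evaluate(src_group, dst_group, port):
    """First-match wins; anything unmatched is denied."""
    for rule in RULES:
        if (rule["src"], rule["dst"], rule["port"]) == (src_group, dst_group, port):
            return rule["action"]
    return "deny"

assert evaluate("web-tier", "app-tier", 8443) == "allow"
assert evaluate("web-tier", "db-tier", 3306) == "deny"  # no direct web-to-db path
```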
Logical Load Balancing and VPN
These features will only be available for a vSphere-Only environment upon release.
- The NSX Edge server will provide load balancing, with support for layer 7 rules and SSL termination.
- The NSX Edge server will also provide both site-to-site and remote access VPNs, using your choice of IPsec or SSL.
This should sound very familiar to you if you’ve worked with the vCNS Edge appliance before.
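To round things out, here’s a conceptual sketch of the kind of layer 7 decision an SSL-terminating load balancer can make once it sees the decrypted request: picking a server pool based on the URL path. Pool names and paths are invented for illustration.

```python
# Conceptual sketch of a layer 7 load balancing rule: choose a server pool
# by URL prefix, falling back to a default pool. Names are invented.
POOLS = {
    "/api/": ["app-01", "app-02"],
    "/static/": ["web-01", "web-02"],
}
DEFAULT_POOL = ["web-01", "web-02"]

def select_pool(request_path):
    """Route by URL prefix; fall back to the default pool."""
    for prefix, pool in POOLS.items():
        if request_path.startswith(prefix):
            return pool
    return DEFAULT_POOL

assert select_pool("/api/orders") == ["app-01", "app-02"]
assert select_pool("/index.html") == ["web-01", "web-02"]
```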