The final post in the vSphere 5.5 Improvement series will focus on the various ways VMware has poured awesomesauce on their networking stack beyond the NSX announcements. To begin, we’ll cover some of the more hypervisor-specific improvements to networking, but then dive deeper into Distributed Switch upgrades. After being briefed on the upgrades to the Distributed Switch (VDS), I get the strong impression that VMware is doing everything it can to reach feature parity with the Nexus 1000v switch.
For those who are riding the surfboard of danger on a cutting-edge wave, you’ll be pleased to note that vSphere 5.5 now supports 40 Gb network adapters. To be perfectly honest, this is still a rarity in the enterprise for server-level connectivity. I typically see 40 Gb on spine switches connected to leaf switches, or dropped down to 10 Gb via QSFP+ breakout cables (they look like an octopus). But the march of progress continues, and I’m sure we’ll see 40 Gb creep out of the core and into the access layer at some point. It’s always good to be prepared.
In the same vein, VMware also touts the ability to push vMotion traffic at a peak of 36 Gbps. I suppose with all the newly published Super Monster VM sizes, we’ll start seeing some mission-critical use cases that need to shift around mountains of utilized RAM.

LACP Enhancements
For those still rocking out with port channels to their vSphere hosts, the Distributed Switch (VDS) received some extra attention around LACP. To be specific:
- You can now have 64 LAGs (Link Aggregation Groups) on a VDS instead of just one.
- There are now 22 different hashing algorithms available for tuning traffic balancing, up from the single option (src-dst-ip) supported prior to this upgrade. This is a great upgrade, because not everything balances evenly over IP address.
- New LAG templates allow a configuration to be copied from one host to other hosts in the cluster.
- Creation and management of the LAG have been moved to the VDS itself and are no longer part of the port group (see the sketch below for a scripted example).
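If you’d rather script the LAG than click through the Web Client, here’s a rough pyVmomi sketch. I’m assuming the LacpGroupConfig / LacpGroupSpec objects and the UpdateDVSLacpGroupConfig_Task method described in the 5.5 API reference, plus a made-up switch name and hash string, so double-check the particulars against your own environment before trusting it.

```python
# Sketch only: create a LAG on an existing vSphere 5.5 Distributed Switch.
# Assumes an authenticated ServiceInstance `si` (e.g. from pyVim.connect.SmartConnect)
# and a VDS named "dvSwitch01" (placeholder). Type, field, and enum names follow
# the 5.5 API reference and should be verified against your SDK build.
from pyVmomi import vim

def find_vds(content, name):
    """Locate a distributed switch by name via a container view."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    try:
        return next(d for d in view.view if d.name == name)
    finally:
        view.Destroy()

def add_lag(si, vds_name="dvSwitch01", lag_name="lag1", uplinks=2):
    vds = find_vds(si.RetrieveContent(), vds_name)

    lag = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig()
    lag.name = lag_name
    lag.mode = "active"                                   # or "passive"
    lag.uplinkNum = uplinks                               # uplinks in this LAG
    lag.loadbalanceAlgorithm = "srcDestIpTcpUdpPortVlan"  # one of the new hashes

    spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec()
    spec.lacpGroupConfig = lag
    spec.operation = "add"

    # Enhanced LACP is defined against the switch itself, not the port group.
    return vds.UpdateDVSLacpGroupConfig_Task(lacpGroupSpec=[spec])
```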
Enhanced SR-IOV
SR-IOV, or Single Root I/O Virtualization, doesn’t seem to get a lot of play in most environments. From my perspective, it rarely enters the conversation because most of the hardware I work with, such as the Cisco VIC adapter, handles the creation of distinct virtual NICs on behalf of the server. SR-IOV allows a physical PCIe network adapter to be logically divided into multiple devices presented to the hypervisor.
SR-IOV has two tools in its belt: physical functions (PFs) and virtual functions (VFs). Per VMware:
PFs are full PCIe functions that include the SR-IOV Extended Capability which is used to configure and manage the SR-IOV functionality. It is possible to configure or control PCIe devices using PFs, and the PF has full ability to move data in and out of the device. VFs are lightweight PCIe functions that contain all the resources necessary for data movement but have a carefully minimized set of configuration resources. (source)
In vSphere 5.5, port group specific properties are communicated to the VFs. In 5.1, many settings (such as VLAN IDs) were not being passed along. Perhaps this will help spur further adoption, despite all the caveats? 😛
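If you want to poke at the host side programmatically, it boils down to telling the PCI passthrough system how many VFs a physical function should expose. Below is a minimal pyVmomi-style sketch; I’m assuming the vim.host.SriovConfig object and the UpdatePassthruConfig() method from the 5.5 API docs, and the PCI address is a placeholder, so verify the specifics before running it.

```python
# Sketch only: expose virtual functions (VFs) on an SR-IOV capable pNIC.
# Assumes `host` is a vim.HostSystem and pci_id is the PF's PCI address
# (the "0000:04:00.0" value below is just a placeholder). Field names follow
# the 5.5 API reference (HostSriovConfig); a host reboot is typically required.
from pyVmomi import vim

def enable_sriov(host, pci_id="0000:04:00.0", num_vfs=8):
    cfg = vim.host.SriovConfig()
    cfg.id = pci_id                   # PCI address of the physical function
    cfg.passthruEnabled = False       # we want SR-IOV VFs, not plain passthrough
    cfg.sriovEnabled = True
    cfg.numVirtualFunction = num_vfs  # how many VFs the PF should carve out

    host.configManager.pciPassthruSystem.UpdatePassthruConfig([cfg])
```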
Traffic Filtering
Think of this feature as a type of Access Control List (ACL) for the VDS. It brings port security and traffic classification (by IP address, MAC address, ingress / egress direction, and traffic type) together with allow and deny logic.
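In API terms, this shows up as an ordered ruleset attached to a port group’s filter policy: each rule pairs one or more qualifiers (IP, MAC, or system traffic type) with an action such as accept or drop. Here’s a rough pyVmomi sketch of a single drop rule; the DvsTrafficRule / DvsTrafficRuleset / DvsFilterPolicy object names come from the 5.5 API reference, but the exact pyVmomi spellings and the subnet used are my own assumptions.

```python
# Sketch only: push a drop rule onto a distributed port group's traffic filter.
# Assumes `pg` is a vim.dvs.DistributedVirtualPortgroup; class names mirror the
# 5.5 API data objects (DvsTrafficRule, DvsIpNetworkRuleQualifier, etc.) but the
# pyVmomi paths used here should be verified against your SDK.
from pyVmomi import vim

def block_subnet(pg, bad_prefix="10.10.50.0", prefix_len=24):
    qualifier = vim.dvs.TrafficRule.IpQualifier()
    qualifier.sourceAddress = vim.IpRange(
        addressPrefix=bad_prefix, prefixLength=prefix_len)

    rule = vim.dvs.TrafficRule()
    rule.description = "Drop traffic sourced from the quarantined subnet"
    rule.direction = "both"                         # ingress and egress
    rule.qualifier = [qualifier]
    rule.action = vim.dvs.TrafficRule.DropAction()  # deny; AcceptAction to allow

    ruleset = vim.dvs.TrafficRuleset(enabled=True, rules=[rule])
    filter_config = vim.dvs.TrafficFilterConfig(trafficRuleset=ruleset)

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.filterPolicy = vim.dvs.DistributedVirtualPort.FilterPolicy(
        filterConfig=[filter_config])

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion    # required for reconfigure
    spec.defaultPortConfig = port_config
    return pg.ReconfigureDVPortgroup_Task(spec)
```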
Below is a highly accurate artist’s rendering of what this feature will most likely look like in a heavily congested traffic scenario.

QoS Tagging
Quality of Service (QoS) has been improved with the addition of layer 3 Differentiated Services Code Point (DSCP) values. This essentially lets your QoS markings float up a layer beyond the layer 2 802.1p values, which helps control QoS across network segments: 802.1p tags live in the Ethernet header and get stripped at routed boundaries, while DSCP rides in the IP header and survives the hop. While I’m generally not a huge fan of QoS, since it’s complex, challenging, and usually requires constant tuning, this is a value add for those who wish to deploy it.
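Under the covers, DSCP marking rides the same traffic rule machinery as the filtering above: instead of an accept or drop action, a rule carries a tag-update action with a DSCP value. Here’s a tiny continuation of the earlier sketch, again assuming the 5.5 DvsUpdateTagNetworkRuleAction object and my guess at the pyVmomi naming, so treat it as unverified.

```python
# Sketch only: mark matching traffic with DSCP 46 (EF) instead of dropping it.
# Reuses the rule-building approach from the traffic-filtering sketch above;
# the UpdateTagAction name and dscpTag field follow the 5.5 API reference.
from pyVmomi import vim

def make_dscp_rule(description="Mark voice traffic as EF"):
    rule = vim.dvs.TrafficRule()
    rule.description = description
    rule.direction = "outgoingPackets"               # tag traffic leaving the port
    rule.action = vim.dvs.TrafficRule.UpdateTagAction(dscpTag=46)
    # Add qualifiers (IP / MAC / system traffic) and drop the rule into a
    # TrafficRuleset exactly as in the filtering example.
    return rule
```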
Packet Capture
Traditionally, network engineers had two options available to monitor or clone traffic on a VDS: NetFlow or Port Mirroring. Both were more of a “means to an end,” with the VDS simply exporting or mirroring data while something else (Wireshark comes to mind) was ultimately responsible for sniffing or analyzing the traffic. With the 5.5 VDS, you can now leverage a CLI tool, pktcap-uw, that behaves like an enhanced version of tcpdump. It can report on traffic at the vNIC, vSwitch, or uplink level, along with packet path and time stamps. Very handy.
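If you want to script a quick capture rather than babysit a shell session, here’s a small Python sketch that simply shells out to pktcap-uw from the ESXi shell. The --uplink and -o flags match the examples I’ve seen documented, but the timed-run wrapper and the 30-second window are purely my own illustration; check pktcap-uw -h on your host.

```python
# Sketch only: run a timed pktcap-uw capture on an uplink and save a pcap for
# Wireshark. Assumes it runs in the ESXi shell where pktcap-uw lives; flag
# names should be confirmed with `pktcap-uw -h` on your build.
import subprocess
import time

def capture_uplink(vmnic="vmnic0", out_file="/tmp/vmnic0.pcap", seconds=30):
    proc = subprocess.Popen(["pktcap-uw", "--uplink", vmnic, "-o", out_file])
    time.sleep(seconds)   # let the capture run for a while
    proc.terminate()      # stop pktcap-uw; the pcap is ready for analysis
    proc.wait()
    return out_file
```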