The use of VLANs is rather common in most data center environments as a method to control traffic and partition off pieces of the network. Historically, ports on an access switch or fabric extender, such as the Catalyst 3750 or Nexus 2K respectively, were connected to the physical server's NICs as access ports. Most servers never knew (nor cared) what VLAN they were on – the physical switch stripped off the 802.1Q VLAN tag before sending any traffic to the server, and added the tag back when traffic was received from the server. Additionally, the typical server doesn't need access to more than one VLAN, and it's good security practice to isolate a server to the VLAN necessary to fulfill its role (app, web, database, etc.)
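For reference, here's roughly what that classic access port looks like on a Cisco IOS switch. This is a minimal sketch – the interface name and VLAN 20 (a hypothetical web-tier VLAN) are just examples:

    ! Hypothetical server-facing access port - the switch handles all tagging
    interface GigabitEthernet1/0/10
     description Web server NIC
     switchport mode access
     switchport access vlan 20
     spanning-tree portfast

The server simply sees untagged Ethernet frames; everything VLAN-related happens on the switch side.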
The introduction of server virtualization really shook up this practice and started the trend of requesting trunk ports (sometimes called tagged ports) to the server's NICs. While I'm not saying the idea of trunking to a server is completely foreign, I will admit that I rarely saw it as a network admin.
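By contrast, a trunk port facing a virtualization host might look something like the sketch below. The interface name is hypothetical, and the allowed VLAN list uses the lab VLANs from the tutorial that follows:

    ! Hypothetical trunk port facing an ESXi host's NIC
    interface GigabitEthernet1/0/11
     description ESXi host vmnic0
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk native vlan 1
     switchport trunk allowed vlan 1,252,253
     spanning-tree portfast trunk

With a trunk, tagged frames for each allowed VLAN are passed through to the host, and it becomes the hypervisor's job to sort traffic into the right port groups.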
A Simple Tutorial
As an extension to my porcupine discussion in my “It’s A Trunk! Using Portgroup VLANs with vSphere” post, I’ve created a brief demonstration of the VLAN configuration in my lab environment. I’m using VLAN tags for my Fault Tolerance (252) and vMotion (253) port groups, and the native VLAN (1) for everything else.
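If you prefer the ESXi command line over the vSphere Client, those port group VLAN IDs can be set with esxcfg-vswitch. This is a sketch assuming a standard vSwitch named vSwitch0 and port groups named exactly as shown – adjust to match your own environment:

    # Tag vMotion and Fault Tolerance traffic on their port groups
    esxcfg-vswitch -p "vMotion" -v 253 vSwitch0
    esxcfg-vswitch -p "Fault Tolerance" -v 252 vSwitch0
    # VLAN 0 means untagged, so this traffic rides the switch's native VLAN 1
    esxcfg-vswitch -p "Management Network" -v 0 vSwitch0

Note that a port group set to VLAN 0 sends frames untagged, which is why the physical switch's native VLAN setting (VLAN 1 in my lab) determines where that traffic ends up.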