Network consolidation is becoming more prevalent in the enterprise data center, driven by both converged infrastructure and 10GbE. In situations where a vSphere host no longer has discrete uplinks dedicated to NFS traffic, you may wonder why it’s still a good idea to create a unique VMkernel port for NFS. Or, to sum this up in a single question:
If all traffic is going out the same uplink, why should I bother building a unique VMkernel port for passing NFS packets?
I think this is a worthy question, but answering it first requires some assumptions.
First, we’ll need to assume that you are using a distinct VLAN and subnet combination for your storage traffic. There are a lot of good reasons to use VLANs to segment traffic, which I’m not going to go into here. Your management VMkernel port (let’s call it vmk0) will be on one VLAN, and the storage array will be on another. Thus, in order to get from the management subnet to the storage subnet, inter-VLAN routing is required.
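To make that routing requirement concrete, here’s a short sketch using Python’s standard `ipaddress` module. The addresses are hypothetical placeholders for a VLAN 100 management subnet and a VLAN 200 storage subnet; substitute your own.

```python
import ipaddress

# Hypothetical addressing: vmk0 on the VLAN 100 management subnet,
# the NFS array on the VLAN 200 storage subnet.
vmk0 = ipaddress.ip_interface("10.0.100.10/24")   # management VMkernel port
array = ipaddress.ip_address("10.0.200.50")       # NFS storage array

# The array does not fall inside vmk0's subnet, so the host must hand
# packets to a gateway, and inter-VLAN routing has to happen somewhere.
needs_routing = array not in vmk0.network
print(needs_routing)  # True
```

The only question left is *where* that routing happens, which is what the diagrams below explore.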
Second, I’ll assume that you’re following the standard practice of placing L3 routing at the core layer of the 3 tier network model. Yes, it can live at other layers, but then we’re probably looking at a collapsed or 2 tier model, which would change the diagrams below a bit.
NFS Traffic Flows
Based on the assumptions above, let’s build out the logical scenario. VLAN 100 is used for vSphere management traffic, and VLAN 200 is for storage array traffic. The diagrams below show a generic 3 tier networking design. Each layer is represented by only a few switches, but may in actuality comprise many more switches all connected together. I’m using the Cisco Nexus series for this demonstration.
No NFS VMkernel Port
In this first diagram, I’ve outlined two of the possible routes that traffic would have to take in order to reach your storage array when no VMkernel port exists on the NFS VLAN.
Essentially, the packets must reach the point where they can be routed from one VLAN to another. Since L3 routing occurs at the core layer, the traffic has to travel quite a distance before coming back down to the storage array. The orange line highlights a situation where an issue such as spanning tree or a hairpin turn has caused the traffic to traverse a completely different distribution switch on the way to the storage array.
Put simply, this is an inefficient design that wastes throughput at the higher layers and introduces additional switch overhead and packet latency.
With An NFS VMkernel Port
In this diagram, I’ve added a VMkernel port on the NFS subnet. Traffic no longer has to be routed; it simply needs to be switched, because both source and destination are on the same VLAN and subnet. If the vSphere host does not know the MAC address of the storage array, it simply sends out an ARP request and can then start forwarding traffic.
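The switch-versus-route decision described above can be sketched in a few lines of Python. The `next_hop` function and the addresses are purely illustrative, not anything vSphere actually exposes; they just model the standard host forwarding logic.

```python
import ipaddress

def next_hop(src_iface: str, dst_ip: str, gateway: str) -> str:
    """Sketch of a host's forwarding decision: if the destination is on
    the local subnet, ARP for it and switch directly; otherwise hand the
    packet to the gateway for inter-VLAN routing."""
    iface = ipaddress.ip_interface(src_iface)
    dst = ipaddress.ip_address(dst_ip)
    return dst_ip if dst in iface.network else gateway

# Hypothetical addresses: a storage VMkernel port on the VLAN 200 subnet
# reaches the array directly, while vmk0 on VLAN 100 must use its gateway.
print(next_hop("10.0.200.11/24", "10.0.200.50", "10.0.100.1"))  # 10.0.200.50 (switched)
print(next_hop("10.0.100.10/24", "10.0.200.50", "10.0.100.1"))  # 10.0.100.1 (routed)
```

With the dedicated VMkernel port, the next hop is the array itself, so the traffic never has to climb to wherever L3 routing lives.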
In addition to the diagram above, you could also put the storage array on a switch closer to the vSphere host(s) to reduce the hop count. I’ve seen environments where the storage was attached to the upstream distribution switch (in this case, a Nexus 5K) to eliminate further hops.
Here are some other topologies and their flows. Note that the orange path depends on the type of access layer switch being used; I believe a Nexus 2K, being just a fabric extender (FEX), would still require traffic to go up to the 5K.
While there is nothing earth-shattering about the networking information above, it highlights a simple fact: it is always a good idea to create a VMkernel port on the same subnet as the storage array when using NFS. Even in a scenario where only one switch is being used, that switch would still have to route the traffic from one VLAN to the other, which introduces overhead, and potentially latency, for no real reason. I personally prefer a non-routable VLAN for NFS storage traffic, unless the array cannot handle virtual interfaces or does not have a management interface that can be placed on a separate management VLAN.