13 Responses

  1. NFS on vSphere – A Few Misconceptions « Wahl Network

    […] NFS on vSphere – Technical Deep Dive on Multiple Subnet Storage Traffic […]

  2. Duane Haas (@duhaas)

    Another great job; you have a knack for making things easy to understand. If you have a moment, I would love your opinion on a recent post I put up on the EMC community site:

    https://community.emc.com/message/616650#616650

  3. Nexenta storage for the vLab | Erik Bussink

    […] balancing NFS deep dive with multiple subnet by Chris […]

  4. Duane Haas (@duhaas)

    One more question: I just finished setting up a new EMC VNXe for a customer. I have two uplinks for IP storage on the ESXi side, and each storage processor on the EMC side has two interfaces, one on each of the two networks I've carved out for storage. I'm assuming I should make sure the vSphere vSwitch is plugged into trunk ports so that both networks can travel over a single vSwitch with multiple port groups? I'm just trying to understand the failure case: should I lose one of those NICs in the vSwitch, the other NIC obviously needs to be able to handle both networks.
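
    To illustrate, here is roughly what I am picturing on the host side (the port group names, VLAN IDs, and vmnic numbers are placeholders for my environment, untested):

      # Tag each NFS port group with its own VLAN so both storage
      # networks can ride the same pair of trunked uplinks:
      esxcli network vswitch standard portgroup set -p "NFS-A" -v 100
      esxcli network vswitch standard portgroup set -p "NFS-B" -v 200

      # Pin each port group to one vmnic with the other as standby,
      # so a failed NIC still carries both networks:
      esxcli network vswitch standard portgroup policy failover set \
        -p "NFS-A" --active-uplinks=vmnic2 --standby-uplinks=vmnic3
      esxcli network vswitch standard portgroup policy failover set \
        -p "NFS-B" --active-uplinks=vmnic3 --standby-uplinks=vmnic2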

  5. John Terribili (@johnterribili)

    Hi Chris! I was wondering if you could shed some light on whether this configuration is valid when using a standard vSwitch as opposed to a dvSwitch. I am having some trouble getting this to work with a standard vSwitch. Thanks for your time!
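
    In case it helps, this is roughly how I built it so far (the vSwitch name, port group names, and addresses are just my lab values):

      # Two NFS port groups on the existing standard vSwitch:
      esxcli network vswitch standard portgroup add -v vSwitch1 -p NFS-1
      esxcli network vswitch standard portgroup add -v vSwitch1 -p NFS-2

      # One VMkernel interface per storage subnet:
      esxcli network ip interface add -i vmk2 -p NFS-1
      esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.0.1.10 -N 255.255.255.0
      esxcli network ip interface add -i vmk3 -p NFS-2
      esxcli network ip interface ipv4 set -i vmk3 -t static -I 10.0.2.10 -N 255.255.255.0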

  6. skynet

    I want to use NFS on vSphere 5.5, and I have two uplinks: uplink A is 10 GbE and uplink B is 1 GbE. I want all NFS traffic on A, with B as failover.

    I created two subnets like your pinning example; each subnet has a link into the NAS and a dedicated physical switch. The problem seems to be that I have to make two NFS mounts, one for each subnet. If link A goes down, how does the VMkernel know to switch to the other subnet and mount? Perhaps with some VMkernel routes?
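
    For reference, here is roughly what that looks like on my host (the array addresses and volume names are made up):

      # One NFS mount per subnet, so each datastore rides a different
      # VMkernel port:
      esxcli storage nfs add -H 10.0.1.50 -s /vol/nfs_a -v datastore_a
      esxcli storage nfs add -H 10.0.2.50 -s /vol/nfs_b -v datastore_b

      # Confirm which VMkernel port actually reaches each subnet:
      vmkping -I vmk1 10.0.1.50
      vmkping -I vmk2 10.0.2.50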

  7. kjstech

    This is a great article, and I thank you for it. I created a filesystem on each subnet, and I have vmk4 on one subnet and vmk5 on another. Using Iometer, I created a test that hits both filesystems hard, and in esxtop I can verify good utilization on both vmk4 and vmk5 out of my 10 GbE NIC.

    Great test setup, and it was fun to follow along and test here as well. We are running ESXi 5.0 U3. I did have to run export TERM=xterm in SuperPuTTY before executing esxtop in order for it to display correctly.
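
    For anyone else following along, the exact sequence in the SSH session was:

      # esxtop garbles its display in SuperPuTTY unless the terminal
      # type is set first:
      export TERM=xterm
      esxtop
      # then press 'n' for the network view to watch vmk4 and vmk5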

    I can say this works as expected with an EMC VNX5200 storage array on 10 GbE networking.

    1. kjstech

      Revisiting this two years later: is this still necessary for ESXi 6.0 Update 2 (and beyond)? I've since upgraded our hosts to 6.0 Update 2, which preserved this configuration, and everything is working fine. However, is it really necessary? I'm leaving it as is (if it ain't broke, don't fix it), but this is just curiosity speaking…

  8. elee

    Could we set something like this up using standard switches? I have two 10 GbE NICs, and I would set the switch ports to trunk mode with multiple VLANs. On the vSphere side, I would configure multiple subnets to distribute NFS VMkernel traffic across the two links and have each port group use the other 10 GbE link as standby for failover. Would that work?
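
    Something like this is what I have in mind (the vSwitch name, port group names, and vmnic numbers are assumptions for my setup, untested):

      # Both 10 GbE uplinks live on one standard vSwitch:
      esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic4
      esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic5

      # Alternate active/standby per NFS port group, so each link
      # normally carries one subnet but can take over the other:
      esxcli network vswitch standard portgroup policy failover set \
        -p "NFS-VLAN10" --active-uplinks=vmnic4 --standby-uplinks=vmnic5
      esxcli network vswitch standard portgroup policy failover set \
        -p "NFS-VLAN20" --active-uplinks=vmnic5 --standby-uplinks=vmnic4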

