28 Responses

  1. NFS on vSphere – Technical Deep Dive on Same Subnet Storage Traffic « Wahl Network

    [...] NFS on vSphere – Technical Deep Dive on Load Based Teaming [...]

  2. NFS on vSphere – Technical Deep Dive on Multiple Subnet Storage Traffic « Wahl Network

    [...] NFS on vSphere – Technical Deep Dive on Load Based Teaming [...]

  3. NFS on vSphere – A Few Misconceptions « Wahl Network

    [...] NFS on vSphere – Technical Deep Dive on Load Based Teaming [...]

  4. drechsau

    Wow, that was great, thank you!

  5. Adam B

    What changes when you use the 1000v DVS? I know LACP is supported on the 1000v, but does it allow us to aggregate bandwidth without having to create multiple IPs/subnets/VMkernel ports?

  6. nOon

    It’s nice to see another NFS fan for VMware infrastructure.
    I ran nearly the same tests two years ago and came to the same conclusion: the only way to get load balancing was to use multiple networks and multiple mount points.
    And it’s a shame, because one of the advantages of NFS is having fewer datastores in our infrastructure.
    I’m just waiting for a pNFS implementation in VMware.

  7. Marcos

    Regarding “Turning on LBT is non-invasive and does not impact the active workloads,” I have to disagree.
    I also have NetApp storage, with a configuration pretty much the same as explained here, and a VMware View instance serving a few hundred virtual desktops. We had constant problems with users being disconnected randomly, and the day I removed the LBT feature was when everything began to work correctly.

  8. Ben

    Question: does that mean you have to mount your NFS datastores x times in VMware, as seen in your screenshot…?

  9. Nexenta storage for the vLab | Erik Bussink

    [...] NFS on vSphere – Technical Deep Dive on Load Based Teaming by Chris [...]

  10. DR3Z

    Chris,

    Great post! Can you explain the “unique least significant bits”? I’m not a network tech by any means and am trying to understand. Can you provide an example?

    Thank you!!

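    A minimal illustration of what “least significant bits” means here, assuming it refers to the low-order bits of the IP addresses that an IP-hash style calculation (on the host, switch, or array side) feeds into its uplink choice. The formula and addresses below are a simplification for illustration, not VMware’s literal implementation:

    ```python
    # Hypothetical addresses, purely for illustration.
    vmk_ip   = "192.168.1.10"   # vmkernel port on the ESXi host
    nfs_ip_a = "192.168.1.21"   # first NFS target address on the array
    nfs_ip_b = "192.168.2.22"   # second NFS target address on the array

    def least_significant_octet(ip: str) -> int:
        """The last octet is the 'least significant' 8 bits of the address."""
        return int(ip.split(".")[-1])

    def ip_hash_uplink(src: str, dst: str, uplinks: int) -> int:
        """Simplified IP-hash: XOR the low-order octets, modulo the uplink count."""
        return (least_significant_octet(src) ^ least_significant_octet(dst)) % uplinks

    # .10 XOR .21 = 31 -> uplink 1, while .10 XOR .22 = 28 -> uplink 0.
    # Targets whose last octets are unique tend to spread across uplinks;
    # targets that share the same low-order bits pile onto the same one.
    print(ip_hash_uplink(vmk_ip, nfs_ip_a, uplinks=2))   # 1
    print(ip_hash_uplink(vmk_ip, nfs_ip_b, uplinks=2))   # 0
    ```
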
  11. Adam

    Great post. I’m going to try and implement this setup with the new hardware I just got in.

    I’m setting up a 4-port NetApp filer and just wanted to confirm that, with this setup, each port on the filer would have an IP in a different subnet and also reside in its own VLAN? And no vifs with aliases would be set up on the NetApp, because this uses LBT and not IP hash, correct?

    Thanks.

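    For reference, a minimal sketch of what a four-port, four-subnet layout like that could look like. Every address, VLAN ID, and interface name below is made up for illustration, and it assumes one vmkernel port per subnet on each host with the filer ports addressed individually (no vif/aliases), per the multi-subnet design in the post:

    ```python
    # Hypothetical addressing plan: one filer port per subnet/VLAN,
    # matched by one vmkernel port per subnet on every ESXi host.
    filer_ports = {
        "e0a": {"vlan": 101, "ip": "10.0.101.20/24"},
        "e0b": {"vlan": 102, "ip": "10.0.102.20/24"},
        "e0c": {"vlan": 103, "ip": "10.0.103.20/24"},
        "e0d": {"vlan": 104, "ip": "10.0.104.20/24"},
    }

    host_vmkernels = {
        "vmk1": "10.0.101.11/24",
        "vmk2": "10.0.102.11/24",
        "vmk3": "10.0.103.11/24",
        "vmk4": "10.0.104.11/24",
    }

    # Each datastore gets mounted against one filer IP, so its traffic leaves
    # the host on the vmkernel port that shares that subnet.
    for port, cfg in filer_ports.items():
        print(f"filer {port}: VLAN {cfg['vlan']}, mount target {cfg['ip']}")
    ```
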
  12. Steve L

    No real discussion to add, just kudos. Due to circumstances I have had limited NFS exposure in my career and this thread helped clear every last uncertainty I had in a very concise manner. Well done sir.

  13. Paul

    Hello Chris

    I wonder if you could review the config below and let me know your thoughts on the question that follows. I’ve read your posts and Googled, but have yet to find a definitive answer:

    1. We have 4 x ESXi 5.1 hosts
    2. Each host has 6 x 1Gbps physical NICs (vmnic0-vmnic5)
    3. A DVS has been created and a DVS port group set up (for NFS) with Route Based on Physical NIC Load (LBT) and the active uplinks set to dvUplink4 and dvUplink5 (therefore two active uplinks). The remaining uplinks are set to unused (assigned to other services, management, etc.).
    4. The DVS NFS portgroup has vmk1 bound with an IP address assigned.
    5. 4 x NFS datastores have been mounted via the single vmk1 IP address.

    Question: Will LBT drive traffic through both uplinks (dvUplink4 and dvUplink5) if traffic exceeds 75% on the first hypervisor selected uplink?

    Thanks!
    Paul

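    On the 75% question, here is a toy model of the documented LBT behavior (uplink utilization is evaluated on an interval of roughly 30 seconds, and a port is moved off an uplink whose mean utilization has crossed 75%). This is an illustrative sketch, not VMware’s actual scheduler; note also that a single vmkernel port is bound to one uplink at a time, so LBT can move vmk1 between dvUplink4 and dvUplink5, but it will not split one NFS session across both:

    ```python
    # Toy model of an LBT evaluation pass; illustrative only, not VMware code.
    SATURATION = 0.75        # documented utilization threshold
    INTERVAL_SECONDS = 30    # documented evaluation interval

    def lbt_pass(uplink_load: dict, port_bindings: dict) -> dict:
        """If an uplink's mean utilization over the interval exceeds 75%,
        move a port bound to it onto the least-loaded uplink."""
        busiest = max(uplink_load, key=uplink_load.get)
        if uplink_load[busiest] <= SATURATION:
            return port_bindings                 # under the mark: nothing moves
        quietest = min(uplink_load, key=uplink_load.get)
        rebalanced = dict(port_bindings)
        for port, uplink in port_bindings.items():
            if uplink == busiest:
                rebalanced[port] = quietest      # real LBT moves only as much as needed,
                break                            # so shift a single port here
        return rebalanced

    # dvUplink4 is saturated, so the next pass shifts vmk1 onto dvUplink5.
    print(lbt_pass({"dvUplink4": 0.82, "dvUplink5": 0.10}, {"vmk1": "dvUplink4"}))
    ```
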
  14. Paul

    Thanks for your follow up reply, appreciate your time.

    Cheers
    Paul

  15. GOOD ARTICLES TO UNDERSTAND LACP & STATIC ETHERCHANNEL | RJ Approves This Message!
  16. Joel Hurford

    Nice work, thank you, Chris.

    Help me understand having the same datastore (your SSD config) shared on multiple VLANs. That datastore will have a signature and a single datastore name in the vCenter inventory. If I try to mount four NFS paths to the same export, it will only show up as one target datastore.

    What keeps your four VMs on the same datastore from choosing the same vmk path? Don’t I need four datastores (one per VLAN) to force traffic down the available vmk paths?

    1. kjstech

      We present each file system over a different subnet and vmk, and distribute our VMs across these filesystems. We are using an EMC VNX2, which allows multiple IPs on its interfaces, over 10 Gig active twinax cables. The average latency on one of our virtualized 2008 R2 file servers is 0.372 read and 0.839 write. Our virtualized 2008 R2 SQL server averages 1.75 read and 0.156 write, and our Exchange 2007 on Server 2003 R2 averages 1.561 read and 1.778 write.

      Basically, the “Latest” millisecond latencies are in the single digits in this configuration. Before, we had a Celerra NX4 with one IP on a 1 Gig network and one vmk; those latencies were 30-60 on SAS drives and even higher on SATA.

      Big improvement going to 10 Gig, jumbo frames, multiple vmk’s across multiple subnets, distributing VMs by load, and also using Storage I/O Control.

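      To make the path-selection point in the question above concrete: the host sends NFS traffic out of the vmkernel port whose subnet contains the target address of the mount, so every VM on a datastore mounted against one target IP shares one vmk, and spreading load means mounting separate targets/exports on separate subnets, as described here. A rough sketch of that lookup, with made-up addresses:

      ```python
      import ipaddress

      # Hypothetical vmkernel ports, one per storage subnet.
      vmkernels = {
          "vmk1": ipaddress.ip_interface("10.0.101.11/24"),
          "vmk2": ipaddress.ip_interface("10.0.102.11/24"),
      }

      def vmk_for_mount(nfs_target: str) -> str:
          """Return the vmkernel port whose subnet contains the NFS target IP."""
          target = ipaddress.ip_address(nfs_target)
          for name, iface in vmkernels.items():
              if target in iface.network:
                  return name
          return "vmk0"  # no match: traffic would take the routed/management path

      # Every VM on the datastore mounted at 10.0.101.20 uses vmk1; only a mount
      # against a target in the other subnet brings vmk2 into play.
      print(vmk_for_mount("10.0.101.20"))   # vmk1
      print(vmk_for_mount("10.0.102.20"))   # vmk2
      ```
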
