35 Responses

  1. Julian Wood

    Great information, Chris. I’m a huge fan of NFS.

    Just one thing: when you say “will only use a single uplink to an NFS server”, I was under the impression that this should maybe be “will only use a single uplink to an NFS mount point (datastore)”.

    Each datastore uses a separate vmkernel connection, so if you have multiple datastores they may load balance across multiple uplinks in a vmkernel port group, depending on how it selects which uplink to use. Unfortunately you don’t have control over which uplink each datastore’s traffic takes, but at least you may have some semblance of using multiple links with multiple datastores.
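
    To make the uplink math concrete for one specific policy (IP hash, the one used with an etherchannel), here is a minimal Python sketch. It assumes the commonly cited VMware formula of XORing the last octet of the source and destination addresses and taking the result modulo the uplink count; all addresses here are hypothetical.

    ```python
    # Simplified model of vSphere's "Route based on IP hash" teaming policy:
    # XOR the last octet of the source and destination IPs, then take the
    # result modulo the number of active uplinks.
    def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
        """Uplink index an IP-hash team would pick for this src/dst pair."""
        src_lsb = int(src_ip.rsplit(".", 1)[1])
        dst_lsb = int(dst_ip.rsplit(".", 1)[1])
        return (src_lsb ^ dst_lsb) % num_uplinks

    vmk_ip = "10.0.0.21"                         # hypothetical VMkernel port
    datastore_ips = ["10.0.0.50", "10.0.0.51",   # hypothetical NFS target IPs,
                     "10.0.0.52", "10.0.0.53"]   # one per datastore mount

    for ds in datastore_ips:
        print(f"{vmk_ip} -> {ds}: uplink {ip_hash_uplink(vmk_ip, ds, 4)}")
    # uplinks 3, 2, 1, 0: these four mounts happen to spread across all four
    # links, but the operator has no direct control over the mapping.
    ```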

  2. nate

    Of much greater concern to me is not the load balancing of the network but the load balancing of the array itself. For example, in the NetApp world, at least with 7-Mode, you either have to run active/passive or manually load balance your volumes between the controllers. It seems 8.1 Cluster-Mode just came out; I left a metric ton of questions on a NetApp blog a few minutes ago asking more about it. Compare that to a true active-active cluster, where the data is available from all controllers and all ports without any sort of LUN trespassing.

    As you say, frequently the network is not the bottleneck, especially in this day and age with 10GbE being so affordable. Most VM workloads (I’d say it’s safe to say the vast majority) are random I/O, and you’re going to blow out your controllers or disks long before you blow out a fast network.

    1. egrigson

      I’ve never been too fussed about load balancing the datastores across the NetApp nodes, as you need to do this from a controller performance and capacity point of view anyway, regardless of the NFS constraints imposed by vSphere. Like you, I’m curious to see what Cluster-Mode brings to the table, although at first glance it’s going to require some serious network infrastructure to be effective.

      Ed.

      P.S. I think I’m stalking you across the internet; I’ve just been reading your comments on @that1guynick’s blog post!

  3. egrigson

    I’ve been looking at this issue recently as we’re still on 1Gb networks (not 10Gb), and the single connection may be a bottleneck for some of our bigger Oracle databases. This is a good summary; bookmarked!

    @Nate: funny, my previous stop was @that1guynick’s blog post about ONTAP 8.1 and your comment. I’m stalking you around the interweb!

  4. Adam B.

    Quick question on the use of etherchannels: if I have four 1Gb ports for the VMkernel port in an etherchannel to a Cisco switch with IP-hash load balancing on, are you saying that the NFS traffic won’t get an effective 4Gb of bandwidth to the storage? I’m a bit confused as to the need for multiple VLANs/IPs from your teaming point above.
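
    A hedged sketch of why a single NFS datastore cannot use the full 4Gb: under IP-hash teaming the uplink choice is a pure function of the source and destination IPs (TCP ports are not part of the hash), so a single VMkernel-to-datastore conversation always lands on the same 1Gb link. The addresses below are made up.

    ```python
    # The IP-hash uplink choice depends only on the two endpoint IPs, so a
    # single VMkernel-to-NFS-server pair always rides one physical 1Gb link,
    # regardless of how many links the etherchannel contains.
    def ip_hash_uplink(src_lsb: int, dst_lsb: int, num_uplinks: int) -> int:
        return (src_lsb ^ dst_lsb) % num_uplinks

    # Hypothetical: VMkernel 10.0.0.11 talking to one datastore at 10.0.0.50.
    print(ip_hash_uplink(11, 50, 4))  # 1 -- the same uplink for every packet
    # More IPs on the array side (more mount targets) is what lets the hash
    # spread different datastores across the remaining links.
    ```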

  5. Nick Triantos

    Really good post. There are two types of protocols:

    1) Those that require considerable architectural thought up front, but fewer day-to-day management tasks once the architecture is in place, and are easier/simpler to manage.

    2) The reverse of #1.

    NFS falls into category #1; FC/FCoE and even iSCSI fall into #2.

  6. Julian Wood

    I seem to remember reading somewhere that VMkernel load balancing doesn’t take “Route by virtual port ID” into account. That policy only load balances VM traffic; the VMkernel does its own thing. I can’t seem to find the original article, but I’ll keep digging…

  7. Open Tabs 5/20/12
  8. NFS on vSphere – Technical Deep Dive on Same Subnet Storage Traffic « Wahl Network

    […] Storage Traffic Apr 23, 2012 Building on the previous post I created to reveal some misconceptions of how NFS traffic is routed on vSphere, this article will be a technical deep dive on same subnet storage traffic. The information […]

  9. NFS on vSphere – Technical Deep Dive on Multiple Subnet Storage Traffic « Wahl Network

    […] Deep Dive on Multiple Subnet Storage Traffic Apr 27, 2012 Now that I’ve gone over misconceptions of NFS on vSphere, as well as a deep dive on same subnet storage traffic, the next discussion will be around […]

  10. NFS on vSphere – Technical Deep Dive on Load Based Teaming « Wahl Network

    […] Deep Dive on Load Based Teaming Apr 30, 2012 In my past three posts, I go into some misconceptions on how NFS behaves on vSphere, along with a pair of deep dives on load balancing in both a single subnet and multiple subnet […]

  11. Nexenta storage for the vLab | Erik Bussink

    […] Misconceptions on how NFS behaves on vSphere by Chris Wahl […]

  12. Technology Short Take #23 - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers

    […] Wahl has a good series on NFS with VMware vSphere. You can catch the start of the series here. One comment on the testing he performs in the “Same Subnet” article: if I’m not […]

  13. Synology NAS for SME workloads – my design decisions « PC LOAD LETTER

    […] per this blog post, VMware will preferentially route NFS traffic down a kernel port that is on the same subnet as the […]

  14. Norgs

    Thanks for the post.
    This is the first spot I could find that explains VMkernel selection and what NFS will use (I’m very new to VMware and come from a networking background).
    After reading your post here I can now confidently create my UCS server profiles.

  15. salman

    “If no VMkernel ports are on the same subnet, NFS traffic traverses the management VMkernel by way of the default gateway.

    Typically, item #2 is not desired and should be avoided.”

    Could you shed some light on this? Why does it need to be on the same subnet? I am curious because my VMkernel is not on the same subnet as the NFS datastore.
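
    For illustration, the selection rule quoted above can be modeled in a few lines of Python; the port names and addresses below are hypothetical. The routed fallback is the undesired case because storage traffic then takes an extra hop and shares the management VMkernel’s uplinks.

    ```python
    # Sketch of the rule quoted above: ESXi sends NFS traffic out a VMkernel
    # port on the same subnet as the NFS server; if none matches, traffic
    # falls back to the management VMkernel via the default gateway.
    import ipaddress

    def pick_vmkernel(nfs_server, vmkernels, mgmt="vmk0"):
        """vmkernels maps a port name to its CIDR, e.g. 'vmk1' -> '10.0.0.21/24'."""
        target = ipaddress.IPv4Address(nfs_server)
        for name, cidr in vmkernels.items():
            if target in ipaddress.IPv4Interface(cidr).network:
                return name                # same-subnet match: direct L2 path
        return mgmt                        # no match: routed via management

    vmks = {"vmk0": "192.168.1.10/24",     # management VMkernel (hypothetical)
            "vmk1": "10.0.0.21/24"}        # NFS VMkernel (hypothetical)

    print(pick_vmkernel("10.0.0.50", vmks))    # vmk1 -- same subnet, desired
    print(pick_vmkernel("172.16.5.50", vmks))  # vmk0 -- routed, the case to avoid
    ```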

  16. Tech Talks, Roundtables, and the VMworld Bag « Wahl Network

    […] be speaking on Monday at 11:15 pacific on NFS with vSphere. Also, check out the complete schedule as they have lined up a ton of great speakers and […]

  17. virt

    I stumbled on your post and found it to be a real godsend. I need some advice here, since NFS is new to me. We have six NICs, and I want to dedicate two to NFS storage, two to VM traffic, and two to management/vMotion. I have read that we need to create VMkernel port groups for NFS with IPs in the same subnet as the storage device or NFS datastore, correct? My switches can’t do etherchannel.

  18. VCDA510 – Objective 1.1 – Implement and Manage Complex Storage Solutions – Skills And Abilities | VCDX or Bust

    […] Chris Wahl post on NFS misconceptions – http://wahlnetwork.com/2012/04/19/nfs-on-vsphere-a-few-misconceptions/ […]

  19. NFS through a seperate VMKernel adapter | Breek een been!

    […] the NFS traffic. If you want to know how NFS traffic is handled by ESX, I suggest reading the posts on Chris Wahl’s blog; he did a perfect job of explaining […]

  20. Etherchannel Load Balancing

    […] NFS on vSphere Part 1 – A Few … – NFS is my favorite way to attach storage to a vSphere host, but also one of the more annoying protocols to try and design for when shooting for high … […]
