19 Responses

  1. NFS on vSphere – A Few Misconceptions « Wahl Network

    […] NFS on vSphere – Technical Deep Dive on Same Subnet Storage Traffic […]

  2. duhaas

    Thanks for all the time and effort you put into these posts. Your explanations go a long way. What kind of switch are you using in your lab?

  3. NFS on vSphere – Technical Deep Dive on Multiple Subnet Storage Traffic « Wahl Network

    […] Apr 27, 2012 Now that I’ve gone over misconceptions of NFS on vSphere, as well as a deep dive on same subnet storage traffic, the next discussion will be around leveraging multiple subnets and using VMware’s load based […]

  4. virtuallayercake

    Wow… Seriously Awesome.

    I couldn’t agree more with your conclusion, and this is something that most NAS vendors do a very poor job of explaining.

    The key to having “true” load balancing with NFS is multiple subnets… (your next post). Here is what I’ve found out/understood so far. Maybe you can validate/debunk it in your lab.

    Bottom line… in my own words… there is no load balancing with NFS, there are just ways to plumb it right… like a plumber bending a metal pipe.

    Here are the requirements for proper load balancing using NAS:

    1. On the host you need one vmkernel interface per uplink, each in a different subnet
    2. All vmkernel interfaces need to have sequential LSBs (aaa.aaa.aaa.1 /24, aaa.aaa.bbb.2 /24)
    3. Aliases are required on the filer
    4. The IP used for mounting is critical

    The IP Hash load balancing policy only determines which uplink will be used to send traffic, and obviously has no impact on the receiving uplink… unless you look at it from the filer’s standpoint 😉

    My understanding is that, when using EtherChannel, you have to keep in mind that the algorithm will run twice… (host→switch, switch→filer)

    To understand the IP hash, I built an Excel spreadsheet showing the impact of the IPs you choose, for a 4-uplink model. This is important because the last step of the algorithm is a modulo by the number of uplinks, which gives you the uplink that will be used.

    Src IP LSB   Bin (Src)   Dst IP LSB   Bin (Dst)   XOR   Modulo
        1          001           1           001      000      0
        1          001           2           010      011      3
        1          001           3           011      010      2
        1          001           4           100      101      1
        2          010           1           001      011      3
        2          010           2           010      000      0
        2          010           3           011      001      1
        2          010           4           100      110      2
        3          011           1           001      010      2
        3          011           2           010      001      1
        3          011           3           011      000      0
        3          011           4           100      111      3
        4          100           1           001      101      1
        4          100           2           010      110      2
        4          100           3           011      111      3
        4          100           4           100      000      0

    The key item to understand here is “sequential IPs”. In a two-uplink environment, having target IPs of .101 and .103 would collapse all traffic onto the same uplink.
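    The hash in the table above can be sketched in a few lines of Python. This is a simplified model of the behavior described (XOR of the last octets, then modulo the uplink count), not VMware’s actual implementation:

    ```python
    # Simplified model of the IP-hash teaming policy: XOR the last octets
    # of the source and destination IPs, then take modulo of the uplink count.
    def uplink_for(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
        src_lsb = int(src_ip.split(".")[-1])
        dst_lsb = int(dst_ip.split(".")[-1])
        return (src_lsb ^ dst_lsb) % num_uplinks

    # Four uplinks: sequential destination LSBs spread across all four uplinks
    for dst in range(1, 5):
        print(uplink_for("10.0.0.1", f"10.0.0.{dst}", 4))
    # -> 0, 3, 2, 1 (matches the first four rows of the table)

    # Two uplinks: .101 and .103 are both odd, so they hash to the same uplink
    print(uplink_for("10.0.0.50", "10.0.0.101", 2),
          uplink_for("10.0.0.50", "10.0.0.103", 2))
    # -> 1 1 (all traffic collapses onto one uplink)
    ```

    The IP addresses here are made up for illustration; only the last octet matters to this model, which is exactly why sequential LSBs are needed to spread traffic.
    
    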

    Don’t get me wrong… I love NFS; it is our production standard. But load balancing is not automated and requires some serious thought.

    Again…good stuff

    G,

  5. NFS on vSphere – Technical Deep Dive on Load Based Teaming « Wahl Network

    […] on how NFS behaves on vSphere, along with a pair of deep dives on load balancing in both a single subnet and multiple subnet environment. If you’re just catching up on this series and are unfamiliar […]

  6. Nexenta storage for the vLab | Erik Bussink

    […] balancing NFS deep dive in both a single subnet by Chris […]

  7. Aleks

    Thanks for the great article, Chris. Trying to wrap my head around this stuff is already difficult enough; your articles help a lot!

    I’ve read the article a couple of times but don’t understand this bit:
    “All NFS traffic chose vmk7, which is using vmnic6”

    But when I look at your first diagram, vmk7 is shown connected to vmnic7 instead of vmnic6. Am I correct?

    I really like your blog, keep cranking out these great articles!

  8. Ryan

    What would happen if you had the vmkernel in the management network and then a separate vmkernel created in the same subnet? I would think the management network’s vmkernel would be the lowest-numbered, since it’s created with the host, and therefore all traffic would route over that vmkernel, which is not desirable. Or does any vmkernel that has management traffic checked get bypassed if there is more than one vmkernel?

