11 Responses

  1. Oliver Kügow

    Your Etherchannel problem in #2 cannot be solved by LACP in any way. LACP has nothing to do with it.

    The only real way to solve this problem will probably be available with the next version of ESX when we finally get pNFS support!

  2. Db

    There’s a bit of a gotcha on the point about EtherChannel: the load-balancing algorithm used is “Route based on IP hash”, which hashes the source and destination IPs of the initial connection. This means that in the diagram you have shown, you won’t actually use more than one link, since there is only one source VMkernel port with a single IP and one target IP. What would work quite well, though, is a combination of points one and two, but without using multiple subnets: just multiple IPs on the target filer from the same subnet would work, depending of course on the filer supporting this.
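    The behaviour described above can be sketched in a few lines. This is a toy model of vSphere’s documented XOR-and-modulo IP-hash policy, not the actual vmkernel code, and the IP addresses and uplink count are invented for illustration:

    ```python
    def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
        """Pick an uplink roughly the way vSphere's 'Route based on IP hash'
        policy does: XOR the two IPv4 addresses and take the result modulo
        the number of active uplinks. Sketch only; the exact internal hash
        may differ from this."""
        def to_int(ip: str) -> int:
            a, b, c, d = (int(octet) for octet in ip.split("."))
            return (a << 24) | (b << 16) | (c << 8) | d
        return (to_int(src_ip) ^ to_int(dst_ip)) % num_uplinks

    # One VMkernel IP talking to one filer IP always hashes to the same
    # uplink, no matter how many NICs are in the EtherChannel:
    print(ip_hash_uplink("10.0.0.10", "10.0.0.50", 2))
    print(ip_hash_uplink("10.0.0.10", "10.0.0.50", 2))  # same uplink again

    # A second target IP on the filer can land on a different uplink,
    # which is why multiple filer IPs help even on one subnet:
    print(ip_hash_uplink("10.0.0.10", "10.0.0.51", 2))
    ```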

  3. Carl

    The only way you will get the throughput is with multiple IP targets on the NAS host.

    Some NAS boxes have the facility to allow multiple NIC targets, but many will present a single virtual IP address across all of the NICs in the NAS server.

    NAS does look like it will solve a lot of ESX storage problems, but it does introduce others.

  4. NFS on vSphere – A Few Misconceptions « Wahl Network

    […] post, I’m going to expand a bit on the thoughts I had back when I wrote my original “A Look At NFS on VMware” post with some additional musings based on misconceptions that I have seen repeated. If […]

  5. gwalker

    FWIW, the Cisco 1000v uses TCP hashing which essentially means that each of the NFS-mounted datastores (via LACP, naturally) will go to different physical links.

    The sensitivity of the hash depends on multiple subnets vs multiple IPs: with the old Foundry switches, it appears that there was never enough ‘uniqueness’ in the hashes to get 2 IPs on the same subnet forced across multiple links.
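    A layer-4 hash of the kind described here can be illustrated with a toy sketch. The real Nexus 1000v hash is not public, so this only shows the principle, and the addresses and ports are invented:

    ```python
    import ipaddress

    def l4_hash_uplink(src_ip: str, sport: int,
                       dst_ip: str, dport: int,
                       num_uplinks: int) -> int:
        """Toy layer-4 hash: fold both IPs and both TCP ports together,
        so that each connection, not each IP pair, picks a link.
        Illustrative only - not the 1000v's actual algorithm."""
        h = (int(ipaddress.ip_address(src_ip))
             ^ int(ipaddress.ip_address(dst_ip))
             ^ sport ^ dport)
        return h % num_uplinks

    # Two NFS mounts from the same host to the same filer differ only in
    # their ephemeral source port, which is enough extra 'uniqueness' to
    # split them across links - unlike a pure IP hash:
    print(l4_hash_uplink("10.0.0.10", 850, "10.0.0.50", 2049, 2))
    print(l4_hash_uplink("10.0.0.10", 851, "10.0.0.50", 2049, 2))
    ```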

  6. Ashish


    Could you explain what would happen if you configured a vSwitch with only one VMkernel port and four physical NICs? How would the traffic get to the storage array? What would the path-selection criteria be?


  7. vSphere LBT and Enterprise Plus Switches – a VCDX Constraint?
  8. massimo

    I ran into a slowdown issue when using network isolation to access two different NAS boxes (NetApp & Synology). The problem shows up when copying from the NetApp to the Synology: I got a 10 MB/sec average transfer speed. When I disabled network isolation (all VMkernel ports on the same vSwitch), the speed was better.

