18 Responses

  1. Will

    Excellent post! Finally spells out how people overcomplicate a simple situation.

  2. Josh Odgers (VCDX#90)

    Totally Agree! Well said.

  3. Blazing Fast Workload Migrations with Multi-NIC vMotion [Video] via @ChrisWahl | Wahl Network

    […] along vMotion traffic, even when only a single VM is being migrated. It doesn’t require any static-LAG nonsense or other network voodoo; you just configure it properly in vSphere and it works. Because nearly […]

  4. Mike Brown

    Great read as always, glad I’m not the only one seeing this in the field :)

  5. Rickard Nobel

    I also agree. I think the Link Aggregation methods are some of the least understood technologies in both vSphere and general networking.

  6. vSphere Does Not Need LAG Bandaids | Wahl Network

    […] over a year ago, I wrote a post outlining my distaste for the use of Link Aggregation Groups (LAG) for vSphere hosts when storage is concerned. For those not into network hipster slang, any […]

  7. regmiboyer

    This one is great. I was aware of using multiple exports with different IPs to get a better design for IP hash and performance, but I had never thought about which IP address would actually be used in the hash calculation. Good one.
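
    For anyone curious about the mechanics, here is a rough sketch of the commonly described IP hash calculation; the exact bits vSphere feeds into the hash are an assumption here, and the IPs are hypothetical, but the takeaway is the one discussed in the post: a given source/destination IP pair always maps to the same uplink, which is why multiple target IPs are needed to spread load.

      # Rough sketch (assumptions noted above): "Route based on IP hash" is
      # commonly described as XOR-ing the source and destination IP addresses
      # and taking the result modulo the number of active uplinks.
      import ipaddress

      def ip_hash_uplink(src_ip: str, dst_ip: str, active_uplinks: int) -> int:
          """Return the index of the uplink chosen for a src/dst IP pair."""
          src = int(ipaddress.ip_address(src_ip))   # 32-bit value of the source IP
          dst = int(ipaddress.ip_address(dst_ip))   # 32-bit value of the destination IP
          return (src ^ dst) % active_uplinks       # same pair always hashes to the same uplink

      # One VMkernel IP talking to one NFS target IP always lands on one uplink:
      print(ip_hash_uplink("10.0.0.10", "10.0.0.50", 2))
      # A second export on a different target IP may (or may not) hash to the other uplink:
      print(ip_hash_uplink("10.0.0.10", "10.0.0.51", 2))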

  8. Jason

    Good read. Question for you: the vSwitch I have has three VMkernel adapters, one each for iSCSI, vMotion, and VMFT. I only have four NICs available. Would you recommend using two for iSCSI, two for vMotion, and then adding VMFT to one of the four? Just not sure which way to go.

  9. KoMV-Music

    Coming from a network engineer: your design isn’t as great as you think it is. The reason is that the switch will only send return traffic back out the port on which it learned the source MAC address. If that address is constantly moving around, it messes with the switch or creates limited broadcast groups (depending on how the switch handles it, topology, etc.). In a smaller, less redundant network it is less of an issue, but a more redundant design could potentially create small outages for your systems or periodic traffic changes. The correct design would be to upgrade the links to 10 Gbps.
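
    To make the MAC learning concern above concrete, here is a minimal sketch of a switch forwarding table; the MAC and port numbers are hypothetical, and real switches also age entries and may flood on a move, but it shows why a source MAC flapping between uplinks is disruptive.

      # Minimal sketch of switch MAC learning (hypothetical values).
      mac_table = {}  # MAC address -> switch port where it was last seen

      def learn(mac: str, ingress_port: int) -> None:
          """Record (or move) a MAC to the port a frame just arrived on."""
          previous = mac_table.get(mac)
          mac_table[mac] = ingress_port
          if previous is not None and previous != ingress_port:
              # Return traffic now follows the new port; on a redundant topology
              # this flap can trigger flooding or brief loss until the table settles.
              print(f"{mac} moved from port {previous} to port {ingress_port}")

      # The same host MAC showing up on alternating uplinks causes constant moves:
      for port in (1, 2, 1, 2):
          learn("00:50:56:aa:bb:cc", port)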

  10. SilkBC

    Hi Chris.

    What if the NFS storage NICs were each on their own VLAN (corresponding to the VLANs on the ESXi port-binding), and then bonded in “mode 0”, which is round robin? That should improve performance, no?

    I have seen a similar setup (albeit not using VMware) where the throughput achieved was about 450 MB/s.
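
    As a quick illustration of why "mode 0" behaves differently from the hash-based policies discussed in the post, here is a sketch contrasting the two; the link names and IPs are hypothetical, and note that round-robin spreading is a Linux bonding behavior rather than one of vSphere's teaming policies.

      # Rough sketch contrasting Linux bonding "mode 0" (balance-rr) with a
      # hash policy for a single flow (illustrative only).
      from itertools import cycle

      links = ["vmnic0", "vmnic1"]

      # balance-rr: successive packets of the same flow alternate across links,
      # which is why a single storage stream can exceed one link's bandwidth
      # (at the cost of possible out-of-order delivery).
      rr = cycle(links)
      print([next(rr) for _ in range(6)])   # ['vmnic0', 'vmnic1', 'vmnic0', ...]

      # Hash-based policies (IP hash, LACP hashes) pin the whole flow to one link:
      flow_hash = hash(("10.0.0.10", "10.0.0.50")) % len(links)
      print([links[flow_hash]] * 6)         # same link for every packet of the flow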

  11. Justin

    What about UCS and vPC uplinks to Nexus switches? Should the iSCSI traffic not be on the vPC and instead leverage dedicated network uplinks or appliance ports?

