20 Responses

  1. Will

    Excellent post! Finally, someone spells out how we've been overcomplicating a simple situation.

  2. Josh Odgers (VCDX#90)

    Totally Agree! Well said.

  3. Blazing Fast Workload Migrations with Multi-NIC vMotion [Video] via @ChrisWahl | Wahl Network

    […] along vMotion traffic, even when only a single VM is being migrated. It doesn’t require any static-LAG nonsense or other network voodoo; you just configure it properly in vSphere and it works. Because nearly […]

  4. Mike Brown

    Great read as always, glad I’m not the only one seeing this in the field 🙂

  5. Rickard Nobel

    I also agree. I think the Link Aggregation methods are among the least understood technologies in both vSphere and general networking.

  6. vSphere Does Not Need LAG Bandaids | Wahl Network

    […] over a year ago, I wrote a post outlining my distaste for the use of Link Aggregation Groups (LAG) for vSphere hosts when storage is concerned. For those not into network hipster slang, any […]

  7. regmiboyer

    This one is great. I was aware of using multiple exports with different IPs to get a better design for IP hash and performance, but I had never thought about which IP address would be considered for the hash calculation. Good one.

  8. Jason

    Good read. Question for you: the vSwitch I have has 3 VMkernel adapters, one each for iSCSI, vMotion, and VMFT. I only have 4 NICs available. Would you recommend using 2 for iSCSI, 2 for vMotion, and then adding VMFT to one of the four? Just not sure which way to go.

  9. KoMV-Music

    Coming from a network engineer: your design isn’t as great as you think it is. The reason is that the switch will only send return traffic back on the port where it learned the source MAC address. If that is constantly moving around, it confuses the switch or creates limited broadcast groups (depending on how the switch handles it, topology, etc.). In a smaller, less redundant network it is less of an issue, but a more redundant design could potentially create small outages for your systems or periodic traffic changes. The correct fix would be to upgrade the links to 10 Gbps.

  10. SilkBC

    Hi Chris.

    What if the NFS storage NICs were each on their own VLAN (corresponding to the VLANs on the ESXi port binding), and then bonded in “mode 0”, which is round-robin? That should improve performance, no?

    I have seen a similar setup (albeit not using VMware) where the throughput achieved was about 450 MB/s.

  11. Justin

    What about UCS and vPC uplinks to Nexus switches? Should the iSCSI traffic not be on the vPC and instead leverage dedicated network uplinks or appliance ports?

  12. Eric

    Hello – I’m pretty new to this game, so forgive my lack of proper syntax. I inherited a network that has 2 VM hosts (vSphere 5.1) and a dual-SP EMC VNXe 3150, connected via iSCSI (2x Cisco 2960S switches). Due to licensing, this config uses the standard switch. The iSCSI network has port channel groups configured. In vSphere I see 3 NICs set up on each host’s iSCSI vSwitch, and they route based on IP hash.

    So I decided to do some testing. I converted one of the hosts over to a 3-NIC VMkernel port-binding setup.

    It has only been a couple of days, but I have noticed that both hosts behave almost identically. All 3 NICs are in use nearly evenly on both systems, despite the fact that load balancing is not configured. Really the only benefit I seem to have gained from reconfiguring one of the hosts is that I can now utilize both switches instead of just one for this host. If one of the switches goes down, users won’t lose connection to any VMs.

    As I am a SAN/iSCSI/VM novice, I am wondering if you can make sense of this. Based on what you’re saying, the load shouldn’t be evenly distributed because I don’t have LBT set up (again, licensing). EtherChannel shouldn’t be performing as well as port binding, but it is.

    Thanks in advance

