20 Responses

  1. Jonathan Frappier

    Reblogged this on Jonathan Frappier's Blog.

  2. Welcome to vSphere-land! » Home Lab Links

    […] Of RAM (Wahl Network) Synology DS411 vSphere Home Lab Storage – Protocol Bakeoff (Wahl Network) Efficient Virtual Networking Designs for vSphere Home Lab Servers (Wahl Network) The HP ProLiant MicroServer N40L – VMware Home Lab Review [Video] (Wahl Network) […]

  3. Testing vSphere NIOC Host Limits on Multi-NIC vMotion Traffic | Wahl Network

    […] different types of network traffic. Rather than rely on isolation of physical adapters for traffic, as was common with rackmount servers with 6 or more NICs, NIOC allows one to more successfully place a wide variety of network traffic […]

  4. Brian Johnson

    Disclaimer: I work at Intel in the Network Division.

    Have you tried setting up vMotion using the same method as with iSCSI? That is, using multiple port groups, each with only one active uplink and the other uplinks unused. Assigning a vMotion-enabled vmkernel port to each port group engages the multi-NIC vMotion feature across both ports. We have set up four 10GbE ports using this method and have seen up to 34Gb of vMotion traffic when moving 8 VMs concurrently.

    Also, in a two-host environment, we have been connecting a 10Gb port on one host directly to the other host without using a switch. This allows vSphere to get the benefits of 10Gb even if all the other links are only connected to a 1Gb switch. This model does not work with more than two servers, or with more than one vMotion port per host, but we have seen some big benefits. Several of our customers are testing this and are looking to move it into their production environments.
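
    For reference, a rough sketch of the multi-NIC vMotion layout described in the first paragraph, as it might look from the ESXi shell. The port group names, vmk numbers, IP addresses, and uplinks are placeholders, and the esxcli syntax is from memory for ESXi 5.1-era builds (the vMotion tag subcommand needs 5.1 or later; on older builds the vMotion checkbox in the vSphere Client does the same job), so treat it as a starting point rather than exact commands.

    ```python
    # Sketch: two vMotion port groups on vSwitch0, each with a single active
    # uplink (the other uplink is simply left out of the failover order), and
    # one vMotion-enabled vmkernel port per port group. All names/IPs are examples.
    import subprocess

    CMDS = [
        # Port group A rides vmnic0
        "esxcli network vswitch standard portgroup add --portgroup-name=vMotion-A --vswitch-name=vSwitch0",
        "esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-A --active-uplinks=vmnic0",
        "esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-A",
        "esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static",
        "esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion",
        # Port group B is the mirror image on vmnic1
        "esxcli network vswitch standard portgroup add --portgroup-name=vMotion-B --vswitch-name=vSwitch0",
        "esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-B --active-uplinks=vmnic1",
        "esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-B",
        "esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.50.12 --netmask=255.255.255.0 --type=static",
        "esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion",
    ]

    for cmd in CMDS:
        print("+ " + cmd)
        subprocess.check_call(cmd.split())
    ```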

  5. Brian Johnson

    Good point, I should restate my question and comment.

    Have you compared vMotion performance between multiple vMotion vmkernel ports in a single port group with ACTIVE/ACTIVE uplinks versus two port groups, each using ACTIVE/STANDBY?

    What I have seen is that using multiple port groups, each with only one ACTIVE uplink and the other uplinks as STANDBY, performs better than using a single ACTIVE/ACTIVE port group with multiple vmkernel ports.

  6. Kuntal Patel

    Hi Chris,
    Just curious: why do you have Switch A and Switch B in both of the scenarios above? Does it really matter if you are using a single switch or multiple switches?

  7. Alex

    Hi Chris, hope you're doing well.

    I have a question about iSCSI binding for the Two NICs – One vSwitch Configuration.

    Please correct me if I'm wrong: it's possible to do the binding, but it's mandatory that the storage reside on the same IP subnet.

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038869

    So, the vmkernel ports on vmnic0 and vmnic1 must reside on the same IP subnet.
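
    For reference, a rough sketch of the binding step itself (not from the post): it assumes the software iSCSI adapter shows up as vmhba33 and the two iSCSI vmkernel ports are vmk1 and vmk2, all of which vary per host, and that each vmk sits in a port group with a single active uplink.

    ```python
    # Sketch: bind both iSCSI vmkernel ports to the software iSCSI adapter.
    # Adapter and vmk names are examples; confirm yours first with
    # "esxcli iscsi adapter list" and "esxcli network ip interface list".
    import subprocess

    CMDS = [
        "esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1",
        "esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2",
        "esxcli iscsi networkportal list",  # verify both bindings took effect
    ]

    for cmd in CMDS:
        print("+ " + cmd)
        subprocess.check_call(cmd.split())
    ```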

    I'm really thinking about buying two more NICs so I can separate VM traffic from datastore traffic.

    Thank you

  8. iSCSI traffic limited to 100-150 MB/s?

    […] to unused for each of the iSCSI uplinks on the vmware hosts? See the Storage Traffic piece here: Efficient Virtual Networking Designs for vSphere Home Lab Servers Btw, there something definitely wrong with your MPIO, you have 4 Luns, and 126 connections, which […]

  9. Time for the beast - Study Log for VCAP5-DCA - Page 2

    […] all traffic because it's almost a 10GigE connection with 9 NICs but I know some people split it up like this… HTH If you work hard on your job, you can make a living. But if you work harder on […]

  10. Bjørn-T Nikolaisen (@btn003)

    Hi – Would you use the same setup in a production environment with two 10GbE NICs, or with four 10GbE NICs? (I'm studying for the VCAP-DCD exam and looking for some input on 10GbE setups.) 🙂

  11. Bjørn-T Nikolaisen (@btn003)

    Hi again! I have now read the book. Very useful.
    I have a question regarding the NFS designs in the book: you're discussing the multiple networks design on page 278 and following. You're saying that the storage array needs to use multiple IP addresses, that you create two separate NFS vmkernel ports, and that you then separate them with active/passive vmnic ordering on one vmk port and the opposite ordering on the other. You also say that this design is a bit overkill on 10 GB networks. I agree with that.

    But on page 299, in the additional vSwitch design scenarios, the two 10 GB network adapter example still uses vmk NFS1/NFS2 with active/passive and the opposite ordering.
    This left me a little confused… Am I missing something here? 🙂
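
    For what it's worth, here is a small sketch of the "active/passive and opposite" piece of that design as it could be applied to two NFS port groups on a standard vSwitch. The port group and uplink names are placeholders, and this is my own shorthand, not the book's exact example.

    ```python
    # Sketch: mirror-image failover orders for two NFS port groups, so each
    # vmkernel port (NFS1/NFS2) normally rides a different uplink but can
    # fail over to the other. Names are placeholders.
    import subprocess

    CMDS = [
        "esxcli network vswitch standard portgroup policy failover set --portgroup-name=NFS-1 --active-uplinks=vmnic0 --standby-uplinks=vmnic1",
        "esxcli network vswitch standard portgroup policy failover set --portgroup-name=NFS-2 --active-uplinks=vmnic1 --standby-uplinks=vmnic0",
    ]

    for cmd in CMDS:
        print("+ " + cmd)
        subprocess.check_call(cmd.split())
    ```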

    Cheers!

  12. Bjørn-T Nikolaisen (@btn003)

    I see, but do you agree that with two 10 GB NICs it is OK to use one VLAN, one vmkernel port, and both NICs active/active with LBT? That is my understanding of the book and other blog posts you have made 🙂
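
    In case it helps, a rough pyVmomi sketch of switching a distributed port group over to "Route based on physical NIC load" (LBT). The vCenter address, credentials, and port group name are all placeholders, and it assumes the port group already lives on a vSphere Distributed Switch, since LBT is not available on standard vSwitches.

    ```python
    # Sketch: set the LBT teaming policy on an existing vDS port group.
    # Placeholders: vcenter.lab.local, the credentials, and "dvPG-LabTraffic".
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut; validate certs in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the distributed port group by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == "dvPG-LabTraffic")
    view.Destroy()

    # Override only the uplink teaming policy: "loadbalance_loadbased" is LBT.
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        policy=vim.StringPolicy(inherited=False, value="loadbalance_loadbased"))
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming)

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=port_cfg)
    pg.ReconfigureDVPortgroup_Task(spec)

    Disconnect(si)
    ```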

  13. Bjørn-T Nikolaisen (@btn003)

    I see, thank you again!

  14. Virtual Servers Compared – Quick Dedicated Server

    […] Efficient Virtual Networking Designs for … – Although we’d all like to have a plethora of NICs in our physical home lab servers, it’s usually a bit more budget friendly to go with anywhere from two to four … […]
