3 Responses

  1. nate

    I ran into something like this a few weeks ago. My servers have 2x 10GbE NICs (4 ports total) for VMs and 2x 1GbE ports for the service console. I have been having flaky 10GbE NICs (a new manufacturing run of the NICs is on its way now, which should fix the issues), but one day both 10GbE NICs failed at the same time, leaving the VMs stranded. vCenter thought everything was A-OK because management was on the 1GbE side (intentionally put there in case the 10GbE side died). So I shut down the VMs individually and moved them over manually, since I could not vMotion them.

    A week or so later I had another similar issue, where somehow VMware detected the fault, put the host in degraded mode, shut the VMs off, and moved them automatically. I didn’t understand how it detected the fault, but maybe the failure scenario was different enough that it tripped another type of response. The logs didn’t have much useful information as to what triggered the host to go degraded, since VMware wasn’t monitoring the 10GbE interfaces from the service console.

    I later saw that you can create a second service console interface so that VMware will fail things over if either network goes down; I plan to do that some time soon.
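
    [Editor's note: for reference, a rough pyVmomi sketch of what that configuration might look like on a current ESXi host, where a second VMkernel interface tagged for management traffic plays the role of the extra service console port. The host name, vSwitch, port group, and IP details are placeholders, not anything from nate's setup.]

    ```python
    # Sketch: add a second management VMkernel interface on a 1GbE vSwitch
    # so HA heartbeats have a path even if the 10GbE side fails.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the ESXi host by name (placeholder).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.com")
    view.DestroyView()

    net_sys = host.configManager.networkSystem

    # Port group on the 1GbE vSwitch for the second management path.
    pg_spec = vim.host.PortGroup.Specification(
        name="Management-2", vlanId=0, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=pg_spec)

    # VMkernel NIC on that port group with a static IP.
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False,
                             ipAddress="192.168.10.11",
                             subnetMask="255.255.255.0"))
    vmk = net_sys.AddVirtualNic(portgroup="Management-2", nic=nic_spec)

    # Tag the new vmk interface for management traffic so it is used
    # as an additional heartbeat/management path.
    host.configManager.virtualNicManager.SelectVnicForNicType("management", vmk)

    Disconnect(si)
    ```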

  2. Ranjna

    Can you explain how a network partition and network isolation are different?

