Rodney Buike - Founder and original lazy admin. MVP: System Center Cloud and Datacenter Management

Daniel Nerenberg - Lazy admin 2.0. MVP: Windows Expert - IT Pro

Disclaimer

These postings are provided "AS IS" with no warranties, and confer no rights. You assume all risk for your use.

Live Migration NIC Binding

A typical Hyper-V R2 cluster built on Microsoft's best practices will have 6-8 NICs, depending on the SAN type (iSCSI or FC), including:

  • Management Network
  • VM Network
  • VM Network
  • CSV Network
  • Live Migration Network
  • Cluster Heartbeat Network
  • iSCSI MPIO (or FC adapter)
  • iSCSI MPIO (or FC adapter)

One common issue that comes up in this scenario is failed Live Migrations: Quick Migrations will work, but Live Migrations will not. When a Live Migration fails with the error "A cluster network is not available for this operation," the cause is an improper NIC binding order on the Hyper-V hosts. When this happens, two events are created in the Microsoft\Windows\Hyper-V High Availability\Admin event log on the destination server. Look for Event IDs 21126 and 21111.

Event Log 1

Event Log 2
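The two events above can also be pulled from an elevated PowerShell prompt on the destination node. This is a sketch only: the channel name below is the one I would expect for the Hyper-V High Availability Admin log on 2008 R2, so verify it against `Get-WinEvent -ListLog *Hyper-V*` on your own hosts.

```shell
# Sketch: query the destination node's Hyper-V High Availability Admin
# log for the two event IDs discussed above. Run elevated; confirm the
# exact channel name on your build before relying on it.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Hyper-V-High-Availability-Admin'
    Id      = 21126, 21111
} -MaxEvents 10 | Format-List TimeCreated, Id, Message
```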

Your first thought will be to check that all the cluster resources are online, and you will find that they are. To resolve the problem, you need to re-order the NIC binding for Live Migration to succeed. To do this:

  1. Open the Network and Sharing Center
  2. Click Change adapter settings
  3. Press the ALT key to show the menu bar and click Advanced
  4. Click Advanced Settings

LM NIC Binding 1

Under the Adapters and Bindings tab, move the Management Network virtual switch and its associated physical NIC to the top of the list; the remaining NICs can follow in any order. Repeat this on all nodes in your Hyper-V cluster.
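If you want to confirm the result without re-opening the dialog, the binding order the GUI edits is stored in the registry. A minimal read-only sketch, assuming the standard Tcpip Linkage key:

```shell
# Sketch: display the current TCP/IP binding order (the same list the
# Adapters and Bindings tab edits). After the change above, the
# management adapter's interface should appear first. Read-only.
(Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage').Bind
```

The entries are device GUIDs rather than friendly names, so match them up via `Get-WmiObject Win32_NetworkAdapter` if needed.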

LM NIC Binding

Once you have completed this, you will need to restart the Cluster service on all nodes in the cluster. Do this via Failover Cluster Manager, not Services.msc. Note that when you stop the Cluster service on a node, all running VMs on that node will be quick migrated to other nodes, so this can take a while if there are a lot of running VMs in the cluster.
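The restart can also be done from PowerShell with the FailoverClusters module that ships with 2008 R2. A sketch, one node at a time ("HV01" is an example node name, not from the original article):

```shell
# Sketch: stop and restart the Cluster service on one node, the
# scripted equivalent of the Failover Cluster Manager step above.
# Running VMs on the node are quick migrated away when it stops.
Import-Module FailoverClusters
Stop-ClusterNode  -Name HV01
Start-ClusterNode -Name HV01
```

Work through the nodes sequentially so the cluster keeps quorum and the migrated VMs always have somewhere to land.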

