  • 1.  Question about moving vMotion to new vSwitch

    Posted Apr 16, 2014 04:49 PM

    I am replacing my ESXi servers and I want to reconfigure vMotion to use a separate vSwitch with multiple NICs. Currently vMotion runs on my iSCSI NICs (I know it's not best practice), which is why I want to move it. My question is, and this may be a dumb question, but how can I set up the new servers and vMotion the VMs over if vMotion isn't on the same network? I tried enabling vMotion on the Management network while leaving it enabled on the iSCSI NICs, and I set up the new vMotion vSwitch on the new server using the same VLAN as the Management network, but it still tries to use the iSCSI NIC. I don't want to disable vMotion on the iSCSI NIC because it could disrupt connectivity to the datastores. So what am I missing? This is in production and I want to do this without any downtime.
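
    If it helps, this is roughly how I check which VMkernel adapter actually has the vMotion service enabled on each host. It's just a pyVmomi sketch; the vCenter name and credentials are placeholders, and the certificate check is disabled only because it's a quick lab-style script.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()              # placeholder: skip cert checks
    si = SmartConnect(host="vcenter.example.local",     # placeholder vCenter
                      user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
            selected = set(cfg.selectedVnic or [])
            print(host.name)
            for vnic in cfg.candidateVnic:
                tag = "vMotion" if vnic.key in selected else "-"
                print("  %s  %-15s %s" % (vnic.device, vnic.spec.ip.ipAddress, tag))
        view.Destroy()
    finally:
        Disconnect(si)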



  • 2.  RE: Question about moving vMotion to new vSwitch

    Posted Apr 16, 2014 07:10 PM

    It's not easy to help without knowing more about your network environment (physical as well as virtual). Basically, the vMotion port groups of all the hosts need to be configured in the same subnet. What you could do is configure separate vSwitches on the new hosts, but keep them connected to the iSCSI network until all hosts have been reinstalled/reconfigured. Once that's done, change the configuration for all vMotion port groups and the uplinks (if applicable).
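
    As a rough per-host sketch of that in pyVmomi (the vSwitch/port group names, vmnic numbers, VLAN ID and IP address are placeholders for your environment, `host` is an already-connected vim.HostSystem, and the VLAN stays on the current vMotion network for now as described above):

    from pyVmomi import vim

    net_sys = host.configManager.networkSystem

    # New standard vSwitch using the uplinks reserved for vMotion
    vss_spec = vim.host.VirtualSwitch.Specification()
    vss_spec.numPorts = 128
    vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic6", "vmnic7"])
    net_sys.AddVirtualSwitch(vswitchName="vSwitch2", spec=vss_spec)

    # vMotion port group; keep the current VLAN for now, change it on all hosts later
    pg_spec = vim.host.PortGroup.Specification()
    pg_spec.name = "vMotion"
    pg_spec.vswitchName = "vSwitch2"
    pg_spec.vlanId = 20                                # placeholder VLAN ID
    pg_spec.policy = vim.host.NetworkPolicy()
    net_sys.AddPortGroup(portgrp=pg_spec)

    # VMkernel adapter in the vMotion subnet, then tag it for vMotion
    vnic_spec = vim.host.VirtualNic.Specification()
    vnic_spec.ip = vim.host.IpConfig(dhcp=False,
                                     ipAddress="10.0.20.11",      # placeholder
                                     subnetMask="255.255.255.0")
    vmk = net_sys.AddVirtualNic(portgroup="vMotion", nic=vnic_spec)
    host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)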

    André



  • 3.  RE: Question about moving vMotion to new vSwitch

    Posted Apr 16, 2014 07:44 PM

    That makes sense, and I can get it to vMotion using the same iSCSI network, but the problem is that at some point I still have to disable vMotion on the iSCSI NICs, and I'm worried about the message that says I could lose connectivity to the storage. So in order to disable it, I have to vMotion all the VMs off a host, but then I can't vMotion them back on to do the next host. Does that make sense? Here is more information.
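
    The part I'd actually script per host (after evacuating its VMs) is just moving the vMotion service flag from the old vmknic to the new one, something like this pyVmomi sketch (the vmk names are placeholders and `host` is a connected vim.HostSystem); only the service selection changes here, the old VMkernel adapter itself is left in place:

    OLD_VMOTION_VMK = "vmk1"   # placeholder: current vmknic on the iSCSI vSwitch
    NEW_VMOTION_VMK = "vmk2"   # placeholder: vmknic on the new vMotion vSwitch

    vnic_mgr = host.configManager.virtualNicManager
    vnic_mgr.SelectVnicForNicType("vmotion", NEW_VMOTION_VMK)      # enable on the new vmknic
    vnic_mgr.DeselectVnicForNicType("vmotion", OLD_VMOTION_VMK)    # disable on the iSCSI-side vmknic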

    I have 5 ESXi servers in a cluster that I am replacing with new Dell blade servers. They are connected to 2 separate stacks of Dell 8164 switches: one stack is for iSCSI and one is for network traffic. The blades are connected by Force10 MXL switches with the 40Gb trunk cables. I am using 2 NICs for management, 2 NICs for iSCSI, 2 NICs for VM network traffic, and on the new servers I want to use 2 NICs for vMotion. Management is on VLAN A, iSCSI is on VLAN B, and I want to put vMotion on VLAN C.
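
    To keep track of which uplinks and VLANs each host is actually using while I move things around, a small pyVmomi sketch like this works (same assumptions as above, `host` is a connected vim.HostSystem; the pnic entries print as their object keys):

    net = host.config.network
    print(host.name)
    for vss in net.vswitch:                 # standard vSwitches and their physical uplinks
        print("  %s: uplinks %s" % (vss.name, list(vss.pnic)))
    for pg in net.portgroup:                # port groups with their VLAN IDs
        print("    %s (VLAN %s) on %s" % (pg.spec.name, pg.spec.vlanId, pg.spec.vswitchName))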

    Now that I typed all that out, I guess my problem is that I want to use the NICs attached to the server stack for vMotion and not the iSCSI stack, but those are connected using the 40Gb uplinks directly to the server stack, and there is no routing for the iSCSI VLAN on that switch. There are 4 open 10Gb ports on the MXL blade switches that I might be able to use to connect directly to the iSCSI switches to keep them on the same VLAN.

    Does that make sense? I will have to work on that.