That makes sense, and I can get vMotion working over the same iSCSI network. The problem is that at some point I still have to disable vMotion on the iSCSI NICs, and I'm worried about the warning that says I could lose connectivity to the storage. So in order to disable it safely I have to vMotion all the VMs off that host first, but then I can't vMotion them back on in order to do the next host. Does that make sense? Here is more information.
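For reference, this is roughly how I'd be flipping the vMotion service on and off on those iSCSI vmkernel ports. It's just a pyVmomi sketch, not the exact script I'm running; the vCenter name, the host name, and vmk1 as the iSCSI vmkernel device are all placeholders:

```python
# Sketch: toggle the vMotion service on a vmkernel port via pyVmomi.
# Hostnames, credentials, and the vmk device are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)

def set_vmotion(host_name, vmk_device, enabled):
    """Enable or disable the vMotion service on one vmkernel adapter."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == host_name)
    view.Destroy()
    nic_mgr = host.configManager.virtualNicManager
    if enabled:
        nic_mgr.SelectVnicForNicType("vmotion", vmk_device)
    else:
        nic_mgr.DeselectVnicForNicType("vmotion", vmk_device)

# e.g. turn vMotion off on the iSCSI vmkernel port of one of the old hosts
set_vmotion("esx-01.example.com", "vmk1", enabled=False)

Disconnect(si)
```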
I have 5 ESXi servers in a cluster that I am replacing with new Dell blade servers. The existing hosts are connected to 2 separate stacks of Dell 8164 switches: one stack is for iSCSI and one is for network traffic. The blades connect through Force10 MXL switches with 40Gb trunk uplinks. I am using 2 NICs for management, 2 NICs for iSCSI, 2 NICs for VM network traffic, and on the new servers I want to use 2 more NICs dedicated to vMotion. Management is on VLAN A, iSCSI is on VLAN B, and I want to put vMotion on VLAN C.
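On the new blades, the VLAN C vMotion setup I have in mind would look roughly like this. Again, just a pyVmomi sketch to show the intent; the vSwitch name, VLAN ID, and IP addressing below are placeholders, not my real values:

```python
# Sketch: add a VLAN C port group and a dedicated vMotion vmkernel port on a new host.
# vSwitch name, VLAN ID, and IP addressing are placeholders.
from pyVmomi import vim

def add_vmotion_vmk(host, vlan_id, ip, netmask, vswitch="vSwitch0"):
    net_sys = host.configManager.networkSystem

    # Port group for vMotion on VLAN C
    pg_spec = vim.host.PortGroup.Specification(
        name="vMotion", vlanId=vlan_id, vswitchName=vswitch,
        policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=pg_spec)

    # vmkernel adapter with a static IP on that port group
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask))
    vmk = net_sys.AddVirtualNic(portgroup="vMotion", nic=nic_spec)

    # Tag the new vmkernel port for vMotion traffic
    host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)
    return vmk
```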
Now that I've typed all that out, I guess my real problem is that I want to use the NICs attached to the server stack for vMotion, not the iSCSI stack, but those NICs connect over the 40Gb uplinks directly to the server stack, and there is no routing for the iSCSI VLAN on that switch. There are 4 open 10Gb ports on the MXL blade switches that I might be able to use to connect directly to the iSCSI switches and keep them on the same VLAN.
Does that make sense? I will have to work on that.