ESXi

  • 1.  vmotion network interface

    Posted Jun 03, 2011 12:03 PM

    Hi,
    We have a basic setup of two ESXi 4.1 servers, both with two physical network cards.

    Card 1 I have configured to attach to our normal LAN, with a VMkernel port group that has only "Management Traffic" enabled.

    Card 2 I have configured to use a private VLAN, with a VMkernel port group that has both "vMotion" and "Fault Tolerance" enabled.

    I can ping across the private VLAN between the two servers.
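
    (For reference, a quick way to double-check how the VMkernel ports are set up on each host is to query them through the vSphere API. This is only a minimal pyVmomi sketch; the vCenter hostname and credentials are placeholders.)

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        # Connect to vCenter (or an individual host); placeholder credentials.
        si = SmartConnect(host="vcenter.example.com", user="administrator",
                          pwd="password", sslContext=ssl._create_unverified_context())
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.HostSystem], True)
            for host in view.view:
                print(host.name)
                # host.config.network.vnic lists the VMkernel ports (vmk0, vmk1, ...)
                for vnic in host.config.network.vnic:
                    print("  %s  portgroup=%s  ip=%s" % (
                        vnic.device, vnic.portgroup, vnic.spec.ip.ipAddress))
            view.Destroy()
        finally:
            Disconnect(si)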

    I expected ESXi to use this private VLAN when moving VMs between the two servers, but watching the network performance stats it is using the LAN interface.
     
    Is this expected behaviour?

    Regards
    Chris



  • 2.  RE: vmotion network interface
    Best Answer

    Posted Jun 03, 2011 12:08 PM

    Welcome to the Community,

    What exactly are you doing? vMotion is the process of moving only the running workload to another host, leaving the virtual disks in place on shared storage. If you are cold migrating VMs (i.e. moving the virtual disks as well), the copy will use the Management Network.
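
    (If it helps, you can also ask each host which VMkernel NIC it has actually selected for a given traffic type through the vSphere API. A rough pyVmomi sketch follows; the vCenter hostname and credentials are placeholders.)

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.com", user="administrator",
                          pwd="password", sslContext=ssl._create_unverified_context())

        def selected_vnics(host, nic_type):
            # QueryNetConfig returns the candidate VMkernel NICs plus the keys of
            # the ones currently selected for this traffic type.
            cfg = host.configManager.virtualNicManager.QueryNetConfig(nic_type)
            chosen = set(cfg.selectedVnic or [])
            return [v.device for v in (cfg.candidateVnic or []) if v.key in chosen]

        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.HostSystem], True)
            for host in view.view:
                print(host.name,
                      "vmotion:", selected_vnics(host, "vmotion"),
                      "management:", selected_vnics(host, "management"))
            view.Destroy()
        finally:
            Disconnect(si)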

    André



  • 3.  RE: vmotion network interface

    Posted Jun 03, 2011 12:15 PM

    Hi André,

    I am moving (migrating) the whole VM, including the virtual disks, from the local storage on one server to the other.

    Thanks for the clarification.

    Regards

    Chris



  • 4.  RE: vmotion network interface

    Posted Jun 03, 2011 12:32 PM

    Hi,

    So you are trying to perform a Storage vMotion, but you've mentioned migrating the VM files from local storage. Storage vMotion and vMotion both require shared storage that is accessible to both of your ESX servers, and your VM files need to be placed on that shared storage before you can use either.
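
    (For completeness, this is roughly what a cold migration that changes both host and datastore looks like through the API, i.e. the operation the vSphere Client performs when you pick a new host and datastore for a powered-off VM. A pyVmomi sketch only; the VM, host and datastore names and the vCenter credentials are placeholders. As noted above, the disk copy in this case goes over the Management Network.)

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.com", user="administrator",
                          pwd="password", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()

        def find_by_name(vimtype, name):
            # Walk the inventory and return the first object with a matching name.
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vimtype], True)
            try:
                return next(obj for obj in view.view if obj.name == name)
            finally:
                view.Destroy()

        vm = find_by_name(vim.VirtualMachine, "testvm01")              # powered-off VM
        dest_host = find_by_name(vim.HostSystem, "esxi02.example.com")
        dest_ds = find_by_name(vim.Datastore, "esxi02-local")

        spec = vim.vm.RelocateSpec()
        spec.host = dest_host
        spec.pool = dest_host.parent.resourcePool   # root resource pool of the target
        spec.datastore = dest_ds

        # Copies the VM's files to the new datastore and re-registers it on the new
        # host; with local datastores this traffic uses the management interface.
        task = vm.RelocateVM_Task(spec)
        Disconnect(si)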



  • 5.  RE: vmotion network interface

    Posted Jun 03, 2011 12:54 PM

    Are you storing your virtual disks (VMDKs) on local ESXi hardware, or do you have a SAN that you store your virtual disks on? I'm guessing that if you are choosing to move the disks when you kick off a move, then you are storing your virtual disks locally on the ESXi servers. This isn't a bad thing per se, especially if you were a single-ESXi-server setup before and you've taken the awesome step of getting vCenter rolling so you can move the workload between hosts.

    As far as best practices go, it would be better to have shared storage for your virtual disks so they don't really have to go anywhere, and you can just use vMotion for what it's really for: moving the workload (RAM, CPU, and the rest of the resource usage other than the virtual disks) from one ESXi server to another. This would also make caring for your ESX servers easier. At that point, you would never have to worry about leaving anything important stranded on an ESX server, should something ever go seriously wrong.



  • 6.  RE: vmotion network interface

    Posted Jun 03, 2011 01:26 PM

    Hi All,

    Thanks for the helpful replies. To explain, we do have a SAN and intend to attach our ESXi servers (three in total) to shared storage; however, I wanted to get the basics up and running one step at a time, in this instance the various networking aspects.

    We have 8 physical network cards per server and I intend to use 6 of those: 2 x LAN connections, 2 x SAN connections, and 2 x VMkernel (Fault Tolerance and vMotion), all fixed at 1 Gb full duplex.
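
    (Once the remaining NICs are cabled up, something like the following could dump each host's vSwitch / uplink / port group layout to sanity-check the 2 x LAN, 2 x SAN, 2 x vMotion/FT split. Again just a pyVmomi sketch with placeholder credentials.)

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.com", user="administrator",
                          pwd="password", sslContext=ssl._create_unverified_context())
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.HostSystem], True)
            for host in view.view:
                net = host.config.network
                print(host.name)
                for vsw in net.vswitch:
                    # Map the vSwitch's uplink keys back to vmnic device names.
                    uplinks = [p.device for p in (net.pnic or []) if p.key in (vsw.pnic or [])]
                    pgs = [pg.spec.name for pg in net.portgroup
                           if pg.spec.vswitchName == vsw.name]
                    print("  %s  uplinks=%s  portgroups=%s" % (vsw.name, uplinks, pgs))
            view.Destroy()
        finally:
            Disconnect(si)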

    As mentioned, I expected to see it using the vMotion interface for 'internal' operations (i.e. moving 'things' around) rather than impacting the general user LAN interface. Having said that, I appreciate that once the VMs are moved to shared storage the issue goes away.

    Regards

    Chris