ESXi

  • 1.  Storage vMotion takes ages

    Posted Nov 06, 2012 05:30 PM

    Hi Guys,

    I'm svMotioning VMs between an NFS datastore and an iSCSI one and noticed that the process takes ages.  All VMDKs are thin across the board.

    I noticed that the network usage on the host is very high (to the point of link saturation).  It seems that the "used" space of the VM is being ignored and the storage vMotion is actually migrating the provisioned size of the VM instead!

    As an example:

    A VM could have a 500GB Thin disk with 3GB in use.  I see 500GB get sent across the network from the NFS appliance and then only 3GB gets sent onto the iSCSI device....

    How can I live migrate VMs from one to the other but only have the "in-use" data transferred?  This would be MUCH faster than what I'm enduring at the moment.



  • 2.  RE: Storage vMotion takes ages

    Posted Nov 06, 2012 10:19 PM

    I haven't tested this yet, but from what I've read online I understand that you can reclaim your space by zeroing out the blocks (and this could be scheduled once a week). You first need to defrag the virtual disks to end up with contiguous free space, and then use the sdelete utility to zero the blocks.

    My theory is that this will not only reclaim the space by zeroing the blocks, but also improve the storage vMotion time by reducing the volume of data that needs to be transferred.

    Here is a link to the sdelete utility:

    http://technet.microsoft.com/en-gb/sysinternals/bb897443.aspx

    Sample script:

    http://www.yellow-bricks.com/2008/01/04/vmware-consolidated-backup-and-deleted-files/
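
    The rough in-guest sequence would be something like the commands below (just a sketch, assuming the thin disk is mounted as D: inside the Windows guest; adjust the drive letter to suit):

        rem Consolidate files first so the remaining free space is contiguous
        defrag D:

        rem Zero the free space so only blocks holding real data are non-zero
        sdelete -z D:

    Bear in mind that writing the zeroes can temporarily inflate a thin vmdk on the source datastore, so make sure there is headroom before running it.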

    We make heavy use of thin provisioning, so this is of real interest at the moment. I will post my results once I have had a chance to test this theory, but let me know if it works for you.

    Cheers,

    Jon



  • 3.  RE: Storage vMotion takes ages

    Posted Nov 07, 2012 10:04 AM

    Thanks for the response :)

    If I spin up a brand new VM and give it a 100GB Thin disk, then immediately storage vMotion the empty VM container, I can see the 100GB of empty data being "read" by the host from the NFS appliance.  But just so I understand, you're saying I would need to install an OS into the empty VM container, defrag, then perform a storage vMotion and it will run much faster?

    I will test this out although I must say I do have doubts as to whether this will actually work.
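
    In the meantime, one way I can sanity-check what is actually allocated versus provisioned is from the host's shell (just an illustration; the datastore and VM names below are placeholders for my environment):

        # Provisioned size of the thin disk's data file
        ls -lh /vmfs/volumes/nfs_datastore/testvm/testvm-flat.vmdk

        # Space actually consumed on the NFS datastore
        du -h /vmfs/volumes/nfs_datastore/testvm/testvm-flat.vmdk

    If du reports only a few MB but the full 100GB still crosses the wire during the migration, then the copy really is reading the provisioned size rather than just the allocated blocks.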

    I've used vmkfstools in the past to copy VMDKs, which only moves the "in-use" data and is therefore much faster; however, I can't use that utility to live migrate a VM, and my requirement here is that none of the VMs can suffer downtime.
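
    For reference, that offline approach is just a straight clone with vmkfstools, along these lines (paths are only examples from my environment, and the disk can't be in use while it runs, which is exactly the downtime problem):

        # Clone the disk, writing only the allocated blocks into a new thin-provisioned vmdk
        vmkfstools -i /vmfs/volumes/nfs_datastore/myvm/myvm.vmdk -d thin /vmfs/volumes/iscsi_datastore/myvm/myvm_clone.vmdk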



  • 4.  RE: Storage vMotion takes ages

    Posted Nov 07, 2012 10:45 AM

    You can test this by adding a new thinly provisioned disk to an existing VM and using the advanced options to Storage vMotion just that disk. This lets you test my theory without having to deploy a new OS.

    It will be interesting to see how the back-end array influences this, especially in a scenario where you have thin on thin or deduplication in the mix ... I'm on a train into work at the moment but will test this today on both VMFS (thin on thin) and NFS (with deduplication).

    But as I said, it's just a theory at the moment (perhaps even a long shot); if nothing else it might prompt some real propeller heads in the community to come up with a tried and tested solution. I'll post my results later.

    Cheers,

    Jon



  • 5.  RE: Storage vMotion takes ages

    Posted Nov 07, 2012 01:05 PM

    Ok so I tried adding an additional 50GB thin disk to a test Windows VM I had already deployed.  The VM and all its disks sat on the NetApp FAS storage (NFS) I have here.

    I formatted the disk and ran an analysis; Windows reports it as 0% fragmented, and when I try to run a defrag it says the same and doesn't run.  Anyway, I tried the storage vMotion with only the new disk and unfortunately I'm seeing the same result.  The target is an iSCSI device and again it looks like the whole 50GB of unused data is being sent over the wire...  I'm wondering if this is just expected behaviour when migrating between file and block storage, or perhaps this is simply the way storage vMotion works in vSphere 5.

    Thanks again for the responses and help; guess I'll have to sit here twiddling my thumbs for the next 10 hours, waiting for my VMs to finish moving :P



  • 6.  RE: Storage vMotion takes ages

    Posted Nov 07, 2012 08:20 PM

    I only managed to do some limited testing today, and since I rushed it the results aren't exactly scientific ... I will try to do some more tomorrow.

    Here are the results so far:

    Storage vMotion scenario | Source storage | Destination storage | Start | Finish | Duration
    10GB thinly provisioned disk with defrag and sdelete -z, empty disk | NetApp MetroCluster (NFS) | NetApp MetroCluster (NFS) | 19:38:52 | 19:40:56 | 00:02:04
    10GB thinly provisioned disk with no free space (used iometer to fill the disk), full disk | NetApp MetroCluster (NFS) | NetApp MetroCluster (NFS) | 19:56:42 | 19:59:02 | 00:02:20
    10GB thinly provisioned disk, contents deleted (including recycle bin), no defrag, no sdelete, empty disk | NetApp MetroCluster (NFS) | NetApp MetroCluster (NFS) | 20:05:25 | 20:07:24 | 00:01:59
    10GB thinly provisioned disk with defrag and sdelete -z, empty disk (same as first test, but this disk has now been used) | NetApp MetroCluster (NFS) | NetApp MetroCluster (NFS) | - | - | 00:00:00

    sVMotion performance (see attached chart): the three peaks are the three completed tests above.