VMware vSphere

  • 1.  ESXi 7 - did anything change with thin provisioned disks?

    Posted Mar 12, 2022 09:59 PM

    Hi all,

    Apart from the vSphere systems at work, I set up a free ESXi 7 at home to toy with, and I am a bit confused by the reported amount of storage space taken up by VMs with thin-provisioned disks.

    E.g. a fresh Ubuntu VM with a nominal disk size of 60 GB, of which the guest OS currently uses 16%. My UI shows what you can see in the screenshot. This is at least consistent with the used and free space reported for my datastore, but I am still a bit confused.

    In previous vSphere installations, thin-provisioned disks took up only the space actually used by the guest. Oversubscription is the whole point!

    What am I missing?

    Thanks!
    Patrick

    P.S.

    I just checked our 6.7 installation - same. All thin-provisioned VMs take up the maximum space possible. My memory must come from 5.5 ...
    Is there perhaps some global setting now to permit oversubscription of storage?



  • 2.  RE: ESXi 7 - did anything change with thin provisioned disks?

    Posted Mar 13, 2022 09:43 PM

    So I did some more experiments ...

    1. Inside the Linux VM that is supposed to be thin provisioned:
      dd if=/dev/zero of=/deleteme bs=1024k; rm /deleteme  # fill free space with zeros, then delete
      shutdown -P now
    2. On the ESXi host:
      vmkfstools -i TrueCommand.vmdk -d thin foo.vmdk
      mv foo.vmdk TrueCommand.vmdk
      mv foo-flat.vmdk TrueCommand-flat.vmdk
      vi TrueCommand.vmdk # change foo-flat.vmdk to TrueCommand-flat.vmdk
    3. Result:

      -rw-------    1 root     root     64424509440 Mar 13 21:39 TrueCommand-flat.vmdk

    So the nominally 60 G thin-provisioned disk, with 16% used inside the guest and the rest zeroed, still takes up 60 G on the ESXi host.
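    A side note on checking this: ls -l always reports the nominal size of a flat vmdk, while du reports the blocks actually allocated, so du is the better tool for judging real consumption. The distinction can be sketched with an ordinary sparse file (a plain-Linux analogy, not VMFS; the file name is a placeholder):

```shell
# create a 60 GB sparse file to mimic a freshly thin-provisioned flat vmdk
dd if=/dev/zero of=demo-flat.vmdk bs=1 count=0 seek=64424509440 2>/dev/null

# nominal size: the full 64424509440 bytes, just like the ls -l output above
ls -l demo-flat.vmdk

# actually allocated blocks: close to zero for the untouched sparse file
du -k demo-flat.vmdk
```

    On VMFS the same idea applies: a thin disk whose every block has been written, even with zeros, has all of its blocks allocated, which matches the datastore usage reported here.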

    WTF? Is everything I thought I knew about disk images in VMware no longer valid?



  • 3.  RE: ESXi 7 - did anything change with thin provisioned disks?

    Posted Mar 13, 2022 10:00 PM

    Your dd command will turn any type of mixed-fragment vmdk into a completely eager-zeroed thick-provisioned vmdk (ignoring space outside of partitions).
    Check with
    vmkfstools -t 0 name.vmdk > fragments.txt
    fragments.txt should then not contain any lines with thin or lazy-zeroed fragments.
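    A minimal sketch of that check, assuming unbacked thin fragments appear as NOMP entries (the two sample lines copy the format of the mapping posted further down in this thread):

```shell
# one eager-zeroed fragment backed by VMFS and one unbacked (NOMP) fragment
cat > fragments.txt <<'EOF'
[           0:     9437184] --> [VMFS -- LVID:622905d9-e1cb2e4a-352e-3cecef46f8d0/622905d9-d4b3cb7e-f0e8-3cecef46f8d0/1:( 240606248960 -->  240615686144)]
[     9437184:   536870912] --> [NOMP --            0 -->    536870912)]
EOF

# count fragments without physical backing; 0 would mean fully eager-zeroed
grep -c NOMP fragments.txt
```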

    Ulli



  • 4.  RE: ESXi 7 - did anything change with thin provisioned disks?

    Posted Mar 14, 2022 08:58 AM

    Thanks. I was just following the procedure I had used for years to shrink a disk: fill with zeroes, then convert. Could you explain a bit more about what is happening here? I am 100% sure this used to work in the past, on ESXi 5.x as well as in VMware Fusion on my Mac. There's a reason sdelete.exe -z exists.

    Thanks,
    Patrick

    Attachment(s)

    fragments.txt (80 KB)


  • 5.  RE: ESXi 7 - did anything change with thin provisioned disks?

    Posted Mar 14, 2022 09:18 AM

    OK - now things are getting weird:

    • State of the VM - before.png
    • Create an additional virtual hard disk, type thin provisioned, size 16 G (the default)
    • Delete new hard disk again
    • State of the VM - after.png

    The state after is what I was looking for. Any idea what is happening here?

    This is the same VM where I did the dd/vmkfstools "dance".



  • 6.  RE: ESXi 7 - did anything change with thin provisioned disks?

    Posted Mar 14, 2022 11:08 AM

    Your procedure to shrink the disk is not very helpful for understanding thin provisioning.
    Actually, a dd command that writes zeros into an empty vmdk creates one fragment in eager-zeroed provisioning style.
    I suggest you use the vmkfstools command that shows the vmdk fragment by fragment to get a better understanding.
    I have written some posts about "what actually happens when provisioning vmdks", but I don't have my notes available at the moment.
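    That explicitly written zeros still allocate real blocks can be illustrated with an ordinary sparse file on Linux (only an analogy to VMFS behaviour; the file name is a placeholder):

```shell
# a 1 GiB sparse file: nothing allocated yet
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1073741824 2>/dev/null
du -k sparse.img

# write 100 MiB of zeros into it; conv=notrunc keeps the 1 GiB file size
dd if=/dev/zero of=sparse.img bs=1M count=100 conv=notrunc 2>/dev/null
du -k sparse.img
```

    The second du shows roughly 102400 KiB allocated: the zeros occupy real blocks, which is exactly what dd does to a thin vmdk.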

    Ulli



  • 7.  RE: ESXi 7 - did anything change with thin provisioned disks?

    Posted Mar 14, 2022 11:17 AM

    [root@esxi:/vmfs/volumes/622905da-0f0cf95d-916c-3cecef46f8d0/TrueCommand] vmkfstools -t 0 TrueCommand.vmdk
    Mapping for file TrueCommand.vmdk (64424509440 bytes in size):
    [           0:     9437184] --> [VMFS -- LVID:622905d9-e1cb2e4a-352e-3cecef46f8d0/622905d9-d4b3cb7e-f0e8-3cecef46f8d0/1:( 240606248960 -->  240615686144)]
    [     9437184:   536870912] --> [NOMP --            0 -->    536870912)]
    [   546308096:   536870912] --> [NOMP --            0 -->    536870912)]
    [  1083179008:    45088768] --> [NOMP --            0 -->     45088768)]

    Line 3 shows an eager-zeroed fragment: it references a location backed by the VMFS filesystem on physical storage.
    Line 4 shows a thin-provisioned fragment: it is not physically backed at all; for reading purposes it points to /dev/zero.

    Your list contains no lazy-zeroed fragments.
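    Given that output format, where each entry reads [offset: length], the bytes in unbacked (NOMP) fragments can be totalled with a short awk one-liner; the sample file below just reproduces the mapping excerpt above:

```shell
# reproduce the mapping excerpt (NOMP = no physical backing)
cat > fragments.txt <<'EOF'
[           0:     9437184] --> [VMFS -- LVID:622905d9-e1cb2e4a-352e-3cecef46f8d0/622905d9-d4b3cb7e-f0e8-3cecef46f8d0/1:( 240606248960 -->  240615686144)]
[     9437184:   536870912] --> [NOMP --            0 -->    536870912)]
[   546308096:   536870912] --> [NOMP --            0 -->    536870912)]
[  1083179008:    45088768] --> [NOMP --            0 -->     45088768)]
EOF

# split on ':' and ']' so field 2 is the fragment length; sum NOMP lengths
awk -F'[]:]' '/NOMP/ { gsub(/ /, "", $2); sum += $2 } END { print sum }' fragments.txt
```

    For this excerpt it prints 1118830592, i.e. roughly 1 GiB of the shown range reads as zeros without any physical backing.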

    Ulli



  • 8.  RE: ESXi 7 - did anything change with thin provisioned disks?

    Posted Mar 16, 2022 10:34 AM

    What is the recommended procedure to shrink a thin-provisioned disk, then? Like after a Windows service pack installation or a major OS upgrade in the guest? As I said, I have been zeroing the disks from the guest and then converting them with vmkfstools for years, and it always worked for me. VMware Fusion, too, is much more effective at cleaning up when you use sdelete.exe -z in the Windows VM first.