VMware vSphere

  • 1.  Thick and Thin Provisioning Differences

    Posted Apr 23, 2016 09:15 AM

    I understand it in general, but I can't comprehend how exactly it utilizes the space. Here is what I mean: for example, I have a limitation of 256 GB for a guest VM hard disk. If I create a new hard disk, set its size to 256 GB (the maximum), and set its provisioning to thin, will this hard disk expand if its content grows beyond 256 GB? Please explain it to me. Thanks in advance!



  • 2.  RE: Thick and Thin Provisioning Differences

    Posted Apr 23, 2016 10:42 AM

    Hello,

    Welcome to the community!

    Thin – It will consume only as much space as it needs initially and will grow on demand later. For example, if you provision a VM disk with 4 GB but it initially needs only 2 GB, it will consume only 2 GB and can grow later up to 4 GB, because that is what you allocated.

    Thick – It will consume the entire space from the datastore up front. For example, if you provision a disk with 4 GB, that 4 GB is straight away gone from the datastore, dedicated to it.

    Kindly mark this as correct/helpful if it answers your query.
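    The growth rule above (a thin disk consumes only what is written, but can never grow past its provisioned size) can be sketched in a few lines of Python. This is a hypothetical illustration, not VMware code; the class and method names are made up:

    ```python
    # Hypothetical sketch (not VMware code): how thin and thick disks
    # consume datastore space, in GB.

    class VirtualDisk:
        def __init__(self, provisioned_gb, thin):
            self.provisioned_gb = provisioned_gb  # size the guest OS sees
            self.thin = thin
            self.used_gb = 0                      # data actually written

        def datastore_usage_gb(self):
            # Thick reserves everything up front; thin only what was written.
            return self.used_gb if self.thin else self.provisioned_gb

        def write(self, gb):
            # Writes beyond the provisioned size fail for BOTH formats:
            # a thin disk grows up to its provisioned size, never past it.
            if self.used_gb + gb > self.provisioned_gb:
                raise IOError("disk cannot grow past its provisioned size")
            self.used_gb += gb

    thin = VirtualDisk(4, thin=True)
    thick = VirtualDisk(4, thin=False)
    thin.write(2)
    thick.write(2)
    print(thin.datastore_usage_gb())   # 2 -- only what was written so far
    print(thick.datastore_usage_gb())  # 4 -- full 4 GB reserved up front
    ```

    This also answers the original question: with either format, the guest can never write more than the provisioned maximum (256 GB in the example); thin only changes when the datastore space is consumed.
    
    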

    Rgds

    Kanishk



  • 3.  RE: Thick and Thin Provisioning Differences

    Posted Apr 23, 2016 01:17 PM

    If you're using VMFS3 with a block size of 1 MB, that explains the 256 GB per-virtual-disk limitation; see: VMware KB: Block size limitations of a VMFS datastore
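    For reference, the VMFS3 limits the KB describes follow a simple pattern (maximum file size is the block size in MB times 256 GB). A quick sketch of that table:

    ```python
    # VMFS3: datastore block size determines the maximum file (virtual
    # disk) size, per VMware's KB on block size limitations.
    # Maximum file size in GB = block size in MB * 256.

    VMFS3_LIMITS = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block MB -> max GB

    for block_mb, max_gb in sorted(VMFS3_LIMITS.items()):
        print(f"{block_mb} MB block size -> {max_gb} GB maximum virtual disk")
    ```

    So with a 1 MB block size the 256 GB ceiling applies regardless of thin or thick provisioning; reformatting with a larger block size (or moving to VMFS5) raises it.
    
    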



  • 4.  RE: Thick and Thin Provisioning Differences

    Posted Apr 23, 2016 05:26 PM

    Hi,

    Thin: Allocate and zero on first write

    Thick Lazy: Allocate in advance and zero on first write

    Thick Eager: Allocate and zero in advance

    Thin – These virtual disks do not reserve space on the VMFS filesystem, nor do they reserve space on the back-end storage. They only consume blocks when data is written to disk from within the VM/Guest OS. The amount of actual space consumed by the VMDK starts out small, but grows in size as the Guest OS commits more I/O to disk, up to a maximum size set at VMDK creation time. The Guest OS believes that it has the maximum disk size available to it as storage space from the start.

    Thick (aka LazyZeroedThick) – These disks reserve space on the VMFS filesystem, but there is an interesting caveat. Although they are called thick disks, they behave similarly to thinly provisioned disks: disk blocks are only used on the back-end (array) when they are written to from inside the VM/Guest OS. Again, the Guest OS inside this VM thinks it has the maximum size from the start.

    NAME: DESCRIPTION
    Eager Zeroed Thick: An eager zeroed thick disk has all space allocated and wiped clean of any previous contents on the physical media at creation time. Such disks may take a longer time to create compared to other disk formats. The entire disk space is reserved and unavailable for use by other virtual machines.
    Thick or Lazy Zeroed Thick: A thick disk has all space allocated at creation time. This space may contain stale data on the physical media. Before writing to a new block, a zero has to be written, increasing the IOPS on new blocks compared to eager disks. The entire disk space is reserved and unavailable for use by other virtual machines.
    Thin: Space required for a thin-provisioned virtual disk is allocated and zeroed on demand as the space is used. Unused space is available for use by other virtual machines.
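    The three zeroing policies summarized above can be modeled in a short sketch. This is a hypothetical illustration (the class and policy names are made up, not a VMware API), treating a disk as a set of blocks that are either unallocated, allocated with stale data, or zeroed:

    ```python
    # Hypothetical sketch (not VMware code) of the three zeroing policies:
    #   thin  - allocate and zero on first write
    #   lazy  - allocate in advance, zero on first write
    #   eager - allocate and zero in advance

    ALLOCATED, ZEROED = "allocated", "zeroed"

    class Vmdk:
        def __init__(self, blocks, policy):
            self.policy = policy
            if policy == "thin":
                self.blocks = {}                                      # nothing reserved yet
            elif policy == "lazy":
                self.blocks = {i: ALLOCATED for i in range(blocks)}   # reserved, stale data
            else:  # eager
                self.blocks = {i: ZEROED for i in range(blocks)}      # reserved and wiped

        def write(self, i):
            # Returns True if this write had to zero the block first.
            # Thin and lazy both pay that cost on the first write;
            # eager already paid it at creation time.
            needs_zero = self.blocks.get(i) != ZEROED
            self.blocks[i] = ZEROED
            return needs_zero

    for policy in ("thin", "lazy", "eager"):
        disk = Vmdk(4, policy)
        print(policy, "zeroes on first write:", disk.write(0))
    ```

    This shows why eager zeroed thick has the best first-write performance: the zeroing cost is paid once at creation, while thin and lazy zeroed thick pay it on each block's first write.
    
    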


  • 5.  RE: Thick and Thin Provisioning Differences

    Posted Apr 29, 2016 12:02 AM

    Just another aspect ...

    NAME: possible result after a recovery with corrupt VMFS metadata, using an NTFS-guest bootable system disk as an example
    Eager Zeroed Thick: can survive without any damage; this format was formerly known as a dd-image of a full disk
    Lazy Zeroed Thick: similar to the above, but full of garbage that has to be cleaned by a checkdisk run
    Thin: without the VMFS metadata, a thin disk is just a bunch of fragments and is completely useless


  • 6.  RE: Thick and Thin Provisioning Differences

    Posted Mar 30, 2024 08:58 PM

    Just curious, do you have any benchmarks of these different types?