I have an ESXi datastore on a 2TB SSD, which in reality provides 1.82TB of usable space. It contains a single VM with a 2TB virtual disk, brought over from a vSAN datastore (which had much more than 2TB of available space). It is a thin disk and the OS is only using 720GB of space, so the VM initially worked fine on the new datastore despite being over-provisioned.
Problem is.... after freeing up a lot of space within the guest OS, I wanted to zero out the free space and compact the thin vmdk. But when I ran the dd command to zero the free space, the guest, thinking it had 2TB available, wrote zeros until the vmdk grew to the full 1.82TB, which filled the datastore 100% and froze the VM, since no space was left on the datastore for cache, nvram, etc... I guess.
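In hindsight, the zeroing step should have been capped so the thin disk couldn't grow to fill the whole datastore. Something like this inside the guest, where the 600GB count is just an example figure that leaves headroom:

    # write a bounded amount of zeros instead of filling the disk;
    # 614400 x 1MB = 600GB, adjust to leave headroom on the datastore
    dd if=/dev/zero of=/zeroes bs=1M count=614400 status=progress
    sync
    rm /zeroes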
I thought, no problem, I'll just run "vmkfstools -K" (the punch-zero command) with the VM powered off and it will shrink the vmdk by the space already zeroed out, freeing up space on the datastore again. Problem is.... after running the hole punch command, which ran to 100% over the course of an hour, the vmdk is still 1.82TB and the datastore is still 100% full. I used the "du" command to check the size of the vmdk, not "ls -l", and before running vmkfstools -K I had deleted the "zeroes" file from /tmp on the VM using a live CD.
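For clarity, these are the exact commands I'm talking about on the ESXi host (datastore and VM names here are placeholders):

    # VM powered off; punch out zeroed blocks in the thin disk
    vmkfstools -K /vmfs/volumes/datastore1/myvm/myvm.vmdk
    # actual allocated size (du) vs. provisioned size (ls -l)
    du -h /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk
    ls -lh /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk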
Any ideas why, despite deleting the zeroes file and hole punching, it isn't reclaiming the zeroed-out space? How can I solve this, get my space back, and fix the full datastore?
The virtual disk is encrypted within Ubuntu with LUKS on an LVM volume. Could this be why ESXi can't hole-punch the zeroes, because it doesn't "see" any? I couldn't find anything on the VMware site or third-party sites mentioning whether encrypted disks work with this process. If this is the cause, is there any way around it?
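My working theory (an assumption on my part, not something I've found confirmed anywhere) is that zeros written inside the LUKS volume get encrypted on the way down, so what actually lands in the vmdk is ciphertext that looks like random data, and vmkfstools -K finds nothing to punch out. If that's right, one possible workaround would be to pass discards down through the LUKS/LVM stack and TRIM the free space instead of zeroing it; a rough sketch from the live CD (device and volume names are examples from a typical Ubuntu install):

    # open the LUKS container with discard/TRIM passthrough enabled
    cryptsetup open --allow-discards /dev/sda3 cryptroot
    vgchange -ay
    mount /dev/mapper/ubuntu--vg-root /mnt
    # ask the filesystem to TRIM all free blocks
    fstrim -v /mnt

Whether the TRIM/UNMAP actually reaches the thin vmdk depends on the virtual hardware version, the controller, and the datastore (VMFS-6 handles guest UNMAP, older setups may not), so I'd treat this as something to test rather than a sure thing.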
Needless to say, after this is resolved, I'll be converting to a smaller thick disk to avoid issues like this in the future.
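The rough plan for that (just a sketch, since as far as I know vmkfstools can't shrink a disk's provisioned size, so the data would have to be shrunk inside the guest first and then copied over to a new, smaller disk):

    # on the host: create a new, smaller thick disk (size is an example)
    vmkfstools -c 1024g -d zeroedthick /vmfs/volumes/datastore1/myvm/myvm-new.vmdk
    # then attach it to the VM and copy the data across inside the guest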
EDIT: I also thought maybe it hadn't zeroed properly in the guest OS because the VM locked up once the datastore filled, so using a live CD I created just a 50GB zero file on the guest VM's vmdk, removed it, and ran the hole punch again, trying to buy back at least 50GB of space, but it didn't work; the vmdk stays at 1.82TB after hole punching.
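One diagnostic I may try next (again just a sketch, and the offset is an arbitrary sample point) is to read raw sectors of the virtual disk from the live CD, below the LUKS layer, and see whether anything on disk is actually zero; if the encryption theory is right, it won't be:

    # sample 16MB of the raw disk starting 4GB in; all-zero regions
    # collapse to a single row of 00s followed by '*' in hexdump
    dd if=/dev/sda bs=1M count=16 skip=4096 2>/dev/null | hexdump -C | head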