I have a fleet of hosts running ESXi 7.0 U3c; these hosts all have motherboard-based micro-SD card boot devices. As part of the recommended mitigations to reduce I/O to this type of boot device, I have from the beginning (even back when these hosts were on ESXi 6.5/6.7) always configured productLocker to point to a shared storage location. Beyond reducing I/O to the boot device, this is a convenient way to keep every VM's Tools installation current and consistent regardless of which host it runs on, because VMware Tools is backward and forward compatible with host versions. When a new VMware Tools version is released, I simply replace the files in my shared storage location (and restart the management agents on the hosts) so that guests' Tools versions are evaluated against the newly uploaded version.
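As a side note, one quick way to confirm guests see the refreshed repository as current is to check ToolsVersionStatus2 per VM. A minimal PowerCLI sketch (assumes an existing vCenter connection; the filtering and column names are just my own convention):

# Report each powered-on VM's Tools status after refreshing the shared locker
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOn' } | ForEach-Object {
    [PSCustomObject]@{
        VM           = $_.Name
        ToolsStatus  = $_.ExtensionData.Guest.ToolsVersionStatus2   # e.g. guestToolsCurrent / guestToolsNeedUpgrade
        ToolsVersion = $_.ExtensionData.Guest.ToolsVersion
    }
} | Sort-Object ToolsStatus | Format-Table -AutoSize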
The desired productLocker symlink is shown in this ls -n output:
productLocker -> /vmfs/volumes/5e2b25a0-281972a8-507b-5cb901ffba10/SharedLocker
After upgrading these hosts from 7.0 U2d to 7.0 U3c (using the vendor-custom image profile), whenever I run any subsequent VUM update to patch drivers, etc., and reboot, the productLocker value reverts to:
productLocker -> /SharedLocker
where the SharedLocker text is in red, indicating it is an invalid or inaccessible location. I can reset this using an API call like:
$esxName = '<hostname>.<domain>.net'
$dsName = '<Datastore label>'
$dsFolder = 'SharedLocker'

$esx = Get-VMHost -Name $esxName
$ds = Get-Datastore -Name $dsName

# Capture the current productLocker location before changing it
$oldLocation = $esx.ExtensionData.QueryProductLockerLocation()

# Build the target path from the datastore URL (ds:///vmfs/volumes/<uuid>/) plus the folder name
$location = "/$($ds.ExtensionData.Info.Url.TrimStart('ds:/'))$dsFolder"

# Point the host's productLocker at the shared folder
$esx.ExtensionData.UpdateProductLockerLocation($location)

Write-Host "Tools repository moved from $oldLocation to $location"
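For what it's worth, the same value can also be written through the UserVars.ProductLockerLocation advanced setting, which is what 7.x is supposed to persist across reboots. A minimal sketch of that variant, reusing $esx and $location from above (I can't say whether this survives a VUM remediation any better, but it's another knob to check):

# Alternative: set the persistent advanced setting instead of calling the API method directly
Get-AdvancedSetting -Entity $esx -Name 'UserVars.ProductLockerLocation' |
    Set-AdvancedSetting -Value $location -Confirm:$false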
This resets the value to the desired location, and the setting DOES then persist across reboots from that point on... but if I subsequently apply ANY additional driver update with VUM, the productLocker symlink reverts to /SharedLocker on reboot. Has anyone else seen this behavior, or does anyone know why it happens and how to prevent it? Note: I do NOT see this behavior on my few hosts that have local high-endurance bootable media. And yes, I will be retrofitting all my hosts with M.2 disks, but I'd prefer not to have to deal with this in the interim every time I patch a host.
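In the meantime, the stopgap I'm leaning toward is simply re-checking every host after each remediation and re-applying the location wherever it has reverted. A rough sketch (the cluster name and datastore label are placeholders):

# After patching, re-apply the shared locker on any host where it has reverted
$dsFolder = 'SharedLocker'
$ds = Get-Datastore -Name '<Datastore label>'
$location = "/$($ds.ExtensionData.Info.Url.TrimStart('ds:/'))$dsFolder"

foreach ($esx in Get-VMHost -Location '<Cluster name>') {
    $current = $esx.ExtensionData.QueryProductLockerLocation()
    if ($current -ne $location) {
        Write-Host "$($esx.Name): resetting productLocker from $current to $location"
        $esx.ExtensionData.UpdateProductLockerLocation($location)
    }
}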