Well, without tiering active: no problem; with memory tiering active: problems!
I still need to test the fresh VM thing... I'll let you know
-------------------------------------------
Original Message:
Sent: Dec 19, 2025 06:49 AM
From: Duncan Epping
Subject: Problem with memory tiering and client VMs
I am just happy it rules out issues with memory tiering :-)
Original Message:
Sent: Dec 18, 2025 08:53 AM
From: fehret
Subject: Problem with memory tiering and client VMs
Hi Duncan,
I tested the command and I can confirm tiering works if I put "pressure" on it.
In the example below, the first command is when all 3 hosts are available and the second one is when one is in maintenance. We can clearly see the difference.
BTW: nice article on your blog!
But regardless of whether tiering is occurring or not, my Win11 VM will start going crazy (except, of course, when memory tiering is completely off).

But ChatGPT gave me some interesting thoughts (no real solution) that it might be related to vTPM. Still, I can turn this issue in all directions and I don't understand why ONLY Win11 VMs have it. I have a couple of Win2019 and Win2022 VMs and none of them had issues, even though they are also encrypted and have a vTPM. The GPOs are also almost all similar, as I use CIS benchmarks for all of them.
I'll try a brand fresh VM soon to see if it has something to do with legacy stuff (the current VMs were upgraded from Win10 to Win11), but still no clue for the moment!
PS: the fact that it also crashes when all 3 hosts are on and no memory tiering is occurring rules out thermal throttling, IMHO.
Original Message:
Sent: Dec 17, 2025 10:40 AM
From: Duncan Epping
Subject: Problem with memory tiering and client VMs
I just tested it, and if you go to the command line you can indeed see the stats. Just as an example, I powered on a lab in my own environment with Memory Tiering, deployed some VMs, and overloaded the host to ensure there was memory pressure. Below you can see that there are memory pages stored in Tier1 (NVMe).
memstats -r vmtier-stats -u mb -s name:memSize:isTiered:active:tier1Target:tier1Alloc:consumed:tier1Consumed:tier1ConsumedPeak

VIRTUAL MACHINE MEMORY TIER STATS: Wed Dec 17 15:27:27 2025
-----------------------------------------------
 Start Group ID   : 0
 No. of levels    : 12
 Unit             : MB
 Selected columns : name:memSize:active:tier1Target:consumed:tier1Consumed
--------------------------------------------------------------------------
      name  memSize  active  tier1Target  consumed  tier1Consumed
--------------------------------------------------------------------------
 vm.533611     4096     384            0       371              5
 vm.533612     4096     382            0       368              4
 vm.533613     4096     379            0       365              4
 vm.533614     4096     353            0       336              1
 vm.533615     4096     386            0       374              5
--------------------------------------------------------------------------
     Total    20480    1883            0      1812             18
--------------------------------------------------------------------------
Original Message:
Sent: Dec 17, 2025 07:17 AM
From: Duncan Epping
Subject: Problem with memory tiering and client VMs
I have not tried this on 8.x, but on the commandline you can look at "memstats -r vmtier-stats" to see if the tier is actively being used or not. Note, and this is often forgotten, we tier-out memory when there's host pressure, not just because we can. There needs to be a reason, so if there's no reason, you won't see tiering.
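To make that concrete, this is roughly the invocation from the ESXi host shell (over SSH); the column selection here is just one possible choice:

```shell
# Run in the ESXi host shell. Shows per-VM tiering stats in MB;
# a non-zero tier1Consumed means pages actually live on the NVMe tier.
memstats -r vmtier-stats -u mb \
  -s name:memSize:active:tier1Target:consumed:tier1Consumed
```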
Original Message:
Sent: Dec 17, 2025 04:38 AM
From: fehret
Subject: Problem with memory tiering and client VMs
Update on the ratio: not working with 25% either... and going any lower is of no interest!
Original Message:
Sent: Dec 17, 2025 02:18 AM
From: fehret
Subject: Problem with memory tiering and client VMs
Hi Dave,
I was using 100% but my current workloads are really not high on those hosts.
I don't even have enough workload to fill my 96 GB of RAM yet; I wanted memory tiering to spin up labs and nested VMs (which is also NOT recommended, I know).
I still have an old Dell server with enough RAM, but in Europe the electricity bill hurts a little, and it is really not practical to work on labs when you need 15-20 minutes to spin everything up.
Concerning checking whether pages are moving to NVMe, do you have any recommendations/docs on how to do that?
But I'll try lowering the ratio and let you know, thanks for the suggestion!
Original Message:
Sent: Dec 16, 2025 09:32 AM
From: Dave Morera
Subject: Problem with memory tiering and client VMs
What ratio are you using? The differences between the tech preview version (8.0U3) and 9.0 are night and day in many areas including performance. The recommended DRAM:NVMe ratio for 8.0U3 is 4:1 so only 25% comes from NVMe.
Are you actually seeing pages move to NVMe? Is the NVMe device showing r/w?
I would start by putting the ratio back to 25% or lower even. Changing this ratio higher in 8.0U3 could be the culprit, in addition to unsupported devices.
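For reference, a quick sketch of what that ratio means for capacity, assuming (as I read it) that the percentage expresses NVMe tier size relative to DRAM; the helper name below is made up for illustration:

```python
# Back-of-the-envelope capacity math for a DRAM:NVMe tiering ratio.
# Assumption: the percentage expresses NVMe tier size relative to DRAM
# (4:1 -> NVMe = 25% of DRAM). Helper name is hypothetical.
def tiered_memory_gb(dram_gb: float, nvme_pct: float = 25.0) -> float:
    """Total addressable memory once an NVMe tier of nvme_pct% of DRAM is added."""
    return dram_gb * (1 + nvme_pct / 100)

print(tiered_memory_gb(96))        # 96 GB DRAM + a 24 GB NVMe tier
print(tiered_memory_gb(96, 100))   # a 1:1 ratio would double the DRAM
```

So on a 96 GB host, the recommended 25% setting only adds about 24 GB of tiered capacity, which is why pushing the ratio higher on 8.0U3 is tempting but risky.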
Original Message:
Sent: Dec 15, 2025 04:55 AM
From: fehret
Subject: Problem with memory tiering and client VMs
Hi there,
It's in my homelab, so YES, it is unsupported hardware! But I'll be happy if somebody has some new ideas to investigate.
The context: a 3-node Minisforum MS-01 cluster with ESXi 8.0 U3g, all cores active (performance & efficiency) and 96 GB of RAM, all identical, no vSAN.
I purchased 3 dedicated NVMe disks and wanted to activate memory tiering.
From that point on, I started having issues with some VMs that share one common trait: all are Windows 11 VMs.
All other VMs didn't show any sign of issue.
Symptoms:
- VM not available anymore, no network connectivity; can happen while actively using it
- black screen in vCenter console
- no VMtools reporting anymore
- very high CPU consumption, which turns the cluster in a very bad state (DRS score below 30% when usually >95%)
- comes back like nothing happened when migrated to another host, until it starts again. There is also no trace in the VM's event viewer.
What I've tried so far but not working:
- host affinity rule (the VM still becomes unavailable at some point, even without moving, even when powered on on the host where it should be)
- changed CPU allocation at start (tried both "assigned at power on" and the alternative)
- different memory tiering ratios
- changed scheduler settings on VM : bcdedit /set hypervisorschedulertype classic
As soon as I turn the tiering feature off, everything is OK again (and it had been rock stable for months before).
Any ideas I could try out? Did somebody achieve proper memory tiering in a mini-PC lab?
Thanks in advance!
-------------------------------------------