Glad it is easy to read! Full disclosure: I'm no shell wizard either. It's just a handful of (b)ash principles and sed constructs that can be put together in different ways; everything beyond that is googled and "stackexchanged".
This basically just iterates through all the sched-stats options that start with an "n", which gives you most of what you care about when looking at NUMA. A match instead of a fixed list of options was used because some options were removed/added going from 6.7 to 7, so this way it works across all versions.
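As a minimal sketch of that idea: the stat-type names here are hardcoded examples for illustration (on a real host you would derive them from sched-stats' own usage output, which varies by ESXi build), and the loop simply keeps everything matching `n*`:

```shell
# Illustrative list of sched-stats types; NUMA-related ones start with "n".
types="ncpus numa-clients numa-migration numa-pnode numa-global vcpu-state-times"

for t in $types; do
    case "$t" in
        n*) echo "sched-stats -t $t" ;;   # only the n* types are kept
    esac
done
```

The nice part of matching on the prefix is that a type added or removed between versions doesn't break the script; it just shows up (or not) in the loop.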
In your case, you have 16-core pNUMA nodes and two VMs that fit into that, so you have 2 x 16-vCPU NUMA clients. The NUMA client is the "atomic" unit the NUMA scheduler deals with, so unless you go above that size, each VM runs entirely on one of the physical nodes.
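The sizing rule above is just ceiling division of vCPU count by cores per pNUMA node. A quick sketch (the variable names are mine, not anything the scheduler exposes):

```shell
# How many NUMA clients a VM splits into, assuming you know the
# vCPU count and the cores per physical NUMA node.
vcpus=16
cores_per_pnode=16
clients=$(( (vcpus + cores_per_pnode - 1) / cores_per_pnode ))  # ceiling division
echo "$clients"   # 16 vCPUs on a 16-core node -> 1 NUMA client
```

With 17 vCPUs the same formula gives 2 clients, which is the point where the VM no longer fits on a single node and gets split.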
Both of those large VMs are currently on the same node, maybe because of a device/IO relation (i.e. both use an IO device attached to the 2nd socket) or because there is IO between the VMs. Usually that makes VMs run more efficiently, but especially at that size (when VMs fit tightly into pNUMA nodes), the "locality" scheduling might be a bit overeager. Try: https://kb.vmware.com/s/article/2097369 (after changing the setting, you need to migrate the VMs off and back on, or alternatively power-cycle them; a guest OS reboot is not enough).
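For reference, changing an advanced host setting like the one in that KB is typically done via esxcli; the exact option name is described in the KB article, so treat the one below as an assumed placeholder and verify it there before applying:

```shell
# Sketch only: check and change a /Numa/ advanced setting on the host.
# Confirm the exact option name and value against the KB article first.
esxcli system settings advanced list -o /Numa/LocalityWeightActionAffinity
esxcli system settings advanced set  -o /Numa/LocalityWeightActionAffinity -i 0
```

And again: the setting only takes effect for a VM after it is moved off the host and back (or power-cycled), not after a guest OS reboot.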