You can indeed have many more vCPUs than cores (up to twenty times more, I believe), but there is a fundamental issue here: neither VMware nor any other hypervisor can add real processing capacity. If you have four 4-vCPU VMs on an 8-core server, only two of them (at best) can actually be running concurrently. In other words, when all vCPUs are fully loaded, each VM receives at best 50% of 4 physical cores' worth of processing capacity. Being virtualised does not let it somehow do more actual work than the underlying hardware can complete.
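To make the arithmetic concrete, here is a minimal sketch of that calculation (the core and VM counts are just the figures assumed above, not anything a hypervisor reports):

```go
package main

import "fmt"

func main() {
	// Assumed figures from the scenario above: an 8-core host
	// running four VMs with 4 vCPUs each.
	const physicalCores = 8.0
	const vmCount = 4.0
	const vcpusPerVM = 4.0

	totalVCPUs := vmCount * vcpusPerVM // 16 vCPUs competing for 8 cores

	// With every vCPU fully loaded, the scheduler can only hand out
	// physicalCores' worth of real execution time in any interval.
	coreShare := physicalCores / totalVCPUs // 0.5 of a core per vCPU
	effectivePerVM := coreShare * vcpusPerVM // 2.0 cores' worth per VM

	fmt.Printf("Each vCPU gets at best %.0f%% of a physical core\n", coreShare*100)
	fmt.Printf("Each 4-vCPU VM gets at best %.1f cores of real throughput\n", effectivePerVM)
}
```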
Re the CPU hog: yes, run on Windows installed directly on this machine (no hypervisor), it will start 16 threads of execution, and yes, these will all show 100% load, but that is merely an indication that there is no spare capacity on any logical processor to run the Windows idle thread. In fact 16 cores' worth of throughput will not be achieved; at best you get the equivalent of 9 to 10 physical cores of actual throughput (a minimal stand-in sketch for such a hog follows the time-line below). In a time-line view,
- Windows starts the 16 threads on the 16 logical processors
- Only threads 1, 3, 5, 7, 9, 11, 13 and 15 actually begin executing, since there are only 8 physical cores
- Thread 1 (say) stalls waiting on a memory fetch
- Core 1 starts work on thread 2 while thread 1 is stalled
- Once thread 2 similarly stalls, core 1 picks up thread 1 again
and so on with the other cores. Obviously in reality it is much more complicated than that, but this is the gist of what is happening. Note there is no time spent running the Windows idle thread, hence the CPU load shows as 100% in Task Manager for all available CPUs (which here means hardware threads).
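For reference, here is a minimal stand-in for such a CPU hog; this is an illustrative sketch, not the actual tool being discussed. It spins one busy loop per logical CPU, so Task Manager reports 100% across the board, even though real throughput is still bounded by the 8 physical cores:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// One busy goroutine per logical CPU: 16 on an 8-core
	// hyper-threaded box.
	n := runtime.NumCPU()
	fmt.Printf("Spinning %d busy loops\n", n)
	for i := 0; i < n; i++ {
		go func() {
			for { // never blocks, never yields to the idle thread
			}
		}()
	}
	select {} // park main while the hogs keep every logical CPU busy
}
```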
Please award points to any useful answer.