Old thread but better late than never...
"I have changed this setting to 2 and it works fine. Now the cluster is moving one machine after the other."
And with that, you disabled Storage vMotion completely. Your sVmotion jobs will hang at 13% and not finish.
A single vMotion needs "2", as you established, but Storage vMotion needs "8". Mind you, all of this is "per host".
If you want Storage vMotion to work at all again, make it "8". However, with that value a single sVmotion can run (per host), but a regular vMotion on the SAME host cannot run while the sVmotion is in progress (the vMotion task will be stuck at 13% until the sVmotion finishes).
If you want to be able to do a sVmotion + a regular vMotion on the same host at the same time, make it "10".
So with "8", a single host cannot do a sVmotion + a regular vMotion at the same time. But remember, this is all "per host": *another* host can still do multiple regular vMotions at the same time, because every host has its own allowance of 8.
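For anyone who wants the arithmetic spelled out, here is a tiny sketch of that per-host cost budget as I understand it from this thread. The costs (2 for a regular vMotion, 8 for a sVmotion) and the limits are the numbers above; the helper name is just mine:

```python
# Rough model of the per-host cost accounting described above.
# Thread numbers: a regular vMotion "costs" 2, a Storage vMotion "costs" 8,
# and the advanced setting is the per-host budget they have to fit into.
VMOTION_COST = 2
SVMOTION_COST = 8

def fits(limit, vmotions=0, svmotions=0):
    """True if this mix of concurrent operations fits one host's budget;
    anything that does not fit just sits there at 13%."""
    return vmotions * VMOTION_COST + svmotions * SVMOTION_COST <= limit

print(fits(2, vmotions=1))                # True  - one regular vMotion per host
print(fits(2, svmotions=1))               # False - sVmotion hangs at 13%
print(fits(8, svmotions=1))               # True  - one sVmotion per host
print(fits(8, vmotions=1, svmotions=1))   # False - the vMotion waits for the sVmotion
print(fits(10, vmotions=1, svmotions=1))  # True  - one of each per host
print(fits(10, vmotions=5))               # True  - or five regular vMotions per host
```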
I was in a situation once where I had to throttle down the number of regular _simultaneous_ vMotions between two datacentres, as too many vMotions at once would saturate the ISL, causing havoc.
So I purposely and consciously set the value to "2" to slow down the total number of vMotions (essentially one per host).
I knew this would break Storage vMotion. But that was acceptable during that action.
After transferring all the VMs to the other datacentre, I set it to "10" to allow a single sVmotion per host (which is what we want due to the storage back-end) plus one regular vMotion, OR no sVmotion but several regular vMotions (five, at a cost of 2 each) at the same time (again, per host).
Before anyone starts to mess about with this setting, please know what you want to achieve and what the consequences are. Just setting it to "2" kills Storage vMotion, so you might want to re-think that...
After adding/editing the value, restart the "VMware vCenter-Services" service. No need to reboot anything.
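And if you would rather script the change than click through the UI, a minimal pyvmomi sketch along these lines should do it. The setting key is a placeholder (fill in the exact key discussed in this thread), the host name and credentials are obviously examples, and you still need the service restart afterwards:

```python
# Minimal pyvmomi sketch: update a vCenter advanced (vpxd) setting via the
# OptionManager. Key, host and credentials below are placeholders/examples.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

SETTING_KEY = "config.vpxd.ResourceManager.<key-from-this-thread>"  # placeholder
NEW_VALUE = 10  # e.g. one sVmotion + one regular vMotion per host, per the above
                # (use the same type, int or string, that vCenter already stores)

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="***",
                  sslContext=ctx)
try:
    option_manager = si.content.setting  # vCenter's OptionManager
    option_manager.UpdateOptions(changedValue=[
        vim.option.OptionValue(key=SETTING_KEY, value=NEW_VALUE)
    ])
finally:
    Disconnect(si)
# Then restart the vCenter service as mentioned above so the new value is picked up.
```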