Thanks Rickard...my response inline...
rshroff wrote:
I'm sure there would be scenarios where a 1 Gbps pNIC may not suffice for vMotion traffic, and we could scale out using multiple pNICs for the vMotion port group.
In 4.x only one pNIC is actually used for vMotion, no matter which load-balancing option is selected. In 5.0 we have multi-NIC vMotion, where several pNICs can be used together during a single vMotion.
So I take it ESXi 5.0 is smart enough to do true load balancing of vMotion traffic, regardless of the load-balancing algorithm specified for the corresponding port group.
rshroff wrote:
My question is, will both the pNICs be used equally in a true load balancing fashion? If not, what does it demand to achieve true load balancing in case of VMkernel ports?
Since VMkernel ports can be used for many things (management, iSCSI, vMotion, NFS, Fault Tolerance), there are different configurations that have to be done to get any load balancing. For which of these use cases do you want more "real" load balancing across pNICs?
I was referring to VMkernel ports used for vMotion traffic. If I follow the setup from the Yellow Bricks article posted earlier in this thread, I could use, say, 3 pNICs: one active for the Management port group (with the other two as standby), and the other two active for the vMotion port group (with the Management pNIC as standby). In such a configuration I would expect ESXi 5.0 to do true load balancing of vMotion traffic. Please correct me if my understanding is wrong.
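For reference, the failover order described above could be set with esxcli on an ESXi 5.x host. This is only a sketch: the vSwitch uplink names (vmnic0-vmnic2) and port-group names are hypothetical placeholders, so adjust them to the actual environment.

```shell
# Management port group: vmnic0 active, the two vMotion uplinks as standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "Management Network" \
    --active-uplinks vmnic0 \
    --standby-uplinks vmnic1,vmnic2

# vMotion port group: vmnic1 and vmnic2 active, the management uplink as standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "vMotion" \
    --active-uplinks vmnic1,vmnic2 \
    --standby-uplinks vmnic0
```

Note that for true multi-NIC vMotion in 5.0, the usual recommendation is one vMotion VMkernel port per pNIC, each with a different active uplink, rather than a single port group with two active uplinks.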
Rolf