ESXi

  • 1.  LACP link bandwidth aggregation

    Posted Dec 02, 2013 07:17 PM

    Hi,

    I'm a little bit confused with all the documentation I have read regarding this issue.

    Is it possible to have a 2Gb connection to a single datastore using LACP or any other method for NFS and/or iSCSI protocols?

    Some of the documentation I read so far:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004048

    http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf

    http://www.vmware.com/files/pdf/techpaper/Storage_Protocol_Comparison.pdf

    Could you briefly explain the steps or considerations needed and the results obtained?

    Thanks!

    elgreco81



  • 2.  RE: LACP link bandwidth aggregation

    Posted Dec 02, 2013 10:46 PM

    LACP calculates hashes based on e.g. source and destination addresses to determine which link to use, so in the worst case you may even end up with an idle link. For storage access I'd suggest you check the storage system's capabilities and, if supported, configure the Round Robin path policy, which distributes the I/Os across the two uplinks.
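To make the hashing behavior concrete, here is a simplified model of "Route based on IP hash" uplink selection as described in VMware KB 1004048: XOR the source and destination IP addresses and take the result modulo the number of uplinks. This is an illustrative sketch, not VMware's actual implementation; the addresses are made up for the example.

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink index from a source/destination IP pair.

    Simplified model of vSphere's IP-hash teaming policy:
    (src XOR dst) mod number_of_uplinks.
    """
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % num_uplinks

# A given source/destination pair always hashes to the same uplink,
# so a single NFS or iSCSI session never spans more than one link:
print(ip_hash_uplink("10.0.0.10", "10.0.0.50", 2))  # 0, every time

# Different destination IPs may (or may not) land on the other uplink:
for dst in ("10.0.0.50", "10.0.0.51", "10.0.0.52"):
    print(dst, "->", ip_hash_uplink("10.0.0.10", dst, 2))
```

This is why the worst case André mentions can occur: if all your flows happen to hash to the same index, the other link sits idle.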

    André



  • 3.  RE: LACP link bandwidth aggregation

    Posted Dec 03, 2013 08:02 AM

    Hi André,

    I have no storage, I'm just studying and trying to understand how this technology works.

    As you say, "IP Hash load balancing" calculates a hash for each source/destination IP pair, and this load balancing technique is required for a LACP configuration. So far so good...even better, with LBT vSphere can now detect when any NIC in the team is loaded above 75% of its capacity and rebalance the load.

    But my question is a different one:

    "Is it possible to have a 2Gb connection to a single datastore using LACP or any other method for NFS and/or iSCSI protocols?" (host or vm level)

    "Could you briefly explain the steps or considerations needed and the results obtained?"

    Some parts of the documentation say that NFS v3 (the version vSphere supports) will only work at 1Gbps (or 10Gbps), as only "one session" is supported in that protocol version. I have never read any documentation that states CLEARLY how LACP and/or LBT affect bandwidth...and that's my question.

    I assumed this was a very "silly" question, but after asking a couple of colleagues I see that I'm not the only one with this question unanswered.

    Thanks,

    elgreco81



  • 4.  RE: LACP link bandwidth aggregation

    Posted Dec 03, 2013 05:13 PM

    Anyone?



  • 5.  RE: LACP link bandwidth aggregation

    Posted Dec 09, 2013 07:49 PM

    Hi,

    Just in case anyone comes across this post looking for an answer to this question: the short answer I found, after reading lots of material and testing myself with tools like iperf, is that NO, at least with version 5.5 of vSphere it is not possible to "add" the bandwidth of a team of NICs for a particular VM.

    This is due to the algorithms used by vSphere. For LACP configurations, the only one supported is "IP hash", which does not spread the traffic of one VM across more than 1 NIC at a time.

    Please note that this is possible with other software that lets you use other algorithms, but at least for now, I don't see how this could be accomplished using vSphere. For example https://onapp.zendesk.com/entries/30919056-LACP-4-NICS-bond-Mode-4-traffic-goes-through-1-NIC-instead-of-4

    At least for me, until I see otherwise or somebody is able to answer this question I posted here (and on Twitter/LinkedIn), LACP and LBT are great options, but they just are not meant to provide a single VM with the sum of all the bandwidth available in a NIC team. (So for any single flow, you get 1Gbps or 10Gbps.)
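The conclusion above can be sketched numerically: because each flow is pinned to one uplink by the IP hash, a single flow tops out at one link's capacity, while several flows can together approach the aggregate. This is a toy model under the assumptions stated in the comments (two 1 Gbps uplinks, XOR-mod hash); the flow endpoints and demands are hypothetical.

```python
LINK_GBPS = 1.0   # capacity of each uplink (assumed)
NUM_UPLINKS = 2   # two-NIC LACP team (assumed)

def flow_uplink(src: int, dst: int) -> int:
    """Pin a flow to one uplink via a simplified IP-hash: (src XOR dst) mod N."""
    return (src ^ dst) % NUM_UPLINKS

def max_throughput(flows) -> float:
    """Total achievable Gbps for (src, dst, demand_gbps) flows.

    Each flow can only use the capacity left on the single uplink
    its hash pins it to -- flows never span links.
    """
    used = [0.0] * NUM_UPLINKS
    for src, dst, demand in flows:
        up = flow_uplink(src, dst)
        used[up] += min(demand, LINK_GBPS - used[up])
    return sum(used)

# One flow demanding 2 Gbps: capped at 1 Gbps, its uplink's capacity.
print(max_throughput([(10, 50, 2.0)]))                  # 1.0

# Two flows that happen to hash to different uplinks: 2 Gbps aggregate.
print(max_throughput([(10, 50, 1.0), (10, 51, 1.0)]))   # 2.0
```

So LACP does raise the ceiling for the host as a whole, just never for one session, which matches what the thread concludes for a single NFS datastore connection.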

    Regards,

    elgreco81

    PS: I'm always learning and I like to think of this community as a learning and sharing place. Hope this info helps and if it is wrong, please share your knowledge so I can learn too :smileyhappy:



  • 6.  RE: LACP link bandwidth aggregation

    Posted Sep 16, 2014 08:05 PM

    Thanks for your post. I too have been looking into this and came to the same conclusion.