VMware vSphere

  • 1.  NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 20, 2012 11:08 PM

    I am preparing to deploy a brand new vSphere 5 (Enterprise license) environment and would really appreciate some feedback on best practices or real-world experiences. For our HA cluster we will be using Dell R720s with two Broadcom 5720 quad-port daughter cards (totaling 8 NICs per server), connecting back to a Dell EqualLogic PS6500X SAN (using the Dell Multipathing Extension Module for MPIO).

    I would like to know what would be the best distribution of NICs among my vSwitches. I figure I have four different ways I can break the NICs up and was looking for feedback as to which would offer the best performance. (Note: the iSCSI network will use two Force10 S25N switches and the "public" network will use two Cisco 3750-X switches.)

    Option #1 (most balanced)


         vSwitch0 - 3 NICs dedicated to iSCSI

         vSwitch1 - 3 NICs dedicated to VM Network

         vSwitch2 - 2 NICs to share Management and vMotion

    Option #2 (less iSCSI, more VM Network)

         vSwitch0 - 2 NICs dedicated to iSCSI

         vSwitch1 - 4 NICs dedicated to VM Network

         vSwitch2 - 2 NICs to share Management and vMotion

    Option #3 (less VM Network, more iSCSI)

         vSwitch0 - 4 NICs dedicated to iSCSI

         vSwitch1 - 2 NICs dedicated to VM Network

         vSwitch2 - 2 NICs to share Management and vMotion

    Option #4 (have not seen this as a best practice anywhere, but my colleague would like to do it this way)

         vSwitch0 - 4 NICs dedicated to iSCSI

         vSwitch1 - 4 NICs to share VM Network, Management, and vMotion
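
    For concreteness, here is roughly how I picture Option #1 being laid out from the ESXi shell. This is just a sketch: the vmnic numbers are placeholders, I would spread each vSwitch across both quad-port cards, and the iSCSI vSwitch would follow the same pattern with three uplinks.

         # Sketch only - vSwitch and vmnic names are examples.
         # (A fresh install already has a vSwitch0 carrying the Management Network, so adjust names to suit.)
         esxcli network vswitch standard add --vswitch-name vSwitch1
         esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic1
         esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic2
         esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic5
         esxcli network vswitch standard add --vswitch-name vSwitch2
         esxcli network vswitch standard uplink add --vswitch-name vSwitch2 --uplink-name vmnic3
         esxcli network vswitch standard uplink add --vswitch-name vSwitch2 --uplink-name vmnic7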

    I guess it really comes down to the following questions:

    Will 2Gbps be enough for connecting back to our SAN? Is 4Gbps overkill?

    Will 2-3Gbps be enough for all of our VMs? Or do I need all 4? (Each host has 256GB of memory, so the VMs per host will be pretty dense.)

    Will the vMotion traffic have a major performance impact on the VM Network if I go with Option #4?

    Thank you in advance.



  • 2.  RE: NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 21, 2012 01:53 AM

    I would suggest you consider using a mix of standard and distributed virtual switches.

    Configure your switch ports with 802.1q and tag all the various VLANs (separate VLANs for iSCSI, vMotion, etc.). No need for EtherChannel or anything fancy.

    Example

    vSwitch0 - 2 vmNICs (on different physical cards for redundancy)

    eg: vmNIC0 / vmNIC2

    ESXi Management VMKernel0 (Active/Standby)

    dvSwitch0 - 6 vmNICs

    VMKernel1 (iSCSI) - Active on all NICs - Depending on the Dell MPIO plugin you may have more than one VMKernel for iSCSI.

    VMKernel2 (vMotion) - Active on vmNICX / all other NICs standby

    VMKernel3 (vMotion) - Active on vmNICY / all other NICs standby

    VMNetwork - Active on all NICs

    Use "Route based on physical NIC load" for your VMNetwork dvPortgroup; this will dynamically rebalance traffic when a physical NIC exceeds 75% utilization.

    Note: Two VMKernels will allow use of the Multi-NIC vMotion feature in vSphere 5. Since you have 256GB per host, I expect you will have lots of VMs per host, so VM evacuation time for maintenance etc. needs to be considered.
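
    If you end up doing this on standard vSwitches (esxcli only drives standard switches; on a dvSwitch you would create the port groups in the vSphere Client), a minimal sketch of the two vMotion VMKernels would look something like the below. Port group names, VLAN ID and IP addresses are just examples.

         # Two vMotion port groups on an existing vSwitch (all names/values are examples)
         esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name vMotion-A
         esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name vMotion-B
         esxcli network vswitch standard portgroup set --portgroup-name vMotion-A --vlan-id 20
         esxcli network vswitch standard portgroup set --portgroup-name vMotion-B --vlan-id 20
         # One VMKernel interface per port group
         esxcli network ip interface add --interface-name vmk2 --portgroup-name vMotion-A
         esxcli network ip interface add --interface-name vmk3 --portgroup-name vMotion-B
         esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 192.168.20.11 --netmask 255.255.255.0 --type static
         esxcli network ip interface ipv4 set --interface-name vmk3 --ipv4 192.168.20.12 --netmask 255.255.255.0 --type static
         # Then enable vMotion on vmk2 and vmk3 (vSphere Client, or vim-cmd hostsvc/vmotion/vnic_set)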

    Enable Network I/O Control and consider shares like the below

    iSCSI - 5000

    vMotion - 1000

    Virtual Machine Traffic - 3000

    The above will ensure iSCSI is prioritized in the event of contention, which will help avoid CPU %WAIT or latency issues.

    vMotion will be able to burst across multiple NICs and complete quickly when there is no contention, and in the event of contention it will still be able to perform vMotions without significant impact on the storage or VM traffic.

    Network I/O Control is an excellent feature, and in my experience in numerous IP storage environments, it works very well.

    Also check out my blog for tips on HA and IP Storage. http://joshodgers.com/2012/05/30/vmware-ha-and-ip-storage/



  • 3.  RE: NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 21, 2012 04:21 AM

    Do you suggest Active/Standby because of switch reasons?

    Isn't it better to have the maximum bandwidth capacity available?



  • 4.  RE: NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 21, 2012 07:10 AM

    Are you referring to vMotion and saying you think it should be Active/Active?

    If so, the reason I suggest to set it as Active/Standby is to ensure there is no contention between the two vMotion VMKernels.

    If you're referring to the VM networking, Active/Active with Route based on physical NIC load ensures maximum available bandwidth and minimal contention via NIOC.

    Hope that helps.



  • 5.  RE: NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 21, 2012 08:03 AM

    If you have an Enterprise Plus license then, as per Josh Odgers, it is good to use LBT and NIOC. If you have an Enterprise license, see the design below:

    - for each vSwitch, take pNICs from both quad-port cards (for card-level redundancy)

    - all the port groups should have a VLAN ID assigned

    - the two Force10 pSwitches have to be connected together via EtherChannel (LACP), and the two Cisco switches need to be connected the same way

    1. vSwitch0

    - 2 pNICs, connect one pNIC to each pSwitch

    - 1 VMkernel port group for management

    - teaming policy = Route based on the originating port ID

    - 2 VMkernel port groups for vMotion (multi-NIC vMotion); on each port group, under NIC Teaming, select "Override switch failover order" and set one pNIC active and the other pNIC standby (see the sketch at the end of this section)

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007467

    - the vMotion buffer size is 512 KB, and with multi-NIC vMotion across the two 1 Gb NICs you will get a maximum of roughly 250 MB/s of vMotion throughput

    - so you can also accommodate 1 or 2 virtual machine port groups on this vSwitch
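
    Something like this from the ESXi shell would set the failover order for the two vMotion port groups (port group and vmnic names are just examples; this assumes vSwitch0 has uplinks vmnic0 and vmnic1):

         # Opposite active/standby uplinks per vMotion port group (names are examples)
         esxcli network vswitch standard portgroup policy failover set --portgroup-name vMotion-1 --active-uplinks vmnic0 --standby-uplinks vmnic1
         esxcli network vswitch standard portgroup policy failover set --portgroup-name vMotion-2 --active-uplinks vmnic1 --standby-uplinks vmnic0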

    2. vSwitch1

    - 2 pNICs, connect one pNIC to each pSwitch

    - 1 or 2 virtual machine port groups, or as many as you wish

    - teaming policy = Route based on the originating port ID
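
    A rough sketch of the VM networking side from the ESXi shell (port group names and VLAN IDs are examples; "portid" is actually the default load-balancing policy, shown here only for completeness):

         # VLAN-tagged VM port groups on vSwitch1 (names/VLANs are examples)
         esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name VM-Production
         esxcli network vswitch standard portgroup set --portgroup-name VM-Production --vlan-id 100
         # Route based on originating virtual port ID at the vSwitch level
         esxcli network vswitch standard policy failover set --vswitch-name vSwitch1 --load-balancing portid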

    3. vSwitch2

    - 4 pNICs, connect 2 pNICs to each pSwitch

    - teaming policy = Route based on the originating port ID

    - 4 VMkernel port groups for iSCSI; on each port group, under NIC Teaming, select "Override switch failover order" and set one pNIC active and the other pNICs standby

    I think the storage is dual-controller and each controller has four 1 Gb NICs, so for controller 1 take 2 pNICs and connect them to pSwitch1 and the other 2 pNICs to pSwitch2, and use teaming (LACP) for each 2-pNIC group between the Force10 switch and the storage.

    Do the same for the other controller; this will give redundancy and load balancing.

    Use Round Robin as the multipathing policy in ESXi, and also enable jumbo frames (see the sketch below).
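
    A minimal sketch of the jumbo frames and Round Robin settings from the ESXi shell. The vSwitch/vmk names and the device ID are placeholders, and note that with the Dell MEM installed the volumes would normally use Dell's own path selection plugin rather than VMW_PSP_RR.

         # MTU 9000 on the iSCSI vSwitch and on each iSCSI VMkernel port (names are examples)
         esxcli network vswitch standard set --vswitch-name vSwitch2 --mtu 9000
         esxcli network ip interface set --interface-name vmk3 --mtu 9000
         # Round Robin on an EqualLogic volume (device ID is a placeholder)
         esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR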

    Hi Josh Odgers, please verify this design and correct me if anything is wrong.



  • 6.  RE: NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 21, 2012 08:31 AM

    For a non Enterprise Plus licence I would suggest a similar design, but would obviously use a standard vSwitch (rather than dvSwitch0 in the above example). Leave the VMKernels for vMotion with the same Active/Standby setup and the VM networking the same (all NICs active), but change the teaming policy to "Route based on originating port ID" and leave the physical switch configuration at 802.1q. (Route based on physical NIC load is only available on a dvSwitch.)

    For the VMKernel for iSCSI, I don't see any major reason not to have all NICs active and use Round Robin multipathing, which should deliver low latency and solid throughput. If using multiple VMKernels, you could consider 3 NICs active per VMKernel, with the other 3 standby, to prevent the chance of contention between the VMKernels unless there was a physical switch or multiple uplink failures.
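
    Once the paths are up, a couple of read-only commands from the ESXi shell will confirm which path selection policy is in use and that I/O is spread across the expected paths (these make no changes):

         esxcli storage nmp device list     # shows the Path Selection Policy per device
         esxcli storage core path list      # shows every path and its current state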



  • 7.  RE: NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 21, 2012 08:56 AM

    A small correction:

    vSwitch2

    - 4 pNICs, connect 2 pNICs to each pSwitch

    - teaming policy = Route based on the originating port ID

    - 4 VMkernel port groups for iSCSI; on each port group, under NIC Teaming, select "Override switch failover order" and set one pNIC active and the other pNICs as unused adapters, so each port group has one dedicated pNIC

    - use Round Robin as the multipathing policy in ESXi and also jumbo frames

    Josh, the reason I chose 4 VMkernel port groups is that when you team the 8 NICs of the storage (2 NICs per team) you get 4 iSCSI target IPs, so 4 paths, and the Dell architecture guide also says that a VMkernel port group is needed for each path.
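
    With each port group left with a single active uplink (and the rest unused), the port binding itself would look roughly like this from the ESXi shell; the vmhba and vmk names are placeholders (check yours with "esxcli iscsi adapter list"):

         # Bind each iSCSI VMkernel port to the software iSCSI adapter (names are examples)
         esxcli iscsi networkportal add --adapter vmhba37 --nic vmk3
         esxcli iscsi networkportal add --adapter vmhba37 --nic vmk4
         esxcli iscsi networkportal add --adapter vmhba37 --nic vmk5
         esxcli iscsi networkportal add --adapter vmhba37 --nic vmk6
         esxcli iscsi networkportal list --adapter vmhba37     # verify the bindings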

    Please share your comment.



  • 8.  RE: NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 21, 2012 03:14 PM

    I have Enterprise, not Enterprise Plus, so I do not believe I can use DVS or NIOC.

    You are correct that the storage has two controllers, each with four 1 Gb ports.

    I have not used DRS before, as the last time I used VMware it was version 4 Standard. When I put a host in maintenance mode, it automatically vMotions all guests off, correct? Does it try to move them all at once, or does it queue them up and move them a couple at a time?

    Are 2 physical NICs enough for VM traffic? This cluster is going to start out hosting our production Dynamics AX environment. We will have our AOS, SharePoint, SQL RS and AS, and several Remote Desktop Services VMs. We will then migrate other VMs and physical servers to it, including a couple of Exchange 2010 servers.

    I just want to make sure that the VM traffic is not the bottleneck. I would rather have slower vMotions than a slower day-to-day VM network.



  • 9.  RE: NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 21, 2012 09:56 PM

    Although I don't have any capacity planning data to go on, in my experience VMs don't typically use as much network as people expect.

    In my example your VMs would have access to 6 x 1Gb NICs (on vSwitch1); although shared with iSCSI and vMotion, this is probably the best solution you will get considering your constraints (1Gb networking and an Enterprise licence).

    vMotion sharing 2 of the 6 NICs shouldn't cause you any issues, as vMotion is only burst traffic, so you shouldn't see any vMotion traffic under normal circumstances assuming your cluster isn't overloaded and DRS isn't set to apply priority 1-5 recommendations. (Use the default DRS setting of applying priority 1-3 recommendations.) When vMotion does happen for whatever reason, using Multi-NIC vMotion should ensure it happens in a timely manner with minimal impact to your cluster.

    iSCSI using Round Robin or your vendor's MPIO to spread the load across all 6 NICs (3 active / 3 passive per VMKernel) should also ensure each individual link isn't heavily used.

    The VM traffic will then distribute across all 6 NICs, and I would be surprised if you had any issues. I have put in basically the same solution numerous times, and network / IP storage was not an issue.

    The beauty of having as many NICs as possible available to all the various vNetworking functions is that they can burst if required and are not constrained by only having a small number of NICs available (e.g. if you had several vSwitches with only 2 NICs each).

    If you want to be 100% sure, run the VMware Capacity Planner and this will show you the network bandwidth requirements of your servers. Regardless of the Capacity Planner result, if the above solution isn't good enough, I'd suggest you would need to look at upgrading to 10Gb, as it's unlikely you could configure the vNetworking significantly more efficiently than the example I provided.

    Hope that helps.



  • 10.  RE: NIC Teaming Best Practice with 8 Physical NICs

    Posted Jul 24, 2012 06:06 PM

    Thanks guys. We got our rack in today, so we will be building these out in the next few days. I will let you know how it goes.