VMware vSphere

  • 1.  NIOC or Broadcom Network Partitioning on 10Gig adapter

    Posted Jun 19, 2012 04:39 AM

    We are in the process of upgrading our 16-host vSphere 4.0 U4 cluster to vSphere 5 on new hardware: a Dell M1000e enclosure, M8024-K switches, and M620 blades with a 2-port Broadcom 10 Gb adapter and SAN pass-through modules. The Broadcom 10 Gb adapter allows each port to be split into 4 network partitions. I currently have standard switches, but with the upgrade we are also planning to move to distributed switches.

    I am trying to gather thoughts on which of these approaches is better: using NIOC for traffic management, or creating a network partition for each kind of traffic and hard-coding the bandwidth.

    Please share your experiences and thoughts around this.



  • 2.  RE: NIOC or Broadcom Network Partitioning on 10Gig adapter

    Posted Jun 19, 2012 08:10 PM

    Both have their ups and downs. I personally prefer NIOC, as the vSphere 5 host is closer to the traffic source and can control egress packets before they hit the wire. The decision usually comes down to whether you understand the applications and traffic patterns (NIOC) or not (partitioned vNICs).

    TL;DR - I've deployed both for customers, and they both work :)
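
    To put rough numbers on the NIOC side (purely illustrative, not from any particular deployment): with shares of 20 for vMotion, 50 for VM traffic and 30 for IP storage on a dvUplink, vMotion is held to roughly 20/100 x 10 Gb = 2 Gb only while the uplink is actually saturated; as soon as the other pools go quiet it can burst toward the full 10 Gb, which a hard 2 Gb partition never can.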



  • 3.  RE: NIOC or Broadcom Network Partitioning on 10Gig adapter

    Posted Jun 19, 2012 11:53 PM

    Thanks, Charles, for the reply. My other thought was that if I can manage the traffic without putting an additional burden on the ESXi layer, then going the network partitioning route might be the way to go. Besides, VMware admin teams traditionally aren't experienced at troubleshooting network-related issues, so keeping the configuration at the vSphere layer less complex might help from that perspective as well.

    Are there any issues or gotchas we should be aware of before going the network partitioning route?



  • 4.  RE: NIOC or Broadcom Network Partitioning on 10Gig adapter

    Posted Jun 20, 2012 12:36 AM

    True enough. Partitioning the uplinks is an easier approach to troubleshoot. One gotcha is that host-level evacuations of VMs will take longer because of the reduced bandwidth available to vMotion. Also, 10 Gb uplinks allow 8 simultaneous vMotions instead of the standard 4 (see the configuration maximums for vSphere 5).

    For blades, I typically see a layout similar to:

    2 Gb management / vMotion

    4 Gb VM traffic

    4 Gb FCoE (SAN)
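
    If you go the partitioned route, note that each partition shows up to ESXi as its own vmnic, so it is worth checking what link speed the host actually reports for the partition carrying vMotion; whether ESXi sees that NIC as 1 GbE or 10 GbE is what drives the 4-versus-8 concurrent vMotion limit above. A quick check from the host shell (standard ESXi 5.x esxcli, nothing vendor-specific):

        esxcli network nic list   # one line per vmnic, including the link speed ESXi detected for it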



  • 5.  RE: NIOC or Broadcom Network Partitioning on 10Gig adapter

    Posted Jun 20, 2012 12:45 AM

    I tend to agree with Chris, and prefer a mixed approach.

    Partition out by major traffic type (1 Gb for management, 9 Gb for the rest, maybe?) and then use NIOC on everything.
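
    Per 10 Gb port, that hybrid might look something like this (only a sketch of the idea above, not a tested layout):

        P1 => 1 Gb ==> Management (hard partition)
        P2 => 9 Gb ==> everything else on the vDS, divided with NIOC shares (vMotion / VM traffic / IP storage)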



  • 6.  RE: NIOC or Broadcom Network Partitioning on 10Gig adapter

    Posted Jun 20, 2012 12:49 AM

    Here is how I was planning to distribute the bandwidth on the 2-port 10 Gb card.

    On Port1

    P1 => 1Gb ==> Management

    P2 => 2Gb ==> vMotion

    P3 => 1Gb ==> VM Misc (Prod)

    P4 => 6Gb ==> VM Traffic prod

    On Port 2

    P1 => 1Gb ==> Management

    P2 => 2Gb ==> vMotion

    P3 => 1Gb ==> VM Misc (Prod)

    P4 => 6Gb ==> VM Traffic prod

    Then use both P1s for management in active/standby failover on a single port group. For vMotion, put a vmkernel port on each P2 with its own IP address and make each one standby for the other. Configure the P3s and P4s with LBT (rough esxcli sketch below). How does this sound?
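
    From the ESXi 5.x command line, the management and vMotion pieces would look roughly like this on a standard switch (port group names, vmnic numbers and IP addresses below are placeholders; on the vDS the same active/standby teaming is set on the dvPortgroups, and LBT for the P3/P4 port groups is only available on a vDS):

        # Management: P1 of port 1 active, P1 of port 2 standby
        esxcli network vswitch standard portgroup policy failover set \
            --portgroup-name="Management Network" \
            --active-uplinks=vmnic0 --standby-uplinks=vmnic4

        # Multi-NIC vMotion: one vmkernel port per 2 Gb partition, each standby for the other
        esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-A
        esxcli network ip interface ipv4 set --interface-name=vmk1 \
            --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static
        esxcli network vswitch standard portgroup policy failover set \
            --portgroup-name=vMotion-A --active-uplinks=vmnic1 --standby-uplinks=vmnic5

        # Repeat for a vMotion-B port group with vmnic5 active / vmnic1 standby,
        # then enable vMotion on both vmk ports in the vSphere Client.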