ESXi

  • 1.  4 NICs available for VM traffic per host; how to configure?

    Posted Jun 14, 2012 12:15 PM

    I have 4 physical NICs available per ESXi host to allocate to VM traffic.  2 ports are onboard, and the other 2 are on a PCI card.  The NICs need to be connected to 2 physical switches for redundancy, with 1 onboard and 1 PCI port going to switch 1, and likewise with the other pair.  My guest VMs are located on several VLANs.
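
    For context, each uplink lands on a switch port trunked for the guest VLANs, along these lines (a sketch; the interface name and VLAN IDs are placeholders, and the encapsulation line depends on the switch model):

        interface GigabitEthernet0/1
         description ESXi host uplink (vmnic0)
         switchport trunk encapsulation dot1q
         switchport mode trunk
         switchport trunk allowed vlan 10,20,30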

    I initially configured a vDS with 4 uplink ports, added all 4 available NICs, and created all the required port groups.  The problem is that on each physical switch, STP blocks one of the two connections, either the onboard port or the PCI port.  I actually lose guest VM connectivity when a blocked uplink is used.

    I considered combining 2 links into an etherchannel, with one channel running to each physical switch, but a vDS limitation allows for only one etherchannel per cluster.

    Any suggestions, or am I stuck with using only 2 NICs per host?  Any solution needs to have redundancy (i.e., NICs split across the physical switches).



  • 2.  RE: 4 NICs available for VM traffic per host; how to configure?

    Posted Jun 14, 2012 12:24 PM


  • 3.  RE: 4 NICs available for VM traffic per host; how to configure?

    Posted Jun 14, 2012 12:46 PM

    Thanks for the link, but that design uses 4 physical NIC ports for all ESXi traffic: management, vMotion, and guest VM traffic.  In my situation, all my management and vMotion ports are already allocated, and after that I'm left with 4 physical NICs strictly for VM traffic.



  • 4.  RE: 4 NICs available for VM traffic per host; how to configure?

    Posted Jun 14, 2012 12:50 PM

    greenpride32 wrote:

    I initially configured a vDS with 4 uplink ports, added all 4 available NICs, and created all the required port groups.  The problem is that on each physical switch, STP blocks one of the two connections, either the onboard port or the PCI port.  I actually lose guest VM connectivity when a blocked uplink is used.

    Does STP really block the link?  That is very strange, since there should not be any "visible loop" when attaching several NICs from one physical switch to a virtual switch, as long as you use Port ID load balancing on the vSwitch.  Can you confirm that you have that NIC teaming policy?

    Also, since there "should" not be any loops, STP is often disabled on ports facing ESXi hosts.
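
    For a vDS, the teaming policy is visible in the vSphere Client under the port group's Teaming and Failover settings.  On a standard vSwitch you could verify it from the host shell; as a sketch (the vSwitch name is an example):

        esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

    The "Load Balancing" line in the output shows the active policy.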



  • 5.  RE: 4 NICs available for VM traffic per host; how to configure?

    Posted Jun 14, 2012 04:54 PM

    My virtual port groups are configured as "Route based on originating virtual port".  Only one NIC per switch shows CDP information, and only that interface is able to pass any traffic.
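
    For reference, I'm reading the CDP details per uplink from the host shell, roughly like this (vmnic0 as an example name):

        vim-cmd hostsvc/net/query_networkhint --pnic-name=vmnic0

    The uplinks that can't pass traffic show no connected switch port details there.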



  • 6.  RE: 4 NICs available for VM traffic per host; how to configure?

    Posted Jun 14, 2012 06:18 PM

    greenpride32 wrote:

    My virtual port groups are configured as "Route based on originating virtual port".  Only one NIC per switch shows CDP information, and only that interface is able to pass any traffic.

    CDP should still work and advertise even if a link is logically blocked by STP.  Are you sure STP is involved?
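
    One way to verify on the switch side, assuming Cisco IOS (the interface name is an example):

        show spanning-tree interface GigabitEthernet0/1 detail

    Check the port state (forwarding vs. blocking) and the "BPDU: sent X, received Y" counters.  Received should stay at 0 on a port facing an ESXi host, since ESXi does not generate BPDUs; if it is climbing, something behind that port is bridging.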



  • 7.  RE: 4 NICs available for VM traffic per host; how to configure?

    Posted Jun 14, 2012 09:52 PM

    It's common to turn on portfast and bpduguard for the links going to a vSphere server to avoid STP blocking a port.

    http://wahlnetwork.com/2012/05/07/its-a-trunk-using-portgroup-vlans-with-vsphere/
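
    On Cisco IOS that's along these lines (a sketch; note that on trunk ports the portfast setting needs the trunk keyword to take effect):

        interface GigabitEthernet0/1
         description ESXi uplink
         spanning-tree portfast trunk
         spanning-tree bpduguard enable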



  • 8.  RE: 4 NICs available for VM traffic per host; how to configure?

    Posted Jun 14, 2012 11:00 PM

    Chris Wahl wrote:

    It's common to turn on portfast and bpduguard for the links going to a vSphere server to avoid STP blocking a port.

    http://wahlnetwork.com/2012/05/07/its-a-trunk-using-portgroup-vlans-with-vsphere/

    It's common to turn it on as a best-practice networking implementation.  VMware doesn't generate BPDUs, so they should never land on the switch.

    STP should only block a port if something "else" is wrong.  Disabling STP is never the answer to fixing some sort of loop.

    I'm unsure why you're experiencing the issue in the original post, except to say that with "route by port ID", having the switch block a port suggests one of your VMs is bridging traffic.  Do you have any VMs with two NICs doing anything special?

    There is no reason you shouldn't be able to use the configuration presented. As stated, are you sure the switch is blocking with STP? What sort of switch is it?

    Finally, you may wish to investigate etherchannel; if you actually need the bandwidth of four NICs, you would likely rather utilise it.
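
    If you do go down that path, note that the matching "Route based on IP hash" teaming policy on the ESXi side requires a static channel rather than LACP, so the switch side would look roughly like this (interface and channel numbers are examples):

        interface range GigabitEthernet0/1 - 2
         channel-group 1 mode on
        !
        interface Port-channel1
         switchport mode trunk
         switchport trunk allowed vlan 10,20,30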



  • 9.  RE: 4 NICs available for VM traffic per host; how to configure?

    Posted Jun 14, 2012 11:40 PM

    Some of the VM guests do have multiple NICs on different LAN segments.  The purpose is to provide both internal access and firewalled external access, not necessarily to bridge traffic.  But the issue of CDP only working on one of the interfaces arose before any VMs were migrated to the newly installed ESXi hosts.  Even now, with guests in the picture, I can vMotion every guest off a host and the result is the same.

    Since I have 2 physical switches and a requirement is to split the links between them, I cannot use etherchannel, because vDS supports only a single etherchannel.  Without that limitation, I would ideally have created a 2-port etherchannel to each physical switch.

    I'm not sure what else would be blocking the NIC if not STP.  I will read up on the portfast option.  Thanks.
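
    To rule out accidental bridging in the dual-NIC guests, I'll also check inside each one.  On the Linux guests that's roughly (assuming the bridge-utils package is installed):

        brctl show                  # should list no bridge interfaces
        sysctl net.ipv4.ip_forward  # 0 = not routing between segments

    On the Windows guests I'll look for a "Network Bridge" entry under Network Connections.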



  • 10.  RE: 4 NICs available for VM traffic per host; how to configure?

    Posted Jun 19, 2012 11:55 AM

    As asked above, what kind of physical switch do you have?

    Do you have access to the shell on this switch?  It would be interesting to see some command outputs related to both CDP and STP.
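
    If it is a Cisco switch, for example, outputs along these lines would help (the VLAN ID is an example):

        show cdp neighbors detail
        show spanning-tree blockedports
        show spanning-tree vlan 10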

    Also, your VMs with multiple NICs - are you sure there is no bridging inside them?  That could very well create network loops, and as noted by Josh26, the answer is not to disable STP or set it to portfast, but to break the loop, since loops must not exist on Ethernet.