I have 4 physical NICs available per ESXi host to allocate to VM traffic: 2 ports are onboard and the other 2 are on a PCI card. The NICs need to be connected to 2 physical switches for redundancy, with 1 onboard and 1 PCI port going to switch 1, and the other pair going to switch 2. My guest VMs are spread across several VLANs.
I initially configured a vDS with 4 uplink ports, added all 4 NICs, and created all the required portgroups. The problem is that on each physical switch, STP blocks one of the two connections (either the onboard or the PCI port), and guest VMs lose connectivity whenever their traffic is pinned to a blocked uplink.
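For reference, this is roughly how the portgroup teaming is set today. A minimal pyVmomi sketch with placeholder names (the vCenter hostname, credentials, portgroup name, and uplink labels below are mine, not from the actual environment): default "Route based on originating virtual port" with all four uplinks active.

```python
# Sketch only: set a dvPortgroup to "Route based on originating virtual port"
# with all four uplinks active. Hostname, credentials, and names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

def find_portgroup(content, name):
    """Walk the inventory and return the dvPortgroup with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    try:
        return next(pg for pg in view.view if pg.name == name)
    finally:
        view.Destroy()

pg = find_portgroup(content, "VM-VLAN-100")  # placeholder portgroup name

# Teaming: originating virtual port ID, all four uplinks active.
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value="loadbalance_srcid"),
    notifySwitches=vim.BoolPolicy(value=True),
    rollingOrder=vim.BoolPolicy(value=False),
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=["Uplink 1", "Uplink 2", "Uplink 3", "Uplink 4"]))

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming))

pg.ReconfigureDVPortgroup_Task(spec)
Disconnect(si)
```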
I considered bonding the links into two 2-port EtherChannels, one running to each physical switch, but a vDS limitation only allows for one EtherChannel per cluster.
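On the vDS side, that idea would amount to switching the teaming policy to IP hash with only the pair of uplinks that lands on one switch in the channel; a fragment in the same style as the sketch above (placeholder uplink names, and it assumes a matching static port-channel on the physical switch):

```python
# What the EtherChannel idea would look like for one portgroup: IP-hash load
# balancing with only the onboard + PCI pair that goes to switch 1 active.
# Uplink names are placeholders; the physical switch needs a matching channel.
from pyVmomi import vim

etherchannel_teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value="loadbalance_ip"),  # Route based on IP hash
    uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=["Uplink 1", "Uplink 3"]))   # pair cabled to switch 1
```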
Any suggestions, or am I stuck with using only 2 NICs per host? Any solution needs to keep the redundancy (i.e., NICs split across the two physical switches).