I'm having some issues with our production ESXi server. We have 4 NICs in this server, and every time I plug more than one port into the network, we start having catastrophic network connectivity problems. When I plug in a second NIC, I go into vSwitch0, open the network adapters, and add the connected adapter. I can either leave it active or put it into standby. I leave ALL other settings at their defaults, including the NIC teaming settings on the vSwitch.

My whitebox at home works fine with this exact setup: I have two NICs connected and active, for what I hope is load balancing. At home, both NICs go into a cheap $60 8-port Netgear Gigabit switch and it works fine. In the office, both NICs would be connected to a Cisco/Linksys SOHO 25-port Gigabit switch that all the workstations are plugged into. From the Cisco/Linksys switch, the uplink goes to the SonicWall router, and from there we have another, older 24-port 10/100 D-Link switch for our VoIP system.
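In case it matters, here is how I've been sanity-checking that the vSwitch really is at its defaults from the ESXi shell. This assumes ESXi 5.x-style esxcli; on older 4.x builds, esxcfg-vswitch -l shows similar information.

    # List the standard vSwitches, their uplinks, and port groups
    esxcli network vswitch standard list

    # Show the teaming/failover policy on vSwitch0; at the defaults,
    # Load Balancing should read "srcport" (route based on the
    # originating virtual port ID) with both NICs listed as active
    esxcli network vswitch standard policy failover get -v vSwitch0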
I really think it's our switch getting confused by (my guess) ARP caching, or whatever technology the switch needs to support for what I'm trying to accomplish. Either the switch doesn't support my setup or the router is blocking it. (I also run a SonicWall at home with my "working" setup.)
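One way I figure I can test the ARP guess is from a workstation while both NICs are plugged in (192.168.1.10 is just a placeholder for the host's management IP):

    # Sustained ping from a Windows workstation to the ESXi host
    ping -t 192.168.1.10

    # In a second window, check which MAC the workstation has cached
    # for the host; if pings drop while this entry stays the same,
    # the workstation's ARP cache isn't the problem, which would point
    # at the switch's own MAC table instead
    arp -a 192.168.1.10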
So my question is: is there anything at the network level that not all SOHO switches support? I'll be doing more physical switch testing; I wouldn't have thought there would be anything, but something is clearly going on.
Upon further testing, we just took a cheap 5-port Netgear switch, plugged in two NICs from the ESXi server, and it seems to be working. Bad Cisco/Linksys switch, or one lacking features? I'll see later today whether we continue to have network problems with this 5-port Netgear running two active NICs.
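For today's test, the plan is just to leave pings running from both directions while both NICs stay active on the Netgear (again, the addresses are placeholders for our actual host and gateway):

    # On a workstation: sustained ping to the ESXi management IP
    ping -t 192.168.1.10

    # From the ESXi shell: ping out through the vmkernel interface
    # to the default gateway
    vmkping 192.168.1.1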