Months ago, when I set up a lab using a handful of old HPE ProLiant G6s (which run the HPE build of ESXi 6.0 U3 flawlessly), I had only run one patch cable to one of the two onboard NICs, so ESXi installed with only vmnic0. I've even played with vMotion on that setup.
I've now decided to cable the second NIC to the switch as well, but I wasn't sure what to do with the vmnic1 that appeared.
The simple thing I did was just add vmnic1 to vSwitch0 as a second uplink, but I have no idea what to do with the "Teaming and failover" settings, so I left those at the defaults.
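For reference, this is roughly what I did, just via the ESXi shell instead of the UI (vSwitch0 and vmnic1 are the actual names on my hosts; the commands are standard esxcli):

    # List the physical NICs and their link state
    esxcli network nic list

    # Add vmnic1 as a second uplink on the existing standard vSwitch
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

    # Confirm both uplinks now show up on the vSwitch
    esxcli network vswitch standard list --vswitch-name=vSwitch0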
I noticed that in the topology view, when I click "vmnic0 1000 Full", it highlights vmk0 and the VMs in orange. When I click "vmnic1 1000 Full", it highlights only the VMs.
Since the VMs light up orange on both NICs, does that mean that when a client does, say, RDP/ICA into a VM, the host decides which NIC that client's traffic goes through?
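In case the answer depends on the teaming policy, this is how I've been checking what the vSwitch is actually set to ("VM Network" is just the default port group name on my hosts). As I understand it, the default "Route based on originating virtual port ID" policy pins each VM's port to one uplink rather than balancing per packet, but correct me if I'm wrong:

    # Show the load balancing / failover policy on the vSwitch
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

    # Port groups can override the vSwitch-level policy, so check those too
    esxcli network vswitch standard portgroup policy failover get --portgroup-name="VM Network"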
I'd like a standard setup where host1 runs VMs off host2's datastore, since running VMs off their own host's datastore is a no-no. I want to keep it simple, with every host just being an ESXi box, before I dive into a dedicated iSCSI server or vSAN.
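When I do eventually get to the iSCSI side, my understanding is it would look roughly like this from the ESXi end. This is only a sketch: the target address and the vmhba name are made-up placeholders, and I'd still need something actually serving the iSCSI target, since a plain ESXi box can't export its local datastore by itself:

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # Point it at whatever box ends up serving the target (placeholder address)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.50:3260

    # Rescan so the new LUN shows up as a datastore candidate
    esxcli storage core adapter rescan --adapter=vmhba64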
I'm mostly interested in performance and less in redundancy; I don't want to do anything production-fancy like adding a whole second switch for failover. If a NIC ever dies on these 10-year-old boxes, they're getting tossed.