Hello all,
I have a design question concerning the network config of an ESX environment.
In short, we have to decide how many NICs we need per server.
It's an ESX cluster for a cloud environment (hosting).
I told my boss that 8 NICs per server (4 dual-port Ethernet cards) would be appropriate.
He, however, said that I'm crazy in the coconut with so many NICs per host,
because of the complex network management and cabling,
and that 2 NICs should be sufficient, or 4 at the most.
We are not sure yet whether we will use NETIOC or the traditional approach
with multiple vSwitches to separate the network flows.
This is what I had in mind when not using NETIOC:
VM NETWORK:
VSWITCH1 ---- ETH0 (active)  === physical NIC 0_port0
              ETH1 (standby) === physical NIC 1_port0
VMOTION NETWORK:
VSWITCH2 ---- ETH2 (active)  === physical NIC 0_port1
              ETH3 (standby) === physical NIC 1_port1
IP STORAGE NETWORK:
VSWITCH3 ---- ETH4 (active)  === physical NIC 2_port0
              ETH5 (standby) === physical NIC 3_port0
FAULT TOLERANCE NETWORK:
VSWITCH4 ---- ETH6 (active)  === physical NIC 2_port1
              ETH7 (standby) === physical NIC 3_port1
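For what it's worth, standing these vSwitches up can be scripted, so the management overhead of 8 NICs may be smaller than it looks. Here is a rough sketch for the VM network vSwitch using the vSphere Python bindings (pyVmomi); the host name, credentials and vmnic numbering are made up, and I'm assuming my ETH0/ETH1 come up as vmnic0/vmnic2:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to the host (made-up name and credentials)
ctx = ssl._create_unverified_context()
si = SmartConnect(host='esx01.example.com', user='root', pwd='***', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
netsys = view.view[0].configManager.networkSystem  # network config of the first host

# vSwitch1 bonded to one port on each of two physical cards (my ETH0/ETH1)
netsys.AddVirtualSwitch(
    vswitchName='vSwitch1',
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic0', 'vmnic2'])))

# Port group with an explicit failover order: vmnic0 active, vmnic2 standby
teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
    policy='failover_explicit',
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=['vmnic0'], standbyNic=['vmnic2']))
netsys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name='VM Network', vlanId=0, vswitchName='vSwitch1',
    policy=vim.host.NetworkPolicy(nicTeaming=teaming)))

Disconnect(si)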
Is it really that crazy to have 8 NICs per ESX host?
If so, is 6 acceptable?
I think 6 would work if we combine vMotion and IP storage on the same vSwitch,
or vMotion and fault tolerance on the same vSwitch.
I think 4 is an absolute minimum.
Somehow I don't think it's a good idea to combine vMotion, IP storage and fault tolerance
on the same network adapter. I think if we only get 4 adapters per host,
we should forget about IP storage and keep everything storage-related
on Fibre Channel.
But maybe I'm being too NIC-greedy here?
We do not currently use fault tolerance,
but I think there will be demand for it in the future.
So maybe it is overkill to assign separate physical adapters to it,
and it would be better to combine it with the vMotion flow?
This is what I had in mind when using NETIOC:
one distributed virtual switch, with shares for balancing the load,
and with Load Based Teaming enabled.
NETWORK FLOW  SHARE
VMOTION        20    | VSWITCH1 | eth0_ACTIVE
MGMT           10    |          | eth1_ACTIVE
NFS            20    |          | eth2_ACTIVE
FT             10    |          | eth3_ACTIVE
VM             40    |          | eth4_ACTIVE
                     |          | eth5_ACTIVE
                     |          | eth6_STANDBY
                     |          | eth7_STANDBY
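To get a feel for what those shares mean: they only matter when an uplink is saturated, and each flow is then guaranteed its fraction of that uplink. A quick toy calculation in plain Python (worst case, where every flow contends on the same 1 GbE uplink; the share values are just the ones from my table above):

# Worst-case NETIOC share model: every flow contends on the same uplink,
# so each flow is guaranteed (its shares / total shares) of that link.
SHARES = {"vmotion": 20, "mgmt": 10, "nfs": 20, "ft": 10, "vm": 40}

def guaranteed_mbit(shares, link_mbit):
    total = sum(shares.values())
    return {flow: link_mbit * s / total for flow, s in shares.items()}

for flow, mbit in guaranteed_mbit(SHARES, 1000).items():  # one 1 GbE uplink
    print(f"{flow:8s} {mbit:6.0f} Mbit/s")
# -> vmotion 200, mgmt 100, nfs 200, ft 100, vm 400

So even in the worst case NFS keeps 200 Mbit/s of a single GbE uplink, and with LBT the flows should rarely all land on the same uplink anyway.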
Maybe a higher share value will be needed for NFS.
We do not yet know what purpose the NFS datastores will serve;
there will also be Fibre Channel-connected datastores.
In the NETIOC scenario, if 8 physical NICs are really too much, I guess we could
do with 6.
But 6 also seems like a minimum to me in this situation,
or could we get away with 4?
Also, since NETIOC is still pretty new (introduced in vSphere 4.1):
does anyone here have experience with the feature on
a distributed switch?
I would say get at least 6 NICs per server:
better to have one or two NICs too many that sit unused in the beginning
but can be put to use later, than to be blocked down the road,
unable to implement a feature (e.g. Fault Tolerance)
because you don't have any free NICs left.
Or otherwise they should just go with 2x 10 Gbit NICs
and distribute the flows with NETIOC.
This would greatly simplify cabling and management,
and give more bandwidth.
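By the same share math as the little sketch above, even with everything contending on one 10 Gbit uplink, NFS at 20 of 100 shares would still be guaranteed 2 Gbit/s, double what a whole dedicated 1 GbE NIC gives it. (Still assuming my guessed share values, of course.)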
ah well.. :smileyhappy:
Any input would be greatly appreciated.