ESXi

  • 1.  ESX, number of NICs, NETIOC or traditional approach

    Posted Jun 23, 2011 09:28 AM

    Hello all,

    I have a design question concerning the network config of an ESX environment.

    Briefly, we need to decide how many NICs per server we need.

    It's an ESX cluster for a cloud environment (hosting).

    I told my boss that 8 NICs per server would be appropriate (4 dual-port Ethernet cards).

    However, he said I'm crazy in the coconut with that many NICs per host

    because of the complex network management and cabling,

    and that 2 NICs should be sufficient, or 4 at most.

    We are not sure yet whether we will use NETIOC or the traditional approach

    with multiple vSwitches to separate the network flows.

    This is what I had in mind when not using NETIOC:

    VM NETWORK:

            vSwitch1 ---- ETH0 (active)   === physical NIC 0, port 0
                          ETH1 (standby)  === physical NIC 1, port 0

    VMOTION NETWORK:

            vSwitch2 ---- ETH2 (active)   === physical NIC 0, port 1
                          ETH3 (standby)  === physical NIC 1, port 1

    IP STORAGE NETWORK:

            vSwitch3 ---- ETH4 (active)   === physical NIC 2, port 0
                          ETH5 (standby)  === physical NIC 3, port 0

    FAULT TOLERANCE NETWORK:

            vSwitch4 ---- ETH6 (active)   === physical NIC 2, port 1
                          ETH7 (standby)  === physical NIC 3, port 1
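
    For reference, a minimal pyVmomi (vSphere Python SDK) sketch of that layout is below. It's only an illustration: the host address, credentials, and the ETH-to-vmnic mapping (ETH0..ETH7 == vmnic0..vmnic7) are assumptions, and the inventory lookup is simplified.

        # Sketch: create the four standard vSwitches from the layout above, each with
        # one explicit active uplink and one standby uplink.
        # Placeholders: host address, credentials, vmnic numbering.
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="esx01.example.local", user="root", pwd="***")
        # Simplified lookup: first host of the first cluster of the first datacenter.
        host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
        netsys = host.configManager.networkSystem

        layout = {
            "vSwitch1": ("vmnic0", "vmnic1"),   # VM network
            "vSwitch2": ("vmnic2", "vmnic3"),   # vMotion
            "vSwitch3": ("vmnic4", "vmnic5"),   # IP storage
            "vSwitch4": ("vmnic6", "vmnic7"),   # Fault Tolerance
        }

        for name, (active, standby) in layout.items():
            spec = vim.host.VirtualSwitch.Specification(
                numPorts=128,
                bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=[active, standby]),
                policy=vim.host.NetworkPolicy(
                    nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                        policy="failover_explicit",   # honour the explicit NIC order below
                        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                            activeNic=[active], standbyNic=[standby]))))
            netsys.AddVirtualSwitch(vswitchName=name, spec=spec)

        Disconnect(si)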

    Is it really that crazy to have 8 NICs per ESX host?

    If so, is 6 acceptable?

    I think 6 would work if we combine vMotion and IP storage on the same vSwitch,

    or vMotion and Fault Tolerance on the same vSwitch.

    I think 4 is an absolute minimum.

    Somehow I don't think it's a good idea to combine vMotion, IP storage, and Fault Tolerance

    on the same network adapter. If we only get 4 adapters per host,

    I think we should forget about IP storage and keep everything storage-related

    connected over Fibre Channel.

    But maybe I'm being too NIC-greedy here?

    We currently do not use Fault Tolerance,

    but I think there will be demand for it in the future.

    So maybe it is overkill to assign separate physical adapters to it;

    maybe it would be better to combine it with the vMotion flow?

    This is what I had in mind when using NETIOC:

    One virtual switch, with shares to balance the load,

    and with Load-Based Teaming enabled.

    NETWORK FLOW | SHARE | SWITCH   | UPLINK

    vMotion      |  20   | vSwitch1 | eth0 (active)
    MGMT         |  10   |          | eth1 (active)
    NFS          |  20   |          | eth2 (active)
    FT           |  10   |          | eth3 (active)
    VM           |  40   |          | eth4 (active)
                 |       |          | eth5 (active)
                 |       |          | eth6 (standby)
                 |       |          | eth7 (standby)

    Maybe a higher share value will be needed for NFS.

    We do not yet know what purpose the NFS datastores will serve;

    there will also be Fibre Channel-connected datastores.
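
    To put rough numbers on those shares (assuming all five pools above contend for the same saturated uplink; shares only kick in under contention, and idle pools give their bandwidth back):

        # Back-of-the-envelope: what each share value means on one saturated uplink.
        shares = {"vMotion": 20, "MGMT": 10, "NFS": 20, "FT": 10, "VM": 40}
        uplink_gbps = 1.0               # assume 1 GbE uplinks; use 10.0 for a 10 GbE design

        total = sum(shares.values())    # 100 with the values above
        for pool, share in shares.items():
            pct = share / total
            print(f"{pool:8s} {share:3d} shares -> {pct:6.1%} = {pct * uplink_gbps:.2f} Gbit/s minimum")

    With these values, NFS would be guaranteed roughly 20% of a congested uplink, which is why that share might indeed need to go up if the NFS datastores turn out to be busy.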

    In the NETIOC scenario, if 8 physical NICs are really too much, I guess we could

    do with 6.

    But 6 also seems like a minimum to me in this situation,

    or could we get away with 4?

    Also, about NETIOC: since this stuff is still pretty new (introduced in 4.1),

    does anyone here have experience with the new NETIOC feature on

    a distributed switch?

    I would say get at least 6 NICs per server;

    better to have one or two NICs too many that you do not use in the beginning

    but can put to use later, than to be blocked down the road,

    unable to implement a feature (e.g. Fault Tolerance)

    because you don't have any free NICs left.

    Or otherwise they should just go with 2x 10 GbE NICs

    and distribute the traffic with NETIOC.

    This would greatly simplify cabling and management,

    and give more bandwidth.

    Ah well... :-)

    Any input would be greatly appreciated.



  • 2.  RE: ESX, number of NICs, NETIOC or traditional approach
    Best Answer

    Posted Jun 23, 2011 01:03 PM

    For best practice you will need 8:

    vMotion and your management network (you had left this off) can share a pair, with the pair configured active/standby for management and standby/active for vMotion. The FT network, VM network, and IP storage should all be isolated and redundant. Depending on your I/O load, you can condense down to four NICs, with FT, VMs, and IP storage sharing the same NICs but on different VLANs.
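
    For illustration, that shared management/vMotion pair could look roughly like the pyVmomi sketch below; the port group names, VLAN IDs, vmnic names, host address, and credentials are all placeholders.

        # Sketch: two port groups on the same vSwitch with the NIC order inverted,
        # so management prefers vmnic0 and vMotion prefers vmnic1, each failing
        # over to the other uplink. All names, VLANs, and vmnics are placeholders.
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="esx01.example.local", user="root", pwd="***")
        host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
        netsys = host.configManager.networkSystem

        def portgroup_spec(name, vlan, vswitch, active, standby):
            return vim.host.PortGroup.Specification(
                name=name, vlanId=vlan, vswitchName=vswitch,
                policy=vim.host.NetworkPolicy(
                    nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                        policy="failover_explicit",
                        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                            activeNic=[active], standbyNic=[standby]))))

        netsys.AddPortGroup(portgrp=portgroup_spec("Management", 10, "vSwitch0", "vmnic0", "vmnic1"))
        netsys.AddPortGroup(portgrp=portgroup_spec("vMotion",    20, "vSwitch0", "vmnic1", "vmnic0"))

        Disconnect(si)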



  • 3.  RE: ESX, number of NICs, NETIOC or traditional approach

    Posted Jun 24, 2011 11:30 AM

    Thanks a lot for your reaction.

    We'll have to look around to see if blades with 8 Ethernet ports + 2 Fibre Channel ports exist;

    otherwise we'll have to take standalone rack servers.

    We're not looking forward to doing all the cabling with standalone servers, though...

    Blade servers are so easy: they don't take up much rack space, and there's no cabling for network, Fibre Channel, ...

    Also, replacing a blade in case of a hardware problem is so much easier and faster.

    Does anybody have any practical experience with NETIOC?

    It looks pretty promising to me.

    I read through this whitepaper on the issue:

    http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf

    It compares the traditional approach with multiple 1 GbE NICs

    to an approach with just two 10 GbE NICs and traffic control

    with NETIOC on a distributed switch.

    This would dramatically simplify network management outside of VMware (switches, less cabling, etc.).
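
    For anyone evaluating it, here is a read-only pyVmomi sketch that just lists the built-in NETIOC resource pools and their current share values on each vDS; the vCenter address and credentials are placeholders, and it assumes NETIOC is already available on the switch.

        # Read-only sketch: print the NIOC network resource pools and their share
        # values for every distributed switch in the inventory.
        # Placeholders: vCenter address and credentials.
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.local", user="administrator", pwd="***")
        content = si.content
        view = content.viewManager.CreateContainerView(
            container=content.rootFolder, type=[vim.DistributedVirtualSwitch], recursive=True)

        for dvs in view.view:
            print(f"vDS: {dvs.name}")
            for pool in dvs.networkResourcePool or []:
                alloc = pool.allocationInfo
                print(f"  {pool.name:30s} shares={alloc.shares.shares:4d} "
                      f"({alloc.shares.level})  limit={alloc.limit}")

        view.Destroy()
        Disconnect(si)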

    Is anybody here running this kind of setup in a production environment yet?



  • 4.  RE: ESX, number of NICs, NETIOC or traditional approach

    Posted Jun 29, 2011 11:06 AM

    I recommend the vDS NETIOC approach with 1 GbE or 10 GbE.

    Use port groups per service type; the shares and percentages in the white paper are a great starting point.

    As far as blades supporting those vmnic counts, look into UCS, Xsigo (with Dell, Hitachi, HP, ...), HP Flex-10, ...

    If you go the UCS route, use QoS from UCSM, as the bandwidth assumed by VMware does not include vHBAs, whereas in UCSM it does.

    If you are fine with 2x 10 GbE and NETIOC / Nexus 1000V, look into the HP SL-series and Supermicro TwinBlade, or other high-density, scalable, HPC-type setups.

    If you are talking about a small or large cloud of a few hundred VMs, don't do anything less than 96 GB of memory, and don't use 2 vmnics without class-type QoS.

    If you are set on FCP block storage and need to buy FC ports, definitely look at UCS or 10 GbE CNAs with FCoE instead.

    Disclaimer: I am a VMware employee; however, all comments are my own and do not reflect VMware or any other entity.



  • 5.  RE: ESX, number of NICs, NETIOC or traditional approach

    Posted Jul 25, 2011 09:04 AM

    Thanks to all for your reactions,

    very valuable info!

    Sorry for my late reply, I was on holiday.

    We've decided to go with Dell blades with 8 NICs.