VMware vSphere


ESXi 5 and Network Diagram

  • 1.  ESXi 5 and Network Diagram

    Posted Nov 17, 2011 02:16 AM

    Hi,

    I was hoping to get a little insight into how I should configure my network adapters. I currently have 6x 1GbE network connections: 2 onboard and 2 dual-port add-on adapters.

    My idea is to use 4 adapters for Management, vMotion and the VM Network, and 2 for iSCSI:

    VM Network, Management Console, vMotion - vSwitch1

    vmnic0 - onboard 1

    vmnic1 - onboard 2

    vmnic2 - add-on 1-1

    vmnic4 - add-on 2-1

    iSCSI - vSwitch2

    vmnic3 - add-on 1-2

    vmnic5 - add-on 2-2

    I would also like to create 2 LACP groups spanning the 2 stacked switches (no single point of failure), one for each vSwitch, and use a trunk for vSwitch1.

    Does anyone see an issue with this design, or have a better design?

    I haven't bought an iSCSI SAN yet, but I am in the process of reviewing them and am looking at a Dell PowerVault MD3200i; they are somewhat within my shoestring budget, so I'm preparing for one by configuring the iSCSI ports now. My switches also support jumbo frames per port, so I was looking at enabling that as well (a rough sketch of the MTU settings I expect to use is at the end of this post).

    The switches in question are a pair of Dell PowerConnect 6224s in a stack with the stacking modules and 48 Gb stacking cables.

    The servers in question are Dell PowerEdge R510s. I am planning to use local storage to get started and then use Storage vMotion to move everything to the SAN once I have it.

    Any comments or suggestions would be great, thanks for your help.

    Any help with the adapter configuration within ESXi would be appreciated as well. :)
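
    Since I mentioned jumbo frames: here is the sort of thing I expect to run from the ESXi shell once the iSCSI vSwitch and its vmkernel ports exist. The vSwitch and vmk names are just placeholders from my plan above, so treat it as a sketch rather than a tested config:

        # assumed names: vSwitch2 = iSCSI vSwitch, vmk1/vmk2 = iSCSI vmkernel ports
        # raise the MTU on the vSwitch first, then on each vmkernel interface
        esxcli network vswitch standard set --vswitch-name=vSwitch2 --mtu=9000
        esxcli network ip interface set --interface-name=vmk1 --mtu=9000
        esxcli network ip interface set --interface-name=vmk2 --mtu=9000
        # verify
        esxcli network vswitch standard list --vswitch-name=vSwitch2
        esxcli network ip interface list

    Jumbo frames would of course also need to be enabled end to end, on the PowerConnect ports and on the SAN interfaces.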



  • 2.  RE: ESXi 5 and Network Diagram

    Posted Nov 17, 2011 04:24 AM

    I typically put my management/vMotion traffic on one vSwitch, storage on another, and VMs on another.

    So my setup would look like this:

    vSwitch0 - vNic 0 + 3

    • Port Groups:
      • Management
        Explicit failover order - vNic 0 active, vNic 3 standby
        • VMK Interface w/ Management enabled
      • vMotion 1 - VLAN TAG
        Explicit failover order - vNic 3 active, vNic 0 standby
        • VMK Interface w/ vMotion enabled
      • vMotion 2 - VLAN TAG (optional, if you want multi-NIC vMotion in vSphere 5)
        Explicit failover order - vNic 0 active, vNic 3 standby
        • VMK Interface w/ vMotion enabled

    Note: I don't tag the management interface, because in an emergency I want to be able to crossover-cable a laptop to my management uplink with minimal effort and directly manage the host.  Also, don't use EtherChannel on the "management" vSwitch, for the same reason: I want to be able to jack right into the server if needed without worrying about bonus network settings.

    vSwitch1 - vNic 1 + 4

         vSwitch load balancing set to IP Hash, switch using EtherChannel across stack members

    • Port Groups:
      • iSCSI
        • VMK interface for storage

    vSwitch2 - vNic 2 + 5

         vSwitch load balancing set to IP Hash, switch using EtherChannel across stack members

    • Port Groups:
      • VMs w/ VLAN tagging as necessary

    I just prefer to do it this way.  It allows me to easily expand the storage network without having to worry about breaking anything with the VMs or the management interface.  I also, just as preference, like to use uplinks for my management networks that the guests aren't even using.

    There's nothing in particular wrong with your layout, though.  You've made sure to eliminate failure points with the onboard/add-on NICs -- just be sure to pay attention to the load balancing options on the vSwitch/port group overrides.  A rough esxcli sketch of the vSwitch0 piece above is below, in case it helps.
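
    For reference, here is roughly how that vSwitch0 piece could be built from the ESXi shell. The uplink names (vmnic0/vmnic3), the vMotion VLAN ID and the vmk address are assumptions, and "Management Network" is just the default port group name, so adjust to your environment:

        # assumed uplinks for vSwitch0: vmnic0 (onboard) and vmnic3 (add-on)
        esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0
        # management stays untagged; the vMotion port group gets a VLAN tag (10 here, pick your own)
        esxcli network vswitch standard portgroup add --portgroup-name=vMotion1 --vswitch-name=vSwitch0
        esxcli network vswitch standard portgroup set --portgroup-name=vMotion1 --vlan-id=10
        # explicit failover order per port group
        esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --load-balancing=explicit --active-uplinks=vmnic0 --standby-uplinks=vmnic3
        esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion1 --load-balancing=explicit --active-uplinks=vmnic3 --standby-uplinks=vmnic0
        # vmkernel interface for vMotion, then enable vMotion on it
        esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion1
        esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
        vim-cmd hostsvc/vmotion/vnic_set vmk1

    A second vMotion port group for multi-NIC vMotion would just repeat the last block with the uplink order reversed.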



  • 3.  RE: ESXi 5 and Network Diagram

    Posted Nov 17, 2011 05:31 AM

    Thanks for your reply. I had also debated a configuration like you suggest, and came up with the one I outlined after reviewing Dell reference sheets on VMware deployments. For your management interface, with vMotion on a separate VLAN, are your ports in general mode with the PVID set to the VLAN you use for management? Also, for your EtherChannel, are you using a trunk or a general-mode switch port config? I do like the idea of that interface being easily accessible in case of an issue, thanks.



  • 4.  RE: ESXi 5 and Network Diagram

    Posted Nov 17, 2011 05:59 AM

    I generally put all vMotion traffic for a datacenter in its own VLAN, which is why the config is as it is above.  I haven't checked the recommended design for vMotion in vSphere 5.0, but I assume VMware's answer is still to put vMotion on its own isolated network.  So I put a VLAN ID on the port groups whose VMK interfaces handle vMotion, and I leave management untagged.  Is there some overhead in applying those tags, especially on the vMotion network? Yeah, but hardly anything.  And to me, being able to jack right into the server if I really, really need to get in is worth the insignificant difference.  The tagging itself is just a property of the port group (see the snippet below).
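
    In esxcli terms that boils down to something like this (VLAN 10 and the port group names are only examples):

        # tag only the vMotion port group; VLAN 0 on the management port group means untagged
        esxcli network vswitch standard portgroup set --portgroup-name=vMotion1 --vlan-id=10
        esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=0
        esxcli network vswitch standard portgroup list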



  • 5.  RE: ESXi 5 and Network Diagram

    Posted Nov 17, 2011 10:33 PM

    With the Dell switches, if I remember correctly, the ports have to be in trunk mode when you create the channel groups.  Everything else from either design will work nicely.

    If you only have 6 NICs, another option is this:

    vSwitch0 - vmnic0,1,2,3 in an IP hash config with LACP set up across the stacked switches

    Service Console

    vMotion - VLAN 10

    VM Network VLAN ?? if needed

    vSwitch1

    iSCSI - vmnic4,5

    I believe you said you only have 6 NICs; this gives you 4 Gb of aggregate throughput shared by the VMs, vMotion and the service console (any single flow still tops out at 1 Gb with IP hash), while the VLANs separate the traffic.  Just a thought -- a rough esxcli version of this is below.
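
    Very roughly, from the ESXi shell that option would look something like this (uplink names and the VLAN are assumed, and vmnic0 is already on vSwitch0 by default):

        # 4 uplinks on vSwitch0, teamed with IP hash to match the switch channel group
        esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
        esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
        esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0
        esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
        # vMotion and VM traffic separated by VLAN on the same vSwitch
        esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0
        esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=10
        # dedicated iSCSI vSwitch on the remaining two NICs
        esxcli network vswitch standard add --vswitch-name=vSwitch1
        esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch1
        esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch1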



  • 6.  RE: ESXi 5 and Network Diagram

    Posted Nov 18, 2011 03:20 AM

    I was under the impression that you had to NOT enable LACP or trunks for the EtherChannel to work correctly? At least, that was the advice given to me on these forums only a few weeks ago.

    For the OP: I've just put in a very similar setup to what you are looking at: 2x Dell R710s (dual 6-core Xeons, 48 GB RAM each, etc.) and an MD3200i (dual controller) with 12x 600 GB 15K SAS drives (split into 2x 6-drive RAID 10s), plugging into 2x PowerConnect 5424s for iSCSI traffic only and a PowerConnect 6248 for all my layer 3 routing needs. I went with 2x 4-port 1000 Mb NICs on top of the 4 onboard to give me a total of 12 NICs and a bit of 'play around' room. The config I've set up is, in a nutshell:

    C0P0: 10.0.5.1

    C0P1: 10.0.6.1

    C0P2: 10.0.7.1

    C0P3: 10.0.8.1

    C1P0: 10.0.5.2

    C1P1: 10.0.6.2

    C1P2: 10.0.7.2

    C1P3: 10.0.8.2

    Subnets 5 and 7 plug into SW1, and subnets 6 and 8 into SW2.

    I didn't bother setting them up in their own VLANs. There seems to be a bit of conflicting advice around on this... roughly a 50-50 split between those who say DO VLAN them and those who say DON'T bother. I went with don't, just to save another few clicks of management headache. No other reason.

    In ESXi I have vSwitch0 dedicated to Management and vMotion, split as per every other article/post on this.

    For iSCSI I have set up the following (a rough esxcli sketch of the port binding follows the list):

    vSwitch1

    iSCSI1: vmnic 6 and 11 - usual setup  10.0.5.20  SW1

    iSCSI2: vmnic 11 and 6 - usual setup  10.0.7.20  SW1

    vSwitch2

    iSCSI3: vmnic 5 and 10 - usual setup  10.0.6.20  SW2

    iSCSI4: vmnic 10 and 5 - usual setup  10.0.8.20  SW2
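
    As promised, the software iSCSI port binding side of that looks roughly like this from the shell. The adapter name (vmhba33) and vmk numbers are assumptions, so check esxcli iscsi adapter list and esxcli network ip interface list on your own host first:

        # enable the software iSCSI initiator and find its vmhba name
        esxcli iscsi software set --enabled=true
        esxcli iscsi adapter list
        # override each iSCSI port group so only one uplink is active (the other should drop to unused)
        esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI1 --active-uplinks=vmnic6
        esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI2 --active-uplinks=vmnic11
        # bind the iSCSI vmkernel ports to the software iSCSI adapter
        esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
        esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3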

    The VMs use all the other NICs, which are plugged into the 6248 switch as a trunk and set to VLAN 10 - no LACP. In ESXi this is set up as vSwitch3. I have not set VLAN 10 on vSwitch3 as it doesn't seem to like it for some reason; I figure the physical switch can handle the VLAN tagging.

    All in all it seems to be working OK so far. I have 2 Windows 2008 R2 VMs running on each physical host, and vMotion and migration between the two seem to work well. I will install Exchange 2010 on one of them over the weekend and bring it into the live domain to see how it goes. The major issue I came across was the extremely slow boot times after a reboot or shutdown; a patch has just been released which seems to fix this. The only other niggling issue I have at the moment, and it could very well be me or my setup, is that at times I get an error in the Dell MD3200i software telling me that all the LUNs I mapped on the second RAID 10 array have migrated themselves onto Controller 0 (which is not their preferred owner). When they are on Controller 1 (the preferred owner) I sometimes cannot map additional LUNs, or it times out; manually move them across to Controller 0 and it works straight away. Still sorting that one out...



  • 7.  RE: ESXi 5 and Network Diagram

    Posted Nov 18, 2011 03:34 PM

    I was told by VMware a while back that if you have any kind of channeling set up on your switch (EtherChannel, LACP, LAG) you need to change your teaming on the virtual switch to IP hash for it to work properly.  If you leave it as originating port ID it will cause problems.  However, if you just have a trunk to pass VLANs, you want to leave it as originating port ID.  I could be wrong, or things could have changed over the years, but that was my understanding of it.

    Also, to use EtherChannel, LACP or LAG you would have to run all your connections to the same switch (which isn't desirable), or have switches with stacking capability that presents the stack as a single switch to STP, like the Dell 6224/6248 or the 3750s in the Cisco line.



  • 8.  RE: ESXi 5 and Network Diagram

    Posted Nov 20, 2011 03:06 AM

    Well, I opted for my first plan using 4 NICs in an EtherChannel; so far so good.

    I configured my switches like so:

    interface range ethernet 1/g5-1/g6,2/g5-2/g6

    spanning-tree portfast

    switchport mode trunk

    switchport trunk allowed vlan add 900-901

    channel-group 3 mode auto

    interface port-channel 3

    spanning-tree portfast

    switchport mode trunk

    switchport trunk allowed vlan add 900-901

    For the vSwitch I configured NIC Teaming like so:

    Load Balancing: Route based on IP hash

    Network Failover Detection: Link status only

    Notify Switches: Yes

    Failback: Yes

    vmnic0, 1, 2, and 4 are all active!

    VM Network is inheriting these settings from the vSwitch (a quick CLI check of the teaming policy is shown below).
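
    If it's useful, the same teaming settings can also be applied and checked from the ESXi shell with something along these lines (vSwitch1 being my EtherChannel-facing vSwitch):

        # route based on IP hash, link status detection, notify switches and failback enabled
        esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash --failure-detection=link --notify-switches=true --failback=true
        # confirm what the vSwitch and the VM Network port group actually ended up with
        esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup policy failover get --portgroup-name="VM Network"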

    I configured the VLAN ID for the VM Network and Management Console port groups. Now, for the Management Console and vMotion, are you inheriting the settings from the vSwitch or setting active/standby connections? By default it uses Route based on the originating virtual port ID, so do you override that and bring all the NICs into active? I see it sets one NIC to unused.

    Also, I imagine you create a separate VM Network port group for every VLAN ID you want to be able to access?

    I played around with unplugging cables and the failover is pretty fast, only dropping a single ping; in Hyper-V R2 it would drop about 2 to 3 pings before failing over. :)

    Thanks for all of your comments, they have been very helpful.



  • 9.  RE: ESXi 5 and Network Diagram

    Posted Nov 20, 2011 03:10 AM

    One more reply: I guess you set up your VLANs at the Networking configuration level? I couldn't find anything in the VM configuration where you could define a VLAN ID, which is what I would do in Hyper-V.

    Great learning curve, but I think I am getting the hang of it. :)



  • 10.  RE: ESXi 5 and Network Diagram

    Posted Nov 20, 2011 03:32 AM

    Think of port groups as the literal ports on a managed switch.  You have to specify the VLAN that port is on, but rather than VMware saying you plug into port 0/35, you plug into a port group.  The port group carries the VLAN information.  So if you specify 0, it passes whatever native VLAN the uplinks are on right into the VMs.

    If you specify a VLAN, as VM traffic leaves the VM and hits the vSwitch, VMware will add the VLAN ID at that point and then send it on to the physical uplink (assuming that's where it's switched to).

    So yes, VLANs are not specified on a VM level.  You specify the VLANs that Port Groups are on and then associate VMs with Port Groups.

    Note: Multiple Port Groups can be on the same VLAN ID.  For instance, you can have "Production Servers" and specify VLAN 61 and then have "Production SQL Servers" and specify VLAN 61 also.  Both will work just fine.

    If you want to pass all VLANs into a VM, use VLAN ID 4095.  Your VM will need to know how to handle the VLAN tags itself, however.
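
    As a concrete sketch (the port group names, vSwitch name and VLAN 61 are just the example values from above, everything else is assumed):

        # two port groups on the same vSwitch can happily share a VLAN ID
        esxcli network vswitch standard portgroup add --portgroup-name="Production Servers" --vswitch-name=vSwitch2
        esxcli network vswitch standard portgroup set --portgroup-name="Production Servers" --vlan-id=61
        esxcli network vswitch standard portgroup add --portgroup-name="Production SQL Servers" --vswitch-name=vSwitch2
        esxcli network vswitch standard portgroup set --portgroup-name="Production SQL Servers" --vlan-id=61
        # VLAN 4095 passes all tags through to the guest (virtual guest tagging); "Trunk To Guest" is a made-up name
        esxcli network vswitch standard portgroup set --portgroup-name="Trunk To Guest" --vlan-id=4095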



  • 11.  RE: ESXi 5 and Network Diagram

    Posted Nov 20, 2011 04:28 AM

    Looking at your layout, how would you enable multi-NIC vMotion if the port is in access mode? Are you setting a native VLAN on the trunk and not assigning a VLAN ID to the management network?