VMware vSphere

  • 1.  What is the best practice for dual management interfaces?

    Posted Apr 06, 2011 03:55 PM

    Hello community!

    I am upgrading a couple of ESX 4.0 hosts to ESXi 4.1U1 in the coming weeks. My question here is about how to set up the management networks. In ESX classic 4.0 I have a Service Console port group (on vSwitch0) as well as a VMkernel port group (also on vSwitch0), which gives my host SC and vMotion capabilities. Note: my vSwitch0 has two vmnics attached to it, one standby and one active. This is just how our dual switches are set up, so it needs to be active/standby.

    I got to thinking (while reading the excellent HA and DRS technical deepdive book by Duncan Epping and Frank Denneman) that I should carefully consider my management networking once I upgrade these hosts to ESXi 4.1, which of course does away with the Service Console and uses the VMkernel instead.

    The question, given my setup and with best practices in mind: should I have two VMkernel ports? If so, how should I configure each one for management traffic and vMotion?

    I think this will be a good discussion to have.

    Thanks all,

    Matt



  • 2.  RE: What is the best practice for dual management interfaces?

    Posted Apr 06, 2011 03:57 PM

    For example, one scenario I can see is doing this:

    vmk0 - management traffic

    vmk1 - vMotion

    Another scenario:

    vmk0 - management traffic, vMotion

    vmk1 - management traffic, vMotion
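
    In case it helps to see this scripted, here's a minimal PowerCLI sketch of the first scenario (one vmk per role). The host name, port group names, and IP details are placeholders made up for illustration, not values from this thread:

        # Placeholder host/IP values - adjust for your environment
        $vmhost  = Get-VMHost "esx01.example.com"
        $vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"

        # First vmk (typically vmk0): management traffic only
        New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch `
            -PortGroup "Management Network" -IP 10.0.0.10 -SubnetMask 255.255.255.0 `
            -ManagementTrafficEnabled $true

        # Second vmk (typically vmk1): vMotion only
        New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch `
            -PortGroup "vMotion" -IP 10.0.1.10 -SubnetMask 255.255.255.0 `
            -VMotionEnabled $true

    The second scenario would just mean passing both -ManagementTrafficEnabled $true and -VMotionEnabled $true on each adapter.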



  • 3.  RE: What is the best practice for dual management interfaces?

    Posted Apr 06, 2011 09:20 PM

    I focus on making sure there are no single points of failure in the management network. Having two vmknics with Management Traffic enabled is relatively worthless if they plug into the same physical switch or use the same NIC on the host.

    With that said, I would suggest having two vmknics with Management Traffic that traverse different physical paths to reach each other. You could mix vMotion and management traffic together using an active/standby arrangement.

    Example:

    vSwitch0 with 2 physical NICs (vmnic0 and vmnic4) and 2 vmknics (vmk0 and vmk1)

    vmk0 - management

    vmk1 - vMotion

    vmnic0 - NIC 0 port 0

    Active for management, standby (failover) for vMotion

    vmnic4 - NIC 1 port 0

    Active for vMotion, standby (failover) for management
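
    For anyone who would rather script that override than click through the vSphere Client, here's a rough PowerCLI sketch. The cmdlets are real, but the host name and port group names are assumptions for illustration:

        $vmhost = Get-VMHost "esx01.example.com"   # placeholder host name

        # Management port group: vmnic0 active, vmnic4 standby
        Get-VirtualPortGroup -VMHost $vmhost -Name "Management Network" |
            Get-NicTeamingPolicy |
            Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic4

        # vMotion port group: vmnic4 active, vmnic0 standby
        Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion" |
            Get-NicTeamingPolicy |
            Set-NicTeamingPolicy -MakeNicActive vmnic4 -MakeNicStandby vmnic0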



  • 4.  RE: What is the best practice for dual management interfaces?

    Posted Apr 07, 2011 03:28 PM

    @chriswahl00

    Thanks for the response. We have the redundancy down at the physical layer. There are two pNICs attached to vSwitch0 and each pNIC is on a different network card. If one were to fail, the other would take over (image attached).

    So the real question is: with regard to the VMkernel ports, is this a good network setup when I move to ESXi? It is essentially the same as I have it now, just with two VMkernel ports instead. I would set them up like so:

    vmk0 - management traffic

    vmk1 - vMotion

    What I like about this setup is that one VMkernel port handles management traffic and one handles vMotion. Again, these hosts are currently set up this way (SC = mgmt traffic; vmk = vMotion). I just want to make sure it makes sense and that there isn't a better way.



  • 5.  RE: What is the best practice for dual management interfaces?

    Posted Apr 07, 2011 03:44 PM

    Your setup for ESXi looks fine. The only suggestion I would make is to set the active/standby configuration at the vmk level, not at the vSwitch level. That way a heavy vMotion session cannot saturate the management uplink and trigger an HA false positive.

    Here's an older graphic; just pretend Service Console is Management Traffic. :)

    http://1.bp.blogspot.com/_cBnYHZ4IsuY/SbWh7XFAODI/AAAAAAAAAkA/xs5HejztlJc/s400/vSwitch0-3.png



  • 6.  RE: What is the best practice for dual management interfaces?

    Posted Apr 07, 2011 03:47 PM

    So you're saying that I can actually set both Management Traffic Enabled and vMotion Enabled on both vmkernels?



  • 7.  RE: What is the best practice for dual management interfaces?

    Posted Apr 07, 2011 04:00 PM

    No, you definitely should have two unique vmks with separate roles (one for vMotion and one for Management). :)

    However, in the properties of the vmkernel port group there is a NIC Teaming tab. That is where you set the vmk to use an active/standby adapter by overriding the vSwitch failover order.

    I bring this up because the picture in your post shows the word "standby" next to vmnic6. That tells me the physical NIC is in standby mode at the vSwitch level and not being used by either vmk/vswif.

    Here's a photo with an example of setting an override for a specific vmk port.
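
    If you want to confirm where the failover order is actually coming from (the vSwitch default versus a per-port-group override), a PowerCLI query along these lines should show it; the host name is a placeholder:

        $vmhost = Get-VMHost "esx01.example.com"   # placeholder

        # Failover order that every port group inherits from the vSwitch
        Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0" |
            Get-NicTeamingPolicy |
            Select-Object ActiveNic, StandbyNic

        # Per-port-group policies - an override here wins over the vSwitch setting
        Get-VirtualPortGroup -VMHost $vmhost |
            Get-NicTeamingPolicy |
            Select-Object VirtualPortGroup, ActiveNic, StandbyNic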



  • 8.  RE: What is the best practice for dual management interfaces?

    Posted Apr 07, 2011 04:12 PM

    Okay, I see what you mean. Here is my NIC Teaming setup:

    vmkernel properties

    vSwitch0 properties



  • 9.  RE: What is the best practice for dual management interfaces?
    Best Answer

    Posted Apr 07, 2011 04:19 PM

    Set the vSwitch0 NIC Team to active/active, set the vswif (and future management vmkernel) NIC Team to vmnic6 active and vmnic1 standby, and leave the vMotion vmkernel NIC Team as is.

    This will allow you to use both physical NICs at the same time while also having a failover plan and keeping your management and vMotion traffic physically separate.

    Ultimately:

    vSwitch: vmnic1 active, vmnic6 active.

    vmk (Mgt): vmnic6 active, vmnic1 standby.

    vmk (vMotion): vmnic1 active, vmnic6 standby.
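
    Scripted out, that final layout could look something like this PowerCLI sketch (the host name and port group names are assumptions for illustration):

        $vmhost = Get-VMHost "esx01.example.com"   # placeholder

        # vSwitch level: both uplinks active
        Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0" |
            Get-NicTeamingPolicy |
            Set-NicTeamingPolicy -MakeNicActive vmnic1, vmnic6

        # Management vmk port group: vmnic6 active, vmnic1 standby
        Get-VirtualPortGroup -VMHost $vmhost -Name "Management Network" |
            Get-NicTeamingPolicy |
            Set-NicTeamingPolicy -MakeNicActive vmnic6 -MakeNicStandby vmnic1

        # vMotion vmk port group: vmnic1 active, vmnic6 standby
        Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion" |
            Get-NicTeamingPolicy |
            Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic6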



  • 10.  RE: What is the best practice for dual management interfaces?

    Posted Apr 07, 2011 04:32 PM

    OHHH I know what you mean now, but I had a roadblock in my head because I know something about our infrastructure that you don't:

    One of our Cisco 6509 switches is configured to be standby only, so we can't run our switches active/active. I would have to ask my network engineer again for the reason why, but for now we're stuck with the physical switches as active/standby.

    So that is the reason that one of the vmnics is standby. It has to be that way because of our switches.

    So considering that, I figure that in our current situation, if one vmnic fails, the other will take over all traffic. Not bad, but not as good as what you've proposed.