VMware vSphere

  • 1.  Mixing Management and Backup traffic

    Posted Jul 15, 2013 05:35 PM

    I am using 5.1 and have a dvSwitch with a vmkernel port for management traffic as well as a virtual machine port group for the traffic used to back up the environment.  Backups run nightly and use a lot of bandwidth, and I need to make sure that management access is always available.  Enterprise Plus is in use, so NIOC is available.  Should I use NIOC, and what is the best configuration in terms of shares and reservations?  Also, there are 2 physical NICs associated with this dvSwitch - should my backup port group use an active/active load balancing policy?  And should I use "Route based on originating virtual port ID"?

    Thanks!



  • 2.  RE: Mixing Management and Backup traffic

    Posted Jul 15, 2013 06:43 PM

    Hi,

    You can find information here to help you understand how to use and implement NIOC: VMware vSphere Blog: vSphere 5.1 – Network I/O Control (NIOC) Architecture

    Julien



  • 3.  RE: Mixing Management and Backup traffic

    Posted Jul 16, 2013 09:23 AM

    My take on this is that you don't really need to concern yourself with QoS, specific failover orders, or anything like that for basic management traffic. You could literally run plain management traffic over a 64 kbit/s ISDN link just fine; the management, HA, and other protocols are pretty lightweight.

    If you have at least gigabit links, which I assume, you'll never notice any issues with management connectivity even during backup hours; at worst, maybe slightly sluggish VM remote console performance.

    Going with that reasoning, you don't need to consider anything special for management, and should just have your backup network VM port group use both links active with either the load-based or port-ID-based teaming policy (unless you can/want to implement Etherchannel/LACP).
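As a sketch of what "both links active" looks like in practice: a dvSwitch teaming policy is edited centrally in the vSphere (Web) Client rather than per host, but the equivalent settings on a *standard* vSwitch can be applied from the ESXi shell, which shows the same knobs. The port group name "Backup" and the vmnic names below are placeholders for this environment, and note that the load-based policy is only available on a dvSwitch; `portid` here corresponds to "Route based on originating virtual port ID".

```shell
# Make both uplinks active for the backup port group
# ("Backup", vmnic0 and vmnic1 are placeholder names)
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "Backup" \
    --active-uplinks vmnic0,vmnic1

# Select "Route based on originating virtual port ID" as the teaming policy
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name "Backup" \
    --load-balancing portid

# Verify the effective policy for the port group
esxcli network vswitch standard portgroup policy failover get \
    --portgroup-name "Backup"
```

On the dvSwitch itself, the same active-uplink and policy choices are made on the port group's "Teaming and Failover" page in vCenter.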



  • 4.  RE: Mixing Management and Backup traffic

    Posted Jul 16, 2013 01:04 PM

    OK, great, thanks.  Do you think backup traffic, vMotion traffic, and management traffic could go over the same 2 physical NICs, or should backup traffic and vMotion traffic be separated?  (There are 8 NICs total, and the other traffic that needs to be handled is production virtual machine traffic and NFS - no iSCSI, no FT.)



  • 5.  RE: Mixing Management and Backup traffic

    Posted Jul 16, 2013 03:37 PM

    I would generally separate backup and vMotion traffic, even though both types of traffic are bursty, event-driven, and often rarely active in practice (in smaller, underutilized DRS clusters, vMotions tend not to happen very often).

    If you're not interested in multi-NIC vMotion and one physical NIC is enough for your backup load, you could set up something like this:

    vmk0 Management vmkernel interface - vmnic0/active, vmnic1/standby (doesn't really matter here though)

    vmk1 vMotion vmkernel interface - vmnic1/active, vmnic0/standby

    VM backup port group - vmnic0/active, vmnic1/standby

    But with 8 NICs available I'd probably go for this kind of example configuration:

    Team1, Management+vMotion:

    vmk0 Management vmkernel interface - both links active

    vmk1 vMotion vmkernel interface 1 - vmnic1/active, vmnic0/standby
    vmk2 vMotion vmkernel interface 2 - vmnic0/active, vmnic1/standby

    Team2, NFS:

    In detail this depends heavily on whether you have multiple NFS arrays/target IPs, can run Etherchannel/LACP, etc.  In any case, at least two dedicated uplinks (vmnic2 and vmnic3) go here.

    Team3, VM public traffic:

    VM public port groups - vmnic4 and vmnic5, both links active with load-based teaming

    Team4, VM backup traffic

    VM backup port group - vmnic6 and vmnic7, both links active with load-based teaming

    Note that this is just an example. It also depends a lot on whether you think you need 4 NICs for public/NFS traffic.
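The multi-NIC vMotion layout in Team1 above can be sketched from the ESXi shell using the standard-vSwitch equivalent (on the dvSwitch the same pinning is done per port group in vCenter). This is a sketch only: the vSwitch name, port group names, and IP addresses are placeholders, and the key idea is simply that each vMotion vmkernel interface sits in its own port group with the opposite active/standby uplink order.

```shell
# Two vMotion port groups, one per uplink (all names are placeholders)
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name vMotion-A
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name vMotion-B

# Pin each port group to the opposite active/standby pair
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name vMotion-A --active-uplinks vmnic1 --standby-uplinks vmnic0
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name vMotion-B --active-uplinks vmnic0 --standby-uplinks vmnic1

# One vmkernel interface per port group, each with its own (placeholder) IP
esxcli network ip interface add --interface-name vmk1 --portgroup-name vMotion-A
esxcli network ip interface add --interface-name vmk2 --portgroup-name vMotion-B
esxcli network ip interface ipv4 set --interface-name vmk1 --type static \
    --ipv4 192.168.50.11 --netmask 255.255.255.0
esxcli network ip interface ipv4 set --interface-name vmk2 --type static \
    --ipv4 192.168.50.12 --netmask 255.255.255.0
```

vMotion then needs to be enabled on both vmk1 and vmk2, either in the vSphere Client or with "vim-cmd hostsvc/vmotion/vnic_set vmk1" (and likewise for vmk2) from the shell.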

    Btw, is there a specific reason why you have a dedicated backup network directly attached to your VMs through a 2nd NIC (if I understood you correctly) instead of leveraging virtualization/array-side backup solutions? Are you doing that for all VMs or just some special cases like databases or mailbox servers?



  • 6.  RE: Mixing Management and Backup traffic

    Posted Jul 16, 2013 05:16 PM

    For NFS, I have only one target IP.  In that case should I use only 2 physical NICs?

    For VM traffic it seems that 2 physical NICs would be sufficient as these are blades and don't have a high number of VMs per server and therefore not a high amount of network traffic per server.

    As far as backup, the Avamar backup clients use a dedicated network, which we have segregated to guarantee it the bandwidth it needs when image-level backups are taking place.  That was the reasoning for segregating the backup traffic.



  • 7.  RE: Mixing Management and Backup traffic

    Posted Jul 17, 2013 08:00 AM

    For NFS, I have only one target IP.  In that case should I use only 2 physical NICs?

    If you only have one IP on your NFS target then traffic will only be able to flow through one physical uplink at any given point in time, so two teamed NICs are sufficient.

    As far as backup, the Avamar backup clients use a dedicated network, which we have segregated to guarantee it the bandwidth it needs when image-level backups are taking place.  That was the reasoning for segregating the backup traffic.

    So it's a VMkernel port group and you're doing VADP-based backups, right? OK, I was confused because in your original post you called it a "VM port group":

    as well as a virtual machine port group for traffic used to backup the environment



  • 8.  RE: Mixing Management and Backup traffic

    Posted Jul 17, 2013 04:28 PM

    If you only have one IP on your NFS target then traffic will only be able to flow through one physical uplink at any given point in time, so two teamed NICs are sufficient.

    OK - so I will have two physical NICs dedicated to NFS.  Can I use active/active on those NICs, or active/passive?  And what should my load balancing policy be?

    So it's a vmkernel port group and you're doing VADP-based backups right? Ok, I was confused because in your original post you called it a "VM port group":

    I'm using VADP-based backups.  But Avamar requires a proxy server on each compute cluster, and the proxy server has a regular virtual machine IP address.  The IP addresses for these proxy servers are all on one subnet, separate from all other virtual machines.  So yes, it is in fact a VM port group rather than a VMkernel port.  The proxy server connects to the Avamar management server, and the Avamar management server is connected to vCenter.  By putting these proxy servers on a dedicated port group, am I in fact accomplishing the goal of isolating backup traffic, or do I need to rethink that as well?