VMware vSphere

  • 1.  VMware Switch Networking

    Posted Sep 23, 2020 01:40 PM

    Good morning. I currently have servers with six network cards: two physical 1 Gb NICs and four 10 Gb NICs.

    At each site we are going to install a VMware cluster with four nodes. The customer purchased an Enterprise Plus license. I would like to configure a vDS for host management in each vCenter, but I would like to hear your recommendations on how to assign the physical NICs to the different traffic types, such as management and vMotion.

    My ideas:

    1- Create a vSS for the management and VM networks, then create a vDS for the vMotion network (using the 10 Gb NICs).

    2- Create a vSS for the management and VM networks, then create a second vSS for the vMotion network.

    3- Create a vSS for the management network (1 Gb NICs), then create a vDS for the VM and vMotion networks (10 Gb NICs) using the "Route based on physical NIC load" teaming policy.

    4- Create a single vDS and place all traffic there: management, VM, and vMotion.

    I would like your suggestions and comments, thank you.



  • 2.  RE: VMware Switch Networking

    Posted Sep 23, 2020 02:13 PM

    Moderator: Thread moved to the vSphere area.



  • 3.  RE: VMware Switch Networking

    Posted Sep 23, 2020 06:44 PM

    Hi

    It really depends on the amount of traffic you need to push through.

    A few things to consider:

    1. Do you have NFS or vSAN?

    2. How much memory do your ESXi hosts have? Or rather, how much data will your hosts have to transfer when entering maintenance mode?

    3. What are your VMs' traffic requirements? Do you need any network separation, such as DMZ traffic on dedicated links?

    4. Do you plan to use the ESXi management network for VM backups?

    5. Are there any NSX considerations?

    With all that to ponder, I'd do something like this:

    1st vDS for "infrastructure" (ESXi mgmt, vMotion, network-based storage): load-based teaming policy, vMotion set to a low share in NIOC, 2 × 10 Gb uplinks per ESXi node; physical switch config: trunked VLANs.

    2nd vDS for VMs: load-based teaming policy, 2 × 10 Gb uplinks per ESXi node; physical switch config: trunked VLANs.

    If you'd like, you can also add:

    3rd, a vSS for backup ESXi mgmt: an extra VMkernel adapter in a different subnet, with firewall rules allowing only HTTPS and SSH.
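    The extra backup-management vSS described above can be sketched with esxcli, purely as an illustration. All of the names here (vSwitch name, uplink, port group name, vmk number, subnet) are hypothetical; the two vDSes themselves have to be created through vCenter (GUI or PowerCLI), since esxcli only manages standard switches on a single host.

```shell
# Hypothetical example: vSS "vSS-Backup" on a 1 Gb uplink (vmnic0), subnet 10.10.30.0/24
esxcli network vswitch standard add --vswitch-name=vSS-Backup
esxcli network vswitch standard uplink add --vswitch-name=vSS-Backup --uplink-name=vmnic0
esxcli network vswitch standard portgroup add --vswitch-name=vSS-Backup --portgroup-name=Backup-Mgmt

# Extra VMkernel adapter in its own subnet, tagged for Management
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=Backup-Mgmt
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.10.30.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add --interface-name=vmk3 --tagname=Management
```

    Note that the "HTTPS and SSH only" restriction lives on the external firewall, not on the host itself.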



  • 4.  RE: VMware Switch Networking

    Posted Sep 23, 2020 07:09 PM

    To answer your questions:

    1. Do you have NFS or vSAN?

    The customer does not have vSAN; this is a new environment. I know they have an HP 3PAR.

    2. How much memory do your ESXi hosts have? Or rather, how much data will your hosts have to transfer when entering maintenance mode?

    RAM: 16 DIMMs × 64 GB, for 1,024 GB per host.

    3. What are your VMs' traffic requirements? Do you need any network separation, such as DMZ traffic on dedicated links?

    It is a completely new environment; I do not yet know which VMs the client will install.

    4. Do you plan to use the ESXi management network for VM backups?

    I am open to recommendations.

    5. Are there any NSX considerations?

    No.



  • 5.  RE: VMware Switch Networking

    Posted Sep 23, 2020 07:31 PM

    2. How much memory do your ESXi hosts have? Or rather, how much data will your hosts have to transfer when entering maintenance mode?

    RAM: 16 DIMMs × 64 GB, for 1,024 GB per host.

    10 Gb is a bit low for that amount of RAM.

    Consider Multi-NIC vMotion or LACP.



  • 6.  RE: VMware Switch Networking

    Posted Sep 23, 2020 10:41 PM

    Multi-NIC vMotion is configured with two port groups and two VMkernel adapters, correct?



  • 7.  RE: VMware Switch Networking

    Posted Sep 24, 2020 08:47 PM

    Yup.

    Two port groups, the same VLAN ID, the same address space.

    First port group: 1st NIC active, 2nd NIC standby.

    Second port group: 1st NIC standby, 2nd NIC active.

    (See the VMware Knowledge Base article on Multi-NIC vMotion.)

    It works quite well, and it is resilient to a single link failure.
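    On a standard switch, the active/standby inversion described above can be sketched with esxcli. This is a hedged illustration, not a prescription: the vSwitch name, port group names, uplinks (vmnic4/vmnic5), VLAN ID, vmk numbers, and IP addresses are all hypothetical, and on a vDS the same per-port-group failover-order inversion is configured in vCenter instead.

```shell
# Hypothetical vSwitch with the two 10 Gb uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic5

# Two port groups on the same VLAN
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion-1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion-2
esxcli network vswitch standard portgroup set --portgroup-name=vMotion-1 --vlan-id=20
esxcli network vswitch standard portgroup set --portgroup-name=vMotion-2 --vlan-id=20

# Inverted active/standby failover order per port group
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-1 --active-uplinks=vmnic4 --standby-uplinks=vmnic5
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-2 --active-uplinks=vmnic5 --standby-uplinks=vmnic4

# One VMkernel adapter per port group, same subnet, both tagged for vMotion
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.20.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-2
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.20.12 --netmask=255.255.255.0 --type=static
esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion
```

    With both vmk interfaces tagged for vMotion, ESXi spreads concurrent vMotion streams across both adapters, and a failed link only degrades to the standby uplink rather than stopping vMotion.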