vSphere Availability


Designing network connections for the cluster


    Posted Jul 21, 2023 06:55 PM

    Hi,
I am in the process of preparing a vSphere High Availability cluster. I have already purchased the hardware (HPE MSA storage, HPE ProLiant servers, a Cisco switch). However, I have dilemmas about how best to design the switch <-> server and server1 <-> server2 network connections, minimizing the risk of network isolation, network partition, or link failure.

I have the following subnets (there is a rough port-group sketch after this list):
    1. SRV - for virtual machines
    2. Mgmt - for management: iLO, ESXi, switches
    3. HeartBeat - internal, non-routable network
4. vMotion - internal, non-routable network for live VM migration (SFP+)
    5. iSCSI - internal, non-routable network for access to disk resources (FC)
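
For context, here is roughly how I plan to map these subnets to port groups on standard vSwitches. This is only a sketch with pyVmomi; the vCenter address, credentials, vSwitch names, and VLAN IDs are placeholders, not my real values (iSCSI is left out since it goes over FC):

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab only: skip cert validation
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)
content = si.RetrieveContent()

# First host in the inventory (placeholder; in practice loop over both servers).
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ns = host.configManager.networkSystem

# Subnet -> (vSwitch, VLAN) mapping; names and VLAN IDs are made up.
port_groups = {
    'SRV':       ('vSwitch0', 10),
    'Mgmt':      ('vSwitch0', 20),
    'HeartBeat': ('vSwitch1', 30),
    'vMotion':   ('vSwitch1', 40),
}

for name, (vswitch, vlan) in port_groups.items():
    spec = vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName=vswitch,
        policy=vim.host.NetworkPolicy())  # inherit teaming from the vSwitch
    ns.AddPortGroup(portgrp=spec)
```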

I'm sure the vMotion and iSCSI subnets will stay as they are. However, I am wondering how best to connect the remaining networks. In the diagrams I marked the problematic links with colors; the white ones stay as they are.

For these purposes, I have 4 free Ethernet ports in each server and 12 free ports on the switches; the others are occupied.

Below are the concepts that come to mind. Which one is best, and why? Or maybe you have another idea for how to connect it? Thanks for any advice!

    Concept1

Run these subnets over the same cables, bundled into an EtherChannel.
    Full: https://imgur.com/gwVjNDL
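
If I go this route, my understanding is that a static EtherChannel on the Cisco side needs the vSwitch teaming policy set to "Route based on IP hash". A sketch of how I'd set that with pyVmomi (same connection boilerplate as above; 'vSwitch0' is a placeholder name):

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ns = host.configManager.networkSystem

# Find the vSwitch that terminates the EtherChannel and reuse its current spec.
for vsw in ns.networkInfo.vswitch:
    if vsw.name == 'vSwitch0':  # placeholder
        spec = vsw.spec
        # IP-hash load balancing is what a static port-channel expects.
        spec.policy.nicTeaming.policy = 'loadbalance_ip'
        ns.UpdateVirtualSwitch(vswitchName=vsw.name, spec=spec)
```

As far as I know, LACP is only available on a distributed switch, so with standard vSwitches the Cisco side would have to be a static port-channel (channel-group mode on).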


    Concept2

One shared link (SFP+) for vMotion and heartbeat; virtual machines and management via EtherChannel.
    Full: https://imgur.com/yOcuIU7
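
For this variant, the vMotion side would be a VMkernel adapter on the shared SFP+ link. A sketch of creating and tagging it with pyVmomi (addresses and names are placeholders):

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ns = host.configManager.networkSystem

# VMkernel NIC on the non-routable vMotion subnet (placeholder addressing).
vnic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress='192.168.40.11',
                         subnetMask='255.255.255.0'))
device = ns.AddVirtualNic(portgroup='vMotion', nic=vnic_spec)

# Tag the new vmk for vMotion traffic.
host.configManager.virtualNicManager.SelectVnicForNicType('vmotion', device)
```

As I understand it, HA network heartbeats ride on vmk interfaces tagged for management traffic, so the "HeartBeat" network would really be a second management-enabled vmk rather than its own traffic type.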


    Concept3

No EtherChannel; one shared connection for Mgmt and VM traffic; in addition, a separate cable for vMotion and a separate one for heartbeat.
    Full: https://imgur.com/1tJnwkK
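
Without EtherChannel I would instead control failover order per port group, e.g. management active on one uplink with the other as standby. A sketch (uplink, VLAN, and port-group names are placeholders):

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ns = host.configManager.networkSystem

# Explicit active/standby order for the management port group.
teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
    policy='failover_explicit',
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=['vmnic0'],    # placeholder uplinks
        standbyNic=['vmnic1']))

pg_spec = vim.host.PortGroup.Specification(
    name='Mgmt', vlanId=20, vswitchName='vSwitch0',  # placeholders
    policy=vim.host.NetworkPolicy(nicTeaming=teaming))
ns.UpdatePortGroup(pgName='Mgmt', portgrp=pg_spec)
```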
