VMware vSphere

  • 1.  iSCSI Best Practices - Standard vs Distributed vSwitches

    Posted Mar 01, 2019 04:50 PM

    Hello-

    We have a new vCenter 6.7, and I've been doing a lot of research regarding the best practices around iSCSI using Standard vs Distributed vSwitches. I could not find any documentation that mentions the use of a DVS for iSCSI connectivity, only standard switches, but I also don't see anything that recommends against the use of a DVS other than a few forum posts here.

    So my question is:

    • What is the current best practice for vSphere 6.5/6.7 for iSCSI vSwitches: Standard or Distributed or is Either acceptable?
      • If Distributed is an acceptable option, then what is the best practice for implementing that?
        • 1 dvSwitch with 2 dvPort Groups, each PG with a single VMkernel and a single Uplink
        • 2 dvSwitches, each with 1 dvPort Group, each PG with a single VMkernel and a single Uplink

    Thank you in advance!

    Tim



  • 2.  RE: iSCSI Best Practices - Standard vs Distributed vSwitches
    Best Answer

    Posted Mar 01, 2019 09:05 PM

    You can use either virtual switch type for iSCSI.

    I usually use a single standard vSwitch with an Active/Unused failover policy at the port group level. IMO this is an easy-to-implement and easy-to-maintain setup for dedicated iSCSI vmnics.

    In case you cannot (or do not) physically separate storage (iSCSI) traffic from other traffic types (e.g. vMotion, VM Network), you may be better off using a distributed virtual switch with Network I/O Control enabled.
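    For reference, a setup like the one above can be sketched with esxcli, run on each host.  This is a sketch only; all names here (vSwitch1, vmnic2/vmnic3, vmk1/vmk2, the IP addresses) are placeholder assumptions to substitute with your own:

```shell
# One standard vSwitch, two iSCSI port groups, each with one active uplink.
# All vSwitch/vmnic/vmk names and IPs below are placeholders -- adjust to your host.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-B

# Active/Unused: each port group pins one active uplink; an uplink not listed
# as active or standby is treated as unused.
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic3

# One VMkernel interface per port group.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.20.11 --netmask=255.255.255.0 --type=static
```

    With Active/Unused at the port group level, each port group has exactly one active uplink, which is also what iSCSI port binding requires.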


    André



  • 3.  RE: iSCSI Best Practices - Standard vs Distributed vSwitches

    Posted Mar 02, 2019 12:33 PM

    Tim,

    Overall, you can really use either switch architecture.  It's true that many manufacturers don't publish as much detail about dVS best practices, although I know of at least one that does.  The important part is to match the best practices on the dVS: if the uplinks need to be Active/Unused you need to match that, along with MTU, Delayed ACK, and so on.

    In my opinion, if you have the license to use it, a dVS is superior in features and, more importantly, enforces the same configuration on every host.  This is particularly important for iSCSI.  It's relatively easy to enforce configuration with a small number of hosts, but once you get above three hosts or so, it becomes much harder to ensure consistency.

    To return to your question, your vendor's best practices are going to come into play for how you set up a dVS.  Generally, I've set up a single dVS with two port groups, two VMkernel interfaces, Active/Unused failover, and port binding.  This is a vendor best practice, though.  I'm guessing that if you reach out to your vendor, they'll be able to help you translate their standard switch best practices into the dVS equivalent.
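    For what it's worth, whichever switch type you use, the final port-binding step looks the same from the host side.  A hedged esxcli sketch; the adapter name vmhba64 and interfaces vmk1/vmk2 are placeholder assumptions:

```shell
# Placeholders: vmhba64 (software iSCSI adapter), vmk1/vmk2 (iSCSI VMkernel ports).
# Find the actual software iSCSI adapter name first:
esxcli iscsi adapter list

# Bind each iSCSI VMkernel port to the software iSCSI adapter (port binding).
# Binding requires that each port group have a single active uplink (Active/Unused).
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Verify the bindings and their compliance status:
esxcli iscsi networkportal list --adapter=vmhba64
```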

    Best of Luck,

    David



  • 4.  RE: iSCSI Best Practices - Standard vs Distributed vSwitches

    Posted Mar 04, 2019 09:41 PM

    Thank you both for your insight on this topic. I did wind up reaching out to our vendor (HPE/Nimble), and their recommendation was to use standard vSwitches. I felt it was best to go with our storage vendor's recommendation, as well as keep vCenter out of the equation when it comes to availability.

    Best Regards,

    Tim



  • 5.  RE: iSCSI Best Practices - Standard vs Distributed vSwitches

    Posted May 02, 2019 06:26 PM

    Tim,

    It's still best to go with the vendor's recommendation, but I believe the way a dVS works is that it is centrally managed and a read-only copy is pushed to each host.  That way, if your vCenter becomes unavailable, the individual ESXi hosts still understand the network.  The other nice thing about a dVS is that you can back up its configuration.

    Either way, it's best to stick with the vendor, but I don't believe that vCenter availability should be a concern.

    Best Regards,

    David



  • 6.  RE: iSCSI Best Practices - Standard vs Distributed vSwitches

    Posted Feb 13, 2023 07:05 PM

    The issue with a vDS is that if your vCenter is sitting on that iSCSI-provided storage and you need to adjust the networking, you had better not forget to set anything, for example VLAN information.  If that connection goes offline, your vCenter goes offline with it, and you will not be able to edit the distributed port groups from the host to correct the issue, forcing you to build a new iSCSI connection on the fly.  I recommend using standard (local) switches for those connections unless vCenter is not sitting on them; otherwise you could find yourself in a bind.



  • 7.  RE: iSCSI Best Practices - Standard vs Distributed vSwitches

    Posted Nov 08, 2023 12:11 AM

    Thank you all for this conversation. I will go with standard switches. I have plenty of ports and can dedicate 4x10GbE on each host exclusively to iSCSI traffic.



  • 8.  RE: iSCSI Best Practices - Standard vs Distributed vSwitches

    Posted Nov 08, 2023 11:35 PM

    I've been using a vDS for years with iSCSI with no issues. I have one datacenter that uses a single vDS for storage and another that uses dual vDS switches. The reason is how those networks communicate with different SAN manufacturers.

    --Alan--




  • 9.  RE: iSCSI Best Practices - Standard vs Distributed vSwitches

    Posted Mar 07, 2024 08:36 PM

    Just to throw my two cents in on this: if you're using a vDS for something critical like iSCSI, you should probably use an ephemeral port group so that the host can bootstrap the networking without vCenter being available.