Storage

  • 1.  Edit VMkernel adapter causes connectivity issues?

    Posted Mar 17, 2023 03:28 PM

    Not sure where to post this, so Mods feel free to move it to the appropriate place.

    Question: If I add the vMotion service to a VMkernel adapter that's being used for an iSCSI connection to a NAS, will it cause an outage between the currently running VMs and the NAS? If so, for how long?

    Background: I just tried this, and at the last step a dialog popped up that says:

    [Screenshot attachment: 6.png]

    (In case the screenshot doesn't appear above, it says: "Edit VMkernel Adapter: This vmkernel adapter is bound to an iSCSI initiator. Changing its settings might cause connectivity issues with the associated iSCSI host bus adapter.")

    That's a little vague, so I'm looking for clarification. 
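    In case it helps anyone assessing the same warning, below is a minimal pyVmomi sketch (Python) that lists which VMkernel adapters are bound to the software iSCSI adapter on a host, so you can see up front whether the vmk you are about to edit carries iSCSI traffic. The vCenter address, credentials, and host name are placeholders, and the iscsiManager.QueryBoundVnics call and its fields should be double-checked against your vSphere version.

    ```python
    # Hedged pyVmomi sketch (placeholders throughout): list the VMkernel
    # adapters currently bound to the software iSCSI adapter on one host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ssl_ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                      user="administrator@vsphere.local",  # placeholder user
                      pwd="********",
                      sslContext=ssl_ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == "esxi01.example.com")  # placeholder host

        # Look for the software iSCSI HBA (e.g. vmhba64) among the host's adapters
        # and print the VMkernel adapters bound to it.
        for hba in host.config.storageDevice.hostBusAdapter:
            if isinstance(hba, vim.host.InternetScsiHba):
                bound = host.configManager.iscsiManager.QueryBoundVnics(
                    iScsiHbaName=hba.device)
                vmks = [p.vnicDevice for p in bound]
                print(f"{hba.device}: bound VMkernel adapters -> {vmks or 'none'}")
    finally:
        Disconnect(si)
    ```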



  • 2.  RE: Edit VMkernel adapter causes connectivity issues?

    Posted Mar 18, 2023 05:09 PM

    Best practice is to have a dedicated LAN for iSCSI traffic and not to share that network with other traffic. It is also best practice not to oversubscribe the dedicated LAN.

    Please go through this doc:

    https://core.vmware.com/resource/best-practices-running-vmware-vsphere-iscsi#sec7249-sub1




  • 3.  RE: Edit VMkernel adapter causes connectivity issues?
    Best Answer

    Posted Mar 18, 2023 06:57 PM

    Kindly use a separate network stack for iSCSI traffic.

    • It's recommended to keep the vmk adapters on different VLANs.
    • Using the same stack for multiple types of traffic can cause latency and bandwidth contention.
    • For storage traffic specifically, it's also recommended to dedicate a vmnic to that traffic to avoid latency (a rough sketch follows after the link below).

    Further Info: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.hostclient.doc/GUID-4C19E34E-764C-4069-9D9F-D0F779F2A96C.html
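    To make the "dedicated network" suggestion concrete, here is a hedged pyVmomi sketch that adds a new VMkernel adapter on a separate port group reserved for vMotion. The port group name, IP addressing, and MTU are assumptions, and it assumes "host" was located as in the earlier sketch; a pre-existing port group on a separate vSwitch/VLAN is also assumed.

    ```python
    from pyVmomi import vim

    # Continuation sketch: "host" is a vim.HostSystem located as in the earlier
    # example, and a port group named "vMotion-PG" (placeholder) already exists
    # on a separate vSwitch/VLAN.
    net_sys = host.configManager.networkSystem

    spec = vim.host.VirtualNic.Specification()
    spec.ip = vim.host.IpConfig(
        dhcp=False,
        ipAddress="10.10.20.11",      # placeholder address on the vMotion VLAN
        subnetMask="255.255.255.0")
    spec.mtu = 1500                    # raise to 9000 only if the whole path supports jumbo frames

    # AddVirtualNic returns the new device name, e.g. "vmk2".
    new_vmk = net_sys.AddVirtualNic(portgroup="vMotion-PG", nic=spec)
    print(f"Created {new_vmk} on port group vMotion-PG")
    ```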

    Regards

    Harry



  • 4.  RE: Edit VMkernel adapter causes connectivity issues?

    Posted Mar 22, 2023 05:47 PM

    Thanks to both of you for replying! In my position, our VMware infrastructure is almost 100% "set it and forget it", so it's been a while since I've been in this scenario.

    Adding a new VMkernel adapter to another virtual switch was the answer. I got that set up and tested a live migration with a test VM that I built, and it dropped zero packets. Now I can confidently schedule a time to migrate the rest of the production VMs.
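    For anyone repeating this later, the final step of enabling vMotion on the new adapter (rather than on the iSCSI-bound one) can also be scripted. This is a hedged continuation of the sketches above, with the device name coming from the AddVirtualNic call in the earlier example.

    ```python
    # Continuation of the sketches above: tag the newly created VMkernel adapter
    # for vMotion, leaving the iSCSI-bound vmk untouched.
    vnic_mgr = host.configManager.virtualNicManager
    vnic_mgr.SelectVnicForNicType(nicType="vmotion", device=new_vmk)  # e.g. "vmk2"

    # Optional check: show which vnic(s) are now selected for the vMotion service
    # (the API returns the keys of the selected candidate adapters).
    cfg = vnic_mgr.QueryNetConfig(nicType="vmotion")
    print("vMotion-selected vnic keys:", cfg.selectedVnic)
    ```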