VMware vSphere

  • 1.  Multipathing issue using multiple subnets and Nimble arrays

    Posted Mar 07, 2013 01:06 AM

    We have a requirement to use multiple subnets for multipathing because we also use XenServer. This is the first time we have set up multipathing like this, so we may be missing a step.

    Our hosts use two physical NICs for iSCSI traffic, connected to two stacked switches. We typically bind a VMkernel port to each vmnic for multipathing. This time I have set up each physical NIC on its own vSwitch and bound it to a VMkernel port with iSCSI enabled. The array has five NICs, and I set two ports on one subnet and three on the other. The Nimble has a virtual "discovery" IP, which is what we use for dynamic discovery. The switches are set up with their respective VLANs and routing disabled. I can vmkping all IPs with no problem.

    When I rescan the adapter or refresh the storage, I only see three paths. When I look at the network configuration on the iSCSI software adapter, it shows the path status as “Not Used” for the vmk port in question.
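
    For reference, the bound VMkernel ports and their path status can also be listed from the ESXi shell. This is only a rough sketch; the vmhba33 and vmk2 names below are placeholders, not values from this setup:

    ~ # esxcli iscsi adapter list                          # confirm the software iSCSI adapter name
    ~ # esxcli iscsi networkportal list -A vmhba33         # shows each bound vmk with its Compliant Status and Path Status
    ~ # esxcli iscsi networkportal add -A vmhba33 -n vmk2  # bind an additional VMkernel port if one is missing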



  • 2.  RE: Multipathing issue using multiple subnets and Nimble arrays

    Broadcom Employee
    Posted Mar 07, 2013 02:20 AM

    Hi,

        Please check http://kb.vmware.com/kb/1003681 and http://kb.vmware.com/kb/2038869 and let me know your findings.
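
    The basic per-VMkernel-port check those articles walk through looks roughly like this (the interface name and target are only examples, and the -I flag to pin the outgoing vmk may not be present on older vmkping builds):

    ~ # esxcfg-vmknic -l                                # list VMkernel ports and their configured MTU
    ~ # vmkping -I vmk1 -d -s 8972 <array_portal_ip>    # don't-fragment ping out a specific vmk to the array portal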



  • 3.  RE: Multipathing issue using multiple subnets and Nimble arrays

    Posted Mar 07, 2013 03:56 PM

    I have performed all of those suggestions prior to posting, except vmkping with the "-d" switch. I am getting the following when using jumbo frames (-s 9000).

    What is interesting is that I am getting the following error on all arrays, including the existing EqualLogics:
    ~ # vmkping 192.168.23.9 -s 9000 -d
    PING 192.168.23.9 (192.168.23.9): 9000 data bytes
    sendto() failed (Message too long)
    sendto() failed (Message too long)
    If I choose a smaller packet size, it works:
    ~ # vmkping 192.168.23.9 -s 8784 -d
    PING 192.168.23.9 (192.168.23.9): 8784 data bytes
    8792 bytes from 192.168.23.9: icmp_seq=0 ttl=64 time=0.863 ms
    8792 bytes from 192.168.23.9: icmp_seq=1 ttl=64 time=0.887 ms
    8792 bytes from 192.168.23.9: icmp_seq=2 ttl=64 time=0.861 ms

    Without the "-d" it works:

    ~ # vmkping 192.168.23.9 -s 9000
    PING 192.168.23.9 (192.168.23.9): 9000 data bytes
    9008 bytes from 192.168.23.9: icmp_seq=0 ttl=64 time=0.896 ms
    9008 bytes from 192.168.23.9: icmp_seq=1 ttl=64 time=0.855 ms
    9008 bytes from 192.168.23.9: icmp_seq=2 ttl=64 time=0.854 ms

    The switch ports are set to 9216 and the vmnics and vSwitch are set to 9000. It's almost as if jumbo frames are not configured properly. I have already rebooted the host just in case and plan on rebooting the switches tonight.
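
    For reference, with "-d" the ICMP payload has to leave room for the 28 bytes of IP and ICMP headers, so the largest payload that fits in a 9000-byte MTU without fragmenting is 8972; failing at -s 9000 -d but passing at -s 8972 -d is the expected behaviour when jumbo frames are working end to end. A quick way to double-check the configured MTUs (a sketch, nothing specific to this environment):

    ~ # vmkping 192.168.23.9 -s 8972 -d        # largest don't-fragment payload for a 9000-byte MTU
    ~ # esxcfg-vmknic -l                       # MTU per VMkernel port
    ~ # esxcli network vswitch standard list   # MTU per standard vSwitch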



  • 4.  RE: Multipathing issue using multiple subnets and Nimble arrays

    Posted Mar 12, 2013 07:31 PM

    We just installed the 220G, which has two active 10GbE ports, and can now see both paths active/active. Not sure why the 1GbE model would not balance across all links. Since we only plan to use the 1GbE model for DR, we are not going to troubleshoot this further.
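
    For anyone verifying the same thing, the per-path state for a given volume can be confirmed from the shell (the device identifier below is a placeholder):

    ~ # esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx   # one block per path; each working path reports State: active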



  • 5.  RE: Multipathing issue using multiple subnets and Nimble arrays

    Posted Mar 07, 2013 03:10 AM

    Is the path selection for the LUNs set to Nimble_PSP_Directed?
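
    If it helps, the policy currently claiming a device can be checked, and changed, from the shell. A sketch only; the device ID is a placeholder and VMW_PSP_RR is just the generic round-robin fallback:

    ~ # esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx               # shows the Path Selection Policy in use for the device
    ~ # esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR  # example: fall back to round robin if the Nimble PSP is absent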



  • 6.  RE: Multipathing issue using multiple subnets and Nimble arrays

    Posted Mar 07, 2013 03:33 PM

    Apparently, we do not have that plugin in our version. I guess that is in the 2.0 release.
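
    A quick way to confirm whether the Nimble PSP package is installed on a host (the package name is a guess, so the grep is deliberately loose):

    ~ # esxcli software vib list | grep -i nimble   # no output means no Nimble VIB is installed on this host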