VMware vSphere

  • 1.  Re: iSCSI port binding

    Posted Jan 30, 2018 08:47 PM

    This question is regarding a SAN connection to a host using 2 x 10 GbE uplinks on a standard vSwitch with iSCSI port binding.
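
    For reference, the binding is set up roughly like this (a minimal sketch; the vmhba and vmk names are examples and may differ on your host):

        # bind one VMkernel NIC per 10 GbE uplink to the software iSCSI adapter
        esxcli iscsi networkportal add --adapter vmhba64 --nic vmk1
        esxcli iscsi networkportal add --adapter vmhba64 --nic vmk2

        # confirm both vmknics are bound
        esxcli iscsi networkportal list --adapter vmhba64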

    Is the host essentially getting 20 GbE of throughput to the SAN, or is it alternating between paths at 10 GbE apiece?

    Can somebody explain to me what is actually going on here?

    Thanks,



  • 2.  Re: iSCSI port binding

    Posted Jan 30, 2018 08:56 PM

    When using iSCSI port binding with 2 x 10 GbE uplinks to the storage, and assuming the datastores are configured with a multipathing policy such as Round Robin that can use all paths, then all physical links will be used to establish iSCSI sessions to the targets. As for the path-switching frequency, that depends on how the Path Selection Policy (PSP) is configured: on some arrays the PSP switches every 1,000 I/Os (the Round Robin default), and on others it switches every single I/O. When using proprietary MPIO solutions such as PowerPath, this can vary even further.
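
    For example, here is a minimal sketch of how you would check and tune that on a host with esxcli (the device identifier is a placeholder; an IOPS limit of 1 is the common alternative to the default of 1,000, but check your array vendor's recommendation first):

        DEV=naa.xxxxxxxxxxxxxxxx   # replace with your device identifier

        # show the PSP in use and the current Round Robin settings for the device
        esxcli storage nmp device list --device $DEV

        # switch paths after every I/O instead of the default 1,000
        esxcli storage nmp psp roundrobin deviceconfig set --device $DEV --type iops --iops 1

        # verify the change
        esxcli storage nmp psp roundrobin deviceconfig get --device $DEV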



  • 3.  Re: iSCSI port binding

    Posted Jan 30, 2018 09:17 PM

    So you would not technically get 20 GbE of throughput at any given time?



  • 4.  Re: iSCSI port binding

    Posted Jan 30, 2018 09:22 PM

    No, you would get that increased throughput; I was just explaining how it comes to be, in the form of path-switching frequency.
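
    One way to see that on the host is to list the paths for a device (a sketch; the device identifier is a placeholder). With Round Robin, both paths should show State: active, and the vSphere Client displays the paths actually carrying traffic as Active (I/O):

        # list all paths to the device; with Round Robin both should be active
        esxcli storage core path list --device naa.xxxxxxxxxxxxxxxx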



  • 5.  Re: iSCSI port binding

    Posted Jan 30, 2018 09:38 PM

    So it's theoretically providing 20 GbE of throughput using multiple paths. Got it, thanks.



  • 6.  Re: iSCSI port binding

    Posted Jan 30, 2018 09:41 PM

    It *could theoretically provide 20 GbE of throughput* using multiple paths, yes.