VMware vSphere


10 GB iscsi realistic throughput

  • 1.  10 GB iscsi realistic throughput

    Posted Jan 22, 2013 04:34 PM

    Hi All,

    For our iSCSI network (2 x 10 Gb), during Storage vMotion we are getting a throughput of 300 to 500 MB/s. Is this the max I can get from ESXi?

    The backend storage is a VNX 5300 with SAS and FC SSD tiers.

    Regards,

    DMK



  • 2.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 06:29 PM

    Are you sure hardware acceleration isn't offloading the task to the storage system? I'm pretty sure EMC systems support VAAI. With 2 x 10 Gb iSCSI I have seen sustained throughput > 2 Gbps from a single host. This was on a large system with multiple storage processors.



  • 3.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 06:33 PM

    ESXi is capable of well in excess of 10GBit.

    As is a VNX5300 (if you have the right disks... are you sure you do?)



  • 4.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 06:37 PM

    Let me clarify. This is the throughput I am seeing from a single host during Storage vMotion. I am looking at the VAAI counters in esxtop.

    MAX is always at 500 MB/s (i.e. 4 Gbps).

    During vMotion, the VMkernel logs (separate 2 x 10 Gb network) show the estimated bandwidth as 700 MB/s (5.6 Gbps).

    But I am hardly going above 500 MB/s during Storage vMotion.

    The backend is a 4+1 SSD and 36 x 15K SAS pool in auto-tier. Is my bandwidth limited by the disk pool?



  • 5.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 06:47 PM

    dm_khan wrote:

    The backend is a 4+1 SSD and 36 x 15K SAS pool in auto-tier. Is my bandwidth limited by the disk pool?

    YES



  • 6.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 06:55 PM

    Possibly. But also, VAAI-accelerated activities like Storage vMotion are reported differently... you may not be looking at the correct counters.

    Your drive count is nowhere near what's required to sustain 10 Gbit of write activity, though.



  • 7.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:00 PM

    Matt wrote:

    Possibly. But also, VAAI-accelerated activities like Storage vMotion are reported differently... you may not be looking at the correct counters.

    Your drive count is nowhere near what's required to sustain 10 Gbit of write activity, though.

    4 SSDs? That's plenty... the 36 SAS drives are auto-tiered, which means the hot IOPS will be escalated UP to the SSD drives.

    4 SSDs in RAID is really good; 1 SSD can sustain 2 Gb/s of throughput on its own, so 4 is more than enough.



  • 8.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:09 PM

    Not on a VNX it can't....



  • 9.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:12 PM

    Matt wrote:

    Not on a VNX it can't....

    Not according to specs:

    A dual controller VNX7500 array in a single rack configuration now offers up to 14GB/sec throughput. On the VNX5500, the high bandwidth option delivers up to 6.5 GB/sec performance.

    http://www.computerworld.com/s/article/9221061/EMC_unveils_all_SSD_VNX_high_bandwidth_arrays



  • 10.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:14 PM

    I am using this formula to calculate the max bandwidth I can get from the disks:

    MB/s = (IOPS * KB per IO) / 1024

    IOPS:
    SAS: 36 * 180
    SSD: 5 * 5000

    6480 + 25000 = 31480 IOPS

    = 31480 * 32 (ESXi IO size) / 1024
    = 983 MB/s, i.e. about 7870 Mbps (7.9 Gbps)

    Is this correct?

    When I use 2 x 10 Gb, shouldn't I get at least 700+ MB/s during Storage vMotion?

    Currently we are not using jumbo frames.

    I am checking with EMC and VMware, and the engineers are not giving me a definite answer.
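
    As a sanity check on the arithmetic above, here is the same back-of-the-envelope calculation as a small Python sketch. The per-drive IOPS figures and the 32 KB IO size are the assumed values from this post, not measured numbers:

```python
def pool_mb_per_s(iops, io_size_kb=32):
    """Convert aggregate IOPS at a given IO size (KB) to MB/s."""
    return iops * io_size_kb / 1024

# Assumed per-drive IOPS from the post above
sas_iops = 36 * 180    # 36 x 15K SAS drives at ~180 IOPS each
ssd_iops = 5 * 5000    # 5 SSDs (4+1) at ~5000 IOPS each
total_iops = sas_iops + ssd_iops     # 31480 IOPS

mb_per_s = pool_mb_per_s(total_iops) # ~983.75 MB/s
gbps = mb_per_s * 8 / 1000           # ~7.9 Gbps on the wire
```

    This confirms the ~983 MB/s figure, i.e. roughly 7.9 Gbps, well under a bonded 2 x 10 Gb link.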



  • 11.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:27 PM

    dm_khan wrote:

    I am using this formula to calculate the max bandwidth I can get from the disks.

    Is this correct?

    When I use 2 x 10 Gb, shouldn't I get at least 700+ MB/s during Storage vMotion?

    Yes, almost. You can't just add SSD and SAS drive throughput together; however, I found this:

    2820 IOPS for VNX 5300

    Page 26: http://www.emc.com/collateral/hardware/white-papers/h8158-exchange-performance-vnx-wp.pdf

    That means 11,018 Mbps throughput.

    However, there are still some factors: your switches, the distance between the ESX host and the storage, the RAID type, and the load on the other disks at the same time. These numbers are OPTIMAL in a perfect environment, and they are THEORETICAL numbers, not guaranteed.

    I still say 500 MB/s is GREAT; I don't think you can complain. It looks good to me.

    Currently we are not using jumbo frames.

    I am checking with EMC and VMware, and the engineers are not giving me a definite answer.

    They will not tell you what you are SUPPOSED to get; their answer will be a definitive "it depends" on your exact environment. They will not give you a guarantee.



  • 12.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:29 PM

    I have to agree with Parker - given your system specs, I think your numbers are right in line with what you should expect to achieve.



  • 13.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:27 PM

    That's the total system backend + frontend combined throughput, not what is actually achieved... and it ignores response time.



  • 14.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:29 PM

    Matt wrote:

    That's the total system backend + frontend combined throughput, not what is actually achieved... and it ignores response time.

    http://www.emc.com/collateral/hardware/white-papers/h8158-exchange-performance-vnx-wp.pdf

    Tested, read it on page 27.  4560 IOPS, ACTUAL throughput...

    That is 11,000 Mbps throughput, over 1300 MB/s for those keeping score.

    What else you got, I can do this ALL day...



  • 15.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:39 PM

    Thanks for the clarification, RParker.



  • 16.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 07:42 PM

    Seriously?  Don't get your panties in a twist.  I am agreeing with you!

    Take a chill pill, take a breath, count to ten, take a walk or something. 



  • 17.  RE: 10 GB iscsi realistic throughput

    Posted Jan 25, 2013 08:20 AM

    Matt and Richard, be nice to each other. You have both been good boys recently, and I would not want to be getting mediaeval on you with blowpipes and pliers again. :)



  • 18.  RE: 10 GB iscsi realistic throughput

    Posted Jan 25, 2013 06:43 PM

    I remember the pliers. That was unpleasant.



  • 19.  RE: 10 GB iscsi realistic throughput

    Posted Jan 22, 2013 06:41 PM

    dm_khan wrote:

    Hi All,

    For our iSCSI network (2 x 10 Gb), during Storage vMotion we are getting a throughput of 300 to 500 MB/s. Is this the max I can get from ESXi?

    The backend storage is a VNX 5300 with SAS and FC SSD tiers.

    Regards,

    DMK

    Fast Ethernet is 100 Mb/s throughput

    12.5 MB/s speed (theoretical)

    Gig Ethernet is 1,000 Mb/s throughput

    125 MB/s speed (theoretical)

    10 Gig Ethernet is 10,000 Mb/s throughput

    1250 MB/s speed (theoretical)

    ESXi is capable of over 4,096 MB/s speed (theoretical), for example over InfiniBand, which is 40,000 Mb/s of throughput.

    10G Ethernet is also distance dependent over copper... I think it's something like 30 meters, and beyond that your speed will DROP significantly. If you have optical cable (end to end) then maybe you can go a little further...

    The weakest link in an infrastructure is the DISK, not the network, not the protocol type (Fibre Channel is actually better than iSCSI), not the switch (although port aggregates will affect this), and not the OS.

    Your DISK is your bottleneck, 100% guaranteed; that's what is holding you back. The fact that you are getting 300 to 500 MB/s is actually very good, you are filling up nearly half of a 10G pipe, you should be happy.

    How much throughput do you NEED? That's a better question...
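
    The line-rate figures above all boil down to dividing the link speed in Mb/s by 8 (ignoring protocol overhead); a minimal sketch:

```python
def theoretical_mb_per_s(mbit_per_s):
    """Theoretical payload ceiling of a link: Mb/s divided by 8."""
    return mbit_per_s / 8

# The figures from the post above (theoretical, no protocol overhead)
fast_ethernet = theoretical_mb_per_s(100)    # 12.5 MB/s
gig_ethernet = theoretical_mb_per_s(1_000)   # 125 MB/s
ten_gig = theoretical_mb_per_s(10_000)       # 1250 MB/s
```

    Real-world iSCSI throughput lands below these ceilings once TCP/IP and iSCSI framing overhead are accounted for, which is one more reason 500 MB/s on a 10G link is a reasonable result.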