vSphere Storage Appliance


Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

  • 1.  Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 06, 2010 11:54 PM

    We recently acquired a Dell MD3200 and MD1200 6Gb/sec shared SAS storage system and I have spent a couple of days setting up some tests to see how it compares to our old iSCSI EMC Clariion. Wasn't really a fair fight, given that the EMC had 500GB 7200rpm SATA disks in it, whereas the new systems had 450GB 15000rpm SAS in the MD3200 and 2TB 7200rpm nearline SAS in the MD1200.

    Big caveat on the following test results: all are the result of testing a single VM. I do not (yet) have any results with multiple-VM shared workloads on shared LUNs, so I haven't tested the impact of contention, locking, etc.

    Tests were done by running simple IOMETER jobs. Results per job were cross-checked with 'esxtop' to ensure that they roughly matched (they did). I installed a Windows Server 2008 R2 VM and created 100GB VMDKs on each storage tier. I tested with 5-disk and 6-disk RAID5, as well as a few tests with a 6-disk RAID10.

    I had a guess at constructing some sample workloads to simulate SQL data (reading/writing 64KB chunks of data) as well as simulating retrieval of data for backups. My guess as to how these might work could be way off! If anyone thinks so and can suggest improvements to the IOMETER test workloads, I am happy to receive that feedback and can probably repeat the tests with different parameters.

    Some of the conclusions I reached (some of which came as a surprise to me, and some which didn't!):

    • 5-disk RAID5 is almost indistinguishable from 6-disk RAID5 (between 2% and 4% better performance with the 6-disk config)

    edit: this is only under light workload ... see updated stats below

    • 5-disk RAID5 out-performed a 6-disk RAID10 by a fair margin (about 20%) on the "all in one" IOMETER tests

    edit: this is only under light workload ... see updated stats below

    • 450GB 15k SAS outperformed 2TB 7.2k nearline SAS by 50%-100% in a 5-disk RAID5 configuration

    • it's possible to pull around 700-800MB/sec of real data out of the shared SAS storage given optimal conditions, which blows away a small iSCSI config (unless you have masses of 1Gb/sec NICs teamed, or are blessed with 10Gb/sec Ethernet)
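    To put the shared-SAS vs iSCSI bandwidth gap in perspective, here is a rough back-of-the-envelope comparison (my own illustrative sketch, using decimal units and ignoring protocol/encoding overhead, so real-world ceilings are lower):

```python
# Theoretical link bandwidth comparison (illustrative only; decimal units,
# 8b/10b encoding and protocol overhead ignored).

def gbit_to_mbyte_per_sec(gbit: float) -> float:
    """Convert a raw link rate in Gb/s to MB/s (1 Gb/s = 1000/8 MB/s)."""
    return gbit * 1000 / 8

# One MD3200 host port is a 4-lane 6Gb/s SAS wide port.
sas_port = 4 * gbit_to_mbyte_per_sec(6)   # 3000 MB/s
gige = gbit_to_mbyte_per_sec(1)           # 125 MB/s per 1GbE link

print(f"4x6Gb/s SAS wide port: {sas_port:.0f} MB/s")
print(f"single 1GbE link:      {gige:.0f} MB/s")
# Even the observed 700-800 MB/s would need 6+ teamed 1GbE NICs:
print(f"1GbE links needed for 750 MB/s: {750 / gige:.1f}")
```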

    Here are some of the test results, in case they're of interest. Feedback welcomed.

    IOMETER standard "all in one" workload (% improvement is relative to the iSCSI SAN baseline row)

    | Storage Tier | Physical Disk Type       | RAID Type | Disks | IOPS | MB/sec | Avg Seek | IOPS impr. | MB/sec impr. | Avg Seek impr. |
    |--------------|--------------------------|-----------|-------|------|--------|----------|------------|--------------|----------------|
    | MD3200       | 15000rpm 450GB SAS       | RAID5     | 5     | 2015 | 25.8   | 0.50     | 294%       | 291%         | 74%            |
    | MD3200       | 15000rpm 450GB SAS       | RAID5     | 6     | 2065 | 26.5   | 0.48     | 303%       | 302%         | 75%            |
    | MD3200       | 15000rpm 450GB SAS       | RAID10    | 6     | 1640 | 21.0   | 0.61     | 220%       | 218%         | 69%            |
    | MD1200       | 7200rpm 2TB nearline SAS | RAID5     | 5     | 1710 | 21.9   | 0.58     | 234%       | 232%         | 70%            |
    | MD1200       | 7200rpm 2TB nearline SAS | RAID5     | 6     | 1733 | 22.2   | 0.58     | 238%       | 236%         | 70%            |
    | MD1200       | 7200rpm 2TB nearline SAS | RAID10    | 6     | 1175 | 15.1   | 0.85     | 129%       | 129%         | 56%            |
    | iSCSI SAN    | 7200rpm 500GB SATA       | RAID5     | 5     | 512  | 6.6    | 1.95     | (baseline) | (baseline)   | (baseline)     |
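    For anyone wanting to double-check the tables: the "% IMPROVEMENT" columns appear to be computed against the iSCSI SAN baseline row, with IOPS and MB/sec treated as higher-is-better and Avg Seek as lower-is-better. A small sketch of my own (not part of the original test harness) reproduces the first MD3200 row:

```python
# Reproduce the "% IMPROVEMENT" columns relative to the iSCSI baseline row.
# (My own reconstruction of how the columns look to be derived.)

def pct_improvement(value: float, baseline: float,
                    lower_is_better: bool = False) -> float:
    """Percentage improvement of `value` over `baseline`."""
    if lower_is_better:                  # e.g. Avg Seek / latency
        return (1 - value / baseline) * 100
    return (value / baseline - 1) * 100  # e.g. IOPS, MB/sec

# First "all in one" row (MD3200, RAID5, 5 disks) vs the iSCSI SAN baseline:
iops = pct_improvement(2015, 512)
mbps = pct_improvement(25.8, 6.6)
seek = pct_improvement(0.50, 1.95, lower_is_better=True)

print(round(iops), round(mbps), round(seek))  # 294 291 74, matching the table
```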

    "simulated SQL workload" (reading/writing 64KB blocks in 80:20 ratio, random/sequential in 90:10 ratio; % improvement relative to the iSCSI SAN baseline row)

    | Storage Tier | Physical Disk Type       | RAID Type | Disks | IOPS | MB/sec | Avg Seek | IOPS impr. | MB/sec impr. | Avg Seek impr. |
    |--------------|--------------------------|-----------|-------|------|--------|----------|------------|--------------|----------------|
    | MD3200       | 15000rpm 450GB SAS       | RAID5     | 5     | 240  | 14.9   | 4.18     | 229%       | 224%         | 69%            |
    | MD3200       | 15000rpm 450GB SAS       | RAID5     | 6     | 250  | 15.6   | 4.00     | 242%       | 239%         | 71%            |
    | MD1200       | 7200rpm 2TB nearline SAS | RAID5     | 5     | 119  | 7.4    | 8.43     | 63%        | 61%          | 38%            |
    | MD1200       | 7200rpm 2TB nearline SAS | RAID5     | 6     | 126  | 7.9    | 7.90     | 73%        | 72%          | 42%            |
    | iSCSI SAN    | 7200rpm 500GB SATA       | RAID5     | 5     | 73   | 4.6    | 13.60    | (baseline) | (baseline)   | (baseline)     |

    "simulated backup workload" (reading 64KB blocks, 100% sequential; % improvement relative to the iSCSI SAN baseline row)

    | Storage Tier | Physical Disk Type       | RAID Type | Disks | IOPS | MB/sec | Avg Seek | IOPS impr. | MB/sec impr. | Avg Seek impr. |
    |--------------|--------------------------|-----------|-------|------|--------|----------|------------|--------------|----------------|
    | MD3200       | 15000rpm 450GB SAS       | RAID5     | 5     | 2004 | 125.3  | 0.50     | 309%       | 309%         | 75%            |
    | MD3200       | 15000rpm 450GB SAS       | RAID5     | 6     | 2017 | 126.1  | 0.49     | 312%       | 312%         | 76%            |
    | MD1200       | 7200rpm 2TB nearline SAS | RAID5     | 5     | 1967 | 123.0  | 0.51     | 301%       | 302%         | 75%            |
    | MD1200       | 7200rpm 2TB nearline SAS | RAID5     | 6     | 2008 | 125.5  | 0.50     | 310%       | 310%         | 75%            |
    | iSCSI SAN    | 7200rpm 500GB SATA       | RAID5     | 5     | 490  | 30.6   | 2.04     | (baseline) | (baseline)   | (baseline)     |



  • 2.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 07, 2010 04:22 AM

    UPDATE:

    Just realised that I had configured IOMETER with just 1 outstanding I/O per target ... so I am doing some new tests to properly saturate the arrays with a much heavier workload

    e.g.

    64K blocks with 16 outstanding IOs/target for a SQL workload

    8K blocks with 64 outstanding IOs/target for an Exchange workload

    Will post again when I have all those results, including a fresh comparison of a 6-disk RAID5 vs 6-disk RAID10 ... and also comparison with a 4-disk RAID10 DAS.

    UPDATE:

    I now have stats from the MD3200 and MD1200 under heavier load (more outstanding IOs/target, as indicated above). This now shows that under load the RAID10 substantially outperforms the RAID5 (which is more in line with my initial expectations), and also that performance of the MD3200 and MD1200 is roughly on par with DAS for an Exchange-like workload, and outperforms DAS for a SQL-like workload.

    Will post final stats after I collect data from a physical server with a 4-disk 10000rpm 300GB SAS RAID5 DAS array (have to wait for the weekend for that).



  • 3.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 10, 2010 10:39 PM

    I've now completed my testing and can compare the MD3200/MD1200 with some standard 3Gb/sec SAS direct attached storage as well as our current iSCSI SAN. The following tests tried to simulate heavier loads of SQL and Exchange 2007 data. The specifications for the IOMETER tests were:

    SQL workload: 64KB, 16 IOs/target, 67%:33% read/write, 100% random, 30 seconds ramp-up, 5 minutes test duration

    Exchange 2007 workload: 8KB, 64 IOs/target, 55%:45% read/write, 80% random, 30 seconds ramp-up, 5 minutes test duration
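    A quick cross-check on any IOMETER run with a fixed block size: throughput should equal IOPS × block size. This little calculation (mine, not from the original test harness) matches the first rows of the results tables below:

```python
# Consistency check: MB/sec = IOPS * block size (KB) / 1024.
# (Illustrative only; numbers taken from the results tables.)

def throughput_mb_per_sec(iops: float, block_kb: float) -> float:
    return iops * block_kb / 1024

# 64KB SQL workload, MD3200 RAID5 5-disk: 794 IOPS
print(throughput_mb_per_sec(794, 64))   # 49.625 -> table shows 49.6 MB/sec
# 8KB Exchange workload, MD3200 RAID5 5-disk: 1145 IOPS
print(throughput_mb_per_sec(1145, 8))   # 8.945 -> table shows 8.9 MB/sec
```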

    My findings:

    • under heavy loads, the 4x6Gb/sec MD3200/MD1200 performs at least as well as regular DAS

    • 15k rpm disks are well worth the investment (compared to 10k or 7.2k disks)

    • allocate more spindles (disks) per disk group to improve performance (not far short of linear improvement)

    • don't split disk groups over multiple controllers if you can help it (one of the DAS tests below used that config and it was pitiful)

    • don't use regular SATA for bulk storage ... get nearline SAS drives (nearly 2x performance of SATA)

    Here are the stats I collected:

    Simulated SQL workload (64K blocks, 66:34 read/write, 100% random, 16 outstanding IOs/target; % improvement relative to the iSCSI SAN baseline row)

    | Storage Tier       | Physical Disk Type       | RAID Type | Disks | IOPS | MB/sec | Avg Seek | IOPS impr. | MB/sec impr. | Avg Seek impr. |
    |--------------------|--------------------------|-----------|-------|------|--------|----------|------------|--------------|----------------|
    | Dell MD3200        | 15000rpm 450GB SAS       | RAID5     | 5     | 794  | 49.6   | 20.2     | 291%       | 291%         | 74%            |
    | Dell MD3200        | 15000rpm 450GB SAS       | RAID5     | 6     | 945  | 59.1   | 16.9     | 366%       | 365%         | 78%            |
    | Dell MD3200        | 15000rpm 450GB SAS       | RAID10    | 6     | 1243 | 77.7   | 12.9     | 512%       | 512%         | 84%            |
    | Dell MD1200        | 7200rpm 2TB nearline SAS | RAID5     | 5     | 390  | 24.4   | 41.0     | 92%        | 92%          | 48%            |
    | Dell MD1200        | 7200rpm 2TB nearline SAS | RAID5     | 6     | 503  | 31.4   | 31.8     | 148%       | 147%         | 60%            |
    | Dell MD1200        | 7200rpm 2TB nearline SAS | RAID10    | 6     | 621  | 38.8   | 25.8     | 206%       | 206%         | 67%            |
    | DAS                | 15000rpm 300GB SAS       | RAID10    | 4     | 550  | 34.4   | 29.1     | 171%       | 171%         | 63%            |
    | DAS                | 10000rpm 300GB SAS       | RAID5     | 4     | 189  | 11.8   | 84.4     | -7%        | -7%          | -7%            |
    | iSCSI SAN (2xNICs) | 7200rpm 500GB SATA       | RAID5     | 5     | 203  | 12.7   | 78.7     | (baseline) | (baseline)   | (baseline)     |

    Simulated Exchange workload (8K blocks, 55:45 read/write, 80% random, 64 outstanding IOs/target; % improvement relative to the iSCSI SAN baseline row)

    | Storage Tier       | Physical Disk Type       | RAID Type | Disks | IOPS | MB/sec | Avg Seek | IOPS impr. | MB/sec impr. | Avg Seek impr. |
    |--------------------|--------------------------|-----------|-------|------|--------|----------|------------|--------------|----------------|
    | Dell MD3200        | 15000rpm 450GB SAS       | RAID5     | 5     | 1145 | 8.9    | 55.9     | 207%       | 207%         | 67%            |
    | Dell MD3200        | 15000rpm 450GB SAS       | RAID5     | 6     | 1317 | 10.3   | 48.6     | 253%       | 255%         | 72%            |
    | Dell MD3200        | 15000rpm 450GB SAS       | RAID10    | 6     | 2004 | 15.7   | 31.9     | 437%       | 441%         | 81%            |
    | Dell MD1200        | 7200rpm 2TB nearline SAS | RAID5     | 5     | 546  | 4.3    | 117.1    | 46%        | 48%          | 32%            |
    | Dell MD1200        | 7200rpm 2TB nearline SAS | RAID5     | 6     | 724  | 5.7    | 88.3     | 94%        | 97%          | 48%            |
    | Dell MD1200        | 7200rpm 2TB nearline SAS | RAID10    | 6     | 1033 | 8.1    | 62.0     | 177%       | 179%         | 64%            |
    | DAS                | 15000rpm 300GB SAS       | RAID10    | 4     | 1208 | 9.4    | 53.0     | 224%       | 224%         | 69%            |
    | DAS                | 10000rpm 300GB SAS       | RAID5     | 4     | 367  | 2.9    | 173.8    | -2%        | 0%           | -1%            |
    | iSCSI SAN (2xNICs) | 7200rpm 500GB SATA       | RAID5     | 5     | 373  | 2.9    | 171.5    | (baseline) | (baseline)   | (baseline)     |



  • 4.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 12, 2010 07:56 PM

    Great post! Thanks for all the hard work and sharing it with us all.



  • 5.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Apr 12, 2011 01:22 AM

    I now have the MD3200 + MD1200 in production ... performing well so far (after a couple of days).

    Backup performance is massively improved (86% reduction in time to backup my VMs).

    Running performance stats on the arrays shows that during normal daily use its load is around 100 IOPS on average, with peaks around 200-250 IOPS.  We have 14 VMs running across 3 hosts.  Not much in the way of SQL load (they are still mostly on our 2 physical db servers).



  • 6.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Apr 12, 2011 05:23 AM

    After running monitoring all day, I've seen peaks at 3,950 IOPS and 207MB/sec throughput over the whole array ... sure beats the 30-40MB/sec I was getting from my old SAN.



  • 7.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted May 08, 2011 08:51 AM

    Did you get the MD3200 High Performance Tier optional kit?

    http://www.dell.com/downloads/global/products/pvaul/en/powervault-md3200-high-performance-tier-implementation.pdf

    Does anyone here know anything about it, or have an opinion regarding this add-on?

    It seems to me that it may be worthwhile in most situations. I am trying to get the cost now, as I was about to order 2 MD3200s to set up a couple of customers with a SAS SAN.

    Regards Jason



  • 8.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted May 08, 2011 04:18 PM

    IIRC it increases the CPU speed of the controller. Makes sense if you have more disks populated and the number of disk groups/volumes increases.

    Regards

    Joerg



  • 9.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted May 08, 2011 10:14 PM

    No, we aren't using the "high performance tier" upgrade.  Just a stock-standard MD3200, dual controllers, connected to 3 x Dell R710 servers which are ESXi v4.1 hosts.  We also have an MD1200 attached to the MD3200, and have loaded it with 2TB nearline SAS drives.

    I was a bit worried that we might have problems when we ran our backups, as I backup my VMs using Veeam Enterprise v5.0 to a VM with 2 x 2000GB virtual disks running on the MD1200 ... I then copy the data off to USB HDD disks on a separate physical server.  But it seems to be performing beautifully.  On my old SAN it basically took 16+ hours to take backups of my VMs.  The job now takes me less than 2 hours, leaving plenty of time for copying of those backups off to the USB drives.



  • 10.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Sep 27, 2011 04:57 PM

    I just completed the exact same setup in my data center with an MD3200 and MD1200. I connected them to an R710, R810 and T610. The MD3200 is using 3.5-inch drives and the MD1200 is using 2.5-inch drives. I've built 5 arrays:

    12 x 2.5" 300GB 10K rpm RAID 6

    6 x 3.5" 600GB 15K rpm RAID 5

    6 x 3.5" 600GB 15K rpm RAID 5

    6 x 2.5" 300GB 10K rpm RAID 5

    6 x 2.5" 300GB 10K rpm RAID 5

    I've just completed the upgrade to vSphere 5.0, so all hosts and virtual machines are configured as 5.0, and their storage has been updated to VMFS5.

    I've dumped 18 virtual machines on it now, a mix of typical small business servers: 2 Exchange servers, 2 SQL servers, several web servers, terminal servers, Symantec central manager, several large NAS VMs, etc.

    The thing just works great. Veeam is doing its backups perfectly. Performance is great, latency is low.

    Amazing setup for cheap!

    My only issue (how I came upon this) is trying to figure out how to give the Veeam VM a direct path to the MD3200.



  • 11.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Sep 27, 2011 10:05 PM

    Interesting that you are running vSphere 5.0 ... I checked on hardware compatibility a month or so ago and saw that the MD3200 was on the list ... but when I looked again last week it had disappeared off the list again and I saw some discussions (elsewhere?, possibly in the vSphere forums?) where someone who had upgraded to 5.0 was having problems.  So I hope it is working well for you!  Would be interested in any feedback on that.  I've consequently delayed my own plans for a 5.0 upgrade until it reappears on the HCL again and stays there for a few months.

    Regarding Veeam, it's working really well for me. I've deployed a 4-vCPU VM with VBR Enterprise 5.0. We use the backup connection method called "Virtual Appliance Mode". It directly attaches the virtual disks to the Veeam VM, so all the backup activity happens 'within' the MD3200 and doesn't require network traffic. It's a LOT faster than a network backup.

    You can find out more via the Veeam forums ... there are a lot of really helpful people there, incl. Veeam staff ... but I found that setting this up just worked out of the box simply enough. The stuff that I have needed some consulting assistance with is the setup of the VBR virtual lab features. We have it working now, but it took a bit of mucking about and it was, for me anyway, not quite intuitive. But I now have the ability to boot my backup images in a virtual lab to prove that a backup worked OK; plus the new Application Item Restore wizards, combined with the virtual lab functionality, allow me to boot a backup image (e.g. our Exchange server) and restore individual items (mailboxes, folders, messages) quickly and easily.



  • 12.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 10, 2011 01:01 AM


    Another "hmmmm" moment ... I see that the MD3200 is back on the HCL again for vSphere 5.0 (as at midday Monday 10th October 2011 anyway).



  • 13.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 22, 2011 11:41 PM

    First off, thank you very much for putting all this together. Second, thank you very much for keeping it updated for over a year! I'm not sure how much time you've spent on forums, but it's likely you've noticed how rare that is.

    I just purchased an MD3220, and I have a few questions about it that Dell won't (can't?) give me a straight answer on. I have a single MD3220 unit with 24 146GB 15K drives. Because our storage needs are very small (<1TB), it seems logical to me to simply create a single RAID 10 disk group (plus cold spare) and divide that up into 4 virtual disks. We only have about 20 VMs, with a varied workload (Exchange, Citrix XA and PVS, AD, file server w/ roaming profiles). However, you mention the following:

    "don't split disk groups over multiple controllers if you can help it (one of the DAS tests below used that config and it was pitiful)"

    I'm not exactly certain what you meant here, as disk groups don't have controller assignments; rather, it's the virtual disks that are assigned. Are you saying that once a disk group is created, all its associated virtual disks should be assigned to 1 controller? A Dell tech told me that they recommend a single virtual disk per disk group, but it seems like there's the potential for unnecessarily poor performance. Why give a disk group 6 disks when it could have 24?

    The only answer I can think of, would be that a sequential read/write might be interrupted by a random r/w on a different virtual disk (1 of many possible examples).  From the MD3200 performance tuning guide:

    "Dell™ does not recommend using more than four virtual disks or repositories per disk group for peak performance. Additionally, where performance is critical, isolate virtual disks to separate disk groups when possible. When multiple high traffic virtual disks share a disk group, even with purely sequential usage models, the disk group I/O behavior becomes increasingly random, lowering overall performance."

    I'm not specifically attached to the idea of having 4 virtual disks, that just seemed to be a good rule of thumb based on the above quote and http://www.yellow-bricks.com/2009/06/23/vmfslun-size/ (not the only sources I used, but the quickest explanation when accounting for total available capacity)

    I've even considered short stroking a single RAID 6 disk group, as our few writes (<15%) are bursty but small in nature.  Anybody have experience/thoughts on that vs a single RAID 10?

    Finally, how many sectors did you give to iometer on these tests?  I can't find any best practice info on sample size, only on read/write.  For example, if my Exchange DB is 30GB, should I run a 30GB iometer test?

    This is a brand new environment, so we'll most likely be doing ESXi 5.  We only use the Essential Plus kit, so being an early adopter won't keep me up at night knowing that we're using the primary functions of ESXi that have been tested time and again for years - nothing bleeding edge.



  • 14.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 23, 2011 09:27 PM

    My comment about splitting disk groups over multiple controllers actually referred to a physical server here that I was testing as a comparison.  I haven't tried to do that with our MD3200+MD1200 combo, but it just seems to me to be a really counter-intuitive thing to do anyway.

    One reason why you might want to give a disk group 6 disks (and have 4 disk groups) rather than 24 disks (as a single disk group) is maybe that your data is spread over those 24 disks.  I don't know how smart the controllers are in the MD3200/MD3220 so I don't know whether that would mean that all 24 disks have to be kept "in sync" even for small file/data disk writes.  If it were me, I would do some testing of that scenario and see how it performs and I might start with something like:  2x8-disk RAID10s and 1x6-disk RAID10 ... and a couple of hot spares.  It really depends on how much disk space you're going to want for your biggest servers and how much performance you want to try to extract from them.  I might also consider having some RAID5 so as to have a mix of max-performance and max-storage.

    Regarding how I tested with IOMETER, it was that long ago that I can't remember all the details now.  But I do recall that the VM that I was using had been set up with a 100GB D: drive.

    We are running about 30 VMs now across the MD3200+MD1200 combination using shared SAS and it's performing just fine for us. Current config is a 5-disk RAID5 and a 6-disk RAID10 on the MD3200 (1 hot spare) using 450GB 15k SAS drives, and a 5-disk RAID5 and a 6-disk RAID5 on the MD1200 (1 hot spare) using the 2TB nearline SAS drives.

    We have about 100 staff now. Our Exchange server has about 250GB of mailbox data (too high!) but it's performing fine on the RAID10. I have a couple of lightly-used SQL databases on that RAID10 too. Our 2 main SQL servers are still physical servers though, and I don't expect that to change in the near future. If I were to virtualise them, I would add a 3rd tier of storage, probably a Dell MD1220 loaded with 146GB 15k SAS drives, and would roughly split it into 2 disk groups configured as RAID10 ... and I have been advised by Dell that I would probably need to upgrade the licence on the MD3200 to the High Performance Tier licence.



  • 15.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 24, 2011 10:04 PM

    As far as I can tell, I also need the High Performance Tier. No matter how I configure these disks (1 RAID 10; 1 RAID 10 + 1 RAID 5; 1 RAID 6) I always end up with ~2800 IOPS. Maximum latency is significantly lower with a single RAID 10, along with a bump in transfer rates, but even the average latency is within 4% on a single virtual disk as a RAID 10 vs a RAID 6.

    Thanks for your reply!  In my testing, it appears that configuration isn't going to be that difficult for me until I can afford the performance upgrade.



  • 16.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 09, 2011 03:16 AM

    Frosty,

    Excellent post! I have been researching a setup for about a month now, and I had all but settled on one until I read this post. I was intending to use 3 R710s running the vSphere 5 Essentials Plus kit for my HA cluster. For storage, I was going to use two R510s running StarWind as mirrored iSCSI SANs. Veeam would be used for backups.

    I have questions regarding your setup:

    1. Do you not worry about the MD3200 failing? You would lose your entire system if for some reason you could not power on the device.

    2. Could you tell me your exact raid configurations on the two boxes?

    3. What nics are you using in your servers and how many do you have? Are you using teaming/MPIO?

    I currently have 2 MD1000s. I was going to use one for backups (15 x 300GB 15k SAS) and sell the other (15 x 7.2k 750GB SATA).

    But, I could get a MD3200 and connect my good MD1000 to it; However, I worry about redundancy.

    Thanks again for the great post!



  • 17.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Oct 09, 2011 09:43 PM

    Just to be sure that there is no possibility of confusion here ... we are using the MD3200 not the MD3200i ... I just mention that because you referred in passing to considering an iSCSI setup.  The MD3200 is a shared SAS device, not iSCSI.  In answer to your questions:

    (1)  the possibility of any SAN failing is something to consider, however we are using the dual-controller MD3200 so we have failover capability at that level.  We also have dual cards in each ESXi host, so we have failover at that layer too.  All our storage is carved out as a mix of RAID5 and RAID10 disk groups, with a hot spare in each storage device (i.e. one hot spare in the MD3200 and another hot spare in our MD1200).  So as much as is possible, given our budget, we are protected against failure.

    (2)  our MD3200 has 12 x 450GB 15k SAS drives configured as two disk groups: (1) a 5-disk RAID5 (1x800GB and 1x870GB LUNs); and (2) a 6-disk RAID10 (2x625GB LUNs) ... the remaining drive is the hot spare. Our MD1200 has 12 x 2TB nearline SAS drives configured as two disk groups: (1) a 5-disk RAID5 (3x2000GB, 1x1400GB and 1x50GB LUNs); and (2) a 6-disk RAID5 (4x2000GB and 1x1300GB LUNs) ... the remaining drive is the hot spare.

    (3)  we ended up with 10 NICs per server:  4 on the motherboard, a 4-NIC card, and a 2-NIC card.  I'm currently running 4 network segments (LAN, DMZ, MANAGEMENT and VMOTION) for our VMs, each is using 2 NICs, meaning that we currently have 2 NICs spare in each server.  We're using NIC teaming in an Active:Active config.

    For storage access, because it's not iSCSI, no NICs are involved. We have dual SAS controller cards in each server and they share the load. So where one of them has a particular LUN as Active, the other card will have the corresponding LUN as Standby.

    Hope this info helps...



  • 18.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 25, 2012 10:25 AM

    Hi all,

    I spent two days investigating my bad results with VMware ESXi 5, iSCSI and the MD3200i.

    Here are my first results.

    And now my new results, in which you can see that things are quite a bit better.



  • 19.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 27, 2012 07:13 AM

    Hi abkabou,

    can you tell us how you achieved such results?

    Thanks.



  • 20.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 27, 2012 07:23 AM

    

    Hello,

    I will be out of the office Aug 26 - Sept 2. If this is an emergency, please contact Chad Riley or Andy Wade. Thank you!



  • 21.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 27, 2012 07:48 AM

    Here is how I have configured my SAN:

    • I set the VMware iSCSI preferred path policy to MRU.

    • I only used one 1Gb port on each controller.

    • I dedicated one network card per network.

    • I checked that the LUN was accessed using the preferred path; if it isn't, the MD3200i gives you an alert message.

    • I created one VMkernel NIC per network card.

    • I set jumbo frames to 6500 on both elements (SAN and network card).

    • I set one target per initiator.

    • I created two vSwitches:

    vSwitch1 ==> vmk1 ==> physical network card 1 ==> jumbo 6500 on SAN, network card and vSwitch ==> accessing controller 1 (accessing controller 2 if controller 1 fails)

    vSwitch2 ==> vmk2 ==> physical network card 2 ==> jumbo 6500 on SAN, network card and vSwitch ==> accessing controller 2 (accessing controller 1 if controller 1 fails)

    Abdellah KABOU



  • 22.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 27, 2012 08:51 AM

    Hi,

    Thank you for your hints. I have more questions for you.

    You say that you use ISCSI Preferred path MRU. Is it faster than Round Robin?

    You say that you use one 1Gb port on each controller. On the MD3200i each controller has 4 ports. Are you really using only 1 port from each controller? I'm not sure that it's a good idea; what happens if one controller fails?

    You say that you dedicated one Network card by Network. Do you mean 1 physical Ethernet port to one iSCSI VMkernel? Or 1 physical Ethernet card to one iSCSI VMkernel?

    You use Jumbo frame value 6500, why this number?

    You say that you set one Target by Initiator. Do you mean one target as one MD3200i or one IP address?

    I also want to ask if you use SAN switches. If yes what type and configuration.

    Thank you.



  • 23.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 28, 2012 09:37 AM

    Hi,

    You say that you use iSCSI preferred path MRU. Is it faster than Round Robin?

    - I was using MRU but I have changed it now to Round Robin, and it's faster when using multiple paths.

    You say that you use one 1Gb port on each controller. On the MD3200i each controller has 4 ports. Are you really using only 1 port from each controller? I'm not sure that it's a good idea; what happens if one controller fails?

    - I was using one port on each controller, but both ports on each controller could access the LUN; the first port on controller 0 was the preferred path. If controller 0 fails, it will switch to controller 1 port 1.

    You say that you dedicated one network card per network. Do you mean 1 physical Ethernet port to one iSCSI VMkernel? Or 1 physical Ethernet card to one iSCSI VMkernel?

    - 1 physical Ethernet port to one iSCSI VMkernel.

    You use jumbo frame value 6500, why this number?

    - This is the best value that I found during my tests; I tried increasing and decreasing this value.

    You say that you set one target per initiator. Do you mean one target as one MD3200i or one IP address?

    - iSCSI targets discovered from one target on each VMHBA.

    I also want to ask if you use SAN switches. If yes, what type and configuration?

    - I'm using an Alcatel-Lucent 6450 switch. You can't change the MTU size in its configuration, because according to some other Alcatel topics it detects the MTU and just forwards it.

    I have now connected my ESXi host using three iSCSI VMkernels to 2 iSCSI ports of the MD3200i:

    - ISCSI01 192.168.131.99

    - ISCSI02 192.168.131.100

    - ISCSI03 192.168.132.100

    I've attached my latest results.

    Thanks Abdellah KABOU

    System and Network Engineer



  • 24.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 28, 2012 10:41 AM

    Thank you for your tips :)



  • 25.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 28, 2012 11:59 AM

    You are welcome. :)

    Thanks Abdellah KABOU

    System and Network Engineer



  • 26.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 29, 2012 07:20 PM

    I'd be interested if anyone else is getting the purple screen of death on ESXi now with 5.0 Update 1. I've now had 3 in 2 months on one host, and all 3 times VMware support has said it's the Dell (LSI) SAS HBA causing it. They have said they are going to roll out a new driver in a future ESXi update. This is obviously not good!



  • 27.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 29, 2012 10:02 PM

    I tried to update my ESXi to the latest update (ESXi 5.0 Update 1) using the VMware ISO. After the upgrade I saw a crash screen. I'm back on the latest Dell version after rebooting.



  • 28.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Aug 29, 2012 10:05 PM

    I have been running ESXi 5.0 U1 (3 hosts) for several months now with our MD3200/MD1220/MD1200 array and have not had any crash problems at all. Everything has been remarkably stable, actually.



  • 29.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Nov 28, 2012 12:11 PM

    Just let me know if you want some advice regarding the MD3220i or the MD3200i, as they are quite capable when configured correctly.

    I can get up to 400 MB/sec on 100% read in IOMeter.

    David



  • 30.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Nov 28, 2012 12:14 PM

    And those are not good results; there is no way the system should be faster at 50% read/write than at 100% read.

    Something must be wrong in this config.



  • 31.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Jan 22, 2013 01:54 PM

    Hi,

    actually a scenario like this might be fairly common (100% read slower than mixed sequential read/write), and the numbers might well be correct for his current config.

    I have seen similar behaviour before.

    The last time was a VM residing on a SAN with fast storage behind it.

    The reason for the behaviour: as the VM / ESXi host / SAN were configured, the test used only 1 port for the 100% sequential read (no MPIO etc.), and with the high number of spindles on the SAN storage the cap was the bandwidth of the single 8Gbit FC port, not the storage itself.

    In that test the read/write scenario was able to use the 2nd port as well, which resulted in higher bandwidth.

    This looks like it might be a similar case:

    - 1Gbit maxing out for 100% sequential read (125MB/s? looks like a too-good-to-be-true number for a single 1Gb link)

    - more throughput delivered via both ports during sequential read/write access (188MB/s?)
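If single-path capping is the suspicion, the path selection policy is easy to check from the ESXi shell. A sketch for ESXi 5.x, where `naa.600a0b80001234` is a placeholder device ID (round robin spreads I/O across active paths instead of pinning everything to one port):

```shell
# Show each device's current path selection policy
# (look for VMW_PSP_FIXED or VMW_PSP_MRU pinning I/O to one path)
esxcli storage nmp device list

# Switch a device to round robin (naa.600a0b80001234 is a placeholder)
esxcli storage nmp device set -d naa.600a0b80001234 -P VMW_PSP_RR

# Optionally switch paths every I/O instead of the default 1000 IOPS per path
esxcli storage nmp psp roundrobin deviceconfig set -d naa.600a0b80001234 -t iops -I 1
```

Check Dell's best-practice documentation for your specific array and firmware before changing the policy; the supported/recommended policy varies by model.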



  • 32.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Jan 22, 2013 04:46 PM

    I ran some perf tests on my production environment on both my md3200's. The test ran for 24 hours (including Veeam backups). I'll post some more findings on a few arrays:

    host 1

    6 x 600gig 15K 3.5 inch raid 5:      166.7 mb/sec     peak iops: 2355.3     avg io 71.4kb / 30.0kb

    8 x 900gig 10K 2.5 inch raid 10:     296.9 mb/sec     peak iops: 316        avg io 645.5kb / 6.1kb

    8 x 900gig 10K 2.5 inch raid 10:     415.9 mb/sec     peak iops: 1023.7     avg io 585kb / 0.0kb

    8 x 300gig 10K 2.5 inch raid 6:      126.8 mb/sec     peak iops: 1560.2     avg io 68.5kb / 17.1kb

    8 x 300gig 10K 2.5 inch raid 6:      244.1 mb/sec     peak iops: 3891.8     avg io 187.6kb / 20.8kb

    8 x 1TB 7.2K 2.5 inch raid 6:        199.6 mb/sec     peak iops: 590        avg io 97.0kb / 13.0kb

    host 2

    8 x 900gig 10K 2.5 inch raid 10:     65.4 mb/sec      peak iops: 3882.3     avg io 119.2kb / 31.2kb

    8 x 300gig 10K 2.5 inch raid 6:      87.4 mb/sec      peak iops: 4499.6     avg io 45.7kb / 18.9kb

    host 3:

    8 x 900gig 10K 2.5 inch raid 10:     174.4 mb/sec     peak iops: 5012.7     avg io 47.1kb / 40kb



  • 33.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Feb 15, 2018 10:52 PM

    I thought it might be interesting to dredge up this old thread of mine, now that we are about 7 years down the track.

    Our MD3200/MD1220/MD1200 setup has served us really, really well.

    Not a single regret about going with Direct Attached Storage (with 3 x ESXi hosts); it has performed brilliantly.

    But now I am about to embark on our Infrastructure v2 project and have just purchased:

    3 x Dell R740 hosts (will run vSphere 6.5)

    Dell SC5020 storage with 20 x 1.92TB SSDs (direct attached storage again!).

    Some time in the next 4-6 weeks we will be firing these puppies up and I'll be repeating my performance testing (see first page of this thread).

    Looking forward to seeing how much better the SSD performs!

    Frosty.



  • 34.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted May 01, 2018 01:10 AM

    Well I have to say that I am rapt with the performance of our new R740's with the SC5020 storage loaded with 20 x SSDs.  I re-ran basically the same set of performance tests on the SC5020 as I ran on the MD3200 originally (see start of this thread) and both IOPS and MB/sec results are through the roof.  Here's a brief explanation of the config and initial test results:

    The SC5020 has 2 disk groups of 10 x 1.92TB SSDs each, only a single tier (no spinning disks), so no SC5020 "smarts" like auto-tiering are being utilised.  We are using direct attached storage again (2 cables per host, each with 4 x 12Gb/sec channels).  We are using only single-redundancy RAID5 and RAID10 volumes.  I ran up a Windows Server 2008 R2 VM with 2 x vCPU and 4GB RAM, and gave it an extra 100GB HDD for use with IOMeter.  I ran a series of tests to try to simulate both Exchange and SQL workloads, comparing with the MD3200, and it was like night and day.

    * data throughput / performance for VMs should be anywhere from 20x - 40x better (measured in MB/sec and in IOPS)

    * back-end disk write performance (measured via Dell Storage Manager) was seen to peak at:

    -- 3250MB/sec writing to RAID5 on some tests (e.g. Storage vMotion, so volume to volume copy)

    -- 4250MB/sec writing to RAID10 on some tests (ditto)

    DSM also showed performance on some tests peaking in excess of 100,000 IOPS.

    IOMETER test results

    1.  simulated SQL database workload RAID10

    (64K blocks, 66:34 read/write, 100% random, 16 I/Os per target)

    Old MD3200 RAID10 = 1243 IOPS and 78 MB/sec

    New SC5020 RAID10 = 38973 IOPS (31x) and 2554 MB/sec (33x)

    2.  simulated SQL database workload RAID5

    (64K blocks, 66:34 read/write, 100% random, 16 I/Os per target)

    Old MD3200 RAID5 = 794 IOPS and 50 MB/sec

    New SC5020 RAID5 = 33692 IOPS (42x) and 2208 MB/sec (44x)

    3.  simulated Exchange mail server workload RAID10

    (8K blocks, 55:45 read/write, 80% random, 64 I/Os per target)

    Old MD3200 RAID10 = 2004 IOPS and 16 MB/sec

    New SC5020 RAID10 = 45414 IOPS (23x) and 372 MB/sec (23x)

    4.  simulated Exchange mail server workload RAID5

    (8K blocks, 55:45 read/write, 80% random, 64 I/Os per target)

    Old MD3200 RAID5 = 1145 IOPS and 9 MB/sec

    New SC5020 RAID5 = 42282 IOPS (37x) and 346 MB/sec (38x)
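As a sanity check, the IOPS and MB/sec figures in IOMeter results like these are linked by the block size: throughput ≈ IOPS × block size. A minimal sketch (using decimal MB, which is what matches the figures above):

```python
def throughput_mb_s(iops: float, block_bytes: int) -> float:
    """Throughput in decimal MB/sec for a given IOPS rate and block size."""
    return iops * block_bytes / 1e6

# Cross-checking the SC5020 figures from the post:
print(round(throughput_mb_s(38973, 64 * 1024)))  # 64K SQL test, RAID10 -> 2554
print(round(throughput_mb_s(45414, 8 * 1024)))   # 8K Exchange test, RAID10 -> 372
```

Both computed values land on the reported MB/sec numbers, so the counters are internally consistent.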

    I'm pretty sure we could coax more performance out of this with a different config, however I've made some conscious trade-offs in order to get some other benefits (e.g. reduced rebuild time and minimising number of volumes/VMs affected if an SSD fails).



  • 35.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Feb 02, 2023 01:47 PM

    Good Morning,

    I know the topic is old, but I still use an MD3200i.
    My question is the following: if I connect the MD3200i to the same ESXi host with 8 network interfaces, will I get a 4Gbps connection (4 x 1Gb), with the other 4 ports acting as redundant paths?
    Which configuration gives the best performance?

    Sorry for my English, I'm using a translator.



  • 36.  RE: Dell MD3200/MD1200 6Gb/sec Shared SAS performance tests

    Posted Sep 04, 2012 04:20 PM

    Here is the official word from the VMware support engineer about the Dell HBA card (LSI):

    "

    Thank you for your Support Request.

    I wanted to follow up with you regarding the support request # 12207949208 registered regarding the issue with PSOD on the ESX/ESXi host.

    As informed earlier, we found that the cause of the issue is a compatibility issue between the mpt2sas drivers (controller drivers) and ESXi 5.0 hosts.

    You could either downgrade the host to a lower version until we have drivers that fix this issue, or we could check whether we can get a debug driver (test driver) for you to install on the host and see if it helps in resolving the issue.

    Looking forward to hearing from you.

    "