ESXi


2x RAID5 vs 1x RAID10

  • 1.  2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 05:40 PM

    In my system I can have either two RAID5 arrays or one RAID10 array. Now, my questions are:

    1. What is the better option for a VMware ESXi environment?

    2. Which gives me better performance? I do understand that RAID10 writes are faster than RAID5, but is this still true if I am writing against two different RAID5 arrays?

    3. How about data safety? From what I can see there are no differences between RAID5 and RAID10. Is that correct?

    Thanks,

    Jens



  • 2.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:05 PM

    How many spindles are you looking to use? How about going best of both worlds with RAID 50 (http://www.acnc.com/04_01_50.html)? Is your system listed on the HCL? What RAID controller are you using?

    VMware VCP4

    Consider awarding points for "helpful" and/or "correct" answers.



  • 3.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:15 PM

    RAID 50

    While this may be true, most internal RAID cards don't support RAID 50; the only options are RAID 0, 1, 10, 5, and 6.



  • 4.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:17 PM

    This is for a development environment.

    Hardware:

    - System is SuperMicro 6026T-NTR+ which is listed on the HCL

    - RAID Controller is Adaptec 5805 with 6 x 2TB and 1 x 1TB (For ESXi & ISO images)

    - 32GB RAM

    We plan to install 3 VMs on this system. One of the VMs will only be used infrequently (once a month). The other two would be on 24x7.

    I am just wondering if two RAID5s would perform better (write & read) than one RAID10 in this scenario. Is the total performance limited by the controller or by the disk array? Do two disk arrays work in parallel or sequentially?

    Thanks!



  • 5.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:25 PM

    jstraten wrote:

    • RAID Controller is Adaptec 5805 with 6 x 2TB and 1 x 1TB (For ESXi & ISO images)

    http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/performance/SAS-5805/

    Supports RAID levels: 0, 1, 1E, 5, 5EE, 6, 10, 50, 60

    Make sure you get the battery with/for the controller too. It's listed as an optional item.

    VMware VCP4

    Consider awarding points for "helpful" and/or "correct" answers.



  • 6.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:45 PM

    jstraten wrote:

    • RAID Controller is Adaptec 5805 with 6 x 2TB and 1 x 1TB (For ESXi & ISO images)

    Supports RAID levels: 0, 1, 1E, 5, 5EE, 6, 10, 50, 60

    Make sure you get the battery with/for the controller too. It's listed as an optional item.

    VMware VCP4

    Consider awarding points for "helpful" and/or "correct" answers.


    Could you elaborate on the purpose of the battery?

    Thanks!



  • 7.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:12 PM

    1. What is the better option for a VMware ESXi environment?

    It's apples vs pomegranates.

    RAID 5 stripes data with a single parity, so you keep N-1 disks' worth of usable capacity (one disk's worth goes to parity). RAID 10 gives better performance, but you lose HALF of all the drives in the array to mirroring (thus only 50% of the space is available), so it's a trade-off.

    I would think RAID 5 offers slightly better security, and RAID 6 even more. Reads are better on RAID 10; writes are better on RAID 5.
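
    As a rough illustration of the capacity side of that trade-off, here is a minimal sketch (assuming n identical drives; the function and the numbers are only illustrative):

    ```python
    # Usable-capacity sketch for the RAID levels discussed in this thread.
    # Assumes n identical drives of disk_tb terabytes each.

    def usable_tb(level: str, n: int, disk_tb: float) -> float:
        if level == "raid5":       # one drive's worth of space goes to parity
            return (n - 1) * disk_tb
        if level == "raid6":       # two drives' worth of space go to parity
            return (n - 2) * disk_tb
        if level == "raid10":      # every block is mirrored, so half the space
            return (n // 2) * disk_tb
        raise ValueError(f"unknown RAID level: {level}")

    # With six 2 TB drives, as discussed later in this thread:
    print(usable_tb("raid5", 6, 2.0))    # 10.0 TB usable
    print(usable_tb("raid6", 6, 2.0))    #  8.0 TB usable
    print(usable_tb("raid10", 6, 2.0))   #  6.0 TB usable
    ```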



  • 8.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:21 PM


    1. What is the better option for a VMware ESXi environment?

    It's apples vs pomegranates.

    RAID 5 stripes data with a single parity, so you keep N-1 disks' worth of usable capacity (one disk's worth goes to parity). RAID 10 gives better performance, but you lose HALF of all the drives in the array to mirroring (thus only 50% of the space is available), so it's a trade-off.

    I would think RAID 5 offers slightly better security, and RAID 6 even more. Reads are better on RAID 10; writes are better on RAID 5.

    50% space loss is acceptable to us if the performance would be significantly better (especially write performance).

    Hmm, I thought it was the other way round. Doesn't RAID 10 have faster write performance than RAID 5?

    Or are you referring to two RAID 5 arrays writing faster than one RAID 10 (which is basically my question)?

    Thanks.



  • 9.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:24 PM

    | RAID Level | Total array capacity | Fault tolerance | Read speed | Write speed |
    |---|---|---|---|---|
    | RAID-10 (500GB x 4 disks) | 1000 GB | 1 disk | 4X | 2X |
    | RAID-5 (500GB x 3 disks) | 1000 GB | 1 disk | 2X | Depends on the controller implementation |

    You can clearly see that RAID 10 outperforms RAID 5 at a fraction of the cost in terms of read and write operations.



  • 10.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:36 PM

    | RAID Level | Total array capacity | Fault tolerance | Read speed | Write speed |
    |---|---|---|---|---|
    | RAID-10 (500GB x 4 disks) | 1000 GB | 1 disk | 4X | 2X |
    | RAID-5 (500GB x 3 disks) | 1000 GB | 1 disk | 2X | Depends on the controller implementation |

    You can clearly see that RAID 10 outperforms RAID 5 at a fraction of the cost in terms of read and write operations.


    I have seen this chart before. However, I am still wondering whether this is still true if you compare TWO RAID 5 arrays with ONE RAID 10 array.



  • 11.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:37 PM

    I have seen this chart before. However, I am still wondering whether this is still true if you compare TWO RAID 5 arrays with ONE RAID 10 array.

    It's clear you are hell bent on using RAID 5. So use RAID 5 then.



  • 12.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:38 PM


    I have seen this chart before. However, I am still wondering whether this is still true if you compare TWO RAID 5 arrays with ONE RAID 10 array.

    Only if you're talking about striping the two arrays... AKA RAID 50...

    VMware VCP4

    Consider awarding points for "helpful" and/or "correct" answers.



  • 13.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:18 PM

    RAID10 might be the fastest RAID level to choose, however with RAID5 (with the 6 disks you mentioned in the other thread) I assume you won't see much difference in performance since there are enough disks on which the controller can write simultaneously.

    In both RAID levels you can lose 1 disk. (On RAID10 you could actually lose up to 3 disks, depending on which ones.)

    There is a huge difference in available disk size. With RAID10 you will have 6TB, with RAID5 it's 10TB.

    So what I would recommend for your system is a RAID5 with 5 disks and 1 hot spare. This way you have more safety and also 8TB of disk space.

    Make sure to add a battery-backed cache module to your Adaptec controller; it makes a very big difference in disk performance.

    Also be aware that the max. VMFS size for 1 LUN is 2TB - 512 Bytes when using 8MB block size.
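
    To make the size difference concrete, here is a small sketch of how many full-size VMFS LUNs each candidate layout could be carved into, assuming six 2 TB drives and the 2 TB minus 512 byte per-LUN ceiling mentioned above (the figures are illustrative only):

    ```python
    # How many full-size VMFS LUNs fit on each candidate layout?
    # Assumes six 2 TB drives and a per-LUN ceiling of 2 TB minus 512 bytes.

    TB = 1000 ** 4                  # decimal terabytes, as drive vendors count
    LUN_LIMIT = 2 * TB - 512        # VMFS per-LUN size ceiling

    layouts = {
        "RAID 10, 6 drives":            3 * 2 * TB,   #  6 TB usable
        "RAID 5, 6 drives":             5 * 2 * TB,   # 10 TB usable
        "RAID 5, 5 drives + hot spare": 4 * 2 * TB,   #  8 TB usable
    }

    for name, usable in layouts.items():
        full_luns, remainder = divmod(usable, LUN_LIMIT)
        print(f"{name}: {full_luns} full-size LUNs, "
              f"{remainder / TB:.3f} TB left over")
    ```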

    André



  • 14.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:27 PM

    So what I would recommend for your system is a RAID5 with 5 disks and 1 hot spare. This way you have more safety and also 8TB of disk space.

    What is the purpose of a spare? You can lose a disk in RAID 5 and the array will still function; the big difference is that you LOSE that spare's capacity just by making it a "hot" standby. I would rather have the performance than waste 500 bucks on a drive that just sits there drawing power.

    For that matter, just make it a RAID 6 and be done with it; the difference in speed between RAID 5 and RAID 6 is minor.



  • 15.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:30 PM

    RAID10 might be the fastest RAID level to choose, however with RAID5 (with the 6 disks you mentioned in the other thread) I assume you won't see much difference in performance since there are enough disks on which the controller can write simultaneously.

    In both RAID levels you can lose 1 disk. (On RAID10 you could actually lose up to 3 disks, depending on which ones.)

    There is a huge difference in available disk size. With RAID10 you will have 6TB, with RAID5 it's 10TB.

    So what I would recommend for your system is a RAID5 with 5 disks and 1 hot spare. This way you have more safety and also 8TB of disk space.

    Make sure to add a battery-backed cache module to your Adaptec controller; it makes a very big difference in disk performance.

    Also be aware that the max. VMFS size for 1 LUN is 2TB - 512 Bytes when using 8MB block size.

    André


    If I understand you correctly, you are recommending one RAID5 with 1 hot spare. Is that basically RAID6?

    So, I am still wondering: Are two RAID5 faster than one RAID5?

    I guess I am uncertain about the controller capabilities. I mean is it smart enough to read/write disks in one RAID array at the same speed as it would in two arrays? This is probably a trivial question, but I am pretty new to RAID hardware.

    Thanks!



  • 16.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:41 PM

    RAID5 with hot spare is not really the same as RAID6.

    The difference is that RAID5 is a little faster; however, when you lose a disk, the standby disk has to be rebuilt into the array. On RAID6 all disks are in use (two parity disks) and you can lose two disks.

    A RAID5 with 3 disks is the slowest configuration you can choose. On RAID5 each write has to lock 2 disks (data and parity), so only one write at a time can be done.
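
    To illustrate why a small write has to touch both the data disk and the parity disk, here is a minimal sketch of the RAID-5 read-modify-write parity update (illustrative only, independent of any particular controller):

    ```python
    # RAID-5 small-write ("read-modify-write") parity update.
    # A single small write costs four disk I/Os: read old data, read old
    # parity, write new data, write new parity.

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def new_parity(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
        # new parity = old parity XOR old data XOR new data
        return xor_blocks(xor_blocks(old_parity, old_data), new_data)

    # Tiny example with 4-byte "blocks":
    d1, d2 = b"\x0f\x0f\x0f\x0f", b"\xf0\xf0\xf0\xf0"
    p = xor_blocks(d1, d2)                     # parity over the stripe
    d1_new = b"\xff\x00\xff\x00"
    p_new = new_parity(d1, p, d1_new)
    assert p_new == xor_blocks(d1_new, d2)     # parity stays consistent
    ```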

    André



  • 17.  RE: 2x RAID5 vs 1x RAID10
    Best Answer

    Posted Apr 06, 2010 06:42 PM

    So, I am still wondering: Are two RAID5 faster than one RAID5?

    Probably, but only marginally, and only IF you have two separate controllers. If you put two RAID 5 arrays on one controller, that single controller manages BOTH RAID groups, so it would actually be slower. If you could add a second RAID controller so that each one manages its own RAID 5 array, it might be a little faster, but I doubt it. Two RAID 5 arrays means that EACH array is still just a RAID 5; RAID 10 is the fastest you can get, period (other than RAID 0). This assumes the same number of spindles.

    Whether it's one RAID 5 with 6 spindles, two RAID 5 arrays with 3 spindles each, or a RAID 10 on 6 spindles: RAID 10 beats RAID 5 and RAID 50.



  • 18.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:49 PM


    So, I am still wondering: Are two RAID5 faster than one RAID5?


    Probably, but only marginally, and only IF you have two separate controllers. If you put two RAID 5 arrays on one controller, that single controller manages BOTH RAID groups, so it would actually be slower. If you could add a second RAID controller so that each one manages its own RAID 5 array, it might be a little faster, but I doubt it. Two RAID 5 arrays means that EACH array is still just a RAID 5; RAID 10 is the fastest you can get, period (other than RAID 0). This assumes the same number of spindles.

    Whether it's one RAID 5 with 6 spindles, two RAID 5 arrays with 3 spindles each, or a RAID 10 on 6 spindles: RAID 10 beats RAID 5 and RAID 50.

    Thanks! This is exactly the information I was looking for.

    Basically, in my scenario with one RAID controller, I wouldn't see any performance improvement by having a second RAID array.

    It seems that the best of both worlds would be to use RAID50.

    How much faster is RAID10 in comparison to RAID50 with a six drive array?



  • 19.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:31 PM

    With the controller you're listing, you could put eight 1TB SAS drives in a RAID 50 array, have the space of six of them, and be in a good position for speed and parity. I wouldn't give the boot/ISO virtual drive 1TB; you could easily go with 200GB or less there without issue. Use the remaining space to make your LUNs 2TB-512B in size, give the balance to a final LUN, and be good to go.

    VMware VCP4

    Consider awarding points for "helpful" and/or "correct" answers.



  • 20.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:43 PM

    With the controller you're listing, you could put eight 1TB SAS drives in a RAID 50 array, have the space of six of them, and be in a good position for speed and parity. I wouldn't give the boot/ISO virtual drive 1TB; you could easily go with 200GB or less there without issue. Use the remaining space to make your LUNs 2TB-512B in size, give the balance to a final LUN, and be good to go.

    VMware VCP4

    Consider awarding points for "helpful" and/or "correct" answers.


    I am more and more considering RAID50 now. It seems to meet most of my requirements in terms of performance, parity, and space loss. Can I build a RAID50 with six drives? My server only has 8 slots.

    I hear you about the 1TB being wasted. I think I will have to redo that drive and follow your suggestion to create two RAID 0 LUNs against that drive.



  • 21.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 06:45 PM

    Can I build a RAID50 with six drives?

    That's the minimum requirement, 6 drives.



  • 22.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 07:10 PM

    This should be required reading...

    http://www.adaptec.com/en-US/_common/compatibility/_education/RAID_level_compar_wp.htm

    VMware VCP4

    Consider awarding points for "helpful" and/or "correct" answers.



  • 23.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 07:50 PM

    I'll put some numbers on this. For virtualisation it is usually random IO that is the metric of interest; sequential MB/s numbers will be huge with any of these configurations. (A rough sketch of the arithmetic follows the list below.)

    My own testing has shown that quality SATA drives like Western Digital's RE series will turn in about 140 IOPS within a reasonably small test space; I tend to test with about 8GB test files.

    RAID 10

    - 6x 140 IOPS read, 6/2 x 140 IOPS write

    - Arbitrary 70:30 read:write split should yield about 700 IOPS

    RAID 5

    - 5x 140 IOPS read, 5/4 x 140 IOPS write

    - 70:30 roughly 525 IOPS

    - Note that writes incur a full additional disk revolution of latency due to the read-update-write cycle

    RAID 6

    - 4x 140 IOPS read, 4/6 x 140 IOPS write

    - 70:30 roughly 420 IOPS

    - As with RAID-5, writes incur a full additional disk revolution of latency due to read-update-write

    RAID-50

    - Probably not a good choice, as each "side" of the stripe is only 3 drives, causing a write to occupy all three disks (controllers tend to read the peer disk and then update data + parity with 3-drive sets, to save the processing overhead of calculating parity twice)

    - 4x 140 IOPS read, 4/4 x 140 IOPS write

    - 70:30 roughly 435 IOPS

    - Write latency per RAID 5 & 6
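
    To tie these figures together, here is a minimal sketch that reproduces the blended estimates above, assuming roughly 140 IOPS per spindle and a plain weighted 70:30 read:write mix (the per-disk figure and the weighting are taken from this post; everything else is illustrative):

    ```python
    # Rough reproduction of the blended IOPS estimates quoted above.

    PER_DISK_IOPS = 140          # measured figure for a 7200 rpm SATA drive
    READ_FRACTION = 0.7          # arbitrary 70:30 read:write split

    def blended_iops(read_spindles: float, effective_write_spindles: float) -> float:
        """Weighted mix of the array's read and write IOPS ceilings."""
        reads = read_spindles * PER_DISK_IOPS
        writes = effective_write_spindles * PER_DISK_IOPS
        return READ_FRACTION * reads + (1 - READ_FRACTION) * writes

    print("RAID 10:", blended_iops(6, 6 / 2))   # ~714, quoted as ~700
    print("RAID 5: ", blended_iops(5, 5 / 4))   # ~543, quoted as ~525
    print("RAID 6: ", blended_iops(4, 4 / 6))   # ~420
    print("RAID 50:", blended_iops(4, 4 / 4))   # ~434, quoted as ~435
    ```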

    However, all of this might be irrelevant, because many array controllers will present only ONE LUN with a RAID-x0 configuration, which will obviously be far too big for ESXi, which needs each LUN to be just under 2 TB as stated above. If this controller has that limitation, it immediately forces the configuration to RAID-5 or RAID-6 (which should allow multiple 2 TB LUNs to be defined across the whole array).

    RAID-5 will perform slightly better and give more space. However, with 2 TB SATA drives a second failure during a rebuild is a very real prospect, resulting in total data loss, because the drives' unrecoverable read error rate is not massively better than the amount of data that must be read during the rebuild. Then again, it's a dev environment anyway, so speed might be more important.

    RAID-6 offers security in the rebuild process but is slightly slower and costs you another disk of capacity.

    The battery is essential because ESX itself uses ONLY write-through caching, as it needs to guarantee to VMs that writes are really committed when they think they are. With a battery-backed cache the controller can safely use hardware write-back caching, which hides, to an extent, the latency of the read-update-write process for writes.

    HTH :smileyhappy:

    Please award points to any useful answer.



  • 24.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 08:03 PM


    However, all of this might be irrelevant, because many array controllers will present only ONE LUN with a RAID-x0 configuration, which will obviously be far too big for ESXi, which needs each LUN to be just under 2 TB as stated above. If this controller has that limitation, it immediately forces the configuration to RAID-5 or RAID-6 (which should allow multiple 2 TB LUNs to be defined across the whole array).

    RAID-5 will perform slightly better and give more space. However, with 2 TB SATA drives a second failure during a rebuild is a very real prospect, resulting in total data loss, because the drives' unrecoverable read error rate is not massively better than the amount of data that must be read during the rebuild. Then again, it's a dev environment anyway, so speed might be more important.

    RAID-6 offers security in the rebuild process but is slightly slower and costs you another disk of capacity.


    The Adaptec 5805 can actually build multiple LUNs for RAID10. I haven't tried it for RAID50 yet, but I am guessing that it will work as well.

    So, you would recommend using RAID6 in my scenario, if I understand you correctly?



  • 25.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 08:08 PM

    If you can afford the space, use RAID-10 and create 3x 2TB LUNs. If you need more space, yes, I'd go for RAID-6 subject to your benchmarks showing sequential write performance is sufficient for your needs with this particular card.

    Please award points to any useful answer.



  • 26.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 08:13 PM

    From the information provided on the Adaptec site:

    Combines multiple RAID 5 sets with RAID 0 (striping). Striping helps to increase capacity and performance without adding disks to each RAID 5 array (which will decrease data availability and could impact performance when running in a degraded mode).

    RAID 50 comprises RAID 0 striping across lower-level RAID 5 arrays. The benefits of RAID 5 are gained while the spanned RAID 0 allows the incorporation of many more disks into a single logical drive. Up to one drive in each sub-array may fail without loss of data. Also, rebuild times are substantially less than for a single large RAID 5 array.

    Usable capacity of RAID 50 is between 67% and 94%, depending on the number of data drives in the RAID set.
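
    As a rough check on the 67% - 94% figure quoted above, here is a short sketch (assuming equally sized RAID 5 sub-arrays, where each sub-array gives up one drive to parity):

    ```python
    # Usable fraction of a RAID 50: with s sub-arrays of n drives each,
    # usable / raw = s * (n - 1) / (s * n) = (n - 1) / n.

    for drives_per_set in (3, 4, 8, 16):
        fraction = (drives_per_set - 1) / drives_per_set
        print(f"{drives_per_set} drives per RAID 5 set: {fraction:.0%} usable")

    # 3 drives per set -> 67% usable, 16 drives per set -> 94% usable,
    # matching the range quoted from the Adaptec description.
    ```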

    Each RAID 5 array has its own internal striping, and then the two arrays are striped together, increasing performance to roughly that of ~6 drives (if using 8) or 4 drives (if using only 6). I do see RAID 5/50 losing out if you're going to use a maximum of six drives. BUT, if you go to the full eight that the card supports (and the chassis too), then you'll gain there.

    I would steer clear of SATA drives no matter what RAID level you eventually end up with. I think you'll be many times better off using SAS drives, even if they are 7200 rpm models. You can get those in sizes up to 1TB. RAID 10 would be my initial choice of array as well, but if you cannot afford the drive overhead, then I see RAID 50 as the next best thing...

    BTW, RAID 10 and 50 perform alike when running normally. RAID 10 will perform better when the array is in a degraded state (when you have drive failures). For that reason, I would always have at least one, or two, 'spare' drives on hand ready to swap in during such a time. You hope you never need to use one of them, but you're prepared in case you do.

    VMware VCP4

    Consider awarding points for "helpful" and/or "correct" answers.



  • 27.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 08:28 PM

    Thanks Everybody!

    I think I will just go with RAID10 for the time being. It seems to be the best option from a performance point of view while also providing me with some safety by mirroring the drives.



  • 28.  RE: 2x RAID5 vs 1x RAID10

    Posted Apr 06, 2010 09:20 PM

    J1mbo provided some great info. I have a series of posts on my site that explain the math a bit more: http://vmtoday.com/2009/12/storage-basics-part-i-intro/


    If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".

    Please visit http://vmtoday.com for News, Views and Virtualization How-To's

    Follow me on Twitter - @joshuatownsend