VMware vSphere

  • 1.  What is the best practice for block sizes across several layers: Hardware, Hypervisor, and VM OS?

    Posted Oct 25, 2013 08:41 PM

    The example below is not an actual configuration I am working with, but it should get the point across. Here is an example of what I am referring to as layers:

    (Layer1) Hardware: Hardware RAID Controller

    • 1TB volume configured at a 4K block size. (RAW?)


    (Layer2) Hypervisor: ESXi Datastore

    • 1TB from the RAID controller, formatted with VMFS5 @ 1MB block size.


    (Layer3) VM OS: Server 2008 R2 w/SQL

    • 100GB virtual HD using NTFS @ 4K block size for the OS.
    • 900GB virtual HD using NTFS @ 64K block size to store the SQL database.

    It seems VMFS5 is limited to a single 1MB block size. Would it be best if all or some of the block sizes matched across the different layers, and why or why not? How do the different block sizes affect the other layers and performance? Could you suggest a better alternative or best practice for the example configuration above?

    If a SAN were involved instead of a hardware RAID controller on the host, would it be best to store the OS vmdk on the VMFS5 datastore, create a separate iSCSI LUN, attach it using the iSCSI initiator in the guest OS, and format it at a 64K block size? Do matching block sizes across layers increase performance, or is it just a best practice? Any help answering and/or explaining best practice is greatly appreciated.



  • 2.  RE: What is the best practice for block sizes across several layers: Hardware, Hypervisor, and VM OS?

    Posted Oct 25, 2013 08:58 PM

    Hello,

    In VMFS-5, very small files (that is, files smaller than 1 KB) are stored in the metadata itself rather than in file blocks. Once the file size increases beyond 1 KB, sub-blocks are used, and after one 8 KB sub-block is filled, 1 MB file blocks are used. Because VMFS-5 uses sub-blocks of 8 KB rather than 64 KB (as in VMFS-3), it reduces the amount of disk space used by small files.
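
    As a toy sketch (not VMware code; the function name and the sample sizes are just illustrations of the thresholds described above), the tiering works roughly like this in Python:

        # Toy model of the VMFS-5 allocation tiers described above.
        KB = 1024
        MB = 1024 * KB

        def vmfs5_backing(file_size_bytes):
            """Describe roughly how VMFS-5 would back a file of this size."""
            if file_size_bytes <= 1 * KB:
                return "stored in metadata (no file blocks used)"
            if file_size_bytes <= 8 * KB:
                return "one 8 KB sub-block"
            blocks = -(-file_size_bytes // MB)  # ceiling division
            return f"{blocks} x 1 MB file block(s)"

        for size in (512, 4 * KB, 100 * KB, 10 * MB):
            print(f"{size:>10} bytes -> {vmfs5_backing(size)}")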

    So I think it would be good to store the OS vmdk on the VMFS5 datastore, create a separate iSCSI LUN, attach it to the guest using the in-OS iSCSI initiator, and format it at a 64K block size, which could increase performance. But as per the blog below, it's not going to make much of a difference in performance:

    Performance: RDM vs. VMFS

    Hope this was helpful.

    Thanks,

    Avinash



  • 3.  RE: What is the best practice for block sizes across several layers: Hardware, Hypervisor, and VM OS?

    Posted Oct 25, 2013 11:49 PM

    Evening,

    This is a fun issue. There have been lots of answers to this question throughout the life of ESXi. The simple answer is: it depends. You have two simple hard stops:

    With VMFS you have no control over the block size, and most of the time you don't have control over the array either. Given those two constraints, I vote you line up everything in between with your storage array as much as possible. I'll try to answer your questions in order:

    It seems VMFS5 is limited to only having a 1MB block size. Would it be best if all or some of the block sizes matched on different layers, and why or why not?

    -> If block sizes line up on all layers, that helps prevent reads/writes from crossing block boundaries, which helps performance. It really is all about reads/writes crossing blocks. But again, the answer is it depends: on an array like an EVA, where all data is striped across all disks, this kind of lining up has less value.
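
    To put a number on "crossing blocks," here is a minimal sketch (the 64 KB backend block and the 4 KB shift are just example values, not anything from this thread):

        # Count how many backend blocks a single I/O touches.
        def blocks_touched(offset_bytes, io_size_bytes, block_size_bytes):
            first = offset_bytes // block_size_bytes
            last = (offset_bytes + io_size_bytes - 1) // block_size_bytes
            return last - first + 1

        BLOCK = 64 * 1024  # example backend stripe element size

        # A 64 KB read that starts on a block boundary stays inside one block...
        print(blocks_touched(0, 64 * 1024, BLOCK))         # -> 1
        # ...but the same read shifted by 4 KB straddles two blocks, so one
        # guest I/O becomes two backend I/Os.
        print(blocks_touched(4 * 1024, 64 * 1024, BLOCK))  # -> 2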

    How do the different block sizes affect other layers and performance?

    -> Explained in the previous answer. It's all about crossing block boundaries only when required.

    Could you suggest better alternative or best practice for the above example configuration?

    -> See the suggestion at the bottom of this post.

    If a SAN were involved instead of a hardware RAID controller on the host, would it be best to store the OS vmdk on the VMFS5 datastore, create a separate iSCSI LUN, attach it using the iSCSI initiator in the guest OS, and format it at a 64K block size?

    -> I would avoid RDMs as much as possible and do everything via VMFS, for lots of reasons including backup and vMotion. Skip the direct iSCSI connection to guest virtual machines.

    Do matching block sizes across layers increase performance or is it a best practice?

    -> It could improve performance, and it is a best practice.

    Any help answering and/or explaining best practice is greatly appreciated.

    (Layer1) Hardware: Hardware RAID Controller

    • 1TB volume configured at a 4K block size. (RAW?)

    (Layer2) Hypervisor: ESXi Datastore

    • 1TB from the RAID controller, formatted with VMFS5 @ 1MB block size.

    (Layer3) VM OS: Server 2008 R2 w/SQL

    • 100GB virtual HD using NTFS @ 4K block size for the OS.
    • 900GB virtual HD using NTFS @ 64K block size to store the SQL database.

    So here is my suggestion

    Server 1:

    The local hard drive used to install ESXi needs only about 6GB. I don't really care how lined up these drives are, since the install is small and gets loaded into memory while running.

    Server 2:

    The local hard drive used to install ESXi needs only about 6GB. I don't really care how lined up these drives are, since the install is small and gets loaded into memory while running.


    Layer 2: Make this iSCSI storage presented to Servers 1 and 2 as multiple LUNs (see the sketch after this list):

    - LUN1 for the OS

    - LUN2 for SQL logs

    - LUN3 for the SQL DB


    (Layer3) VM OS: Server 2008 R2 w/SQL

    • xxxGB virtual HD using NTFS @ the iSCSI array's block size for the OS.
    • xxxGB virtual HD using NTFS @ the iSCSI array's block size to store the SQL database.
    • xxxGB virtual HD using NTFS @ the iSCSI array's block size to store the SQL logs.
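
    Here is that layout sketched in Python (the 64 KB array block size and the NTFS allocation units are assumptions for illustration, not figures from this thread); the check just confirms each NTFS allocation unit divides evenly into the array's block size, so guest clusters never straddle an array block boundary:

        KB = 1024
        ARRAY_BLOCK = 64 * KB  # assumed iSCSI array block / stripe element size

        # Assumed NTFS allocation unit per LUN.
        luns = {
            "LUN1 (OS)":       4 * KB,
            "LUN2 (SQL logs)": 64 * KB,
            "LUN3 (SQL DB)":   64 * KB,
        }

        for name, au in luns.items():
            aligned = ARRAY_BLOCK % au == 0
            print(f"{name}: {au // KB} KB clusters -> "
                  f"{'aligned' if aligned else 'NOT aligned'}")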

    I may have missed your question; if so, let me know by refining it and I'll give it another shot.

    Thanks,

    J



  • 4.  RE: What is the best practice for block sizes across several layers: Hardware, Hypervisor, and VM OS?

    Posted Oct 26, 2013 01:05 PM

    Thank you for your answer; it was indeed helpful. I would like some clarification on the block size of the RAID controller in that situation.

    Let's use a PERC 6/i RAID controller with two 1TB drives in RAID1. Here is a link to the setup screens of the RAID controller that might help convey what I am trying to ask:

    http://www.thegeekstuff.com/2009/05/dell-tutorial-create-raid-using-perc-6i-integrated-bios-configuration-utility/

    I have roughly 1TB of space and create two virtual drives:

         Virtual Drive 1 - 10GB - to be used for hypervisor OS files

         Virtual Drive 2 - 990GB - used for the VMFS datastore/VM storage

    The default stripe element size on the PERC 6/i is 64KB, but it can be 8, 16, 32, 64, 128, 256, 512, or 1024KB.

    What block size would you use on Array 1, which is where the actual hypervisor will be installed?

    What block size would you use on Array 2, which will be used as the VM datastore in ESXi?

         -Would you use 1024KB to make it match the VMFS block size that will eventually be formatted on top of it?

    *Consider that this datastore would eventually store several virtual hard drives for the OS, SQL database, and SQL logs, each formatted in NTFS at the recommended block sizes: 4K, 8K, and 64K.

    If the RAID stripe element size is set to 1024KB so it matches the VMFS 1MB block size, would that be best practice, or does it make no difference? What effect does that have on the OS/virtual HDs and their respective block sizes installed on top of the stripe element and VMFS block size?

    I could be completely overthinking the entire situation, but to me it seems there has to be some sort of correlation between the three different "layers," as I call them, and a best practice to suit.

    Thanks for the assistance. +1 on the previous post.



  • 5.  RE: What is the best practice for block sizes across several layers: Hardware, Hypervisor, and VM OS?
    Best Answer

    Posted Oct 26, 2013 04:34 PM

    itsolution,

    Thanks for the helpful answer points.  I wrote a blog post about this whole thing that I hope will help:

    Partition Alignment and block size VMware 5 | blog.jgriffiths.org

    To answer your questions here goes:

    I have roughly 1TB of space and create two virtual drives:

         Virtual Drive 1 - 10GB - to be used for hypervisor OS files

         Virtual Drive 2 - 990GB - used for the VMFS datastore/VM storage

    The default stripe element size on the PERC 6/i is 64KB, but it can be 8, 16, 32, 64, 128, 256, 512, or 1024KB.

    What block size would you use on Array 1, which is where the actual hypervisor will be installed?

    -> If you have two arrays, I would set the block size on the hypervisor array to 8KB.

    What block size would you use on Array 2, which will be used as the VM datastore in ESXi?

    -> I would go with 1024KB to match the VMFS-5 block size.

         -Would you use 1024KB to make it match the VMFS block size that will eventually be formatted on top of it?          

    -> Yes.

    *Consider that this datastore would eventually store several virtual hard drives for the OS, SQL database, and SQL logs, each formatted in NTFS at the recommended block sizes: 4K, 8K, and 64K.

    -> The problem here is that VMFS is going to use 1MB no matter what you do, so carving it lower on the RAID will not cause issues, but it will not help either. You have 4KB sectors on disk, a 1MB RAID stripe, 1MB VMFS blocks, and 4K/8K/64K guests. Really, the gains from 64K are lost a little when the backend storage is 1MB.
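
    To see that stack in one place, here is a small sketch (the sizes come from the paragraph above; the code itself is just an illustration). Because 1MB is an exact multiple of 4K, 8K, and 64K, an aligned guest cluster never straddles a RAID stripe or VMFS block boundary:

        KB, MB = 1024, 1024 * 1024
        SECTOR, RAID_STRIPE, VMFS_BLOCK = 4 * KB, 1 * MB, 1 * MB

        for cluster in (4 * KB, 8 * KB, 64 * KB):
            ok = (cluster % SECTOR == 0            # cluster is whole sectors
                  and RAID_STRIPE % cluster == 0   # stripe holds whole clusters
                  and VMFS_BLOCK % cluster == 0)   # VMFS block holds whole clusters
            print(f"{cluster // KB:>2} KB guest cluster lines up end to end: {ok}")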

    If the RAID stripe element size is set to 1024KB so it matches the VMFS 1MB block size, would that be best practice, or does it make no difference?

    -> As long as it's 1024KB or smaller, in 4KB multiples, it does not really matter.
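
    Checking that rule against the PERC 6/i stripe options quoted earlier (a trivial sketch; it just verifies each candidate divides the 1MB VMFS block evenly):

        KB = 1024
        VMFS_BLOCK = 1024 * KB

        for stripe_kb in (8, 16, 32, 64, 128, 256, 512, 1024):
            ok = VMFS_BLOCK % (stripe_kb * KB) == 0
            print(f"{stripe_kb:>4} KB stripe element divides the 1MB VMFS block: {ok}")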

    What effect does that have on the OS/virtual HDs and their respective block sizes installed on top of the stripe element and VMFS block size?

    -> The effect on performance is minimal, but it does exist. It would be a lie to say it didn't.

    I could be completely overthinking the entire situation, but to me it seems there has to be some sort of correlation between the three different "layers," as I call them, and a best practice to suit.

    Hope that helps. I will tell you that I run both SQL and Exchange virtualized without any issues and without changing the OS block size; I just stuck with the standard Microsoft size. I would be a lot more concerned about the performance of the RAID controller in your server. They keep making those things cheaper and cheaper, with less and less cache. If performance is the major concern, then I would consider an array or a RAID 5/6 solution, or at least look at the amount of cache on your RAID controller (read cache is normally critical for databases).

    Just my two cents. 

    Let me know if you have additional questions.

    Thanks,

    J



  • 6.  RE: What is the best practice for block sizes across several layers: Hardware, Hypervisor, and VM OS?

    Posted Oct 26, 2013 06:46 PM

    Gortee,

         That is a fabulous explanation and the blog post is over the top.  Thanks!



  • 7.  RE: What is the best practice for block sizes across several layers: Hardware, Hypervisor, and VM OS?

    Posted Oct 26, 2013 08:10 PM

    Happy to help.