vSphere Storage Appliance

  • 1.  Block Size and IOPS

    Posted Oct 16, 2012 02:07 PM

    I’m trying to wrap my head around the following questions.

    How does block size affect storage performance?

    Logic tells me that the smaller the piece of data transferred to the storage the faster the I/O. This makes an assumption that I need to get clarified.

    Does 1 I/O equal 1 block of data, or can I have multiple I/Os per block? If it’s the latter, please explain.

    Since VMFS uses a 1MB block size, how is that affected by a guest writing data to its file system? Am I thinking about this too hard? Does the OS just write whatever it thinks is the appropriate data size to its file system, and then the host breaks that into blocks that are sent to the storage device?

    Any help understanding this would be greatly appreciated.



  • 2.  RE: Block Size and IOPS

    Posted Oct 20, 2012 09:29 PM

    So let’s start with the relationship between the VMFS block size and the guest file system’s block size. Or better: actually there is no relationship.

    Windows and its applications will write data with their designed/configured I/O size (e.g. NTFS 4KB or SQL Server 8KB), and the ESXi kernel will simply pass the I/Os through to the backend with their native size. The VMFS blocks (and sub-blocks) are used to allocate physical storage blocks on the disk/LUN.

    And when Windows or an application writes, changes, or reads a block, that is basically one I/O operation (IOP).

    The bigger the I/O size, the more data gets transferred per operation. So at a given IOPS rate, smaller I/Os mean fewer MB/s transferred, and vice versa.
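    To make the IOPS vs. throughput relationship concrete, here is a small sketch (the 10,000 IOPS figure is just an illustrative number, not from any specific array):

    ```python
    def throughput_mbps(iops, io_size_bytes):
        """Throughput in MB/s for a given IOPS rate and I/O size."""
        return iops * io_size_bytes / 1_000_000

    # Same IOPS rate, different I/O sizes:
    print(throughput_mbps(10_000, 4_096))   # 4 KB I/Os  -> 40.96 MB/s
    print(throughput_mbps(10_000, 65_536))  # 64 KB I/Os -> 655.36 MB/s
    ```

    So the same number of operations per second moves 16x more data when the I/O size is 16x larger.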

    Every I/O needs to be split into multiple iSCSI (1538 bytes payload) / FC (2048 bytes payload) frames to be transferred to the storage. So in theory a smaller I/O requires fewer iSCSI/FC frames, but to be honest I can’t imagine that this is measurable. So I would say that the I/O size has no direct impact on how fast (in ms) your I/O will be sent down to the storage.
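    A quick sketch of that frame math, using the payload sizes quoted above:

    ```python
    import math

    def frames_needed(io_size_bytes, payload_bytes):
        """Number of transport frames required to carry one I/O."""
        return math.ceil(io_size_bytes / payload_bytes)

    # A single 4 KB I/O, with the payload sizes from the post:
    print(frames_needed(4096, 1538))  # 3 iSCSI frames
    print(frames_needed(4096, 2048))  # 2 FC frames
    ```

    The per-frame difference is tiny, which is why it is unlikely to be measurable in practice.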

    Usually the presented LUN is based on a RAID set which has a stripe/chunk size configured.

    From what I’ve read so far …

    Small IO pattern: Chunk size >=  IO size

    Target: Request will be read/written to/from a single disk

    Advantage: Other disks can serve other IO requests

    Big IO pattern:  Chunk size as small as possible

    Target: Request will be read/written to/from multiple disks

    Advantage: IOs are processed more quickly.
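    The two patterns above can be sketched as a simple calculation of how many data disks a single striped request spans (disk counts and sizes here are illustrative, and alignment is ignored):

    ```python
    import math

    def disks_touched(io_size_bytes, chunk_size_bytes, data_disks):
        """How many data disks in a stripe a single I/O spans (ignoring alignment)."""
        return min(math.ceil(io_size_bytes / chunk_size_bytes), data_disks)

    # Small I/O pattern: 4 KB request, 64 KB chunk -> served by one disk,
    # leaving the other disks free for other requests.
    print(disks_touched(4 * 1024, 64 * 1024, 8))     # 1

    # Big I/O pattern: 1 MB request, 64 KB chunk -> spread across the
    # whole stripe, so all disks work on it in parallel.
    print(disks_touched(1024 * 1024, 64 * 1024, 8))  # 8
    ```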

    But often it’s not that easy, because in a virtual environment you have a wide variety of workloads.

    So for example NetApp uses a 4KB block size for its WAFL file system. EMC’s VNX array uses 64KB, which you can also find as the default setting in many common RAID controllers/arrays.

    I hope this helps you a little bit and please correct me if I’m wrong!

    Regards

    Patrick