So let’s start with the relationship between the VMFS block size and the guest file system’s block size. Or better: there actually is no relationship.
Windows and its applications write data with their designed/configured I/O size (e.g. NTFS 4 KB or SQL Server 8 KB), and the ESXi kernel simply passes the I/Os through to the backend at their native size. The VMFS blocks (and sub-blocks) are only used to allocate physical storage on the disk/LUN.
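To make that a bit more concrete, here is a minimal sketch (assuming VMFS-5 defaults of 1 MB file blocks and 8 KB sub-blocks; the numbers are illustrative) of how the block size only affects allocation, not the size of the I/Os on the wire:

```python
import math

# Assumed VMFS-5 defaults: 1 MB file blocks, 8 KB sub-blocks.
BLOCK_SIZE = 1024 * 1024
SUB_BLOCK_SIZE = 8 * 1024

def vmfs_allocation(file_size_bytes):
    """Space VMFS allocates for a file -- allocation only, not I/O size."""
    if file_size_bytes <= SUB_BLOCK_SIZE:
        return SUB_BLOCK_SIZE          # small files fit into one sub-block
    return math.ceil(file_size_bytes / BLOCK_SIZE) * BLOCK_SIZE

# A guest 4 KB NTFS write stays a 4 KB I/O on the wire; the VMFS
# block size only decides how space is handed out:
print(vmfs_allocation(4 * 1024))          # 8192 bytes (one sub-block)
print(vmfs_allocation(10 * 1024 * 1024))  # 10485760 bytes (ten 1 MB blocks)
```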
And if Windows or an application writes, changes or reads a block, that is basically one I/O operation. The bigger the I/O size, the more data gets transferred per operation. So for the same amount of data, a smaller I/O size means more I/Os, and at a given IOPS limit smaller I/Os translate into fewer MB/s of throughput, and vice versa.
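A quick back-of-the-envelope example (hypothetical numbers) of how IOPS and I/O size relate to throughput:

```python
def throughput_mb_s(iops, io_size_kb):
    """Throughput in MB/s = IOPS x I/O size."""
    return iops * io_size_kb / 1024

# The same 10,000 IOPS deliver very different MB/s
# depending on the I/O size:
print(throughput_mb_s(10_000, 4))   # ~39 MB/s with 4 KB I/Os
print(throughput_mb_s(10_000, 64))  # 625 MB/s with 64 KB I/Os
```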
Every I/O then needs to be split into multiple iSCSI / FC frames to be transferred to the storage (roughly 1460 bytes of usable payload per standard Ethernet frame for iSCSI, 2048 bytes per FC frame). So in theory a smaller I/O requires fewer iSCSI/FC frames, but to be honest I can’t imagine that this is measurable. So I would say that the I/O size has no direct impact on how fast (in ms) your I/O is sent down to the storage.
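Here is a rough sketch of the frame math (both per-frame payload values are approximations, not exact protocol figures):

```python
import math

# Assumed usable payload per frame:
ISCSI_PAYLOAD = 1460   # iSCSI over standard 1500-byte MTU Ethernet
FC_PAYLOAD = 2048      # Fibre Channel frame data field

def frames_needed(io_size_bytes, payload_bytes):
    """Number of frames required to carry one I/O of the given size."""
    return math.ceil(io_size_bytes / payload_bytes)

print(frames_needed(4 * 1024, ISCSI_PAYLOAD))   # 3 frames for a 4 KB I/O
print(frames_needed(64 * 1024, ISCSI_PAYLOAD))  # 45 frames for a 64 KB I/O
print(frames_needed(64 * 1024, FC_PAYLOAD))     # 32 frames for a 64 KB I/O
```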
Usually the presented LUN is based on a RAID group which has a stripe/chunk size configured.
From what I’ve read so far …
Small I/O pattern: chunk size >= I/O size
Target: the request is read from/written to a single disk
Advantage: the other disks are free to serve other I/O requests in parallel
Big I/O pattern: chunk size as small as possible
Target: the request is read from/written to multiple disks
Advantage: a single large I/O is processed more quickly (see the sketch below)
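A minimal sketch of the idea (a hypothetical helper, assuming perfectly aligned I/Os on a stripe of num_disks disks):

```python
import math

def disks_touched(io_size_kb, chunk_size_kb, num_disks):
    """How many disks a single, perfectly aligned I/O is spread across."""
    return min(math.ceil(io_size_kb / chunk_size_kb), num_disks)

# Hypothetical 8-disk stripe:
print(disks_touched(4, 64, 8))     # 1 -> small I/O stays on one disk,
                                   #      the other 7 can serve other requests
print(disks_touched(1024, 16, 8))  # 8 -> big I/O is spread over all disks
                                   #      and completes faster
```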
But often it’s not that easy, because in a virtual environment you have a wide variety of workloads.
For example, NetApp uses a 4 KB block size for its WAFL file system. EMC’s VNX arrays use 64 KB, which you will also find as the default setting in many common RAID controllers/arrays.
I hope this helps you a little bit and please correct me if I’m wrong!
Regards
Patrick