vSAN


 Impact of Changing AVS vSAN Number Of Disk Stripes Per Object to 12

IainBD posted Dec 05, 2025 03:37 PM

The current VMDKs in our Azure VMware Solution vSAN are RAID-6, and each has six components spanning six hosts. The cluster has 16 hosts, each its own fault domain. Would increasing Number Of Disk Stripes Per Object to 12 result in the same parity overhead you would normally expect from a traditional RAID-6 array expanding from 6 drives to 12, or will it still use 4d:2p? Using 150% of the space for RAID-6 seems like such a waste, especially when each host only has six 3.2 TB capacity drives. I'd love to be able to see that ratio at 10d:2p.

Thank you,

Iain

Broadcom Employee Duncan Epping  Best Answer

The overhead would remain the same and the parity remains the same; it is just that the data would be spread wider by creating additional components. Those components would still be 4+2. There's no way to do 10+2 or anything like that.
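To illustrate Duncan's point with some quick arithmetic (this is a hypothetical sketch, not vSAN code; the function name and parameters are ours): with a fixed 4 data + 2 parity layout, raw consumption is always usable × 6/4 = 1.5×, no matter the stripe width.

```python
def raid6_raw_usage(usable_gb: float, stripe_width: int = 1) -> float:
    """Raw capacity consumed by a vSAN RAID-6 (FTT=2) object.

    Stripe width only splits each leg into RAID-0 sub-components;
    it does not change the fixed 4+2 data-to-parity ratio, so it is
    deliberately unused in the calculation below.
    """
    data, parity = 4, 2  # fixed by vSAN RAID-6, not tunable
    return usable_gb * (data + parity) / data

print(raid6_raw_usage(1000))                   # 1500.0 raw GB for 1 TB usable
print(raid6_raw_usage(1000, stripe_width=12))  # still 1500.0 — overhead unchanged
```

A hypothetical 10+2 would be 1000 × 12/10 = 1200 GB raw (120%), which is the ratio the OP is after, but vSAN does not offer it.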

TheBobkin

@IainBD, just to add to what Duncan said: striping (dictated by the Stripe-Width rule in the storage policy) basically just splits each current RAID-6 component into RAID-0 sub-components. For example, with Stripe-Width=2 applied you would have 6x RAID-6 components, each comprised of 2x sub-components (i.e. a RAID-6 of RAID-0 components).
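Counting components makes this concrete (an illustrative sketch only; the names below are ours, not vSAN's): the six RAID-6 legs are fixed, and stripe width just multiplies the sub-components within each leg.

```python
def component_layout(stripe_width: int) -> dict:
    """Component count for one vSAN RAID-6 (FTT=2) object, ignoring
    witnesses and any size-based splitting. Illustrative only."""
    legs = 6  # RAID-6 always uses 4 data + 2 parity legs
    return {
        "raid6_legs": legs,
        "raid0_subcomponents_per_leg": stripe_width,
        "total_components": legs * stripe_width,
    }

print(component_layout(1))  # total_components: 6  (plain RAID-6)
print(component_layout(2))  # total_components: 12 (RAID-6 of RAID-0 pairs)
```

So Stripe-Width=12 would spread the object across far more components, but the geometry of each leg, and therefore the space overhead, is untouched.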

In modern environments this generally isn't particularly useful (outside of VERY specific use-cases), as it was intended more for things like hybrid configurations, where the throughput of each capacity-tier HDD would have been the bottleneck and striping data across more of them could be advantageous (though even then it depended heavily on the IO access pattern, etc.).

IainBD

Thank you very much for the answers.
I am new to the vSAN side. Before our recent move into Azure VMware Solution, we used IBM SANs for over twenty years. It just strikes me as very strange that a RAID-6 array can only be six stripes/devices. With 16 hosts in the cluster, that is a lot of unnecessary space usage. Is there a specific technical reason vSAN has this limitation? Allowing wider RAID-6 arrays would be so much more efficient in terms of space usage. With 16 HCI ESXi hosts, each in its own fault domain, it would only make sense to be able to stripe the RAID-6 FTT=2 VMDK across a maximum of 12 hosts. We ran 8-drive RAID-5 arrays for years without issue. RAID-6 across twelve 3.2 TB NVMe drives, with the hosts connected via 2x 25 GbE, doesn't seem very risky to me.

dabrigo

Maybe you shouldn't think of vSAN as a traditional RAID but more like object storage, using fixed 4+2 erasure coding for FTT=2.