 vSAN ESA 5 node cluster

Ian Browne posted Oct 24, 2025 06:12 AM

I am wondering about the failure scenarios for a 5-node ESA cluster using RAID 5.  Obviously the placement scheme will be 2+1 from the start.  If I add a host, then after 24 hours I will get a notification to update the storage policy for the cluster, and placement will change to 4+1.  However, I am trying to follow this recommendation:

Recommendation: Build a cluster with at least one more host than the minimum required by your storage policy. Upon a sustained outage or maintenance of a host, this will allow vSAN to automatically repair the data impacted by the failed host and reestablish the prescribed level of resilience assigned by the storage policy. If there is not an available host to rebuild the data to, the object will be available, but will be in a degraded state, and may not have a sufficient resilience setting to maintain availability of that object upon another failure.

So the minimum is 4 hosts, and a build of 5 hosts will satisfy this criterion.  However, if I place a host in maintenance mode and drop another host, I no longer have a responsive cluster.  Shouldn't I?
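
Just to show my working, here is the host arithmetic I am doing in my head. This is purely a sketch of my own counting; the stripe widths are just the component counts for each RAID-5 scheme, and none of it comes from anything vSAN actually reports:

```python
# Back-of-the-envelope host counting for my scenario. The stripe widths are
# simply the component counts (data + parity) for each RAID-5 scheme, and
# "usable" ignores quorum/witness considerations entirely.

RAID5_2_PLUS_1 = 3  # 2 data + 1 parity components, each on its own host
RAID5_4_PLUS_1 = 5  # 4 data + 1 parity components, each on its own host

def usable_hosts(total, in_maintenance=0, failed=0):
    return total - in_maintenance - failed

def stripe_still_fits(stripe_width, usable):
    # Every component needs a distinct host, so the stripe only fits if
    # at least stripe_width usable hosts remain.
    return usable >= stripe_width

usable = usable_hosts(total=5, in_maintenance=1, failed=1)   # -> 3
print(stripe_still_fits(RAID5_2_PLUS_1, usable))             # True: 2+1 can still be placed
print(stripe_still_fits(RAID5_4_PLUS_1, usable))             # False: 4+1 could not
```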

Thanks.

Geogee

The minimum for an ESA cluster with RAID 5 is 3 hosts. So if you have 5 hosts at the start, place 1 host into maintenance mode, and 1 other host fails, all the VMs can remain compliant because of the 2+1 scheme in use, provided you have enough resources to satisfy it, and not only from the storage policy point of view but also in terms of space.

RAID 5 on an ESA cluster is adaptive and automatically switches between the 2+1 and 4+1 schemes. As you mentioned, that happens 24 hours after you add the 6th host. The policy switching is something different; it happens when you add a 7th host and the cluster is able to satisfy RAID 6.
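
To put some numbers on that adaptive behaviour, here is a quick sketch. The host-count thresholds are just the ones I described above, and the overhead figures are plain parity arithmetic, so treat it as an illustration rather than anything official:

```python
# Sketch of the adaptive RAID-5 behaviour described above: 2+1 below 6 hosts,
# 4+1 from 6 hosts onward, plus the raw-capacity overhead of each scheme.

def raid5_scheme(hosts):
    return (2, 1) if hosts < 6 else (4, 1)   # (data, parity) components

def capacity_overhead(data, parity):
    # Raw capacity consumed per unit of usable data.
    return (data + parity) / data

for hosts in range(3, 8):
    d, p = raid5_scheme(hosts)
    print(f"{hosts} hosts -> RAID-5 {d}+{p}, {capacity_overhead(d, p):.2f}x raw per usable TB")

# 3-5 hosts: 2+1 at 1.50x; 6+ hosts: 4+1 at 1.25x
```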

woonghee

I might be wrong, but at first, I also thought, “With five hosts, shouldn’t I be able to use a 4+1 RAID-5 policy?”
However, since this policy adjusts automatically, it actually only applies starting from six hosts.
I guess that’s probably to ensure the cluster can handle both a host failure and planned maintenance at the same time.
So effectively, you need at least six hosts to use the 4+1 RAID-5 policy, and honestly, I find it a bit disappointing that we can't use 4+1 RAID-5 with just five hosts!

Alexandru Capras

Agree with @Geogee.
If you apply auto-policy management to change to a 4+1 scheme with only 6 nodes, you won't be able to tolerate a failure while one node is in maintenance mode (MM).
A 4+1 RAID-5 layout can’t be satisfied with only 4 active nodes, so if you want to survive a failure while another host is in MM, you’ll need at least 7 nodes in the cluster.
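
To make the counting explicit, here is the arithmetic behind that; it is just my own tally of hosts, not an official sizing rule:

```python
# Hosts needed so a stripe can still be placed with one host in maintenance
# mode and one concurrent failure. Pure counting, nothing vSAN-specific.

def min_hosts_to_survive(stripe_width, in_maintenance=1, failures=1):
    return stripe_width + in_maintenance + failures

print(min_hosts_to_survive(3))  # RAID-5 2+1 -> 5 hosts (the OP's 5-node build)
print(min_hosts_to_survive(5))  # RAID-5 4+1 -> 7 hosts, as noted above
```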

Not sure, but vSAN will not automatically change this storage policy. It is up to the user to make the change.
https://blogs.vmware.com/cloud-foundation/2023/03/20/auto-policy-management-capabilities-with-the-esa-in-vsan-8-u1/