 vSAN 2 node - when primary fails VMs stay inaccessible

Jem posted Aug 09, 2024 07:19 AM

Hello community

Currently I'm working with a 2-node vSAN cluster. Health is 100%, both nodes see the witness just fine, and no other errors are observed.

I'm testing vSAN network loss (management and VM networks stay up); only the vSAN network is disconnected from the host. When the vSAN network is removed, VMs don't fail over to the secondary node - they just stay inaccessible. When I do the same procedure on the secondary node, VMs restart on the primary just fine.

The network is removed by virtually removing the adapters from the vSAN vSwitch. After removal, Physical disk placement shows the VM components on the disconnected server as still Active, while the components on the second, healthy node are shown as Absent.
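For what it's worth, the same component states can be checked from the host CLI instead of the vSphere Client. A rough sketch of the commands I mean (vmk names and output will differ per environment):

```shell
# List the vSAN object/component layout as seen by this host;
# component states (Active/Absent/Degraded) appear per object
esxcli vsan debug object list

# Show which vmkernel interfaces carry vSAN traffic on this host
esxcli vsan network list

# Overall cluster membership from this host's point of view
esxcli vsan cluster get
```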

HA is configured:

  • Host failure Response - Restart VMs
  • Host isolation - Power off and restart VMs
  • PDL, APD - Power off and restart VMs - Conservative
  • VM monitoring - Disabled
  • Admission control - all values are default
  • HB datastores - not defined

The RAID policy should be correct, and the primary node hosts the preferred components. vCenter and ESXi are at version 8.0.2, HPE image.

Have you seen such behaviour? What am I missing here? More information can be provided.

Jem  Best Answer

One of the things support suggested was to create separate vmk's and tag them for vSAN Witness traffic, which I did not know was a thing. So in my case there are now 2 vSwitches: 1 for vSAN (vSAN vmk) and 1 for VM, Management, etc. (which also now holds the Witness vmk).
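For anyone finding this later: the tagging support refers to is vSAN witness traffic separation, done per host via `esxcli`. A minimal sketch, assuming the new witness vmk is `vmk1` (the vmk name is an example, use your own):

```shell
# Tag vmk1 to carry vSAN witness traffic only
# (the data vmk keeps the regular "vsan" traffic type)
esxcli vsan network ip add -i vmk1 -T=witness

# Verify: the interface list should now show vmk1 with Traffic Type: witness
esxcli vsan network list
```

With this in place, witness heartbeats travel over the management-side vSwitch, independent of the data vSAN network.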

Now the situation is that when the primary node fully loses its vSAN network, VMs continue to run on that host because the vSAN Witness network is still present (on the mentioned separate vSwitch), which I assume is expected.