One of the things support suggested was to create separate vmks and tag them for vSAN Witness traffic, which I did not know was a thing. So in my case there are 2 vSwitches: one for vSAN (vSAN vmk) and one for VM, Management, etc. (which now also holds the Witness vmk). The tagging itself is a one-liner per host; see below.
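For anyone else trying this, a minimal sketch of the witness tagging via esxcli on each host (vmk1 is just a placeholder here; substitute whatever vmk you created on the second vSwitch):

    # Tag an existing VMkernel adapter for vSAN witness traffic
    esxcli vsan network ip add -i vmk1 -T=witness

    # Verify which traffic type each vmk is carrying
    esxcli vsan network list

This is the Witness Traffic Separation (WTS) mechanism, so the witness vmk can live on a different vSwitch/network than the data vmk.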
Now the situation is that when the primary node fully loses its vSAN network, VMs continue to run on that host because the vSAN Witness network is still present (on the separate vSwitch mentioned above), which I assume is expected.