Hello GatorMania93,
Welcome to Communities! Some useful info on participating here:
https://communities.vmware.com/docs/DOC-12286
"Been wrestling with this for weeks now."
Sorry to hear that, what have you tried/checked so far?
"I see that the witness host is running on partition 1 and my two vsan hosts are running on partition 2. Is this the cause of the failure?"
Cluster members need to be able to communicate with one another and should never be network partitioned.
This cluster is on 6.6 so going to assume Unicast.
Check the cluster config on all three nodes to ensure they are all *trying* to be part of the same cluster and that each reports Unicast Mode Enabled: true :
# esxcli vsan cluster get
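Once you have that output from all three nodes, the key fields just need to agree. A quick way to sanity-check this is sketched below; the host names and UUID values are hypothetical placeholders, so substitute what your hosts actually return:

```python
# Sketch: compare the key fields reported by 'esxcli vsan cluster get' on the
# three nodes. Host names and values below are hypothetical examples only.
configs = {
    "esx01":   {"Sub-Cluster UUID": "52e4f7d1", "Unicast Mode Enabled": "true"},
    "esx02":   {"Sub-Cluster UUID": "52e4f7d1", "Unicast Mode Enabled": "true"},
    "witness": {"Sub-Cluster UUID": "52e4f7d1", "Unicast Mode Enabled": "true"},
}

def check_cluster_config(configs):
    """Every node must report the same Sub-Cluster UUID and unicast enabled."""
    problems = []
    uuids = {c["Sub-Cluster UUID"] for c in configs.values()}
    if len(uuids) != 1:
        problems.append(f"nodes disagree on Sub-Cluster UUID: {sorted(uuids)}")
    for host, c in configs.items():
        if c.get("Unicast Mode Enabled", "").lower() != "true":
            problems.append(f"{host}: Unicast Mode Enabled is not true")
    return problems

print(check_cluster_config(configs) or "cluster config looks consistent")
```

If the witness reports a different Sub-Cluster UUID than the data nodes, it is not a member of the same cluster and that is your partition right there.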
Check the unicastagent lists on each node:
# esxcli vsan cluster unicastagent list
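With the three lists collected, the expected membership can be cross-checked programmatically. A sketch, using hypothetical host names and IPs (take the real IPs from your own output):

```python
# Sketch: verify each node's unicastagent list contains the other two nodes.
# Host names and IPs are hypothetical placeholders.
vsan_ips = {
    "esx01":   "192.168.10.11",
    "esx02":   "192.168.10.12",
    "witness": "192.168.10.13",
}
# IPs seen in 'esxcli vsan cluster unicastagent list' on each node:
agent_lists = {
    "esx01":   {"192.168.10.12", "192.168.10.13"},
    "esx02":   {"192.168.10.11", "192.168.10.13"},
    "witness": {"192.168.10.11", "192.168.10.12"},
}

def missing_entries(vsan_ips, agent_lists):
    """Return (node, missing_peer_ip) pairs where a peer entry is absent."""
    missing = []
    for node, peers in agent_lists.items():
        for other, ip in vsan_ips.items():
            if other != node and ip not in peers:
                missing.append((node, ip))
    return missing

print(missing_entries(vsan_ips, agent_lists) or "all unicastagent lists complete")
```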
Each node should have the 2 other nodes in its list (don't worry if the witness shows as 0000 for UUID, just look at the IPs; the entries should also state whether they have unicast enabled). If these are all good, then check the network connectivity from the vSAN-enabled vmk on each host to the IP of the vmk on the others:
Get the IP of the vSAN interface on each node:
# esxcfg-vmknic -l
Confirm how this is configured (in case you have multiple vmks or Witness Traffic Separation in use):
# esxcli vsan network list
Ping the other interfaces from data-nodes to Witness:
# vmkping -I vmk# <Other_nodes_vsan_IP>
Check this BOTH directions.
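To make sure no direction gets missed, you can enumerate every check up front. A sketch that builds the full vmkping matrix; the host names, vmk numbers, and IPs are hypothetical, so substitute the values from esxcfg-vmknic -l and esxcli vsan network list:

```python
# Sketch: list every vmkping check needed, in both directions.
# All names/IPs below are hypothetical placeholders.
nodes = {
    "esx01":   ("vmk1", "192.168.10.11"),
    "esx02":   ("vmk1", "192.168.10.12"),
    "witness": ("vmk1", "192.168.10.13"),
}

def ping_matrix(nodes):
    """Return (run_on_host, command) pairs covering every direction."""
    cmds = []
    for src, (vmk, _) in nodes.items():
        for dst, (_, ip) in nodes.items():
            if src != dst:
                # -I pins the ping to the source host's vSAN-tagged vmk
                cmds.append((src, f"vmkping -I {vmk} {ip}"))
    return cmds

for host, cmd in ping_matrix(nodes):
    print(f"on {host}: {cmd}")
```

For 3 nodes that is 6 pings; if any single one fails, you have found the broken path.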
If this fails, start looking at your network configuration and gateways. Other issues, such as a busted vmk interface, can occur on rare occasions, so removing and reconfiguring the vmk on the Witness might be an approach.
FYI, Witness appliances are very simple to redeploy in 6.6, and there is an in-built check for basic network configuration etc. when adding one to a node.
Bob