Hi,
Ran into the same issue. ESXi 5.5U2 with NFS datastores on NetApp CDOT v8.2 on a two-node FAS3250 cluster. Establishing a connection is no problem, but rebooting the ESXi host causes the NFS datastores to not mount. They are shown as "unmounted/inaccessible" in the vSphere Client, both native and web. What is very odd though is that if I go to browse them ... I can! So ESXi says they are not mounted, and an ls of /vmfs/volumes confirms this: it does not show any of the NFS datastores, neither the UUID nor the symbolic link from the datastore name to the UUID. But I can still browse the datastores from the clients. Very odd.
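For anyone wanting to confirm the same state from the ESXi shell, this lists each NFS datastore along with its Accessible and Mounted columns, which should show the mismatch for the affected datastores:

    esxcli storage nfs list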
So I did a 'grep -i nfs *.log' in the log directory (/var/log on ESXi) to see if any clue might show up - and indeed there is a clue:
syslog.log:2014-10-03T21:18:12Z jumpstart: unhandled exception whilst processing restore-nfs-volumes: Unable to resolve hostname 'cluster-name.domain.uri'
So I 'unmounted' the offending NFS datastore - now please note that all my other NFS datastores are defined with the IP address and NOT the DNS hostname. I have only the ONE datastore that uses the DNS hostname.
Then I rebooted. Bingo! All the datastores came up no problem.
Analysis: the datastore mounts are attempted at boot before the network path to the DNS server comes up. Thus the dependency on name resolution is not satisfied and the DNS-named datastore is not mounted. But sadly and very badly, NONE of the IP-addressed NFS datastores comes up either. Very much an all-or-nothing situation.
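If you want to confirm that name resolution is the culprit on your own host, you can check the configured DNS servers and try resolving the cluster name from the ESXi shell (nslookup should be available in the busybox environment on 5.5; the hostname below is just the one from my log line, substitute yours):

    esxcli network ip dns server list
    nslookup cluster-name.domain.uri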
Workaround/Fix #1: do not use the DNS hostname; instead use the IP address for your NFS datastores. At least until VMware can figure out what is really going on and fix it.
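For anyone who prefers the shell over the client, the remove/re-add by IP looks roughly like this (datastore name, share path, and address are placeholders, substitute your own, and make sure the datastore is idle before removing it):

    esxcli storage nfs remove -v my_datastore
    esxcli storage nfs add -H 192.168.1.50 -s /vol/my_volume -v my_datastore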
Workaround/Fix #2: add the IP address / hostname mapping to the ESXi host's /etc/hosts file.
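Something along these lines from the ESXi shell (the address is a placeholder; the hostname is the one from my log line). As far as I know /etc/hosts is included in the host's configuration backup on 5.5 so the entry should survive reboots, but verify on your build:

    echo "10.10.10.10  cluster-name.domain.uri" >> /etc/hosts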
HTH.
Cheers,
Ron Neilly
Systems Administrator II
UBC Okanagan
Kelowna, BC, Canada