This appears to be a point of great confusion on several forums I have visited. I recently completed some testing (while investigating another issue) on disk I/O performance across iSCSI, NFS, and local storage.
All testing was done on identical VMs, each running the latest Ubuntu LTS release. I used "dd if=/dev/zero of=test-disk-io.out bs=1G count=1 oflag=dsync" to write data to the virtual disk on each VM. Details on my setup are at the end of this post. Tests against the remote datastores (iSCSI and NFS) ran over a 10GbE SFP+ network.
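For reference, this is roughly how the test was run on each VM (the repeat loop and the test filename are just illustrative; the dd flags are the ones quoted above):

# Write 1 GiB of zeros with synchronous data writes (oflag=dsync),
# so the result reflects what actually reaches the datastore rather than the guest page cache.
for i in 1 2 3; do
    dd if=/dev/zero of=test-disk-io.out bs=1G count=1 oflag=dsync
    rm -f test-disk-io.out
done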
Here is what I found:
- Local Storage: 661 Mbps write to disk
- iSCSI Storage: 584 Mbps write to disk
- NFS: 240 Mbps write to disk
Based on this testing, it would seem (and makes sense) that running a VM on local storage is best in terms of raw performance; however, that is not feasible in every situation. The performance penalty when moving from local storage to iSCSI is noticeable but modest. For my purposes, iSCSI storage is a no-brainer: I can offload VM storage to my NAS while suffering only a minor performance hit.
Of course, the story changes entirely when comparing iSCSI (or even local storage) to NFS. While NFS has its perks, the way the NFS client is implemented in ESXi introduces more overhead and a larger performance impact. Regardless of the sync settings on the NAS, ESXi issues writes to NFS datastores with O_SYNC, which caps the performance you can get out of NFS. This is not the case with iSCSI: being block storage, an iSCSI LUN is formatted with the VMFS filesystem and managed exclusively by ESXi.
Because of this forced O_SYNC, every write to the NFS share must be committed synchronously, even if sync is disabled on the NAS side. A large part of this performance impact can be mitigated if the NAS provides an SSD-based cache for the pool backing the NFS datastore.
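If you want to see what that sync penalty looks like in isolation, you can compare buffered and synchronous writes from inside a guest. This only approximates the effect (it is not the same code path as the ESXi NFS client, and the filenames are just examples):

# Buffered write: data can land in the page cache and be flushed later.
dd if=/dev/zero of=test-buffered.out bs=1G count=1
# Synchronous write: data must be acknowledged by the storage before dd finishes,
# which is roughly the behaviour forced on NFS datastores.
dd if=/dev/zero of=test-sync.out bs=1G count=1 oflag=dsync
rm -f test-buffered.out test-sync.out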
Ultimately, this left me a little torn. I love the idea of NFS datastores for VMware and believe they are the future over iSCSI-based storage. However, the implementation of the NFS client in ESXi leaves a lot to be desired. While an NFS share is easier to deploy, manage, and maintain, it requires a rather beefy, performance-capable NAS; there is simply no good way to leverage NFS-based datastores unless your NAS has a large, high-speed cache. As for me, I still have (and use) all three options, depending on the situation at hand and the requirements of the VM I plan to deploy.