Hey vfk,
Much like you, I've mostly used NFS for ISO datastores and dev stuff and have mainly stuck with iSCSI and FCP in the past. One thing I do know is that NFS doesn't bind to any one vmk port and usually just picks the lowest-numbered one in that particular subnet. So let's say you have management on 192.168.1.x and you have four vmk's on 10.0.1.x (vmk2 - 10.0.1.52, vmk3 - 10.0.1.53, vmk4 - 10.0.1.54, vmk5 - 10.0.1.55), which is your storage IP range. When a traffic request goes to NFS it will probably pick vmk2 for most things until it can't get that path, then it will pick vmk3 and so on.
However, if you had your NFS storage on 192.168.1.x, it would probably go through your management vmk. I see this happen a lot when people don't put their NFS storage IPs in different networks from their management IPs, so traffic typically goes out the management vmk as it's the lowest number. However, if you were to isolate your vmk's like this (there's a quick sketch of the selection logic after the list):
vmk2 - 10.0.1.52
vmk3 - 10.0.2.52
vmk4 - 10.0.3.52
vmk5 - 10.0.4.52
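To make that behavior a bit more concrete, here's a rough Python sketch of the selection logic. This is just an illustration of the subnet matching, not the actual VMkernel routing code, and the vmk names/IPs are made up to match the examples above:

```python
# Rough illustration only - not actual VMkernel code. Shows why the NFS
# server's subnet determines which vmk port carries the traffic.
import ipaddress

# Hypothetical vmk layout (vmk name -> IP/prefix)
vmks = {
    "vmk0": "192.168.1.10/24",   # management
    "vmk2": "10.0.1.52/24",
    "vmk3": "10.0.2.52/24",
    "vmk4": "10.0.3.52/24",
    "vmk5": "10.0.4.52/24",
}

def pick_vmk(nfs_server_ip):
    """Return the lowest-numbered vmk whose subnet contains the NFS server,
    falling back to the management vmk (default route) if nothing matches."""
    target = ipaddress.ip_address(nfs_server_ip)
    candidates = [
        name for name, cidr in vmks.items()
        if target in ipaddress.ip_interface(cidr).network
    ]
    # Lowest vmk number wins when several interfaces share the subnet
    return min(candidates, key=lambda n: int(n[3:])) if candidates else "vmk0"

print(pick_vmk("10.0.2.200"))    # -> vmk3 (isolated storage subnet)
print(pick_vmk("192.168.1.50"))  # -> vmk0 (NFS sitting in the management subnet)
```

With everything in one storage subnet, the first vmk in that subnet carries pretty much all the NFS traffic; with one subnet per vmk (and one export/IP per subnet on the array side) you decide which vmk carries which datastore.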
You would have more control over where the traffic is going. Chris also did a good lab test on leveraging the VDS's Load Based Teaming (LBT) policy with vmk's / NFS in this article:
NFS on vSphere Part 4 – Technical Deep Dive on Load Based Teaming | Wahl Network
His finding was that LBT actually load balanced the NFS vmk's when loads were high, since LBT doesn't care about portgroups as a delimiting factor. Meaning you may be able to get the load balancing you're looking for with NFS without having to use a LAG/LACP trunk.
I'm a huge fan of LBT on the VDS as I think it does a great job and is REALLY easy to set up since it requires no configuration at the switch level. It is essentially Originating Virtual Port ID, except the host keeps track of where it's placing things and, if an uplink gets too saturated, it moves the assignment. You can even adjust the % level at which LBT moves things around.
Also, if you're moving to the VDS, another thing to look into is Network I/O Control (NIOC). In 5.x you can now create your own user-defined resource pools and assign them to port groups for more fine-grained control over how much bandwidth you want to give each one. With 10GbE becoming more popular and people just throwing everything into one big 10Gb bucket, I can see this becoming MUCH more useful.
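If it helps picture it, here's a tiny Python sketch of the kind of decision LBT makes under the hood. It's a simplified model, not VMware's actual algorithm, and the port names and utilization numbers are invented; the 75% figure is the default saturation threshold the post mentions being able to tune:

```python
# Simplified model of the LBT decision - when an uplink's utilization crosses
# the threshold, one of its ports (a vmk or VM vNIC) gets remapped to the
# least-loaded uplink. Not VMware code, just the idea.
SATURATION_THRESHOLD = 0.75  # default 75%; the adjustable knob mentioned above

# Hypothetical state: uplink -> list of (port, share of uplink bandwidth)
uplinks = {
    "vmnic0": [("vmk2-nfs", 0.50), ("vm-web01", 0.40)],   # 90% busy
    "vmnic1": [("vm-db01", 0.20)],                         # 20% busy
}

def rebalance(uplinks, threshold=SATURATION_THRESHOLD):
    """Move one port off any uplink that is over the threshold."""
    for uplink, ports in uplinks.items():
        load = sum(share for _, share in ports)
        if load > threshold and len(ports) > 1:
            # Pick the least-loaded uplink as the new home
            dest = min(uplinks, key=lambda u: sum(s for _, s in uplinks[u]))
            if dest != uplink:
                moved = ports.pop()          # remap one port's assignment
                uplinks[dest].append(moved)
                print(f"moved {moved[0]} from {uplink} to {dest}")

rebalance(uplinks)
# -> moved vm-web01 from vmnic0 to vmnic1
```

The real thing re-evaluates utilization periodically on the host, which is exactly why there's nothing to configure on the physical switch side.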
With that said, I would like to see what Chris thinks, as he has more experience with NFS and could probably point you in the right direction.
However, your idea of moving forward with the VDS is good. I too keep my management on a VSS and move everything else to the VDS. It's not that you can't have your management on the VDS, as I've done it a few times, but every now and then you run into a quirky problem where having your management on a VDS is a bit of a pain: something goes wrong and you need to reset your management network, you mess up a VDS migration and lose access to your host, or moving from one VDS to another or from one vCenter to another gets a bit trickier. For those reasons I personally keep my management on a standard VSS, but different strokes for different folks and it's really a preference.
Hope this has helped