Have a look at this thread (http://communities.vmware.com/message/1814909#1814909), and my comment on there regarding EqualLogics. Dell also has some excellent reference documentation on the topic, as well.
I can't necessarily speak to VMware recommendations (seeing as they have to remain multi-vendor neutral), but I can speak to Dell's, as I've set up EQLs and vSphere several times (just set up a PS4000XV, in fact).
Dell recommends a 1:1 ratio of physical NICs on your hosts to the number of controller NICs on your SAN(s). So with PS4000s, that means 2 physical NICs per server dedicated to iSCSI. From everything I've read, going above that doesn't really end up mattering, as one server can only communicate with a given SAN over two NICs at any one point in time. From there, Dell recommends a 1:1 ratio of VMkernel ports to physical NICs for multipathing: you dedicate 1 NIC to each VMkernel port. With that, you'll have properly multipathed storage. VMware Round Robin is fine, although if your version of vSphere supports third-party plugins, use EqualLogic's.
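To make the 1:1 setup concrete, here's roughly what it looks like from the vSphere 5 CLI. This is just a sketch — the vSwitch/port group names, vmk numbers, IP addresses, and the vmhba33 adapter name are placeholders; check yours with `esxcli iscsi adapter list` and `esxcli network ip interface list`.

```shell
# Create two iSCSI port groups, one VMkernel port in each
# (names, vmk numbers, and IPs below are examples only)
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-2 --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-2
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.10.12 --netmask=255.255.255.0 --type=static

# Bind both VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Round Robin on a volume (naa.XXXX is a placeholder device name)
esxcli storage nmp device set --device naa.XXXX --psp VMW_PSP_RR
```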
Also, management traffic is pretty minimal, so in most scenarios I usually don't see that dedicated to its own vSwitch. With 16 NICs, I would do the following:
Per Host:
- 6 NICs for vMotion (you only really need 2 for physical redundancy, but vSphere 5 actually takes advantage of multiple NICs for vMotion, whereas previous versions only ever used one at a time). The more NICs you give it, the faster your vMotions will be. I've tested this, and it is indeed much faster as you throw more NICs at it. You have NICs to spare, so why not.
- 2 NICs for iSCSI - Again, Dell's recommendation is 1:1. I don't think you'd be hurting anything with more, but I don't think you're gaining anything either. Perhaps Dell has some recommendations when using multiple EQL units.
- 8 (all the rest) for VM traffic.
- Remember to spread your vSwitches' uplinks across several physical NIC cards. You don't want one PCI NIC card to die and completely kill your environment (e.g. make sure your iSCSI uplinks come from more than one physical card).
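On the vMotion bullet above: multi-NIC vMotion in vSphere 5 is set up much like iSCSI binding — one VMkernel port per NIC, each port group pinned to a different uplink. A hedged sketch (all names and numbers are placeholders for your environment; vMotion itself still gets enabled on each vmk through the client):

```shell
# Two vMotion port groups on the same vSwitch, one VMkernel port each
# (port group names, vmk numbers, and vmnic names are examples)
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-1 --vswitch-name=vSwitch2
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-2 --vswitch-name=vSwitch2
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=vMotion-1
esxcli network ip interface add --interface-name=vmk4 --portgroup-name=vMotion-2

# Flip active/standby so each port group prefers a different pNIC;
# if one NIC dies, its vMotion port fails over to the other.
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-1 --active-uplinks=vmnic4 --standby-uplinks=vmnic5
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-2 --active-uplinks=vmnic5 --standby-uplinks=vmnic4
```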
Have a look at some of the Dell documentation, because it's not just a matter of assigning NICs to vSwitches and calling it a day. Also keep in mind that your vSwitches need to be configured exactly the same on all hosts (even named the same) in order for vMotion to work properly. dvSwitches help to eliminate that issue, and make config easier.
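One quick sanity check once it's all configured: from each host, verify that every iSCSI VMkernel port can actually reach the EqualLogic group IP (vmk names and the group address below are examples):

```shell
# Ping the EQL group IP out of each bound VMkernel interface
vmkping -I vmk1 10.0.10.100
vmkping -I vmk2 10.0.10.100
```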
P.S. I have to disagree with some of Virtualinfra's comments regarding teaming on the vSwitches. When used for iSCSI, each VMkernel port group should have only 1 active NIC (with the other uplinks set to unused, not standby) — no teaming.
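For reference, that policy looks like this from the CLI — each iSCSI port group lists exactly one active uplink, and uplinks named in neither the active nor standby list end up unused (port group and vmnic names are placeholders; iSCSI port binding will in fact refuse a port group that has more than one active uplink):

```shell
# One active NIC per iSCSI port group, nothing in standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic3
```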