Which vNIC did you use in the VMs? For VM-to-VM traffic, the physical network has little to do with it. That is, unless you're not using shared storage and the VMs are in different datacenters (so physically separated from each other). If they're on the same datastore set and cluster, i.e., the same infrastructure, then using a VMXNET3 vNIC should get you the max performance available. I've seen those vNICs report 10Gb link speeds even on Gb LANs. You'll see the best performance going VM to VM, with more 'normal' rates when traffic crosses the physical LAN... If the VMs are on the same vSwitch (or distributed switch), you'll have max performance right out of the gate...
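FWIW, you can confirm the vNIC type and reported link speed from inside a Linux guest; a VMXNET3 adapter will usually report 10Gb/s regardless of the physical uplink. A quick sketch, assuming a Linux VM with `ethtool` installed and the interface named `eth0` (your interface name may differ):

```shell
# Check which driver the interface is using -- a VMXNET3 vNIC shows "vmxnet3"
ethtool -i eth0 | grep driver

# Check the negotiated link speed -- vmxnet3 typically reports 10000Mb/s
# even when the physical LAN is only 1Gb
ethtool eth0 | grep Speed
```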
BTW, if you configured the vSwitch for jumbo frames, you really shouldn't need to configure the VM's vNIC as well. What you're probably running into, when you try to force the VMs to match the MTU, is contention for resources that's better handled by the hypervisor.
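For reference, the jumbo-frame setting lives on the vSwitch itself and can be set from the ESXi shell. A sketch, assuming a standard (non-distributed) vSwitch named `vSwitch0`:

```shell
# Enable jumbo frames (MTU 9000) on the standard vSwitch
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Verify the change -- the output includes the vSwitch's current MTU
esxcli network vswitch standard list --vswitch-name=vSwitch0
```

On a distributed switch you'd set the MTU through vCenter instead, under the dvSwitch's advanced properties.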
Maybe we can get one of the VMware techs to post more technical details on why it works this way... or one of the other 'big brains' could chime in. I haven't dived too deep into the code under the networking items so far. Just haven't had a need to...