It really depends on the application and the platforms. I have been doing a lot of benchmark testing lately, and some Linux / Windows / application combinations perform the same, some perform better, and some perform worse. You just have to test it to know; there are a lot of odd interactions here.
For example, some TCP processing rules (notably Nagle's algorithm) require senders to hold outbound data until a full-sized segment is ready or until a timer has been triggered, and with jumbo frames the segment is larger, so it takes longer to fill before the data goes out. This can make chatty applications much more sluggish.
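If you suspect this is what is hurting a chatty application, the usual workaround is to disable the coalescing per socket with TCP_NODELAY. A minimal sketch (standard Python sockets, no application-specific assumptions):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Nagle's algorithm (RFC 896) holds small writes until a full segment
# accumulates or an ACK arrives; a jumbo-frame MSS makes "full" bigger,
# so chatty request/response traffic can stall longer before being sent.
# Disabling it per-socket sidesteps the hold:
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY:", nodelay)  # non-zero means Nagle is disabled
s.close()
```

The trade-off is more small packets on the wire, so only do this for latency-sensitive request/response traffic, not bulk transfers.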
Also, if the NIC and TCP/IP stack are capable of processing large frames, they will often just accept them rather than drop them. And some TCP/IP stacks probe for the path's maximum capacity rather than limiting themselves to the advertised value. In practice, this means you may be using jumbo frames for some applications even when they are not explicitly enabled in the guest OS but are enabled on the infrastructure switches.
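One way to check what segment size a connection is actually using is to ask the stack directly. A minimal, Linux-oriented sketch over loopback (socket.TCP_MAXSEG is platform-dependent and may be missing or behave differently on other OSes; the loopback MSS shown here will be large because of the loopback MTU, not jumbo frames):

```python
import socket

# Spin up a loopback listener so we can inspect a real, connected TCP socket.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()

# TCP_MAXSEG reports the maximum segment size the stack is actually using
# for this connection; on a jumbo-capable path it can exceed the usual
# ~1460 bytes even if you never enabled jumbo frames in the guest OS.
mss = client.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
print("effective MSS:", mss)

client.close()
conn.close()
server.close()
```

Running the same check against a real peer across your switches is what tells you whether jumbo frames are quietly in play.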
Test your applications and see what they are doing.