What you’re seeing isn’t something you can bypass by running an older local ovftool binary on ESXi. The 16–18 Mbps cap is enforced inside the ESXi management TCP/IP stack, specifically on vmk0. It applies to any HTTPS-based OVF export path that touches the management stack, regardless of where ovftool itself is running. Even if you managed to run the tool directly on the host, the traffic would still be shaped by the same QoS rule on the vmk0 path. Loopback or not, it doesn’t escape the management network pipeline.
This is why even large exports stall at ~16 Mbps and why moving the binary around doesn’t change the behavior. The limit isn’t in ovftool, it isn’t in the OpenSSL-linked binaries, and it isn’t tied to the portgroup; it’s applied inside the ESXi management datapath. That’s also the reason NFS or any other datastore traffic works normally over 10/25 GbE while OVF exports don’t.
The only reliable way around it is to avoid the management stack entirely. The options that actually change throughput are: running an export appliance (a VM) on the same host, copying a powered-off VM's files via direct datastore access, or performing the transfer over a custom, non-vmk0 TCP/IP stack. As long as the export is HTTPS over vmk0, the shaping applies and you won't see more than the ~16 Mbps you're hitting now.
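For the second and third options, a rough sketch of what the commands look like; the host name, datastore name, VM folder, portgroup name, stack name, and IP addressing below are all placeholders you'd substitute for your environment, and the SCP route assumes SSH is enabled on the host:

```shell
# Option: copy a powered-off VM's files straight off the datastore over
# SSH/SCP -- this never touches the HTTPS/NFC export path at all.
# "esxi-host", "datastore1", and "myvm" are placeholder names.
scp -r root@esxi-host:/vmfs/volumes/datastore1/myvm/ ./myvm-export/

# Option: create a custom TCP/IP stack with its own vmkernel interface,
# so the transfer traffic never rides vmk0's management stack.
# "exportStack", "vmk2", "Export-PG", and the addressing are examples.
esxcli network ip netstack add -N exportStack
esxcli network ip interface add -i vmk2 -p Export-PG -N exportStack
esxcli network ip interface ipv4 set -i vmk2 -t static \
    -I 10.0.0.10 -N 255.255.255.0
```

Note the SCP route only makes sense for a powered-off VM (consistent disk state), and you still end up with raw VMDKs rather than a compressed OVF/OVA, so you'd convert afterwards on the receiving side if you need that format.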
Until Broadcom removes or relaxes that limiter in the 6.7/7.0 code, ovftool will behave the same whether it runs locally or remotely. That's why your 50 GB export takes several hours even with max compression and parallel threads.
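The "several hours" figure checks out with back-of-the-envelope arithmetic; a quick sketch using the ~16 Mbps cap and 50 GB (treated as GiB here) from this thread:

```python
# How long a 50 GiB export takes when shaped to ~16 Mbps.
size_bits = 50 * 8 * 1024**3   # 50 GiB expressed in bits
rate_bps = 16 * 1_000_000      # ~16 Mbps cap, in bits per second
hours = size_bits / rate_bps / 3600
print(f"{hours:.1f} hours")    # roughly 7.5 hours
```

Compression shortens that only to the extent it shrinks the bytes on the wire; the per-second cap itself never moves, which matches what you're seeing with parallel threads making no difference.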