VMware vSphere


 ESXi/vSphere 6.7.0u3 local ovftool - by vmk0 QOS bandwidth restriction ?

bseklecki_ge posted Dec 02, 2025 08:33 AM

Good day all:


If I manage to find an old version of OVFTOOL that will execute locally on old ESXi/vSphere 6.7.x (**), will traffic sourced from / destined to the loopback0 interface on the GNU/Linux kernel in the supervisor still be subject to the 16 Mbps QoS rate limit (*) that seems to be hard-coded on [ vmk0 / defaultTcpipStack ], restricting bandwidth for all management traffic?

I ask because, if I can bypass the vmk0 QoS restriction (perhaps it only applies to the management PortGroup?), then I can set the destination to an NFS datastore/mount connected via 25 Gbps. Otherwise, OVF exports are rate limited to below 100 Mbps Ethernet speeds.

At the moment, a 50 GB OVF/OVA export task with the 16 Mbps QoS limit applied to HTTPS traffic (OVFTOOL executing remotely) will take [ up to 12 hours ], even with parallel threads and compression level set to 9.
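For reference, the arithmetic behind that estimate can be checked with a quick shell calculation (the 50 GB size and 16 Mbps cap are the figures from this post; the raw wire time comes out just under 7 hours, so snapshot consolidation and compression overhead presumably account for the rest of the 12):

```shell
# Raw wire time for a 50 GB export at the ~16 Mbps vmk0 cap
SIZE_GB=50
RATE_MBPS=16
TOTAL_BITS=$(( SIZE_GB * 1000 * 1000 * 1000 * 8 ))
XFER_SECONDS=$(( TOTAL_BITS / (RATE_MBPS * 1000 * 1000) ))
echo "${XFER_SECONDS} s, ~$(( XFER_SECONDS / 3600 )) h"   # 25000 s, ~6 h
```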

Merci,

~Brian

(**) Most RHEL ovftool binaries have an LDD failure linking appropriately to OpenSSL libssl - https://community.broadcom.com/vmware-cloud-foundation/communities/community-home/digestviewer/viewthread?MessageKey=29cfe713-6d23-42a0-af14-ca0d39d72307&CommunityKey=0c3a2021-5113-4ad1-af9e-018f5da40bc0

(*) https://www.reddit.com/r/vmware/comments/ljejdc/ovftool_export_speed_limited_to_18mbps_how_can_i/

Andrea Consalvi

What you’re seeing isn’t something you can bypass by running an older local ovftool binary on ESXi. The 16–18 Mbps cap is enforced inside the ESXi management TCP/IP stack, specifically on vmk0. It applies to any HTTPS-based OVF export path that touches the management stack, regardless of where ovftool itself is running. Even if you managed to run the tool directly on the host, the traffic would still be shaped by the same QoS rule on the vmk0 path. Loopback or not, it doesn’t escape the management network pipeline.

This is why even large exports stall at ~16 Mbps and why moving the binary around doesn’t change the behavior. The limit isn’t in ovftool, it isn’t in the OpenSSL-linked binaries, and it isn’t tied to the portgroup; it’s applied inside the ESXi management datapath. That’s also the reason NFS or any other datastore traffic works normally over 10/25 GbE while OVF exports don’t.

The only reliable way to get around it is to avoid the management stack entirely. Using a VM-based export solution inside the same host, exporting from a powered-off VM using direct datastore access, or performing the transfer over a non-vmk0 TCP/IP stack are the options that actually change the throughput. As long as the export is HTTPS over vmk0, the shaping applies and you won’t see more than the ~16 Mbps you’re hitting now.
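A rough sketch of the "non-vmk0 TCP/IP stack" option, using standard esxcli commands on the ESXi host. The stack name, vmk number, portgroup name, and addresses below are placeholders, not from this thread, and whether a given traffic type actually uses the new stack depends on how the service binds:

```shell
# Create a custom TCP/IP stack, separate from defaultTcpipStack
esxcli network ip netstack add -N "transferStack"

# Add a new vmkernel interface on that stack (vmk2 and the portgroup
# "Transfer-PG" are hypothetical names for this example)
esxcli network ip interface add -i vmk2 -p "Transfer-PG" -N "transferStack"

# Give it a static address on the storage/transfer network
esxcli network ip interface ipv4 set -i vmk2 -I 10.0.0.5 -N 255.255.255.0 -t static
```

Traffic routed out via vmk2 then traverses the custom stack rather than the shaped vmk0/defaultTcpipStack path; HTTPS-based export traffic that is hard-bound to the management stack would still be unaffected by this.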

Until Broadcom removes or relaxes that limiter in 6.7/7.0 code, ovftool local or remote will behave the same. That’s why your 50-gig export takes several hours even with max compression and parallel threads.

bseklecki_ge

Thanks for clarifying, 100%; much better than 100% of the available AI-enhanced search engines.

Additional follow-up question: does vSphere 8.x manifest the same problem?

Someone who knows a project manager over there might want to give them a nudge and ask them to bump it a wee bit higher than 16 Mbps. On T-Mobile 5G on a UC band, I can download stuff over the Internet faster than I can transfer my OVAs  :~{

Even SFR in France is faster, and friend... let me tell you: it is not every day I can say something nice about SFR. Vive la France!

bseklecki_ge

In the end, it is faster to follow a manual procedure than to rely on OVFTOOL:

  • [Optional] Delete any snapshots
  • Copy the entire VMDK in "flat" mode (20 GB becomes 512 GB) to a 25/40 Gbps-connected NAS (NFS): vmkfstools -i "/vmfs/volumes/X/Y.vmdk" "/vmfs/volumes/NAS..."; this will be done in less than 30 seconds
  • Connect a high-speed GNU/Linux or Windows utility host to the same NFS share and run the disk through QEMU's qemu-img[.exe]: qemu-img convert -p -O vmdk Y.vmdk Y_Sparse.vmdk (this process will take about 45 minutes; adjust the -S flag if you do not have 4K sector sizes on your VMDK)
  • Download the [ .ovf ] and [ .mf ] files for the VM via the ESXi/vSphere web portal export function
  • Combine it all in a single directory and update the SHA1 entries in the [ .mf ] (especially for the disk, but also the .ovf if you need to manually tune anything)
  • Tar it all up using GNU tar or 7-Zip
  • Optionally compress it all with 7-Zip Ultra or gzip -9

It seems like a lot of steps, but you'll be done long before your ovftool export even reaches 1 GB transferred.
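The steps above can be sketched as a single script. This is a hedged sketch, not a turnkey implementation: the VM name, source path, and NAS mount point are placeholders (the original post elides the real paths), steps 1 and 2 run on different hosts, and the manifest line format assumes the usual `SHA1(file)= hash` layout ESXi writes into the .mf:

```shell
#!/bin/sh
# Placeholders: VMNAME, SRC, and NAS are hypothetical names for this example.
VMNAME="Y"
SRC="/vmfs/volumes/X/${VMNAME}.vmdk"
NAS="/vmfs/volumes/NAS"             # the 25/40 Gbps NFS mount

# 1. On the ESXi host: clone the disk to the NAS (produces a flat copy)
vmkfstools -i "$SRC" "${NAS}/${VMNAME}.vmdk"

# 2. On the utility host with the same NFS share mounted:
#    convert the flat disk back to a sparse VMDK
qemu-img convert -p -O vmdk "${VMNAME}.vmdk" "${VMNAME}_Sparse.vmdk"

# 3. With the portal-exported .ovf/.mf copied into the same directory,
#    refresh the manifest entry for the replaced disk
DISK_SHA1=$(sha1sum "${VMNAME}_Sparse.vmdk" | awk '{print $1}')
sed -i "s/^SHA1(${VMNAME}_Sparse.vmdk)=.*/SHA1(${VMNAME}_Sparse.vmdk)= ${DISK_SHA1}/" "${VMNAME}.mf"

# 4. Bundle as an OVA (a plain tar archive; the .ovf should come first)
tar -cf "${VMNAME}.ova" "${VMNAME}.ovf" "${VMNAME}.mf" "${VMNAME}_Sparse.vmdk"
gzip -9 "${VMNAME}.ova"             # optional compression step
```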

CAVEAT: It occurs to me that the QoS policies related to network bandwidth on the vSphere Domain0 management GNU/Linux may not be the only bottleneck/choke point: implicit or explicit CPU and disk I/O restrictions are probably also a problem, as the VMDK is processed for snapshot consolidation and compression before it is converted into a byte stream to be encapsulated in HTTPS.