Hi folks,
I am having a strange issue with my network bandwidth.
This is my setup.
- Two-node ESXi 8 U3 cluster.
- Each node has multiple physical NICs going to different subnets (2x 10 Gbit on network 1, 2x 10 Gbit on network 2, 2x 100 Gbit on network 3).
- Everything is set up on a distributed vSwitch (vDS).
- All guest VMs run Windows Server 2022 and use VMXNET3 interfaces, one per physical network.
Now I am running network bandwidth tests with iperf3.
These are my results:
Test 1: iperf3 directly in the ESXi shell from ESXi 1 to ESXi 2 => 10 Gbit/s, as expected
Test 2: iperf3 from VM 1 to VM 2 residing on the same host => 10 Gbit/s, as expected
Test 3: iperf3 from VM 1 to VM 2 on different hosts => only 2.5 Gbit/s instead of the expected 10 Gbit/s
Test 4: iperf3 from VM 1 to ESXi 2 (again, the VM is not on this host) => only 2.5 Gbit/s instead of the expected 10 Gbit/s
=> So as long as VM traffic stays within one ESXi host I get the full 10 Gbit/s, but as soon as it leaves the host I only get 2.5 Gbit/s. It feels like a hidden boundary that will not let me go above 2.5 Gbit/s.
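For reference, a sketch of the iperf3 invocations behind those tests (the IP addresses are placeholders for my test machines, not my real ones):

```shell
# Receiver side (VM 2 or ESXi 2): start iperf3 in server mode
iperf3 -s

# Sender side (VM 1 or ESXi 1): single TCP stream, 30 seconds
iperf3 -c 192.168.1.20 -t 30

# Variant with parallel streams, to rule out a single-flow limit
iperf3 -c 192.168.1.20 -t 30 -P 4
```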
I ran several more tests, and they all show the same picture:
=> It does not matter which VMs I use, it is always the same
=> It does not matter if I run iperf3 against another client on the network (outside my VMware cluster), again only 2.5 Gbit/s
=> Switched to a standard vSwitch, again only 2.5 Gbit/s
=> Even passed the physical 100 Gbit NIC through to my VM, again only 2.5 Gbit/s
=> Increased the virtual speed of my VMXNET3 card in the .vmx file, again only 2.5 Gbit/s
=> Enabled a network reservation in the VM settings, again only 2.5 Gbit/s
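In case it helps, these are the checks I can run next on each host to see the negotiated uplink speeds and which uplink the traffic actually uses (stock ESXi CLI; vmnic0 is just an example uplink name):

```shell
# Negotiated link speed, driver, and link state for every physical uplink
esxcli network nic list

# Details for a single uplink (speed, auto-negotiation, driver info)
esxcli network nic get -n vmnic0

# Uplink assignment of the distributed vSwitch
esxcli network vswitch dvs vmware list

# Live per-NIC throughput during an iperf3 run: press 'n' for the network view
esxtop
```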
What am I doing wrong? Is there some hidden setting that throttles my bandwidth as soon as traffic leaves the VM and the ESXi host?
I have not changed any settings in my cluster apart from enabling HA and DRS; no cluster-level network reservations enabled, etc.
Thx,
Marcus