VMware vSphere

  • 1.  Cannot set jumbo frames in vCenter

    Posted Sep 06, 2023 08:39 AM

    System setup:
    ESXi (8.x) cluster with 3 hosts, each with 2x 10GB adapters (plus a few 1GB adapters)
    ESXi vSwitches attached to 10GB adapters configured with MTU 9000
    I use iSCSI; all storage VMkernel adapters and everything on the SAN have jumbo frames working fine.
    ESXi management kernel-adapter on the same 10GB adapter as vCenter

    vCenter is tested as it was installed, with a vmxnet3 adapter and the guest OS set to "Other Linux 6.x"; like I said, this is out of the box. Changing the guest OS to Photon OS has no effect on the MTU; it's still 1484. vCenter is attached to a port group connected to a 10GB adapter.

    I test the max MTU from the backup server (which also has 10GB adapters) using "ping xx.xx.xx.xx -l 8972 -f"
    I can successfully ping the ESXi management adapter with an 8972-byte packet size on the same host where vCenter runs
    I can NOT ping vCenter with a packet size larger than 1484 (which strangles my backup speed)
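    For anyone repeating the test: the 8972-byte payload is deliberate, since 8972 bytes of ICMP data plus 8 bytes of ICMP header and 20 bytes of IP header add up to exactly 9000. A sketch of the equivalent probes (the addresses are placeholders):

    ```shell
    # Windows: -l sets the ICMP payload size, -f sets Don't Fragment.
    # 8972 payload + 28 bytes of headers = 9000-byte packet.
    ping 10.0.0.10 -l 8972 -f

    # Linux equivalent: -s sets the payload size, -M do forbids fragmentation.
    ping -s 8972 -M do 10.0.0.10
    ```

    If the reply is "Packet needs to be fragmented" (or "Message too long" on Linux), something along the path still has a smaller MTU.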

    All I get from Google is 1000 sites explaining "this is how you configure MTU in ESXi", nothing specific about vCenter.



  • 2.  RE: Cannot set jumbo frames in vCenter

    Posted Sep 06, 2023 09:24 AM
    This will be the VMkernel adapter you are using for vCenter Server communications that affects its MTU.

    Enabling Jumbo Frames on a VMkernel port from the vCenter Server
    In the vSphere Web Client, navigate to the host.
    On the Configure tab, click VMkernel Adapters.
    Select the VMkernel adapter and click Edit.
    Set the MTU value to 9000. Note: You can increase the MTU size up to 9000 bytes.
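    The same change can also be made from the ESXi shell; a minimal sketch, assuming vmk0 is the relevant VMkernel adapter and vSwitch0 its standard vSwitch (the vSwitch MTU must be raised as well, or the vmk setting alone won't help):

    ```shell
    # Raise the MTU on the standard vSwitch first.
    esxcli network vswitch standard set -v vSwitch0 -m 9000

    # Then raise it on the VMkernel interface.
    esxcli network ip interface set -i vmk0 -m 9000

    # Verify: the MTU column should now read 9000.
    esxcli network ip interface list
    ```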

    More information can be found here: https://infohub.delltechnologies.com/l/smartfabric-services-with-multisite-vsan-stretched-cluster-deployment-guide/configure-jumbo-frames-and-vmotion-network-on-the-services-host


  • 3.  RE: Cannot set jumbo frames in vCenter

    Posted Sep 06, 2023 09:49 AM

    How am I supposed to connect vCenter (a virtual machine) to a VMkernel adapter?

    vCenter is a virtual machine and can only be connected to normal port groups, which are connected to a vSwitch, etc.
    VMkernel adapters are also connected to port groups and vSwitches, but a port group with a VMkernel adapter cannot be used by a VM.



  • 4.  RE: Cannot set jumbo frames in vCenter
    Best Answer

    Posted Sep 06, 2023 09:54 AM

    Hi,

    check out the following link:

    Photon OS Network Configuration

    You have to reconfigure the vCSA to use jumbo frames, and to be honest, I don't think that's a good idea.

    Simply because I see the risk that such a config change wouldn't survive an update/upgrade.

    May I ask why you want to use jumbo frames with the vCSA?
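    For completeness, here is roughly what that reconfiguration would look like inside the appliance. This is a sketch under assumptions, not a supported procedure: it presumes the vCSA's NIC is managed by systemd-networkd via /etc/systemd/network/10-eth0.network (the filename can differ between releases), and, as noted above, the change may not survive an update/upgrade:

    ```ini
    # /etc/systemd/network/10-eth0.network (filename is an assumption;
    # keep the existing [Match] and [Network] sections as they are)
    [Match]
    Name=eth0

    [Link]
    MTUBytes=9000
    ```

    After editing, `systemctl restart systemd-networkd` would apply it, and `ip link show eth0` should then report mtu 9000.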



  • 5.  RE: Cannot set jumbo frames in vCenter

    Posted Sep 06, 2023 10:20 AM

    From my understanding, backup traffic passes through vCenter between the host and the backup server.
    The backup server is Veeam, so it would be using the standard VMware API to talk to vCenter.

    ... there is nothing I would like more than to be wrong about this; it doesn't seem logical, but it's the only conclusion I can reach after testing. It would make more sense if vCenter only brokered the connection and the actual backup traffic went directly between the ESXi host and the backup server. If that's the case, then there is indeed no need to configure jumbo frames in vCenter.



  • 6.  RE: Cannot set jumbo frames in vCenter

    Posted Sep 06, 2023 10:31 AM

    Hi,

    the backup is handled between the ESXi server that runs/owns the VM and the backup server.

    vCSA is only used to figure out which ESXi server the VM is running on, so that the backup server reaches out to the right ESXi server.



  • 7.  RE: Cannot set jumbo frames in vCenter

    Posted Sep 06, 2023 11:26 AM

    Hey, it certainly helps, and that's how I was hoping it worked. Thanks for confirming it!

    My issue has been resolved. It came down to the ESXi server insisting on using vmnic0 (also vmk0), a 1GB adapter, for management traffic, even though vmnic0 had no vmk adapter configured to handle management, and despite the fact that I had a different (10GB) adapter configured to handle management. This meant that all traffic to/from the ESXi host went through the 1GB vmnic.

    I set up a new port group connected to a 10GB adapter, pointed vmk0 there, and re-enabled management on it. I now have two management vmk adapters, each connected to a 10GB NIC. Backup traffic now runs at 10GB with jumbo frames, so I'm happy.
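    For anyone hitting the same thing, a hedged sketch of how to verify such a fix from the ESXi shell (the vSwitch name vSwitch1 is an assumption):

    ```shell
    # List VMkernel interfaces with their port groups, MTU, and enabled
    # state, to confirm vmk0 now sits on the 10GB-backed port group.
    esxcli network ip interface list

    # Confirm which physical uplinks back that vSwitch.
    esxcli network vswitch standard list -v vSwitch1

    # Tag the interface for management traffic (equivalent to ticking
    # the "Management" service in the UI).
    esxcli network ip interface tag add -i vmk0 -t Management
    ```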

    I'm particularly happy to have discovered iperf in ESXi; using it together with iperf for Windows on my backup server to verify transfer speed helped A LOT!
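    The ESXi-side iperf is a handy but unofficial trick; a sketch, assuming the vSAN-bundled binary (the path varies by build, and on newer ESXi versions the binary may need to be copied first so the execInstalledOnly policy doesn't block it):

    ```shell
    # On the ESXi host: copy and run the bundled iperf3 as a server,
    # binding it to the vmk IP you want to test (placeholder address).
    cp /usr/lib/vmware/vsan/bin/iperf3 /usr/lib/vmware/vsan/bin/iperf3.copy
    /usr/lib/vmware/vsan/bin/iperf3.copy -s -B 10.0.0.5

    # On the Windows backup server, run the client for 30 seconds:
    iperf3.exe -c 10.0.0.5 -t 30
    ```

    You may also need to temporarily open or disable the ESXi firewall for the test port; remember to revert it afterwards.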

    Thanks again for the help!