VMmark

Some issues happen when setting MTU to 9000

jiang jianfeng posted Feb 25, 2025 09:50 AM

I set the MTU of the workload vSwitch to 9000, and I set the MTU of ens192 on all the VMs to 9000 as well. I tried ping (8000 bytes) and SSH between the VMs, and both worked fine.
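As a minimal sketch of that setup and check (assuming a standard vSwitch; the vSwitch name below is a placeholder, and ens192 is the guest device mentioned above):

    # On the ESXi host: set the workload vSwitch MTU (vSwitch name is a placeholder)
    esxcli network vswitch standard set -v vSwitch1 -m 9000

    # Inside each workload VM (Linux guest): set the guest NIC MTU
    ip link set dev ens192 mtu 9000

    # Verify end to end with a non-fragmenting ping; 8972 = 9000 - 20 (IP) - 8 (ICMP)
    # Replace <peer-VM-IP> with the other VM's address
    ping -M do -s 8972 <peer-VM-IP>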

But the VMmark test results all failed.

How should I change the MTU of the workload network?

VMmark4-Tilescore.py : v1.0.4 09192024
Tiles = 1.0 : Full Tiles = 1 : Fractional Tiles = 0.0
Referencing_tile: 0.
First Sample       : 1740450120 : Mon Feb 24 18:22:00 2025 PST : Tue Feb 25 02:22:00 2025 UTC
Info: t = 1 : Time = 1740450180 : Full Tiles = 1 : Fractional Tiles = 0.0 : App Workloads = 10
Run_start          : 1740450120 : Mon Feb 24 18:22:00 2025 PST : Tue Feb 25 02:22:00 2025 UTC
Time_start         : 1740450180 : Mon Feb 24 18:23:00 2025 PST : Tue Feb 25 02:23:00 2025 UTC
Time_end           : 1740460560 : Mon Feb 24 21:16:00 2025 PST : Tue Feb 25 05:16:00 2025 UTC
Run_end            : 1740460920 : Mon Feb 24 21:22:00 2025 PST : Tue Feb 25 05:22:00 2025 UTC
Duration           : 173.00 minutes
Steady_state_start : 1740451980 : Mon Feb 24 18:53:00 2025 PST : Tue Feb 25 02:53:00 2025 UTC
Steady_state_end   : 1740459180 : Mon Feb 24 20:53:00 2025 PST : Tue Feb 25 04:53:00 2025 UTC
Phase_0_begin      : 1740451980 : Mon Feb 24 18:53:00 2025 PST : Tue Feb 25 02:53:00 2025 UTC
Phase_1_begin      : 1740454380 : Mon Feb 24 19:33:00 2025 PST : Tue Feb 25 03:33:00 2025 UTC
Phase_2_begin      : 1740456780 : Mon Feb 24 20:13:00 2025 PST : Tue Feb 25 04:13:00 2025 UTC


TILE_0_Scores:       WVAuctionVM    WVAuctionK8S   DVDStoreA   DVDStoreB   DVDStoreC   NoSQLBenchA   NoSQLBenchB   NoSQLBenchC   SocialNetwork   Standby
p0                       9428.28          465.37     2171.35     1499.62      986.20      42054.40      42058.20      42056.99           49.51      1.00
p1                      11456.40          464.39     2502.75     1826.22     1282.88      51374.94      51374.73      51373.30           54.63      1.00
p2                      12099.62          465.42     2401.20     1698.78     1143.15      50143.82      50144.17      50143.20           57.19      1.00

TILE_0_Ratios:       WVAuctionVM    WVAuctionK8S   DVDStoreA   DVDStoreB   DVDStoreC   NoSQLBenchA   NoSQLBenchB   NoSQLBenchC   SocialNetwork   Standby   Geo.Mean
p0                          0.67            0.05        0.75        0.70        0.64          0.74          0.74          0.74            0.69      1.00       0.53
p1                          0.82            0.05        0.86        0.85        0.84          0.91          0.91          0.91            0.76      1.00       0.62
p2                          0.86            0.05        0.83        0.79        0.75          0.89          0.89          0.89            0.79      1.00       0.61

TILE_0_QoS:         WVAuctionVM%   WVAuctionK8S%   DVDStoreA   DVDStoreB   DVDStoreC   NoSQLBenchA   NoSQLBenchB   NoSQLBenchC
p0                47.15 | 33.25+2250.53 | 100.00+    1154.78*    1295.89*    1529.58*          0.56          0.56          0.56
p1                36.27 | 30.44+2250.60 | 100.00+      745.63      807.18      957.40          0.48          0.46          0.47
p2                34.43 | 30.98+2250.58 | 100.00+     859.99*      974.32    1139.89*          0.51          0.49          0.49

p0_score =   0.53
p1_score =   0.62
p2_score =   0.61

Infrastructure_Operations_Scores:   vMotion   SVMotion   XVMotion   Deploy
Completed_Ops_PerHour                 28.00      13.00      14.00    12.50
Avg_Seconds_To_Complete                6.96      56.81      47.43   245.52
Failures                               0.00       0.00       0.00     0.00
Ratio                                  0.95       1.00       1.08     0.86
Number_Of_Threads                         1          1          1        1

Warnings Messages::
  p0 : WVAuctionVM0 Exceptions : 400963
  p0 : WVAuctionK8S0 Exceptions : 1120032
  p1 : WVAuctionVM0 Exceptions : 15313665
  p1 : WVAuctionK8S0 Exceptions : 1085012
  p2 : WVAuctionVM0 Exceptions : 16716259
  p2 : WVAuctionK8S0 Exceptions : 1119976
  rampdown : WVAuctionVM0 Exceptions : 9669785
  rampdown : WVAuctionK8S0 Exceptions : 644017

Summary ::
Run_Is_NOT_Compliant
Turbo_Setting : False
Number_of_Workloads_Missing : 0
Number_of_Compliance_Issues (identified by '*' or '+') : 11
Issues Found : 
    Tile0-WVAuctionVM-p0
    Tile0-WVAuctionK8S-p0
    Tile0-DVDStoreA-p0
    Tile0-DVDStoreB-p0
    Tile0-DVDStoreC-p0
    Tile0-WVAuctionVM-p1
    Tile0-WVAuctionK8S-p1
    Tile0-WVAuctionVM-p2
    Tile0-WVAuctionK8S-p2
    Tile0-DVDStoreA-p2
    Tile0-DVDStoreC-p2
Median_Phase : p2

Unreviewed_VMmark4_Applications_Score    :     0.61
Unreviewed_VMmark4_Infrastructure_Score  :     0.97
Unreviewed_VMmark4_Score                 :     0.68 @ 1 Tiles (NC)

Results table generated : VMmark4-Results-Table.html
Graph generated : VMmark4-Graph-Throughput.html
Graph generated : VMmark4-Graph-QoS.html
Graph generated : VMmark4-Graph-Infrastructure-Ops.html
Graph generated : 198.168.55.78-powermetrics.html
Graph generated : 198.168.55.77-powermetrics.html

Broadcom Employee Benjamin Hoflich

Note that per the Run and Reporting Rules, section 3.2.5.3d, each workload VM must use the default MTU of 1500.
I'm not sure there has been testing with this setting changed, or if it should even be expected to work.

Will it run more successfully with just the vSwitch MTU set to 9000?
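In that configuration the guest NICs would go back to the default while the vSwitch stays at 9000; a minimal sketch, assuming the same ens192 device name as above:

    # Inside each workload VM: revert the guest NIC to the default MTU
    ip link set dev ens192 mtu 1500

    # Confirm the current MTU on the guest NIC
    ip link show dev ens192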

jiang jianfeng

Yes, it runs successfully with just the vSwitch MTU set to 9000. Does setting only the vSwitch MTU to 9000 help increase performance?

Broadcom Employee Benjamin Hoflich

My thinking is that MTU 9000 would primarily benefit the performance of infrastructure operations (such as vMotion), network storage traffic, and also vSAN, if in use.
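On the infrastructure side, jumbo frames are enabled per VMkernel interface rather than per workload VM; a minimal sketch, assuming vmk1 is the vmknic carrying the vMotion, IP storage, or vSAN traffic (the interface name is a placeholder):

    # On the ESXi host: raise the MTU of the VMkernel interface used for vMotion/storage/vSAN
    esxcli network ip interface set -i vmk1 -m 9000

    # Check the current VMkernel interface MTUs
    esxcli network ip interface list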