VMware vSphere

 View Only
  • 1.  Oracle ASM -> linux guest -> ESX vmdk -> Netapp bottlenecks?

    Posted Nov 09, 2023 03:41 PM

    We're in the process of an Oracle bare-metal (ODA) to VMware/NetApp migration project.   The NetApp is an all-NVMe SSD SAN array.   The ESX servers are Dell R750s with 2 HBAs each.   We have carved up 32 "disks" (1 TB each) on the NetApp to present to the ESX host, which are served to the Linux guest via VMDK (not using RDM).   When we run the SLOB benchmark tool, we can see the Linux host making use of all 32 ASM disks (as it sees them):

    [screenshot: wadams_0-1699544011081.png]

     

    But when the ESX admin looks at the disks at the ESX layer, he is only seeing 2 busy disks:

     

    [screenshot: wadams_1-1699544059968.png]

    Is this normal?   Do we have a bottleneck due to how we have the disks presented to ESX?   We're not sure how the disks/LUNs from the NetApp should appear on the ESX host.

     

    Thanks

     

    Wayne




  • 2.  RE: Oracle ASM -> linux guest -> ESX vmdk -> Netapp bottlenecks?

    Posted Nov 30, 2023 08:55 PM

     

    Greetings for the day, and I hope you are doing well.

    It looks like there is significant storage latency, and with that latency the virtual machines will be very slow.

    We can do a deeper dive and pull the storage latency warnings from the vmkernel log.

    Run the commands below and, if you're interested, share screenshots of the output.

    SSH to the host, then change to the log directory:

    cd /var/run/log    (on some builds this is /var/log)

    Then run these commands:

    # Top 10 instances of "performance has deteriorated" in vmkernel.log
    cat vmkernel.log | grep "performance has deteriorated" | awk '{print $20, $21}' | sort -nr | head -10

    # Count of "performance has deteriorated" messages per LUN
    cat vmkernel.log | grep "performance has deteriorated" | egrep -o "eui.[0-9a-f]+_*" | sort | uniq -c | sort -nr
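A self-contained sketch of that per-LUN pipeline, run against a fabricated vmkernel.log excerpt so you can see what the output looks like before trying it on a host. The eui IDs, timestamps, and latency figures below are invented for illustration; on a real ESXi host the file is vmkernel.log under /var/run/log:

```shell
#!/bin/sh
# Build a fake vmkernel.log excerpt with the "performance has deteriorated"
# warning format (device IDs and numbers here are made up).
cat > /tmp/vmkernel.log <<'EOF'
2023-11-09T15:00:01Z cpu4:2097205)ScsiDeviceIO: Device eui.1a2b3c4d performance has deteriorated. I/O latency increased from average value of 1343 microseconds to 30282 microseconds.
2023-11-09T15:00:05Z cpu6:2097207)ScsiDeviceIO: Device eui.1a2b3c4d performance has deteriorated. I/O latency increased from average value of 1500 microseconds to 41000 microseconds.
2023-11-09T15:00:09Z cpu2:2097201)ScsiDeviceIO: Device eui.9f8e7d6c performance has deteriorated. I/O latency increased from average value of 900 microseconds to 22000 microseconds.
EOF

# Count warnings per device, busiest device first.
grep "performance has deteriorated" /tmp/vmkernel.log \
  | egrep -o "eui\.[0-9a-f]+" | sort | uniq -c | sort -nr
```

If only one or two devices dominate this output while the guest spreads I/O across all 32 ASM disks, that points at the LUN/path layer rather than the guest.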