View Only
  • 1.  Storage Performance Degradation Issue On Local Storage

    Posted Sep 02, 2016 02:56 PM


    Currently, I run an ESXi 6 physical host. It has the following specs:

    2 x Intel Xeon E5-2630 v3 CPUs - 8 physical cores each

    128 GB RAM

    2 Quad-Port NICs

    4 onboard Intel NICs

    9 SAS Drives

    3 SSDs

    1 Adaptec 72405 RAID Controller

    I've noticed that when I transfer files between disks on guest VMs running on top of the ESXi hypervisor, the storage speed is very, very slow.

    When I disconnect ESXi and boot Windows Server 2012 R2 Datacenter as the hypervisor instead, the speeds return to normal.

    To be more specific: when transferring a large 100 GB file from disk 1 to disk 2 on a guest VM (VM-to-VM, or within a single VM from a virtual disk on a VMFS datastore on one physical disk to a virtual disk on a datastore on another physical disk), storage performance starts at around 60 MB/s read/write in the first second, then drops to a sustained 10 MB/s for the next hour.
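For anyone wanting to reproduce the measurement, a quick way to sanity-check raw datastore write speed from the ESXi shell is a simple dd test. This is just a sketch; the datastore path below is an example and must be replaced with your own:

```shell
# Hypothetical example: write 1 GB of zeros to a datastore and time it.
# Replace "datastore1" with your actual datastore name.
time dd if=/dev/zero of=/vmfs/volumes/datastore1/ddtest.tmp bs=1M count=1024

# Clean up the test file afterwards.
rm /vmfs/volumes/datastore1/ddtest.tmp
```

Dividing 1024 MB by the elapsed time gives an approximate sequential write rate, which can be compared against the 10 MB/s figure above.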

    The physical disks are local storage. Each of my physical hard drives (Seagate Constellation 4 TB, 7200 RPM) sits in a Norco RPC-4224 chassis, which has 24 bays, and connects to my Adaptec 7 Series RAID controller, which is installed in a PCIe slot.

    I have already tried to troubleshoot this issue. Here is what I've done so far:

    1. I ran diagnostics.
    2. I disconnected the ESXi boot disk and booted into Microsoft Windows Server 2012 R2. On the Windows box, I get read and write speeds of 140-160 MB/s, but on ESXi I get only 10 MB/s. That's a huge difference, and I don't know why.
    3. I also disconnected each of the physical drives and plugged them into the motherboard's onboard SATA/SAS ports.
    4. I also went into the ESXi Advanced Settings to experiment with the "Disk" settings. After all this troubleshooting, I am still getting the same problem.
    5. I already upgraded the storage drivers from Microsemi for the Adaptec 7 Series RAID card, and even updated the firmware/BIOS for the motherboard.
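One diagnostic step not listed above (my suggestion, not from the original troubleshooting) is checking where the latency is coming from with esxtop on the ESXi host shell, which separates device latency from hypervisor overhead:

```shell
# Run esxtop from an SSH session on the ESXi host.
esxtop
# Press 'd' for the disk adapter view, or 'u' for the device view.
# DAVG/cmd = device latency in ms (controller + disks)
# KAVG/cmd = time spent in the VMkernel; a high KAVG points at ESXi itself
#            rather than the hardware
```

If DAVG is high while the same disks are fast under Windows, that usually implicates the controller driver or its cache settings rather than the drives.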

    Any ideas how I can resolve this?

  • 2.  RE: Storage Performance Degradation Issue On Local Storage

    Posted Sep 03, 2016 03:49 AM

    I suggest updating your storage controller firmware and driver.

  • 3.  RE: Storage Performance Degradation Issue On Local Storage

    Posted Sep 05, 2016 03:02 AM

    Are you using VMDKs larger than 6 TB?
    If yes - I would like to see the output of
    vmkfstools -p 0 large-vmdk-flat.vmdk > /tmp/file
    If you have the problem that I have in mind it can be detected like that.
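For context on the command above: `vmkfstools -p 0` dumps the physical block mapping of a flat VMDK, and a heavily fragmented file shows up as a long list of small extents. A sketch of the check (paths are examples, substitute your own datastore and VMDK names):

```shell
# Hypothetical paths; point these at your own large VMDK.
vmkfstools -p 0 /vmfs/volumes/datastore1/bigvm/bigvm-flat.vmdk > /tmp/mapping.txt

# A rough fragmentation indicator: each mapping line is an extent,
# so many lines means the file is scattered across the volume.
wc -l /tmp/mapping.txt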

  • 4.  RE: Storage Performance Degradation Issue On Local Storage
    Best Answer

    Posted Sep 21, 2016 02:41 PM

    After three weeks, I found the solution to my problem.

    On Adaptec RAID cards, make sure you have the following enabled:

    1) Log in to maxView Storage Manager with administrative credentials in your browser.

    2) On the left-hand side of the navigation panel, select the RAID card controller option/link.

    3) On the ribbon menu, under Controller tab, select Properties.

    4) Change your settings as described below.

    The important part is changing the Performance Mode to either Big Block Bypass or the OLTP / Database setting AND setting the Global Physical Devices Write Cache Policy to Enable All.

    Restart your server.

    Afterwards, open the VMware vSphere Client (the desktop GUI), log in to vCenter, go to the Host, select the Configuration tab on the top menu bar, select the Storage option on the left-hand side, and for each of your local storage disks enable Storage I/O Control from the drop-down menu.
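As an optional sanity check after the reboot (my addition, not part of the original fix), the ESXi shell can confirm that the host still sees the Adaptec controller and its volumes:

```shell
# List the storage adapters the host has detected (the Adaptec HBA
# should appear here with its driver name).
esxcli storage core adapter list

# List the storage devices/volumes presented to the host.
esxcli storage core device list
```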