Hello,
Currently, I run an ESXi 6 physical host with the following specs:
2 x Intel Xeon E5-2630 v3 CPUs (8 physical cores each)
128 GB RAM
2 Quad-Port NICs
4 onboard Intel NICs
9 SAS Drives
3 SSDs
1 Adaptec 72405 RAID Controller
I've noticed that when I transfer files between disks on guest VMs running on top of the ESXi hypervisor, storage performance is extremely slow.
When I disconnect the ESXi boot disk and run Windows Server 2012 R2 Datacenter as the hypervisor instead, speeds return to normal.
To be more specific: when transferring a large 100 GB file from disk 1 to disk 2 on a guest VM (or VM-to-VM, or within a VM from a virtual disk on a VMFS datastore backed by one physical disk to a virtual disk on a datastore backed by another), the transfer starts at around 60 MB/s read/write for the first second, then drops to a sustained 10 MB/s for the next hour.
The physical disks are local storage. Each drive (Seagate Constellation 4 TB, 7200 RPM) sits in a Norco RPC-4224 24-bay chassis, and the backplane connects to the Adaptec 7 Series RAID controller, which is installed in a PCIe slot.
I have already tried to troubleshoot this issue. Here's what I've done so far:
- I ran diagnostics.
- I've disconnected the ESXi boot disk and booted into Microsoft Windows Server 2012 R2. On the Windows box, I get read and write speeds of 140-160 MB/s, yet on ESXi I only get 10 MB/s. That's a huge difference, and I don't know why.
- I also disconnected each of the physical drives and plugged them directly into the motherboard's onboard SATA/SAS ports.
- I've also gone into the ESXi Advanced Settings and played around with the "Disk" settings. After all this troubleshooting, I'm still getting the same problem.
- I already upgraded the Microsemi storage drivers for the Adaptec 7 Series RAID card, and even updated the firmware/BIOS for the motherboard (see the commands after this list for how I checked what ESXi is actually loading).
Any ideas how I can resolve this?