I have a Supermicro X10SL7-F motherboard with an integrated LSI 2308 SAS/SATA controller, running a fresh install of ESXi 5.5 Update 1 with patch rollup 2. I have updated the LSI 2308 firmware to v19 and installed the latest VMware driver (version 19, available here: VMware vSphere 5: Private Cloud Computing, Server and Data Center Virtualization). Reminder: this LSI controller is on VMware's hardware compatibility list.
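For anyone wanting to compare against their own setup, here is a rough sketch of how the installed VIB and loaded kernel module can be checked from the ESXi shell, wrapped in Python since that is what the host provides. The `mpt2sas` module name is my assumption for the driver behind the LSI 2308; adjust if yours differs.

```python
#!/usr/bin/env python
# Rough sketch (not my exact procedure): confirm which LSI VIB and
# kernel module ESXi is actually using. "mpt2sas" is my assumption
# for the driver name behind the LSI 2308; adjust if yours differs.
import subprocess

def run(cmd):
    # Run a shell command on the ESXi host and return its stdout.
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    out, _ = p.communicate()
    return out.decode()

# List installed VIBs and keep only LSI/mpt2sas-related entries.
for line in run("esxcli software vib list").splitlines():
    if "mpt2sas" in line.lower() or "lsi" in line.lower():
        print(line)

# Show info for the loaded kernel module, including its version.
print(run("vmkload_mod -s mpt2sas"))
```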
I use two Western Digital Caviar Black 2TB drives in a RAID 1 configuration on this controller, and the write performance is absolutely terrible. Out of the box, writes ran at about 4 MB/s; after contacting Supermicro support and setting Disk Cache Policy = Enabled, I get about 13 MB/s write throughput. (FYI, this disk cache policy setting is not accessible via the BIOS menu; you HAVE to use LSI's MegaRAID Storage Manager software to set it, which is a huge headache all its own.) The same RAID 1 array delivers over 100 MB/s read performance in ESXi, which is perfectly acceptable.
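For context, the throughput numbers above come from simple sequential tests. Here is a minimal Python sketch of the kind of sequential-write test I mean; the datastore path, block size, and total size are placeholders, not my exact methodology:

```python
#!/usr/bin/env python
# Minimal sequential-write benchmark sketch. The datastore path and
# sizes below are placeholders, not my exact test setup.
import os
import time

PATH = "/vmfs/volumes/datastore1/writetest.bin"  # hypothetical datastore path
BLOCK = 1024 * 1024   # 1 MiB per write
TOTAL = 1024          # write 1 GiB in total

buf = os.urandom(BLOCK)
start = time.time()
f = open(PATH, "wb")
for _ in range(TOTAL):
    f.write(buf)
f.flush()
os.fsync(f.fileno())  # make sure the data actually reaches the disks
f.close()
elapsed = time.time() - start
print("wrote %d MiB in %.1f s -> %.1f MiB/s" % (TOTAL, elapsed, TOTAL / elapsed))
```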
Before people jump all over me for using an integrated controller with no onboard RAM cache: I temporarily booted Fedora Linux on the same hardware and measured over 135 MB/s for both reads and writes against the same RAID 1 array. As another test, I temporarily re-configured the array as RAID 0 (stripe), and read and write performance in ESXi was fine, exceeding 100 MB/s. To me, this clearly points to VMware's driver as the source of the poor write performance, not a limitation of the controller or the underlying drives.
I have scoured the internet, and all I find are people re-flashing the controller to IT mode and then using passthrough to hand the drives to a storage VM (e.g. a ZFS array that re-shares the storage back to its parent hypervisor via NFS or iSCSI). I really don't want to do this if I don't have to!
Please, does anyone have words of wisdom to make RAID 1 write performance on an LSI 2308 controller not suck under ESXi? And does anyone know whether the same problem exists with a RAID 10 configuration?