Hi everyone - I wonder if anyone can assist with the following scenario...
I currently have 3 ESX hosts attached to an MSA1000 (RAID 1+0) with 256MB read and 256MB write cache. I have been extracting the READ/WRITE rate figures and READ/WRITE request counts from vCenter for each host, daily, at a 5-minute interval. Each day, the combined throughput of all 3 hosts to all my datastores looks like this:
Maximum READ/WRITE rate = 80,000 KBps. This happens overnight when our vRanger backups run.
Average READ/WRITE rate = about 20,000 KBps during the day.
Maximum number of READ/WRITE commands (IOPS) = 2,000 (at the same time the data throughput peaks).
Average number of READ/WRITE commands = 750 during the day.
My first question is: what is the maximum amount of data that can be read/written per READ/WRITE command? The SAN has a 128KB stripe size (is this the same as block size?), whereas the VMFS datastores are formatted with 1MB, 2MB or 4MB block sizes - so if I issue 1 WRITE command, what is the maximum amount of data that can be written by that command?
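For what it's worth, the peak figures above let you back out the average I/O size the hosts are actually issuing, which may be more telling than the stripe or VMFS block size. A rough sketch using the numbers from the vCenter export (peak values only, so this is the backup-window average, not a per-command maximum):

```python
# Back-of-envelope: average I/O size at peak = throughput / IOPS.
# Both figures are the combined peaks from the vCenter stats above.
peak_throughput_kbps = 80_000   # combined read/write rate during backups (KBps)
peak_iops = 2_000               # combined read/write commands at the same time

avg_io_size_kb = peak_throughput_kbps / peak_iops
print(f"Average I/O size at peak: {avg_io_size_kb:.0f} KB")  # → 40 KB
```

So on average each command is moving about 40KB during the backup window - well under the 128KB stripe size.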
We have been quoted a new filer to replace the SAN (a NetApp 3210 HA cluster) with 24 x 1TB SATA and a 256GB READ cache card. I'm being advised that the controller is capable of 5,000 IOPS - but is IOPS the best way to assess whether the performance will be acceptable, and how much room I will have to grow into?
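Taking the vendor's 5,000 IOPS figure at face value (it may be a controller-level number; what the 24 SATA spindles sustain at the disk level could be lower), the headroom over the current measured peak works out as:

```python
# Hedged headroom estimate: quoted controller capability vs measured peak.
# 5,000 IOPS is the vendor's figure, not an independently verified one.
current_peak_iops = 2_000        # measured combined peak from vCenter
quoted_controller_iops = 5_000   # vendor-quoted capability of the 3210

headroom = quoted_controller_iops / current_peak_iops
print(f"Headroom factor over current peak: {headroom:.1f}x")  # → 2.5x
```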
I do see a direct correlation between data throughput and the number of read/write commands in my graphs, so I'm confident that my data extraction methods are correct ;-)
Maybe data throughput is a better way of measuring it? Any advice would be most welcome.
My managers do not want to spend all that money only to find out that either a) the new hardware cannot handle the current SAN load, or b) there is no room to grow into. I'm sure that neither a nor b will be an issue - but I need to prove it!!
Thanks