1. Strip Size: 1MB? I read VMFS uses 1MB too.
VMFS allocates space in 1MB chunks, but the VMs still read and write in whatever sizes they normally do. Most SAN documentation recommends 64K, as it's a good average that generally performs well. If you have workloads that do large sequential reads/writes, go bigger. I would recommend trying at least two sizes and running real-world benchmarks, not just synthetic "what's the max best-case scenario" ones.
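If it helps to picture why strip size matters, here's a rough back-of-the-napkin sketch (Python, with made-up numbers, not tied to any particular controller) of how a single I/O maps onto strip units:

```python
# Rough sketch of how one I/O request maps onto RAID strip units.
# Numbers are illustrative only; real controllers also wrap strips
# around the member disks, which this ignores.

def strips_spanned(offset: int, io_size: int, strip_size: int) -> int:
    """How many strip units (and thus roughly how many disks) one I/O touches."""
    first_strip = offset // strip_size
    last_strip = (offset + io_size - 1) // strip_size
    return last_strip - first_strip + 1

KB = 1024
# A 16K random read on a 64K strip usually stays on one disk...
print(strips_spanned(offset=0, io_size=16 * KB, strip_size=64 * KB))    # 1
# ...while a 1M sequential read on the same strip spans many strips.
print(strips_spanned(offset=0, io_size=1024 * KB, strip_size=64 * KB))  # 16
```

Small random I/O on a 64K strip tends to stay on one disk, which leaves the other spindles free to serve other VMs; big sequential I/O spans many strips, which is where larger strip sizes start to pay off.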
2. Write Cache: ? I am not sure about performance vs. security.
This is the RAM on the RAID card. If for any reason your server/RAID card loses power or hard-locks, any data in that RAM is gone (unless the card has working battery backup). That means the VMFS could become corrupted and you could potentially lose all data on the filesystem. Many decent RAID cards have a built-in battery that will preserve the data in RAM through a short power failure, but this should not be considered adequate for production datastores. Performance uses this RAM; security does not.
Note: large SANs typically use non-volatile memory, i.e. flash, for cache.
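To make the failure mode concrete, here's a toy model (Python, everything made up, nothing like real controller firmware) of a card that acknowledges writes as soon as they hit cache RAM:

```python
# Toy model of a RAID card write cache: writes are acknowledged the
# moment they land in cache RAM and only flushed to "disk" in batches.

class CachingController:
    def __init__(self, flush_every=32):
        self.cache = []              # volatile RAM on the card
        self.disk = []               # what actually hit the platters
        self.flush_every = flush_every

    def write(self, block):
        self.cache.append(block)     # the VM gets "OK, it's on disk" here
        if len(self.cache) >= self.flush_every:
            self.disk.extend(self.cache)
            self.cache.clear()

ctrl = CachingController()
for i in range(100):
    ctrl.write(i)

# Power fails here: everything still in cache RAM is gone.
print(f"acknowledged: 100, actually on disk: {len(ctrl.disk)}")
print(f"lost writes : {ctrl.cache}")  # blocks 96-99 never made it
```

The VM's filesystem believes all 100 writes are durable; the last batch never existed as far as the platters are concerned, and that mismatch is how you corrupt a VMFS.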
3. Read Cache: No Read Ahead? I read that for big files like VMDKs on a VMware datastore, there is no use for read ahead.
Read cache is great if you have a small amount of data that is read frequently. Once you have more than one VM reading data, the read patterns become very chaotic and random, so the cache becomes less useful.
Read ahead: see above. Things are too random for it to help in most cases.
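You can see the effect with a toy simulation (Python; cache size, block counts, and read-ahead depth are all arbitrary, and real firmware is far smarter than this):

```python
# Toy LRU cache with read-ahead: one sequential reader benefits a lot,
# many VMs issuing random reads across a big datastore benefit almost not at all.
import random
from collections import OrderedDict

def hit_rate(requests, cache_blocks=256, read_ahead=8):
    cache = OrderedDict()              # LRU: oldest entry first
    hits = 0
    for block in requests:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            # On a miss, prefetch the next few blocks (read-ahead).
            for b in range(block, block + read_ahead):
                cache[b] = True
                cache.move_to_end(b)
                while len(cache) > cache_blocks:
                    cache.popitem(last=False)
    return hits / len(requests)

one_vm = list(range(10_000))                                 # sequential reader
many_vms = [random.randrange(10_000_000) for _ in range(10_000)]  # mixed random reads

print(f"sequential  : {hit_rate(one_vm):.0%}")     # high hit rate
print(f"mixed/random: {hit_rate(many_vms):.0%}")   # near zero
```

The sequential stream hits the prefetched blocks almost every time; the mixed workload almost never does, and the prefetching just wastes disk time.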
Also keep in mind that the VMs are the ones making the read and write requests from their virtual disks. VMware (unless you configure host read cache) does NOTHING to cache reads, writes, or anything else. This is because if a VM writes to disk and gets an "OK, that's on disk" when it's not, corruption happens, just like with the cache on the RAID card.
4. Disk Cache: default? (enabled vs. disabled)
Same as all the other caching, just on the hard drives themselves. This should ALWAYS be disabled unless you have power-loss-protected (PLP) SSDs.
This all equates to: with everything under the VM (ESXi, RAID card, and disks) left uncached, writes function as synchronous writes. That means if a VM says "write this to disk," the data gets to the physical platters before we tell the VM that we're done with that write. This is a slow process because we have to wait for the mechanical disk to spin to the right spot and the head to swing into place and write the data, for each and every write. This is safe, just slow. Now if you don't care about your data, or some risk is acceptable, you can enable write cache on the RAID card. Just know that if the power goes out on the server, and the battery on the RAID card (if it has one) fails or just doesn't last long enough, bad things happen.
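If you want to feel the difference on your own hardware, here's a quick sketch (Python; file location, write count, and sizes are arbitrary) that times cached writes against writes forced to stable storage with fsync:

```python
# Rough illustration of why the synchronous path is slow: forcing each
# write to stable storage (fsync) vs. letting it sit in a cache.
# Absolute timings depend entirely on your hardware; the gap is the point.
import os
import time
import tempfile

def timed_writes(count=200, size=4096, sync_each=False):
    fd, path = tempfile.mkstemp()
    data = b"x" * size
    start = time.perf_counter()
    for _ in range(count):
        os.write(fd, data)
        if sync_each:
            os.fsync(fd)  # don't return until the device says it's durable
    os.close(fd)
    os.remove(path)
    return time.perf_counter() - start

print(f"cached: {timed_writes(sync_each=False):.3f}s")
print(f"synced: {timed_writes(sync_each=True):.3f}s")  # much slower on spinning disks
```

One caveat: a drive with its own volatile write cache enabled can still acknowledge the fsync before the data is truly on the platters, which is exactly why disk cache gets disabled in point 4.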