Please take into consideration that vSAN does not work like a normal RAID controller. There are two tiers: a cache tier and a capacity tier. If the test duration is short, your test is probably exercising only the cache tier (in the all-flash case, the write buffer).
Also take into consideration:
1. RAID5 calculates one parity block from three data blocks placed on different vSAN nodes, whereas RAID1 creates two replicas on two separate vSAN nodes. The parity calculation adds latency, but writing to three hosts in parallel can reduce it. This also depends on the stripe width you defined in the storage policy for RAID1 and RAID5; I assume here a stripe width of 1. For a simple VM doing only sequential writes, RAID5 writes to three nodes while RAID1 writes to only two, so RAID5 may be faster. However, with random re-writes of small blocks, RAID5 first needs to read the existing stripe components before it can calculate the new parity and commit the small write. In that case RAID5 will be slower than RAID1. It depends on many factors.
2. The performance difference between RAID5 and RAID1 depends on the type of performance test: random vs. sequential, block size, working set size relative to cache size, etc.
3. The performance difference will also depend on network latency.
4. Write performance for small blocks depends on the latency of your caching SSD; large-block write performance depends on the throughput of that cache SSD. Read performance depends on the number of SSD capacity devices per disk group and their latency.
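To make the amplification in point 1 concrete, here is a minimal sketch of the back-end I/O cost per front-end write. It is a simplified textbook model, not vSAN's actual implementation (vSAN coalesces writes in the cache tier, which changes the effective cost): RAID1 issues one write per replica, a RAID5 full-stripe write needs no reads, and a RAID5 small partial-stripe write is modeled here as the common read-modify-write variant (read old data and old parity, write new data and new parity); reading the remaining stripe members instead, as described above, gives a similar penalty.

```python
def backend_ios(policy: str, write_type: str) -> dict:
    """Back-end read/write operations per single front-end write.

    Hypothetical model for illustration only.
    policy:     "raid1" (two replicas) or "raid5" (3+1 erasure coding)
    write_type: "full_stripe" (all stripe data replaced) or
                "small" (random partial-stripe overwrite)
    """
    if policy == "raid1":
        # One write to each of the two replica nodes.
        return {"reads": 0, "writes": 2}
    if policy == "raid5" and write_type == "full_stripe":
        # All data in hand: write 3 data components + 1 parity, no reads.
        return {"reads": 0, "writes": 4}
    if policy == "raid5" and write_type == "small":
        # Read-modify-write: read old data + old parity,
        # then write new data + new parity.
        return {"reads": 2, "writes": 2}
    raise ValueError("unknown policy/write_type combination")

for policy, wt in [("raid1", "small"), ("raid5", "small"),
                   ("raid5", "full_stripe")]:
    ios = backend_ios(policy, wt)
    print(f"{policy:5} {wt:12} -> {ios['reads']} reads, {ios['writes']} writes")
```

The model shows why the benchmark type matters so much: a small random overwrite under RAID5 costs four back-end operations (two of them reads, which add round trips) versus two parallel writes for RAID1, while sequential full-stripe writes avoid the reads entirely.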