Remember, VSAN is all about having your working set in cache, so that most of the reads hit the flash cache tier.
It seems that Analyzer 1's 100MB working set is mostly in cache, whereas Analyzer 2's is not.
We work off a guideline that an application's working set is usually about 10% of its capacity. Sometimes it is more, sometimes it is less, but 10% is a generally accepted rule of thumb.
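To make the arithmetic concrete, here is a trivial sketch of that 10% rule of thumb (the function name and ratio default are mine, purely for illustration):

```python
# Rough working-set sizing using the 10% rule of thumb.
# Illustrative only; real working sets vary per application.

def estimate_working_set_gb(capacity_gb, ratio=0.10):
    """Estimate an application's working set as a fraction of its capacity."""
    return capacity_gb * ratio

# e.g. a VM with a 100GB VMDK
print(estimate_working_set_gb(100))  # -> 10.0
```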
I don't know how Analyzer has been configured, but does its workload contain repeating data patterns, so that some cached data can be re-read?
If not, it may mean that you are simply filling up the SSD write buffer, destaging it to the spinning disk when it reaches a particular threshold, then filling it up again. This means you are bound by magnetic disk performance and getting very little in the way of benefit from the caching layer of VSAN.
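That fill/destage cycle can be modelled with a toy loop. All the numbers here are made up (buffer size, threshold, write sizes); the point is only that with no re-reads, the workload just cycles the buffer and throughput is gated by the spinning disk:

```python
# Toy model of an SSD write buffer destaging to magnetic disk.
# BUFFER_GB and DESTAGE_THRESHOLD are hypothetical values, not VSAN defaults.
BUFFER_GB = 70           # assumed SSD write-buffer size
DESTAGE_THRESHOLD = 0.5  # assumed: destage once the buffer is 50% full

filled = 0.0
destage_events = 0
for write_gb in [10.0] * 20:   # 200GB of writes that are never re-read
    filled += write_gb
    if filled >= BUFFER_GB * DESTAGE_THRESHOLD:
        destage_events += 1    # throughput now bound by the spinning disk
        filled = 0.0

print(destage_events)  # -> 5
```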
Things to look at:
1. Is Analyzer using all 100GB as its working set, and will this reflect your production workloads? If not, reduce it to a size that reflects a real production working set. If most of your VMs use 100GB VMDKs, consider a working set size of 10GB.
2. Is Analyzer using repeating access patterns? If not, configure it to do so, or use another benchmarking tool that does.
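To see why point 2 matters so much, here is a small LRU-cache simulation (not VSAN code, just an illustration): a benchmark that re-reads a working set which fits in cache gets a high hit rate, while a purely streaming benchmark gets none:

```python
# Toy LRU cache: repeating access patterns hit cache, streaming reads don't.
from collections import OrderedDict

def hit_rate(accesses, cache_blocks):
    """Fraction of accesses served from an LRU cache of cache_blocks entries."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)  # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

# Repeating pattern: 1,000 reads cycling over a 100-block working set
repeat = [i % 100 for i in range(1000)]
# Streaming pattern: 1,000 reads, every block unique
streaming = list(range(1000))

print(hit_rate(repeat, 400))     # -> 0.9 (only the first pass misses)
print(hit_rate(streaming, 400))  # -> 0.0
```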
HTH
Cormac