Random reads on an SSD across the network are not always the fastest, so the cache tier will absorb some of that load. Now, one can argue for using a portion of the capacity disk(s) for this instead, but cache disks that are SSDs get utterly destroyed by the constant write churn and fail much more often than data disks. Keeping the cache on a separate device protects the data disks from issues when the memory cells start going bad. In SSDs, just as with HDDs, as the memory cells go bad, you lose capacity.
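To make the read-path idea concrete, here's a minimal sketch (in Python, with a hypothetical `backend_read` callback standing in for the slower networked capacity tier) of how a cache tier absorbs repeated random reads, so only misses pay the network round trip:

```python
from collections import OrderedDict

class ReadCacheTier:
    """Toy LRU read cache sitting in front of a slower capacity tier."""

    def __init__(self, capacity_blocks, backend_read):
        self.capacity_blocks = capacity_blocks
        self.backend_read = backend_read  # hypothetical: reads a block over the network
        self.cache = OrderedDict()        # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            # Cache hit: served from the fast local tier, no network trip.
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        # Cache miss: fetch from the capacity tier and populate the cache.
        data = self.backend_read(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity_blocks:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data
```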
In every HCI solution I've seen that uses a network interconnect, a separate cache disk is absolutely needed; this really isn't unique to vSAN. Additionally, because writes to the SSDs are fast, a portion of the host RAM is used for write-caching instead of a much slower disk, which is why vSAN (and other HCI solutions) are memory intensive. If RAM were used for the write cache on a hybrid cluster, you would start starving the VMs, your workload, of resources, so in that case using the cache disk for writes is needed.
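And to illustrate the write path, here's a similarly simplified sketch of a write-back buffer: writes are acknowledged as soon as they land in the fast tier (RAM on all-flash, the cache SSD on hybrid) and destaged to the capacity disks later in batches. The `destage_to_capacity` callback is hypothetical:

```python
class WriteBuffer:
    """Toy write-back buffer: acknowledge fast, destage to capacity later."""

    def __init__(self, destage_to_capacity, flush_threshold=64):
        self.destage = destage_to_capacity  # hypothetical: writes a block to the capacity tier
        self.flush_threshold = flush_threshold
        self.dirty = {}  # block_id -> pending data

    def write(self, block_id, data):
        # The write is "done" from the VM's perspective once it lands
        # in the fast tier; the slow capacity disk is updated later.
        self.dirty[block_id] = data
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Destage dirty blocks in block order as one batch, turning many
        # small random writes into fewer, more sequential ones.
        for block_id, data in sorted(self.dirty.items()):
            self.destage(block_id, data)
        self.dirty.clear()
```

In a real stack the fast tier also has to be persistent (the cache SSD) so acknowledged writes survive a power loss; buffering in plain RAM alone wouldn't be safe, which is part of why the dedicated cache device matters.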
It's really an elegant and resilient solution, and as with any such solution, there are specific requirements to ensure that the data is intact, available, and rapidly accessible.