"So, what is vSAN? It seems to be a bunch of vSAN certified disks (optimally SAS or NVMe based), locally attached to an vSAN certified I/O controller inside a vSphere certified server, presenting a part of a distributed block-level object storage cluster."
Not "seems to be". That *is* what it is. Essentially, when using a protection-policy, vSAN could be described as "RAID over Network".
"vSAN distributes the I/O over as much vSAN data nodes inside the cluster as needed"
That depends. With a RAID-1 equivalent protection-policy, every write the VM issues is sent out twice (mirror copy 1 & mirror copy 2): one copy goes to ESXi host A, where it lands on a disk from that host's local pool; the other mirror copy goes to another vSAN node and onto one of its own local disks.
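To make the "RAID over Network" idea concrete, here is a tiny conceptual Python sketch (nothing vSAN-specific in it, the node names are invented): one guest write under a mirroring policy becomes one component write per replica node.

```python
# Conceptual illustration only -- not real vSAN code. It just shows how a
# single guest write under a RAID-1 (FTT=1) policy fans out into two
# component writes that land on different vSAN nodes.
from dataclasses import dataclass, field

@dataclass
class VsanNode:
    name: str
    local_writes: list = field(default_factory=list)

    def write_component(self, block: bytes) -> None:
        # In reality this would land on a local capacity device via the
        # node's I/O controller; here we just record it.
        self.local_writes.append(block)

def mirror_write(block: bytes, replica_nodes: list) -> None:
    """One guest write -> one network write per mirror copy."""
    for node in replica_nodes:
        node.write_component(block)

host_a, host_b = VsanNode("esxi-a"), VsanNode("esxi-b")
mirror_write(b"guest block 42", [host_a, host_b])
assert len(host_a.local_writes) == len(host_b.local_writes) == 1
```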
With erasure coding policies (RAID-5 or RAID-6 equivalent), or when striping is added, more hardware components get used in parallel (but this also introduces extra latency).
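For a feel of the numbers, a quick back-of-the-envelope calculation using the commonly cited vSAN layouts (RAID-1 FTT=1 = two full copies, RAID-5 = 3 data + 1 parity, RAID-6 = 4 data + 2 parity) shows how many components an object stripe touches and the raw-capacity cost:

```python
# Rough figures only, based on the commonly documented vSAN component layouts.
policies = {
    # name: (data_components, parity_components, copies)
    "RAID-1 (FTT=1)": (1, 0, 2),
    "RAID-5 (FTT=1)": (3, 1, 1),
    "RAID-6 (FTT=2)": (4, 2, 1),
}

for name, (data, parity, copies) in policies.items():
    components = (data + parity) * copies          # separate hardware components in play
    capacity_factor = copies * (data + parity) / data
    print(f"{name}: {components} components per stripe, "
          f"~{capacity_factor:.2f}x raw capacity per GB of VM data")
```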
The whole concept of "thinking in queues" in the classical sense becomes fuzzy when one introduces a software-based RAID engine.
To answer the original question: just use the same concepts as with non-vSAN storage: either smack everything onto a PVSCSI controller, or only the VMDKs that need high performance. The performance-enhancing effect of PVSCSI (in certain cases!) still applies, vSAN or not.
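If you prefer to script that instead of clicking through the UI, here is a minimal pyVmomi sketch that adds a PVSCSI controller to an existing VM. It assumes you already have an authenticated connection and a `vm` object in hand; the bus number and temporary key are just example values.

```python
from pyVmomi import vim

def add_pvscsi_controller(vm, bus_number=1):
    """Attach a ParaVirtual SCSI controller to the VM on the given bus."""
    controller = vim.vm.device.ParaVirtualSCSIController()
    controller.key = -101                  # temporary negative key; vCenter assigns the real one
    controller.busNumber = bus_number
    controller.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    dev_spec.device = controller

    config_spec = vim.vm.ConfigSpec(deviceChange=[dev_spec])
    return vm.ReconfigVM_Task(config_spec)  # returns a vim.Task you can wait on

# Usage (connection code omitted): add_pvscsi_controller(vm, bus_number=1)
```

The same pattern should work for the virtual NVMe controller type if you follow the vSphere 7 recommendation mentioned next, since it is just another virtual device in the same ConfigSpec mechanism.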
When you deploy a new Server 2019 VM on vSphere 7, the recommended controller is the NVMe controller (for the boot drive and all other VMDKs), even though there is no actual physical NVMe hardware to be seen in the cluster. It's all abstracted anyway.
I would not "over-analyse" the whole thing or try to match the layers up in your head: as soon as you start mapping virtual NVMe controllers and storage devices (65k queue depth) onto actual hardware that is SAS (256 queue depth) or even SATA for capacity devices (QD=32), trying to fit one into the other, you'll go crazy.
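Just to show how lopsided a 1:1 mapping would be, here are the queue depths from the paragraph above side by side (nominal per-device figures as quoted, not measurements):

```python
# Nominal queue depths as mentioned above -- purely illustrative.
queue_depths = {
    "virtual NVMe device": 65_000,
    "physical SAS device": 256,
    "physical SATA capacity device": 32,
}

baseline = queue_depths["physical SATA capacity device"]
for device, qd in queue_depths.items():
    print(f"{device:32s} QD={qd:>6}  ({qd / baseline:,.0f}x SATA)")
```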