"This is a bit weird because this 2 nodes are in different fault domains. I expected that the data will be placed inside one FD."
No, this is actually expected: this policy doesn't explicitly pin the data to the Preferred/Non-Preferred site, so it will place components purely based on capacity-disk usage (e.g. it places them on the least-used disk regardless of node/site).
"Originally this was a 2 node stretched cluster and the Raid0 policy was Set to None-stretched Cluster."
Yes, that was the assumption here, as you didn't specify whether you were using site affinity or not.
"I changed this now to Site mirroring - stretched cluster:"
This changes your data to FTT=1 across the sites, not FTT=0, and will double your capacity usage for this data. That is all well and fine if it is what you want, but in that case you may as well just set it for all data and use the Ensure Accessibility (EA) Maintenance Mode option (which should then state 0 B to move to satisfy that MM type).
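To make the capacity impact concrete, here is a quick arithmetic sketch (the 100 GB disk size is a hypothetical example, not a figure from this thread):

```python
# Raw capacity footprint of one VM disk under the two policy choices.
# The 100 GB size is hypothetical, purely for illustration.
vmdk_gb = 100

# FTT=0 pinned to a single site: one copy of the data.
ftt0_gb = vmdk_gb

# 'Site mirroring - stretched cluster' (FTT=1 across sites):
# one full replica per site, so raw usage doubles.
site_mirroring_gb = vmdk_gb * 2

print(ftt0_gb, site_mirroring_gb)
```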
If that is not what you want (e.g. you want the data stored as FTT=0, with a lower storage footprint, relying on HA at the application level), then the above isn't how to achieve it. Instead:

1. Create two FTT=0 policies, one pinning data to the Preferred site and the other pinning data to the Non-Preferred site.
2. Apply the policies to each half of the VMs (e.g. assuming these are redundant pairs of VMs, DAG-VM1 gets the Preferred policy and DAG-VM2 gets the Non-Preferred policy).
3. Configure the appropriate DRS affinity rules (e.g. should/must-run rules and anti-affinity).
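As a rough illustration only, the approach above could be sketched in PowerCLI along these lines. All names (cluster, hosts, groups) are made-up examples, and the vSAN SPBM capability names/locality values, as well as should- vs must-run rule choice, should be verified against your vSAN version before using anything like this:

```powershell
# Sketch only - validate cmdlets, capability names and values in your environment.
# Two FTT=0 policies, each pinning data to one site via the locality capability.
$pref = New-SpbmStoragePolicy -Name "FTT0-Preferred" -AnyOfRuleSets (
    New-SpbmRuleSet (
        (New-SpbmRule -Capability (Get-SpbmCapability "VSAN.hostFailuresToTolerate") 0),
        (New-SpbmRule -Capability (Get-SpbmCapability "VSAN.locality") "Preferred Fault Domain")
    ))
$nonPref = New-SpbmStoragePolicy -Name "FTT0-NonPreferred" -AnyOfRuleSets (
    New-SpbmRuleSet (
        (New-SpbmRule -Capability (Get-SpbmCapability "VSAN.hostFailuresToTolerate") 0),
        (New-SpbmRule -Capability (Get-SpbmCapability "VSAN.locality") "Secondary Fault Domain")
    ))

# Apply each policy to its half of the redundant VM pair.
Get-VM "DAG-VM1" | Set-SpbmEntityConfiguration -StoragePolicy $pref
Get-VM "DAG-VM2" | Set-SpbmEntityConfiguration -StoragePolicy $nonPref

# DRS: keep each VM on its own site, and keep the pair apart.
$cluster = Get-Cluster "StretchedCluster"   # example cluster name
New-DrsClusterGroup -Name "VMs-Preferred" -Cluster $cluster -VM (Get-VM "DAG-VM1")
New-DrsClusterGroup -Name "Hosts-Preferred" -Cluster $cluster -VMHost (Get-VMHost "esx-pref-01")
New-DrsVMHostRule -Name "VM1-to-Preferred" -Cluster $cluster `
    -VMGroup "VMs-Preferred" -VMHostGroup "Hosts-Preferred" -Type ShouldRunOn
# (repeat the group/rule creation for DAG-VM2 and the Non-Preferred host)
New-DrsRule -Name "DAG-AntiAffinity" -Cluster $cluster -KeepTogether $false `
    -VM (Get-VM "DAG-VM1","DAG-VM2")
```

Should-run rules (as sketched) let HA restart a VM on the other site if its own site fails; must-run rules would prevent that, which with FTT=0 data pinned per site is usually what you'd want anyway, since the VM's data only exists on its home site.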