First, those are good questions to ask, especially when you're exploring different configurations and options.
The earlier reply provided the best answer to your first question. Keeping things straightforward and simple is extremely helpful.
I would also suggest that you develop a VM blueprint (in vRA or vROps, if you have those) where you can add rules to a configuration profile so they aren't forgotten. If you don't have those tools, it's a bit more manual (or you can have someone build a script!).
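If you go the script route, here's a minimal sketch of the idea: validate a proposed VM config against a per-site rule profile before deployment. The site names, profile fields, and VLAN/cluster values are all invented for illustration; this isn't a real vRA/vROps API, just the shape of the check.

```python
# Hypothetical per-site rule profiles; names and values are made up.
SITE_PROFILES = {
    "DC1": {"allowed_vlans": {110, 120}, "cluster": "DC1-Cluster"},
    "DC2": {"allowed_vlans": {210, 220}, "cluster": "DC2-Cluster"},
}

def validate_vm(vm: dict) -> list[str]:
    """Return a list of rule violations for a proposed VM config."""
    errors = []
    profile = SITE_PROFILES.get(vm.get("site", ""))
    if profile is None:
        errors.append(f"unknown site: {vm.get('site')!r}")
        return errors
    if vm.get("vlan") not in profile["allowed_vlans"]:
        errors.append(f"VLAN {vm.get('vlan')} not allowed in {vm['site']}")
    if vm.get("cluster") != profile["cluster"]:
        errors.append(f"cluster {vm.get('cluster')!r} is not {profile['cluster']!r}")
    return errors

# A compliant VM passes; a cross-site mix-up gets flagged.
print(validate_vm({"site": "DC1", "vlan": 110, "cluster": "DC1-Cluster"}))  # []
print(validate_vm({"site": "DC2", "vlan": 110, "cluster": "DC1-Cluster"}))
```

The point is just to keep the rules in one place so new workloads can't silently drift from the standard.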
For the other questions: configuring vMotion differently could work, but you may run into some unforeseen issues with critical workloads that cannot be vMotioned the way you want to another data center. In that setup, you'd essentially end up with a vCenter Server (vCS) at each site and have to manage and deploy workloads very deliberately, with strict VLAN and rule guides as you build/deploy. That sounds like more work than it's worth. But yes, if you have vMotion configured for a specific DC and its hosts, then the VM 'should' power on on another host within the cluster in the same DC.
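That same-DC restart behavior can be sketched like this: if failover candidates are filtered to hosts in the VM's own data center, the VM can only power back on within that DC's cluster. Host and DC names here are made up for illustration, not pulled from any real inventory.

```python
# Hypothetical host inventory; the first DC1 host has just failed.
HOSTS = [
    {"name": "esx-dc1-01", "dc": "DC1", "available": False},
    {"name": "esx-dc1-02", "dc": "DC1", "available": True},
    {"name": "esx-dc2-01", "dc": "DC2", "available": True},
]

def restart_candidates(vm_dc: str, hosts: list[dict]) -> list[str]:
    """Hosts eligible to power the VM back on, restricted to the same DC."""
    return [h["name"] for h in hosts if h["dc"] == vm_dc and h["available"]]

print(restart_candidates("DC1", HOSTS))  # ['esx-dc1-02']
```

The DC2 host is healthy but never considered, which is exactly the constraint you're describing.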
For the vDS question, it's a similar answer - how it behaves in that situation will depend on how you configure the VLANs and uplinks on the vDS. For the most part - unless the vDS itself goes down - vCenter will follow standard vMotion rules and you'll see results similar to the paragraph above. Again, the vDS option would mean a vCS at each DC anyway, and that could again introduce some unforeseen issues (I would test this config in an isolated VLAN if you can, just to see what happens). After all that, I'd still recommend one of the first two answers (a cluster per DC, or using VM blueprints to standardize how rules are applied to newly deployed workloads).
Hope this helps.