VMware has never claimed that ESX is hardware-independent - as a matter of fact, there is an explicit list of compatible hardware in the Compatibility Guides. The claim is that virtual machines are independent of the host hardware, and that is a correct statement. If you build a VM on one host with one type of hardware, you can (all things being configured equally - network connectivity, storage, and so on) run that virtual machine on any other supported host.
That said, CPU type, in particular, is important. But it's only important for live migration/cloning activities. This is due to the nature of the operating systems we run in virtual machines. Windows, for example, probes the CPU during boot with the CPUID instruction. When Windows gets the result of that query, it loads drivers appropriate to the CPU's capabilities. ESX passes the instruction straight to the physical CPU and hands the response directly back to the virtual machine. As such, the guest OS in the VM knows exactly which CPU is in the hardware, and takes advantage of its particular feature set. Imagine what would happen to the guest if that feature set changed on the fly? I see a kernel panic in the immediate future of such an event.
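To make that concrete, here's a minimal sketch (C, GCC/Clang on x86 only) of the kind of CPUID probe a guest kernel performs at boot - reading the vendor string and a few feature bits. The specific bits shown are just illustrative, not the full set any real OS checks:

```c
/* Minimal sketch of a boot-time CPUID probe.
   Compile with: cc -o cpuid_probe cpuid_probe.c (x86/x86-64 only). */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13];

    /* Leaf 0: vendor ID string ("GenuineIntel" or "AuthenticAMD"),
       returned across EBX, EDX, ECX in that order. */
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor,     &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    printf("Vendor: %s\n", vendor);

    /* Leaf 1: feature flags. The guest picks drivers and code paths
       based on bits like these - if they changed under a running OS,
       its assumptions would break. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    printf("SSE2:  %s\n", (edx & (1u << 26)) ? "yes" : "no");
    printf("SSE3:  %s\n", (ecx & (1u << 0))  ? "yes" : "no");
    printf("SSSE3: %s\n", (ecx & (1u << 9))  ? "yes" : "no");

    return 0;
}
```

On bare metal and inside an ESX VM alike, this is exactly the kind of answer the guest caches at boot and never expects to change.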
EVC (Enhanced VMotion Compatibility) was mentioned, but that won't do any good between Intel and AMD hosts. What EVC essentially does is set a global CPUID mask for a DRS-enabled cluster, and present that masked CPUID to every VM in the cluster. This eases the requirement for identical CPUs, but doesn't completely eliminate it - all hosts in a DRS cluster with EVC turned on must still be all-Intel or all-AMD.
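Here's a conceptual sketch of the masking idea - this is not VMware's actual code, and the feature names are made up for illustration. The point is simply that the cluster advertises only the common-denominator bits, no matter what each host can actually do:

```c
/* Conceptual EVC-style baseline: VMs only ever see the feature bits
   in the agreed cluster mask. Feature flags are hypothetical. */
#include <stdio.h>

#define FEAT_SSE2   (1u << 0)
#define FEAT_SSE3   (1u << 1)
#define FEAT_SSSE3  (1u << 2)
#define FEAT_SSE41  (1u << 3)

int main(void)
{
    unsigned int host_a = FEAT_SSE2 | FEAT_SSE3 | FEAT_SSSE3 | FEAT_SSE41;
    unsigned int host_b = FEAT_SSE2 | FEAT_SSE3;   /* older CPU */

    /* The EVC baseline is the common denominator for the cluster. */
    unsigned int baseline = host_a & host_b;

    /* Every VM sees the same masked feature set on either host,
       so VMotion between them is safe. */
    printf("VMs on host A see: 0x%x\n", host_a & baseline);
    printf("VMs on host B see: 0x%x\n", host_b & baseline);
    return 0;
}
```

In the real product you pick the baseline from a predefined list of EVC modes rather than computing it by hand, and the mask takes effect on a VM at power-on - which is also why Intel-vs-AMD can't be papered over: the two vendors' CPUID responses differ in far more than a few maskable feature bits.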
Migrations and cloning can be done between an Intel host and an AMD host, but the source VM must be powered off for these operations to function. From how you describe your setup, I don't know that this is an issue. When you recover your VMs at your DR site, they should be in a powered-off state. This should allow you to power them back on with no issues - the guest will pick up the new CPU and work accordingly.
What you're seeing isn't so much a problem with virtualization as a problem with the design of operating systems. We just never had the opportunity to see it in traditional, hardware-based installations, because the OS was absolutely tied to a specific piece of hardware. With virtualization, that's no longer the case, and these kinds of design decisions begin to pose significant challenges.
Hope that makes sense,
-jk