Hi,
I've done something similar, migrating from a 5.1 vCenter environment to a separate new 5.5 vCenter, across two vCenters each in its own data center.
We ended up using shared storage (iSCSI) presented to certain hosts in each vCenter/cluster.
The process was:
1) Migrate the running VM onto the shared storage seen by hosts in both vCenters.
2) Power off the VM.
3) Change the VM's network label to a standard switch port group used only for VM traffic, which we named "migration_network".
4) Remove the VM from the old vCenter's inventory (steps 4-8 are sketched in the first code example after this list).
5) In the new vCenter, browse the shared datastore, find the VM's folder and .vmx file, and choose "Add to Inventory".
6) Once it is in the inventory, edit the VM settings and change the network label back to a port group that carries the VM's production traffic. We also adjusted RAM reservations and enabled CPU/memory hot add.
7) Upgrade the VM's virtual hardware.
8) Power on the VM. If vCenter asks whether the VM was moved or copied, select "I moved it".
9) Let the VM boot; depending on the guest OS it may require some reconfiguration. We also installed the new VMware Tools and then rebooted.
10) Check the guest VM's services and the job is done. (We also used this opportunity to check patch levels and antivirus settings for our environment.)
11) Storage vMotion the VM onto the new SAN storage presented to the hosts in the new datacenter (see the second sketch below).
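
If you want to script the hand-over, here is a minimal pyVmomi (Python) sketch of steps 4-8, assuming shared iSCSI storage visible to hosts in both vCenters. The vCenter names, credentials, host, datacenter, datastore path and the VM name "app01" are all placeholders for your own environment, and error handling plus the NIC relabel from steps 3/6 are only noted in comments.

# Hypothetical pyVmomi sketch of steps 4-8; all names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_obj(content, vimtype, name):
    # Return the first inventory object of the given type with the given name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

ctx = ssl._create_unverified_context()  # lab use only

# Old vCenter: power off (step 2) and remove from inventory (step 4).
old_si = SmartConnect(host="old-vc.example.com", user="admin", pwd="***", sslContext=ctx)
vm = find_obj(old_si.RetrieveContent(), vim.VirtualMachine, "app01")
if vm.runtime.powerState == vim.VirtualMachine.PowerState.poweredOn:
    WaitForTask(vm.PowerOffVM_Task())
vm.UnregisterVM()  # the files stay on the shared datastore
Disconnect(old_si)

# New vCenter: register the VM from the shared datastore (step 5).
new_si = SmartConnect(host="new-vc.example.com", user="admin", pwd="***", sslContext=ctx)
content = new_si.RetrieveContent()
host = find_obj(content, vim.HostSystem, "esx01.new.example.com")  # must see the iSCSI LUN
dc = find_obj(content, vim.Datacenter, "NewDC")
WaitForTask(dc.vmFolder.RegisterVM_Task(
    "[shared_iscsi] app01/app01.vmx", name="app01",
    asTemplate=False, pool=host.parent.resourcePool, host=host))

# Steps 6-8: relabel the NIC via ReconfigVM_Task, upgrade the virtual
# hardware, then power on; answer "I moved it" if vCenter raises the question.
vm = find_obj(content, vim.VirtualMachine, "app01")
WaitForTask(vm.UpgradeVM_Task())
WaitForTask(vm.PowerOnVM_Task())
Disconnect(new_si)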
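
And a second sketch for step 11, the Storage vMotion onto the new SAN datastore once the VM is running in the new vCenter. Again, "new-vc.example.com", "app01" and "new_san_ds" are assumptions standing in for your own names.

# Hypothetical pyVmomi sketch of step 11; names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_obj(content, vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="new-vc.example.com", user="admin", pwd="***", sslContext=ctx)
content = si.RetrieveContent()

vm = find_obj(content, vim.VirtualMachine, "app01")
ds = find_obj(content, vim.Datastore, "new_san_ds")

# Move only the storage; the VM stays registered to its host and keeps running.
WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=ds)))
Disconnect(si)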
That is from memory; we wrote a guide so all system admins could do this, shared out the VMs by service, and let people organise their own downtime and migrations.
Actual downtime per VM is under 10 minutes, as most of the work is in the Storage vMotions and in arranging the downtime window; the administration effort itself is quite low.
Hope this helps. Good luck!
Ryan