I'm seeing something very similar in a customer's environment. We have a vSphere 5.0 cluster at each site, a 30Mb/s link between the two sites with less than 10ms round-trip latency, and we're replicating 20 or so VMs from HQ to the DR site using vSphere Replication.
We've seeded the replication site using one-time backups onto a USB drive (using VeeamZip), which were restored to the DR site a couple of days later. Some of the especially busy VMs generated between 10GB and 50GB of changes in that time, and it's really struggling to get the replicated VMs up to date. As an example, we kicked off replication of a VM this afternoon, and after checksumming it had 1.9GB of changes that needed to be replicated. 7 hours later, it has only transferred 1.3GB. At a conservative estimate, we should be able to transfer around 10GB an hour over this link, so this seems incredibly slow. It does seem that the data is being drip-fed down to the DR site by vSphere Replication. The networking guys are seeing less than 5Mb/s of traffic on the link.
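For anyone who wants to sanity-check those figures, here's a quick back-of-envelope calculation (plain Python, numbers taken straight from my case above, nothing vSphere-specific):

    # Rough throughput sanity check using the figures above.
    LINK_MBPS = 30        # site-to-site link, megabits per second
    OBSERVED_GB = 1.3     # data actually transferred so far
    OBSERVED_HOURS = 7    # elapsed time

    # Theoretical best case: 30 Mb/s = 3.75 MB/s, roughly 13 GB/hour
    link_gb_per_hour = LINK_MBPS / 8 * 3600 / 1024
    print(f"Link capacity: ~{link_gb_per_hour:.1f} GB/hour")

    # What the replication actually achieved
    observed_gb_per_hour = OBSERVED_GB / OBSERVED_HOURS
    observed_mbps = observed_gb_per_hour * 1024 * 8 / 3600
    print(f"Observed: ~{observed_gb_per_hour:.2f} GB/hour (~{observed_mbps:.2f} Mb/s)")
    print(f"Link utilisation: ~{observed_mbps / LINK_MBPS * 100:.1f}%")

That works out to roughly 0.4Mb/s, i.e. under 2% of the link, which matches what the networking guys are seeing.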
I understand the point about it being optimised for multiple VMs, but how many VMs do you need to be replicating for it to use a reasonable amount of bandwidth? From what I understand, vSphere Replication is positioned at the SMB market, which perhaps doesn't have the budget for storage array replication. 20-30 VMs must be quite normal in these kinds of environments. Initial replication could take weeks and weeks if you've not got the ability to pre-seed the DR site (see the rough sums below). I'd love to investigate the advanced settings to speed it up, but I really don't want to go down that route in a customer's environment if VMware consider it unsupported.
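To put rough numbers on the "weeks and weeks" point: the VM count below is from this environment, but the average VM size is purely an assumption on my part for illustration:

    # Hypothetical initial-sync estimate with no pre-seeded DR site.
    VM_COUNT = 25                # 20-30 VMs, per my environment
    AVG_VM_SIZE_GB = 100         # assumed average size, illustrative only
    LINK_GB_PER_HOUR = 13.2      # full use of the 30 Mb/s link
    OBSERVED_GB_PER_HOUR = 0.19  # the rate we're actually seeing

    total_gb = VM_COUNT * AVG_VM_SIZE_GB
    print(f"Full link rate: ~{total_gb / LINK_GB_PER_HOUR / 24:.0f} days")
    print(f"Observed rate:  ~{total_gb / OBSERVED_GB_PER_HOUR / 24:.0f} days")

Even with the whole link to itself, the initial sync is over a week on those assumptions; at the rate we're actually seeing, it would take well over a year.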
Sorry, rant over, and apologies for the thread hijack; I just wanted to add my experience of using it. Don't get me wrong, I like SRM as a product, but my experience of using it with storage array replication is much better than what I'm seeing with vSphere Replication, unfortunately.
Dave