Hi,
how exactly did you run the tests?
When using IOmeter, how many outstanding IOs did you configure?
A single file copy job performed from a Windows VM is usually a single-threaded operation which doesn't send parallel IOs (or only a limited number of them).
And when using thin VMDKs the values could also differ.
Before IOmeter measures throughput and response time, it creates a test file on the volume under test.
When that test file resides on a thin VMDK, VMware zeroes out the used blocks during the file creation process.
Only after the test file has been created does IOmeter start the measurement.
When Explorer copies a file onto a thin VMDK, however, every write IO that hits an uninitialized block is intercepted by ESXi and a zero-out IO is injected first.
So the number of write IOs the guest needs to copy the file is much smaller than the total number of IOs the ESXi server actually generates, which skews the results.
If you would like to validate how fast your (EMC) array can handle multiple Windows file transfer jobs, you could use RichCopy on an eagerzeroedthick VMDK.
Without eagerzeroedthick VMDKs you can't trust IO performance values measured inside a VM.
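As a rough sketch of how to prepare such a disk from the ESXi shell (size and paths are just placeholders, please double-check the options against your ESXi version), vmkfstools can either create a new disk or inflate an existing one:

vmkfstools -c 20G -d eagerzeroedthick /vmfs/volumes/<datastore>/<vm>/test-eztk.vmdk   # create a new eagerzeroedthick test disk
vmkfstools --inflatedisk /vmfs/volumes/<datastore>/<vm>/existing-thin.vmdk            # inflate an existing thin disk so all blocks are pre-zeroed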
It's a known issue that svMotion operations can cause high disk latencies; this is explained in the following VMware article:
VMware KB: Abnormal DAVG and KAVG values observed during VAAI operations
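To see where that latency comes from while an svMotion is running, you can watch the disk views in esxtop (a quick sketch from memory): DAVG/cmd is the latency reported by the device/array, KAVG/cmd is the latency added inside the VMkernel.

esxtop    # then press 'd' for the disk adapter view or 'u' for the disk device view and watch DAVG/cmd and KAVG/cmd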
With a VNX, svMotion performance might also be slower if the source and destination LUNs aren't owned by the same SP.
If the LUNs are owned by different SPs, the VNX has to move the data from one SP to the other via its internal bus system.
When source and destination are owned by the same SP, the data movement is handled entirely within that SP.
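If you're not sure about the ownership, you should be able to check the current owner of both LUNs with naviseccli (syntax from memory, the SP address and LUN numbers are placeholders):

naviseccli -h <SP-A-IP> getlun <source-LUN-number> -owner
naviseccli -h <SP-A-IP> getlun <destination-LUN-number> -owner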
Also keep in mind that the IO response time for a small IO is much better than for a large IO.
AFAIK, Windows Explorer uses 1 MB IOs, so I would expect higher response times there.
If you're only testing with a single VM and the test period isn't too long, you could also use vscsiStats to measure the performance.
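A rough outline of how I'd run it from the ESXi shell (the world group ID is a placeholder, check vscsiStats -h for the exact options on your build):

vscsiStats -l                              # list running VMs and their world group IDs
vscsiStats -s -w <worldGroupID>            # start collecting for that VM
vscsiStats -p latency -w <worldGroupID>    # print the latency histogram
vscsiStats -p ioLength -w <worldGroupID>   # print the IO size histogram
vscsiStats -x                              # stop the collection when you're done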
Regards,
Ralf