Interesting question; a bit of Googling turned up this:
Computers within an organization have different processing speeds. This speed difference might cause users to request that their work be run on the faster computer to reduce costs. This situation could lead to heavy workloads on the faster computers while the slower units stand idle. To avoid this problem, you can normalize the processing speeds to more evenly charge for CPU utilization. That is, you can define that a percentage of the original CPU be used during the billing process.
http://pic.dhe.ibm.com/infocenter/tivihelp/v3r1/index.jsp?topic=%2Fcom.ibm.tivoli.dszos.doc_1.8.1%2Fdr5luu02170.htm
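To make the idea concrete, the normalization factor is just a multiplier applied to measured CPU time before it's billed, so a job costs roughly the same whether it ran on the fast or the slow box. Here's a minimal sketch of that arithmetic in Python; the factors and rate are invented for the example, not values from the IBM product:

    # Illustrative only: normalize CPU seconds from machines of different
    # speeds before charging for them. The factors and rate below are
    # made-up numbers, not anything from Tivoli Decision Support for z/OS.

    # Normalization factor per machine class: the fraction of the reference
    # CPU's speed at which this machine's CPU time is billed.
    NORMALIZATION = {
        "fast-box": 1.00,   # reference machine, billed at full weight
        "slow-box": 0.60,   # runs at ~60% of the reference speed
    }

    RATE_PER_NORMALIZED_SECOND = 0.002  # arbitrary currency units


    def normalized_charge(machine: str, cpu_seconds: float) -> float:
        """Charge for a job based on speed-normalized CPU time."""
        return cpu_seconds * NORMALIZATION[machine] * RATE_PER_NORMALIZED_SECOND


    if __name__ == "__main__":
        # The same hypothetical job: 600 CPU seconds on the fast box,
        # 1000 CPU seconds on the slow box. After normalization both
        # come out to the same charge, so there's no incentive to pile
        # all the work onto the fast machine.
        print(normalized_charge("fast-box", 600))   # 1.2
        print(normalized_charge("slow-box", 1000))  # 1.2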
So it looks like you need to benchmark at least one application in VMware to find out what percentage of the CPU it's actually using. I don't know what methodology they used to arrive at their figures, but it sounds like if the application was running on an X-CPU machine with Y cores, you should configure the VM with those same numbers. In VMware you can specify virtual sockets and cores per socket, as well as per-VM CPU resource allocation (reservations, limits and shares).

Of course, unless you dedicate entire cores or CPUs to the VM, it will have to share the processing power with other workloads, so this can get a little complex; there's a rough sketch of how you might size that allocation below. I do have to wonder whether CPU normalisation is a blunt tool when it comes to virtualisation. My guess is that, as a general rule, users of applications with lower normalised CPU speeds would notice being virtualised less than users of more processor-intensive applications, but you really need to know more about the requirements.
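As a back-of-the-envelope example of turning a benchmark into a per-VM allocation: vSphere expresses CPU reservations and limits in MHz, so you can multiply the physical box's total capacity by the utilisation you measured to get a starting reservation. All the input numbers here are hypothetical, and this is a sizing sketch rather than a recipe:

    # Rough sizing sketch: "the app averaged N% of an X-socket, Y-core host
    # at Z MHz" -> a candidate per-VM CPU reservation in MHz (the unit
    # vSphere uses). All figures below are invented for illustration.

    def suggested_cpu_reservation_mhz(sockets: int, cores_per_socket: int,
                                      core_clock_mhz: int,
                                      measured_utilisation: float) -> float:
        """Candidate MHz reservation = total physical capacity * observed usage."""
        total_capacity_mhz = sockets * cores_per_socket * core_clock_mhz
        return total_capacity_mhz * measured_utilisation


    if __name__ == "__main__":
        # Hypothetical benchmark: the app averaged 40% of a
        # 2-socket, 4-cores-per-socket, 2.6 GHz host.
        reservation = suggested_cpu_reservation_mhz(
            sockets=2, cores_per_socket=4, core_clock_mhz=2600,
            measured_utilisation=0.40,
        )
        print(f"Suggested CPU reservation: ~{reservation:.0f} MHz")  # ~8320 MHz
        # You'd still give the VM the same vCPU topology (sockets x cores)
        # and let the reservation guarantee it that slice of the host while
        # other workloads share the rest.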