ESXi

  • 1.  vCPU Sockets vs Cores

    Posted Nov 24, 2015 02:22 AM

    Is there any performance benefit of using sockets vs cores for vCPUs, or are they more or less doing the same thing?



  • 2.  RE: vCPU Sockets vs Cores

    Posted Nov 24, 2015 07:39 AM

    Each vCPU ultimately maps to a physical core (or logical processor) on the host.

    So whether we configure 1 socket with 2 cores or 2 sockets with 1 core each, the virtual machine sees 2 virtual CPUs either way, and scheduling happens on the same two physical cores. It really doesn't make any performance difference.
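    The sockets-times-cores arithmetic above can be sketched as a small illustration (this is plain Python, not a VMware API; the function name is mine):

    ```python
    # Illustrative sketch: enumerate the virtual socket/core layouts that
    # all present the same total vCPU count to the guest. Performance-wise
    # these layouts are equivalent; only the topology shown to the guest
    # (and hence per-socket licensing) differs.
    def socket_core_layouts(total_vcpus):
        """Return every (virtual_sockets, cores_per_socket) pair whose
        product equals total_vcpus."""
        return [(total_vcpus // cores, cores)
                for cores in range(1, total_vcpus + 1)
                if total_vcpus % cores == 0]

    # For 2 vCPUs there are two equivalent layouts:
    # 2 sockets x 1 core, or 1 socket x 2 cores.
    print(socket_core_layouts(2))  # [(2, 1), (1, 2)]
    ```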

    The distinction matters mainly for licensing, where in some cases an application or guest OS is licensed per socket rather than per core.

    What I like to do is always replicate the underlying hardware. If the host has 4 physical sockets with 4 cores each and I want to assign 4 vCPUs, I would rather configure 4 virtual sockets with 1 core each.

    Suhas



  • 3.  RE: vCPU Sockets vs Cores

    Posted Nov 24, 2015 10:23 AM

    I agree. Unless licensing constraints dictate otherwise, I consider best practice to be x sockets with 1 core each.

    vM

    -----------------------

    VCAP-DCD / VCAP-DCA / VCP-CLOUD / VCP-DT / VCP5 / VCP4

    -----------------------

    vMustard.com



  • 4.  RE: vCPU Sockets vs Cores

    Posted Nov 24, 2015 01:55 PM

    Thanks guys!



  • 5.  RE: vCPU Sockets vs Cores

    Posted Nov 24, 2015 09:15 AM

    As already said, it doesn't make any difference from a performance perspective, unless you deliberately set inefficient core counts that will affect NUMA (this is only relevant once a VM has 9 or more total vCPUs, the point at which virtual NUMA is exposed to the guest by default). This is why it's generally advised not to bother with cores and to configure vCPUs as sockets only, unless you have to circumvent licensing limitations and really know what you're doing.
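    The NUMA concern can be shown with a toy calculation (the host sizes and the helper below are hypothetical, not an ESXi API): each virtual socket effectively defines the width of a virtual NUMA node, so an oversized cores-per-socket value can force a node to span two physical NUMA nodes and lose memory locality.

    ```python
    # Toy sketch: check whether a given socket/core layout lets each
    # virtual NUMA node fit inside one physical NUMA node.
    def vnuma_fits(vcpus, cores_per_socket, pcores_per_pnuma):
        """True if the layout is a valid split and each virtual socket
        (cores_per_socket vCPUs wide) is no wider than a physical node."""
        if vcpus % cores_per_socket != 0:
            return False  # not even a valid socket/core split
        return cores_per_socket <= pcores_per_pnuma

    # Hypothetical host with 8 cores per physical NUMA node, 12-vCPU VM:
    print(vnuma_fits(12, 6, 8))   # 2 sockets x 6 cores: each fits in a node
    print(vnuma_fits(12, 12, 8))  # 1 socket x 12 cores: spans two nodes
    ```

    Leaving cores per socket at 1 (sockets only) sidesteps this entirely, because ESXi is then free to build the virtual NUMA topology from the physical one, which is the rationale in the linked article.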

    Check this article that explains the rationale behind this in more detail:

    http://blogs.vmware.com/vsphere/2013/10/does-corespersocket-affect-performance.html