vNUMA: A VMware Admin’s Guide

Since I’ve been discussing this recently at work with developers and managers due to some SQL Server issues we have been having, I decided it was time to write about it. vNUMA was introduced with vSphere 5 back in the day and builds on NUMA (Non-Uniform Memory Access). In a nutshell, NUMA divides a host’s CPU and memory resources into nodes where memory is local to a set of CPUs, and vNUMA exposes that topology to the virtual machine so the guest can keep memory access local, which means faster memory access.

The image above shows how NUMA breaks the host’s resources up into nodes. This behavior can also be controlled within VMware. vNUMA is typically used for your “Monster VMs”, where the virtual CPUs span multiple physical CPUs. For example, if your virtual machine has 12 vCPUs and each physical CPU in the host has 8 cores, you are spanning multiple physical CPUs. vNUMA is also recommended when using more than 8 vCPUs. To change the NUMA setting (keep in mind that the VM must be powered off to make this change):

  1. Select your VM
  2. Choose Edit Settings…
  3. Go to the Options tab
  4. Go to Advanced -> General
  5. Click the button “Configuration Parameters”
  6. Add the parameter “numa.vcpu.min” and set it to the minimum number of vCPUs a VM must have before vNUMA is exposed to the guest (the default is 9). The number of vCPUs placed in each virtual NUMA node is controlled separately by “numa.vcpu.maxPerVirtualNode”. A scripted way to make the same change is sketched after this list.
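
If you would rather script the change than click through the UI, here is a minimal sketch using pyVmomi (the vSphere Python SDK). The vCenter address, credentials, VM name, and the threshold value of 4 are hypothetical placeholders, and as noted above, the VM must be powered off for the new NUMA setting to take effect.

```python
# Minimal pyVmomi sketch: set numa.vcpu.min on a VM's advanced settings.
# Host name, credentials, VM name, and the value "4" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()

# Find the VM by name (e.g. the SQL Server discussed in this post).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql01")
view.Destroy()

# Expose vNUMA to any VM with 4 or more vCPUs instead of the default 9.
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="numa.vcpu.min", value="4")
])
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```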

One thing to keep in mind with NUMA is that you want to size the virtual NUMA nodes to match the number of cores per physical socket in your hosts. For example, my physical hosts have six-core processors, so the NUMA node size on the SQL Server is set to 6, which gives my 12-vCPU SQL Server 2 NUMA nodes with 6 vCPUs each. By default, vNUMA is enabled when you create a VM with more than 8 vCPUs; using the instructions above, you can change that threshold or enable vNUMA on VMs with 8 or fewer vCPUs.
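
To make the sizing math concrete, here is a small sketch of the arithmetic described above. The function name and the assumption that vCPUs should divide evenly across nodes are mine, not VMware’s; it simply mirrors the 12-vCPU, six-core example.

```python
import math

def vnuma_layout(vcpus, cores_per_socket):
    """Rough sizing helper: how many virtual NUMA nodes a VM needs if
    each node is capped at the core count of one physical socket."""
    nodes = max(1, math.ceil(vcpus / cores_per_socket))
    if vcpus % nodes:
        raise ValueError("vCPU count should divide evenly across nodes")
    return nodes, vcpus // nodes

print(vnuma_layout(12, 6))  # (2, 6): the SQL Server above, two nodes of 6
print(vnuma_layout(8, 8))   # (1, 8): fits inside one socket, no spanning
```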

vNUMA is very specific to certain use cases and should be used with caution. Incorrectly configuring NUMA can cause more problems than leaving it at the default, so be sure to test your settings on a non-production server and confirm the results are what you expected. One final thing to keep in mind: make sure the hosts in any cluster where vNUMA is in use have similar NUMA configurations, so the VM doesn’t run into issues when it vMotions to a different host.
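
One way to check that the hosts in a cluster look alike is to pull their socket, core, and memory counts with pyVmomi and compare the output; again, the vCenter address and credentials below are placeholders, not anything from this environment.

```python
# Compare the CPU/memory layout of every host in each cluster.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    print(cluster.name)
    for host in cluster.host:
        cpu = host.hardware.cpuInfo
        mem_gb = host.hardware.memorySize / (1024 ** 3)
        # Cores per socket drives the NUMA node size a VM will see.
        print(f"  {host.name}: {cpu.numCpuPackages} sockets, "
              f"{cpu.numCpuCores // cpu.numCpuPackages} cores/socket, "
              f"{mem_gb:.0f} GB RAM")
view.Destroy()
Disconnect(si)
```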

Image courtesy of brentozar.com