Today's critical enterprise applications demand more processors running at higher clock speeds. In the past, hardware vendors supplied more and faster CPUs, but with a single system bus and its associated memory latency, the additional processing power was not fully utilized. Such an architecture, in which all memory accesses are posted to the same shared memory bus, is known as symmetric multiprocessing (SMP). Large on-chip L3 caches were introduced to work around this bottleneck, but they, too, were only a partial solution.

NUMA is designed to overcome the scalability limitations of the SMP architecture. NUMA hardware includes more than one system bus, each serving a small set of processors. Each group of processors has its own memory and, possibly, its own I/O channels; each group is called a NUMA node. For example, a 16-processor machine may have 4 NUMA nodes, each node having 4 CPUs, its own system bus, and, possibly, its own I/O channels. This allows for greater memory locality for that group of schedulers when tasks are processed on the node. Each CPU can, however, still access memory associated with the other groups in a coherent way. Non-uniform memory access means that some regions of memory (for example, remote memory on a different node) take longer to access than others (for example, local memory on the same node). The main benefit of NUMA is scalability on high-end machines (generally eight or more processors): it reduces memory contention by providing several memory buses, so only a small number of CPUs compete for each shared bus.

SQL Server 2005 is NUMA-aware, which means it can perform better on NUMA hardware without special configuration. When a thread running on a specific NUMA node has to allocate memory, SQL Server's memory manager tries to allocate it from the memory associated with that node for locality of reference.
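The 16-processor, 4-node example above can be sketched as a toy model. The CPU-to-node mapping and the latency numbers here are illustrative assumptions, not measurements from any real machine; the point is only that a CPU pays more to reach memory on another node than memory on its own.

```python
# Toy model of a 16-CPU machine with 4 NUMA nodes of 4 CPUs each.
CPUS_PER_NODE = 4

def node_of(cpu):
    """NUMA node that a given CPU belongs to (CPUs 0-3 on node 0, etc.)."""
    return cpu // CPUS_PER_NODE

def access_cost(cpu, memory_node, local=10, remote=30):
    """Relative cost of a memory access: cheap when the memory lives on
    the CPU's own node, more expensive when it is on a remote node.
    The local/remote values are made-up units for illustration."""
    return local if node_of(cpu) == memory_node else remote

# CPU 5 lives on node 1, so node-1 memory is local and node-3 memory is remote.
print(node_of(5))         # 1
print(access_cost(5, 1))  # 10 (local)
print(access_cost(5, 3))  # 30 (remote)
```

This asymmetry is exactly why SQL Server's memory manager prefers to satisfy an allocation from the requesting thread's own node.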
Each NUMA node has an associated I/O completion port that is used to handle network I/O. SQLOS, SQL Server startup, network binding, and BPool management are all designed to make effective use of NUMA.

SQL Server Configuration Manager allows you to associate a TCP/IP address and port with one or more NUMA nodes, so that clients connecting on a given port are served by specific nodes. NUMA affinity is configured as a server setting in SQL Server Configuration Manager: you append a node-identification bitmap (an affinity mask), in square brackets, after the port number. The mask can be specified in either decimal or hexadecimal format. For instance, both 1453[0x3] and 1453[3] map port 1453 to NUMA nodes 0 and 1: the affinity mask [3] is binary 00000011, and because bits 0 and 1 are set, client requests on port 1453 are served by NUMA nodes 0 and 1. As another example, 1453[0x11] and 1453[17] both map port 1453 to NUMA nodes 0 and 4, because the binary representation of hex 11 (decimal 17) is 00010001 and bits 0 and 4 are set. The default node-identification bitmap is -1, which means listen on all nodes. To configure a TCP/IP port to one or more NUMA nodes, you follow these steps:
Setting NUMA affinity so that different clients are served by different NUMA nodes is an easy-to-manage alternative to traditional multi-instance server consolidation and load-balancing approaches.
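The bitmap decoding described above (1453[0x3] serving nodes 0 and 1, 1453[0x11] serving nodes 0 and 4) can be sketched in a few lines. The `nodes_from_mask` helper is purely illustrative, not a SQL Server API; it just enumerates which bits are set in the affinity mask.

```python
def nodes_from_mask(mask):
    """Return the NUMA node numbers whose bits are set in the affinity
    mask that is appended to the port number, e.g. the 0x3 in 1453[0x3]."""
    return [bit for bit in range(64) if mask >> bit & 1]

# Both notations from the text decode to the same nodes:
print(nodes_from_mask(0x3))   # [0, 1]  -> 1453[0x3] or 1453[3]
print(nodes_from_mask(0x11))  # [0, 4]  -> 1453[0x11] or 1453[17]
```

Note that hex 0x11 and decimal 17 are the same value, which is why both spellings of the port setting select the same pair of nodes.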