System Configuration


You can configure the SQL Server engine in several ways and through a variety of interfaces. One approach involves using a system-stored procedure called sp_configure and supplying a value for a particular configuration option. Some configuration options can be set at the operating system level, and most of these can be controlled using the Surface Area Configuration Tool (discussed in Chapter 1). In addition, some server behaviors can be controlled using operating system control interfaces; others, such as trace flags, can be set as startup parameters of the SQL Server engine executable, Sqlservr.exe.

Next we'll look at most of the configuration options for controlling the behavior of SQL Server 2005.

Task Management

As you saw in Chapter 2, the operating system schedules all threads in the system for execution. Each thread of every process has a priority, and Windows executes the next available thread with the highest priority. By default, the operating system gives active applications a higher priority, but this priority setting is not appropriate for a server application running in the background, such as SQL Server 2005. To remedy this situation, the SQL Server installation program modifies the priority setting to eliminate favoring of foreground applications.

It's not a bad idea to periodically double-check this priority setting in case someone has set it back. From Control Panel in Windows, double-click the System icon to open the System Properties dialog box. On the Advanced tab, click the Performance Settings button. The first set of option buttons is for specifying how to allocate processor resources, and you can adjust for the best performance of either programs or background services. Select Background Services so that all programs (both background and foreground) receive equal processor resources. If you plan to connect to SQL Server 2005 from a local client (one that is running on the same computer as the server), you can improve processing time by using this setting.

Resource Allocation

A computer running Windows Server as a file and print server devotes as much memory as possible to file caching. If the computer is also running SQL Server, it makes more sense to have that memory available to SQL Server instead. In Windows 2000 and Windows 2003, you can change the properties for file and print services by right-clicking My Network Places and choosing Properties. Then right-click Local Area Connection, and again choose Properties. Select File And Printer Sharing For Microsoft Networks, and then click the Properties button. The dialog box shown in Figure 3-1, which is almost identical to the one you get in Windows 2000, appears.

Figure 3-1. Server optimization for network applications


The Maximize Data Throughput For Network Applications option is best for running SQL Server. You need high network throughput with minimal memory devoted to file caching so that more memory is available to SQL Server. When this option is set, network applications such as SQL Server have priority over the file cache for access to memory. Although you might expect that the Minimize Memory Used option would help, it minimizes the memory available for some internal network settings that are needed to support a lot of SQL Server users.

System Paging File Location

If possible, you should place the operating system paging file on a different drive than the files used by SQL Server. This is vital if your system will be paging. However, an even better approach is to add memory or change the SQL Server memory configuration to effectively eliminate paging. In general, SQL Server is designed to minimize paging, so if your memory configuration values are appropriate for the amount of physical memory on the system, so little page-file activity will occur that the file's location will be irrelevant.

Nonessential Services

You should disable any services you don't need. In Windows 2000 and 2003, you can right-click My Computer and choose Manage. Expand the Services And Applications node in the Computer Management tool, and click Services. In the right-hand pane, you will see a list of all the services available on the operating system. You can change a service's startup property by right-clicking its name and choosing Properties. Unnecessary services add overhead to the system and use resources that could otherwise go to SQL Server. No unnecessary services should be marked for automatic startup. Avoid using a server that's running SQL Server as a domain controller, the group's file or print server, the Web server, or the Dynamic Host Configuration Protocol (DHCP) server. You should also consider disabling the Alerter, ClipBook, Computer Browser, Messenger, Network Dynamic Data Exchange (DDE), and Task Scheduler services, which are enabled by default but are not needed by SQL Server.

Network Protocols

You should run only the network protocols you actually need for connectivity. (I discussed the various protocols in Chapter 2.) You can use the SQL Server Configuration Manager to disable unneeded protocols, as described earlier in this chapter.

Compatibility with Earlier Versions of SQL Server

A default instance of SQL Server 2005 can listen on the same network addresses as earlier versions of SQL Server, including SQL Server 2000 and SQL Server 7.0. Applications using earlier versions of the client tools can also connect to a default instance with no change. However, named instances of SQL Server 2005 listen on dynamic ports, and client computers using earlier versions of the client tools or components must be set up to connect to these addresses. You can do this by creating a server alias for use by the client. More information about setting up aliases is available in SQL Server 2005 Books Online.

Trace Flags

You can think of trace flags as special switches that you can turn on or off to change the behavior of SQL Server. Books Online lists fewer than a dozen trace flags that are fully supported, but there are actually many dozens, if not hundreds, of them. Most were created for the SQL Server development team's internal testing of the product and were never intended for use by anyone outside Microsoft.

You can set trace flags on or off by using the DBCC TRACEON or DBCC TRACEOFF command or by specifying them on the command line when you start SQL Server using Sqlservr.exe. You can also use the SQL Server Configuration Manager to enable one or more trace flags every time the SQL Server service is started. (You can read about how to do that in Books Online.) Trace flags enabled with DBCC TRACEON are valid only for a single connection unless you specify an additional parameter of -1 when calling DBCC TRACEON, in which case they will be active for all connections, even ones opened before you ran DBCC TRACEON. Trace flags enabled as part of starting the SQL Server service are enabled for all sessions.
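
As a minimal sketch of these commands, the following batch enables and then disables a trace flag; the documented deadlock-reporting flag 1222 is used purely as an illustration, and the flag you actually need depends on the behavior you are trying to change.

-- Enable trace flag 1222 for the current connection only
DBCC TRACEON (1222);

-- Enable it for all connections by adding the -1 parameter
DBCC TRACEON (1222, -1);

-- List the trace flags currently enabled globally
DBCC TRACESTATUS (-1);

-- Turn the flag off again
DBCC TRACEOFF (1222, -1);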

A few of the trace flags are particularly relevant to topics covered in this book, and I will discuss particular ones when I describe topics that they are related to. Because trace flags change the way SQL Server behaves, they can actually cause trouble if used inappropriately. Trace flags are not harmless features that you can experiment with just to see what happens, especially not on a production system. Using them effectively requires a thorough understanding of SQL Server default behavior (so that you know exactly what you'll be changing) and extensive testing to determine that your system really will benefit from the use of the trace flag.

SQL Server Configuration Settings

If you choose to have SQL Server automatically configure your system, it will dynamically adjust the most important configuration options for you. It's best to accept the default configuration values unless you have a good reason to change them. A poorly configured system can destroy performance. For example, a system with an incorrectly configured memory setting can break an application.

In certain cases, tweaking the settings rather than letting SQL Server dynamically adjust them might lead to a tiny performance improvement, but your time is probably better spent on application and database design, indexing, query tuning, and other such activities, which I'll talk about later in this book. You might see only a 5 percent improvement in performance by moving from a reasonable configuration to an ideal configuration, but a badly configured system can kill your application's performance.

SQL Server 2005 has only 14 configuration options that are not considered "advanced," and none of these directly affects performance. To see all the configuration options, you must change the value of the Show Advanced Options setting:

EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO


You should change configuration options only when you have a clear reason for doing so, and you should closely monitor the effects of each change to determine whether the change improved or degraded performance. Always make and monitor changes one at a time.

The server-wide options discussed here are set via the sp_configure system stored procedure. Many of them can also be set from the Server Properties dialog box in the Object Explorer window of SQL Server Management Studio, but there is no single dialog box from which all configuration settings can be seen or changed. Most of the options that you can change from the Server Properties dialog box are controlled from one of the property pages that you reach by right-clicking your server. You can see the list of property pages in Figure 3-2.

Figure 3-2. List of server property pages in SQL Server Management Studio


If you use the sp_configure stored procedure, no changes take effect until the RECONFIGURE command runs. In some cases, you might have to specify RECONFIGURE WITH OVERRIDE if you are changing an option to a value outside its recommended range. Changes to dynamic options take effect immediately upon reconfiguration, but others do not take effect until the server is restarted. If an option's run_value and config_value as displayed by sp_configure are different, you must restart the server for the config_value to take effect.
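
As a quick sketch of that check, the following batch sets an advanced option (Cost Threshold For Parallelism, used here purely as an example and left at its default of 5) and then displays its config_value and run_value; it assumes Show Advanced Options has already been set to 1, as shown earlier.

EXEC sp_configure 'cost threshold for parallelism', 5;
RECONFIGURE;
GO
-- config_value is the stored setting; run_value is what is currently in effect.
-- If they differ for a non-dynamic option, a restart is required.
EXEC sp_configure 'cost threshold for parallelism';
GO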

We won't look at every configuration option here, only the most interesting ones or ones that are related to SQL Server performance. In most cases, we'll discuss options that you should not change. Some of these are resource settings that relate to performance only in that they consume memory (for example, Locks). But if they are configured too high, they can rob a system of memory and degrade performance. I'll group the sp_configure settings by functionality. Keep in mind that SQL Server sets almost all of these options automatically, and your applications will work well without you ever looking at these options.

Memory Options

Prior to SQL Server 7.0, you had to manually configure all sorts of memory options, including how much total memory SQL Server should use and how much of that memory should be used for specific purposes. Fortunately, this is no longer the case. In the preceding section, you saw how SQL Server uses memory, including how it allocates memory for different uses and when it reads data from or writes data to disk. However, we did not discuss how to control how much memory SQL Server actually uses for these purposes.

Min Server Memory and Max Server Memory

By default, SQL Server adjusts the total amount of the memory resources it will use. However, you can use the Min Server Memory and Max Server Memory configuration options to take manual control. The default setting for Min Server Memory is 0 megabytes (MB), and the default setting for Max Server Memory is 2147483647. If you use the sp_configure stored procedure to change both of these options to the same value, you basically take full control and tell SQL Server to use a fixed memory size. The absolute maximum of 2147483647 MB previously mentioned is actually the largest value that can be stored in the integer field of the underlying system table. It is not related to the actual resources of your system. Both of these options were discussed in Chapter 2 in the section on memory, so I won't repeat that discussion here.
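
If you do decide to take manual control, a sketch like the following sets both options; the values are purely illustrative and assume a dedicated server with considerably more physical memory than the configured maximum.

-- Illustrative values only: keep SQL Server's memory between 1 GB and 4 GB
EXEC sp_configure 'min server memory (MB)', 1024;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
GO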

Set Working Set Size

The configuration option Set Working Set Size is a legacy setting from earlier versions, including SQL Server 2000. This setting is ignored in SQL Server 2005, even though you will not receive an error message when you try to use this value.

AWE Enabled

This option enables the use of the Address Windowing Extensions (AWE) API to support large memory sizes. With AWE enabled, SQL Server 2005 can use as much memory as the operating system version of Enterprise, Developer, or Standard edition allows. (The exception is when you are running 64-bit SQL Server 2005, in which case maximum memory is limited to 32 terabytes on Developer or Standard edition.) Instances of SQL Server 2005 running on Windows 2000 do not dynamically manage the size of the address space when you enable AWE memory. Instead, when it starts up, SQL Server commits all of the memory it can up to the amount available on the machine. When running on Windows 2003, SQL Server reserves only a small portion of AWE-mapped memory. As additional AWE-mapped memory is required, the operating system dynamically allocates it to SQL Server. Similarly, if fewer resources are required, SQL Server can return AWE-mapped memory to the operating system for use by other processes or applications.

Use of AWE, in either Windows 2000 or Windows 2003, will lock the pages in memory so that they cannot be written to the paging file. Windows has to swap out other applications if additional physical memory is needed, so the performance of those applications might suffer. You should therefore set a value for Max Server Memory when you have also enabled AWE.
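
A minimal sketch of enabling AWE together with an explicit memory cap might look like the following; the 6144-MB cap is an assumed value for a 32-bit server with 8 GB of RAM, and the AWE Enabled option does not take effect until the instance is restarted.

EXEC sp_configure 'awe enabled', 1;                 -- requires a restart of the instance
EXEC sp_configure 'max server memory (MB)', 6144;   -- assumed cap; leave room for the OS and other processes
RECONFIGURE;
GO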

If you're running multiple instances of SQL Server 2005 on the same computer and each instance uses AWE memory, you must pay close attention to certain configuration issues. SQL Server Books Online offers the following guidelines:

  • Make sure each instance has a Max Server Memory setting.

  • The sum of the Max Server Memory values for all of the instances should be less than the amount of physical memory in the computer.

If the sum of the settings exceeds the physical memory on the computer, some of the instances either will not start or will have less memory than is specified in Max Server Memory. For example, suppose that a computer has 48 gigabytes (GB) of physical random access memory (RAM) and three instances of SQL Server 2005 running on it and that Max Server Memory is set to 16 GB for each instance. If you stop and restart all three instances:

  • The first instance will start with the full 16 GB of memory.

  • The second instance will start, but with slightly less than 16 GB of memory (up to 128 MB less).

  • The third instance will start in dynamic memory mode and will have 128 MB or less memory available to it.

User Connections

SQL Server 2005 dynamically adjusts the number of simultaneous connections to the server if the User Connections configuration setting is left at its default of 0. Even if you set this value to a different number, SQL Server does not actually allocate the full amount of memory needed for each user connection until a user actually connects. When SQL Server starts up, it allocates an array of pointers with as many entries as the configured value for User Connections. If you must use this option, do not set the value too high because each connection takes approximately 28 KB of overhead regardless of whether the connection is being used. However, you also don't want to set it too low, because if you exceed the maximum number of user connections, you receive an error message and cannot connect until another connection becomes available. (The exception is the DAC connection, which can be used.) Keep in mind that the User Connection value is not the same as the number of users; one user, through one application, can open multiple connections to SQL Server. Ideally, you should let SQL Server dynamically adjust the value of the User Connections option.
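
Before considering a change to this option, you can check how many connections are actually open; the following sketch counts rows in the sys.dm_exec_connections DMV and compares the result with the current server-wide limit.

-- Current number of open connections versus the configured maximum (0 = dynamically adjusted)
SELECT COUNT(*) AS open_connections,
       @@MAX_CONNECTIONS AS max_allowed
FROM sys.dm_exec_connections;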

Locks

The Locks configuration option sets the number of available locks (of all types). The default is 0, which means that when SQL Server starts, the lock manager acquires sufficient memory for an initial pool of 2500 lock structures per node. When the lock pool is exhausted, additional memory is acquired from the buffer pool. If more memory is required for the lock pool than is available in the buffer pool, and more computer memory is available (the target memory has not been reached), SQL Server generally allocates memory dynamically to satisfy the request for locks. The dynamic lock pool cannot acquire more than 60 percent of the memory allocated to the buffer pool. Once the 60 percent limit has been reached, further requests for locks will generate an error.

If you set the Locks option to something other than 0, you override the ability of SQL Server to allocate lock resources dynamically. In that case, the lock manager cannot allocate more locks than the value specified in Locks. Because each lock consumes memory (96 bytes per lock), increasing this value can require increasing the amount of memory dedicated to the server.

The Locks option also affects when SQL Server will consider lock escalation. When Locks is set to 0, SQL Server considers lock escalation when the memory used by the current lock structures reaches 40 percent of the buffer pool. When Locks is not set to 0, SQL Server considers lock escalation when the number of locks reaches 40 percent of the value specified for Locks. Locking and lock escalation are discussed in detail in Chapter 8.
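
Rather than configuring the Locks option, you can simply observe how many locks are currently held; this sketch counts rows in the sys.dm_tran_locks DMV by resource type and mode.

-- Locks currently held or requested, grouped by resource type and mode
SELECT resource_type,
       request_mode,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY resource_type, request_mode
ORDER BY lock_count DESC;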

Scheduling Options

As described earlier, SQL Server 2005 has a special algorithm for scheduling user processes using the SQLOS, which manages one scheduler per processor and makes sure that only one process can run on a scheduler at any given time. The SQLOS manages assignment of user connections to workers to keep the number of users per central processing unit (CPU) as balanced as possible. Five configuration options affect the behavior of the scheduler: Lightweight Pooling, Affinity Mask, Affinity64 Mask, Priority Boost, and Max Worker Threads.

Affinity Mask and Affinity64 Mask

From an operating system point of view, the ability of Windows to move process threads among different processors is efficient, but this activity can reduce SQL Server performance because each processor cache is reloaded with data repeatedly. By setting an affinity mask option, you can allow SQL Server to assign processors to specific threads and thus improve performance under heavy load conditions by eliminating processor reloads and reducing thread migration and context switching across processors. Setting an affinity mask to a non-0 value not only controls the binding of schedulers to processors, but it also allows you to limit which processors will be used for executing SQL Server requests.

The value of an affinity mask is a 4-byte integer, and each bit controls one processor. If you set a bit representing a processor to 1, that processor is mapped to a specific scheduler. The 4-byte affinity mask can support up to 32 processors. For example, to configure SQL Server to use processors 0 through 5 on an eight-way box, you would set the affinity mask to 63, which is equivalent to a bit string of 00111111. To enable processors 8 through 11 on a 16-way box, you would set the affinity mask to 3840, or 0000111100000000. You might want to do this on a machine supporting multiple instances, for example. You would set the affinity mask of each instance to use a different set of processors on the computer.

To cover more than 32 CPUs, you configure a 4-byte affinity mask for the first 32 CPUs and up to a 4-byte Affinity64 mask for the remaining CPUs. Note that affinity support for servers with 33 to 64 processors is available only on 64-bit operating systems.

You can configure the affinity mask to use all the available CPUs. For an eight-way machine, an Affinity Mask setting of 255 means that all CPUs will be enabled. This is not exactly the same as a setting of 0 because with the non-zero value, the schedulers are bound to a specific CPU, and with the 0 value, they are not.
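
As a sketch, the following binds SQL Server to processors 0 through 5 on the eight-way machine from the earlier example; Affinity Mask is an advanced option, so Show Advanced Options must already be set to 1.

-- Bit string 00111111 = decimal 63: use only processors 0 through 5
EXEC sp_configure 'affinity mask', 63;
RECONFIGURE;
GO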

Lightweight Pooling

By default, SQL Server operates in thread mode, which means that the workers processing SQL Server requests are threads. As I described earlier, SQL Server also lets user connections run in fiber mode. Fibers are less expensive to manage than threads. The Lightweight Pooling option can have a value of 0 or 1; 1 means that SQL Server should run in fiber mode. Using fibers may yield a minor performance advantage, particularly when you have 8 or more CPUs and all of the available CPUs are operating at or near 100 percent. However, the tradeoff is that certain operations, such as running queries on linked servers or executing extended stored procedures, must run in thread mode and therefore need to switch from fiber to thread. The cost of switching from fiber to thread mode for those connections can be noticeable and in some cases will offset any benefit of operating in fiber mode.

If you're running in an environment with a large number of CPUs, all operating at 100 percent capacity, and if System Monitor shows a lot of context switching, setting Lightweight Pooling to 1 might yield some performance benefit.

Priority Boost

If the Priority Boost setting is enabled, SQL Server runs at a higher scheduling priority. The result is that the priority of every thread in the server process is set to 13 in Windows 2000 and Windows 2003. Most processes run at the normal priority, which is 7. The net effect is that if the server is under load and is getting close to maxing out the CPU, these normal-priority processes will be effectively starved.

The default Priority Boost setting is 0, which means that SQL Server runs at normal priority whether or not you're running it on a single-processor machine. There are probably very few sites or applications for which setting this option makes much difference, but if your machine is totally dedicated to running SQL Server, you might want to enable this option (set it to 1) to see for yourself. It can potentially offer a performance advantage on a heavily loaded, dedicated system. As with most of the configuration options, you should use it with care. Raising the priority too high might affect the core operating system and network operations, resulting in problems shutting down SQL Server or running other operating system tasks on the server.

Max Worker Threads

SQL Server uses the operating system's thread services by keeping a pool of workers (threads or fibers) that take requests off the queue. It attempts to evenly divide the worker threads among the SQLOS schedulers so that the number of threads available to each scheduler is the Max Worker Threads setting divided by the number of CPUs. With 100 or fewer users, there are usually as many worker threads as active users (not just connected users who are idle). With more users, it often makes sense to have fewer worker threads than active users. Although some user requests have to wait for a worker thread to become available, total throughput increases because less context switching occurs.

The Max Worker Threads default value of 0 means that the number of workers will be configured by SQL Server, based on the number of processors and machine architecture. For example, for a SQL Server running on a four-way 32-bit machine, the default is 256 workers. This does not mean that 256 workers are created on startup. It means that if a connection is waiting to be serviced and no worker is available, a new worker is created if the total is currently below 256. If this setting is configured to 256 and the highest number of simultaneously executing commands is, say, 125, the actual number of workers will not exceed 125. It might be even less than that because SQL Server destroys and trims away workers that are no longer being used. You should probably leave this setting alone if your system is handling 100 or fewer simultaneous connections. In that case, the worker thread pool will not be greater than 100.
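
Before deciding whether the default is adequate, you can see how many workers currently exist on each scheduler; the following query is a sketch against the sys.dm_os_schedulers DMV (scheduler IDs of 255 and above belong to hidden, internal schedulers).

-- Workers and queued tasks per CPU scheduler
SELECT scheduler_id,
       current_workers_count,
       active_workers_count,
       work_queue_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;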

Table 3-2 lists the default number of workers given your machine architecture and number of processors. (Note that Microsoft recommends 1024 as the maximum for 32-bit operating systems.)

Table 3-2. Default Max Worker Threads Settings

CPU                     32-Bit Computer     64-Bit Computer
Up to 4 processors      256                 512
8 processors            288                 576
16 processors           352                 704
32 processors           480                 960


Even systems that handle 4000 or more connected users run fine with the default setting. When thousands of users are simultaneously connected, the actual worker pool is usually well below the Max Worker Threads value set by SQL Server because from the perspective of the database, most connections are idle even if the user is doing plenty of work on the client.

Disk I/O Options

No options are available for controlling SQL Server's disk read behavior. All the tuning options to control read-ahead in previous versions of SQL Server are now handled completely internally. One option is available to control disk write behavior. This option controls how frequently the checkpoint process writes to disk.

Recovery Interval

The Recovery Interval option can be automatically configured. SQL Server setup sets it to 0, which means autoconfiguration. In SQL Server 2005, this means that the recovery time should be less than 1 minute. This option lets the database administrator control the checkpoint frequency by specifying the maximum number of minutes that recovery should take, per database. SQL Server estimates how many data modifications it can roll forward in that recovery time interval. SQL Server then inspects the log of each database (every minute, if the recovery interval is set to the default of 0) and issues a checkpoint for each database that has made at least that many data modification operations since the last checkpoint. For databases with relatively small transaction logs, SQL Server issues a checkpoint when the log becomes 70 percent full, if that is less than the estimated number.

The Recovery Interval option does not affect the time it takes to undo long-running transactions. For example, if a long-running transaction takes two hours to perform updates before the server goes down, the actual recovery takes considerably longer than the Recovery Interval value.

The frequency of checkpoints in each database depends on the amount of data modifications made, not on a time-based measure. So a database that is used primarily for read operations will not have many checkpoints issued.

As discussed earlier, most writing to disk doesn't actually happen during checkpoint operations. Checkpoints are just a way to guarantee that all dirty pages not written by other mechanisms will still be written to the disk in a timely manner. For this reason, you should keep the Recovery Interval value set at 0 (self-configuring).

Affinity I/O Mask and Affinity64 I/O Mask

These two options control affinity of a processor for I/O operations and work in much the same way as the two options for controlling processing affinity for workers. Setting a bit for a processor in either of these bit masks means that the corresponding processor will be used only for I/O operations. You will probably never need to set this option. However, if you do decide to use it, perhaps just for testing purposes, you should use it in conjunction with the Affinity Mask or Affinity64 Mask option and make sure the bits set do not overlap. For any given CPU, the corresponding bit should therefore be set to 0 in both Affinity I/O Mask and Affinity Mask, to 1 in Affinity I/O Mask and 0 in Affinity Mask, or to 0 in Affinity I/O Mask and 1 in Affinity Mask.

Query Processing Options

SQL Server has several options for controlling the resources available for processing queries. As with all the other tuning options, your best bet is to leave the default values unless thorough testing indicates that a change might help.

Min Memory Per Query

When a query requires additional memory resources, the number of pages it gets is determined partly by the Min Memory Per Query option. This option is relevant for sort operations that you specifically request using an ORDER BY clause, and it also applies to internal memory needed by merge-join operations and by hash-join and hash-grouping operations. This configuration option allows you to specify a minimum amount of memory (in kilobytes) that any of these operations should be granted before they are executed. Sort, merge, and hash operations receive memory in a very dynamic fashion, so you rarely need to adjust this value. In fact, on larger machines, your sort and hash queries typically get much more than the Min Memory Per Query setting, so you shouldn't restrict yourself unnecessarily. If you need to do a lot of hashing or sorting, however, and you have few users or a lot of available memory, you might improve performance by adjusting this value. On smaller machines, setting this value too high can cause virtual memory to page, which hurts server performance.

Query Wait

The Query Wait option controls how long a query that needs additional memory will wait if that memory is not available. A setting of -1 means that the query waits 25 times the estimated execution time of the query, but it will always wait at least 25 seconds with this setting. A value of 0 or more specifies the number of seconds that a query will wait. If the wait time is exceeded, SQL Server generates error 8645:

Server: Msg 8645, Level 17, State 1, Line 1
A time out occurred while waiting for memory resources to execute the query. Re-run the query.


Even though memory is allocated dynamically, SQL Server can still run out of memory if the memory resources on the machine are exhausted. If your queries time out with error 8645, you can try increasing the paging file size or even adding more physical memory. You can also try tuning the query by creating more useful indexes so that hash or merge operations aren't needed. Keep in mind that this option affects only queries that have to wait for memory needed by hash and merge operations. Queries that have to wait for other reasons are not affected.
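
Before adjusting Query Wait, you can check whether any queries are actually waiting for workspace memory; the following sketch uses the sys.dm_exec_query_memory_grants DMV, in which grant_time remains NULL until the memory has been granted.

-- Sessions still waiting for a workspace memory grant
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;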

Blocked Process Threshold

This new option in SQL Server 2005 allows an administrator to request a notification when a user task has been blocked for more than the configured number of seconds. When Blocked Process Threshold is set to 0, no notification is given. You can set any value up to 86,400 seconds. When the deadlock monitor detects a task that has been waiting longer than the configured value, an internal event is generated. You can choose to be notified of this event in one of two ways. You can use SQL Trace to create a trace and capture events of type Blocked process report, which you can find in the Errors and Warnings category on the Events Select screen in SQL Server Profiler. As long as a task stays blocked on a deadlock-detectable resource, the event is raised every time the deadlock monitor checks for a deadlock. An XML string that describes the blocked resource and the resource being waited on is captured in the Text Data column of the trace. More information about deadlock detection is in Chapter 8.

Alternatively, you can use event notifications to send information about events to a service broker service. Event notifications can provide a programming alternative to defining a trace, and they can be used to respond to many of the same events that SQL Trace can capture. Event notifications, which execute asynchronously, can be used to perform an action inside an instance of SQL Server 2005 in response to events with very little consumption of memory resources. Because event notifications execute asynchronously, these actions do not consume any resources defined by the immediate transaction.
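
Whichever notification mechanism you choose, the threshold itself is set with sp_configure; the following is a minimal sketch, with 20 seconds as an arbitrary example value.

-- Blocked Process Threshold is an advanced option (Show Advanced Options must be 1)
EXEC sp_configure 'blocked process threshold', 20;   -- report tasks blocked longer than 20 seconds
RECONFIGURE;
GO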

Index Create Memory

The Min Memory Per Query option applies only to sorting and hashing used during query execution; it does not apply to the sorting that takes place during index creation. Another option, Index Create Memory, lets you allocate a specific amount of memory for index creation. Its value is also specified in kilobytes.

Query Governor Cost Limit

You can use the Query Governor Cost Limit option to specify the maximum number of seconds that a query can run. If you specify a non-zero, nonnegative value, SQL Server disallows execution of any query that has an estimated cost exceeding that value. Specifying 0 (the default) for this option turns off the query governor, and all queries are allowed to run without any time limit.
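
If you want to experiment with the query governor without affecting the whole server, you can also set the limit for a single connection; the following sketch shows both forms, with 300 as an arbitrary example cost limit.

-- Server-wide limit (advanced option): disallow queries whose estimated cost exceeds 300
EXEC sp_configure 'query governor cost limit', 300;
RECONFIGURE;
GO

-- Connection-level limit: applies only to the current session; 0 removes the limit
SET QUERY_GOVERNOR_COST_LIMIT 300;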

Max Degree Of Parallelism and Cost Threshold For Parallelism

SQL Server 2005 lets you run certain kinds of complex queries simultaneously on two or more processors. The queries must lend themselves to being executed in sections. Here's an example:

SELECT AVG(charge_amt), category
FROM charge
GROUP BY category


If the charge table has 100,000 rows and there are 10 different values for category, SQL Server can split the rows into groups and have only a subset of the groups processed on each processor. For example, with a four-CPU machine, categories 1 through 3 can be averaged on the first processor, categories 4 through 6 can be averaged on the second processor, categories 7 and 8 can be averaged on the third, and categories 9 and 10 can be averaged on the fourth. Each processor can come up with averages for only its groups, and the separate averages are brought together for the final result.

During optimization, the optimizer always finds the cheapest possible serial plan before considering parallelism. If this serial plan costs less than the configured value for the Cost Threshold For Parallelism option, no parallel plan is generated. Cost Threshold For Parallelism refers to the cost of the query in seconds; the default value is 5. If the cheapest serial plan costs more than this configured threshold, a parallel plan is produced based on some assumptions about how many processors and how much memory will actually be available at run time. This parallel plan cost is compared with the serial plan cost, and the cheaper one is chosen. The other plan is discarded.

A parallel query execution plan can use more than one thread; a serial execution plan, which is used by a nonparallel query, uses only a single thread. The actual number of threads used by a parallel query is determined at query plan execution initialization and is called the degree of parallelism (DOP). The decision is based on many factors, including the Affinity Mask setting, the Max Degree Of Parallelism setting, and the available threads when the query starts executing.

You can observe when SQL Server is executing a query in parallel by querying the DMV sys.dm_os_tasks. A query that is running on multiple CPUs will have one row for each thread.

SELECT
    task_address,
    task_state,
    context_switches_count,
    pending_io_count,
    pending_io_byte_count,
    pending_io_byte_average,
    scheduler_id,
    session_id,
    exec_context_id,
    request_id,
    worker_address,
    host_address
FROM sys.dm_os_tasks
ORDER BY session_id, request_id;


Be careful when you use the Max Degree Of Parallelism and Cost Threshold For Parallelism options; they have server-wide impact.
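
Because these settings are server-wide, a common alternative is to limit parallelism for an individual statement with the MAXDOP query hint; here is a sketch using the earlier charge example, with 4 as an arbitrary server-wide value.

-- Server-wide: allow at most 4 CPUs per parallel plan (example value)
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
GO

-- Per-query: force this statement to run serially, regardless of the server setting
SELECT AVG(charge_amt), category
FROM charge
GROUP BY category
OPTION (MAXDOP 1);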

There are other configuration options that I will not mention, most of which deal with aspects of SQL Server that are beyond the scope of this book. These include options for configuring remote queries, replication, SQL Agent, C2 auditing, and full-text search. There is a Boolean option (CLR Enabled) that controls whether the CLR can be used in programming SQL Server objects; it is off (0) by default, so CLR code is disallowed until you enable it. The SQL Server 2000 option Allow Updates still exists but has no effect in SQL Server 2005. A few of the configuration options deal with programming issues, and these will be discussed in Inside SQL Server: T-SQL Programming. These options include options for dealing with recursive and nested triggers, cursors, and accessing objects across databases.

Some of the configuration options take effect immediately after you set them and issue the RECONFIGURE (or in some cases, RECONFIGURE WITH OVERRIDE) statement. Others require that you restart your SQL Server instance.

Table 3-3 lists all available configuration options, the possible settings, and the default values. The configuration options are marked with letter codes as follows:

  • A = An advanced option. It should be changed only by an experienced database administrator, and it requires that you set the Show Advanced option to 1.

  • RR = An option that requires a restart of the Database Engine.

  • SC = Self-configuring option.

Table 3-3. SQL Server 2005 Configuration Options

  • Ad Hoc Distributed Queries (A): minimum 0, maximum 1, default 0

  • Affinity I/O Mask (A, RR): minimum -2147483648, maximum 2147483647, default 0

  • Affinity64 I/O Mask (A, available only on 64-bit version of SQL Server): minimum -2147483648, maximum 2147483647, default 0

  • Affinity Mask (A): minimum -2147483648, maximum 2147483647, default 0

  • Affinity64 Mask (A, available only on 64-bit version of SQL Server): minimum -2147483648, maximum 2147483647, default 0

  • Agent XPs (A): minimum 0, maximum 1, default 0

  • Allow Updates (obsolete): minimum 0, maximum 1, default 0

  • AWE Enabled (A, RR): minimum 0, maximum 1, default 0

  • Blocked Process Threshold (A): minimum 0, maximum 86400, default 0

  • C2 Audit Mode (A, RR): minimum 0, maximum 1, default 0

  • CLR Enabled: minimum 0, maximum 1, default 0

  • Cost Threshold For Parallelism (A): minimum 0, maximum 32767, default 5

  • Cross Db Ownership Chaining: minimum 0, maximum 1, default 0

  • Cursor Threshold (A): minimum -1, maximum 2147483647, default -1

  • Database Mail XPs (A): minimum 0, maximum 1, default 0

  • Default Full-Text Language (A): minimum 0, maximum 2147483647, default 1033

  • Default Language: minimum 0, maximum 9999, default 0

  • Default Trace Enabled (A): minimum 0, maximum 1, default 1

  • Disallow Results From Triggers (A): minimum 0, maximum 1, default 0

  • Fill Factor (A, RR): minimum 0, maximum 100, default 0

  • FT Crawl Bandwidth (max) (A): minimum 0, maximum 32767, default 100

  • FT Crawl Bandwidth (min) (A): minimum 0, maximum 32767, default 0

  • FT Notify Bandwidth (max) (A): minimum 0, maximum 32767, default 100

  • FT Notify Bandwidth (min) (A): minimum 0, maximum 32767, default 0

  • Index Create Memory (A, SC): minimum 704, maximum 2147483647, default 0

  • In-Doubt Xact Resolution (A): minimum 0, maximum 2, default 0

  • Lightweight Pooling (A, RR): minimum 0, maximum 1, default 0

  • Locks (A, RR, SC): minimum 5000, maximum 2147483647, default 0

  • Max Degree Of Parallelism (A): minimum 0, maximum 64, default 0

  • Max Full-Text Crawl Range (A): minimum 0, maximum 256, default 4

  • Max Server Memory (A, SC): minimum 16, maximum 2147483647, default 2147483647

  • Max Text Repl Size: minimum 0, maximum 2147483647, default 65536

  • Max Worker Threads (A, RR): minimum 128, maximum 32767 (1024 is the maximum recommended for 32-bit operating systems), default 0 (0 auto-configures the number of worker threads based on the number of processors, using the formula 256 + (number of processors - 4) * 8 for 32-bit systems)

  • Media Retention (A, RR): minimum 0, maximum 365, default 0

  • Min Memory Per Query (A): minimum 512, maximum 2147483647, default 1024

  • Min Server Memory (A, SC): minimum 0, maximum 2147483647, default 8

  • Nested Triggers: minimum 0, maximum 1, default 1

  • Network Packet Size (A): minimum 512, maximum 32767, default 4096

  • Ole Automation Procedures (A): minimum 0, maximum 1, default 0

  • Open Objects (A, RR, obsolete): minimum 0, maximum 2147483647, default 0

  • PH_Timeout (A): minimum 1, maximum 3600, default 60

  • Precompute Rank (A): minimum 0, maximum 1, default 0

  • Priority Boost (A, RR): minimum 0, maximum 1, default 0

  • Query Governor Cost Limit (A): minimum 0, maximum 2147483647, default 0

  • Query Wait (A): minimum -1, maximum 2147483647, default -1

  • Recovery Interval (A, SC): minimum 0, maximum 32767, default 0

  • Remote Access (RR): minimum 0, maximum 1, default 1

  • Remote Admin Connections: minimum 0, maximum 1, default 0

  • Remote Login Timeout: minimum 0, maximum 2147483647, default 20

  • Remote Proc Trans: minimum 0, maximum 1, default 0

  • Remote Query Timeout: minimum 0, maximum 2147483647, default 600

  • Scan For Startup Procs (A, RR): minimum 0, maximum 1, default 0

  • Server Trigger Recursion: minimum 0, maximum 1, default 1

  • Set Working Set Size (A, RR, obsolete): minimum 0, maximum 1, default 0

  • Show Advanced Options: minimum 0, maximum 1, default 0

  • SMO and DMO XPs (A): minimum 0, maximum 1, default 1

  • SQL Mail XPs (A): minimum 0, maximum 1, default 0

  • Transform Noise Words (A): minimum 0, maximum 1, default 0

  • Two Digit Year Cutoff (A): minimum 1753, maximum 9999, default 2049

  • User Connections (A, RR, SC): minimum 0, maximum 32767, default 0

  • User Instance Timeout (A, appears only in SQL Server 2005 Express Edition): minimum 5, maximum 65535, default 10

  • User Instances Enabled (A, appears only in SQL Server 2005 Express Edition): minimum 0, maximum 1, default 0

  • User Options: minimum 0, maximum 32767, default 0

  • Web Assistant Procedures (A): minimum 0, maximum 1, default 0

  • xp_cmdshell (A): minimum 0, maximum 1, default 0


You can determine the specific value for each configuration option by using the following statement.

SELECT * FROM sys.configurations ORDER BY name ;


The Default Trace

One final option that doesn't seem to fit into any of the other categories is a new option in SQL Server 2005 called Default Trace Enabled. I mention it because the default value is 1, which means that as soon as SQL Server starts, it runs a server-side trace, capturing a predetermined set of information into a predetermined location. None of the properties of this default trace can be changed; the only thing you can do is turn it off.

You can compare the default trace to the blackbox trace in SQL Server 2000, but the blackbox trace takes a few steps to create, and it takes even more steps to have it start automatically when your SQL Server starts. This trace is so lightweight that you might find little reason to ever disable it. If you're not familiar with SQL Server tracing, you'll probably need to spend some time reading about traces; the GUI for creating them is called the SQL Server Profiler.

The default trace output file is stored in the same directory in which you installed your SQL Server, in the \log subdirectory. So if you've installed your SQL Server in the default location, the captured trace information will be in the file C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc. Every time you stop and restart SQL Server, a new trace file is created with a sequential numerical suffix, so the second trace file would be Log_01.trc, followed by Log_02.trc, and so on. If all the trace log files are removed or renamed, the next log file starts at Log.trc again.

You can open the trace files created through the default trace mechanism by using the SQL Server Profiler, just as you can any other trace file, or you can copy the data to a table by using the system function fn_trace_gettable and view the current contents of the trace while the trace is still running. As with any server-side trace that writes to a file, the writing is done in 128-KB blocks. Thus, on a very low-use SQL Server instance, it might look like nothing is being written to the file for quite some time. You need 128 KB of data for any writes to the physical file to occur. In addition, when the SQL Server service is stopped, whatever events have accumulated for this trace will be written out to the file.
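
For example, the following sketch locates the file the default trace is currently writing to by querying the sys.traces catalog view and then reads it with fn_trace_gettable; the path shown is simply the default installation location mentioned above.

-- Find the file the default trace is currently writing to
SELECT id, path, max_size, max_files
FROM sys.traces
WHERE is_default = 1;

-- Read the trace file into a result set (substitute the path returned above)
SELECT *
FROM fn_trace_gettable
    ('C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc', default)
ORDER BY StartTime;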

Unlike the blackbox trace in SQL Server 2000, which captures every single batch completely and can quickly get huge, the default trace in SQL Server 2005 captures only a small set of events that were deemed likely to cause stability problems or performance degradation of SQL Server. The events captured are the following. (Please see the documentation on monitoring and tuning SQL Server for performance for a description of what each event captures.)

  • ServiceControl

  • Login

  • Object

  • Hash

  • Sort Warnings

  • Missing Column Statistics

  • Missing Join Predicate

  • Server Memory

  • Data File Auto Grow

  • Log File Auto Grow

  • Data File Auto Shrink

  • Log File Auto Shrink

  • Audit Statement GDR

  • Audit Object GDR

  • Audit Add/Drop Login

  • Audit Login GDR

  • Audit Login Change Property

  • Audit Add Login to Server Role

  • Audit Add DB User

  • Audit Add Member to DB

  • Audit Add/Drop Role

  • Audit Backup/Restore

  • Audit DBCC

Not only can you not change anything about the files saved or their location, but you also can't add or remove events, the data captured along with the events, or the filters that might be applied to the events. If you want something slightly different from the default trace, you can disable the predefined trace and create your own with whatever events, data, and filters you choose. Of course, you must then make sure the trace starts automatically. This is not impossible to do, but I suggest that you leave the default trace on, in addition to whatever other traces you need, so that you know that at least some information about the activities taking place on SQL Server is being captured.


