SQL Server Configuration Settings

Configuration changes can degrade system performance just as easily as they can improve it. This is particularly true in version 7 of SQL Server, for which most performance-oriented configuration settings are fully automatic. SQL Server 7 has only 11 configuration options that are not considered "advanced," and none of these directly affects performance. To see all the configuration options, you must change the value of the Show Advanced Options setting:

 EXEC sp_configure 'show advanced options', 1
 GO
 RECONFIGURE
 GO

You should change configuration options only when you have a clear reason for doing so, and you should closely monitor the effects of each change to determine whether the change improved or degraded performance. Always make and monitor changes one at a time.

Serverwide Options

The serverwide options discussed here are set via the sp_configure system stored procedure. Many of them can also be set from SQL Server Enterprise Manager, but there is no single dialog box from which all configuration settings can be seen or changed. Most of the options that can be changed from the SQL Server Enterprise Manager user interface are controlled from one of the Properties tabs that you reach by right-clicking on your server. If you use the sp_configure stored procedure, no changes take effect until the RECONFIGURE command (or RECONFIGURE WITH OVERRIDE, in some cases) is issued. Some changes (dynamic changes) take effect immediately upon reconfiguration, but others do not take effect until the server is restarted. If an option's run_value and config_value as displayed by sp_configure are different, the server must be restarted for the config_value to take effect.
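
For example, running sp_configure with no parameters lists every option along with its config_value and run_value, which is a quick way to spot settings that are still waiting for a restart:

 -- List all options with their config_value and run_value columns
 EXEC sp_configure
 GO
 -- Or display a single option by name
 EXEC sp_configure 'user connections'
 GO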

We won't look at every configuration option here, only those that relate directly to performance. In most cases, we'll discuss options that you should not change. Some of these are resource settings that relate to performance only in that they consume memory (for example, open objects). But if they are configured too high, they can rob a system of memory and degrade performance. We'll group the sp_configure settings by functionality. Keep in mind that SQL Server sets almost all of these options automatically, and your applications will work well without you ever looking at these options.

Memory Options

In previous versions of SQL Server, you had to manually configure memory options; this is not the case in SQL Server 7. In Chapter 3, we saw how SQL Server uses memory, including how memory is allocated for different uses within SQL Server and when data is read from or written to disk. But we did not discuss how much memory SQL Server actually uses for these various purposes or how to monitor the amount of the system's memory resources used by SQL Server.

min server memory and max server memory By default, SQL Server automatically adjusts the total amount of the memory resources it will use. However, you can use the min server memory and max server memory configuration options to take manual control. The default setting for min server memory is 0 (MB), and the default setting for max server memory is 2147483647. If you use the sp_configure stored procedure to change both of these options to the same value, you basically take full control and tell SQL Server to use a fixed memory size. This mimics the memory configuration behavior in earlier versions of SQL Server. You can also use SQL Server Enterprise Manager to set a minimum and maximum value for memory, using the Memory tab of the SQL Server Properties dialog box (shown in Figure 15-2). Right-click on the name of your server, and choose Properties to bring up the dialog box.
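
As a sketch, taking manual control by fixing memory at 256 MB (an illustrative value, not a recommendation) looks like this:

 EXEC sp_configure 'min server memory', 256
 EXEC sp_configure 'max server memory', 256
 GO
 RECONFIGURE
 GO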

Figure 15-2. The Memory tab of the SQL Server Properties dialog box.

You can set the minimum and maximum memory using slider bars. Note that the maximum allowed is slightly less than the amount of physical memory in the machine. The absolute maximum of 2147483647 MB previously mentioned is actually the largest value that can be stored in the integer field of the sysconfigures table. It is not related to the actual resources of your system. You can select a separate option button to indicate that you want to use a fixed size. Setting the minimum and maximum sliders to the same value has the same effect.

set working set size The dialog box also includes a check box labeled Reserve Physical Memory For SQL Server. Selecting this is equivalent to setting the configuration option set working set size to 1, which reserves physical memory space for SQL Server that is equal to the server memory setting. Do this only if you don't want SQL Server to dynamically adjust its memory usage. Setting set working set size means that the operating system does not swap out SQL Server pages even if they can be used more readily by another process when SQL Server is idle. If the Windows NT virtual memory manager must page, it must do so from other processes. This setting defaults to 0, which allows the Windows NT virtual memory manager to determine the working set size of SQL Server.
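
If you do decide to reserve physical memory this way, the equivalent sp_configure call is shown below; as a sketch, it assumes you have also fixed min server memory and max server memory, and the setting takes effect only after the server is restarted:

 EXEC sp_configure 'set working set size', 1
 GO
 RECONFIGURE
 GO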

As we've mentioned, you should generally let SQL Server take full control over the memory values, but if your server might be inactive for long periods, you can manually set min server memory so that when someone finally submits a query, memory will be available and the query can be processed quickly. If you have other critical applications on your server that might have inactive periods, you can manually set max server memory so that SQL Server will not use too much system memory and so that when the other application becomes active again, memory will be available for it to use.

When SQL Server uses memory dynamically, the lazywriter queries the system periodically to determine the amount of free physical memory available. The lazywriter expands or shrinks the buffer cache to keep the operating system's free physical memory at 5 MB plus or minus 200 KB to prevent paging. If less than 5 MB is free, the lazywriter releases memory to the operating system instead of adding it to the free list. If more than 5 MB of physical memory is free, the lazywriter recommits memory to the buffer cache. The lazywriter recommits memory to the buffer cache only when it repopulates the free list; a server at rest does not grow its buffer cache.

SQL Server also releases memory to the operating system if it detects that too much paging is taking place. You can tell when SQL Server increases or decreases its total memory use by using SQL Server Profiler to monitor the Server Memory Change event (in the Misc. category). An event is generated whenever SQL Server's memory increases or decreases by 1 MB or 5 percent of the maximum server memory, whichever is greater. You can look at the value of the data element called Event Sub Class to see whether the change was an increase or a decrease. An Event Sub Class value of 1 means a memory increase; a value of 2 means a memory decrease. We'll discuss SQL Server Profiler in more detail later in this chapter.

extended memory size A third memory option, called extended memory size, is intended for SQL Server Enterprise edition running under future versions of Windows NT, which will support a full 64-bit address space. The extended memory size option refers to Enterprise Memory Architecture (EMA), which allows memory beyond the 3-GB range (or 2-GB range on Alpha processors) to be treated as a second-level cache. However, with certain hardware, you can use this feature with Windows NT 4. Pentium II Xeon chips ship with a special driver called PSE-36 (which was codeveloped by Microsoft and Intel) that allows up to 64 GB of memory to be used. Of course, you must have a machine that can support that much memory, but this feature will let you take advantage of memory well beyond the current limits. Although the extended memory size option is visible in all editions of SQL Server 7, its setting is ignored unless you are running the Enterprise edition.

user connections SQL Server 7 dynamically adjusts the number of simultaneous connections to the server if the user connections configuration setting is left at its default of 0. Even if you set this value to a different number, SQL Server does not actually allocate the full amount of memory needed for each user connection until a user actually connects. When SQL Server starts up, it allocates an array of pointers with as many entries as the configured value for user connections. Each entry uses less than 1000 bytes of memory. When a user connects to SQL Server, it is allocated 24 KB for three memory objects of one page (8 KB) each. These memory objects keep track of things such as the Process Status Structure (PSS) for each connection and the context area needed by the User Mode Scheduler (UMS) to keep track of the user's context as it moves from the wait queue to a CPU and back again.

You should always let SQL Server dynamically adjust the value of the user connections option. If you configure the value too low, future connections will be denied access to SQL Server. Even if you are trying to limit actual simultaneous users, this can backfire because connections are not the same as users. One user can open multiple connections through the same application. If you're using a query tool such as the SQL Server Query Analyzer, each time you click the New Query button you start a new connection to the server that counts against the total.
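
If you suspect connections are piling up, a quick hedged check is to compare the current connection count with the maximum the server will currently allow:

 -- @@MAX_CONNECTIONS reflects the maximum number of simultaneous
 -- connections allowed under the current configuration
 SELECT @@MAX_CONNECTIONS AS max_allowed,
        (SELECT COUNT(*) FROM master..sysprocesses) AS current_connections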

locks The locks configuration option sets the number of available locks (of all types). The default is 0, which means that SQL Server adjusts this value dynamically. It starts by allocating 2 percent of the memory allotted to SQL Server for an initial pool of lock structures. When the pool of locks is exhausted, it allocates additional locks. The dynamic lock pool does not allocate more than 40 percent of the memory allocated to SQL Server. If too much memory is being used for locks, SQL Server escalates row locks into table locks wherever possible. You should leave this value at its default. If you manually supply a value for the maximum number of locks, once that number of locks is reached, any query that requests additional lock resources will receive an error message and the batch will terminate.

Scheduling Options

SQL Server 7 has a completely new algorithm for scheduling user processes, which uses special UMS threads (as described in Chapter 3). There is one UMS thread per processor, and the UMS makes sure that only one process can run on a scheduler at any given time. The UMS manages assignment of user connections to UMS threads to keep the number of users per CPU as balanced as possible. Four configuration options affect the behavior of the scheduler: lightweight pooling , affinity mask , priority boost , and max worker threads.

lightweight pooling By default, SQL Server operates in thread mode, which means that the UMS schedules Windows NT threads. However, SQL Server also lets user connections run in fiber mode; fibers are less expensive to manage than threads. The lightweight pooling option has a value of 0 or 1; 1 means that SQL Server should run in fiber mode. Using fibers can yield a minor performance advantage, perhaps as much as a 5 percent increase in throughput when all of the available CPUs are operating at 100 percent. However, the tradeoff is that certain operations, such as running queries on linked servers or executing extended stored procedures, must run in thread mode. The cost of switching from fiber to thread mode for those connections can be noticeable, and in some cases can offset any benefit of operating in fiber mode.

If you are running in an environment with multiple CPUs, all of which are operating at 100 percent capacity, and if Performance Monitor shows a lot of context switching, setting lightweight pooling to 1 might have some performance benefit. With a system using more than four processors, the performance improvements should be even greater.

affinity mask You can use the affinity mask setting to bind all the threads (or fibers) handled by one UMS thread to a certain processor. You should avoid doing this because this setting prevents Windows NT from using whatever processor is currently most available; each UMS must then schedule its threads on the same processor.

You can also use this setting to limit the number of processors that SQL Server can use. Previous versions of SQL Server had a configuration option called SMP concurrency that specified a maximum number of processors that SQL Server would use. That option is no longer available, but if you want to test the benefit of limiting the number of processors SQL Server uses, you can use affinity mask to disable the use of some of the processors. For example, on an eight-processor system, setting affinity mask to 63 decimal or 00111111 binary means that SQL Server can use only processors 0 through 5. Two processors are reserved for other, non-SQL Server activity.
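
As a sketch, that eight-processor example would be configured like this (affinity mask is an advanced option, so show advanced options must be enabled, as described earlier):

 -- 63 decimal = 00111111 binary: SQL Server uses processors 0 through 5
 EXEC sp_configure 'affinity mask', 63
 GO
 RECONFIGURE
 GO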

The affinity mask option was added in version 6.5 at the request of some major customers who ran SMP hardware with more than four processors. These sites were accustomed to similar options on UNIX or mainframe systems. Most sites do not need this option. To learn more about this setting, see the SQL Server documentation on the companion CD.

priority boost If the priority boost setting is enabled, SQL Server runs at a higher Windows NT scheduling priority. The default is 0, which means that SQL Server runs at normal priority whether you are running it on a single-processor machine or on an SMP machine. Enabling the priority boost option will allow the SQL Server process to run at high priority. There are probably few sites or applications for which setting this option will make much difference, so we recommend leaving it alone. But if your machine is totally dedicated to running SQL Server, you might want to enable this option (set it to 1) to see for yourself. It can potentially offer a performance advantage on a heavily loaded, dedicated system. Contrary to some folklore, this setting does not make SQL Server run at the highest Windows NT priority.

max worker threads SQL Server uses the Windows NT thread services by keeping a pool of worker threads (or fibers) that take requests off the queue. It attempts to evenly divide the worker threads among the UMS schedulers so the number of threads available to each UMS is the setting of max worker threads divided by the number of CPUs. With 100 or fewer users, there are usually as many worker threads as active users (not just connected users who are idle). With more users, it often makes sense to have fewer worker threads than active users. Although some user requests have to wait for a worker thread to become available, total throughput increases because less context switching occurs.

The max worker threads setting is somewhat autoconfigured. The default is 255, which does not mean that 255 worker threads are in the pool. It means that if a connection is waiting to be serviced and no thread is available, a new thread is created if the thread total is currently below 255. If this setting is configured to 255 but the highest number of simultaneously executing commands is, say, 125, the actual number of worker threads will not exceed 125. It might be less than that because SQL Server destroys and trims away worker threads that are no longer being used. You should probably leave this setting alone if your system is handling 100 or fewer simultaneous connections. In that case, the worker thread pool will not be greater than 100.

Even systems that handle 4000 or more connected users run fine with the default setting of 255. When thousands of users are simultaneously connected, the actual worker thread pool is usually well below 255 since from the back-end database perspective most connections are idle, even though the user might be doing plenty of work on the front end.

Disk I/O Options

No options are available for controlling SQL Server's disk read behavior. All the tuning options to control read-ahead in previous versions of SQL Server are now handled completely internally. Two options are available to control disk write behavior. One controls the number of simultaneous write requests that can be issued, and the other controls the frequency with which the checkpoint process writes to disk.

max async IO The max async IO option controls how many outstanding I/O operations SQL Server can have at a time. This number is specific to the speed of the I/O system of the hardware. The default setting of 32 is reasonable for most systems, but it is too low for systems with good I/O subsystems and high OLTP transaction rates. As usual, you shouldn't change the default capriciously, but this setting probably warrants changing for such systems.

This setting governs the checkpoint, lazywriter, and recovery processes, since only during these operations do multiple outstanding I/O operations come into play. During the checkpoint process, large numbers of pages might need to be flushed. Recall that asynchronous I/O is never used for writing to the log because write-ahead logging demands synchronous I/O to ensure recoverability. If you have a fast RAID system and your checkpoint process flushes many pages (that is, if you have a high OLTP-type application), you can change this value and then measure the difference by monitoring the throughput of your application or benchmark test. (The SQL Server Benchmark Kit on the companion CD is a good proxy test for this setting, since the setting is more a function of what your hardware can handle than of your specific application.) You can gauge your system's effectiveness with asynchronous I/O during the checkpoint process by watching the Checkpoint Writes/sec counter of the SQLServer: Buffer Manager object.

On systems with a fast I/O subsystem (multiple disks in a fast RAID environment), a setting of 32 might be too low to fully drive the hardware's capability, but you should change the setting only if you will empirically measure the result. If you set it too high, you might flood the system and hurt throughput. The large numbers of outstanding write operations issued by a checkpoint might result in other read activity being starved. Correctly setting this value results in throttling the checkpoint process, so you should see a less than 10 percent reduction in throughput even on a heavily loaded OLTP system while the checkpoint process is active. The whitepaper on the companion CD titled "Microsoft SQL Server 7.0 Performance Tuning Guide" includes the following advice:

A rule of thumb for setting max async I/O for SQL Servers running on larger disk subsystems is to multiply the number of physical drives available to do simultaneous I/O by 2 or 3. Then watch Performance Monitor for signs of disk activity or queuing issues. The negative impact of setting this configuration option too high is that it may cause checkpoint to monopolize disk subsystem bandwidth that is required by other SQL Server I/O operations, such as reads.
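
To make the arithmetic concrete: on a hypothetical array of 20 physical drives, that rule of thumb suggests a value between 40 and 60. The change itself is a routine sp_configure call; if run_value and config_value still differ afterward, the server must be restarted:

 -- Hypothetical 20-drive array: 20 drives x 2 or 3 = 40 to 60
 EXEC sp_configure 'max async IO', 40
 GO
 RECONFIGURE
 GO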

recovery interval The recovery interval option can be automatically configured. SQL Server setup sets it to 0, which means autoconfiguration. In the current version, this means a recovery time of less than one minute, but that might change in later releases or service packs. This option lets the DBA control the checkpoint frequency by specifying the maximum number of minutes that recovery should take.

In earlier versions of SQL Server, the recovery interval was set in terms of minutes; the default was 5. In practice, this meant a checkpoint every 30,000 log records, which was almost always much less than every 5 minutes because the implementation was not scaled for today's faster systems. In SQL Server 7, recovery is more likely to match the recovery interval option, assuming that the system was busy before it came down. For databases with more than 20 MB of log records, the recovery interval directly corresponds to the checkpoint frequency. Thus, a recovery interval of 5 means that checkpoints occur only every 5 minutes and all dirty pages are written to disk.
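
For example, to tell SQL Server that recovery may take up to 5 minutes (and therefore that checkpoints need occur only about every 5 minutes), you could run the following:

 EXEC sp_configure 'recovery interval', 5
 GO
 RECONFIGURE
 GO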

Query Processing Options

SQL Server 7 introduces several options for controlling the resources available for processing queries. As with all the other tuning options, your best bet is to leave these options at their default values unless thorough testing indicates that a change might help.

min memory per query When a query requires additional memory resources, the number of pages it gets is determined partly by the min memory per query option. The minimum memory value is specified in kilobytes. Most people think this option is relevant only for sorts explicitly requested with an ORDER BY clause, but it also applies to internal memory needed by merge join operations and by hash join and hash grouping operations. In SQL Server 7, sort and hash operations receive memory in a much more dynamic fashion than in previous versions, so you rarely need to adjust this value. In fact, on larger machines, your sort and hash queries typically get much more than the min memory per query setting, so you don't restrict yourself unnecessarily. If you need to do a lot of hashing or sorting, however, and you have few users or a lot of available memory, you might improve performance by adjusting this value. On smaller machines, setting this value too high can cause virtual memory to page, which hurts server performance.
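
As a sketch, raising the minimum from its default of 1024 KB to 2048 KB (an illustrative value) looks like this:

 -- Value is in kilobytes
 EXEC sp_configure 'min memory per query', 2048
 GO
 RECONFIGURE
 GO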

query wait The query wait option controls how long a query that needs additional memory waits, if that memory is not available. A setting of -1 means that the query waits 25 times the estimated execution time of the query. A value of 0 or more specifies the number of seconds that a query waits. If the wait time is exceeded, SQL Server generates error 8645:

 Server: Msg 8645, Level 17, State 1, Line 1
 A time out occurred while waiting for memory resources to execute the query. Re-run the query.

Even though memory is allocated dynamically, SQL Server can still run out if the memory resources on the machine are exhausted. If your queries time out with error 8645, you can try increasing the paging file size or even add more physical memory. You can also try tuning the query by creating more useful indexes so that hash or merge operations aren't needed.

index create memory The min memory per query option applies only to sorting and hashing used during query execution; it does not apply to the sorting that takes place during index creation. Another option, index create memory, lets you allocate a specific amount of memory for index creation. Its value is also specified in kilobytes.

query governor cost limit You can use the query governor cost limit option to set a maximum length of time that a query will run. SQL Server will not execute queries that the optimizer estimates will take longer than this. The option is specified in seconds, but it might not map exactly to seconds on your machine. You should think of the value as an abstract cost of the query rather than an actual number of seconds. The cost figure was correlated to seconds on one test machine in the SQL Server development group, and no information is available on the exact specs of that machine.

You can come up with your own correlation using SQL Server Profiler. If you capture the Misc: Execution Plan events, the data element called Binary Data reflects the optimizer's estimated cost for each SQL statement. This value is compared to the query governor cost limit value. If you then also capture the Duration data for each SQL statement, you can compare the estimated cost with the actual duration. For example, if, on average, queries that show a cost (in the SQL Server Profiler trace) of 8 seconds take 10 seconds to run, you should specify a query governor cost limit of about 80 percent of what you'll really allow. If you want to block all queries that are estimated to take longer than 1 minute to run, you should set the option to 48 (80 percent of 60 seconds). We'll look at SQL Server Profiler in more detail later in this chapter.
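
Continuing that example, the serverwide setting would look like the following sketch (the value 48 assumes the hypothetical 80 percent correlation measured above):

 -- Block queries whose estimated cost exceeds 48 cost units
 -- (roughly one minute on the hypothetical machine)
 EXEC sp_configure 'query governor cost limit', 48
 GO
 RECONFIGURE
 GO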

max degree of parallelism and cost threshold for parallelism Version 7 of SQL Server lets you run certain kinds of complex queries simultaneously on two or more processors. The queries must lend themselves to being executed in sections. Here's an example:

 SELECT avg(charge_amt), category
 FROM charge
 GROUP BY category

If the charge table has 100,000 rows and there are 10 different values for category , SQL Server can split the rows into groups and have only a subset of the groups processed on each processor. For example, with a 4-CPU machine, categories 1 through 3 can be averaged on the first processor, categories 4 through 6 on the second processor, categories 7 and 8 on the third, and categories 9 and 10 on the fourth. Each processor can come up with averages for only its groups, and the separate averages are brought together for the final result.

During query optimization, SQL Server looks for queries that might benefit from parallel execution. It inserts exchange operators into the query execution plan to prepare the query for parallel execution. An exchange operator is an operator in a query execution plan that provides process management, data redistribution, and flow control. The two main types of exchange operators that you'll observe if parallel plans are generated are the distribute operator, which separates the processing into multiple streams, and the gather operator, which retrieves the separate results and combines them into a single result. Occasionally, you will also see a redistribute operator if the original streams need to be moved onto different processors at some point during the processing. Once exchange operators are inserted, the result is a parallel query execution plan. A portion of a parallel execution plan is shown in Figure 15-3.


Figure 15-3. A portion of a parallel query plan showing the gather operator.

A parallel query execution plan can use more than one thread; a serial execution plan, which is used by a nonparallel query, uses only a single thread. The actual number of threads used by a parallel query is determined at query plan execution initialization and is called the degree of parallelism . Even though a parallel plan might be developed, at execution time SQL Server might decide not to use it.

Data modification statements are always carried out on a single processor, but the search portion of update, delete, and insert statements (in which SQL Server determines which rows to modify) might still be performed in parallel. In addition, even if the query is a SELECT query, parts of it might run single-threaded within a parallel plan.

SQL Server automatically determines the best degree of parallelism for each instance of a parallel query execution by considering the following:

  • Is SQL Server running on a computer with more than one processor? (Only computers with more than one processor can take advantage of parallel queries.)
  • How many concurrent users are active on the server? SQL Server monitors its CPU use and adjusts the degree of parallelism at query startup time. It chooses lower degrees of parallelism if the CPUs are already busy. Parallel queries are valuable only if few users are on the system. Once a parallel query starts running, it consumes most of the CPU resources until it has completed.
  • Is sufficient memory available for parallel query execution? Each query requires a certain amount of memory to execute. Executing a parallel query requires more memory than a nonparallel query. The amount of memory required for executing a parallel query is equivalent to the amount of memory required for serial query execution multiplied by the degree of parallelism. If the memory requirement of the parallel plan cannot be satisfied, SQL Server automatically executes the serial plan for the corresponding query.
  • What type of query is being executed? Queries that consume a lot of CPU cycles are the best candidates for a parallel query (for example, joins of large tables, substantial aggregations, and sorting of large result sets). With simple queries, which are often found in transaction processing applications, the additional coordination required to execute a query in parallel outweighs the potential performance boost.
  • Are a sufficient number of rows processed in the given stream? If the query optimizer determines that the number of rows in a stream is too low, it does not introduce exchange operators to distribute the stream.

Once a query starts executing on multiple threads for parallel execution, the query uses the same number of threads until completion. SQL Server reexamines the optimal number of threads each time a parallel query execution plan is retrieved from the cache. One execution of a query can result in the use of a single thread, and another execution of the same query (at a different time) can result in using two or more threads.

Two configuration options are available for controlling parallel queries. The max degree of parallelism option indicates the maximum number of processors that are used to run queries in parallel. The default value of 0 means that SQL Server should use the number of available CPUs. A value of 1 means that SQL Server should suppress parallelism completely. Any other value restricts the degree of parallelism below the number of CPUs. If your computer has only one processor, the max degree of parallelism value is ignored.

The second configuration option for controlling parallelism is cost threshold for parallelism. You can specify that SQL Server should not even consider a parallel plan for a query with an estimated cost lower than this threshold. The unit of costing is the same one used by the query governor cost limit option. The default of 5 seconds is fine for most systems.
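
As a sketch, restricting parallel plans to four processors and raising the threshold to 10 cost units would look like this (both values are purely illustrative):

 EXEC sp_configure 'max degree of parallelism', 4
 EXEC sp_configure 'cost threshold for parallelism', 10
 GO
 RECONFIGURE
 GO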

Be careful when you use either of these configuration options; they have serverwide impact. You can also use trace flag 8687 to turn off parallelism for individual connections.

SQL Server Profiler lets you trace the degree of parallelism for each query captured. If you choose to monitor events in the SQL Operators category (Delete, Insert, Select, or Update), the Event Sub Class data is a number indicating the degree of parallelism for that query. The values below are possible for the Event Sub Class value for the SQL Operators events.

Value   Meaning
0       No parallelism was considered. For example, the computer executing the query contains only one processor, or the estimated cost of the query execution is lower than the cost threshold for parallelism value.
1       Parallel execution was considered, but the query was executed using a serial plan because a parallel execution plan would have required an unavailable amount of resources.
>1      A portion of the query was executed using a parallel execution plan with the degree of parallelism indicated.

Database Options

Several database options can affect performance. The read only and single user options affect performance because they eliminate locking in some cases. If your database is used only for decision support (for example, a data mart), it might make sense to set your database to read only, as follows:

 EXEC sp_dboption 'dbname', 'read only', TRUE 

In this way, no locking is performed in the database; this greatly reduces overhead. This option prevents changes from being made to the database. You can easily toggle it off to perform tasks such as bulk loading of data. For an operation in which the data is changed via large batch jobs that can run during off-hours and the normal workload is query only, read only is a good choice.
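
A typical pattern, sketched here with a placeholder database name, is to toggle the option off for the nightly batch load and back on afterward:

 EXEC sp_dboption 'dbname', 'read only', FALSE
 -- ... run bulk loads and other batch changes here ...
 EXEC sp_dboption 'dbname', 'read only', TRUE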

Even if your database is not read only, you might want to perform off-hour maintenance tasks and bulk loading operations with the single user option enabled, as shown here:

 EXEC sp_dboption 'dbname', 'single user', TRUE 

This option also eliminates the need for locking, since only one connection can use the database at a time. Of course, you need locking to make a multiple-user system behave like a single-user system. But in this case, it is a single-user system, so locking is not required. If you need to do a big bulk load and index creation at night while no one else is using the system, you can eliminate the need to take locks during the bulk load.

Two other options are particularly useful if you are running the SQL Server Desktop edition on a small machine and your resources are extremely limited: autoclose and autoshrink. The autoclose option causes the database to shut down cleanly and free resources after the last user exits. By default, it is set to TRUE for all databases when SQL Server runs on Windows 95 or Windows 98. The database reopens automatically when a user tries to use the database. The autoshrink option (discussed in Chapter 5) can be useful if your disk space resources are extremely limited, but there is a performance tradeoff. Shrinking a database is a CPU-intensive operation and takes a long time. All indexes on heaps affected by the shrink must be adjusted because the row locators change.

Finally, two database options control the automatic gathering of statistics: auto create statistics and auto update statistics . These were both discussed in Chapter 14.

Buffer Manager Options

In Chapter 3, we saw how the Buffer Manager (also known as the Cache Manager) uses a queue of "favored pages," which you can think of as a least-recently-used (LRU) algorithm, but no queue list of buffers is maintained in the order of most recent access. You can use the pintable option to directly influence the favoring behavior.

Pinning a Table in the Cache

You can permanently remove a table from the queue of pages examined by the lazywriter so that once its pages are read into the data cache, they are never forced from cache. (This favors the table permanently, although it is really more than that: its pages are entirely exempted.) You can enable this option using the sp_tableoption stored procedure with the pintable option.

This option is not appropriate for most sites. But if you get very random cache hits and you have some relatively small tables that are hot compared to other larger tables, this option might be beneficial. If those small, hot tables keep getting forced out by the random access to larger tables and you have plenty of memory, you can pin those small tables. By pinning, you override the Buffer Manager. But you take away the pages in cache used for that table, so you give the Buffer Manager less memory to work with. Pinning the table does not initially read it in; pinning just makes the table "sticky" so that once a page of the table is in the cache, it doesn't get forced out. If you want the table preloaded and sticky, you can enable the option and then do SELECT * FROM table to read all the pages into cache. (The table shouldn't be so big that you eat up more memory than your system can afford to use.)
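
A minimal sketch, assuming a small, hot lookup table named hot_lookup (a hypothetical name), looks like this:

 -- Make the table's pages "sticky" once they are read into cache
 EXEC sp_tableoption 'hot_lookup', 'pintable', 'true'
 -- Optionally preload the pages so the table starts out cached
 SELECT * FROM hot_lookup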

Monitoring Buffer Manager Performance

Two DBCC commands exist to monitor the performance of the Buffer Manager; they are known as SQLPERF(WAITSTATS) and SQLPERF(LRUSTATS). DBCC SQLPERF(WAITSTATS) not only provides an overview of the Buffer Manager, it helps identify where a transaction is delayed. It shows the total milliseconds of wait time and how much of the time is due to waiting for reads, waiting for different types of locks, waiting to add a log record to the log buffer, and waiting for log buffers to be physically written. The command DBCC SQLPERF(LRUSTATS) gives details on the lazywriter and the state of the cache, such as the cache-hit ratio, the size of the cache, the average number of pages scanned to find an available buffer, and the number of free pages. Although this information is available through the Performance Monitor, the DBCC SQLPERF command lets you capture the output to a table and save the data for analysis.

For example, to capture the DBCC SQLPERF(WAITSTATS) output, you can create a table with four columns and use the dynamic EXECUTE command to populate the table. The following script shows one way to define and populate such a table. It actually adds a fifth column to the table, which indicates the current date and time that the rows were inserted into the table.

 CREATE TABLE waitstats
 (
     wait_type      varchar(20),
     requests       numeric(10,0),
     wait_time      numeric(12,2),
     signal_wait    numeric(12,0),
     recorded_time  datetime DEFAULT getdate()
 )
 GO
 INSERT INTO waitstats (wait_type, requests, wait_time, signal_wait)
     EXEC ('DBCC SQLPERF(WAITSTATS)')
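
Once several snapshots have accumulated, a simple query (a sketch, assuming the table above) shows how much wait time of each type was added between the first and last snapshots:

 SELECT wait_type,
        MAX(wait_time) - MIN(wait_time) AS wait_time_delta
 FROM waitstats
 GROUP BY wait_type
 ORDER BY wait_time_delta DESC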

Startup Parameters on SQLSERVR.EXE

You can alter startup parameters used by the SQL Server process to tune performance by disabling the performance statistics collection or specifying trace flags.

Disabling the Performance Collection

Normally, SQL Server keeps performance statistics such as CPU usage and amount of I/O on a per-user basis. The values are materialized in the sysprocesses table (really just memory) when queried. If you never query or monitor these values, the work to keep track of them is unnecessary. You can eliminate the calls that produce these performance statistics by passing the -x startup flag to sqlservr.exe. Or you can add -x as an additional parameter from the Setup program via the Set Server Options dialog box.

Specifying Trace Flags

We've discussed the use of trace flags several times, and we've seen how to enable them using DBCC TRACEON. You can also specify a trace flag at server startup; some trace flags make the most sense if they are specified serverwide rather than on a per-user basis. For example, trace flag 1204, which records information to the error log when a deadlock occurs, doesn't make sense for a specific connection because you probably have no idea ahead of time which connections will encounter deadlocks. To enable a trace flag serverwide, you add the flag as a parameter when you start SQL Server from the command line (for example, sqlservr.exe -c -t1204 ).

To support tracing on a per-user basis, a flag in the PSS must also be set to enable any of the trace flags that are tested with the original (non-user-specific) TRACE macro. Each place in the code that tests for these trace flags first determines whether the PSS flag is set before testing the trace-flag array. Many trace flags (such as 1081) don't make sense on a per-user basis. If you specify a lowercase t, SQL Server skips the per-user test to determine whether the trace flag is enabled. Using an uppercase T sets the PSS flag and requires per-user checking. A lowercase t flag is not enabled unless at least one uppercase T is also set or until some connection executes a DBCC TRACEON with -1 (which means that it is set for all connections).


