Monitoring System Behavior

Throughout this book, we've looked at statements that are useful for monitoring some aspect of SQL Server's performance. These include the system procedures sp_who2 and sp_lock, the SET SHOWPLAN_TEXT ON statement, DBCC SQLPERF, and various trace flags for analyzing deadlock issues. SQL Server Enterprise Manager provides a graphical display that combines sp_who2 and sp_lock, and you should make use of it. In addition, two other tools let you monitor the behavior of your SQL Server system: SQL Server Profiler and Performance Monitor.

SQL Server Profiler

SQL Server Profiler is new with SQL Server 7. It looks similar to the SQL Trace tool in version 6.5, but it has far greater capabilities. SQL Trace was basically an "ODS Sniffer," which itself was based on an earlier freeware product called SQLEye. SQLEye was essentially a filter that listened on the network using the ODS API for all the batches that were sent to a specified SQL Server. When SQLEye received an incoming batch, it could apply one of several filters to it and record in a file all batches that satisfied the filters' conditions. In version 6.5, the same developer who wrote SQLEye put a GUI wrapper around its configuration and specification commands and created the SQL Trace tool.

Because SQL Trace worked only at the ODS layer, it could capture only the commands that were actually being sent to SQL Server. Once SQL Server received the command, SQL Trace had no further knowledge of what SQL Server did with it. For example, if a client connection sent a batch to SQL Server that consisted of only a stored procedure call, that is all SQL Trace could capture and record. It could not record all the statements that were executed within the stored procedure, and it had no knowledge of other procedures that were called by the original stored procedure.

SQL Server Profiler belongs to a new generation of profiling tools. Not only can it keep track of every statement executed and every statement within every stored procedure, it can keep track of every time a table is accessed, every time a lock is acquired, and every time an error occurs. In fact, it can capture 68 predefined events, and you can configure 5 additional events of your own.

SQL Server Profiler relies on the concept of events. Event producers write to queues maintained by SQL Server. Event consumers read events off of the queues. Figure 15-4 shows the general relationship between producers and consumers. (Event producers are server components such as the lock manager, Buffer Manager, and ODS, which generate the events recorded in the queue.)

SQL Server Profiler is more than just the user interface that is available through the SQL Server Profiler icon. You use the interface for defining traces from a client machine, watching the events recorded by those client-side traces, and replaying traces. SQL Server Profiler can also define server-side traces that run invisibly, using extended stored procedures. One of the advantages of server-side traces is that they can be configured to start automatically. We'll see more on server-side traces later.

Defining a Trace

We'll look at how to define a trace by using the SQL Server Profiler interface, but the details regarding the types of events, data elements available, and filters are equally applicable to server-side traces defined using the extended stored procedures. To start a new trace, choose New from the SQL Server Profiler File menu. You'll see a dialog box like the one shown in Figure 15-5, with four tabs for defining the trace properties.


Figure 15-4. The SQL Server Profiler architecture.

Figure 15-5. The General tab of the Trace Properties dialog box.

You can define a trace as either shared or private. A private trace is available only to the operating system user (as opposed to the SQL Server user) who created it. Private and shared traces defined in the SQL Server Profiler interface are available only on the client machine on which they were defined, but you can export the definition to another client machine. Import and export options are available from the File menu. You can specify where you want the captured information to be saved. The captured events are always displayed in the SQL Server Profiler user interface, and when you define client-side traces, you cannot turn this off. In addition, you can save the events to either a file or a table within SQL Server. If you save to a table, the table is created automatically, with the appropriate columns defined to hold the data values that are being collected.

On the Events tab of the Trace Properties dialog box, you can select from the 68 available events, which are grouped into 12 categories. You can select an entire category or select events from within a category. The online documentation gives a complete description of each event.

On the Data Columns tab, you can specify which data items you want to record for each event. For example, you might want to keep track of only the SQL Server user, the application being used, and the text of the SQL statement being executed. Or you might want to record performance information such as the duration of the event, the table or index accessed, the severity of an error, and the start and end time of the event. Three data elements (Binary Data, Integer Data, and Event Sub Class) have different meanings depending on what event is being captured. For example, if you are capturing the Execution Plan event, Binary Data reflects the estimated cost of the query and Integer Data reflects the estimate of the number of rows to be returned. If you're capturing the Server Memory event, Event Sub Class data can be 1, which means that the server memory was increased, or it can be 2, which means it was decreased. The Integer Data column reflects the new memory size. Not all events have meaningful values for every possible data element. Again, full details are available in the online documentation.

NOTE
By default, you do not see all 68 events and 13 data elements listed in the Trace Properties dialog box. You see only a subset of the events and data elements. To see the complete list, you must close the dialog box and choose Options from the Tools menu. Select the All Event Classes and All Data Columns option buttons.

On the Filters tab of the Trace Properties dialog box, you can filter out events that aren't of interest. Three types of filters are available: name filters, range filters, and ID filters. Each possible data element can be filtered in only one of the three ways.

You use a name filter to include or exclude specific values. Typically, values entered in the Include box override values entered in the Exclude box. For example, if you enter MS SQL Query Analyzer in the Include box, you capture only events that originate through the Query Analyzer. All other events are ignored, regardless of what you entered in the Exclude box. The exception is when you use wildcards. You can enter %SQL Server% in the Include box to include all applications whose name includes that string, and then you can exclude specific applications that include that string, such as SQL Server Profiler. This is shown in Figure 15-6. By default, all traces exclude events originating from SQL Server Profiler, because typically you don't want to trace the events that are involved in the actual tracing.

Figure 15-6. The Filters tab of the Trace Properties dialog box.

A range filter takes a low value and a high value. Only events for which the filtered data element falls within the specified range are recorded. Probably the most common filter is for duration. If you are tracing your long-running queries, you are probably not interested in any queries that take less than a second to execute. You can enter 1000 (milliseconds) as the minimum duration and leave the maximum unspecified.

An ID filter lets you indicate specific values that you are interested in. For example, if you want to record only events that occur in a particular database, you can select one specific value for the Database ID filter. In the first release of SQL Server 7, you can choose only one specific value. If you want to trace two different databases, you must set up two separate traces. One of the data elements that allows an ID filter is the Object ID that an event is affecting (that is, the table being updated). If you want to record Object ID information but are not interested in when system tables are accessed, you can select a special check box for that purpose. As Figure 15-7 shows, you can either record events dealing with one specific Object ID or you can select the check box to indicate that you want all objects except the system tables.

Figure 15-7. Creating an Object ID filter.

The SQL Server online documentation provides full details of all the filters and which data elements allow which of the three types of filters.

Information gathered by SQL Server Profiler can be extremely useful for tracking all kinds of performance problems. By saving the captured events to a table, you can analyze usage patterns and query behavior using Transact-SQL stored procedures. By saving your captured events to a file, you can copy that file to another machine or e-mail it to a central location (or even to your support provider).
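For example, if you have saved a trace to a table, a short Transact-SQL query can surface the longest-running statements. This is a minimal sketch only; the table name my_trace is hypothetical, and it assumes the Duration, ApplicationName, and TextData data columns were among those you chose to capture:

 -- Find the ten longest-running statements captured in the trace.
 -- my_trace is a hypothetical table name; Duration (in milliseconds),
 -- ApplicationName, and TextData are assumed captured data columns.
 SELECT TOP 10 Duration, ApplicationName, TextData
 FROM my_trace
 WHERE Duration IS NOT NULL
 ORDER BY Duration DESC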

Gathering Diagnostic Information

A lot of the information that SQL Server Profiler gathers can help your support provider troubleshoot system problems. However, this information is usually not enough by itself. Support providers ask a standard set of questions when you call. A special utility in SQL Server 7 helps you gather this information with a single command: sqldiag. It can work with SQL Server Profiler if you have enabled automatic query recording, as shown here:

 EXEC xp_trace_setqueryhistory 1 

This procedure tells SQL Server to automatically keep track of the most recent 100 queries in an internal ring buffer. When the 101st query is recorded, it overwrites the first, so there are never more than 100 queries in the buffer. Once enabled, it autostarts every time SQL Server starts. The only way to stop the automatic recording of queries to the ring buffer is by running the xp_trace_setqueryhistory procedure with an argument of 0.
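For example, to turn the automatic recording back off:

 EXEC xp_trace_setqueryhistory 0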

When you execute sqldiag from a command prompt while SQL Server is running, it gathers the following information:

  • Text of all error logs
  • Registry information
  • DLL version information
  • The contents of master.dbo.sysprocesses
  • Output from the following SQL Server stored procedures:
    • sp_configure
    • sp_who
    • sp_lock
    • sp_helpdb
    • xp_msver
    • sp_helpextendedproc
  • Input buffer from all active processes
  • Any available deadlock information
  • Microsoft Diagnostics Report for the server

The information is stored in a file called sqldiag.txt; you can specify a different filename on the command line for sqldiag. Executing sqldiag also places the contents of the internal ring buffer containing the most recent 100 queries in a file called sqldiag.trc. This is a normal SQL Server Profiler trace file, which you can open and examine using SQL Server Profiler.

Obviously, if SQL Server is not running, some of the information in the above list won't be available. If SQL Server is down because of a severe system error, knowing the nature of the most recent queries could be of tremendous value. If you have enabled the ring buffer by using xp_trace_setqueryhistory, that information is available. Every time SQL Server encounters a system-generated error with a severity level greater than 17, it automatically dumps the contents of the ring buffer to a trace file called blackbox.trc. If the server has crashed with a severe error, you have a record of the final moments of activity.

Note that blackbox.trc is created only if the error is system generated. Also note that SQL Server always appends to the blackbox.trc file. If you encounter a situation in which frequent failures occur, this file will continue to grow. You must periodically delete this file or remove all its contents (perhaps after copying its contents to another location) to keep it from growing indefinitely.

If you use RAISERROR to generate an error of your own, even if you request a severity level greater than 17, blackbox.trc will not be created. However, you can use the command xp_trace_flushqueryhistory at any time to dump the ring buffer to a file of your choice. A complete description of this command is in the online documentation.
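For example, a call along the following lines writes the current contents of the ring buffer to a trace file that you can open in SQL Server Profiler; the destination path is only an illustration, and you should verify the exact parameter list in the online documentation:

 -- Dump the most recent queries from the ring buffer to a trace file.
 -- The file name here is an example only.
 EXEC xp_trace_flushqueryhistory 'c:\temp\recent_queries.trc'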

You can save and restart traces that you define using SQL Server Profiler. You can also copy the definition to another machine and start it from the SQL Server Profiler user interface there. If you want to start a particular trace programmatically (or automatically) or save the captured events someplace other than a SQL Server table or trace file, you must define a server-side trace using the xp_trace_* extended stored procedures. You can include the calls to these procedures in any SQL Server stored procedure or batch.

To define a server-side trace, client programs can use the SQL Server Profiler extended stored procedure xp_trace_addnewqueue. They can use xp_trace_setqueuedestination to direct the output of the trace to one of four destinations: a file, the Windows NT application log, a table, or a forwarded server. The procedure xp_sqltrace is still available for backward compatibility, but the options for configuring traces using this procedure are much more limited, so you should use xp_trace_addnewqueue to define a trace.

Here are the steps for defining a trace from a client application:

  1. Execute xp_trace_addnewqueue with the required parameters to create a trace queue and determine the columns to record. This is run once for each trace. An integer bit mask is used to specify all the data columns to be captured.
  2. Execute xp_trace_seteventclassrequired with the required parameters to select the events to trace. This is run once for each event class for each trace. All events in the specified event class will be captured.
  3. You can optionally execute the applicable xp_trace_set* extended stored procedures to set any, none, or a combination of filters.
  4. Execute xp_trace_setqueuedestination with the required parameters to select a consumer for the trace data. This is run once for each destination for each trace.
  5. Optionally, you can execute xp_trace_savequeuedefinition to save the trace queue definition.
  6. Execute xp_trace_startconsumer with the required parameters to start the consumer, which sends the trace queue information to its destination.

Here's an example that creates a queue that writes events to a file and optionally saves the trace and marks it for autostart whenever SQL Server is restarted. (For full details on the arguments of all the extended procedure calls, see the online documentation.)

 -- Declare variables.
 declare @queue_handle int   -- queue handle to refer to this trace by
 declare @column_value int   -- data column bit mask

 -- Set the column mask for the data columns to capture.
 SET @column_value = 1 | 16 | 32 | 128 | 512 | 4096 | 8192
     --    1 = Text data
     --   16 = Connection ID
     --   32 = Windows NT username
     --  128 = Host
     --  512 = Application name
     -- 4096 = Duration
     -- 8192 = Start time

 -- Create a queue.
 exec xp_trace_addnewqueue 1000, 5, 95, 90, @column_value,
      @queue_handle output

 -- Specify the event classes to trace.
 exec xp_trace_seteventclassrequired @queue_handle, 10, 1   -- RPC:Completed
 exec xp_trace_seteventclassrequired @queue_handle, 12, 1   -- SQL:BatchCompleted
 exec xp_trace_seteventclassrequired @queue_handle, 14, 1   -- Connect
 exec xp_trace_seteventclassrequired @queue_handle, 16, 1   -- Disconnect
 exec xp_trace_seteventclassrequired @queue_handle, 17, 1   -- ExistingConnection

 -- Create a filter that omits events created by
 -- SQL Server Profiler and this script from the trace.
 EXEC xp_trace_setappfilter @queue_handle, NULL,
      'SQL Server Profiler%'
 EXEC xp_trace_settextfilter @queue_handle, NULL,
      'EXEC xp_trace%;SET ANSI%'

 -- Configure the queue to write to a file.
 exec xp_trace_setqueuedestination @queue_handle, 2, 1, NULL,
      'c:\temp\test_trace1.trc'

 -- Start the consumer that actually writes to the file.
 exec xp_trace_startconsumer @queue_handle

 -- Display the queue handle; you'll need it later to stop the queue.
 select @queue_handle

 -- Save the definition as TestTrace1.
 -- exec xp_trace_savequeuedefinition @queue_handle, "TestTrace1", 1

 -- Mark it for autostart.
 -- exec xp_trace_setqueueautostart "TestTrace1", 1
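When you no longer need the trace, you stop it by passing that queue handle to the procedure that destroys the queue. The following is a minimal sketch; it assumes xp_trace_destroyqueue is the appropriate call (check the online documentation for the exact name and arguments) and uses 1 as a stand-in for the handle value the script above displayed:

 -- Stop the trace by destroying its queue.
 -- Replace 1 with the queue handle returned by xp_trace_addnewqueue.
 EXEC xp_trace_destroyqueue 1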

You might find it useful to have certain traces running constantly, but be careful about the amount of information you capture. Trace files can grow quite large if they are left running the entire time SQL Server is running. The companion CD contains an example of a SQL batch that stops a trace if it is running and restarts it with a new filename. You can use SQL Server Agent to run the batch periodically (for example, every four hours) so that each file contains only four hours' worth of trace information.

Tuning SQL Server Profiler

SQL Server Profiler uses a "pay as you go" model. There is no overhead for events that are not captured. Most events need very few resources. SQL Server Profiler becomes expensive only if you trace all 68 event classes and capture all data from those events. Early testing shows an absolute maximum of 5 to 10 percent overhead if you capture everything. In addition, most of the performance hit is due to a longer code path; the actual resources needed to capture the event data are not particularly CPU intensive.

You can tune the behavior and the resources required by SQL Server Profiler on the General tab of the Trace Properties dialog box. To the right of the server name is an icon of a server; if you click on that icon, you see the dialog box shown in Figure 15-8, which shows four options. If you're defining a server-side trace using the extended procedure, these options are specified as parameters to the xp_trace_addnewqueue procedure.

The Number Of Rows To Buffer setting indicates the maximum number of events that can be buffered on the server queue to which the producers are writing. One consumer thread on the server consumes events to send back to whatever clients are collecting the events.

The Server Timeout setting indicates how long a producer can be backed up before it is autopaused.

Figure 15-8. Using the Source Server dialog box to set trace tuning options.

The Boost Priority setting indicates how full (by percentage) the queue can be before a boost in priority takes place. If the queue exceeds the setting, the consumer thread priority is boosted to consume events faster. Because the consumer thread can read events faster after the priority is boosted, either the number of events in the queue decreases or the queue fills more slowly.

The Reduce Priority setting is also expressed as a percentage. If the number of events drops below this level, the priority of the consuming thread will be reduced.

If the queue fills up, events back up on a producer-by-producer basis. When a producer is backed up for longer than the timeout setting, it is autopaused and an event is written to the trace so that you know that the trace was autopaused. Backed-up events and any events that occurred during the autopause are lost. When the queue opens up, producers begin producing events again. If the queue opens up before the timeout, backed-up events are written to the queue and no events are lost.

NOTE
This discussion is just the tip of the iceberg as far as SQL Server Profiler is concerned. The online documentation offers much more information, but the best way to understand SQL Server Profiler's event tracing capabilities is to start using the tool. The SQL Server Profiler user interface comes with six predefined traces, so you can run one of them while you work through examples in this book. Save the traces to files, and then reopen the files to see what kinds of replaying are possible and how you can reorganize the events to get different perspectives on your server's behavior.

Performance Monitor

You can use SQL Server Profiler to monitor the behavior of your server on a query-by-query or event-by-event basis. But if you want to monitor your server's performance as an entire system, the best tool is Performance Monitor. This tool is extensible, which allows SQL Server to export its performance statistics so that you can monitor your entire system. That's crucial because such important statistics as CPU use must be monitored for the entire system, not just for SQL Server. In fact, many of the most important counters to watch while performance tuning SQL Server don't even belong to any of the SQL Server objects.

Performance Monitor comes with Windows NT. Traditionally, in mainframe and minicomputer systems, you had to buy a separate system monitor (at considerable expense) or use a hodgepodge of utilities, some to monitor the operating system and others to monitor the database. Then, if you worked on some other system, it didn't have the same set of tools you were accustomed to using.

In the SQL Server folder, click on the Performance Monitor icon to start Performance Monitor with a saved PMC file that allows certain counters to be preloaded. There is no separate Performance Monitor for use with SQL Server; the icon in the SQL Server folder calls the Windows NT Performance Monitor with a specific set of counters. You can also manually save your own settings in a PMC file and set up a shortcut in the same way.

Performance Monitor provides a huge set of counters. Probably no one understands all of them, so don't be intimidated. Peruse all the objects and counters and note which ones have separate instances. Use the Explain button for helpful information, or see the SQL Server and Windows NT documentation.

SEE ALSO
The whitepaper "Microsoft SQL Server 7.0 Performance Tuning Guide" on the companion CD discusses many of the Performance Monitor counters and explains how to determine the ideal value for your SQL Server. Also helpful is the Windows NT Workstation Resource Kit Version 4.0 (Chapters 9 through 16). Prior to Windows NT 4, the Performance Monitoring section of the resource kit was contained in a separate volume called Optimizing Windows NT by Russ Blake.

Performance Monitor Counters

In this section, we'll look at several important counters. We'll provide a brief explanation of how they can be useful and what actions you should consider based on the information they provide. Often, of course, the appropriate action is generic: for example, if CPU usage is high, you should try to reduce it.

The methods you can use to make such adjustments are varied and vast, from redesigning your application to reworking some queries, adding indexes, or getting faster or additional processors.

Object: Processor
Counter: % Processor Time

This counter monitors systemwide CPU usage. If you use multiple processors, you can set up an instance for each processor. Each processor's CPU usage count should be similar. If not, you should examine other processes on the system that have only one thread and are executing on a given CPU. Ideally, your system shouldn't consistently run with CPU usage of 80 percent or more, although short spikes of up to 100 percent are normal, even for systems with plenty of spare CPU capacity. If your system runs consistently above 80 percent or will grow to that level soon, or if it frequently spikes above 90 percent and stays there for 10 seconds or longer, you should try to reduce CPU usage.

First, consider making your application more efficient. High CPU usage counts can result from just one or two problematic queries. The queries might get high cache-hit ratios but still require a large amount of logical I/O. Try to rework those queries or add indexes. If the CPU usage count continues to be high, you might consider getting a faster processor or adding processors to your system. If your system is running consistently with 100 percent CPU usage, look at specific processes to see which are consuming the CPUs. It's likely that the offending process is doing some polling or is stuck in a tight loop; if so, the application needs some work, such as adding a sleep.

You should also watch for excessively low CPU usage, which indicates that your system is stalled somewhere. If locking contention is occurring, your system might be running at close to 0 percent CPU usage when no productive work is happening! Very low CPU usage can be a bigger problem than very high CPU usage. If you have poor overall throughput and low CPU usage, your application has a bottleneck somewhere and you must find and clear it.

Note that the Task Manager (Ctrl-Alt-Delete) in Windows NT 4 also provides a way to monitor CPU usage. The Task Manager is even easier to use than Performance Monitor.

Object: PhysicalDisk
Counter: Disk Transfers/sec

This counter shows physical I/O rates for all activity on the machine. You can set up an instance for each physical disk in the system or watch it for the total of all disks. SQL Server does most I/O in 8-KB chunks, although read-ahead I/O is essentially done with an I/O size of 64 KB. Watch this counter to be sure that you are not maxing out the I/O capacity of your system or of a particular disk. The I/O capacity of disk drives and controllers varies considerably depending on the hardware. But today's typical SCSI hard drive can do 80 to 90 random 8-KB reads per second, assuming that the controller can drive it that hard. If you see I/O rates approaching these levels per drive, you should verify that your specific hardware can sustain more. If not, add more disks and controllers, add memory, or rework the database to try to get a higher cache-hit ratio and require less physical I/O (via better design, better indexes, possible denormalization, and so on).

To see any of the counters from the PhysicalDisk object, you must reboot your computer with the Diskperf service started. You do this from the Devices applet in the Control Panel. Find Diskperf and change its startup option to Boot; then reboot the machine. After you're done monitoring, disable Diskperf.

Object: PhysicalDisk
Counter: Current Disk Queue Length

This counter indicates the number of I/O requests that are currently outstanding for a disk. Occasional spikes are OK, especially when asynchronous I/O such as a checkpoint kicks in. But for the most part, the disks should not have a lot of queued I/O. Those operations, of course, must ultimately complete, so if more than one operation is consistently queued, the disk is probably overworked. You should either decrease physical I/O or add more I/O capacity.

Object: Memory
Counter: Pages/sec and Page Faults/sec

These counters watch the amount of paging on the system. As the system settles into a steady state, you want these values to be 0; that is, no paging going on in the system. In fact, if you allow SQL Server to automatically adjust its memory usage, it will reduce its memory resources when paging occurs. You should find that any paging that does occur is not due to SQL Server. If your system does experience regular paging, perhaps due to other applications running on the machine, you should consider adding more physical memory.

Object: Process
Counter: % Processor Time

Typically, you run this counter for the SQLServer process instance, but you might want to run it for other processes. It confirms that SQL Server (or some other process) is using a reasonable amount of CPU time. (It doesn't make much sense to spend a lot of time reducing SQL Server's CPU usage if some other process on the machine is using the larger percentage of the CPU to drive the total CPU usage near capacity.)

Object: Process
Counter: Virtual Bytes

Use this counter to see the total virtual memory being used by SQL Server, especially when a large number of threads and memory are being consumed. If this number gets too high, you might see Out Of Virtual Memory errors.

Object: Process
Counter: Private Bytes

This counter shows the current number of bytes allocated to a process that cannot be shared with other processes. It is probably the best Performance Monitor counter for viewing the approximate amount of memory committed by any threads within the sqlservr.exe process space.

Object: Process
Counter: Working Set

This counter shows the amount of memory recently used by a process. For the SQL Server process instance, this counter can actually be a valuable indicator of how much memory has been allocated within the SQL Server process space, especially for a dedicated server (because working-set trimming will likely not happen much). The value recorded by the working set should be very close to the value reported in the Task Manager, on the Processes tab, as the Mem Usage value for sqlservr.exe. Working Set is the current memory that SQL Server (and any components loaded in it) is currently accessing. It might not reflect the total amount of memory that SQL Server (and any component loaded in its process space) has allocated. Here, we use the term "allocated" to mean memory that has been committed to SQL Server. As long as no trimming has occurred, Working Set is the best counter for seeing how much memory has been allocated within the SQL Server process space.

Process: Private Bytes (described above) does not show all the memory committed, but its value can be more stable than Working Set if trimming occurs. You should consider monitoring both Private Bytes and Working Set. If you see Working Set dip below Private Bytes, the operating system must be trimming SQL Server's working set. This means other processes are competing for memory that SQL Server might need to get back, and you should evaluate which other processes are competing with SQL Server for memory resources.

Object: SQLServer: Buffer Manager
Counter: Buffer Cache Hit Ratio

There is no right value for the buffer cache-hit ratio since it is application-specific. If your system has settled into a steady state, ideally you want to achieve rates of 90 percent or higher, but this is not always possible if the I/O is random. Keep adding more physical memory as long as this value continues to rise or until you run out of money.

Object: SQLServer: Memory Manager
Counter: Total Server Memory

This counter can be useful, but it does not reflect all memory allocated within the SQL Server process space. It only reflects memory allocated in the SQL Server buffer pool. Note that the buffer pool is used much more extensively in version 7 than in previous versions (as discussed in Chapter 3). It is not just for data pages; it is also for other memory allocations within the server, including plans for stored procedures and for ad hoc queries. Certain components can get loaded into the SQL Server process space and allocate memory that is not under SQL Server's direct control. Examples of these are extended stored procedures, OLE Automation objects, and OLE DB provider DLLs. The memory space needed for these types of objects is included in SQL Server's Working Set, but not in the Total Server Memory counter.

Object: SQLServer: Cache Manager

This object contains counters for monitoring how the cache is being used for various types of objects, including ad hoc query plans, procedure plans, trigger plans, and prepared SQL plans. A separate Cache Hit Ratio counter is available for each type of plan, as is a counter showing the number of such objects and the number of pages used by the objects.

Object: SQLServer: Buffer Manager
Counter: Page Reads/sec

Watch this counter in combination with the counters from the PhysicalDisk object. If you see rates approaching the capacity of your hardware's I/O rates (use 80 to 90 I/Os per disk per second as a guide), you should reduce I/O rates by making your application more efficient (via better design, better indexes, denormalization, and so on). Or you can increase the hardware's I/O capacity. This statistic measures only read operations, not writes, and it does so only for SQL Server, so you don't see the whole picture.

Object: SQLServer: Buffer Manager
Counter: Checkpoint Writes/sec

Since the checkpoint process happens only at periodic intervals, you'll see a 0 value for this counter much of the time. During the checkpoint process, you should sustain as high an I/O rate as possible (perhaps hundreds per second) to get the checkpoint to complete as quickly as possible. If you have multiple disks and a fast controller, consider changing the max async IO option using sp_configure to try to sustain higher rates and shorter durations for the checkpoint. You can also use the recovery interval configuration option to affect the frequency of checkpointing.
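Both options are set with sp_configure; because max async IO is an advanced option, advanced options must be made visible first. The following is only a sketch: the values 64 and 5 are illustrative, not recommendations, and a new max async IO value takes effect only after SQL Server is restarted.

 -- Make advanced options, including max async IO, visible.
 EXEC sp_configure 'show advanced options', 1
 RECONFIGURE
 -- Allow more outstanding asynchronous I/Os (example value only;
 -- requires a SQL Server restart to take effect).
 EXEC sp_configure 'max async IO', 64
 -- Checkpoint often enough for roughly a five-minute recovery time
 -- (example value only).
 EXEC sp_configure 'recovery interval', 5
 RECONFIGURE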

Object: SQLServer: Buffer Manager
Counter: Page Writes/sec

This value keeps track of all physical writes done by SQL Server, for any reason. It includes checkpoint writes, lazywriter writes, and large block writes done during index creation or bulk copy operations. Separate counters are available for checkpoint writes and lazy writes; you're probably better off using these more specific counters.

Object: SQLServer: Databases
Counter: Log Flushes/sec

This value should be well below the capacity of the disk on which the transaction log resides. It is best to place the transaction log on a separate physical disk drive (or on a mirrored drive) so that the disk head is always in place for the next write, since transaction log writes are sequential. There is a separate counter for each database, so make sure you are monitoring the right database instance.

Object: SQLServer: Databases
Counter: Transactions/sec

This counter measures actual transactions: either user-defined transactions surrounded by BEGIN TRAN and COMMIT TRAN or individual data modification statements if no BEGIN TRAN has been issued. For example, if you have a batch that contains two individual INSERT statements, this counter records two transactions. The counter has a separate instance for each database, and there is no way to keep track of total transactions for all of SQL Server. Use it only as a general indication of your system's throughput. There is obviously no "correct" value; the higher the better.
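For example, against a hypothetical orders table, the first batch below adds two to the count and the second adds only one:

 -- Two standalone INSERT statements: counted as two transactions.
 INSERT orders (order_id) VALUES (1)
 INSERT orders (order_id) VALUES (2)

 -- The same work inside one user-defined transaction: counted as one.
 BEGIN TRAN
 INSERT orders (order_id) VALUES (3)
 INSERT orders (order_id) VALUES (4)
 COMMIT TRAN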

User-Defined Counters

SQL Server 7 offers 10 performance monitor counters for keeping track of any data or information that is useful to you. These counters are under the User Settable object. The only counter is called Query, and there are 10 instances to choose from. Ten stored procedures are available for specifying a value that Performance Monitor will chart for a particular instance. For example, if you want to use User Counter 1 to keep track of how many rows are in the invoices table, you can create triggers on the invoices table so that every time a row is inserted or deleted, the stored procedure sp_user_counter1 is executed. You can include the following code in the triggers:

 DECLARE @numrows int
 SELECT @numrows = count(*) FROM invoices
 EXEC sp_user_counter1 @numrows
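For example, a single trigger covering both inserts and deletes (the trigger name here is just a placeholder) keeps User Counter 1 in step with the table's row count:

 -- Hypothetical trigger name; invoices is the table from the example above.
 CREATE TRIGGER trg_invoices_usercounter ON invoices
 FOR INSERT, DELETE
 AS
     DECLARE @numrows int
     SELECT @numrows = count(*) FROM invoices
     EXEC sp_user_counter1 @numrows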

Once it is assigned a value, the user counter maintains that value until a new value is assigned. User counters are much more passive in SQL Server 7 than in earlier versions. In version 6.5, the stored procedures for defining user counters contained whatever calculations were needed to generate the counter's value, and Performance Monitor continually polled the server to determine the value for the counter. Polling meant actually executing the stored procedure for the counter. It took place at whatever polling interval Performance Monitor was configured to use. If your stored procedure was complex, it could take longer than the polling interval to generate the new value, and the cost of generating the values could be quite expensive. In SQL Server 7, the stored procedure must be called explicitly to set a counter value.

Note that the value passed to the stored procedure must be a constant, a variable, or a parameterless system function starting with @@, and its datatype must be integer.

Other Performance Monitor Counters

The counters mentioned in this chapter are just the tip of the iceberg. SQL Server has dozens of others, for monitoring almost every aspect of SQL Server behavior discussed in this book. The best way to learn about them is to experiment with Performance Monitor. Look through the list of available objects and counters, select a few for viewing, and see what results you get. You'll probably find a few that you'll want to monitor all the time, and others that you'll be interested in only occasionally. But do revisit the list of all available counters, because there are too many to remember. A month from now, you'll rediscover a counter that you completely overlooked the first time. You should also revisit the available objects and counters after every service pack upgrade, because new ones might be added as the SQL Server development team gets feedback.

Tables 15-1 through 15-5 show some of the counters you might want to experiment with; the descriptions are adapted from the online documentation.

Table 15-1. Counters for the Access Methods object.

Extents Allocated/sec: Number of extents allocated per second to database objects used for storing index or data records.
Forwarded Records/sec: Number of records per second fetched through forwarded record pointers.
Full Scans/sec: Number of unrestricted full scans per second. These can be either base-table or full-index scans.
Index Searches/sec: Number of index searches per second. These are used to start range scans and single index record fetches and to reposition an index.
Page Splits/sec: Number of page splits per second that occur as the result of overflowing index pages.
Pages Allocated/sec: Number of pages allocated per second to database objects used for storing index or data records.
Probe Scans/sec: Number of probe scans per second. These are used to find rows in an index or base table directly.
Range Scans/sec: Number of qualified range scans through indexes per second.
Skipped Ghosted Records/sec: Number of ghosted records per second skipped during scans.
Table Lock Escalations/sec: Number of times locks on a table were escalated.
Worktables Created/sec: Number of worktables created per second.

Table 15-2. Counters for the Databases object.

Active Transactions: Number of active transactions for the database.
Bulk Copy Rows/sec: Number of rows bulk copied per second.
Bulk Copy Throughput/sec: Amount (in kilobytes) of data bulk copied per second.
Data File(s) Size (KB): Cumulative size (in kilobytes) of all the data files in the database, including any automatic growth. This counter is useful for determining the correct size of tempdb, for example.
Log Cache Hit Ratio: Percentage of log cache reads satisfied from the log cache.
Log File(s) Size (KB): Cumulative size (in kilobytes) of all the transaction log files in the database.
Log Growths: Total number of times the transaction log for the database has been expanded.
Log Shrinks: Total number of times the transaction log for the database has been shrunk.
Log Truncations: Total number of times the transaction log for the database has been truncated.
Percent Log Used: Percentage of space in the log that is in use.
Shrink Data Movement Bytes/sec: Amount of data being moved per second by autoshrink operations or by DBCC SHRINKDATABASE or DBCC SHRINKFILE statements.
Transactions/sec: Number of transactions started for the database per second.

Separate instances of these counters exist for each database.

Table 15-3. Counters for the Locks object.

Average Wait Time (ms): Average amount of wait time (in milliseconds) for each lock request that resulted in a wait.
Lock Requests/sec: Number of new locks and lock conversions per second requested from the lock manager.
Lock Timeouts/sec: Number of lock requests per second that timed out, including internal requests for NOWAIT locks.
Lock Wait Time (ms): Total wait time (in milliseconds) for locks in the last second.
Lock Waits/sec: Number of lock requests per second that could not be satisfied immediately and required the caller to wait.
Number of Deadlocks/sec: Number of lock requests per second that resulted in a deadlock.

Table 15-4. Counters for the Memory Manager object.

Connection Memory (KB): Total amount of dynamic memory the server is using for maintaining connections.
Lock Memory (KB): Total amount of dynamic memory the server is using for locks.
Maximum Workspace Memory (KB): Maximum amount of memory available for executing processes such as hash, sort, bulk copy, and index creation operations.
Memory Grants Outstanding: Total number of processes that have successfully acquired a workspace memory grant.
Memory Grants Pending: Total number of processes waiting for a workspace memory grant.
Optimizer Memory (KB): Total amount of dynamic memory the server is using for query optimization.
SQL Cache Memory (KB): Total amount of dynamic memory the server is using for the dynamic SQL cache.
Target Server Memory (KB): Total amount of dynamic memory the server is willing to consume.
Total Server Memory (KB): Total amount (in kilobytes) of dynamic memory that the server is currently using.

Table 15-5. Counters for the SQL Statistics object.

Auto-Param Attempts/sec: Number of autoparameterization attempts per second. The total should be the sum of the failed, safe, and unsafe autoparameterizations. Autoparameterization occurs when SQL Server attempts to reuse a cached plan for a previously executed query that is similar to, but not exactly the same as, the current query.
Batch Requests/sec: Number of Transact-SQL command batches received per second.
Failed Auto-Params/sec: Number of failed autoparameterization attempts per second. This should be small.
Safe Auto-Params/sec: Number of safe autoparameterization attempts per second.
SQL Compilations/sec: Number of SQL compilations per second. Indicates the number of times the compile code path is entered, including compiles due to recompiles. Once SQL Server user activity is stable, this value should reach a steady state.
Unsafe Auto-Params/sec: Number of unsafe autoparameterization attempts per second. An attempt is designated as unsafe when the query has characteristics that prevent the cached plan from being shared. The fewer of these that occur, the better.

Other Performance Monitoring Considerations

Any time you monitor performance, you also slightly alter performance simply because of the overhead cost of monitoring. It can be helpful to run Performance Monitor on a separate machine from SQL Server to reduce that overhead. The SQL Server-specific counters are obtained by querying SQL Server, so they use connections to SQL Server. (Performance Monitor always uses Windows NT Authentication to connect to the SQL Server.) It is best to monitor only the counters you are interested in.


