Statistics


SQL Server 2000 still supports both the SET STATISTICS IO option and the SET STATISTICS TIME option, which display the actual logical and physical page reads incurred by a query and the CPU and elapsed time, respectively. These two SET options return actual execution statistics, as opposed to the estimates returned by the Query Analyzer and the two showplan options discussed in the previous section, which makes them invaluable for determining a query's actual impact.

The SET STATISTICS PROFILE option is also provided to display execution plan information while still allowing the query to run.

STATISTICS PROFILE

The SET STATISTICS PROFILE option returns the same textual information that is displayed with the SET SHOWPLAN_ALL statement, with the addition of two columns that display actual execution information. This is essentially the same as enabling the Show Execution Plan option in Query Analyzer, but without the graphical display.

The Rows column displays the actual number of rows returned in each execution step, and the Executes column shows the actual number of executions for the step. The Rows column can be compared to the EstimateRows column, and the Executes column to the EstimateExecutions column, to determine how accurate the optimizer's estimates in the execution plan were.

You can set the STATISTICS PROFILE option for individual user sessions. In a Query Analyzer window, type the following:

SET STATISTICS PROFILE ON
GO
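
For example, the following is a minimal sketch, reusing the join query from Listing 36.1 and assuming the same sample tables, that runs a query with the option enabled. The normal result set is followed by the plan output, in which the actual Rows and Executes values can be compared against the optimizer's estimates:

SET STATISTICS PROFILE ON
GO
-- run the query to analyze; the profile output follows its result set
select st.stor_name, ord_date, qty
   from stores st
   join sales_noclust s
     on st.stor_id = s.stor_id
   where st.stor_id between 'B100' and 'B199'
GO
SET STATISTICS PROFILE OFF
GO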

STATISTICS IO

You can set the STATISTICS IO option for individual user sessions, and you can turn it on in a Query Analyzer window by typing the following:

SET STATISTICS IO ON
GO

You can also set this option for the user session in Query Analyzer by choosing the Options item in the Tools menu, or the Current Connection Properties item in the Query menu. In the Connection Properties tab in the dialog box displayed, check the Show Statistics IO check box.

The STATISTICS IO option displays the scan count (number of iterations), the logical reads (from cached data), the physical reads (from physical storage), and the read-ahead reads.

Listing 36.2 displays the STATISTICS IO output for the same query that was executed in Listing 36.1. (Note that the result set has been deleted to save space.)

Listing 36.2 Example of the STATISTICS IO Output
set statistics io on
go
select st.stor_name, ord_date, qty
   from stores st
   join sales_noclust s
     on st.stor_id = s.stor_id
   where st.stor_id between 'B100' and 'B199'
go
-- output deleted
Table 'sales_noclust'. Scan count 100, logical reads 1279, physical reads 0,
 read-ahead reads 0.
Table 'stores'. Scan count 1, logical reads 3, physical reads 0,
 read-ahead reads 0.

Logical Reads

The logical reads value indicates the total number of page accesses necessary to process the query. Every page is read from cache memory, even if it first has to be read from disk. Every physical read will always have a corresponding logical read, so the number of physical reads will never exceed the number of logical reads. Because the same page might be accessed multiple times, the number of logical reads for a table could exceed the total number of pages in the table.

Physical Reads

The physical reads value indicates the actual number of pages that were read from disk. The value for physical reads can vary greatly and should decrease, or drop to zero, with subsequent executions of the query because the data will be loaded into the data cache by the first execution. The number of physical reads will also be lowered by pages that were brought into memory by the read-ahead mechanism.

Read-Ahead Reads

The read-ahead reads value indicates the number of pages that were read into cache memory using the read-ahead mechanism while the query was processed. Pages read by the read-ahead mechanism will not necessarily be used by the query. When a page that was read by the read-ahead mechanism is accessed by the query, it counts as a logical read, but not as a physical read.

The read-ahead mechanism can be thought of as an optimistic form of physical I/O, reading pages into cache memory that it expects the query will need before the query needs them. When scanning a table or index, the table's index allocation map (IAM) pages are examined to determine which extents belong to the object. An extent consists of eight data pages, which are read with a single read, and the extents are read in the order in which they are stored on disk. If the table is spread across multiple files, the read-ahead mechanism attempts parallel reads from up to eight files at a time, instead of reading from the files sequentially.

Scan Count

The scan count value indicates the number of times that the corresponding table was accessed during query execution. The outer table of a nested loop join will have a scan count of 1. The scan count for the inner tables will typically reflect the number of times that the inner table is searched, usually the same as the number of qualifying rows in the outer table. The number of logical reads for the inner table will be equal to the scan count times the number of pages per lookup for each scan. Note that the scan count for the inner table might sometimes be only 1 for a nested join if SQL Server copies the needed rows from the inner table into a worktable in cache memory and reads from the worktable for subsequent iterations. The scan count for hash joins and merge joins will typically be 1 for both tables involved in the join, but the logical reads for these types of joins will usually be substantially higher.
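
To put numbers to this using the output in Listing 36.2: the inner table sales_noclust shows a scan count of 100 and 1,279 logical reads, which works out to roughly 13 pages per lookup (1279 / 100 ≈ 12.8).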

Analyzing STATISTICS IO Output

The output shown in Listing 36.2 indicates that the sales_noclust table was scanned 100 times, with all reads coming from cache. (No physical I/Os were performed.) The stores table was scanned once, with all reads coming from cache as well.

You can use the STATISTICS IO option to evaluate the effectiveness of the size of the data cache, and to evaluate over time how long a table will stay in cache. The lack of physical reads is a good sign, indicating that memory is sufficient to keep the data in cache. If you keep seeing many physical reads when you are analyzing and testing your queries, you might want to consider adding more memory to the server to improve the cache hit ratio. You can estimate the cache-hit ratio for a query using the following formula:

 cache-hit ratio = (logical reads - physical reads) / logical reads
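
For example, using made-up numbers purely for illustration, a query that performs 1,000 logical reads, 50 of which also required physical reads, would have a cache-hit ratio of (1000 - 50) / 1000 = 0.95, or 95 percent.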

The number of physical reads will also appear lower if pages were preloaded by read-ahead activity. Because read-ahead reads lower the physical read count, they can give the impression of a good cache-hit ratio when the data is actually still being read from disk. The system could still benefit from more memory so that the data remains in cache and the number of read-ahead reads is reduced.

STATISTICS IO is generally more useful for evaluating individual query performance than for evaluating the overall cache-hit ratio. Which pages remain in memory for subsequent executions depends on the other queries executing at the same time and the number of data pages those queries access. If no other activity is occurring, you will likely see no physical reads for subsequent executions of the query, as long as the amount of data being accessed fits in the available cache memory. Likewise, if the same data is being accessed by multiple queries, the data will tend to stay in cache and the number of physical reads will be low. However, if other queries executing at the same time are accessing large volumes of data from different tables or ranges of values, the data needed for the query you are testing might be flushed from cache, and the physical I/Os will increase. (For more information on how the data cache in SQL Server is managed, see Chapter 33, "SQL Server Internals.") Depending on the other ongoing SQL Server activity, the physical reads displayed by STATISTICS IO can be inconsistent.

When you are evaluating individual query performance, examining the logical reads value is usually more helpful because the information is consistent across all executions, regardless of other SQL Server activity. Generally speaking, the queries with the fewest logical reads will be the fastest queries. If you want to monitor the overall cache-hit ratio for all SQL Server activity to evaluate the SQL Server memory configuration, use Performance Monitor, which is discussed in Chapter 37, "Monitoring SQL Server Performance."

STATISTICS TIME

You can set the STATISTICS TIME option for individual user sessions. In a Query Analyzer window, type the following:

  SET STATISTICS TIME ON  

You can also set this option for the user session in Query Analyzer by choosing the Options item in the Tools menu, or the Current Connection Properties item in the Query menu. In the Connection Properties tab in the dialog box displayed, check the Show Statistics Time check box.

The STATISTICS TIME option displays the total CPU and elapsed time that it takes to actually execute a query. The STATISTICS TIME output for the query in Listing 36.1 returns the output shown in Listing 36.3. (Again, the output has been deleted to save space.)

Listing 36.3 Example of the STATISTICS TIME Output
set statistics io on
set statistics time on
go
select st.stor_name, ord_date, qty
   from bigpubs2000.dbo.stores st
   join bigpubs2000.dbo.sales_noclust s
     on st.stor_id = s.stor_id
   where st.stor_id between 'B100' and 'B199'
go

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.
--output deleted
Table 'sales_noclust'. Scan count 100, logical reads 1383, physical reads 0,
 read-ahead reads 0.
Table 'stores'. Scan count 1, logical reads 3, physical reads 0,
 read-ahead reads 0.
SQL Server Execution Times:
   CPU time = 15 ms,  elapsed time = 15 ms.

Here, you can see that the total execution time, denoted by the elapsed time, was all CPU time. This is due to the lack of any physical reads and the fact that all activity is performed in memory.

NOTE

In some situations, you might notice two sets of parse and compile time output for a query. This happens when the query plan is being added to the syscacheobjects table for possible reuse. The first set reflects the actual parse and compile before the plan is placed in cache, and the second appears when SQL Server retrieves the plan from cache. Subsequent executions will still show the same two sets of output, but the parse and compile time will be 0 when the plan is reused because the query plan is not being recompiled.

If elapsed time is much higher than CPU time, the query had to wait for something, either I/O or locks. If you want to see the effect of physical versus logical I/Os on your query performance, you need to flush the pages accessed by the query from memory. You can use the DBCC DROPCLEANBUFFERS command to clear all clean buffer pages out of memory. Listing 36.4 shows an example of clearing the pages from cache and rerunning the query with the STATISTICS IO and STATISTICS TIME options enabled.

TIP

To ensure that none of the table is left in cache, make sure that all pages are marked as clean before running the DBCC DROPCLEANBUFFERS command. A buffer is dirty if it contains a data row modification that has either not been committed yet, or has not been written out to disk yet. To clear the greatest number of buffer pages from cache memory, make sure all work is committed, checkpoint the database to force all modified pages to be written out to disk, and then execute the DBCC DROPCLEANBUFFERS command.

Listing 36.4 Example of Clearing the Clean Pages from Cache to Generate Physical I/Os
USE bigpubs2000
go
CHECKPOINT
go
DBCC DROPCLEANBUFFERS
go
SET STATISTICS IO ON
SET STATISTICS TIME ON
go
select st.stor_name, ord_date, qty
   from bigpubs2000.dbo.stores st
   join bigpubs2000.dbo.sales_noclust s
     on st.stor_id = s.stor_id
   where st.stor_id between 'B100' and 'B199'
go

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.
--output deleted
Table 'sales_noclust'. Scan count 100, logical reads 1383, physical reads 16,
 read-ahead reads 3.
Table 'stores'. Scan count 1, logical reads 3, physical reads 1,
 read-ahead reads 2.
SQL Server Execution Times:
   CPU time = 15 ms,  elapsed time = 78 ms.

Notice that this time around, the CPU time was the same, but the elapsed time was 78 milliseconds (ms), nearly 5 times slower, due to the physical I/Os that had to be performed during this execution.

You can use the STATISTICS TIME and STATISTICS IO options together in this way as a useful tool for benchmarking and comparing performance.

Using datediff() to Measure Runtime

Although the STATISTICS TIME option works fine for displaying the runtime of a single query, it is less useful for displaying the total CPU time and elapsed time for a stored procedure because it generates time statistics for every command run within the procedure. This makes it difficult to read the output and determine the procedure's total elapsed time.

Another way to display the runtime for a stored procedure is to capture the time right before it starts, capture the time as it completes, and display the difference between the two. Specify an appropriately sized datepart parameter, depending on how long your procedures typically run. For example, if a procedure takes minutes to complete, you probably want to display the difference in seconds or minutes rather than milliseconds; if it completes in seconds, specify a datepart of seconds or milliseconds. Listing 36.5 displays an example of this approach.

Listing 36.5 Using datediff() to Determine Stored Procedure Runtime
declare @start datetime
select @start = getdate()
exec sp_help
select datediff(ms, @start, getdate()) as 'runtime(ms)'
go
-- output deleted
runtime(ms)
-----------
         20
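
As a variation on Listing 36.5, the following sketch reports the runtime in seconds rather than milliseconds, which is easier to read for procedures that run for minutes; the procedure name usp_long_report is purely hypothetical:

declare @start datetime
select @start = getdate()
exec usp_long_report   -- hypothetical long-running procedure
select datediff(ss, @start, getdate()) as 'runtime(sec)'
go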

NOTE

With all the fancy graphical tools available, why would you want to use the text-based analysis tools? Although the graphical tools are useful for analyzing individual queries one at a time, they can be a bit tedious if you have to perform analysis on a number of queries. As an alternative, you can put all the queries you want to analyze in a script file and set the appropriate options to get the query plan and statistics output you want to see. You can then run the script through a tool such as isql and route the output to a file. You can then quickly scan the file, or use an editor's Find utility, to look for the more obvious potential performance issues, such as table scans or long running queries. Next you can copy the problem queries you identify from the output file into Query Analyzer, where you can perform a more thorough analysis on them.
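
For example, a minimal analysis script along these lines might look like the following sketch (the query shown and the file names are just placeholders); you could then run it with a command such as isql -S yourserver -E -i analyze.sql -o analyze.out and scan the output file for problem queries:

-- analyze.sql: enable plan and statistics output that still lets the queries run
set statistics profile on
set statistics io on
set statistics time on
go
select st.stor_name, ord_date, qty
   from bigpubs2000.dbo.stores st
   join bigpubs2000.dbo.sales_noclust s
     on st.stor_id = s.stor_id
   where st.stor_id between 'B100' and 'B199'
go
-- ...additional queries to analyze...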

You could also set up a job to run this SQL script periodically to constantly capture and save performance statistics, providing you with a means to keep a history of the query performance and execution plans over time. This information can be used to compare performance differences as the data volumes and SQL Server activity levels change over time. It is impossible to save the Query Analyzer graphical plan information to a file for historical analysis.

In addition, for very complex queries, the graphical plan tends to become so large and spread out that it is difficult to read and follow. The textual output is much more compact and easier to view all at once.

It also helps to become familiar with the query plan textual output because that is what is displayed when you capture execution plan information with SQL Profiler.


