Information Gathering


The SQL Server Enterprise Manager provides a facility where an administrator can find pertinent information about what the server is currently doing. Use the Current Activity window to monitor SQL Server. This enables you to determine, at a glance, the volume and general types of activity on the system: current blocked and blocking transactions, connected users, the last statement executed, and the locks currently in effect.

The Current Activity view provides a display of process information, locks broken down by process identification, and locks broken down by object.

Processes Operating in the Environment

The process information covers all activity currently executing against the system. It also lists current connections that may not be active but are still using resources. Here are descriptions of the process information columns:

  • Process ID SQL Server Process Identifier (SPID).

  • Context ID Execution Context Identifier, used to uniquely identify the subthreads operating on behalf of the process.

  • User Identifier of the user who executed the command.

  • Database Database currently being used by the process.

  • Status Status of the process.

  • Open Transactions Number of open transactions.

  • Command Command currently being executed.

  • Application Name of the application program being used.

  • Wait Time Current wait time in milliseconds. When the process is not waiting, the wait time is zero.

  • Wait Type Name of the last or current wait type.

  • Wait Resources Textual representation of a lock resource.

  • CPU Cumulative CPU time for the process.

  • Physical IO Cumulative disk reads and writes.

  • Memory Usage Number of pages in the procedure cache that are currently allocated. A negative number indicates that the process is freeing memory allocated by another process.

  • Login Time Time at which a client process logged in to the server.

  • Last Batch Last time a client process executed a remote stored procedure call or an EXECUTE statement.

  • Host Name of the workstation.

  • Network Library Column in which the client's network library is stored. Every client process connects over a network connection, and each network connection has a network library associated with it that enables the connection.

  • Network Address Assigned unique identifier for the network interface card on each user's workstation.

  • Blocked By Process ID (SPID) of a blocking process.

  • Blocking Process IDs (SPIDs) of the processes that this process is blocking.

The process information presented can be a good starting point to see what is going on within the SQL Server environment. But you are going to have to go into a lot more depth and use many of the other tools available if you are truly going to see what is happening on the server and, perhaps of more interest, how it is happening.
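The same columns surface through T-SQL. As a sketch (one of several ways to get this view), you could query master.dbo.sysprocesses, the system table that backs the Current Activity display in SQL Server 2000, or simply run the sp_who system procedure:

```sql
-- Sketch of the Current Activity process view in T-SQL; in
-- SQL Server 2000 the display is backed by master.dbo.sysprocesses.
SELECT spid, ecid, loginame, DB_NAME(dbid) AS dbname, status,
       open_tran, cmd, program_name, waittime, lastwaittype,
       waitresource, cpu, physical_io, memusage,
       login_time, last_batch, hostname, net_library, blocked
FROM master.dbo.sysprocesses
ORDER BY spid
```

The blocked column holds the SPID of the blocking process (zero when the process is not blocked), which corresponds to the Blocked By column in the display.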

By far the two most diverse and granular tools for analysis of the database system are the Query Analyzer and the Profiler. Gathering information from the operating system tools and the server activity procedures and displays plays an important part in the initial fact-finding that must be performed to alter any of the properties of the server.

It is a good idea to not run the SQL Server 2000 Profiler on the same server you are monitoring. Running the Profiler uses considerable resources and this can noticeably affect the server's performance. Instead, run it on another server or workstation acting as a monitoring machine and have all data collected there.

The processes we are discussing are often used in an implementation as a starting point toward the following:

  • Developing system documentation

  • Establishing record-keeping procedures

  • Troubleshooting system problems

  • Optimizing the database server

  • Tracking data trends

As you begin any of these processes, begin with the hardware and operating system and then proceed into the application server. As you get further into data gathering and analysis, you should look into each database and the interactions between the data and the user applications. To view SQL Server tasks in detail, after the initial data-gathering processes, use the SQL Server Profiler to develop a more complete picture. When more granularity is desired and you want to look into the user applications, the functionality provided by the Query Analyzer can be used to obtain the most detailed and granular data.

External Tools Provided by the Operating System

Much of the performance of any database system relies on the application and database designers. The use of the network, processor, memory, and disk storage can all be dictated by the type of database design and the use of the applications operating against the database server. For this reason, the operating system usually acts as only a starting point in any performance analysis testing. If the hardware configuration is not adequate and the operating system is not properly configured, the database engine won't be able to respond optimally.

The first task that the hardware and operating system serve in any database system is to provide a means to store and operate the database software and objects. The operating system is also responsible for reporting hardware and software problems, as well as making it possible for you to monitor everything executing on the machine.

From the Event Viewer you can quickly spot problems and diagnose the series of steps that led up to any given situation. The Event Viewer has options available for you to filter events to specific categories and severity. You can also select other computers to view and find out additional information about any event.

To see only the desired events, you need to configure a view for the log with the appropriate selections. If you view the properties of any of the logs, you can find general and filter information under the corresponding tabs. In the General tab you can see and set pertinent information for the log itself; the Filter tab can be used to display only the desired errors. SQL Server reports to the Event Viewer through the application event log, and the bulk of the messages you would be interested in appear under the MSSQLSERVER source. Use this log when looking for messages sent by any of the SQL Server-related services. The application log records events that pertain to SQL Server, using any of the following sources:

  • MSSQLSERVER This is the primary source for event information and, along with SQLSERVERAGENT, will be responsible for most of your diagnostic focus. The SQL Server service manages all the database files used by an instance of SQL Server. It is the component that processes all statements sent from client applications. The service allocates resources between users and enforces business rules as defined.

  • SQLSERVERAGENT The SQL Server agent supports the nondata-specific activity in an installation. It is involved in the scheduling of activities, notification of problems, and definition of important contact personnel. This is done in the form of Jobs, Schedules, Operators, and Alerts.

  • MSSQLServerOLAPService The service used by Analysis Services for OLAP (Online Analytical Processing); it manages the processing of, and queries against, data cubes.

  • MSDTC The Microsoft Distributed Transaction Coordinator is an operating service accessed by SQL Server for coordinating transaction atomicity across multiple data resources using a two-phase commit. You can enable distributed transaction support in applications by using the appropriate distributed transaction identifiers around the block of T-SQL code to be handled by MSDTC similar to the following:

     BEGIN DISTRIBUTED TRANSACTION
        /* Commands used in the
           distributed transaction */
     COMMIT TRANSACTION

  • MSSQLServerADHelper The MSSQLServerADHelper service performs the two functions necessary for active directory integration. It adds and removes objects used in Active Directory, and it ensures that the Windows account under which a SQL Server service is running has permissions to update the Active Directory objects.

Of course, if you are running multiple versions of SQL Server on the same machines, you will see these also represented as information sources. Some of these sources do not show up on all installations. If a server is not using an aspect of SQL Server, that source does not report any information. Now that the hardware and operating system interactions have been addressed, it's time to move into the server itself to begin a more granular look.
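Alongside the application event log, SQL Server writes many of the same messages to its own error log. As a hedged example, the xp_readerrorlog extended procedure (undocumented, but shipped with SQL Server 2000) reads that log from a query window:

```sql
-- Read the current SQL Server error log from T-SQL;
-- xp_readerrorlog is undocumented but present in SQL Server 2000.
EXEC master.dbo.xp_readerrorlog
```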

Monitor Activity with the Profiler

The SQL Profiler tool is a graphical mechanism that enables you to monitor SQL Server events. The tool enables you to capture and save data about every event on the server. The data can be stored to a file or SQL Server table. Stored data can then be analyzed and events can be replayed.

The SQL Profiler should be used to monitor only the events of interest. Monitoring all events on the server produces so much data that it can become overwhelming. Large traces can be filtered so that you view only the information you want to see; you can use filters to view a subset of the event data that was captured. Monitoring too many events also adds excessive overhead to the server, which can slow the monitoring process and produce a very large trace file or table. This is particularly important when you are going to be monitoring over long periods.

SQL Profiler is a useful tool for various circumstances. Use SQL Profiler to do the following:

  • Monitor the performance of SQL Server

  • Debug T-SQL statements and stored procedures

  • Identify long-running queries

  • Step through procedures to ensure that they are working as expected

  • Capture events on a production system and replay them on a test system

  • Diagnose problem situations through the capturing of event data

  • Audit and review events

In troubleshooting the SQL Server environment, you will typically use the SQL Profiler. The tool is best used to find queries that are not performing well or ones that are executing for long periods. It is also useful in identifying the cause of data blocking and deadlock situations. In monitoring a healthy server, the SQL Profiler is generally used to monitor performance and to audit application, user, database, and job activity.

Before you start using the Profiler, you should become familiar with profile templates. A template defines the criteria for each event you want to monitor with SQL Profiler. Predefined templates can be used for individual circumstances, and you can create your own template as well, specifying which events, data columns, and filters to use. A template can be saved, and at any time a trace can be loaded and started with the template settings. To help identify long-running queries, use the Profiler's Create Trace Wizard to run the TSQL by Duration template. You can specify the length of the long-running queries you are trying to identify, and then have these recorded in a log.
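Once a duration-oriented trace has been written to a file, the fn_trace_gettable function included with SQL Server 2000 lets you pull the longest-running statements back out with an ordinary query. The file path and row count below are illustrative assumptions:

```sql
-- List the 20 longest-running statements from a saved trace file.
-- Duration is reported in milliseconds in SQL Server 2000.
SELECT TOP 20 TextData, Duration, CPU, Reads, Writes, StartTime
FROM ::fn_trace_gettable('C:\Traces\LongQueries.trc', DEFAULT)
ORDER BY Duration DESC
```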

The SQL Profiler captures data using a trace based on the selected events, data columns, and filters. The trace is the basis for all data collected and can be defined on an ad hoc basis, can be drawn from a template, or can be a combination of the two. Even though you have defined the data collected, you may still apply filters to the data after it is captured to focus on the type of information you want. For this reason you may want to save traces even after you are finished with the current activity. A past trace can possibly be used and applied to various circumstances.

At times, when monitoring with the Profiler, you will find the amount of data provided to be considerable and possibly overwhelming. It can be difficult to find what you are looking for within a trace that covers a broad range of events. A useful technique that can ease this process is to write the trace to a SQL Server table, and then query the table from within the Query Analyzer. Assuming that you know what you are looking for, this method can greatly speed up finding the data in the trace you need.
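As a sketch of that technique, suppose the trace was saved to a table named dbo.TraceData (a hypothetical name); from the Query Analyzer you could then narrow the capture with a standard query:

```sql
-- Query a trace saved to a table; 12 is the event class number
-- for SQL:BatchCompleted. Table name and threshold are assumptions.
SELECT TextData, ApplicationName, LoginName, Duration, StartTime
FROM dbo.TraceData
WHERE EventClass = 12
  AND Duration > 1000      -- batches longer than one second
ORDER BY Duration DESC
```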

The Profiler can store captured data in a text file or in a table. If you decide to store the data in a SQL Server table, don't store it in a database you are profiling or, if possible, not even on the same server, because it could affect the performance of the server you are profiling. Instead, store the trace data in a database on another server. After data is captured (which is the easy part of the process), you must sift through the data collected to draw some meaning from the results.

The Profiler can use many existing templates to gather information for various types of circumstances. You may want to select some of these templates to see the actual information that is being gathered. After it has been created, the trace is permanently stored until it is deleted. The trace can be started again by name through the Profiler interface or via stored procedure.
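Starting a trace by stored procedure uses the sp_trace_* family that ships with SQL Server 2000. A minimal sketch follows; the file path, event, and columns are chosen for illustration:

```sql
DECLARE @TraceID int, @maxsize bigint, @on bit
SET @maxsize = 50          -- roll the trace file over at 50 MB
SET @on = 1
-- Create a stopped trace that writes to a file (path is hypothetical);
-- option 2 enables file rollover
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\MyTrace', @maxsize
-- Capture TextData (column 1) and Duration (column 13)
-- for the SQL:BatchCompleted event (event 12)
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
-- Set the trace status (0 = stop, 1 = start, 2 = close and delete)
EXEC sp_trace_setstatus @TraceID, 1
```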

To use the output generated by a trace, you must first determine what type of data you are most interested in from the trace. The next section illustrates how to get the most valuable information from the trace results.

On the exam, you may be expected to select the correct event classes and objects given a specific scenario. Ensure that you are familiar with the classifications of the Profiler trace objects and under what circumstances you would select them. By viewing the objects selected for each of the default templates, you can familiarize yourself with these choices.


Defining a Profiler Trace

When using the SQL Profiler to define a trace, you use event categories to select the events to monitor. Event categories have been grouped into classes of events. The following classes and their descriptions are available to be selected:

  • Cursors Events produced by use of cursors.

  • Database Events produced when files grow or shrink automatically.

  • Errors and Warnings Events produced when an error or warning occurs.

  • Locks Events produced when a lock is acquired, or other lock activity occurs.

  • Objects Occur as objects are created, opened, closed, or deleted.

  • Performance Events produced when SQL data manipulations execute.

  • Scans Events produced when tables and indexes are scanned.

  • Security Audit Events used to audit server activity.

  • Sessions Events produced by clients connecting and disconnecting.

  • Stored Procedures Events produced by the execution of procedures.

  • Transactions Events produced by the execution of Microsoft Distributed Transaction Coordinator transactions or by writing to the transaction log.

  • TSQL Events produced by the execution of T-SQL statements.

  • User Configurable User-configurable events.

Each event class has various objects that can be monitored. To select any of the objects when defining a trace, use the Events tab of the Trace Properties dialog box. You add and remove objects, not whole classes, although a whole class of objects can be traced, if desired.

Using Profiler Traces to Diagnose Locking

SQL Profiler provides the Locks event classes to monitor locking behavior during trace operations. Several of these classes are useful in monitoring locking, blocking, and deadlocking situations on the server. The following list represents the classes and gives a short narrative description of what each class can be used for:

  • Lock:Acquired This event fires to show the acquisition of a resource lock.

  • Lock:Cancel An event is fired when a lock on a resource has been cancelled. A lock can be cancelled by SQL Server because of a deadlock or by a programmatic cancellation by a process.

  • Lock:Deadlock This event fires when two concurrent transactions deadlock, each trying to obtain a lock on a resource that the other owns.

  • Lock:Deadlock Chain The chain of events produced for each of the processes leading up to a deadlock situation.

  • Lock:Escalation An event fired when the server determines that a lock should be converted to a larger scope.

  • Lock:Released The event fires when a resource lock is released.

  • Lock:Timeout This event fires when a lock request has timed out because another process is blocking a resource with its own lock.

Lock:Acquired and Lock:Released are used to monitor the timing of lock behavior. These events indicate the type of lock and the length of time the lock was held. Often a redesign of the application that is setting the locks in place can lessen the lock duration considerably.

The Lock:Deadlock, Lock:Deadlock Chain, and Lock:Timeout events are used to monitor deadlock and timeout situations. This information is useful in determining whether deadlocks and timeouts are affecting the user and/or application.

Locking is one of the more common aspects queried on the exam. Remember to first check the current activity to get a snapshot perspective on locking. Use the Profiler when you want to analyze locking over time.
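For that snapshot, the documented sp_lock and sp_who system procedures give the same view from a query window:

```sql
-- Snapshot of current locks, keyed by spid and object
EXEC sp_lock
-- Snapshot of current processes; the blk column shows the
-- SPID each process is waiting on
EXEC sp_who
```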


The Results window of the SQL Profiler is segmented into two view panes. If you have included TextData as one of the columns in your definition, the bottom pane shows you SQL statement information. The top pane is the current trace data view, where event information is displayed based on current filter settings.

After trace event data has been collected, you can save the trace to have it replayed later. The SQL Profiler Playback feature is powerful but carries a little overhead. It is well worth considering having a test machine available to act as a playback and troubleshooting server.

Playback of events is accomplished through the SQL Server multithreaded playback engine. This engine can simulate user connections and SQL Server authentication. The event data can be played back to reproduce the activity captured in the trace. Replay can be very useful in troubleshooting an application or another process problem.

Trace Playback and Diagnosis

After you have identified the problem and implemented corrections, run the trace that was originally collected against the corrected application or process to see whether the proposed solution accomplishes the desired effect. The replay of the original trace can be a useful mechanism in designing solutions. The trace replay feature has advanced debugging support. You can make use of break points and run-to-cursor features.

When the target computer is going to be other than the computer originally traced, you must ensure that the database IDs on the target are the same as those on the source. You can accomplish this by creating (from the source) a backup of the Master database, as well as any user databases referenced in the trace, and restoring them on the target. In this manner a test SQL Server can be used as a debugging server for any multiple-application environment.
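A hedged sketch of that backup-and-restore step follows; the paths and the Sales database are assumptions, and restoring master on the target additionally requires starting SQL Server in single-user mode:

```sql
-- On the source server
BACKUP DATABASE master TO DISK = N'C:\Backup\master.bak'
BACKUP DATABASE Sales  TO DISK = N'C:\Backup\Sales.bak'
-- On the target server (master itself must be restored with the
-- service started in single-user mode)
RESTORE DATABASE Sales FROM DISK = N'C:\Backup\Sales.bak' WITH REPLACE
```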

The default database for each login contained in the trace must be set on the target. Set the default database of each login to the database that login used in the trace, even in cases where the database names differ on the target. To set the default database of a login, use the sp_defaultdb system stored procedure.
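For example, to point a traced login at the matching database on the target (the login and database names here are hypothetical):

```sql
-- Set the default database for a login used in the trace
EXEC sp_defaultdb 'AppLogin', 'Sales'
```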

You have the option of replaying the events in the order in which they were traced. If selected, this option enables debugging. This enables you to implement debugging techniques such as stepping through the trace. Replaying the events using multiple threads will optimize performance but will disable debugging. The default option is to display the results of the replay. If the trace you want to replay is a large capture, you may want to disable this option to save disk space.



    EXAM CRAM 2 Designing and Implementing Databases with SQL Server 2000 Enterprise
    MCAD/MCSE/MCDBA 70-229 Exam Cram 2: Designing & Implementing Databases w/SQL Server 2000 Enterprise Edition
    ISBN: 0789731061
    Year: 2005
    Pages: 154
