Monitoring Events


SQL Trace and event notifications are the two ways you can monitor events that happen in the database engine. SQL Trace records the events you specify and stores them in a file (or files) that you can analyze later. You have to specify which database engine events you want to trace when you define the trace. There are two ways to access the trace data: through SQL Server Profiler, a graphical user interface, or through T-SQL system stored procedures. SQL Server Profiler exploits all of the event-capturing functionality of SQL Trace and adds the capability to save trace data to (and read it from) a table, save trace definitions as templates, extract query plans and deadlock events as separate XML files, and replay trace results for diagnosis and optimization.

Event notifications send information to a Service Broker service about many (though not all) of the same events that are captured by SQL Trace. Unlike traces, event notifications can be used to perform an action inside SQL Server in response to an event. Because event notifications execute asynchronously, these actions do not consume resources of the transaction that raised the event - meaning, for example, that if you want to be notified when a table is altered in a database, the ALTER TABLE statement will not consume more resources or be delayed because you have defined an event notification.

There are a number of reasons why you might want to monitor what's happening inside your SQL Server:

  • Find the worst-performing queries or stored procedures. We have provided a trace template on this book's Web site at www.wrox.com, which you can import into SQL Server Profiler to capture this scenario. We have included the Showplan Statistics Profile, Showplan XML, and Showplan XML Statistics Profile events from the Performance event group. We included these events because once you determine the worst-performing queries, you will almost certainly want to see the query plans they generated; just looking at the duration of a T-SQL batch or stored procedure does not get you very far. You should also consider filtering the trace data on the Duration column so that you retrieve only events longer than a specific duration, which keeps the dataset you have to analyze small.

  • Audit user activities. You can create a trace with the Audit Login event; by selecting the EventClass (the default), EventSubClass, LoginSID, and LoginName data columns, you can audit user activities in SQL Server. You can add more events from the Security Audit event group, or more data columns, based on your needs. You may someday need this type of information for legal purposes in addition to technical ones.

  • Identify the cause of a deadlock. We highly recommend that you set the startup trace flags for tracing deadlocks, because a SQL Trace does not persist between server restarts unless you use a SQL Server Agent job to restart it. You can use startup trace flag 1204 or 1222 (1222 returns more verbose information than 1204 and resembles an XML document) to trace a deadlock any time it happens on your SQL Server; see the short trace-flag sketch after this list, and refer to Chapter 4 to learn more about these trace flags and how to set them. To capture deadlock information using SQL Trace, start with the Standard trace template and add the Lock event classes (Lock: Deadlock Graph, Lock: Deadlock, or Lock: Deadlock Chain). If you include the Deadlock Graph event class, SQL Server Profiler produces a graphical representation of the deadlock.

  • Collect a representative set of events for stress testing. For some benchmarking you will want to replay the trace you generate. SQL Server provides the standard template TSQL_Replay to capture a trace that can be replayed later. If you plan to replay a trace, make sure you use this template, because SQL Server needs specific events captured in order to replay a trace, and this template captures exactly those. Later you will look at how to replay a trace.

  • Create a workload to use for the Database Engine Tuning Advisor. SQL Server Profiler provides a predefined Tuning template that gathers the appropriate Transact-SQL events in the trace output so it can be used as a workload for the Database Engine Tuning Advisor.

  • Take a performance baseline. Earlier you learned that you should take a baseline and update it at regular intervals, comparing it with previous baselines to find out how your application is performing. For example, suppose you have a batch process that loads some data once a day, validates it, performs some transformations, and puts it into your warehouse after deleting the existing set of data. After some time the data volume increases and your process starts slowing down. You would guess that the larger data volume slowed the process down, but is that the only reason? There could be more than one: the query plan may have changed because the statistics are no longer accurate for the larger data volume, and so on. If your regular baseline includes a statistics profile of the query plan, you can combine it with other data (such as Performance Monitor logs) to quickly identify the root cause.
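As a quick illustration of the trace-flag approach mentioned in the deadlock item above, a minimal sketch (enabling the flag globally for the current server run rather than as a -T startup parameter) looks like this:

 DBCC TRACEON (1222, -1)   -- verbose deadlock reporting, server-wide, until the next restart
 DBCC TRACESTATUS (-1)     -- verify which trace flags are currently active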

SQL Trace

As mentioned earlier, you have two ways to define a SQL Trace: with T-SQL system stored procedures or with SQL Server Profiler. We will first explain the SQL Trace architecture and then work through an example that creates a server-side trace using the T-SQL system stored procedures.

Before we start, you need to know some basic trace terminology. For all the terminology related to Trace, please refer to the Books Online section "SQL Trace Terminology."

  • Event: The occurrence of an action within an instance of the Microsoft SQL Server database engine or the SQL Server 2005 database engine, such as the Audit: Logout event, which happens when a user logs out of SQL Server.

  • Data column: An attribute of an event, such as the SPID column for the Audit:Logout event, which indicates the SQL SPID of the user who logged off. Another example is the ApplicationName column, which gives you an application name for the event.

    Note

    In SQL Server 2005, trace column values greater than 1GB return an error and are truncated in the trace output.

  • Filter: Criteria that limit the events collected in a trace. For example, if you are interested only in the events generated by the SQL Server Management Studio - Query application, you can set the filter on the ApplicationName column to SQL Server Management Studio - Query and you will only see events generated by this application in your trace.

  • Template: In SQL Server Profiler, a file that defines the event classes and data columns to be collected in a trace. Many default templates are provided with SQL Server and these files are located in the directory \Program Files\Microsoft SQL Server\90\Tools\Profiler\Templates\Microsoft SQL Server\90.

SQL Trace Architecture

You should understand how SQL Trace works before looking at an example. Figure 13-2 shows the basic form of the architecture. Events are the main unit of activity for tracing. When you define a trace, you specify which events you want to trace. For example, if you want to trace the SP:Starting event, SQL Server will trace only that event (along with a few default events that SQL Server always captures). The event source can be anything that produces a trace event, such as a T-SQL statement or a deadlock.

Figure 13-2

After an event occurs, if the event class has been included in a trace definition, the event information is gathered by the trace. If filters have been defined for the event class in the trace definition (for example, if you are interested only in events where LoginName = 'foo'), the filters are applied and the trace event information is passed to a queue. From the queue, the trace information is either written to a file or consumed by SMO in applications such as SQL Server Profiler.

Creating a Server-Side Trace Using T-SQL Stored Procedures

If you have used SQL Profiler before, you know that creating a trace using SQL Profiler is very easy. Creating a trace using T-SQL system stored procedures requires some extra effort because it uses internal IDs for events and data column definitions. Fortunately, the sp_trace_setevent article in SQL Server Books Online (BOL) documents the internal ID number for each event and each data column. You need four stored procedures to create and start a server-side trace:

  • sp_trace_create: Creates a trace definition. The new trace will be in a stopped state.

  • sp_trace_setevent: Once you have defined a trace using sp_trace_create, this stored procedure adds an event or event column to the trace or removes one from it. sp_trace_setevent may be executed only on existing traces that are stopped (whose status is 0). An error is returned if this stored procedure is executed on a trace that does not exist or whose status is not 0.

  • sp_trace_setfilter: Applies a filter to a trace. sp_trace_setfilter may be executed only on existing traces that are stopped. SQL Server returns an error if this stored procedure is executed on a trace that does not exist or whose status is not 0.

  • sp_trace_setstatus: Modifies the current state of the specified trace.

Now you'll create a server-side trace. This trace captures the Audit Login and SQL:StmtStarting events. For the Audit Login event it captures the SPID, DatabaseName, TextData, and HostName data columns; for the SQL:StmtStarting event it captures the ApplicationName, SPID, TextData, and DatabaseName data columns.

You want to capture trace data only for the application SQL Server Management Studio - Query and save the trace data in a file located on a remote share. The maximum file size should be 10MB, and you need to enable file rollover so that another file is created when the current file grows beyond 10MB. You also want the server to process the trace data, and you want to stop the trace at a certain time.

Note

Server-side traces are much more efficient than client-side tracing with SQL Server Profiler. Defining server-side traces using stored procedures is a bit harder, but there is an easier way to do it, which we will look at soon.

Open the sample SQLTrace-ServerSide in the Chapter 13 folder found on this book's Web site at www.wrox.com. Open the file CreateTrace.sql, which is shown here:

 -- Create a Queue
 declare @rc int
 declare @TraceID int
 declare @maxfilesize bigint
 declare @DateTime datetime

 set @maxfilesize = 10
 set @DateTime = '2006-06-05 14:00:00.000'

 ---------------------
 -- The .trc extension will be appended to the filename automatically.
 -- If you are writing from a remote server to a local drive,
 -- please use a UNC path and make sure the server has write access to your network share
 exec @rc = sp_trace_create
      @traceid = @TraceID output
     ,@options = 2
     ,@tracefile = N'\\cipher\trace\ServerSideTrace'
     ,@maxfilesize = @maxfilesize
     ,@stoptime = @DateTime
     ,@filecount = NULL
 if (@rc != 0) goto error

 -- Set the events
 declare @on bit
 set @on = 1

 -- Audit Login (event 14): HostName (8), TextData (1), DatabaseName (35), SPID (12)
 exec sp_trace_setevent
      @traceid = @TraceID
     ,@eventid = 14
     ,@columnid = 8
     ,@on = @on
 exec sp_trace_setevent @TraceID, 14, 1, @on
 exec sp_trace_setevent @TraceID, 14, 35, @on
 exec sp_trace_setevent @TraceID, 14, 12, @on

 -- SQL:StmtStarting (event 40): TextData (1), ApplicationName (10), DatabaseName (35), SPID (12)
 exec sp_trace_setevent @TraceID, 40, 1, @on
 exec sp_trace_setevent @TraceID, 40, 10, @on
 exec sp_trace_setevent @TraceID, 40, 35, @on
 exec sp_trace_setevent @TraceID, 40, 12, @on

 -- Set the Filters
 declare @intfilter int
 declare @bigintfilter bigint

 -- Filter on ApplicationName (column 10) LIKE 'SQL Server Management Studio - Query'
 exec sp_trace_setfilter
      @traceid = @TraceID
     ,@columnid = 10
     ,@logical_operator = 1
     ,@comparison_operator = 6
     ,@value = N'SQL Server Management Studio - Query'

 -- Set the trace status to start
 exec sp_trace_setstatus @traceid = @TraceID, @status = 1

 -- display trace id for future references
 select TraceID = @TraceID
 goto finish

 error:
 select ErrorCode = @rc

 finish:
 go

Now we'll walk through this script, starting with sp_trace_create. The @traceid parameter returns an integer that you must use if you want to modify the trace, stop or restart it, or look at its properties.

The second parameter, @options, lets you specify one or more trace options. A value of 1 tells the trace to produce a rowset and send it to Profiler. You can't use this option value if you capture to a server-side file; typically, only Profiler-created traces have it, and traces created with the sp_trace_create stored procedure should never use an option value of 1 because it is reserved for Profiler-defined traces. Because the @options parameter is a bitmap, you can combine values by adding them together. For example, if you want a trace that enables file rollover and shuts down SQL Server if it can't write to the trace file, the option value is 6 (4 + 2). Note that not all option values can be combined with the others (option value 8 by definition doesn't combine with any other option value). In this case, the option value 2 means that when the trace file reaches the size specified by the @maxfilesize parameter, the current trace file is closed and a new file is created. All new records are written to the new file, which has the same name as the previous file with an integer appended to indicate its sequence. For details about the other @options values, please refer to the sp_trace_create article in SQL Server Books Online (BOL).
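For example, a sketch of the same call requesting both file rollover and shutdown-on-error (option value 6) might look like the following; the share name here is just a placeholder, not one used elsewhere in this chapter:

 exec @rc = sp_trace_create
      @traceid = @TraceID output
     ,@options = 6            -- 2 (TRACE_FILE_ROLLOVER) + 4 (SHUTDOWN_ON_ERROR)
     ,@tracefile = N'\\someserver\someshare\CriticalTrace'
     ,@maxfilesize = @maxfilesize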

In the @tracefile parameter, you specify where you want to store the trace results. You can specify either a local path (such as N'C:\MSSQL\Trace\trace.trc') or a UNC path to a share (N'\\Servername\Sharename\Directory\trace.trc'). The extension .trc is added automatically, so you don't need to specify it. Note that you cannot specify @tracefile if you set the @options value to 8; in that case, the server itself stores the last 5MB of trace information.

The @maxfilesize parameter specifies, in MB, the maximum size a trace file can reach before another file is created to hold the trace data. In this case we have specified 10MB, which means that once a trace file exceeds 10MB, SQL Trace creates another file and starts adding data there. We recommend using this option: one big file is not easy to move around, whereas with multiple files you can start looking at the older files while the trace is still writing to the new one. In addition, if disk space issues arise while gathering trace data, you can move files to different drives or servers.

You can optionally specify the trace stop time using the @stoptime parameter, which is of the datetime type.

The @filecount parameter is used to specify the maximum number of trace files to be maintained with the same base filename. Please refer to Books Online for a detailed description of this parameter.

Now let's see how to set up the events and choose the data columns for those events; the stored procedure sp_trace_setevent does that job. The first parameter is @traceid, which you got from the sp_trace_create stored procedure. The second parameter, @eventid, is the internal ID of the event you want to trace. The first call of the stored procedure specifies 14, which is the Audit Login event. In the third parameter you specify which data column you want to capture for the indicated event; in this case we set @columnid to 8, which is the HostName data column. You have to call this stored procedure once for each data column you want for a particular event, which is why we called it multiple times for @eventid 14. The last parameter is @on, a bit parameter used to specify whether you want to turn the event on or off. As mentioned earlier, the sp_trace_setevent article in SQL Server Books Online (BOL) documents the internal ID number for each event and each data column.

Once the events are established, you set the filter. You use the stored procedure sp_trace_setfilter to set a filter on a particular data column of the trace. The sp_trace_setfilter article in BOL documents the internal ID numbers for the @comparison_operator and @logical_operator parameters. In this case, you want only the trace events generated by the application named SQL Server Management Studio - Query.

To start the trace you use the stored procedure sp_trace_setstatus. You specify the trace ID you want to act on and a status of 0, 1, or 2. Because we want to start the trace, we have specified 1. If you want to stop the trace, specify 0; if you specify 2, the trace is closed and its definition is deleted from the server.
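For example, assuming the trace ID returned earlier was 2, a minimal sketch of stopping and then removing that trace looks like this:

 exec sp_trace_setstatus @traceid = 2, @status = 0   -- stop the trace
 exec sp_trace_setstatus @traceid = 2, @status = 2   -- close the trace and delete its definition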

You're all set to run the server-side trace. Note that we have specified a stop time through the @stoptime parameter; change that datetime value as your needs dictate. If you specify a UNC path for the trace file, make sure the SQL Server service account has write access to the share. Run the script now.

It seems like plenty of work to get these internal IDs right when you create the server-side trace. Fortunately, there is an easy way to create the server-side trace using SQL Server Profiler, as you'll see in a moment.

Note

You can define all the events, data columns, filters, filenames (you have to select the option to save to the file, as you cannot store to a table when you create a server-side trace) and size using SQL Server Profiler and then click Run. After that, select File → Export → Script Trace Definition → For SQL Server 2005 (or 2000) and save the script. Now you have the script to create the server-side trace. You may have to check the @maxfilesize to ensure that it has the correct value if you have changed something other than the default, which is 5MB.

When you define a server-side trace, you cannot store the trace results directly in a table. You have to store them in a file; later you can use a function, discussed next, to load the trace data into a table.

Retrieving the Trace Metadata

Now that you have defined the trace, you also need to know how to get information about it. There are built-in functions you can use to do that. The function fn_trace_getinfo(trace_id) returns information about a particular trace. If you do not know the trace_id, specify DEFAULT as the function argument and it will list all the traces.
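If you just want to run that inline, a minimal sketch that lists every trace on the server looks like this:

 SELECT * FROM fn_trace_getinfo(DEFAULT)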

Open the script GetTraceDetails.sql and run it. Be sure to change the trace_id parameter value to whatever trace_id you got when you ran the script CreateTrace.sql. Figure 13-3 shows the output.

Figure 13-3

In Figure 13-3, the Property 1 row contains the @options parameter value; a trace with a Property 1 value of 1 is most likely a trace started from Profiler. The Property 2 row contains the trace filename, if any. The Property 3 row contains the maximum file size, which is 10MB in this case, and the Property 4 row contains the stop time, which has a value for this trace. The Property 5 row shows the trace's status - in this case 1, which means the trace is running.

The function fn_trace_geteventinfo() shows you the events and data columns that a particular trace captures, and the function fn_trace_getfilterinfo() returns information about a particular trace's filters. Note that fn_trace_geteventinfo() returns event and data column IDs rather than names or explanations, so you must track down their meaning (the sketch after the following query shows one way to do that):

 SELECT * FROM fn_trace_geteventinfo (2) 
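To resolve those IDs to names, you can join the function output to the sys.trace_events and sys.trace_columns catalog views; the following sketch assumes the trace ID is 2, as above:

 SELECT E.name AS EventName, C.name AS ColumnName
 FROM fn_trace_geteventinfo(2) EI
 JOIN sys.trace_events E ON E.trace_event_id = EI.eventid
 JOIN sys.trace_columns C ON C.trace_column_id = EI.columnid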

Retrieving Data from the Trace File

There are two ways you can retrieve the trace data from the file: with the function fn_trace_gettable or with SQL Server Profiler. The function fn_trace_gettable is a table-valued function, so you can read directly from the file and insert the data into a table for analysis:

 SELECT * FROM fn_trace_gettable('\\cipher\trace\ServerSideTrace.trc', DEFAULT)

You can also use SELECT INTO in this query to store the result in a table. We like to put the trace data into a table because then you can write a T-SQL statement to query the data. For example, the TextData column is created with the ntext data type. You can alter the data type to nvarchar(max) so that you can use the string functions. You should not be using the ntext or text data types anyway, because they will be deprecated in a future SQL Server release; use nvarchar(max) or varchar(max) instead. Note that even though the trace is running, you can still read the data from the file to which Trace is writing. You don't have to stop the trace for that. The only gotcha in storing the trace data into a table is that the EventClass value is stored as an int value and not as a friendly name. We have provided a script, EventClassID_Name.sql, that creates a table and inserts the eventclassid and its name in that table. You can then use this table to get the event class name when you analyze the trace result stored in the table. You can write a query like the following to do that, assuming that you have stored the trace result in the table TraceResult:

 SELECT ECN.EventClassName, TR.*
 FROM TraceResult TR
 LEFT JOIN EventClassIdToName ECN ON ECN.EventClassID = TR.EventClass
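Putting those steps together, a minimal sketch that loads the trace file into the TraceResult table and then converts the TextData column so you can use the string functions might look like this:

 SELECT * INTO TraceResult
 FROM fn_trace_gettable('\\cipher\trace\ServerSideTrace.trc', DEFAULT)

 ALTER TABLE TraceResult ALTER COLUMN TextData nvarchar(max)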

SQL Server Profiler

SQL Server Profiler is a rich interface for creating and managing traces and for analyzing and replaying trace results. SQL Server Profiler shows how SQL Server resolves queries internally. This enables you to see exactly what Transact-SQL statements or multidimensional expressions (MDX) are submitted to the server and how the server accesses the database or cube to return result sets.

SQL Server 2005 added many new features to Profiler, including the following:

  • Profiling of Microsoft SQL Server 2005 Analysis Services (SSAS). SQL Server Profiler now supports capturing and displaying events raised by SSAS.

  • Rollover trace files. SQL Server Profiler can replay one or more collected rollover trace files continuously and in order.

  • Saving the traced Showplan as XML. Showplan results can be saved in an XML format, which can be loaded later for graphical Showplan display in SQL Server Management Studio without the need for an underlying database. SQL Server Profiler will also display a graphical representation of Showplan XML events at the time they are captured by SQL Server Profiler.

  • Save trace results as XML. Trace results can be saved in an XML format in addition to the standard save formats of ANSI, Unicode, and OEM. Results saved in this fashion can be edited and used as input for the replay capability of SQL Server Profiler.

  • Aggregate view. Users can choose an aggregate option and select a key for aggregation. This enables users to see a view that shows the column on which the aggregation was performed, along with a count of the number of rows that make up the aggregate value.

  • Correlation of Trace events to Performance Monitor counters. SQL Server Profiler can correlate Performance Monitor counters with SQL Server or SSAS events. Administrators can select from a predefined set of Performance Monitor counters and save them at specified time intervals while also collecting a SQL Server or SSAS trace.

  • New extensibility standard. SQL Server Profiler uses an XML-based definition that enables SQL Server Profiler to more easily capture events from other types of servers and programming interfaces.

If you do not know how to create a trace using Profiler, please refer to the BOL topic "Using SQL Server Profiler." Here we explain how to use Showplan XML and how to correlate trace events with Performance Monitor counters. We also discuss how to capture a trace for replay and how to replay it.

You can read the trace file created using a T-SQL stored procedure with SQL Profiler. To use SQL Profiler to read the trace file, just go to the File menu and open the trace file you are interested in.

Note

In SQL Server 2005, the server reports the duration of an event in microseconds (one millionth of a second) and the amount of CPU time used by the event in milliseconds (one thousandth of a second). In SQL Server 2000, the server reported both duration and CPU time in milliseconds. In SQL Server 2005, the SQL Server Profiler graphical user interface displays the Duration column in milliseconds by default, but when a trace is saved to either a file or a database table, the Duration column value is written in microseconds. If you want to display the duration column in microseconds in SQL Profiler, go to Tools → Options and select the option "Show values in Duration column in microseconds (SQL Server 2005 only)."

Showplan XML

In SQL Server 2005 you can get the query plan in an XML document. You can use this document later to generate the graphical query plan. Showplan output in XML format can be moved from one computer to another and thus rendered on any computer, even on computers where SQL Server is not installed. Showplan output in XML format can also be programmatically processed using XML technologies, such as XPath, XQuery, and so on. XML Showplan processing is supported in SQL Server 2005, which contains a built-in query evaluation engine for XPath and XQuery.

You can generate XML Showplan output using the following means:

  • Selecting Display Estimated Execution Plan or Include Actual Execution Plan from the query editor toolbar in SQL Server Management Studio

  • Using the Transact-SQL Showplan SET statement options SHOWPLAN_XML and STATISTICS XML

  • Selecting the SQL Server Profiler event classes Showplan XML, Showplan XML for Query Compile, and Showplan XML Statistics Profile for tracing

  • Using the sys.dm_exec_query_plan dynamic management view

XML Showplans are returned in the nvarchar(max) data type for all of these methods, except when you use sys.dm_exec_query_plan. XML Showplans are returned in the xml data type when you use this dynamic management view.
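For example, a minimal sketch that pulls cached plans through the dynamic management objects might look like this (it simply lists each cached batch alongside its XML plan):

 SELECT st.text, qp.query_plan
 FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
 CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp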

You can visit http://schemas.microsoft.com/sqlserver/2004/07/showplan/showplanxml.xsd for the XML Showplan schema or you can look in the directory where SQL Server is installed, \Program Files\Microsoft SQL Server\90\Tools\Binn\schemas\sqlserver\2004\07\showplan.

We explain how to read the query plan in Chapter 14. Figure 13-4 shows what a query plan looks like in SQL Profiler when you choose the Showplan XML event.

Figure 13-4

If you right-click the Showplan XML event, you will see a menu item, Extract Event Data, which saves the query plan with a .sqlplan extension. You can later open that file with SQL Server Management Studio or Profiler and it will display the graphical plan exactly as shown in Figure 13-4. You can also use File → Export → Extract SQL Server Events in SQL Profiler to achieve the same result.

When you set up the trace using Profiler, if you choose Showplan XML, Showplan XML Statistics Profile, or Showplan XML for Query Compile, an additional tab shows up in the Trace Properties dialog, shown in Figure 13-5.

Figure 13-5

You can also see in Figure 13-5 that there is a Deadlock XML option to store the deadlock graph in an XML document, which you can view later in SQL Management Studio or Profiler. This option is enabled only if you choose the Deadlock Graph event.

You can also use SET SHOWPLAN_XML ON before you execute a query, which gives you the estimated execution plan in XML without executing the query. Alternatively, SET STATISTICS XML ON executes the query and returns the actual execution plan in XML format, as shown in Figure 13-6. You can click the link in the XML Showplan and it will open an XML editor within SQL Server Management Studio.

Figure 13-6
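As a minimal sketch of the two SET options just described (the query itself is an arbitrary example):

 SET SHOWPLAN_XML ON
 GO
 SELECT name FROM sys.objects   -- not executed; the estimated plan is returned as XML
 GO
 SET SHOWPLAN_XML OFF
 GO

 SET STATISTICS XML ON
 GO
 SELECT name FROM sys.objects   -- executed; the actual plan is returned as XML
 GO
 SET STATISTICS XML OFF
 GO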

If you want to see the graphical execution plan from this XML document, you can save this XML document with a .sqlplan extension. Open that file in SQL Server Management Studio and you will get the graphical execution plan. See Figure 13-7 for the graphical execution plan generated from the XML document.

Figure 13-7

Note

When the Showplan XML event class is included in a trace, its overhead can significantly impede performance. Showplan XML stores the query plan that is created when the query is optimized. To minimize the overhead, limit the use of this event class to traces that monitor specific problems for brief periods of time, and be sure to filter on data columns specific to what you are tracing.

Correlating a Trace with Windows Performance Log Data

SQL Server 2005 added a new feature that correlates trace data with Performance Monitor log data based on the StartTime and EndTime data columns in the SQL trace file. If you have collected trace and Performance Monitor data over the same period, you can relate the events that happened in SQL Server to server activity such as processor time, disk activity, and memory usage. Figures 13-8 and 13-9 show an example of correlating trace and Performance Monitor log data.

Figure 13-8

Figure 13-9

To bring up the performance data after you open a trace file, click File → Import Performance Data. That brings up the dialog box shown in Figure 13-8. Select the performance counters you are interested in and then click OK. That brings the performance counters into Profiler so you can correlate them with SQL Server activity over a specific time, as shown in Figure 13-9. You can move the red vertical bar to a particular time you are interested in to see what was happening in SQL Server at that moment.

Note

If you click the peak value for a performance counter - for example, Avg. Disk Queue Length - Profiler will show whatever query SQL Server was executing at that time. However, that doesn't mean the query caused the disk queue length to increase at exactly that moment. It's possible that the query started a little earlier and is now requesting a lot of data from disk, which may be what is driving the average disk queue length up. In short, be careful before you jump to conclusions, and make sure you have looked at the whole picture.

Replaying a Trace

Replay is the capability to save a trace and play it back later; this functionality enables you to reproduce the activity captured in a trace. When you create or edit a trace, you can save the trace to replay it later. Be sure to choose the predefined template called TSQL_Replay when you create the trace using SQL Profiler: SQL Server needs specific events and data columns to be captured in order to replay the trace, and if you miss them, SQL Server will not replay it. Trace replay supports debugging by using the Toggle Breakpoint and Run to Cursor options on the SQL Server Profiler Replay menu. These options especially improve the analysis of long scripts because they can break the replay of the trace into short segments that can be analyzed incrementally.

The following are ignored or skipped when you replay a trace:

  • Traces that contain transactional replication and other transaction log activity. These events are skipped. Other types of replication do not mark the transaction log so they are not affected.

  • Traces that contain operations that involve globally unique identifiers (GUID). These events are skipped.

  • Traces that contain operations on text, ntext, and image columns involving the bcp utility, the BULK INSERT, READTEXT, WRITETEXT, and UPDATETEXT statements, and full-text operations. These events are skipped.

  • Traces that contain session binding: sp_getbindtoken and sp_bindsession system stored procedures. These events are skipped.

  • SQL Server Profiler does not support replaying traces collected by Microsoft SQL Server version 7.0 or earlier.

In addition, some requirements must be met in order to replay the trace on the target server:

  • All logins and users contained in the trace must be created already on the target and in the same database as the source.

  • All logins and users in the target must have the same permissions they had in the source.

  • All login passwords must be the same as those of the user who executes the replay. You can use the Transfer Login task in SSIS to transfer the logins to the target server on which you want to replay the trace.

  • The database IDs on the target ideally should be the same as those on the source. However, if they are not the same, matching can be performed based on the database name if it is present in the trace, so make sure that you have the DatabaseName data column selected in the trace.

  • The default database on the target server for a login should be the same as on the source when the trace was taken.

  • Replaying events associated with missing or incorrect logins results in replay errors, but the replay operation continues.

Performance Considerations When Using Trace

SQL Server tracing incurs no overhead unless it captures an event, and most events need very few resources. Profiler becomes expensive only if you trace more than 100 event classes and capture all the data from those events. Normally, you will see a maximum of 5 percent to 10 percent overhead if you capture everything. Most of the performance hit is because of a longer code path; the actual resources that the trace needs in order to capture event data aren't particularly CPU-intensive. In addition, to minimize the performance hit, you can define all your traces as server-side traces and avoid the overhead of producing rowsets to send to the Profiler client.

Event Notification

Event notifications are special database objects that send messages to a Service Broker service (see Chapter 8 for details on the Service Broker) with information about server or database events. Event notifications can be programmed against many of the same events captured by SQL Trace, but not all of them. Unlike traces, event notifications can be used to perform an action inside an instance of SQL Server 2005 in response to an event. Later in this chapter you'll see an example that shows how to subscribe to events and take action if needed.

To subscribe to an event, you have to create a Service Broker queue that will receive the event details, along with a Service Broker service on that queue to deliver the messages. Then you create the event notification itself. Finally, you can create a stored procedure and have it activated when an event message arrives in the queue in order to take some action. This example assumes you know how the Service Broker works, so be sure to read Chapter 8 if you don't already know about the Service Broker.

Note

Event notifications are created at the server or database level.

We will create an event notification in a database whereby you will be notified when a new table is created. Open the project EventNotification using SQL Server Management Studio. Open the CreateDatabase.sql script. This script will create a database called StoreEvent for the example. Run this script.

Next, open the CreateQueue.sql script, shown here:

 USE StoreEvent
 GO

 --CREATE QUEUE to receive the event details.
 IF OBJECT_ID('dbo.NotifyQueue') IS NULL
     CREATE QUEUE dbo.NotifyQueue
     WITH STATUS = ON
         ,RETENTION = OFF
 GO

 --create the service so that when event happens
 --server can send the message to this service.
 --we are using the pre-defined contract here.
 IF NOT EXISTS(SELECT * FROM sys.services WHERE name = 'EventNotificationService')
     CREATE SERVICE EventNotificationService
     ON QUEUE NotifyQueue
     ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification])

 IF NOT EXISTS(SELECT * FROM sys.routes WHERE name = 'NotifyRoute')
     CREATE ROUTE NotifyRoute
     WITH SERVICE_NAME = 'EventNotificationService',
          ADDRESS = 'LOCAL';
 GO

This script creates a queue in the StoreEvent database to store the event data when a table is created in that database. It also creates a Service Broker service, EventNotificationService, so that SQL Server can send a message when a subscribed event happens, and a route, NotifyRoute, that routes the message to the local SQL Server instance. Run this script.

Now you need to create the event notification. Open the script CreateEventNotification.sql, shown here:

 USE StoreEvent
 GO

 CREATE EVENT NOTIFICATION CreateTableNotification
 ON DATABASE
 FOR CREATE_TABLE
 TO SERVICE 'EventNotificationService', 'current database' ;

This script creates an event notification called CreateTableNotification to notify you when a table is created in the StoreEvent database.

Note that messages are sent from one service to another, as discussed in Chapter 8. In this case, you have created the target end of the conversation, which is the EventNotificationService service; the initiator end is SQL Server itself.

When a table is created in the StoreEvent database, you will get the message in the queue NotifyQueue, so create a table and run the following script to see what's in the queue:

 SELECT CAST(message_body AS xml) FROM NotifyQueue 

Here is what the XML message in the queue looks like:

 <EVENT_INSTANCE>
   <EventType>CREATE_TABLE</EventType>
   <PostTime>2006-06-11T21:53:14.463</PostTime>
   <SPID>56</SPID>
   <ServerName>CIPHER</ServerName>
   <LoginName>REDMOND\ketanp</LoginName>
   <UserName>dbo</UserName>
   <DatabaseName>StoreEvent</DatabaseName>
   <SchemaName>dbo</SchemaName>
   <ObjectName>TestTable1</ObjectName>
   <ObjectType>TABLE</ObjectType>
   <TSQLCommand>
     <SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON"
                 QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
     <CommandText>CREATE TABLE TestTable1 (col1 int, col2 varchar(100), col3 xml)</CommandText>
   </TSQLCommand>
 </EVENT_INSTANCE>

You can take some action with this event if you create a stored procedure and have it activated when a message arrives in the queue. Refer to Chapter 8 for details on stored procedure activation.
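As a rough sketch (the procedure name and the logging table here are made up for illustration, and a real activation procedure would also loop, check message types, and handle conversation cleanup as described in Chapter 8), it might look like this:

 USE StoreEvent
 GO
 CREATE TABLE dbo.TableCreationLog (EventData xml, LoggedAt datetime DEFAULT GETDATE())
 GO
 CREATE PROCEDURE dbo.ProcessCreateTableEvent
 AS
 BEGIN
     DECLARE @msg xml
     -- pick up one event message from the queue
     RECEIVE TOP (1) @msg = CAST(message_body AS xml) FROM dbo.NotifyQueue
     IF @msg IS NOT NULL
         INSERT dbo.TableCreationLog (EventData) VALUES (@msg)
 END
 GO
 -- attach the procedure to the queue so it runs when a message arrives
 ALTER QUEUE dbo.NotifyQueue
 WITH ACTIVATION (STATUS = ON,
                  PROCEDURE_NAME = dbo.ProcessCreateTableEvent,
                  MAX_QUEUE_READERS = 1,
                  EXECUTE AS SELF)
 GO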

You create server-wide event notifications in the same way. For a full list of the events for which you can be notified, query the sys.event_notification_event_types catalog view. Please refer to the script Metadata_EventNotification.sql for the list of catalog views that store metadata about event notifications.
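For example:

 SELECT type, type_name
 FROM sys.event_notification_event_types
 ORDER BY type_name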

Note that you can also be notified for grouped events. For example, if you want to be notified when a table is created, altered, or dropped, you don't have to create three separate event notifications; you can use the event group DDL_TABLE_EVENTS and create one event notification that covers all three (see the sketch after this paragraph). Another example relates to monitoring locking events with the event group TRC_LOCKS: when you create an event notification with this group, you are notified for the LOCK_DEADLOCK, LOCK_DEADLOCK_CHAIN, LOCK_ESCALATION, and DEADLOCK_GRAPH events.
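A minimal sketch of such a grouped notification (the notification name is made up for illustration, and it reuses the EventNotificationService created earlier) might look like this:

 USE StoreEvent
 GO
 CREATE EVENT NOTIFICATION TableDDLNotification
 ON DATABASE
 FOR DDL_TABLE_EVENTS
 TO SERVICE 'EventNotificationService', 'current database' ;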

Please refer to the BOL topic "DDL Event Groups for Use with Event Notifications" for all the event groups.

Event notifications can be used to do the following:

  • Log and review changes or activity occurring on the database or server.

  • Perform an action in response to an event in an asynchronous, rather than synchronous, manner.

Event notifications can offer a programming alternative to DDL triggers and SQL Trace.


