Typical Purposes


In Chapter 1, we discussed the uses of stored procedures, but in addition to these, stored procedures have many more purposes:

  • Encapsulating logic

  • Improving performance

  • Easing administration

  • Abstracting the underlying base tables away from developers

  • Simplifying code updates

  • Improving database security

  • Improving data validation

  • Executing code at SQL Server startup

Stored procedures help to separate SQL code from the client applications. They eliminate the embedded T-SQL statements (also known as Inline T-SQL) and the surrounding logic from the application written in a compiled language, such as VB or C#, by placing the T-SQL statements into stored procedures within the SQL Server. The T-SQL statements within the compiled application are replaced with calls to the stored procedures.
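As a simple illustration (the procedure name below is hypothetical, and the query is just a minimal sketch against the Northwind Customers table), an inline statement embedded in the client code can be wrapped in a stored procedure, and the application then sends only the call:

     -- Inline T-SQL previously embedded in the client application:
     --   SELECT CustomerID, CompanyName FROM Customers WHERE Country = 'UK'

     -- ...wrapped instead in a stored procedure on the server:
     CREATE PROCEDURE GetCustomersByCountry @Country NVARCHAR(15)
     AS
       SET NOCOUNT ON
       SELECT CustomerID, CompanyName
       FROM Customers
       WHERE Country = @Country

     -- The compiled application now simply issues the call:
     EXEC GetCustomersByCountry 'UK'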

Encapsulating Logic

Encapsulating logic is an important aspect of stored procedures. It refers to placing all the commands necessary to perform an activity together, and exposing that group of commands as a single executable unit. Some important points regarding encapsulating logic are:

  • Values are passed to it through predefined parameters

  • The output is produced in a consistent predefined format

  • Setting values or dependencies is not required in the calling routine

  • The routine is callable from anywhere within the application

  • The routine can be called from multiple applications

Stored procedures help in implementing encapsulated logic, such as routines of code that we execute from different areas of an application, for example, a routine to create an order.

First, we need to create a hypothetical table for invoices. This table contains information on the customer invoices and the payments received for them.

     CREATE TABLE dbo.apress_Invoice
     (
       InvoiceID           CHAR(5) PRIMARY KEY,
       OrderID             INT REFERENCES Orders(OrderId) NOT NULL,
       InvoiceDate         DATETIME NOT NULL,
       DueDate             DATETIME NOT NULL,
       PaymentReceivedDate DATETIME NULL
     )

Next, we insert some data into this invoice table for our testing purposes. We insert dummy invoices dated several months before the current date, each with a due date one month after the invoice date.

     INSERT apress_Invoice
     SELECT '00001', 10643, DATEADD(MM,-5,GETDATE()),
       DATEADD(MM,-4,GETDATE()), NULL

     INSERT apress_Invoice
     SELECT '00002', 10692, DATEADD(MM,-4,GETDATE()),
       DATEADD(MM,-3,GETDATE()), NULL

     INSERT apress_Invoice
     SELECT '00003', 10702, DATEADD(MM,-3,GETDATE()),
       DATEADD(MM,-2,GETDATE()), NULL

We can now create our procedure that encapsulates the logic surrounding the creation of orders for our application.

First, we specify what information is needed to create a new order. As we have not defaulted the parameters to NULL (using @EmployeeID INT = NULL, for example), we stipulate that all of these parameters must be supplied to create a new order; omitting any of them will cause an error.

     CREATE PROCEDURE CreateNewOrder
       @CustomerID    NVARCHAR(5),
       @EmployeeID    INT,
       @RequiredDate  DATETIME
     AS
       SET NOCOUNT ON
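Incidentally, if we did want one of these parameters to be optional, we could default it to NULL and handle the missing value inside the procedure. The following is only a sketch of such a variant (the procedure name and the one-week default are hypothetical):

     -- Sketch only: a variant in which @RequiredDate is optional.
     -- If the caller omits it, we default the required date to a week from today.
     CREATE PROCEDURE CreateNewOrderOptionalDate
       @CustomerID    NVARCHAR(5),
       @EmployeeID    INT,
       @RequiredDate  DATETIME = NULL
     AS
       SET NOCOUNT ON
       IF @RequiredDate IS NULL
         SET @RequiredDate = DATEADD(DD, 7, GETDATE())

With that aside noted, we continue building the CreateNewOrder procedure itself.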

Next, we issue a SELECT statement that counts the records in the invoice table for the specified customer that have a due date two months (or more) before the current date and no payment received date; in other words, the number of invoices where payment is more than two months overdue.

     DECLARE @UnpaidInvoices INT,
             @NewOrderID INT

     SELECT @UnpaidInvoices = COUNT(*)
     FROM apress_Invoice i
     INNER JOIN Orders o ON i.OrderID = o.OrderID
     WHERE DueDate <= DATEADD(MM,-2,GETDATE())
       AND PaymentReceivedDate IS NULL
       AND o.CustomerID = @CustomerID

Next, if the number of unpaid invoices is greater than 2, then instead of adding a new order, we pass an error back to our client application specifying that the order cannot be created, and the reason behind it.

     IF @UnpaidInvoices > 2
     BEGIN
       RAISERROR('This order cannot proceed as the customer has too many unpaid invoices',16,1)
       RETURN 1
     END

Otherwise, we create the order as requested, and return the newly created order information back to the client application.

     INSERT Orders(CustomerID, EmployeeID, OrderDate, RequiredDate)
     SELECT @CustomerID, @EmployeeID, GETDATE(), @RequiredDate

     -- Capture the identity value of the order we just created so that
     -- only the new order's row is returned to the client application
     SELECT @NewOrderID = SCOPE_IDENTITY()

     SELECT OrderID, CustomerID, EmployeeID,
            OrderDate, RequiredDate
     FROM dbo.Orders
     WHERE OrderID = @NewOrderID

This procedure can be called from anywhere within our application, and we can be sure that the business rules surrounding the order creation process will be applied, no matter where we call it. For example, we request an order creation from within a client application by executing the stored procedure with the following parameters:

     EXEC CreateNewOrder 'ANATR',6,'20030610' 

We receive the new order information back, as expected:

 11080 ANATR 6 2003-01-02 11:51:35.247 2003-06-10 00:00:00.000 

This is because the business rules for this particular customer are met. However, now we execute the stored procedure with a different set of parameters for a different customer:

     EXEC CreateNewOrder 'ALFKI',6,'20030610' 

We get a very different result, as this particular customer violates the business rules specified within our stored procedure:

 Server: Msg 50000, Level 16, State 1, Procedure CreateNewOrder, Line 21
 This order cannot proceed as the customer has too many unpaid invoices
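Because the procedure issues RETURN 1 when the business rule is violated, the caller can also test the return code rather than relying solely on the error message. A minimal sketch of such a check from a calling batch:

     -- Sketch: capture the procedure's return code and act on it
     DECLARE @ReturnCode INT
     EXEC @ReturnCode = CreateNewOrder 'ALFKI', 6, '20030610'
     IF @ReturnCode <> 0
       PRINT 'The order was rejected by the business rules'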

Encapsulation provides us with obvious benefits, such as a single location in which to maintain the code and to debug it when a problem is discovered. However, stored procedures should ideally remain specific to a given function, as our aim is not code minimization but performance. We will look at this in more detail in the Generic Stored Procedure section.

Improving Performance

By moving individual SQL statements and the surrounding data logic into stored procedures, we can improve the performance of our application by reducing the data sent over the network. To do this, we need to examine the data our application's front end actually needs for processing or display, and then move the process of obtaining this information into a stored procedure, which is executed on the SQL Server. This can reduce the number of network round trips involved, by keeping the conversations between the client application and SQL Server short.

For example, suppose we need to create some application code that shows our organizational structure with the CEO at the top, and all the direct reports shown underneath this position (with appropriate indenting). There are a couple of ways to approach such a problem; as you will see, the two methods we demonstrate take rather different routes to the same result.

For example, we use the following table for storing the details of employees:

     USE Northwind
     GO

     CREATE TABLE EmployeeHierarchy
     (
       EmployeeID INT PRIMARY KEY,
       FirstName VARCHAR(255),
       LastName VARCHAR(255),
       ManagerID INT REFERENCES Employees(EmployeeID)
     )

Before we go any further, let's populate this structure with some sample data for this example. We can use the data located within the Northwind database to fill our table.

     INSERT EmployeeHierarchy
     SELECT TOP 1
       EmployeeID,
       FirstName,
       LastName,
       NULL
     FROM Employees e1
     ORDER BY NEWID()

     WHILE @@ROWCOUNT <> 0
     BEGIN
       INSERT EmployeeHierarchy
       SELECT TOP 1
         EmployeeID,
         FirstName,
         LastName,
         (SELECT TOP 1 EmployeeID
          FROM dbo.EmployeeHierarchy e2
          WHERE e1.EmployeeID <> e2.EmployeeID
          ORDER BY NEWID())
       FROM Employees e1
       WHERE e1.EmployeeID NOT IN
           (SELECT EmployeeID FROM EmployeeHierarchy)
     END

The first INSERT statement within this script inserts a random employee as our organization's boss (an employee who has a NULL in their ManagerID column). TOP 1 used in conjunction with ORDER BY NEWID() gives us a random row: every time this function is called a different value is generated, which gives us an unpredictable order (and TOP 1 picks off the first row from that unpredictable order).

     INSERT EmployeeHierarchy
     SELECT TOP 1
       EmployeeID,
       FirstName,
       LastName,
       NULL
     FROM Employees e1
     ORDER BY NEWID()

Next we loop around while there are rows being affected, and on each pass of the loop we INSERT an employee into our EmployeeHierarchy table with a random employee as their manager. This is done when the employee doesn't already exist within our EmployeeHierarchy table, and where the manager is not the same employee as the current employee being inserted.

     WHILE @@ROWCOUNT <> 0
     BEGIN
       INSERT EmployeeHierarchy
       SELECT TOP 1
         EmployeeID,
         FirstName,
         LastName,
         (SELECT TOP 1 EmployeeID
          FROM dbo.EmployeeHierarchy e2
          WHERE e1.EmployeeID <> e2.EmployeeID
          ORDER BY NEWID())
       FROM Employees e1
       WHERE e1.EmployeeID NOT IN
           (SELECT EmployeeID FROM EmployeeHierarchy)
     END

Okay, now that we have our test data, let's assume that we also have a VB.NET client application for displaying this information. Let's examine, piece by piece, what this application's code does.

First, we define the objects we need and a connection string for connecting to our SQL Server using Integrated Security.

     Private Sub ShowManagerID(ByVal ManagerID As Integer, _
                               ByVal Level As Integer)
       Dim cnSQL As New SqlConnection("Server=.;Integrated Security=SSPI;" & _
                                      "Database=Northwind")
       Dim cmdSQL As New SqlCommand()
       Dim drSQL As SqlDataReader
       Dim CurrentID As Integer

Next, we define a SqlCommand object. The command associated with this object is a SQL query that retrieves all the child rows for the specified ManagerID. Don't worry if you're not too familiar with the exact details of the code; the SELECT statement should give you a good idea of what's going on here.

     With cmdSQL
       .Connection = cnSQL
       .CommandText = "SELECT EmployeeID, FirstName, LastName " & _
                      "FROM EmployeeHierarchy WHERE ManagerID=" & ManagerID.ToString()
     End With

Next, we open a connection and execute this query.

     cnSQL.Open()
     drSQL = cmdSQL.ExecuteReader

In a While loop, we step through every employee who reports to the specified manager, display each one indented with a suitable number of spaces, and call the routine again recursively to retrieve all the employees who report to the current employee:

     While drSQL.Read
       Console.WriteLine(Space(Level) & _
                         drSQL("FirstName") & _
                         drSQL("LastName"))
       ShowManagerID(drSQL("EmployeeID"), Level + 1)
     End While

     ' Tidy up the reader and connection opened for this level
     drSQL.Close()
     cnSQL.Close()
     End Sub

In this example, we can see that there may be many round trips. If the organization has 1000 employees, it would translate into 1000 round trips, not to mention the opening and closing of 1000 DataReader objects which will likely impact on our application performance.

Important

You should use a tool like SQL Profiler to examine the volume of statements sent by the application to the SQL Server if you plan to deploy your application over a slow link, especially if the application is not using stored procedures.

An alternative approach is to examine what the client application really wants (in our case, a structure from which it can display the employee chart) and then push the logic involved in creating this structure back to SQL Server.

For example, we can use the following stored procedure for creating the employee chart:

     CREATE PROCEDURE GetOrgChart
     AS
       SET NOCOUNT ON

       CREATE TABLE #OrgChart
       (
         EmployeeID INT,
         EmpLevel   INT,
         FirstName  VARCHAR(20),
         LastName   VARCHAR(20)
       )

       DECLARE @Level INT
       SELECT @Level = 1

       -- Insert our Boss
       INSERT #OrgChart
       SELECT TOP 1
              e.EmployeeID, @Level, e.FirstName, e.LastName
       FROM EmployeeHierarchy e
       WHERE ManagerID IS NULL

       WHILE @@ROWCOUNT > 0
       BEGIN
         SELECT @Level = @Level + 1

         INSERT #OrgChart
         SELECT e.EmployeeID, @Level, e.FirstName, e.LastName
         FROM EmployeeHierarchy e
         INNER JOIN #OrgChart oc2
           ON e.ManagerID = oc2.EmployeeId
         LEFT OUTER JOIN #OrgChart oc
           ON e.EmployeeID = oc.EmployeeId
         WHERE oc.EmployeeId IS NULL
       END

       SELECT * FROM #OrgChart ORDER BY EmpLevel

This stored procedure retrieves all the employees from our EmployeeHierarchy table and adds a Level value, which can be used by our client application to construct the employee chart. Therefore, within our client application, we can code the following to retrieve this data:

     Private Sub ShowOrgChart()
       Dim cnSQL As New SqlConnection("Server=.;Integrated Security=SSPI;" & _
                                      "Database=Northwind")
       Dim cmdSQL As New SqlCommand()
       Dim drSQL As SqlDataReader

       With cmdSQL
         .Connection = cnSQL
         .CommandText = "GetOrgChart"
         .CommandType = CommandType.StoredProcedure
       End With

       cnSQL.Open()
       drSQL = cmdSQL.ExecuteReader

       While drSQL.Read
         Console.WriteLine(Space(drSQL("EmpLevel")) & _
                           drSQL("FirstName") & _
                           drSQL("LastName"))
       End While

       drSQL.Close()
       cnSQL.Close()
     End Sub

This code executes our stored procedure once and loops through the result set. Only one request will be made to SQL Server irrespective of the number of employees in our organization.

This result could be achieved in a number of different ways; however, the example demonstrates how a simple change in the location of certain application logic can lead to a significant increase in network efficiency.

Important

Ensure that you use the SET NOCOUNT ON option within your stored procedure. This prevents SQL Server from sending status information to the client application after the execution of each statement in the stored procedure.

Easing Administration

If our application is deployed in a full-scale SQL Server production environment, then the DBA will look after the database component of the application and ensure that the performance of the database falls within acceptable benchmarks. The number, physical size, and distribution of rows in various tables change over time, affecting the performance of queries, the effectiveness of existing indexes, and the execution plans chosen by SQL Server.

If the queries are located within stored procedures, the DBA can easily examine the existing queries (using tools such as SQL Profiler to identify the problem statements) and optimize the indexing to support them, add query hints, or redesign a query, all independently of the compiled application code (the front end). However, if the SQL statements are embedded within our compiled application code, they cannot be improved unless someone accesses the application's source code, makes the changes, recompiles it, and then redeploys it to the users.

Thus, if the DBA is unable to access the code to solve the problem, then it will come back to the developer for resolution, and solving production performance issues can be complex and time consuming. So, make it easier for the DBA to inspect your SQL code.
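For example, if the DBA notices through SQL Profiler that the unpaid-invoice lookup used by CreateNewOrder is scanning the whole invoice table, they might simply add a supporting index, with no change to the client application at all. This is a sketch only; the index name and column choice are illustrative assumptions:

     -- Illustrative only: an index to support the unpaid-invoice lookup
     -- used by CreateNewOrder; no application code needs to change
     CREATE NONCLUSTERED INDEX IX_apress_Invoice_DueDate
       ON dbo.apress_Invoice (DueDate, PaymentReceivedDate)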

Note

I roughly estimate, from experience, that 85% or more of database performance problems are caused by inappropriate database design or sub-optimal SQL code, and the remaining 15% relate to configuration issues. If you restrict your DBA to addressing configuration issues, you will find that they have little ability to maintain the performance of the database environment.

Performance Tuning

Another important reason for allowing the DBA to access the SQL code is that performance tuning cannot be done effectively in the development environment. Performance tuning isn't about making individual statements execute more quickly, but about balancing resources to ensure that every critical activity meets its response time requirements.

It is easy to speed up individual SELECT statements by creating a bunch of indexes that cover everything the queries need. However, this will affect other operations on the database, namely data modification, and such trade-offs only become visible under realistic conditions. Simply put, our development environment should match the production environment in terms of size, volume, load, and hardware, each of which is essential for effective performance tuning.

Simplifying Code Deployment

As developers, we also gain some deployment benefits from this approach, especially if we use a client-server architecture. We may have deployed our application code to hundreds or thousands of users. Any change in the query or surrounding data logic will require recompilation of the application and redeployment to all user machines.

If the change in the application coincides with a structural change within the database, we will have to ensure that all users receive the update at the same time. This is necessary to prevent existing copies of the application from 'breaking' because of the change in the database, and it can be a complex exercise with a large number of users spanning multiple locations. However, we can reduce the need to redeploy the compiled application by coding the data logic into stored procedures; structural changes can then be handled by changing only the stored procedures, as all user applications share them.
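For example, suppose a column used by our order-creation logic were renamed in a later release of the database. If the logic lives in a stored procedure, only the procedure needs to change, and every deployed copy of the application keeps working. The following is a hypothetical sketch (the renamed DateRequired column is invented purely for illustration):

     -- Hypothetical sketch: the RequiredDate column has, say, been renamed to
     -- DateRequired; only the procedure changes, not the client application
     ALTER PROCEDURE CreateNewOrder
       @CustomerID    NVARCHAR(5),
       @EmployeeID    INT,
       @RequiredDate  DATETIME
     AS
       SET NOCOUNT ON
       INSERT Orders(CustomerID, EmployeeID, OrderDate, DateRequired)
       SELECT @CustomerID, @EmployeeID, GETDATE(), @RequiredDate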

Important

The deployment issues discussed here are relevant to client-server style applications. Many applications developed more recently use an n-tier architecture, locating application code on centralized servers within an organization. This also helps mitigate the deployment issue, as code has to be deployed only once, to the application servers, for all users to receive the updated version.

Improving Database Security

SQL Server allows us to apply security at various granular levels within the database. This helps to ensure that only authorized users can SELECT, INSERT, UPDATE, or DELETE the information in a table (or execute other commands).

While planning application security, many developers miss the fact that security within SQL Server is simply an authorization mechanism, and not a validation mechanism. In other words, when we apply security within SQL Server we simply grant or deny the user the ability to utilize a resource. We do not specify the valid use of a resource.

If we do not use stored procedures, and instead grant permissions to a user to modify rows within a given table, we cannot dictate how those permissions should be used by that user. For example, if we want a user to have the rights for adding, deleting, and modifying customer records, we can issue the following command:

     GRANT SELECT, INSERT, UPDATE, DELETE ON Customers TO Fred 

This gives our database user Fred the right to SELECT, INSERT, UPDATE, and DELETE rows within our Customers table. These permissions will be required when Fred works with the front-end application developed for our database, but they do not restrict how Fred can exploit them. For example, Fred can connect to the database using a query tool and delete every customer with a single statement:

     DELETE Customers 

Therefore, instead of granting permissions on the tables directly to our database users, a better approach is to build stored procedures (owned by the owner of the underlying tables) and grant our database users EXECUTE permission on those procedures. The stored procedures can then carry out the normal database application tasks in a controlled manner, ensuring that our users do not accidentally do anything that severely damages our database information.

Consider the following stored procedure:

     CREATE PROCEDURE DeleteCustomer @CustomerID INT
     AS
       SET NOCOUNT ON

       DELETE Customers
       WHERE CustomerID = @CustomerID
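Having created the procedure, we would revoke Fred's direct permissions on the table and grant him EXECUTE permission on the procedure instead, along these lines:

     -- Remove Fred's direct access to the table...
     REVOKE SELECT, INSERT, UPDATE, DELETE ON Customers FROM Fred

     -- ...and allow him to work only through the stored procedure
     GRANT EXECUTE ON DeleteCustomer TO Fred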

Fred can still accidentally delete a customer by passing a CustomerID to this procedure. However, he cannot delete all our records with one statement; to do that, he would have to pass every CustomerID individually to the stored procedure. Now, if this downside is also unacceptable to us, we can use a status flag to indicate a deleted record instead of physically deleting records from the database:

     ALTER PROCEDURE DeleteCustomer @CustomerID INT
     AS
       SET NOCOUNT ON

       UPDATE Customers
       SET Deleted = 1
       WHERE CustomerID = @CustomerID

This allows us to undo accidental deletions. Because the stored procedure is abstracted from our client application, we can make this change without affecting the application at all, as long as we ensure that the stored procedures retrieving customer information exclude the rows with Deleted = 1.
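For example, a retrieval procedure might look something like the following sketch (the procedure name is hypothetical, and we assume the Deleted flag defaults to 0 for live rows):

     -- Sketch only: retrieval procedures simply ignore logically deleted rows
     CREATE PROCEDURE GetCustomer @CustomerID INT
     AS
       SET NOCOUNT ON

       SELECT *
       FROM Customers
       WHERE CustomerID = @CustomerID
         AND Deleted = 0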

Improving Data Validation

Security and validation go hand in hand. Security is concerned with preventing users from gaining access to data or executing statements they are not authorized to use, while validation is concerned with whether the statements they do execute make sense for the business.

If we assume that our users have been granted access to INSERT rows into an Order Details table, this permission alone does not restrict ‘what’ they can insert. For example, the following INSERT statement is valid:

     INSERT [Order Details](CustomerID, ProductID, Quantity)
     VALUES(2, 102, 1000)

However, if there are only 25 widgets available, this order should not be accepted; we should be able to enter an order only if there is sufficient stock on hand.

Therefore, we can create a stored procedure to handle this instead. First, a transaction begins and the stored procedure compares the available stock with the desired quantity. An update lock hint is used to ensure that no other process changes the stock level of this product until our transaction is complete.

     CREATE PROCEDURE AddNewOrder
       @CustomerID AS INT,
       @ProductID  AS INT,
       @Quantity   AS INT
     AS
       SET NOCOUNT ON

       BEGIN TRANSACTION

       IF (SELECT StockOnHand FROM Stock WITH (UPDLOCK)
           WHERE ProductID = @ProductID) >= @Quantity
       BEGIN

If there is enough stock, then the order is created, the stock is reduced, and the transaction is committed.

         INSERT Orders(CustomerID, ProductID, Quantity)
         VALUES(@CustomerID, @ProductID, @Quantity)

         UPDATE Stock
         SET StockOnHand = StockOnHand - @Quantity
         WHERE ProductID = @ProductID

         COMMIT TRANSACTION
       END

If there is not enough stock, an error is generated and the transaction is rolled back:

       ELSE
       BEGIN
         RAISERROR('There is not enough stock to add this order',16,1)
         ROLLBACK TRANSACTION
       END

If our users add orders using this stored procedure, rather than having the ability to INSERT directly into the Orders table, then we can be sure that orders will not be created when there is insufficient stock available.

Constraints

Constraints, such as CHECK and FOREIGN KEY, protect our data and hence should be used wherever possible. However, while these provide some protection at the lowest level, there are still some benefits to placing validation logic in a stored procedure.

First, constraints are evaluated during the course of the modification transaction. A violation of a constraint causes the transaction to be aborted and rolled back. While this is not so much of a problem for short transactions, it can be a significant waste of resources for long ones. With a stored procedure, we can check that the parameters are valid before commencing any costly data modification statement.

For example, if our inventory system requires the ability to reduce the cost of our products uniformly by a fixed amount, our SQL code will look like:

     CREATE PROCEDURE UpdateAllPrices @PriceReduction money
     AS
       UPDATE Products
       SET UnitPrice = UnitPrice - @PriceReduction

Now, our Products table has the following CHECK constraint on the UnitPrice column:

     CONSTRAINT [CK_Products_UnitPrice] CHECK ([UnitPrice] >= 0) 

This check ensures that the UnitPrice is greater than or equal to zero. If we execute this procedure with a price reduction of $10 and this would result in a UnitPrice of less than zero for any product, we will receive the following error message:

     Server: Msg 547, Level 16, State 1, Line 1
     UPDATE statement conflicted with COLUMN CHECK constraint 'CK_Products_UnitPrice'.
     The conflict occurred in database 'Northwind', table 'Products', column 'UnitPrice'.
     The statement has been terminated.

Unfortunately, this error condition is detected only when the offending row is encountered, by which point hundreds or thousands of rows may already have been changed. Because all of this occurs within the context of a transaction (either an explicit one, or an implicit one defined by SQL Server for the statement, as in this example), all the rows that were correctly modified must be rolled back.
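We can see this for ourselves with a quick test; the reduction value below is deliberately exaggerated so that the constraint is guaranteed to fire, and the prices are left exactly as they were:

     -- Quick test sketch: the whole UPDATE is rolled back when the
     -- constraint is violated, so the prices are unchanged afterwards
     SELECT MIN(UnitPrice) FROM Products   -- note the value
     EXEC UpdateAllPrices 1000             -- fails with error 547
     SELECT MIN(UnitPrice) FROM Products   -- same value as before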

Therefore, while it is important to have the CHECK constraint, we can improve performance by doing some validation beforehand. After making a slight modification to our code, we have a procedure that validates the change before starting the transaction:

     ALTER PROCEDURE UpdateAllPrices @PriceReduction money
     AS
       IF (SELECT COUNT(*) FROM Products
           WHERE UnitPrice - @PriceReduction <= 0) > 0
       BEGIN
         RAISERROR('Price change will result in some free products',16,1)
       END
       ELSE
       BEGIN
         UPDATE Products
         SET UnitPrice = UnitPrice - @PriceReduction
       END

This stops the modification from starting at all when it is known that it would fail, and has the added benefit of allowing us to define friendlier error messages for the calling application to interpret:

 Server: Msg 50000, Level 16, State 1, Line 8
 Price change will result in some free products

Why not Build Validation in the Front-End Application?

Building validation logic into the front-end application is a common approach and is often used for performance benefits. Obviously, the load on the database server is reduced if we check the data for validity against rules built into the application, as the database server is touched only when the modification is known to succeed. The problem with this approach is security.

If we build the validation logic only into the front-end application, we leave the database exposed. There is nothing to stop our users from opening a connection to SQL Server with a tool other than our application, such as Query Analyzer, Microsoft Access, or something similar, and then issuing the command:

     DELETE Customers 

As all the validation is handled in the front-end application, this command will execute and every row in the Customers table will be deleted.

SQL Server has the ability to 'tie' the database to specific applications by using an application role. Application roles are assigned a secret password that the developers can compile into their executable code, and the password is passed to SQL Server when the application runs, providing the assigned permissions to the specific connection opened from that application. This prevents users from using other applications to establish a connection with those permissions. Therefore, using application roles in conjunction with front-end validation of data is an acceptable approach for single-purpose databases. Refer to SQL Server Books Online for more information on application roles.
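To give a flavour of how this works (the role name and password below are purely illustrative), the DBA creates the application role and grants it the necessary permissions, and the application then activates the role on each connection it opens:

     -- Illustrative only: create an application role and grant it permissions
     EXEC sp_addapprole 'OrderEntryApp', 'S3cretPwd'
     GRANT SELECT, INSERT, UPDATE, DELETE ON Customers TO OrderEntryApp

     -- At run time, the application activates the role on its connection
     EXEC sp_setapprole 'OrderEntryApp', 'S3cretPwd'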

The ability to integrate applications is becoming increasingly desirable within an enterprise environment. Users are no longer expected to re-key data into a number of different applications, as back-end processes do this for them. Ideally, these related applications would share the same data source so that there would be no need to distribute data between multiple systems. However, this is a difficult task, especially when we deal with applications from different vendors.

In addition, integration becomes less robust if data validation is performed in the front-end application, as the application's validation rules are unknown at the back end, which is where the integration engine plugs in.

In large enterprise applications that use multi-tier architectures, it is common to use a ‘middle ground’ approach and provide an integration point with a set of APIs that external applications can use. These APIs contain the validation logic needed for integration and are exposed publicly, separate from the application front end.

Triggers

Triggers are a special type of stored procedure that we can use to enforce validation and auditing. Note that we will discuss triggers in detail in Chapter 7.

Triggers and stored procedures have a couple of major differences that we should consider while selecting the most appropriate method:

  • User access to the base table
    Stored procedures: Not required.
    Triggers: Required.

  • Passing in parameters
    Stored procedures: Parameters can be passed.
    Triggers: Parameters cannot be passed.

  • Transactions
    Stored procedures: Multiple statements executed within a stored procedure do not participate in a common transaction unless one is explicitly defined with the BEGIN TRAN, COMMIT TRAN, and ROLLBACK TRAN statements.
    Triggers: All statements executed in a trigger occur within a common transaction. In addition, this transaction includes the modification to the base table that caused the trigger to fire.

  • Ability of a user to bypass the validation performed on a base table
    Stored procedures: Stored procedures can be bypassed if the user has permissions to access the base table.
    Triggers: Triggers cannot be bypassed by normal users; object owners can disable a trigger.

Triggers have the advantage that we can be sure their logic is not bypassed. They are great for auditing purposes, as we can be sure that every change to a given table is audited by the trigger. This is especially important if we are not using stored procedures to make changes to the table, or if we have many stored procedures that make changes to a given table. A trigger ensures that no piece of code that makes modifications is overlooked and allowed to bypass the auditing requirements.

However, if we are not allowing direct access to base tables and have a manageable number of stored procedures that make changes to a given table, we may find it beneficial to use stored procedures, rather than triggers, to validate application logic. This is because triggers are executed within the context of a user transaction. This impact may be significant for AFTER triggers (refer to Chapter 7 for details), as the logic is checked after the data modification is made; the transaction is rolled back and the changes are undone if any validation problem is found. The impact is less significant for INSTEAD OF triggers, as the changes can be checked before the modification takes place.

Important

Essentially, if you need users to be able to make changes directly to a base table (for example, when they are using a tool that doesn't support stored procedures), then you need to use triggers to provide validation over and above what you can achieve with constraints.
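To give a flavour of what such a validation trigger looks like (triggers are covered properly in Chapter 7; this is only a sketch based on the hypothetical Orders and Stock tables used earlier), the following AFTER trigger rejects any inserted order for which there is insufficient stock, even when rows are inserted directly into the table:

     -- Sketch only: an AFTER trigger that enforces the stock rule even when
     -- rows are inserted directly into the Orders table
     CREATE TRIGGER trg_Orders_CheckStock ON Orders
     AFTER INSERT
     AS
       IF EXISTS (SELECT *
                  FROM inserted i
                  INNER JOIN Stock s ON s.ProductID = i.ProductID
                  WHERE s.StockOnHand < i.Quantity)
       BEGIN
         RAISERROR('There is not enough stock to add this order',16,1)
         ROLLBACK TRANSACTION
       END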

Executing Code at SQL Server Startup

You can use the master.dbo.sp_procoption system stored procedure to specify the name of a stored procedure that must be executed as soon as SQL Server starts up. For example, suppose we run an auditing procedure, written in T-SQL, that captures information to a trace file. Using sp_procoption, we can ensure that this auditing routine is started every time SQL Server starts, so that we do not lose important auditing information just because we forgot to restart the auditing process after a system restart.

Important

A procedure must exist within the master database before it can be set to start up automatically.

For example, assume that we have a stored procedure that audits logon and logoff events to the C:\AuditTrace.trc file. If we want to be sure that we captured all logons and logoffs that occur on our server, we need to set this procedure to execute automatically with the SQL Server startup.

     CREATE PROCEDURE dbo.AuditAccess
     AS
       SET NOCOUNT ON

       DECLARE @TraceID INT,
               @maxfilesize BIGINT

       SET @maxfilesize = 5

       EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\AuditTrace.trc',
                            @maxfilesize, NULL

The details of the commands used in this procedure can be found in SQL Server Books Online. In the first section of code, we create a trace that writes to the file C:\AuditTrace.trc, in which user access to our server will be recorded.

Next, we add all the trace events that correspond to the activities that we wish to audit. Each event must be added using a separate call to the sp_trace_setevent stored procedure. See the definition of the sp_trace_setevent stored procedure in SQL Server Books Online to find the corresponding event definition for each parameter value.

       DECLARE @on BIT
       SET @on = 1

       EXEC sp_trace_setevent @TraceID, 14, 1, @on
       EXEC sp_trace_setevent @TraceID, 14, 6, @on
     ...
       EXEC sp_trace_setevent @TraceID, 15, 1, @on
       EXEC sp_trace_setevent @TraceID, 15, 6, @on
     ...

When we have set all the events that we want to audit, we set the trace status to 1, which corresponds to starting the trace:

     EXEC sp_trace_setstatus @TraceID, 1

To set this procedure to start automatically, we issue the sp_procoption stored procedure specifying our stored procedure name, the startup option and either ON or OFF depending on whether we want it to execute on startup or not. For example:

     EXEC sp_procoption 'dbo.AuditAccess','startup','ON' 

As we are using this only as an example and do not really want an SQL Trace to start every time we restart our server (especially in a test environment), we should disable this option now using the following command:

     EXEC sp_procoption 'dbo.AuditAccess','startup','OFF' 

However, if you want to have a continuous audit running on your server, you can use this as a template and expand it to meet your own requirements.
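If you ever need to check which procedures in master are currently marked to execute at startup, the OBJECTPROPERTY function reports this:

     -- List the procedures in master that are set to execute at startup
     USE master
     SELECT name
     FROM sysobjects
     WHERE type = 'P'
       AND OBJECTPROPERTY(id, 'ExecIsStartup') = 1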



