Optimizing a Database Control Strategy to Meet Business Requirements


Protecting your SQL Server is not just about guarding against individuals who seek to maliciously change or delete your data. It also includes protection from accidental or inadvertent changes that can disrupt your database server or compromise the integrity of your data.

One of the main duties of a DBA is to ensure that any changes that are made to the database server are thoroughly tested and will not damage the environment. This means that all of the following types of changes are properly tested:

  • Schema and other code changes

  • Configuration changes to SQL Server or the Windows host system

  • Patches and service packs by software vendors

To ensure that all changes will not adversely affect your database server, you need to develop and enforce change control procedures that apply to all of your servers.

A well-run server environment includes servers in at least these three environments:

  • Development   A development environment is a place in which you can apply any changes to code or configurations and determine what the effects are to your existing servers. This is where your developers build code and test new stored procedures and functions, but it can also be where new versions of a third-party application are installed or upgraded. Changes made here result in a decision of whether to propagate those changes to the quality assurance environment.

  • Quality assurance (QA)   This is the middle environment where any changes from the development environment are promoted and then tested. The purpose of the QA environment is twofold. First, the production environment is re-created in QA. This allows testing on a copy of the production environment.

    The second purpose is to apply the changes from the development environment and to ensure that all the changes that are needed are applied. Since any particular change could involve dozens or hundreds of objects (tables, views, stored procedures, functions, and more), it is important to ensure that a complete deployment takes place. This environment also separates the large number of changes that may have been made to the development environment from the small number that will be deployed to production.

  • Production   The production environment is the set of servers that your users connect to and retrieve data from. These servers store the official data of record and are the servers that you are most concerned with protecting. These are the servers that will ultimately receive the changes.

There might also be two additional environments in some large installations. These aid in ensuring that changes are thoroughly tested before being applied to production environments.

  • Preproduction   The preproduction environment is usually an exact copy of the production environment, in terms of configuration and size. This means that load testing can be carried out in addition to functional testing to ensure that the production environment will operate as planned.

  • Production support   The production support environment is usually reserved for your support people to verify that problems reported in the production environment, or data changes that must be made, can be tested. The production support environment is usually a copy of the production environment that is refreshed fairly often, usually daily.

No matter how many different systems you have in your environment, you must ensure that all changes to these systems are properly controlled, which in most cases means having documentation and a way of tracking changes.

Picking a Version Control System

Version control systems (VCSs) are most often used by developers for tracking the changes that they make to code modules. DBAs can also benefit from a VCS for tracking changes to schemas, permissions, and server settings. Some vendors also call these products source code control (SCC) systems.

The most common VCS in Microsoft environments is Microsoft’s Visual SourceSafe, but numerous other systems work with Visual Studio or the SQL Server 2005 Business Intelligence Development Studio, such as the Sourcegear Vault. A good VCS will allow you to separate the various files into projects, control access to files, and easily check code out and in with the development tools you are using.

Tip 

Be sure that you do not allow DBAs or developers to make changes and save them interactively using Management Studio. They can make the changes, but a script should be generated and stored in the VCS.

It does not matter which VCS you choose, but you need to ensure that everyone working with SQL Server uses the same system and all development changes are made to files controlled by this system. It is important to ensure that the code is organized so that developers and DBAs working with code can easily find files and will not misfile their changes. You also need to be sure the VCS is backed up just like any other critical set of files.

One of the main advantages of using a VCS over just having a backup copy of your code is that you can roll back to or find any version of the code. This allows you to easily see the changes made to the object over time, which in turn allows you to quickly deploy an old version of code if it is discovered that the changes in the current version do not work as expected.

Tip 

With the addition of the CLR environment to SQL Server 2005, you can write objects using any .NET-compliant language, which means you will have more complex code, necessitating the use of a VCS.

Identifying All Database Objects Related to a Particular Deployment

Even if all of your code is contained in a VCS, how can you determine which code changes need to be migrated for a particular deployment? Most database environments contain hundreds or thousands of objects, including tables, views, stored procedures, assemblies, indexes, permission scripts, and more.

Often you will have a particular set of enhancements and features that you have been asked to migrate to your production environment. The complete list of objects needed to implement these features must be bundled together and migrated through your test environments to production as one unit.

Human memory is highly fallible; you cannot depend on your DBAs and developers to remember every change they have made for a particular deployment. This means you must develop a method of tracking the changes made during your development cycle.

These changes need to be tracked throughout the development cycle, and you can do this in a few ways:

  • Document changes at meetings.   Having a regular development meeting with your development team helps ensure that everyone is aware of the changes being made by others. By documenting the changes that each member is working on, you will have a good idea of which objects need to be included in your deployment.

  • Use copies in your VCS.   Most VCS systems allow you to make a copy of a file that is linked to the original file. By keeping copies of all changes for a deployment in a separate folder of your VCS, you can easily see which objects need to be included in the deployment.

  • Use an automated tool.   A number of tools can detect the differences between two databases and generate a script to bring the databases into sync. Red Gate Software’s SQL Compare and Embarcadero’s Change Manager are two such tools.

However you choose to track changes, as long as you have some system, you will be able to gather together all the code needed for a deployment and use it to move the changes through your test environment to your production environment.

Warning 

Be sure that only the changes you have made to your test environment are applied to the production environment. If you discover problems in your test environment, rebuild it from the production environment, and then reexecute your deployment on the test environment to be sure you have included all changes.

Verifying Change Control Procedures Are Being Followed

Having a set of environments for controlling change does not protect your production environment unless you can ensure that proper procedures are followed when promoting changes between environments. This means you have a systematic method of moving changes between environments. The following sections will examine a few of the more common methods of ensuring that your processes are being followed.

Using Access Control

One of the main ways you can ensure that your change control procedures are being followed is by using access control to prevent unauthorized users from making changes. If you are the only person who can make changes to the production servers, then you can be sure that you are following the procedures.

SQL Server includes a number of ways that you can control access to your servers. Predefined roles inside SQL Server allow you to give limited access to individuals and allow them to perform only certain changes.

As an example, you can add a junior DBA to the db_ddladmin role in a development database and allow the junior DBA to make schema changes as needed. You can limit your developers to SELECT, INSERT, UPDATE, DELETE, and EXECUTE rights on objects, thereby controlling the changes made to the database. Exercise 9.2 walks you through adding a user to this role.
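The same role assignment can also be scripted rather than performed through the GUI. The following is a minimal sketch that assumes a database user named Dan (the name is illustrative) already exists in the AdventureWorks database:

```sql
USE AdventureWorks
GO

-- Add the (hypothetical) user Dan to the db_ddladmin fixed database role
EXEC sp_addrolemember @rolename = 'db_ddladmin', @membername = 'Dan'
GO
```

Scripting the change has the added benefit that the script itself can be checked into your VCS, in line with the change control procedures discussed earlier.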

Exercise 9.2: Adding a User to a Database Role


In this exercise, we will be adding Dan, a senior developer, to a predefined database role so that he can make schema changes as required by other developers.

  1. Start SQL Server Management Studio by clicking Start ➤ All Programs ➤ Microsoft SQL Server 2005 ➤ SQL Server Management Studio. Enter your credentials to connect to your server.

  2. Expand the server in the Object Explorer, then expand the Databases folder, and then select the AdventureWorks database. Expand the Security folder and the Roles folder, and right-click the db_ddladmin role.

  3. Click the Properties menu item to open the role's properties dialog box, and then click the Add button.

  4. In the dialog box that opens, type Dan (or the name of a user on your server), and click OK. The name should appear in the db_ddladmin properties. This user can now make changes to your schema.

  5. To test this, click OK, and then click the New Query button. In the query window, right-click and select Connection ➤ Change Connection, enter the credentials for the user you added to the role earlier, and click Connect.

  6. Type the following code into the query window:

     CREATE TABLE MyTestTable (TestID INT)

  7. Click the Execute button, and you should receive a success message, indicating that this user now has the permissions to create objects.


Using DDL Triggers

A new feature in SQL Server 2005 is Data Definition Language (DDL) triggers, which respond to various events and commands that change the functionality and structure of the server.

In prior SQL Server versions, Data Manipulation Language (DML) triggers responded to INSERT, UPDATE, and DELETE statements, allowing you to execute code when one of those events took place on a server. This enabled custom responses to data changes that could range from business rule implementation to auditing logs.

DDL triggers work in a similar fashion, but they respond to CREATE, ALTER, and DROP statements used to change your schema, add logins, and more. Instead of showing the data values before and after the statement execution, the DDL trigger raises an event that allows you to execute code in response to that event.

Warning 

DDL triggers in SQL Server 2005 can return a result set, but this functionality will be removed in future versions. Therefore, you should not build code that expects a result set from a trigger.

You can use these DDL triggers to enforce your change control procedures. One of the most common implementations of a DDL trigger is to roll back any changes of a particular type.

There are two scopes of DDL triggers: database level and server level. Each of these operates on events taking place at a different scope of command, as the names imply. You can use both of these types of triggers to control the changes that take place by auditing or preventing unauthorized changes.

Server-level DDL triggers are created and stored in the master database, which means that if the master database is restored, these triggers will still exist on that server. These triggers work on server-level structures such as logins, endpoints, databases, and server authorizations.
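As an illustration, a server-scoped trigger could block the creation of new logins. This is a sketch only; the trigger name and message are illustrative:

```sql
-- Prevent new logins from being created on this server
CREATE TRIGGER ddl_preventlogins
ON ALL SERVER
FOR CREATE_LOGIN
AS
    PRINT 'Logins may not be created on this server without approval'
    ROLLBACK
GO
```

When the trigger is no longer needed, it can be removed with DROP TRIGGER ddl_preventlogins ON ALL SERVER.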

Database-level DDL triggers are stored in each individual database and apply only to the structures inside the database. This also means that if you move a database to a new server, any DDL triggers contained inside that database will be moved to the new server. Database-scoped DDL triggers operate on the following structures:

  • Tables

  • Views

  • Functions

  • Stored procedures

  • Synonyms

  • DML triggers

  • Assemblies

  • Users

  • Certificates

  • Schemas

  • Service Broker queue events

As mentioned previously, you can use DDL triggers either to prevent unauthorized changes or to audit any changes that are made to server structures. You will examine how to accomplish both of these tasks in the next section. Although we focus on database-level objects in that section, the procedure and idea is the same for server-level objects.

One of the common uses for DDL triggers on a production server is to prevent unauthorized changes to your server. To do this, you create a DDL trigger that is specific to the commands you are trying to prevent and perform a rollback of those events. For example, if you wanted to prevent anyone from altering a table, you would create a DDL trigger for the ALTER TABLE event in which you issued a ROLLBACK statement. (We discuss this further in the following exercise.) You could also log the event for tracking purposes and display a message for users so they understand why the command is failing. This prevents them from issuing the same command over and over.

If you needed to make an authorized change to one of the tables in your database, you would need to disable the DDL trigger to allow the statement to succeed. In this way, you would be consciously choosing to make a change, such as during a deployment.
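Disabling and re-enabling a database-scoped DDL trigger can itself be scripted as part of your deployment checklist. A sketch, using the trigger name from Exercise 9.3 later in this section:

```sql
-- Disable the trigger for the duration of the deployment
DISABLE TRIGGER ddl_preventtables ON DATABASE

-- ... run the deployment scripts here ...

-- Re-enable the trigger once the deployment is complete
ENABLE TRIGGER ddl_preventtables ON DATABASE
```

Putting both statements in the deployment script reduces the chance that the trigger is left disabled by mistake.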

Tip 

Use a checklist whenever you deploy code to a server to be sure you don’t forget to undo changes, such as disabling a DDL trigger, when you are finished with your deployment.

In Exercise 9.3, you will create a DDL trigger designed to prevent any new tables from being created.

Exercise 9.3: Creating a DDL Trigger

image from book

In this exercise, you will create a DDL trigger on the CREATE TABLE statement.

  1. Type the following code into a query window in the database where you want to create the trigger. We will use the AdventureWorks database for this exercise.

     CREATE TRIGGER ddl_preventtables
     ON DATABASE
     FOR CREATE_TABLE
     AS
       ROLLBACK
       PRINT 'You cannot add tables to this database'
       RETURN

  2. Click the Execute button. The trigger should compile and give you a success message.

  3. Now you will try to create a new table. Type the following code in the query window, and highlight it:

     CREATE TABLE MyNewTable (MyID INT)

  4. Click the Execute button. The results show the output of the PRINT command in the trigger as well as the error raised by the rollback, which ended the transaction and prevented the creation of the table.

  5. Issue the following command to remove this trigger:

     DROP TRIGGER ddl_preventtables ON DATABASE


The other purpose for a DDL trigger is in auditing changes, something you might choose to do on a test or development environment. By recording data from the event every time a particular command is executed, you can get a record of the changes that have occurred on that system.

Auditing changes requires a place to store the audit information, since you never know who will make a change or when. If you have a table called AdminAuditLog that is specifically designed to store this information, you can create a trigger like the following:

 CREATE TRIGGER ddl_auditchanges
 ON DATABASE
 FOR CREATE_TABLE, ALTER_TABLE
 AS
   INSERT AdminAuditLog
   SELECT EVENTDATA().value(
     '(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
     'nvarchar(max)'),
     GETDATE()
   RETURN

This trigger will insert the text of the command executed along with the current time into your audit table. Now if you try to create the table from Exercise 9.3, it will be created. If you select all the columns from your audit table, you will receive the results shown in Figure 9.1.

Figure 9.1: Results of auditing with a DDL trigger
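The AdminAuditLog table itself is not defined in the text. A minimal sketch of what it might look like, with assumed column names matching the two values the trigger inserts:

```sql
-- Hypothetical audit table: one row per captured DDL statement
CREATE TABLE AdminAuditLog
 (CommandText NVARCHAR(MAX),   -- the text of the DDL statement executed
  EventTime   DATETIME)        -- when the statement was executed
```

Any table with compatible columns in the same order would work with the trigger shown above.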

You can create similar triggers that respond to the CREATE, ALTER, and DROP statements for other objects, such as views, stored procedures, functions, schemas, assemblies, and more. If you created a comprehensive auditing solution for your development environments, you could use this as a list of objects, and even gather the code, for a deployment.
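Rather than listing every event individually, SQL Server 2005 also allows a trigger to respond to an event group. A sketch of a comprehensive audit trigger, reusing the hypothetical AdminAuditLog table from earlier:

```sql
-- Capture every database-scoped DDL event with a single trigger
CREATE TRIGGER ddl_auditall
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
  INSERT AdminAuditLog
  SELECT EVENTDATA().value(
    '(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
    'nvarchar(max)'),
    GETDATE()
  RETURN
```

Using the DDL_DATABASE_LEVEL_EVENTS group keeps the trigger from silently missing object types you forgot to list.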

The downside of auditing is that every object change is captured. A developer who creates 50 versions of a stored procedure while debugging it will log 50 rows for that one object, and someone would need to go through the audit log to determine the actual list of objects, discarding duplicates.

Tip 

Be wary of using the last version of code in your audit log in deployments. It is possible that the code in the VCS is different from that in your log. Always use your VCS as the authoritative source for code.

Using Migration Scripts

One of the best ways you can ensure a smooth deployment is to use migration scripts to migrate the code. Since you will need to perform the deployment at least twice, once to QA and once to production, we recommend you have a repeatable process. This means you use the same method of deploying code to QA to deploy to production. Using scripts instead of notes and steps for a DBA to follow reduces the chance of human error.

Warning 

Be sure you are aware of dependencies between objects, such as creating an assembly before creating the stored procedure that uses the assembly. Your scripts should create objects in the proper order. Most automated tools will generate scripts that ensure dependencies are accounted for.
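As a simple illustration of ordering, a migration script that deploys a CLR assembly and a stored procedure that depends on it must create the assembly first. The names and file path here are illustrative assumptions:

```sql
-- 1. Create the assembly before anything that references it
CREATE ASSEMBLY MyUtilities
FROM 'C:\Deploy\MyUtilities.dll'   -- hypothetical path to the compiled DLL
WITH PERMISSION_SET = SAFE
GO

-- 2. Only then create the stored procedure that uses the assembly
CREATE PROCEDURE dbo.usp_DoWork
AS EXTERNAL NAME MyUtilities.[MyUtilities.Worker].DoWork
GO
```

Running the statements in the reverse order would fail, which is exactly the class of error a generated, dependency-ordered script prevents.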

All the scripts you use should come from your version control system to ensure that you have the correct version of code for each object. Even if you choose to use an automated tool to build your script, be sure that the script generated is stored in the version control system if it is to be the final version of code used.

Warning 

Be careful that you do not move objects unrelated to the deployment. It is likely that your development system contains objects for future deployments or even objects that are unused and need to be discarded. Remove these from any scripts for this deployment.

No matter which method you use to identify the items needed for a deployment, testing ultimately determines whether you have succeeded, and testing may even be what uncovers the remaining files needed for a deployment. You can audit all changes, tightly control who has access to make changes, hold all the meetings you want, and document everything, but at some point an object necessary for some feature you are deploying will be missing. Only a thorough testing plan will uncover these missing items.



MCITP Administrator: Microsoft SQL Server 2005 Optimization and Maintenance (Exam 70-444) Study Guide
ISBN: 0470127457