Design Patterns


As I said at the top of the chapter, recognizing and using design patterns is just as beneficial in Transact-SQL as it is in other languages. Although most published design patterns are geared toward OOP languages, many are equally useful in Transact-SQL. Recognizing OOP design patterns in a non-OOP language such as Transact-SQL requires a keen eye and the ability to think abstractly.

There are a number of parallels between OOP design patterns and the patterns one may find in Transact-SQL. There are also numerous patterns that only apply to set-oriented languages such as Transact-SQL. I'll discuss a few of each in this section, but the list is by no means complete. When you review someone else's Transact-SQL, try to identify the design patterns at work in it. Recognizing design pattern applications helps make code more readable and more modular.

Iterator

It's common to need to perform a complex operation (one not easily accomplished with a single T-SQL statement) on every member in a series of like objects. We may want to call a stored procedure, perform a calculation, or execute some dynamic T-SQL on each element in a collection of similar elements. The Iterator pattern provides a template for addressing this situation.

Design Patterns defines an Iterator pattern as one whose stated intent is: "Provides a way to access the elements of an aggregate object sequentially without exposing its underlying representation." [4] Not coincidentally, a synonym for Iterator is Cursor. Providing a sequential access path to the rows in a table without exposing the details of how it retrieves those rows is exactly what a cursor does. It allows the T-SQL developer to focus on the rows she seeks, not on how to find them. Listing 3-17 presents an example:

[4] Gamma, Erich, et al. Design Patterns. Reading, MA: Addison-Wesley, 1995. Page 257.

Listing 3-17 An example of the Iterator pattern.
 DECLARE customerlist CURSOR FOR
 SELECT CompanyName FROM Customers
 FOR READ ONLY

 DECLARE @CompanyName varchar(40)

 OPEN customerlist
 FETCH customerlist INTO @CompanyName
 WHILE (@@FETCH_STATUS=0) BEGIN
   EXEC CalcCompanyTaxes @CompanyName
   FETCH customerlist INTO @CompanyName
 END
 CLOSE customerlist
 DEALLOCATE customerlist

By definition, a design pattern is also an idiomatic form. The Iterator pattern is no exception: The previous example code is the idiomatic form of cursor use in Transact-SQL. Although there are other ways to structure the loop or to dispose of the cursor, this is the most natural and straightforward form.

Beyond being an idiom, though, Iterator is a design pattern that experienced Transact-SQL developers recognize immediately, and one they use often. It provides a standardized method of iterating through the objects in a list of similar objects, regardless of what they are.

Once a cursor is defined, the Transact-SQL code and the database operations necessary to furnish it with data are unimportant to the code that uses it. The FETCH command doesn't care about the details behind the cursor; its only concern is the cursor itself. It requests data and the cursor delivers it.

To see how generally applicable this pattern is, consider the example in Listing 3-18:

Listing 3-18 Executing a command against every object in a database.
 DECLARE tables CURSOR FOR
 SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
 FOR READ ONLY

 DECLARE @table sysname

 OPEN tables
 FETCH tables INTO @table
 WHILE (@@FETCH_STATUS=0) BEGIN
   EXEC sp_help @table
   FETCH tables INTO @table
 END
 CLOSE tables
 DEALLOCATE tables

This code lists detailed information for the tables in a database. Note how similar in form it is to the earlier example. This form is the design pattern. Even though the specifics change from one application of the pattern to the next, the form is the same. Experienced T-SQL developers can take one look at this code and know exactly what it does. They immediately begin looking for the variable parts of the pattern (the sections that change from application to application) to see what the pattern is being applied to do.

NOTE

Because a cursor can be returned as an output parameter from a stored procedure, it's possible to completely insulate the code that uses it from the code that defines it. This further abstracts the notion of an Iterator facility in Transact-SQL and resembles the GoF Iterator pattern even more closely.
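
As a sketch of this variation (the procedure and variable names are illustrative, and the query assumes Northwind's Customers table), a procedure can define and open the cursor, while the caller fetches from it without ever seeing the underlying query:

 -- The procedure owns the cursor definition
 CREATE PROC ListCustomers @cur CURSOR VARYING OUT AS
 SET @cur=CURSOR FOR SELECT CompanyName FROM Customers FOR READ ONLY
 OPEN @cur
 GO
 -- The caller iterates without knowing the cursor's SELECT
 DECLARE @c CURSOR, @CompanyName varchar(40)
 EXEC ListCustomers @c OUT
 FETCH @c INTO @CompanyName
 WHILE (@@FETCH_STATUS=0) BEGIN
   PRINT @CompanyName
   FETCH @c INTO @CompanyName
 END
 CLOSE @c
 DEALLOCATE @c

Here the caller's loop is pure Iterator: it knows only how to advance the cursor, not how the rows are produced.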


In addition to being an idiomatic form, the Transact-SQL Iterator pattern also contains some idiomatic forms of its own: three, to be exact. First, notice the declaration of the cursor. There are numerous ways to declare a cursor. For example, you could declare a local cursor variable, then assign a cursor definition to it. This would work, but it would be a needless deviation. Remember: When we deviate from established conventions, we want to do so for good reason, so that developers reading our code in the future will know what to take note of and what not to, and so that they won't be needlessly distracted by inconsequential differences.

Second, notice the loop. We discussed the idiomatic form of a T-SQL WHILE loop earlier. Certainly, the looping behavior could be accomplished through other means (through the use of a GOTO, for example), but again this would be a deviation from what's most natural for no apparent reason. So the loop itself is an idiomatic form. Lastly, notice the cleanup code for the cursor. The call to CLOSE is immediately followed by a call to DEALLOCATE. Why? Couldn't you just deallocate the cursor? Wouldn't it be automatically closed? Well, yes, in fact, it would. However, this is neither the most natural approach nor the most common among experienced developers. Experienced developers clean up after themselves, and they close things they open. CLOSE counteracts OPEN and DEALLOCATE reverses DECLARE, so including both keeps the code symmetrical as well as logical.

Intersector

Because Transact-SQL is a set-oriented language, set operations are something it is particularly good at. The most common of these is set intersection. In SQL, set intersection happens through joins. The members in one set (a table or view) are compared with those in another, and the elements that exist in both are returned as the intersection.

The Intersector pattern represents a template for Transact-SQL set intersection. It features an ANSI-style inner join to determine the values two tables have in common. Listing 3-19 presents an example:

Listing 3-19 The Intersector pattern at work.
 SELECT c.CompanyName, o.OrderID
 FROM Customers c INNER JOIN Orders o ON (c.CustomerID=o.CustomerID)

(Results abridged)

 CompanyName                              OrderID
 ---------------------------------------- -----------
 Alfreds Futterkiste                      10643
 Alfreds Futterkiste                      10692
 Alfreds Futterkiste                      10702
 Alfreds Futterkiste                      10835
 Alfreds Futterkiste                      10952
 Alfreds Futterkiste                      11011
 Ana Trujillo Emparedados y helados       10308
 Ana Trujillo Emparedados y helados       10625
 Ana Trujillo Emparedados y helados       10759
 Ana Trujillo Emparedados y helados       10926
 Antonio Moreno Taquería                  10365
 Antonio Moreno Taquería                  10507
 Antonio Moreno Taquería                  10535
 Antonio Moreno Taquería                  10573

Pretty straightforward, isn't it? Although this code is definitely of the elementary variety, it's important to be able to recognize the pattern on sight. When you see it in code, you should immediately think: Okay, we're looking for the rows in the first table with matching values in the second one. Pattern recognition: that's the secret to interpreting complex code.

Of course, numerous variations of the Intersector pattern exist. Using outer joins and theta joins, we can get well beyond simple set intersections. That said, they all follow the same pattern. The join condition may be different, but the question we're answering with each type of join is the same: Which rows in one table relate (or don't relate) to those in the other?
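One common variation uses an outer join to find the rows in one table with no match in the other. As a sketch against Northwind's tables, this lists the customers who have placed no orders:

 SELECT c.CompanyName
 FROM Customers c LEFT OUTER JOIN Orders o ON (c.CustomerID=o.CustomerID)
 WHERE o.OrderID IS NULL

The join condition is identical to the one in Listing 3-19; only the join type and the IS NULL test change, yet the question has the same shape: which rows in one table relate (or don't relate) to those in the other?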

Note that there are other ways to implement set intersection and its variants. However, there's just one that's idiomatic: the one demonstrated in Listing 3-19. Although you can join tables using WHERE clause conditions, it's inadvisable because certain types of joins (outer joins) can actually return incorrect results when expressed in the WHERE clause (this has to do with join associativity and join order; see The Guru's Guide to Transact-SQL for more info). The most natural approach to constructing a join is the one taken in Listing 3-19.

Qualifier

Beyond returning data, qualifying (or filtering) data is probably the most common thing Transact-SQL coders do. Qualifying data amounts to filtering the rows returned by a query based on the values in a column or columns. In SQL, data is usually qualified using the WHERE clause of the SELECT statement (Listing 3-20):

Listing 3-20 The Qualifier pattern.
 SELECT * FROM Customers WHERE Country='Mexico' 

This is a very basic example, but the pattern is still in evidence. Whether the WHERE clause is a simple one like the one in Listing 3-20 or a more complex expression featuring compound clauses and subqueries, the pattern is the same: This is the way you qualify the rows in a result set using Transact-SQL.
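A more involved qualifier still follows the same form. As a sketch against Northwind, here's a compound WHERE clause that includes a subquery:

 SELECT CompanyName
 FROM Customers
 WHERE Country='Mexico'
   AND CustomerID IN (SELECT CustomerID FROM Orders)

The filter is busier, but the pattern is unchanged: the WHERE clause qualifies the rows before anything else happens to them.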

Note that there are other ways of filtering a result set. For example, you could put your filter criteria in the HAVING clause instead of the WHERE clause, like this (Listing 3-21):

Listing 3-21 An unnatural filter.
 SELECT City, COUNT(*) AS NumberInCity
 FROM Customers
 GROUP BY City
 HAVING City LIKE 'A%'

The problem with the code in Listing 3-21 is that it needlessly makes use of a HAVING clause when a WHERE clause would do. The WHERE approach is the more idiomatic, or natural, of the two. Other developers looking at the code may attach some significance to the fact that HAVING is used here instead of WHERE, when in fact there's no good reason for it. Here's the query restated to use WHERE (Listing 3-22):

Listing 3-22 The idiomatic form of the Qualifier pattern.
 SELECT City, COUNT(*) AS NumberInCity
 FROM Customers
 WHERE City LIKE 'A%'
 GROUP BY City

The purpose of a HAVING clause is to filter the query after the result set has been gathered (for example, based on the value of an aggregate function). It's unnecessary in Listing 3-21, and, in fact, SQL Server translates the HAVING clause to a WHERE clause internally. If you compare the execution plan for the query in Listing 3-21 with the one in Listing 3-22, you'll see that they're the same. If SQL Server didn't perform this optimization and the table contained a large number of rows, performance would likely suffer markedly because all rows would be gathered from the table before the filter was applied.

Executor

For all the power of Transact-SQL, it's fairly common to need to construct dynamic T-SQL and execute it from a stored procedure or batch. Particularly when you need to parameterize the name of an object or a column (something Transact-SQL itself doesn't normally allow), sometimes you simply have no alternative. The Executor pattern provides a template for creating and executing dynamic T-SQL strings (Listing 3-23):

Listing 3-23 The Executor pattern.
 DECLARE @s int, @sql nvarchar(128)

 DECLARE spids CURSOR FOR
 SELECT spid FROM master..sysprocesses
 WHERE spid<>@@SPID AND net_address<>''
 FOR READ ONLY

 OPEN spids
 FETCH spids INTO @s
 WHILE (@@FETCH_STATUS=0) BEGIN
   SET @sql='KILL '+CAST(@s AS varchar)
   EXEC sp_executesql @sql
   FETCH spids INTO @s
 END
 CLOSE spids
 DEALLOCATE spids

This example opens a cursor against the sysprocesses pseudo-table and creates dynamic T-SQL statements that terminate every user connection except the current one. Notice that I say user connection. We distinguish between user connections and system connections by examining the net_address column in sysprocesses. A system connection has an empty net_address column. We keep from attempting to terminate the current connection (this isn't allowed, anyway) by checking the automatic variable @@SPID.

You could put any valid T-SQL code you wanted in the @sql variable. The sp_executesql procedure would attempt to execute it regardless. You could also loop on other conditions and across other data sources. For example, you could loop through the objects in the current database and construct a dynamic T-SQL command to do something with them.

Note the use of sp_executesql. You could use EXEC() here instead. I encourage the use of sp_executesql because it's more flexible and will perform well in a larger number of scenarios than EXEC(). For one thing, sp_executesql supports parameterized queries. This encourages plan reuse and will lead to generally better performance. The example in Listing 3-23 doesn't happen to have any parameters (only valid search parameters are allowed; you can't substitute object names or connection IDs, for example), but if it did, sp_executesql would be a better choice than EXEC(). sp_executesql can also return a result code from the dynamic T-SQL call. If the dynamic code raises an error with a severity of 11 or more, sp_executesql will return the error number in its result code.
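To illustrate the parameterization that sp_executesql offers (the query and parameter names here are illustrative, against Northwind), a search argument can be passed separately from the query string, letting the server reuse a single cached plan across different values:

 DECLARE @sql nvarchar(128), @country nvarchar(30)
 -- The query text stays constant; only the parameter value varies
 SET @sql=N'SELECT CompanyName FROM Customers WHERE Country=@co'
 SET @country=N'Mexico'
 EXEC sp_executesql @sql, N'@co nvarchar(30)', @co=@country

Because the query text is identical from call to call, subsequent executions with different @co values can reuse the plan compiled for the first one.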

So, even though EXEC() would work equally well in this particular instance, sp_executesql is the approach most often taken by experienced T-SQL developers for the reasons I've listed. This makes it more idiomatic than the EXEC() approach and is the reason I've used it to build the pattern example.

Conveyor

The Conveyor pattern provides a mechanism for returning a code or result up through a call stack. Say, for example, that you have three procedures: ProcA, ProcB, and ProcC. ProcA calls ProcB, and ProcB calls ProcC. When ProcC runs, an unexpected problem occurs, and you want to convey that info back up the chain to ProcA. How do you do this? The Conveyor pattern shows how (Listing 3-24):

Listing 3-24 The Conveyor pattern.
 CREATE PROC ProcC AS
 IF OBJECT_ID('no_exist') IS NOT NULL
   SELECT * FROM no_exist
 ELSE
   RETURN(-1)
 GO
 CREATE PROC ProcB AS
 DECLARE @res int
 EXEC @res=ProcC
 RETURN(@res)
 GO
 CREATE PROCEDURE ProcA AS
 DECLARE @res int
 EXEC @res=ProcB
 SELECT @res
 GO
 EXEC ProcA

(Results)

 -----------
 -1

Note the way that we use stored procedure result codes to pass the original return code from procedure to procedure. This works great for integers, but what if you want to return an error message instead of a code? The pattern still holds (Listing 3-25):

Listing 3-25 The Conveyor pattern can convey data of any type.
 USE tempdb
 GO
 DROP PROC ProcA, ProcB, ProcC
 GO
 CREATE PROC ProcC @Msg varchar(128) OUT AS
 IF OBJECT_ID('no_exist') IS NOT NULL
   SELECT * FROM no_exist
 ELSE
   SET @Msg='Table doesn''t exist'
 GO
 CREATE PROC ProcB @Msg varchar(128) OUT AS
 EXEC ProcC @Msg OUT
 GO
 CREATE PROCEDURE ProcA AS
 DECLARE @Msg varchar(128)
 EXEC ProcB @Msg OUT
 SELECT @Msg
 GO
 EXEC ProcA

(Results)

 ---------------------------------------------------------------------------
 Table doesn't exist

In this application of the pattern, we simply use output parameters to convey the message up the call stack to the topmost procedure. Because we can use virtually any data type here (including a cursor), we have the flexibility to return any information we want.

A final application of the pattern would be to transmit a real error code up through the chain to the original calling routine. Listing 3-26 presents an example:

Listing 3-26 The Conveyor pattern can convey errors as well as messages.
 CREATE PROC ProcC AS
 DECLARE @err int
 IF @@TRANCOUNT=0
   ROLLBACK TRAN -- Error, we're not in a tran
 SET @err=@@ERROR
 RETURN(@err)
 GO
 CREATE PROC ProcB AS
 DECLARE @res int
 EXEC @res=ProcC
 RETURN(@res)
 GO
 CREATE PROCEDURE ProcA AS
 DECLARE @res int
 EXEC @res=ProcB
 SELECT @res
 GO
 EXEC ProcA

(Results)

 Server: Msg 3903, Level 16, State 1, Procedure ProcC, Line 4
 The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
 -----------
 3903

Regardless of the specific application, the Conveyor pattern provides a mechanism for conveying information along a chain of stored procedures. In this sense, Conveyor is similar to GoF's Chain of Responsibility pattern.

Restorer

The Restorer pattern provides a mechanism for cleaning up resource utilization when an error condition occurs. Restoring the operating environment is particularly important in the middle of a transaction. In order to avoid orphaning a transaction, it's crucial that you properly handle error conditions when a transaction is active. Orphaned transactions can hold locks and block other connections from working. Here's an implementation of the Restorer pattern (Listing 3-27):

Listing 3-27 The Restorer pattern cleans up the environment when trouble strikes.
 CREATE PROC ProcR AS
 DECLARE @err int
 BEGIN TRAN
 UPDATE Customers SET City='Dallas'
 SELECT 1/0 -- Force an error
 SET @err=@@ERROR
 IF @err<>0 BEGIN
   ROLLBACK TRAN
   RETURN(@err)
 END
 COMMIT TRAN
 GO
 DECLARE @res int
 EXEC @res=ProcR
 SELECT @res

(Results)

 Server: Msg 8134, Level 16, State 1, Procedure ProcR, Line 8
 Divide by zero error encountered.
 -----------
 8134

The key pieces of this pattern are the caching of the error code, and the IF block that reacts to nonzero errors. We cache @@ERROR because it's reset by the next successful statement execution. Once we've cached it, we check the saved value and roll back the active transaction if an error has occurred. We could just as easily have cleaned up other types of resources (Listing 3-28):

Listing 3-28 The Restorer pattern cleans up the environment when trouble strikes.
 CREATE PROC ProcR AS
 DECLARE @err int
 CREATE TABLE ##myglobal (c1 int)
 INSERT ##myglobal DEFAULT VALUES
 SELECT 1/0 -- Force an error
 SET @err=@@ERROR
 IF @err<>0 BEGIN
   DROP TABLE ##myglobal
   RETURN(@err)
 END
 DROP TABLE ##myglobal
 GO
 DECLARE @res int
 EXEC @res=ProcR
 SELECT @res

(Results)

 Server: Msg 8134, Level 16, State 1, Procedure ProcR, Line 9
 Divide by zero error encountered.
 -----------
 8134

Here we drop a global temporary table when an error occurs. There are a number of different types of cleanup we could perform here, the most important of which is transactional cleanup. Apply the Restorer pattern to avoid orphaned transactions and unnecessary blocking.

A variation of Restorer applies the pattern proactively to clean up problems that may have been inherited by a block of code, but were not caused by it. An example is presented in Listing 3-29:

Listing 3-29 You can apply the Restorer pattern proactively.
 CREATE PROC ProcR AS
 IF @@TRANCOUNT<>0  -- Rollback old transactions before starting another
   ROLLBACK TRAN
 DECLARE @err int
 BEGIN TRAN
 UPDATE Customers SET City='Dallas'
 SELECT 1/0 -- Force an error
 SET @err=@@ERROR
 IF @err<>0 BEGIN
   ROLLBACK TRAN
   RETURN(@err)
 END
 COMMIT TRAN
 GO
 DECLARE @res int
 EXEC @res=ProcR
 SELECT @res

(Results)

 Server: Msg 8134, Level 16, State 1, Procedure ProcR, Line 12
 Divide by zero error encountered.
 -----------
 8134

Notice the first ROLLBACK in the procedure. This ROLLBACK occurs when the procedure detects an active transaction (@@TRANCOUNT<>0) when it first starts. Because this is considered an error, the procedure rolls back the open transaction (a single call to ROLLBACK rolls back all active transactions for a connection, regardless of nesting) before starting another. In this sense, it's implementing the Restorer pattern proactively, to clean up after others who may have left the environment in an indeterminate state. Coding this sort of logic into your applications is especially important when connection pooling is being used with SQL Server (very common with Web servers). Because one virtual connection can leave an open transaction that can affect later users of the same physical connection, it's important that your code knows how to protect itself from rogue transactions and other undesirable remnants.

TIP

Transact-SQL's error-handling constructs are far from airtight. They don't always work the way you might expect them to or the way that they should work. For example, there are plenty of errors that are severe enough to terminate the current command batch. When they occur, they prevent any error-handling code that might follow them (even in a stored procedure) from being reached. So, even if your code checks @@ERROR and calls ROLLBACK when a problem occurs, there are errors that will prevent it from ever being executed. This is probably the most common cause of orphaned transactions and is the reason the code you write should check for an orphaned transaction before beginning another.


Prototype

According to Design Patterns, the intent of the Prototype pattern is to "specify the kinds of objects to create using a prototypical instance, and create new objects by copying the prototype." [5] In other words, you start with a template or prototypical instance of an object, and use an implementation of the Prototype pattern to replicate it in cookie-cutter fashion.

[5] Ibid. Page 117.

The most obvious Transact-SQL implementation of the Prototype pattern is the SELECT INTO construct. Because it places the result set of a SELECT statement into a table of its own, SELECT INTO can be used to easily replicate the contents of a table or view, as shown in Listing 3-30:

Listing 3-30 A Transact-SQL implementation of the Prototype pattern.
 SELECT * INTO NewCustomers FROM Customers 

Because SELECT INTO has all the power of a plain SELECT statement, you can change the prototype in transit by specifying a column list, WHERE clause criteria, or even a GROUP BY or HAVING clause. This gives the Transact-SQL implementation of the pattern much more flexibility than most OOP implementations and leverages the power of SQL as a set-oriented language. For example, you can supply a false WHERE clause to create an empty copy of the table (Listing 3-31):

Listing 3-31 An implementation of Prototype that creates an empty copy.
 SELECT * INTO NewCustomers FROM Customers WHERE 0=1 

Here we clone the table's structure, but omit its data. Using a false WHERE clause with SELECT INTO is an easy way to replicate the structure of a table without incurring the expense or log activity of copying its data. Another variation of the pattern allows you to specify new data when cloning the table, like this (Listing 3-32):

Listing 3-32 This T-SQL Prototype pattern implementation supplies new data during the copy.
 SELECT IDENTITY(int, 1,1) AS CustNo, * INTO NewCustomers FROM Customers 

You could also specify new columns, columns from other tables or views (via a join), constants, or functions. The possibilities are endless. The bottom line here is that you should be able to recognize the Prototype pattern on sight and know that SELECT INTO is the most common implementation of it in Transact-SQL.

Singleton

The purpose of the Singleton pattern is to ensure that only one instance of a class exists at any given time and to provide an access path into that instance. In relational database terms, this could have a couple of different implementations. Strictly speaking, the equivalent to an object-oriented class, in an RDBMS, is a table. An instance of the class would be a row in the table, so an obvious implementation of Singleton would be to ensure that the table has just one row, like this (Listing 3-33):

Listing 3-33 A Transact-SQL implementation of the Singleton design pattern.
 USE tempdb
 GO
 DROP TABLE LastCustNo
 GO
 CREATE TABLE LastCustNo (LastCustNo int)
 GO
 INSERT LastCustNo VALUES (1)
 GO
 CREATE TRIGGER LastCustNoInsert ON LastCustNo FOR INSERT AS
 IF (SELECT COUNT(*) FROM LastCustNo)>1 BEGIN
   RAISERROR('You may not insert more than one row into this table',16,10)
   ROLLBACK TRAN
 END
 GO
 INSERT LastCustNo VALUES (2) -- Fails because of trigger
 GO
 SELECT * FROM LastCustNo

Thanks to the trigger, only one row is allowed to exist in the table at any one time. If you attempt to insert a row, and the table already contains at least one row, an error is raised and the transaction is rolled back. Of course, BULK INSERT (with trigger execution disabled) can skirt this, but, short of intentionally circumventing it, our trigger-based Singleton implementation is pretty foolproof.

Note the use of IF (SELECT COUNT(*) FROM LastCustNo)>1 to determine whether the table already has a row. Why do we test >1 instead of =1? Very simple. With the exception of INSTEAD OF triggers, Transact-SQL triggers run after an operation has completed, but before that operation has been committed to the database. This means that, from the perspective of the trigger, the LastCustNo table appears to have two rows in it until we roll back the transaction. This also prevents us from using the EXISTS() predicate to test the table for rows, as in IF EXISTS(SELECT * FROM LastCustNo). Because newly inserted rows appear to the trigger to be in the table until we roll back the transaction, testing EXISTS() would prevent a row from being inserted into the table, even if the table was empty prior to the insertion attempt.

Another application of Singleton in relational database terms is in the use of primary and unique key constraints to prevent multiple instances of the same row. In other words, if we equate a table to a class and rows to instances (objects) of that class, applying Singleton to prevent multiple instances of a row amounts to adding a primary key or a unique key constraint to the table.
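As a minimal sketch of this row-level form of Singleton (the table and column names are hypothetical), a primary key constraint is all it takes:

 CREATE TABLE Settings (
   SettingName  varchar(30) NOT NULL PRIMARY KEY,
   SettingValue varchar(128) NOT NULL
 )

Attempting to insert a second row with the same SettingName fails with a key violation, so each named setting can exist only once.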

From a SQL Server application standpoint, a possible use of the Singleton pattern is in keeping multiple instances of an application from connecting to the server. For example, if you already have one instance of the Check Writer program connected to the database, you may not want to allow another to connect until the first one disconnects.

There are a couple of SQL Server facilities that come in handy in situations like this. The first one is the application locks facility. SQL Server allows you to use its lock manager to manage resources outside the server. In this case, you could take out an application lock when your app started, then release it when the app closed. By taking out the lock in Exclusive mode, you'd prevent another instance of the app from running until you released it. Here's some code:

 DECLARE @res int
 BEGIN TRAN
 EXEC @res=sp_getapplock @Resource='Check Writer',
                         @LockMode='Exclusive'
 -- Return to your app
 -- Then execute this when the app exits
 EXEC @res=sp_releaseapplock @Resource='Check Writer'
 ROLLBACK TRAN

The problem with this approach is the potential for it to keep a transaction open for an extended period of time. Generally speaking, you should not leave a transaction open for long periods of time or when a user is being prompted for input. Because sp_getapplock requires an open user transaction in order for it to take out the lock we've requested, it may not be the best tool to use in this situation. Let's look at a better one:

 IF EXISTS(SELECT * FROM master..sysprocesses WHERE context_info=0x123456)
   RAISERROR('You can run only one copy of this application at a time',20,1)
     WITH LOG
 ELSE
   SET CONTEXT_INFO 0x123456

This code uses the SET CONTEXT_INFO command to plug a user-defined value into sysprocesses on start-up. Then, each time the app starts, it checks for this value. If the value is there, a connection already exists with the special token, so the app raises an error that terminates its own connection. If the value is not there, the routine puts it there and proceeds to load the application.

This technique is one way of ensuring that only one instance of an application connects to SQL Server at a time, but there are others. For example, an application could set the host name session variable and query sysprocesses for it when it loaded. There are a number of different ways of implementing Singleton in Transact-SQL.
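As a sketch of the host name variation (the application name is hypothetical), the app would supply a distinctive host name when connecting and then check sysprocesses for another session using it:

 -- Run at application start-up; assumes the app connects
 -- with 'CheckWriter' as its host name
 IF EXISTS(SELECT * FROM master..sysprocesses
           WHERE hostname='CheckWriter' AND spid<>@@SPID)
   RAISERROR('You can run only one copy of this application at a time',20,1)
     WITH LOG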

Other Patterns

A number of other object-oriented design patterns have correspondents or loose equivalents in the realm of Transact-SQL. For example, a SQL Server view object approximates the GoF's Composite pattern, and a view with an INSTEAD OF trigger implements something resembling the Façade pattern: It provides a unified interface to a set of interfaces in a subsystem. In this analogy, the "set of interfaces" is the set of tables referenced by the view and the Transact-SQL code needed to update them. This code is implemented by the INSTEAD OF trigger. The user updates the data returned by the view as though it came from a single table, hence the view is the unified interface. The INSTEAD OF trigger takes the updates made to the view and dispatches them to the appropriate underlying tables. The whole of the implementation (the view, the trigger, and the underlying tables) represents an approximation of the Façade pattern. When you see a view with an INSTEAD OF trigger attached to it, it should be obvious that its designer intended to provide a unified interface to something more complex, and that he or she intended to insulate you from the details associated with updating the view's underlying tables.

Another example of an object-oriented design pattern approximation in Transact-SQL is the Chain of Responsibility pattern. I mentioned the Chain of Responsibility pattern earlier in the discussion of the Conveyor pattern. Here we'll get into a little more detail. Design Patterns describes Chain of Responsibility as a pattern that avoids "coupling the sender of a request to its receiver by giving more than one object a chance to handle the request." [6] It goes on to say that to implement the pattern, you must "Chain the receiving objects and pass the request along the chain until an object handles it." [7] Think about this. What construct in Transact-SQL most closely implements the described behavior? Triggers. More specifically, nested triggers and multiple triggers per table. Triggers that perform operations that cause other triggers to fire implement the chaining behavior. The same could be said for multiple triggers on the same table. Let's say that you defined multiple INSERT triggers on a particular table and that each of these triggers was responsible for validating a different column in a newly inserted row. The insert "request," as it were, would, from a functional standpoint, be passed from trigger to trigger, with the triggers firing in no particular order. In either case, with multiple triggers on one table or with nested triggers, if any one of the triggers decides to reject the insert and roll back the transaction, all bets are off; the entire operation is aborted then and there. This implements the part of the pattern that calls for passing the request "along the chain until an object handles it." Although the analogy is not exact, the pattern is indeed there if you look for it.

[6] Ibid. Page 223.

[7] Ibid.
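The multiple-trigger scenario described above might be sketched like this (the trigger names and the columns validated are illustrative, against Northwind's Customers table):

 CREATE TRIGGER Customers_CheckCity ON Customers FOR INSERT AS
 IF EXISTS(SELECT * FROM inserted WHERE City IS NULL) BEGIN
   RAISERROR('City is required',16,1)
   ROLLBACK TRAN
 END
 GO
 CREATE TRIGGER Customers_CheckCountry ON Customers FOR INSERT AS
 IF EXISTS(SELECT * FROM inserted WHERE Country IS NULL) BEGIN
   RAISERROR('Country is required',16,1)
   ROLLBACK TRAN
 END

Each trigger gets a chance to handle (or reject) the insert request; if any one of them rolls back, the chain ends and the entire operation is undone.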

Command is another GoF pattern with a correspondent in the world of Transact-SQL. Its Design Patterns definition is: "Encapsulates a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations." [8] See if you can figure this one out. What SQL Server element does this most sound like? Think about the "queue or log requests" capability and the "undoable operations." Right! The Transact-SQL implementation of the GoF Command pattern is the transaction! SQL Server's transaction log is the facility in which "requests" (changes to data) are logged. The change itself is the request, and operations are undoable by virtue of the fact that you can roll back the transaction.

[8] Ibid. Page 233.
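As a minimal sketch of the analogy, a transaction encapsulates the change request, the transaction log records it, and ROLLBACK makes it undoable:

 BEGIN TRAN
 -- The change itself is the "request"; it's recorded in the transaction log
 UPDATE Customers SET City='Dallas' WHERE CustomerID='ALFKI'
 -- The operation is undoable: roll it back and the change never happened
 ROLLBACK TRAN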

There are other parallels between the GoF patterns and techniques commonly used to build Transact-SQL applications. Knowing, using, and recognizing common design patterns is just as important in Transact-SQL as it is in any other language. Pattern recognition helps make the overall layout of an application simpler and easier to understand, and pattern use helps keep a program modular and easier to extend.



The Guru's Guide to SQL Server™ Stored Procedures, XML, and HTML
ISBN: 201700468
EAN: N/A
Year: 2005
Pages: 223
