Transactions

Like a batch (discussed in Chapter 11), a user-declared transaction typically consists of several SQL commands that read and update the database. However, a batch is basically a client-side concept; it controls how many statements are sent to SQL Server for processing at once. A transaction, on the other hand, is a server-side concept. It deals with how much work SQL Server will do before it considers the changes committed. A multistatement transaction doesn't make any permanent changes in a database until a COMMIT TRANSACTION statement is issued. In addition, a multistatement transaction can undo its changes when a ROLLBACK TRANSACTION statement is issued.

Both batches and transactions can contain multiple commands. In fact, transactions and batches can have a many-to-many relationship. A single transaction can span several batches (although that's a bad idea from a performance perspective), and a batch can contain multiple transactions. The following examples illustrate these possibilities. Note that you can use TRAN as an abbreviation for TRANSACTION.

The first script is one transaction spread across four batches:

    BEGIN TRAN
    INSERT authors VALUES (etc.)
    GO
    SELECT * FROM authors
    GO
    UPDATE publishers SET pub_id = (etc.)
    GO
    COMMIT TRAN
    GO

This second script is one batch that contains two transactions:

    BEGIN TRAN
    INSERT authors VALUES (etc.)
    SELECT * FROM authors
    COMMIT TRAN
    BEGIN TRAN
    UPDATE publishers SET pub_id = (etc.)
    INSERT publishers VALUES (etc.)
    COMMIT TRAN
    GO

Now let's look at a simple transaction in a little more detail. The BEGIN TRAN and COMMIT TRAN statements cause the commands between them to be performed as a unit:

    BEGIN TRAN
    INSERT authors VALUES (etc.)
    SELECT * FROM authors
    UPDATE publishers SET pub_id = (etc.)
    COMMIT TRAN
    GO

Transaction processing in SQL Server ensures that all commands within a transaction are performed as a unit of work—even in the presence of a hardware or general system failure. Such transactions are considered to have the ACID properties (atomicity, consistency, isolation, and durability). (For more information about the ACID properties, see Chapter 2.)

Explicit and Implicit Transactions

By default, SQL Server treats each statement—whether it's dispatched individually or as part of a batch—as independent and immediately commits it. Logically, the only kind of statement that can be committed is a statement that changes something in the database, so talking about committing a SELECT statement is really meaningless. When we look at transaction isolation levels later in this section, you'll see that it's important in some cases to include SELECTs inside larger transactions. However, when we talk about transactions, we're generally talking about data modification statements (INSERT, UPDATE, and DELETE). So any individual data modification statement by itself is an implicit transaction. No matter how many rows are modified, they will either all be modified or none of them will be. If a system failure occurs while a million-row table is being updated, when SQL Server recovers, it will roll back the part of the update that's already been written to disk, and the state of the database will be as if the update never happened.

If you want a transaction to include multiple statements, you must wrap the group of statements within BEGIN TRANSACTION and COMMIT TRANSACTION or ROLLBACK TRANSACTION statements. Most of the examples in this section will deal with transactions containing multiple statements, which are called explicit transactions.

You can also configure SQL Server to implicitly start a transaction by using SET IMPLICIT_TRANSACTIONS ON or by turning on the option globally using sp_configure 'user options', 2. More precisely, you take the previous value for the user options setting and OR it with (decimal) 2, which is the mask for IMPLICIT_TRANSACTIONS.

For example, if the previous value were (decimal) 8, you'd set it to 10 because 8|2 is 10. (The symbol used here is the vertical bar, not a forward or backward slash.) If bit operations such as 8|2 are somewhat foreign to you, let SQL Server do the work. You can issue a SELECT 8|2 in SQL Query Analyzer, and the value returned will be 10. (But be careful not to assume that you just add 2 to whatever is already there—for example, 10|2 is 10, not 12.)
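For instance, here's a minimal sketch of changing the server-wide setting. It assumes (hypothetically) that the current 'user options' value is 8; check your own server's value first.

    -- Compute the new value with a bitwise OR (8 | 2 = 10)
    SELECT 8 | 2

    -- Apply the new value server-wide
    EXEC sp_configure 'user options', 10
    RECONFIGURE
    GO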

If implicit transactions are enabled, all statements are considered part of a transaction and no work is committed until and unless an explicit COMMIT TRAN (or synonymously, COMMIT WORK) is issued. This is true even if all the statements in the batch have executed: you must issue a COMMIT TRANSACTION in a subsequent batch before any changes are considered permanent. The examples in this book assume that you have not set IMPLICIT_TRANSACTIONS on and that any multistatement transaction must begin with BEGIN TRAN.
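If you do enable the option, a quick sketch like the following (run against the pubs sample database) shows the behavior:

    USE pubs
    GO
    SET IMPLICIT_TRANSACTIONS ON
    GO
    -- This UPDATE implicitly opens a transaction; no BEGIN TRAN was issued
    UPDATE authors SET state = state
    SELECT @@TRANCOUNT      -- Returns 1
    GO
    -- Nothing is permanent until an explicit COMMIT is issued
    COMMIT TRAN
    SET IMPLICIT_TRANSACTIONS OFF
    GO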

Error Checking in Transactions

One of the most common mistakes that developers make with SQL Server is to assume that any error within a transaction will cause the transaction to automatically roll back. This is an understandable misconception because we're always talking about transactions being atomic. However, when we talk about a failure causing a transaction to be rolled back, it usually means a system failure. If the whole system stops functioning, SQL Server must be restarted and the automatic recovery process will roll back any incomplete transactions. However, for many errors encountered in a transaction, only the current statement will be aborted—the rest of the statements in the transaction can continue to be processed and even committed. Here's an example:

    USE PUBS
    BEGIN TRAN
    UPDATE authors SET state = 'FL'
        WHERE state = 'KS'
    UPDATE jobs SET min_lvl = min_lvl - 10
    COMMIT TRAN
    SELECT * FROM authors WHERE state = 'FL'

The transaction contains two UPDATE statements. The first UPDATE changes the state value in one row to 'FL'. The second UPDATE attempts to subtract 10 from all the min_lvl values in the jobs table, but there is a CHECK constraint on the min_lvl column that specifies that min_lvl must be greater than or equal to 10. Most of the values in the jobs table are greater than 20, so subtracting 10 would not violate the constraint. However, there is one row with a value of 10, and subtracting 10 would cause a constraint violation. Because of that one conflicting row, the entire statement is aborted and no rows in jobs are updated, but the transaction is not rolled back. The SELECT statement after the COMMIT shows that we still have one author with a value for state of 'FL'.

Your multistatement transactions should check for errors by selecting the value of @@ERROR after each statement. If a nonfatal error is encountered and you don't take action on it, processing moves to the next statement. Only fatal errors cause the batch to be automatically aborted and the transaction rolled back. Here's what the preceding batch would look like with the appropriate error checking:

    USE PUBS
    BEGIN TRAN
    UPDATE authors SET state = 'FL'
        WHERE state = 'KS'
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRAN
        GOTO ON_ERROR
    END
    UPDATE jobs SET min_lvl = min_lvl - 10
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRAN
        GOTO ON_ERROR
    END
    COMMIT TRAN
    ON_ERROR:
    SELECT * FROM authors WHERE state = 'FL'

In this batch, we're checking for errors after each of the data modification statements. Just rolling back the transaction if an error is found is not sufficient, however, because in most cases a ROLLBACK TRAN does not change the flow of control in the execution of your statements. The ROLLBACK TRAN would undo any completed modifications in the current transaction and make the appropriate entries in the transaction log. However, program flow would continue to the next line. In this case, with no GOTO, if the first UPDATE had an error we would still continue to the second UPDATE; if the second UPDATE had an error, we would still continue and try to execute the COMMIT TRAN. Since we would have already executed a ROLLBACK TRAN, there would no longer be an open transaction to commit and we would get an error for the COMMIT TRAN statement.

TIP


A query that finds no rows meeting the criteria of the WHERE clause is not an error. Nor is a searched UPDATE statement that affects no rows. @@ERROR returns a 0 (meaning no error) in either case. If you want to check for "no rows affected," use @@ROWCOUNT, not @@ERROR.
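For example, here's a minimal sketch of that check against the pubs authors table. Both values are captured in one statement, because the next statement would reset them:

    DECLARE @err int, @rows int
    UPDATE authors SET state = 'FL' WHERE state = 'XX'   -- Matches no rows
    SELECT @err = @@ERROR, @rows = @@ROWCOUNT            -- Capture both at once
    IF @err <> 0
        PRINT 'An error occurred'
    ELSE IF @rows = 0
        PRINT 'No error, but no rows were affected'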

Syntax errors always cause the entire batch to be aborted before execution even begins, and references to objects that don't exist (for example, a SELECT from a table that doesn't exist) cause a batch to stop execution at that point. However, just because a batch has been aborted doesn't mean that the transaction has been rolled back. You've seen that transactions can span batches. If you receive an error and you're not sure whether your transaction has been rolled back, you can check the status of the transaction using the system function @@TRANCOUNT. If @@TRANCOUNT is greater than 0, there is an active transaction and you can execute a ROLLBACK TRAN. I'll discuss @@TRANCOUNT in more detail later in this section. Typically, you'll work out syntax errors and correct references to objects before you put an application into production, so these won't be much of an issue.
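That check can be as simple as this sketch, run in a subsequent batch:

    IF @@TRANCOUNT > 0
    BEGIN
        -- A transaction is still open; undo its work
        ROLLBACK TRAN
    END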

A syntax error on a batch dynamically built and executed using EXECUTE('string') can never be caught until execution. You can think of this EXECUTE('string') construct as a way of nesting batches. If the string consists of multiple statements, a syntax or object reference error will abort the inner batch only. Processing proceeds to the next statement in the outer batch. For example, consider the following script:

    DECLARE @tablename sysname
    SET @tablename = 'authours'
    EXECUTE ('USE pubs SELECT * FROM ' + @tablename
        + ' PRINT ''Inner Batch Done'' ')
    PRINT 'Outer Batch Done'

This batch won't have an error detected at parse time, so SQL Server will attempt to execute it. The DECLARE and the SET will be executed successfully, but the nested batch inside the EXECUTE will have a problem. There's an error in the SELECT because the table name authours doesn't exist in the pubs database. The inner batch will generate an error during its name resolution phase, so the third statement in the inner batch (PRINT) will not be executed. However, the outer batch will continue and the message "Outer Batch Done" will be printed.

Errors that occur in a production environment are typically resource errors. You probably won't encounter these much, especially if you've done sufficient testing to make sure that your configuration settings and environment are appropriate. Out-of-resource errors occur when a database file fills up, when there isn't enough memory to run a procedure, when a database is off line and unavailable, and so on. For these types of fatal errors, conditional action based on @@ERROR is moot because the batch will automatically be aborted.

The following are among the most common errors that you need to be concerned about:

  • Lack of permissions on an object
  • Constraint violations
  • Duplicates encountered while trying to update or insert a row
  • Deadlocks with another user
  • NOT NULL violations
  • Illegal values for the current datatype

Here's an example that creates three tables, two of which have a primary/foreign key relationship. Table b has a foreign key reference to table a; any INSERT into b should fail if the value inserted doesn't already exist in table a. A procedure then tries to insert into two of the tables. The procedure is written to perform error checking, with a branch to perform a rollback if any error is encountered. Since no inserts have been done into table a, any insert into b should fail:

    CREATE TABLE a (a char(1) primary key)
    CREATE TABLE b (b char(1) references a)
    CREATE TABLE c (c char(1))
    GO

    CREATE PROC test AS
    BEGIN TRANSACTION
        INSERT c VALUES ('X')
        IF (@@ERROR <> 0) GOTO on_error

        INSERT b VALUES ('X')    -- Fails reference
        IF (@@ERROR <> 0) GOTO on_error

    COMMIT TRANSACTION
    RETURN(0)

    on_error:
    ROLLBACK TRANSACTION
    RETURN(1)

This simple procedure illustrates the power of Transact-SQL. The system function @@ERROR can return a value for the connection after each statement. A value of 0 for @@ERROR means that no error occurred. Given the data the user provided, when the procedure test is executed, the INSERT statement on table b will fail with a foreign key violation. The error message for that type of failure is error 547, with text such as this:

 INSERT statement conflicted with COLUMN FOREIGN KEY constraint 'FK__b__b__723BFC65'. The conflict occurred in database 'pubs', table 'a', column 'a' 

Consequently, @@ERROR would be set to 547 following that INSERT statement. Therefore, the statement IF (@@ERROR <> 0) evaluates as TRUE, and execution follows the GOTO to the on_error: label. Here, the transaction is rolled back. The procedure terminates with a return code of 1 because the RETURN(1) statement was used. Had the branch to on_error: not been followed (by virtue of @@ERROR not being 0), the procedure would have continued line-by-line execution. It would have reached COMMIT TRANSACTION, and then it would have returned value 0 and never made it all the way to the on_error: section.

As with most programming languages, with Transact-SQL you can return a status code from a procedure to indicate success or failure (or other possible outcomes); those return status codes are checked from the calling routines. However, you have to remember to check the status after executing the procedures. Merely using EXEC test won't directly provide information about whether the procedure performed as expected. A better method is to use a local variable to examine the return code from that procedure:

    DECLARE @retcode int
    EXEC @retcode=test

Following execution of the test procedure, the local variable @retcode has the value 0 (if no errors occurred in the procedure) or 1 (if execution branched to the on_error: section).
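Building on that, the caller can branch on the return status. This sketch simply prints a message, but a real application might retry the work or raise its own error:

    DECLARE @retcode int
    EXEC @retcode = test
    IF @retcode <> 0
        PRINT 'Procedure test failed; its transaction was rolled back'
    ELSE
        PRINT 'Procedure test committed its work'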

The SET XACT_ABORT option was added to SQL Server to force any error—not just a fatal error—to terminate the batch. Here's another way to ensure that nothing is committed if any error is encountered:

    CREATE PROC test AS
    SET XACT_ABORT ON
    BEGIN TRANSACTION
        INSERT c VALUES ('X')
        INSERT b VALUES ('X')    -- Fails reference
    COMMIT TRANSACTION
    GO

    EXEC test
    GO

    SELECT * FROM c

Here's the output:

 (0 rows affected) 

Note that the name of the XACT_ABORT option is a bit of a misnomer because the current batch, not simply the transaction, is immediately aborted if an error occurs, just as it is when a fatal resource error is encountered. This has consequences that might not be immediately apparent. For example, if you issued two transactions within one batch, the second transaction would never be executed because the batch would be aborted before the second transaction got a chance to execute. More subtly, suppose we want to use good programming practice and check the return status of the procedure above. (Note that even though it doesn't explicitly do a RETURN, every procedure has a return status by default, with 0 indicating SUCCESS.)

We could write a batch like this:

    DECLARE @retcode int
    EXEC @retcode=test
    SELECT @retcode

Yet there's a subtle but important problem here. If the procedure has an error, the SELECT @RETCODE statement will never be executed: the entire batch will be aborted by virtue of the SET XACT_ABORT statement. This is why I recommend checking @@ERROR instead of using SET XACT_ABORT. Checking @@ERROR after each statement is a bit more tedious, but it gives you finer control of execution in your procedures.

Unfortunately, error handling in SQL Server 2000 can be somewhat messy and inconsistent. For example, there's no way to install a routine that means "Do this on any error" (other than SET XACT_ABORT, which aborts but doesn't let you specify the actions to be performed). Instead, you must use something similar to the preceding examples that check @@ERROR and then do a GOTO.

In addition, there's currently no easy way to determine in advance which errors are fatal (aborting the batch) and which are nonfatal (allowing the next statement to execute). In most cases, an error with a severity level of 16 or higher is fatal and the batch will be aborted. A reference to a nonexistent function is a level-15 error, yet the batch is still aborted. Although you can use @@ERROR to return the specific error number, no function such as @@SEVERITY is available to indicate the error's severity level. Instead, you must subsequently select from the sysmessages table to see the severity level of the last error. To further complicate matters, some level-16 errors aren't fatal, including the constraint violation error encountered in the first error-checking example. Table 12-1 lists the most common nonfatal level-16 errors. Note the %.*s placeholders in the error messages: SQL Server substitutes the actual names being referenced when it issues the error message.
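As a sketch of the severity lookup just mentioned, you can capture @@ERROR into a variable (before the next statement resets it) and then look it up in master.dbo.sysmessages; the failing UPDATE against the pubs jobs table is the same one used earlier:

    DECLARE @err int
    UPDATE jobs SET min_lvl = min_lvl - 10    -- Violates the CHECK constraint
    SET @err = @@ERROR                        -- Capture it before it's reset
    SELECT error, severity, description
    FROM master.dbo.sysmessages
    WHERE error = @err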

Admittedly, the rules are hardly consistent, and this area is ripe for some attention in future releases. As you write your procedures, keep in mind that they could be automatically aborted due to an unexpected fatal error, or you could cause them to abort with a nonfatal error.

Table 12-1. Common nonfatal level-16 errors.

Error    Error Message
-----    -------------
515 Attempt to insert the value NULL into column '%.*s', table '%.*s'; column doesn't allow nulls. %s fails.
544 Attempt to insert explicit value for identity column in table '%.*s' when IDENTITY_INSERT is set to OFF.
547 Attempt to execute %s statement conflicted with %s %s constraint '%.*s'. The conflict occurred in database '%.*s', table '%.*s'%s%.*s%s.
550 Attempt to insert or update failed because the target view either specifies WITH CHECK OPTION or spans a view that specifies WITH CHECK OPTION, and one or more rows resulting from the operation didn't qualify under the CHECK OPTION constraint.

Transaction Isolation Levels

The isolation level at which your transaction runs determines your application's sensitivity to changes made by others; consequently, it also determines how long your transaction must hold locks to protect against those changes. SQL Server 2000 offers four isolation levels:

  • READ UNCOMMITTED (dirty read)
  • READ COMMITTED (default—READ COMMITTED is equivalent to the term CURSOR STABILITY, which is used by several other products such as IBM DB/2)
  • REPEATABLE READ
  • SERIALIZABLE

Your transactions will behave differently depending on which isolation level is set. The saying "Not to decide is to decide" applies here, because every transaction has an isolation level whether you've specified it or not. You should understand the levels and choose the one that best fits your needs.

NOTE


The syntactical options listed above correspond to the SQL standard isolation levels, which I discussed in detail in Chapter 3. In this section, we'll look at information that wasn't covered earlier. See Chapter 3 if you need a refresher.

When you use the READ UNCOMMITTED option, keep in mind that you should deal with inconsistencies that might result from dirty reads. Because share locks aren't issued and exclusive locks of other connections aren't honored (that is, the data pages can be read even though they're supposedly locked), it's possible to get some spurious errors during execution when you use READ UNCOMMITTED. A moment later, reexecuting the same command is likely to work without error.
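For example, a reporting query that can tolerate dirty reads might look like this sketch; the final SET restores the connection's default isolation level:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    -- This count can include uncommitted changes made by other connections,
    -- and in rare cases the scan can fail with a transient error;
    -- reissuing the SELECT usually succeeds
    SELECT COUNT(*) FROM titles
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED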

Nothing illustrates the differences and effects of isolation levels like seeing them for yourself with some simple examples. To run the following examples, you'll need to establish two connections to SQL Server. Remember that each connection is logically a different user, even if both connections use the same login ID. Locks held by one connection affect the other connection even if both are logged in as the same user. In case you can't run these examples, the SQL scripts and output are shown here.

EXAMPLE 1

This example shows a dirty read. To test these examples yourself, you have to run the commands for connection 1 in one SQL Query Analyzer window and the commands for connection 2 in a second window. The batches for the two connections are listed below in the order in which they must be executed, each labeled with its batch number and connection. Following is the code and the output:

    -- Batch 1 (Connection 1)
    USE PUBS
    GO

    -- Batch 2 (Connection 2)
    USE PUBS
    GO

    -- Batch 3 (Connection 2)
    -- First verify there's only one author named 'Smith'
    SELECT au_lname FROM authors
    WHERE au_lname='Smith'
    GO

    au_lname
    --------------------------
    Smith

    (1 row affected)

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    GO

    -- Batch 4 (Connection 1)
    BEGIN TRAN
    UPDATE authors SET au_lname='Smith'
    -- Give other connection chance to read uncommitted data
    GO

    (23 rows affected)

    -- Batch 5 (Connection 2)
    SELECT au_lname FROM authors
    WHERE au_lname='Smith'

    au_lname
    --------------------------
    Smith
    Smith
    ...    (23 rows of 'Smith' in all)

    (23 rows affected)

    IF (@@ROWCOUNT > 1)
        PRINT 'Just read uncommitted data !!'

    Just read uncommitted data !!

    -- Batch 6 (Connection 1)
    -- UNDO the changes to the authors table
    ROLLBACK TRAN
    GO

    -- Batch 7 (Connection 2)
    -- Now check again for authors named 'Smith'.
    -- Find only one now, because other connection did a rollback.
    SELECT au_lname FROM authors
    WHERE au_lname='Smith'

    au_lname
    --------------------------
    Smith

    (1 row affected)

These batches, running simultaneously, illustrate that by setting the isolation level to READ UNCOMMITTED, you can read data that logically never existed (a dirty read). The update to Smith was never committed by connection 1, yet the other connection read it. Not only did the connection read the updated data, but it did so immediately and didn't have to wait for the exclusive lock of the first connection updating the data to be released.

In this example, concurrency is high; however, consistency of the data isn't maintained. By changing the isolation level back to the default (READ COMMITTED), the same batches would never see more than one row containing Smith. But with READ COMMITTED, the connection reading the data must wait until the updating transaction is done. This is the trade-off: higher consistency is achieved (the uncommitted rows aren't seen by others), but concurrency is reduced.

EXAMPLE 2

This example illustrates the semantic differences between selecting data under the default isolation level of READ COMMITTED (cursor stability) and under the isolation level of REPEATABLE READ.

    -- Batch 1 (Connection 1)
    USE PUBS
    GO

    -- Batch 2 (Connection 2)
    USE PUBS
    GO

    -- Batch 3 (Connection 2)
    -- Make sure no author is named Smith or Jones
    UPDATE authors SET au_lname='Doe'

    (23 rows affected)

    -- Change the transaction isolation level and then verify there are
    -- no authors named 'Smith'
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED
    GO
    BEGIN TRAN
    SELECT au_lname FROM authors
    WHERE au_lname='Smith'
    GO

    (0 rows affected)

    -- Batch 4 (Connection 1)
    UPDATE authors
    SET au_lname='Smith'
    GO

    (23 rows affected)

    -- Batch 5 (Connection 2)
    SELECT au_lname FROM authors
    WHERE au_lname='Smith'
    GO

    au_lname
    ------------------------
    Smith
    Smith
    ...    (23 rows of 'Smith' in all)

    (23 rows affected)

    COMMIT TRAN
    GO

    -- Now do the same thing, but with REPEATABLE READ isolation
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
    GO
    BEGIN TRAN
    SELECT au_lname FROM authors
    WHERE au_lname='Smith'
    GO

    au_lname
    ------------------------
    Smith
    Smith
    ...    (23 rows of 'Smith' in all)

    (23 rows affected)

    -- Batch 6 (Connection 1)
    UPDATE authors SET au_lname='Jones'
    GO

    (query will block)

    -- Batch 7 (Connection 2)
    SELECT au_lname FROM authors
    WHERE au_lname='Smith'
    GO

    au_lname
    ------------------------
    Smith
    Smith
    ...    (23 rows of 'Smith' in all)

    (23 rows affected)

    COMMIT TRAN
    GO

    -- Batch 8 (Connection 1)
    -- Now notice that the query has completed
    (23 rows affected)

    -- Batch 9 (Connection 2)
    -- Now notice that the Jones updates have been done
    SELECT au_lname FROM authors
    GO

    au_lname
    ------------------------
    Jones
    Jones
    ...    (23 rows of 'Jones' in all)

    (23 rows affected)

As you can see, when the isolation level was set to READ COMMITTED, selecting the same data twice within a transaction yielded totally different results: first no Smiths were found, and then 23 of them appeared. If this were a banking transaction, these results would be unacceptable. For applications that are less sensitive to minor changes, or when business functions guard against such inconsistencies in other ways, this behavior might be acceptable.

After changing the isolation level to REPEATABLE READ, we got the same result (no Joneses) with both SELECT statements. Immediately after the transaction was committed, the other connection updated all the names to Jones. So when we again selected for Jones (immediately after the transaction was committed), the update took place—but at some cost. In the first case, the other connection was able to do the update as soon as the first SELECT was processed. It didn't have to wait for the second query and subsequent transaction to complete. In the second case, that second connection had to wait until the first connection completed the transaction in its entirety. In this example, concurrency was significantly reduced but consistency was perfect.

EXAMPLE 3

This example illustrates the semantic differences between selecting data under the isolation level of REPEATABLE READ and under the isolation level of SERIALIZABLE.

    -- Batch 1 (Connection 1)
    USE PUBS
    GO

    -- Batch 2 (Connection 2)
    USE PUBS
    GO

    -- Batch 3 (Connection 2)
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
    GO
    BEGIN TRAN
    SELECT title FROM titles
    WHERE title_id LIKE 'BU%'
    GO

    title
    -----------------------------------
    The Busy Executive's Database Guide
    Cooking with Computers: Surreptitious Balance Sheets
    You Can Combat Computer Stress!
    Straight Talk About Computers

    -- Batch 4 (Connection 1)
    INSERT titles VALUES ('BU2000', 'Inside SQL Server 2000',
        'popular_comp', '0877', 59.95, 5000, 10, 0, null,
        'Sep 10, 2000')
    GO

    (1 row(s) affected)

    -- Batch 5 (Connection 2)
    -- Execute the same SELECT again
    SELECT title FROM titles
    WHERE title_id LIKE 'BU%'
    GO

    title
    -----------------------------------
    Inside SQL Server 2000
    The Busy Executive's Database Guide
    Cooking with Computers: Surreptitious Balance Sheets
    You Can Combat Computer Stress!
    Straight Talk About Computers

    COMMIT TRAN

    -- Now do the same thing, but with SERIALIZABLE isolation
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    GO
    BEGIN TRAN
    SELECT title FROM titles
    WHERE title_id LIKE 'BU%'
    GO

    title
    -----------------------------------
    Inside SQL Server 2000
    The Busy Executive's Database Guide
    Cooking with Computers: Surreptitious Balance Sheets
    You Can Combat Computer Stress!
    Straight Talk About Computers

    -- Batch 6 (Connection 1)
    INSERT titles VALUES ('BU3000',
        'Itzik and His Black Belt SQL Tricks',
        'popular_comp', '0877', 39.95, 10000, 12, 15000, null,
        'Sep 15 2000')
    GO

    (query will block)

    -- Batch 7 (Connection 2)
    SELECT title FROM titles
    WHERE title_id LIKE 'BU%'
    GO

    title
    -----------------------------------
    Inside SQL Server 2000
    The Busy Executive's Database Guide
    Cooking with Computers: Surreptitious Balance Sheets
    You Can Combat Computer Stress!
    Straight Talk About Computers

    COMMIT TRAN
    GO

    -- Batch 8 (Connection 1)
    -- Now notice that the query has completed
    (1 row affected)

    -- Batch 9 (Connection 2)
    -- Now notice that the second new title has been inserted
    -- after the COMMIT TRAN
    SELECT title FROM titles
    WHERE title_id LIKE 'BU%'
    GO

    title
    -----------------------------------
    Inside SQL Server 2000
    Itzik and His Black Belt SQL Tricks
    The Busy Executive's Database Guide
    Cooking with Computers: Surreptitious Balance Sheets
    You Can Combat Computer Stress!
    Straight Talk About Computers

As you can see, when the isolation level was set to REPEATABLE READ, phantoms were possible. Another connection could add new data so that reissuing the same SELECT within a transaction would return more rows the second time. In this case, Inside SQL Server 2000 just mysteriously appeared; it's the phantom row.

After changing the isolation level to SERIALIZABLE, we got the same result (five titles) with both SELECT statements in the second connection. Immediately after the transaction committed, the first connection could complete the insert of a new book with the title Itzik and His Black Belt SQL Tricks. So when we again selected all the titles (immediately after the transaction was committed), the INSERT had taken place—but again, at some cost. In the first case, the first connection was able to do the INSERT as soon as the second connection's first SELECT was processed. It didn't have to wait for the second SELECT and for the transaction to commit. In the second case, that first connection had to wait until the second connection completed the transaction in its entirety. Just as in Example 2, concurrency was significantly reduced but consistency was perfect.

Another way to look at the difference between these isolation levels is to see what modification operations are possible once you've selected data within a transaction. In ISOLATION LEVEL REPEATABLE READ, the actual rows read can't be modified or deleted. However, new rows can be added to the range of data specified in the WHERE clause of the SELECT statement. All data read is locked so that no modifications can occur. However, since INSERT is adding new data, the locks on the existing data do not interfere. In ISOLATION LEVEL SERIALIZABLE, once you have selected data within a transaction, no modifications are possible to that data, or to the covered range of data, by any other connection. SERIALIZABLE locks ranges of data, including data that didn't actually exist, so that INSERTs are not allowed.

Other Characteristics of Transactions

In addition to isolation levels, it's important to understand the following characteristics of transactions.

First, a single UPDATE, DELETE, or INSERT/SELECT statement that affects multiple rows is always an atomic operation; it must complete without error, or else it will be automatically rolled back. For example, if you performed a single UPDATE statement that updated all rows but one row failed a constraint, the operation would be terminated with no rows updated. Because there's no way to do error checking for each row within such a statement, any error rolls back the statement. However, the batch isn't aborted. Execution proceeds to the next available statement in the batch. This will occur even in INSERT operations that are based on a SELECT statement (INSERT/SELECT or SELECT INTO).

Alternatively, a single UPDATE statement that updates hundreds of rows could fail to complete because of a system failure, possibly unrelated to SQL Server. (For example, the power might go out and your UPS system might not arrive until next Tuesday.) Because SQL Server considers the single UPDATE to be an atomic unit of work, when SQL Server comes back up and runs recovery on the database, it will be as if the UPDATE had never happened. If SQL Server doesn't know for sure that a transaction committed, the transaction will be rolled back. Even if 999,999 rows out of a million had been updated before the failure, we're left with nothing because the transaction is an all-or-nothing operation.

Like the statements mentioned earlier, modifications made by a trigger are always atomic with the underlying data modification statement. For example, if an update trigger attempts to update data but fails, the underlying data modification operation is rolled back. Both operations must succeed or neither will.

It might not be obvious, but Example 2 (the REPEATABLE READ example) shows that a transaction can affect pure SELECT operations in which no data is modified. The transaction boundaries define the statements between which a specified isolation level will be assured. Two identical SELECT statements might behave differently if they were executed while wrapped by BEGIN TRAN/COMMIT TRAN with an isolation level of REPEATABLE READ than they would behave if they were executed together in the same batch, but not within a transaction.

Nested Transaction Blocks

It's syntactically acceptable for blocks of BEGIN TRANSACTION followed by COMMIT TRAN or ROLLBACK TRAN to be nested within other such blocks. You can also do this kind of nesting with calls to nested stored procedures. However, the semantics of such a formulation might not be what you'd expect if you think that the transactions are truly nested: they aren't. But the behavior is reasonable and predictable. A ROLLBACK TRAN rolls back all levels of the transaction, not only its inner block. A COMMIT TRAN does nothing to commit the transaction if the statement isn't part of the outermost block. If the COMMIT TRAN is part of the outermost block, it commits all levels of the transaction.

So the behavior of a COMMIT or a ROLLBACK isn't too orthogonal when the transaction blocks are nested. Only the outermost COMMIT is able to commit the transaction, but any ROLLBACK will roll back the entire transaction at all levels. If this weren't true, a ROLLBACK in an outer transaction wouldn't be able to perform its job because the data would have already been committed. This behavior allows stored procedures (and triggers) to be executed automatically and in a predictable way without your needing to check the transaction state. A nested stored procedure that does a ROLLBACK TRAN will roll back the entire transaction, including work done by the top-level procedure.

In addition, the blocks of BEGIN TRAN and COMMIT TRAN or ROLLBACK TRAN are determined only by what actually executes, not by what's present in the batch. If conditional branching occurs (via IF statements, for example) and one of the statements doesn't execute, the statement isn't part of a block.

A common misconception about ROLLBACK TRAN, which I briefly discussed earlier, is that it changes the flow of control, causing, for example, an immediate return from a stored procedure or a batch. However, flow of control continues to the next statement, which, of course, could be an explicit RETURN. ROLLBACK affects only the actual data; it doesn't affect local variables or SET statements. If local variables are changed during a transaction or if SET statements are issued, those variables and options do not revert to the values they had before the transaction started. Variables, including table variables, and SET options aren't part of transaction control.
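A quick sketch makes this easy to verify:

    DECLARE @counter int
    SET @counter = 0
    BEGIN TRAN
    UPDATE authors SET state = 'FL' WHERE state = 'KS'
    SET @counter = @counter + 1
    ROLLBACK TRAN
    -- The UPDATE was undone, but the variable keeps its new value
    SELECT @counter      -- Returns 1, not 0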

The system function @@TRANCOUNT will return the depth of executed BEGIN TRAN blocks. You can think of the behavior of COMMIT TRAN and ROLLBACK TRAN in this way: ROLLBACK TRAN performs its job for all levels of transaction blocks whenever @@TRANCOUNT is 1 or greater. A COMMIT TRAN commits changes only when @@TRANCOUNT is 1.

NOTE


You'll see several examples of nesting transaction blocks. For illustration, the value of @@TRANCOUNT is shown in the comments. If this discussion isn't totally clear to you, you should work through these examples by hand to understand why the value of @@TRANCOUNT is what it is, why it changes, and why the behavior of COMMIT TRAN or ROLLBACK TRAN acts in a way that depends on the @@TRANCOUNT value.

Executing a BEGIN TRAN statement always increments the value of @@TRANCOUNT. If no transaction is active, @@TRANCOUNT is 0. Executing a COMMIT TRAN decrements the value of @@TRANCOUNT. Executing a ROLLBACK TRAN rolls back the entire transaction and sets @@TRANCOUNT to 0. Executing either a COMMIT TRAN or a ROLLBACK TRAN when there's no open transaction (@@trancount is 0) results in error 3902 or 3903, which states that the COMMIT or ROLLBACK request has no corresponding BEGIN TRANSACTION. In this case, @@trancount is not decremented, so it can never go below 0.

Other errors, not of your making, can be fatal as well—for example, running out of memory, filling up the transaction log (if you don't have it set to autogrow), or terminating because of a deadlock. Such errors cause an open transaction to automatically roll back. If that occurs, @@trancount will return 0, signaling that no open transaction exists.

The following incidents cause fatal errors:

  • The system runs out of resources.
  • The log runs out of space.
  • Deadlock conditions exist.
  • Protection exceptions occur (that is, sufficient permissions are unavailable on an object).
  • A stored procedure can't run (for example, it's not present or you don't have privileges).
  • The maximum nesting level of stored procedure executions has been reached.

You can't plan precisely for all of these situations, and even if you could, a new release could likely add another one. In a production environment, fatal errors should be few and far between. But as is true in nearly all programming, it's up to you to decide whether to try to plan for every conceivable problem and then deal with each one specifically or whether to accept the system default behavior. In SQL Server, the default behavior would typically be to raise an error for the condition encountered and then, when the COMMIT TRAN or ROLLBACK TRAN is executed, raise another error that says it has no corresponding BEGIN TRANSACTION.

One possibility is to write some retry logic in your client application to deal with a possible deadlock condition. For the other error conditions, you might decide to go with the default error behavior. If your applications are deployed with proper system management, such errors should be nonexistent or rare. You could also easily check @@TRANCOUNT before executing a ROLLBACK TRAN or COMMIT TRAN to ensure that a transaction to operate on is open.
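Such a check is only a line or two, as in this sketch:

    IF @@TRANCOUNT > 0
        COMMIT TRAN    -- Commit only if a transaction is actually open
    ELSE
        PRINT 'No open transaction to commit'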

The nesting of transaction blocks provides a good reason to not name the transaction in a ROLLBACK statement. If, in a rollback, any transaction other than the top-level transaction is named, error 6401 will result:

 Cannot roll back XXX. No transaction or savepoint of that name was found. 

It's fine to name the transaction in the BEGIN TRAN block, however. A COMMIT can also be named, and it won't prompt an error if it's not paired in the top-level branch because it's basically a NO-OP in that case anyway (except that it decrements the value returned by @@trancount).

One reason for naming your BEGIN TRAN statements is to allow your COMMIT and ROLLBACK statements to be self-documenting. ROLLBACK TRAN doesn't require a transaction name, but you can include one as a way of emphasizing that you are rolling back the entire transaction.

The following batch will result in an error. The statement ROLLBACK TRAN B will fail with error 6401. The subsequent ROLLBACK TRAN A will succeed because it's the top-level transaction block. Remember, though, that we'd get the same behavior if we simply used ROLLBACK TRAN instead of ROLLBACK TRAN A.

    -- To start with, verify @@TRANCOUNT is 0
    SELECT @@TRANCOUNT
    BEGIN TRAN A
    -- Verify @@TRANCOUNT is 1
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    BEGIN TRAN B
    -- Verify @@TRANCOUNT is 2
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    ROLLBACK TRAN B
    -- @@TRANCOUNT is still 2 because the previous ROLLBACK
    -- failed due to error 6401
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    ROLLBACK TRAN A
    -- This ROLLBACK succeeds, so @@TRANCOUNT is back to 0
    SELECT @@TRANCOUNT

The following example is arguably an improvement over the previous attempt. The first ROLLBACK will execute and the second ROLLBACK will fail with error 3903:

    Server: Msg 3903, Level 16, State 1
    The ROLLBACK TRANSACTION request has no corresponding
    BEGIN TRANSACTION.

The first ROLLBACK did its job, and there is no open transaction:

    -- To start with, verify @@TRANCOUNT is 0
    SELECT @@TRANCOUNT
    BEGIN TRAN A
    -- Verify @@TRANCOUNT is 1
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    BEGIN TRAN B
    -- Verify @@TRANCOUNT is 2
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    ROLLBACK TRAN    -- Notice the tran is unnamed but works
    -- That ROLLBACK terminates the transaction. @@TRANCOUNT is now 0.
    SELECT @@TRANCOUNT
    -- The following ROLLBACK will fail because there is no open
    -- transaction (that is, @@TRANCOUNT is 0)
    ROLLBACK TRAN
    -- @@TRANCOUNT does not go negative. It remains at 0.
    SELECT @@TRANCOUNT

If you syntactically nest transactions, be careful not to nest multiple ROLLBACK statements in a way that would allow more than one to execute. To illustrate that COMMIT behaves differently than ROLLBACK, note that the following example will run without error. However, the statement COMMIT TRAN B doesn't commit any changes. It does have the effect of decrementing @@trancount, however.

    -- To start with, verify @@TRANCOUNT is 0
    SELECT @@TRANCOUNT
    BEGIN TRAN A
    -- Verify @@TRANCOUNT is 1
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    BEGIN TRAN B
    -- Verify @@TRANCOUNT is 2
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    COMMIT TRAN B
    -- The COMMIT didn't COMMIT anything, but it does decrement
    -- @@TRANCOUNT
    -- Verify @@TRANCOUNT is back down to 1
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    COMMIT TRAN A    -- Closes the transaction
    -- Since there's no open transaction, @@TRANCOUNT is again 0.
    SELECT @@TRANCOUNT

In summary, nesting transaction blocks is perfectly valid, but you must understand the semantics. If you understand when @@trancount is incremented, decremented, and set to 0 and you know the simple rules for COMMIT TRAN and ROLLBACK TRAN, you can pretty easily produce the effect you want.

Savepoints

Often, users will nest transaction blocks only to find that the resulting behavior isn't what they want. What they really want is for a savepoint to occur in a transaction. A savepoint provides a point up to which a transaction can be undone—it might have been more accurately named a "rollback point." A savepoint doesn't "save" or commit any changes to the database—only a COMMIT statement can do that.

SQL Server allows you to use savepoints via the SAVE TRAN statement, which doesn't affect the @@TRANCOUNT value. A rollback to a savepoint (not a transaction) doesn't affect the value returned by @@TRANCOUNT, either. However, the rollback must explicitly name the savepoint: using ROLLBACK TRAN without a specific name will always roll back the entire transaction.

In the first nested transaction block example shown previously, a rollback to a transaction name failed with error 6401 because the name wasn't the top-level transaction. Had the name been a savepoint instead of a transaction, no error would have resulted, as this example shows:

    -- To start with, verify @@TRANCOUNT is 0
    SELECT @@TRANCOUNT
    BEGIN TRAN A
    -- Verify @@TRANCOUNT is 1
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    SAVE TRAN B
    -- Verify @@TRANCOUNT is still 1. A savepoint does not affect it.
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    ROLLBACK TRAN B
    -- @@TRANCOUNT is still 1 because the previous ROLLBACK
    -- affects just the savepoint, not the transaction
    SELECT @@TRANCOUNT
    -- Assume some real work happens here
    ROLLBACK TRAN A
    -- This ROLLBACK succeeds, so @@TRANCOUNT is back to 0
    SELECT @@TRANCOUNT


