Transactions

Like a batch, a user-declared transaction typically consists of several SQL commands that read and update the database. However, a batch is basically a client-side concept, which controls how many statements are sent to SQL Server for processing at once. A transaction, on the other hand, is a server-side concept that deals with how much work SQL Server will do before it considers the changes committed. A multistatement transaction doesn't make any permanent changes in a database until a COMMIT TRANSACTION statement is issued. In addition, a transaction can undo its changes when a ROLLBACK TRANSACTION statement is issued.

Both batches and transactions can contain multiple commands. In fact, transactions and batches can have a many-to-many relationship. A single transaction can span several batches (although that's bad from a performance perspective), and a batch can contain multiple transactions. The following examples illustrate these possibilities. Note that you can use TRAN as an abbreviation for TRANSACTION.

The first script is one transaction spread across four batches:

BEGIN TRAN
INSERT authors VALUES (etc.)
GO
SELECT * FROM authors
GO
UPDATE publishers SET pub_id= (etc.)
GO
COMMIT TRAN
GO

This second script is one batch that contains two transactions:

BEGIN TRAN
INSERT authors VALUES (etc.)
SELECT * FROM authors
COMMIT TRAN
BEGIN TRAN
UPDATE publishers SET pub_id= (etc.)
INSERT publishers VALUES (etc.)
COMMIT TRAN
GO

Now let's look at a simple transaction in a little more detail. The BEGIN TRAN and COMMIT TRAN statements cause the commands between them to be performed as a unit:

BEGIN TRAN
INSERT authors VALUES (etc.)
SELECT * FROM authors
UPDATE publishers SET pub_id= (etc.)
COMMIT TRAN
GO

Transaction processing in SQL Server assures that all commands within a transaction are performed as a unit of work, even in the presence of a hardware or general system failure. Such transactions are referred to as having the ACID properties (atomicity, consistency, isolation, and durability). (For more information about the ACID properties, refer to the section on transaction processing in Chapter 2.)

Explicit and Implicit Transactions

By default, SQL Server treats each statement, whether dispatched individually or as part of a batch, as independent and immediately commits it. Logically, the only kind of statement that can be committed is a statement that changes something in the database, so talking about committing a SELECT statement is really meaningless. When we look at transaction isolation levels later in this section, we'll see that it's important in some cases to include SELECTs inside larger transactions. However, for the most part when we talk about transactions, we're talking about data modification statements (INSERT, UPDATE, and DELETE). If you want multiple statements to be part of a transaction, you must wrap the group of statements within BEGIN TRANSACTION and COMMIT or ROLLBACK TRANSACTION statements.

You can also configure SQL Server to implicitly start a transaction by using SET IMPLICIT_TRANSACTIONS ON or by turning the option on globally using sp_configure 'user options', 2. More precisely, take the previous value of the 'user options' setting and OR it with (decimal) 2, which is the mask for IMPLICIT_TRANSACTIONS.

For example, if the previous value were (decimal) 8, you'd set it to 10, because 8|2 is 10. (The symbol used here is the vertical bar, not a forward or backward slash.) If bit operations like 8|2 are somewhat foreign to you, let SQL Server do the work. You can issue a SELECT 8|2 in Query Analyzer, and the value returned will be 10. (But be careful not to assume that you just add 2 to whatever is already there; for example, 10|2 is 10, not 12.)
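If you'd rather let the server compute the new value, here's a minimal sketch; the starting value of 8 is only an illustration, and on a real server you'd take the current value from the run_value column that sp_configure 'user options' reports:

DECLARE @newvalue int
SET @newvalue = 8 | 2      -- OR the current value with the
                           -- IMPLICIT_TRANSACTIONS mask (decimal 2)
SELECT @newvalue           -- returns 10
EXEC sp_configure 'user options', @newvalue
RECONFIGURE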

If implicit transactions are enabled, all statements are considered part of a transaction and no work is committed until and unless an explicit COMMIT TRAN (or synonymously, COMMIT WORK) is issued. This is true even if all the statements in the batch have executed: you must issue a COMMIT TRANSACTION in a subsequent batch before any changes are considered permanent. Throughout the examples in this book, we assume that you have not set IMPLICIT_TRANSACTIONS on and that any multistatement transaction must begin with BEGIN TRAN.
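A quick way to watch this behavior is to check @@TRANCOUNT. The following sketch uses a hypothetical sample row (the au_id value is assumed not to collide with an existing one) and turns the option back off at the end, in keeping with that assumption:

SET IMPLICIT_TRANSACTIONS ON
GO
-- This INSERT silently starts a transaction
INSERT authors VALUES ('123-45-6789', 'Doe', 'Jane', '408 555-0000',
    '1 Main St', 'Oakland', 'CA', '94609', 1)
SELECT @@TRANCOUNT    -- returns 1: the work is still uncommitted
ROLLBACK TRAN         -- or COMMIT TRAN to make the INSERT permanent
GO
SET IMPLICIT_TRANSACTIONS OFF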

Error Checking in Transactions

One of the most common mistakes that developers make with SQL Server is to assume that any error within a transaction will cause the transaction to automatically roll back. If you want conditional action to result from a possible error, your multistatement transactions should check for errors by selecting the value of @@ERROR after each statement. If a nonfatal error is encountered and you don't take action on it, processing simply moves on to the next statement. Only fatal errors cause the batch to be automatically aborted.

TIP
Neither a query that finds no rows meeting the criteria of the WHERE clause nor a searched UPDATE statement that affects no rows is an error, and @@ERROR returns a 0 (meaning no error) for either case. If you want to check for "no rows affected," use @@ROWCOUNT, not @@ERROR.
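For example, here's one way to apply this tip; the au_id value is hypothetical and assumed to match no row in pubs..authors. Both functions are captured in a single SELECT because any subsequent statement resets them:

DECLARE @err int, @rows int
UPDATE authors SET au_lname = 'Smith' WHERE au_id = '000-00-0000'
SELECT @err = @@ERROR, @rows = @@ROWCOUNT
IF (@err = 0 AND @rows = 0)
    PRINT 'No error, but no rows were affected'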

Syntax errors always cause the entire batch to be aborted before execution even begins, and references to objects that don't exist (for example, a SELECT from a table that doesn't exist) cause a batch to stop execution at that point. Typically, you'll work out syntax errors and correct references to objects before you put an application into production, so these won't be much of an issue.

However, a syntax error in a batch that's dynamically built and executed using EXECUTE('string') can't be caught until execution. You can think of this EXECUTE('string') construct as a way of nesting batches. If the string consists of multiple statements, a syntax or object reference error will abort the inner batch only. Processing proceeds to the next statement in the outer batch. For example, consider the following script:

DECLARE @tablename sysname
SET @tablename = 'authours'
EXECUTE ('USE pubs SELECT * FROM ' + @tablename + ' PRINT "Inner Batch Done"')
PRINT "Outer Batch Done"

This batch won't have an error detected at parse time, so SQL Server will attempt to execute it. The DECLARE and the SET will be executed successfully, but the nested batch inside the EXECUTE will have a problem. There's an error in the SELECT, because the table name authours doesn't exist in the pubs database. The inner batch will generate an error during its parsing phase, so the third statement in the inner batch (PRINT) will not be executed. However, the outer batch will continue, and the message "Outer Batch Done" will be printed.

Errors that occur in a production environment are typically resource errors that you probably won't encounter much, especially if you've sufficiently tested to make sure that your configuration settings and environment are appropriate. Out-of-resource errors occur when the system runs out of locks, when there isn't enough memory to run a procedure, when a database runs out of room, and so on. For these types of fatal errors, conditional action based on @@ERROR is moot because the batch will automatically be aborted.

The following errors are among the most common ones that you need to be concerned about:

  • Lack of permissions on an object.
  • Constraint violations.
  • Duplicates encountered while trying to update or insert a row.
  • Deadlocks with another user.
  • NOT NULL violations.
  • Illegal values for the current datatype.

Following is a SQL Server Internet newsgroup posting from a user who encountered a nonfatal execution error and assumed that the entire transaction should have been automatically aborted. When that didn't happen, the user assumed that SQL Server must have a grievous bug. Here's the posting:

Using SQL 6.5 in the example below, the empty tables b and c each get an insert within one tran. The insert into table c is fine, but the insert to b fails a DRI reference, but the table c still has a row in it. Isn't this a major bug?! Should the tran not have been rolled back implicitly!?

- Keith

create table a (a char(1) primary key)
create table b (b char(1) references a)
create table c (c char(1))
go
create proc test as
begin transaction
insert c values ('X')
Insert b values ('X')   --Fails reference
commit transaction
go
exec test
go
select * from c   --Returns 'X' !!

The statements, however, are performing as expected; the bug is in the way the user wrote the transaction. The procedure doesn't check for errors after each statement, and it unconditionally commits at the end. So even if one statement fails with a nonfatal execution error (in this case, a foreign key violation), execution proceeds to the next statement. Ultimately, the COMMIT is executed, so all the statements without errors are committed. That's exactly what the procedure has been told to do. If you want to roll back a transaction when any error occurs, you must check @@ERROR or use SET XACT_ABORT.

Here's an example of the procedure rewritten to perform error checking, with a branch to perform a rollback if any error is encountered:

CREATE PROC test AS
BEGIN TRANSACTION
INSERT c VALUES ('X')
IF (@@ERROR <> 0) GOTO on_error
INSERT b VALUES ('X')   -- Fails reference
IF (@@ERROR <> 0) GOTO on_error
COMMIT TRANSACTION
RETURN(0)

on_error:
ROLLBACK TRANSACTION
RETURN(1)

This simple procedure illustrates the power of Transact-SQL. The system function @@ERROR returns a value for the connection after each statement. A value of 0 for @@ERROR means no error occurred. Given the data the user provided, the INSERT statement on table b will fail with a foreign key violation. The error message for that type of failure is error 547, with text such as this:

INSERT statement conflicted with COLUMN FOREIGN KEY constraint
'FK__b__b__723BFC65'. The conflict occurred in database 'pubs',
table 'a', column 'a'

Consequently, @@ERROR would be set to 547 following that INSERT statement. Therefore, the statement IF (@@ERROR <> 0) evaluates as TRUE, and execution follows the GOTO to the on_error: label. Here, the transaction is rolled back. The procedure terminates with a return code of 1 because the RETURN(1) statement was used. Had the branch to on_error: not been followed (by virtue of @@ERROR being 0), the procedure would have continued line-by-line execution. It would have reached COMMIT TRANSACTION, and then it would have returned value 0 and never made it to the on_error: section.

As you do with most programming languages, make sure that a status is returned from a procedure to indicate success or failure (or other possible outcomes) and that those return status codes are checked from the calling routines. In Keith's original code, an "EXEC test" invoked the procedure. This approach is perfectly legal and could be done equally well with a procedure that returns 0 for SUCCESS and 1 for FAILURE. However, merely using "EXEC test" won't directly provide information about whether the procedure performed as expected. A better method is to use a local variable to examine the return code from that procedure:

DECLARE @retcode int
EXEC @retcode=test

Following execution of the test procedure, the local variable @retcode has the value 0 (if no errors occurred in the procedure) or 1 (if execution branched to the on_error: section).
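From there it's a small step to act on the status; for example:

DECLARE @retcode int
EXEC @retcode=test
IF (@retcode <> 0)
    PRINT 'test failed and rolled back its transaction'
ELSE
    PRINT 'test completed successfully'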

The SET XACT_ABORT option was added to SQL Server in version 6.5 to help users like Keith. If you set this option, any error, not just a fatal one, terminates the batch; the effect is equivalent to checking @@ERROR <> 0 after every statement. Here's another way to ensure that nothing is committed if any error is encountered:

CREATE PROC test AS
SET XACT_ABORT ON
BEGIN TRANSACTION
INSERT c VALUES ('X')
INSERT b VALUES ('X')   -- Fails reference
COMMIT TRANSACTION
GO
EXEC test
GO
SELECT * FROM c

The output:

 (0 rows affected) 

Note that the name of the XACT_ABORT option is a bit of a misnomer, because the current batch, not simply the transaction, is immediately aborted if an error occurs, just as it is when a fatal resource error is encountered. This has consequences that might not be immediately apparent. For example, if you issued two transactions within one batch, the second transaction would never be executed, because the batch would be aborted before the second transaction got a chance to execute.
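Here's a sketch of that consequence, reusing Keith's tables and assuming table a is still empty so that the insert into b fails its reference:

SET XACT_ABORT ON
BEGIN TRAN
INSERT b VALUES ('X')   -- Fails the reference; the whole batch aborts here
COMMIT TRAN
BEGIN TRAN              -- Never reached: the batch is already aborted
INSERT c VALUES ('Y')
COMMIT TRAN
GO

More subtly, suppose we want to use good programming practice and check the return status of the procedure above. (Note that even though it doesn't explicitly do a RETURN(), every procedure has a return status by default, with 0 indicating SUCCESS.)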

We could write a batch like this:

DECLARE @retcode int
EXEC @retcode=test
SELECT @retcode

Yet there's a subtle but important problem here. If the procedure has an error, the SELECT @retcode statement will never be executed: the entire batch will be aborted by virtue of the SET XACT_ABORT statement. This is why we recommend checking @@ERROR instead of using SET XACT_ABORT. Checking @@ERROR after each statement is a bit more tedious, but it gives you finer control of execution in your procedures.

Unfortunately, error handling in SQL Server 7 can be somewhat messy and inconsistent; no doubt it will be improved in future releases. For example, there's no way to install a routine that means "Do this on any error" (other than SET XACT_ABORT, which aborts but doesn't let you specify the actions to be performed). Instead, you must use something similar to the preceding examples that check @@ERROR and then do a GOTO.

In addition, there's currently no easy way to determine in advance which errors are considered fatal (so that the batch is aborted) and which are nonfatal (so that the next statement is executed). In most cases, an error with a severity level of 16 or higher is fatal and the batch will be aborted. A reference to a nonexistent function is a level-15 syntax error, yet the batch is still aborted. Although you can use @@ERROR to return the specific error number, no function such as @@SEVERITY is available to indicate the error's severity level. Instead, you must subsequently select from the sysmessages table to see the severity level of the last error. To further complicate matters, some level-16 errors aren't fatal. Table 10-1 below lists the most common nonfatal level-16 errors. Note the %.*s placeholders in the error messages: SQL Server substitutes the actual names being referenced when it issues the error message.

Admittedly, the rules are hardly consistent, and this area is ripe for some attention in future releases. As you write your procedures, keep in mind that they could be automatically aborted due to an unexpected fatal error, or you could cause them to abort with a nonfatal error.

Table 10-1. Common nonfatal level-16 errors.

Error  Error Message
-----  -----------------------------------------------------------------
515    Attempt to insert the value NULL into column '%.*s', table
       '%.*s'; column doesn't allow nulls. %s fails.
544    Attempt to insert explicit value for identity column in table
       '%.*s' when IDENTITY_INSERT is set to OFF.
547    Attempt to execute %s statement conflicted with %s %s constraint
       '%.*s'. The conflict occurred in database '%.*s', table
       '%.*s'%s%.*s%s.
550    Attempt to insert or update failed because the target view either
       specifies WITH CHECK OPTION or spans a view that specifies WITH
       CHECK OPTION, and one or more rows resulting from the operation
       didn't qualify under the CHECK OPTION constraint.
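Because there's no @@SEVERITY function, a lookup along the lines described above is the workaround. This sketch reuses Keith's tables and assumes the insert into b fails its reference with error 547:

DECLARE @err int
INSERT b VALUES ('X')    -- suppose this fails with error 547
SELECT @err = @@ERROR    -- capture it before the next statement resets it
SELECT error, severity, description
FROM master..sysmessages
WHERE error = @err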

Transaction Isolation Levels

The isolation level at which your transaction runs determines your application's sensitivity to changes made by others; consequently, it also determines how long your transaction needs to hold locks to potentially protect against changes made by others. SQL Server 7 offers four isolation-level behaviors:

  • READ UNCOMMITTED (dirty read)
  • READ COMMITTED (the default; READ COMMITTED is equivalent to the term CURSOR STABILITY used by several other products, such as IBM DB2)
  • REPEATABLE READ
  • SERIALIZABLE

Your transactions will behave differently depending on which isolation level is set. The saying "Not to decide is to decide" applies here, because every transaction has an isolation level whether you've specified it or not. You should understand the levels and choose the one that best fits your needs.
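Each level is set with a single statement and remains in effect for the current connection until it's changed:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SET TRANSACTION ISOLATION LEVEL READ COMMITTED    -- the default
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE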

NOTE
The syntactical options listed above correspond to the SQL standard isolation levels, which were discussed in detail in Chapter 3 in the section about the transaction manager. In this section, we'll only look at information that wasn't covered earlier. See Chapter 3 if you need a refresher.

When using the READ UNCOMMITTED option, keep in mind that you must deal with the inconsistencies that can result from dirty reads. Because shared locks aren't issued and exclusive locks of other connections aren't honored (that is, the data pages can be read even though they're supposedly locked), it's possible to get some spurious errors during execution when using the READ UNCOMMITTED option. A moment later, reexecuting the same command is likely to work without error.

To shield your end users from such errors, your client applications using isolation level 0 (dirty read) should be prepared to retry when they encounter spurious 605, 606, 624, or 625 errors. One of these errors might be raised to falsely indicate that the database is inconsistent. In such a case, what frequently happens is that an update or insert rolls back along with its page allocations, so you read data pages that no longer exist and logically never did. Ordinarily, locking prevents such inconsistencies, but READ UNCOMMITTED takes some shortcuts and the errors can still occur. The retry logic should be identical to the logic you would use to retry on a deadlock condition (error 1205).

NOTE
Of course, you don't have to add retry logic, but without it the command will be aborted and your end user might see a scary and confusing message. Adding a good error handler can make your client application behave much better by enabling an automatic retry, and the user will never know when such an error is raised.

Nothing illustrates the differences and effects of isolation levels like seeing them for yourself with some simple examples. To run the following examples, you'll need to establish two connections to SQL Server. Remember that each connection is logically a different user, even if both connections use the same login ID. (Locks made by one connection affect the other connection, even if both are logged in as the same user.) In case you can't run these examples, the SQL scripts and output are shown here.

EXAMPLE 1

This example shows a dirty read. A simple command file on the companion CD, run_isolation1.cmd, spawns two simultaneous Osql.exe sessions, running the scripts isolation_cnx1.sql and isolation_cnx2.sql. The command file assumes that you're connecting to SQL Server on your local workstation. If not, you'll have to edit the RUN_ISOLATION1.CMD file as follows: change the argument after the /S flag from a single dot (meaning local server) to the name of the SQL Server machine.

Following are the scripts and the output as they will appear in the output files ISOLATION_CNX1.OUT and ISOLATION_CNX2.OUT after the command file has been executed.

NOTE
The procedures sem_set and sem_wait provide a simple synchronization mechanism between the two connections. They do nothing more than set a value and then poll for a value, a handy technique. You can create these procedures by running the SEMAPHORE.SQL script, which is included on the companion CD.

But this mechanism won't always do the job as it does in this case. For example, if you put sem_set within a transaction, it could be aborted or rolled back. Also, for illustrative purposes, simple delays are sometimes used here to coordinate the two scripts. Relying on timing like this creates the potential for a race condition, and it's not how you'd want to program a production application.
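The actual script is on the CD, but a hypothetical reconstruction matching the described behavior (one connection sets a value; the other polls until it sees that value) might look like this:

-- Hypothetical reconstruction; the CD's SEMAPHORE.SQL may differ in detail
CREATE TABLE semaphore (val int NOT NULL)
INSERT semaphore VALUES (0)
GO
CREATE PROC sem_set @val int AS
UPDATE semaphore SET val = @val
GO
CREATE PROC sem_wait @val int AS
WHILE NOT EXISTS (SELECT * FROM semaphore WHERE val = @val)
    WAITFOR DELAY '000:00:01'    -- poll once per second
GO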

-- ISOLATION_CNX2.SQL
USE pubs
GO
EXEC sem_wait 1    -- Wait until other connection says can start
BEGIN TRAN
    UPDATE authors SET au_lname='Smith'
    -- Give other connection chance to read uncommitted data
    WAITFOR DELAY "000:00:20"
ROLLBACK TRAN
EXEC sem_set 0     -- Tell other connection done

(23 rows affected)

-- ISOLATION_CNX1.SQL
-- Illustrate Read Uncommitted (aka "Dirty Read")
--
USE pubs
GO
EXEC sem_set 0    -- Clear semaphore to start
GO
-- First verify there's only one author named 'Smith'
SELECT au_lname FROM authors WHERE au_lname='Smith'
GO

au_lname
--------------------------
Smith

(1 row affected)

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
-- Signal other connection that it's OK to update, and then wait
-- for it to proceed. (Obvious possible race condition here;
-- this is just for illustration.)
EXEC sem_set 1
-- Wait 10 secs for other connection to update
WAITFOR DELAY "000:00:10"
-- Check again for authors of name 'Smith'. Now find 23 of them,
-- even though the other connection doesn't COMMIT the changes.
SELECT au_lname FROM authors WHERE au_lname='Smith'

au_lname
--------------------------
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith

(23 rows affected)

IF (@@ROWCOUNT > 0)
    PRINT 'Just read uncommitted data !!'

Just read uncommitted data !!

-- Now the other connection will roll back its changes:
EXEC sem_wait 0
-- Now check again for authors of name 'Smith'.
-- Find only one now, because other connection did a rollback.
SELECT au_lname FROM authors WHERE au_lname='Smith'

au_lname
--------------------------
Smith

(1 row affected)

The previous scripts, running simultaneously, illustrate that by setting the isolation level to READ UNCOMMITTED, you can read data that logically never existed (a dirty read). The update to Smith was never committed by the isolation_cnx2.sql script, yet the other connection read it. Not only did the connection read the update, but it did so immediately: it didn't have to wait for the updating connection's exclusive lock to be released.

In this example, concurrency is high; however, consistency of the data isn't maintained. By changing the isolation level back to the default (READ COMMITTED), the same scripts would never see more than one row containing Smith. But with READ COMMITTED, the connection reading the data must wait until the updating transaction is done, and this is the trade-off: higher consistency is achieved (the uncommitted rows aren't seen by others), but concurrency is reduced.

EXAMPLE 2

This example illustrates the semantic differences between selecting data under the default isolation level of READ COMMITTED (cursor stability) and under the isolation level of REPEATABLE READ. Use the command file RUN_ISOLATION3.CMD to run these two scripts simultaneously. (As before, you'll need to edit the file if you don't have SQL Server on your local machine.) Output will be written to ISOLATION3_CNX1.OUT and ISOLATION3_CNX2.OUT.

-- ISOLATION3_CNX2.SQL
USE pubs
GO
-- Wait until other connection says can start, then sleep 10 secs
EXEC sem_wait 1
WAITFOR DELAY "000:00:10"
GO
UPDATE authors SET au_lname='Smith'
GO

(23 rows affected)

EXEC sem_set 0    -- Tell other connection done with first part.
EXEC sem_wait 1   -- Wait until other connection says can start,
                  -- then sleep 10 secs.
GO
WAITFOR DELAY "000:00:10"
GO
UPDATE authors SET au_lname='Jones'
GO

(23 rows affected)

-- ISOLATION3_CNX1.SQL
-- Illustrate Read Committed vs. Repeatable Read
USE pubs
GO
UPDATE authors SET au_lname='Doe'   -- Make sure no Smith or Jones

(23 rows affected)

EXEC sem_set 0    -- Clear semaphore to start
GO
-- First verify there are no authors named 'Smith'
SELECT au_lname FROM authors WHERE au_lname='Smith'
GO

au_lname

(0 rows affected)

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
GO
-- Signal other connection it's OK to update, then wait for it to
-- proceed. (Obvious possible race condition here; this is just
-- for illustration.)
EXEC sem_set 1
GO
BEGIN TRAN
SELECT au_lname FROM authors WHERE au_lname='Smith'
GO

au_lname

(0 rows affected)

WAITFOR DELAY "000:00:15"
SELECT au_lname FROM authors WHERE au_lname='Smith'

au_lname
--------------------------
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith
Smith

(23 rows affected)

COMMIT TRAN
GO
EXEC sem_wait 0
EXEC sem_set 1
GO
-- Now do the same thing, but with REPEATABLE READ isolation
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
GO
BEGIN TRAN
SELECT au_lname FROM authors WHERE au_lname='Jones'
GO

au_lname

(0 rows affected)

-- Wait for other connection to have a chance to make and commit
-- its changes
WAITFOR DELAY "000:00:15"
SELECT au_lname FROM authors WHERE au_lname='Jones'

au_lname

(0 rows affected)

COMMIT TRAN
GO
-- Now notice that Jones updates have been done
SELECT au_lname FROM authors WHERE au_lname='Jones'
GO

au_lname
--------------------------
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones
Jones

(23 rows affected)

EXEC sem_set 0    -- Tell other connection done

As you can see, when the isolation level was set to READ COMMITTED, selecting the same data twice within a transaction yielded totally different results: first no Smiths were found, and then 23 of them appeared. If this were a banking transaction, these results would be unacceptable. For applications that are less sensitive to minor changes, or when business functions guard against such inconsistencies in other ways, this behavior might be acceptable.

After changing the isolation level to REPEATABLE READ, we got the same result (no Joneses) with both SELECT statements. Immediately after the transaction was committed, the other connection updated all the names to Jones. So when we selected for Jones once more, the update had taken place, but at some cost. In the first case, the other connection was able to do the update as soon as the first SELECT was processed; it didn't have to wait for the second query and subsequent transaction to complete. In the second case, the second connection had to wait until the first connection completed the transaction in its entirety. In this example, concurrency was significantly reduced but consistency was perfect.

EXAMPLE 3

This example illustrates the semantic differences between selecting data under the isolation level of REPEATABLE READ and under the isolation level of SERIALIZABLE. Use the command file RUN_ISOLATION4.CMD to run these two scripts simultaneously. (As before, you'll need to edit the file if SQL Server isn't on your local machine.) Output will be written to ISOLATION4_CNX1.OUT and ISOLATION4_CNX2.OUT.

-- ISOLATION4_CNX2.SQL
USE pubs
SET QUOTED_IDENTIFIER OFF
SET ANSI_DEFAULTS OFF
GO
-- Wait until other connection says can start, then sleep 10 secs
EXEC sem_wait 1
WAITFOR DELAY "000:00:10"
INSERT titles VALUES ('BU2000', 'Inside SQL Server', 'popular_comp',
    '0877', 49.95, 5000, 10, 0, null, '12/31/98')
GO
EXEC sem_set 0     -- Tell other connection done with first part.
EXEC sem_wait 1    -- Wait until other connection says can start,
                   -- then sleep 10 secs.
GO
WAITFOR DELAY "000:00:10"
GO
INSERT titles VALUES ('BU3000', 'Inside Visual InterDev', 'popular_comp',
    '0877', 39.95, 10000, 12, 15000, null, '2/14/98')
GO

-- ISOLATION4_CNX1.SQL
-- Illustrate Repeatable Read vs. Serializable (aka levels 2 & 3)
--
USE pubs
SET QUOTED_IDENTIFIER OFF
SET ANSI_DEFAULTS OFF
GO
EXEC sem_set 0    -- Clear semaphore to start
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
GO
BEGIN TRAN
SELECT title FROM titles WHERE title_id LIKE 'BU%'
GO

title
---------------------------------------------------------
The Busy Executive's Database Guide
Cooking with Computers: Surreptitious Balance Sheets
You Can Combat Computer Stress!
Straight Talk About Computers

-- Wait for other connection to make and commit its changes
WAITFOR DELAY "000:00:15"
SELECT title FROM titles WHERE title_id LIKE 'BU%'

title
---------------------------------------------------------
Inside SQL Server
The Busy Executive's Database Guide
Cooking with Computers: Surreptitious Balance Sheets
You Can Combat Computer Stress!
Straight Talk About Computers

COMMIT TRAN
GO
EXEC sem_wait 0
EXEC sem_set 1
GO
-- Now do the same thing, but with SERIALIZABLE isolation
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRAN
SELECT title FROM titles WHERE title_id LIKE 'BU%'
GO

title
---------------------------------------------------------
Inside SQL Server
The Busy Executive's Database Guide
Cooking with Computers: Surreptitious Balance Sheets
You Can Combat Computer Stress!
Straight Talk About Computers

-- Wait for other connection to have a chance to make and commit
-- its changes
EXEC sem_set 1
GO
SELECT title FROM titles WHERE title_id LIKE 'BU%'

title
---------------------------------------------------------
Inside SQL Server
The Busy Executive's Database Guide
Cooking with Computers: Surreptitious Balance Sheets
You Can Combat Computer Stress!
Straight Talk About Computers

COMMIT TRAN
GO
-- Now notice that the second
-- new title has been inserted after the COMMIT TRAN
SELECT title FROM titles WHERE title_id LIKE 'BU%'
GO

title
---------------------------------------------------------
Inside SQL Server
Inside Visual InterDev
The Busy Executive's Database Guide
Cooking with Computers: Surreptitious Balance Sheets
You Can Combat Computer Stress!
Straight Talk About Computers

As you can see, when the isolation level was set to REPEATABLE READ, phantoms were possible. Another connection could add new data so that reissuing the same SELECT within a transaction would return more rows the second time. (In this case, Inside SQL Server just mysteriously appeared; it's the phantom row.)

After changing the isolation level to SERIALIZABLE, we got the same result (five titles) with both SELECT statements. Immediately after the transaction committed, the other connection inserted a new book with the title Inside Visual InterDev. So when we again selected all the titles, the INSERT had taken place, but again at some cost. In the first case, the other connection was able to do the INSERT as soon as the first SELECT was processed; it didn't have to wait for the second SELECT and subsequent transaction to complete. In the second case, the second connection had to wait until the first connection completed the transaction in its entirety. Just as in Example 2, concurrency was significantly reduced but consistency was perfect.

Another way to look at the difference between these isolation levels is to consider what modification operations other connections can perform once you've selected data within a transaction. Under REPEATABLE READ, the actual rows read can't be modified or deleted: all data read is locked so that no modifications to it can occur. However, new rows can still be added within the range of data specified in the WHERE clause of the SELECT statement, because an INSERT adds new data and doesn't conflict with the locks held on existing rows. Under SERIALIZABLE, once your transaction has selected data, no other connection can modify that data or the covered range of data. SERIALIZABLE locks ranges, including ranges where no data currently exists, so INSERTs into the range aren't allowed either.

Additional Characteristics of Transactions

In addition to isolation levels, it's important to understand the following characteristics of transactions.

First, a single UPDATE, DELETE, or INSERT/SELECT statement that affects multiple rows is always an atomic operation: it must complete without error, or it's automatically rolled back. For example, if you performed a single UPDATE statement that updated all rows, but one row failed a constraint, the operation would be terminated with no rows updated. Because there's no way to do error checking for each row within such a statement, any error rolls back the entire statement. However, the batch isn't aborted; execution proceeds to the next statement in the batch. This holds even for INSERT operations that are based on a SELECT statement (INSERT/SELECT or SELECT INTO).
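A quick way to see this statement-level atomicity is with a hypothetical table t1:

CREATE TABLE t1 (col1 int CHECK (col1 < 4))
INSERT t1 VALUES (1)
INSERT t1 VALUES (2)
GO
UPDATE t1 SET col1 = col1 + 2    -- the row holding 2 would become 4 and
                                 -- fail the CHECK, so NO rows are updated
SELECT * FROM t1                 -- still returns 1 and 2; the batch went on
GO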

Alternatively, a single UPDATE statement that updates hundreds of rows could fail to complete because of a system failure, possibly unrelated to SQL Server. (For example, the power might go out, and your UPS system might not arrive until next Tuesday.) Because SQL Server considers the single UPDATE to be an atomic unit of work, when SQL Server comes back up and runs recovery on the database, it will be as if the UPDATE had never happened. If SQL Server doesn't know for sure that a transaction committed, it will be rolled back. Even if 999,999 rows out of a million had been updated before the failure, because a transaction is an all-or-nothing operation, we're left with nothing.

Like the statements mentioned earlier, modifications made by a trigger are always atomic with the underlying data modification statement. For example, if an update trigger attempts to update data but fails, the underlying data modification operation is rolled back. Both operations must succeed, or neither does.
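For example, a hypothetical trigger on Keith's table c illustrates the combined unit: when the trigger rolls back, the INSERT that fired it is undone as well.

CREATE TRIGGER tr_c_ins ON c FOR INSERT AS
IF EXISTS (SELECT * FROM inserted WHERE c = 'Z')
BEGIN
    RAISERROR ('Z is not an allowed value', 16, 1)
    ROLLBACK TRAN    -- undoes the INSERT that fired this trigger
END
GO
INSERT c VALUES ('Z')    -- the trigger rolls this back; no row is added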

It might not be obvious, but Example 2 (the REPEATABLE READ example) shows that a transaction can affect pure SELECT operations in which no data is modified. The transaction boundaries define the statements between which a specified isolation level will be assured. Two identical SELECT statements might behave differently if they were executed while wrapped by BEGIN TRAN/COMMIT TRAN with an isolation level of REPEATABLE READ than they would behave if they were executed together in the same batch, but not within a transaction.


