Lesson 1: Understanding the Available Transaction Isolation Levels



Estimated lesson time: 60 minutes


According to the SQL-99 standard, there are four transaction isolation levels:

  • Read committed

  • Read uncommitted

  • Repeatable read

  • Serializable

Each level protects against specific concurrency problems caused by access to the same data by multiple connections. Besides these four isolation levels, SQL Server 2005 also provides the following two additional isolation levels:

  • Snapshot

  • Read committed snapshot (which is a variation of the read committed isolation level rather than an entirely different level)

To set the active transaction isolation level for a connection, execute the SET TRANSACTION ISOLATION LEVEL statement. Note that the read committed snapshot isolation level cannot be set by this statement; how to enable it is covered later in this chapter. To retrieve the active isolation level for the current connection, execute the DBCC USEROPTIONS statement and examine the row labeled "isolation level."
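For example, the following statements switch the current connection to the repeatable read isolation level and then display the active settings:

```sql
-- Set the isolation level for the current connection.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- Display the SET options in effect for this connection;
-- examine the row labeled "isolation level".
DBCC USEROPTIONS;
```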

Types of Concurrency Problems

There are several concurrency problems that can occur in a database management system when multiple users access the same data. The following is a short explanation of each concurrency problem.

Lost Update

A lost update can be interpreted in one of two ways. In the first scenario, a lost update is considered to have taken place when data that has been updated by one transaction is overwritten by another transaction, before the first transaction is either committed or rolled back. This type of lost update cannot occur in SQL Server 2005 because it is not allowed under any transaction isolation level.

The other interpretation of a lost update is when one transaction (Transaction #1) reads data into its local memory, and then another transaction (Transaction #2) changes this data and commits its change. After this, Transaction #1 updates the same data based on what it read into memory before Transaction #2 was executed. In this case, the update performed by Transaction #2 can be considered a lost update.
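This read-then-write pattern can be sketched as follows (the table Test.TestTran and its Id column are hypothetical, used only for illustration):

```sql
-- Transaction #1 reads the current value into a variable.
DECLARE @Col INT;
SELECT @Col = Col FROM Test.TestTran WHERE Id = 1;

-- Meanwhile, from another connection, Transaction #2
-- updates the same row and commits:
--   UPDATE Test.TestTran SET Col = 5 WHERE Id = 1;

-- Transaction #1 now writes back a value computed from its
-- stale read; Transaction #2's committed change is lost.
UPDATE Test.TestTran SET Col = @Col + 1 WHERE Id = 1;
```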

Dirty Read

If data that has been changed by an open transaction is accessed by another transaction, a dirty read has taken place. A dirty read can cause problems because it means that a data manipulation language (DML) statement accessed data that logically does not exist yet or will never exist (if the open transaction is rolled back). All isolation levels except for read uncommitted protect against dirty reads.

Non-Repeatable Read

If a specific set of data is accessed more than once in the same transaction (such as when two different queries against the same table use the same WHERE clause) and, between these accesses, rows are updated or deleted by another transaction, a non-repeatable read has taken place. That is, if two queries against the same table with the same WHERE clause are executed in the same transaction, they can return different results. The repeatable read, serializable, and snapshot isolation levels protect a transaction from non-repeatable reads.

Phantom Reads

Phantom reads are a variation of non-repeatable reads. A phantom read is when two queries in the same transaction, against the same table, use the same WHERE clause, and the query executed last returns more rows than the first query. Only the serializable and snapshot isolation levels protect a transaction from phantom reads.

Using Locks to Solve Concurrency Problems

SQL Server uses locks stored in memory as a way to solve concurrency problems. There are several types of locks that are used. You can find more information about the available lock types in the article "Lock Compatibility" in SQL Server 2005 Books Online at http://msdn2.microsoft.com/en-us/library/aa213041(SQL.80).aspx. Following are the locks that you need to know about for this lesson:

  • Shared or S-locks. Shared locks are sometimes referred to as read locks. There can be several shared locks on any resource (such as a row or a page) at any one time. Shared locks are compatible with other shared locks.

  • Exclusive or X-locks. Exclusive locks are also referred to as write locks. Only one exclusive lock can exist on a resource at any time. Exclusive locks are not compatible with other locks, including shared locks.

  • Update or U-locks. Update locks can be viewed as a combination of shared and exclusive locks. An update lock is used to lock rows when they are selected for update, before they are actually updated. Update locks are compatible with shared locks, but not with other update locks. Lesson 2, "Designing Transactions and Optimizing Locking," discusses update locks further.

All isolation levels always issue exclusive locks for write operations and hold the locks for the entire duration of the transaction. In the next sections, you will look at how shared locks are handled by the different isolation levels. To see which locks are currently being held, you can query the sys.dm_tran_locks dynamic management view or execute the sys.sp_lock system stored procedure.
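For example, a minimal query against the sys.dm_tran_locks view lists each lock request together with its mode and status:

```sql
SELECT request_session_id,  -- the connection holding or requesting the lock
       resource_type,       -- e.g. OBJECT, PAGE, KEY
       request_mode,        -- e.g. S, U, X
       request_status       -- GRANT or WAIT
FROM sys.dm_tran_locks
ORDER BY request_session_id;
```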

Choosing the Correct Isolation Level

Which lock types are acquired by SQL Server depends on the active transaction isolation level. The type of isolation level used can significantly affect both the performance and the results of executed queries.

Read Committed Isolation Level

The read committed transaction isolation level is the default isolation level for new connections in SQL Server. This isolation level guarantees that dirty reads do not occur in your transaction. A row is considered dirty when it has been deleted, updated, or inserted by another transaction in another connection where the transaction has not yet been committed or rolled back. If your connection is using the read committed isolation level and SQL Server encounters a dirty row while executing a DML statement, it will wait until the transaction that currently owns the row has been committed or rolled back before continuing execution.

By default, there is no timeout for these waits. However, you can specify a timeout by executing the SET LOCK_TIMEOUT statement. This statement requires one parameter: the maximum number of milliseconds to wait. Note that if you set it to 0, it will not wait at all. Instead, you will immediately receive an error message, and the executing DML statement will be stopped and rolled back.
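For example:

```sql
-- Wait at most 2,000 milliseconds for a lock; after that,
-- error 1222 ("Lock request time out period exceeded") is
-- raised and the waiting statement is stopped.
SET LOCK_TIMEOUT 2000;

-- A value of -1 (the default) means wait indefinitely.
SET LOCK_TIMEOUT -1;
```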

In the read committed isolation level, shared locks are acquired for read operations, but they are released as soon as they have been granted. The shared locks are not held for the duration of the transaction. The following is an example of the behavior of this isolation level. The rows in the following table show the order in which the statements are executed. Note that the ALTER DATABASE statements are used to make sure that the correct settings are used for the database. These ALTER DATABASE statements require that no connections exist against the database.

ALTER DATABASE <current_database> SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE <current_database> SET READ_COMMITTED_SNAPSHOT OFF;

-- Table used in this example.
CREATE TABLE Test.TestTran (Col INT NOT NULL);


Connection 1

Connection 2

 BEGIN TRAN;       INSERT Test.TestTran (Col)           VALUES (1); 

 - 

 - 

 SELECT * FROM Test.TestTran WHERE Col = 1; /* When this SELECT statement is executed, the connection will be blocked and wait for the transaction in connection 1 to complete. */ 

 UPDATE Test.TestTran SET Col = 2       WHERE Col = 1; 

Still waiting

 COMMIT TRAN; 

Because the transaction in connection 1 is now completed, this connection's SELECT statement now returns an empty result set (because the value of the Col-column is now 2 and it searched for Col = 1).

 -- Drop the table used in this example. DROP TABLE Test.TestTran; 

The following code sample uses the SET LOCK_TIMEOUT setting together with a try/catch block to return only the rows found before the first dirty row is encountered by the SELECT statement.

ALTER DATABASE <current_database> SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE <current_database> SET READ_COMMITTED_SNAPSHOT OFF;

-- Table used in this example.
CREATE TABLE Test.TestTran (Col INT NOT NULL);


Connection 1

Connection 2

 INSERT Test.TestTran (Col)       VALUES (1); BEGIN TRAN;       INSERT Test.TestTran (Col)           VALUES (2); 

 - 

 - 

 BEGIN TRY      SET LOCK_TIMEOUT 0;      SELECT * FROM Test.TestTran; END TRY BEGIN CATCH END CATCH /* Returns only Col = 1. */ 

 COMMIT TRAN; 

 - 

 -- Drop the table used in this example. DROP TABLE Test.TestTran; 

Read Uncommitted Isolation Level

The read uncommitted isolation level is tempting to consider because it can provide great performance benefits. Unfortunately, this is at the expense of returning valid results. This isolation level only guarantees that a transaction does not experience lost updates. When executing a DML statement in the read uncommitted isolation level, SQL Server will allow dirty rows to be returned. This might cause you to read data that has never existed because the data that was read was inserted or updated by another transaction that was later rolled back.

You should consider using this isolation level only in routines where the issue of dirty reads is not a problem. Such routines usually return information that is not directly used as a basis for decisions. A typical example where dirty reads might be allowed is a query that returns data used only to populate lists in the application (such as a list of customers), or a database that is used only for read operations.

The read uncommitted isolation level is by far the best isolation level to use for performance, as it does not wait for other connections to complete their transactions when it wants to read data that these transactions have modified. In the read uncommitted isolation level, shared locks are not acquired for read operations; this is what makes dirty reads possible. This fact also reduces the work and memory required by the SQL Server lock manager. Because shared locks are not acquired, it is no problem to read resources locked by exclusive locks. However, while a query is executing in the read uncommitted isolation level, another type of lock called a schema stability lock (Sch-S) is acquired to prevent data definition language (DDL) statements from changing the table structure. The following example shows the behavior of the read uncommitted isolation level:

ALTER DATABASE <current_database> SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE <current_database> SET READ_COMMITTED_SNAPSHOT OFF;

-- Table used in this example.
CREATE TABLE Test.TestTran (Col INT NOT NULL);


Connection 1

Connection 2

 BEGIN TRAN;       INSERT Test.TestTran (Col)           VALUES (1); 

 - 

 - 

 SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; 

 - 

 SELECT * FROM Test.TestTran; /* Returns Col = 1 which has not yet been committed, a dirty read. */ 

 UPDATE Test.TestTran SET Col = 2       WHERE Col = 1; 

 - 

 - 

 SELECT * FROM Test.TestTran; /* Returns Col = 2 which has not yet been committed, another dirty read. */ 

 ROLLBACK TRAN; 

 - 

 - 

 SELECT * FROM Test.TestTran; /* Returns an empty result set (which, in this case, is not a dirty read). This shows the problem with the read uncommitted isolation level. The previous 2 statements returned data that logically never existed! */ 

 -- Drop the table used in this example. DROP TABLE Test.TestTran; 

Repeatable Read Isolation Level

In the repeatable read isolation level, SQL Server not only guarantees that dirty reads do not happen in your transaction, but it also guarantees that if you issue two DML statements against the same table with the same WHERE clause (one query could be a select and the next could be an update), both queries will return the same rows. The latter query might, however, return additional rows: a repeatable read protects against deletes and updates of accessed rows, but not against inserts that match the specific WHERE clause, which is the phantom read concurrency problem. Note that phantom rows might also occur if you use aggregate functions, although they are not as easy to detect.

The first query might, for example, return the sum over four rows, while the second query returns the sum over five rows. One important thing to note is that, because the repeatable read isolation level is achieved by blocking other transactions, the use of this isolation level greatly increases the number of locks held for the duration of the transaction. This, in turn, can cause serious performance problems due to blocked transactions, and it greatly increases the risk of deadlocks. A deadlock occurs when different connections are waiting for locks held by each other. In the repeatable read isolation level, shared locks are acquired for read operations and are also held for the duration of the transaction, not allowing changes to the data by other transactions. The following example shows the behavior of the repeatable read isolation level:

ALTER DATABASE <current_database> SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE <current_database> SET READ_COMMITTED_SNAPSHOT OFF;

-- Table including some example rows used in this example.
CREATE TABLE Test.TestTran (Col1 INT NOT NULL, Col2 INT NOT NULL);
INSERT Test.TestTran (Col1, Col2) VALUES (1, 1);
INSERT Test.TestTran (Col1, Col2) VALUES (1, 2);


Connection 1

Connection 2

 SET TRANSACTION ISOLATION LEVEL        REPEATABLE READ; 

 SET TRANSACTION ISOLATION LEVEL        READ COMMITTED; 

 BEGIN TRAN;       SELECT * FROM Test.TestTran       WHERE Col1 = 1;       /* Returns 2 rows. */ 

 - 

 - 

 SELECT * FROM Test.TestTran; /* Returns the 2 rows in the table. This statement is allowed because it does not change any rows that are included in the transaction in connection 1. */ 

 - 

 UPDATE Test.TestTran SET Col1 = 2      WHERE Col1 = 1; /* This statement is now blocked by connection 1 because it would cause the transaction in connection 1 to experience a non-repeatable read. */ 

 SELECT * FROM Test.TestTran WHERE Col1 = 1;  /* Still returns 2 rows, i.e. a non-repeatable read did not occur. */ 

Still blocked

 - 

 /* Abort the above query and execute the following insert statement. */ INSERT Test.TestTran (Col1, Col2)       VALUES (1, 3); 

 SELECT * FROM Test.TestTran WHERE Col1 = 1; /* Now returns 3 rows. The rows with the values 1 & 2 for the column Col2 are unchanged. But the row with Col2 = 3 is now included in the result. That row is a phantom read! */ 

 - 

 COMMIT TRAN; 

 - 

 -- Drop the table used in this example. DROP TABLE Test.TestTran; 

Serializable Isolation Level

The serializable isolation level guarantees that none of the concurrency problems mentioned earlier can occur. Because this isolation level also protects from the occurrence of phantom reads through blocking, the use of the serializable isolation level increases the risk for blocked transactions and deadlocks even more than using the repeatable read isolation level. This isolation level guarantees that if you issue two DML statements against the same table with the same WHERE clause, both queries will return exactly the same results, including the same number of rows. To protect the transaction from inserts, SQL Server will need to lock a range of an index over a column that is included in the WHERE clause with shared locks. If such an index does not exist, SQL Server will need to lock the entire table. The following example shows the behavior of the serializable isolation level:

ALTER DATABASE <current_database> SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE <current_database> SET READ_COMMITTED_SNAPSHOT OFF;

-- Table including some example rows used in this example.
CREATE TABLE Test.TestTran (Col1 INT NOT NULL, Col2 INT NOT NULL);
INSERT Test.TestTran (Col1, Col2) VALUES (1, 1);
INSERT Test.TestTran (Col1, Col2) VALUES (1, 2);


Connection 1

Connection 2

 SET TRANSACTION ISOLATION LEVEL        SERIALIZABLE; 

 SET TRANSACTION ISOLATION LEVEL        READ COMMITTED; 

 BEGIN TRAN;      SELECT * FROM Test.TestTran      WHERE Col1 = 1;      /* Returns 2 rows. */ 

 - 

 - 

 SELECT * FROM Test.TestTran; /* Returns the 2 rows in the table. This statement is allowed because it does not change any rows that are included in the transaction in connection 1. */ 

 - 

 UPDATE Test.TestTran SET Col1 = 2      WHERE Col1 = 1; /* This statement is now blocked by connection 1 because it would cause the transaction in connection 1 to experience a non-repeatable read. */ 

 SELECT * FROM Test.TestTran WHERE Col1 = 1; /* Still returns 2 rows, i.e. a non-repeatable read did not occur. */ 

Still blocked

 - 

 /* Abort the above query and execute the following insert statement. */ INSERT Test.TestTran (Col1, Col2)       VALUES (1, 3); /* This statement is also blocked, exactly as the update was, because it would cause the transaction in connection 1 to experience a phantom read. */ 

 SELECT * FROM Test.TestTran WHERE Col1 = 1; /* Still returns 2 rows, i.e. a phantom read did not occur. */ 

 - 

 - 

 /* Abort the above query and execute the following insert statement. */ INSERT Test.TestTran (Col1, Col2)       VALUES (2, 1); /* This statement is also blocked, exactly as the previous insert was. This happened even though the row inserted by this statement would not be included in the query in connection 1. This in turn is because SQL Server found no index to lock a range in, so it locked the entire table. */ 

 COMMIT TRAN; 

 - 

 - 

The insert above now completes, and the row is added to the table.

 -- Drop the table used in this example. DROP TABLE Test.TestTran; 

Snapshot Isolation Level

SQL Server 2005 introduces the snapshot isolation level in addition to the four isolation levels defined by the SQL-99 standard. This isolation level can be very useful because it protects against all of the previously mentioned concurrency problems, just as the serializable isolation level does. However, it does not provide this protection by blocking access to rows by other transactions; it does so by storing versions of rows while the transaction is active as well as tracking when a specific row was inserted. This means that while in a transaction in the snapshot isolation level, depending on what has occurred since the transaction was started, you might not be looking at the current version of a row, but rather an older version of the row presenting it as it looked when the transaction first accessed it. Because of this, when working in the snapshot isolation level, the transaction can be considered to be executing in an "alternate reality." For more information on the snapshot isolation level, see the article "Using Snapshot Isolation" at http://msdn2.microsoft.com/en-us/library/tcbchxcb.aspx.

If a transaction is started by executing the BEGIN TRANSACTION statement (the transaction is actually started when the first statement within it is executed) at 5:02 A.M., for example, it will only return data from this exact point in time, independent of how many rows are being deleted, inserted, or updated while the transaction is active. The older row versions are stored as a linked list of rows in the tempdb system database. There can be multiple versions of a single row at any point in time in order to service multiple snapshot transactions starting at different points in time.

When snapshot transactions are being used, the tempdb database experiences additional activity and might require additional disk space. Because of storage of row versions, write operations will suffer some performance degradation. The snapshot isolation level can be particularly useful for read intensive databases and for databases that have performance problems caused by blocking. If you are looking for performance improvement only and can accept dirty reads, remember that the read uncommitted isolation level provides the best performance.

Using the Snapshot Isolation Level

To be able to use the snapshot isolation level in a database, the database option ALLOW_SNAPSHOT_ISOLATION must first be set to ON. This database option requires that there are no connections to the database when it is set. It is the setting of this database option, rather than the start of a transaction in the snapshot isolation level, that starts the process of generating and storing row versions. This is notable because it means that you pay the performance penalty of generating row versions for all transactions in the database, not just for transactions that use the snapshot isolation level. This occurs because the old version of a row must have been made available when the row was changed or deleted, even if the change was made by a transaction in an isolation level other than snapshot. In the snapshot isolation level, shared locks are not acquired for read operations because row versions are used instead. The following example shows the behavior of the snapshot isolation level:

ALTER DATABASE <current_database> SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE <current_database> SET READ_COMMITTED_SNAPSHOT OFF;

-- Table including some example rows used in this example.
CREATE TABLE Test.TestTran (Col1 INT NOT NULL, Col2 INT NOT NULL);
INSERT Test.TestTran (Col1, Col2) VALUES (1, 1);
INSERT Test.TestTran (Col1, Col2) VALUES (1, 2);
INSERT Test.TestTran (Col1, Col2) VALUES (2, 1);


Connection 1

Connection 2

 SET TRANSACTION ISOLATION LEVEL      SNAPSHOT; 

 SET TRANSACTION ISOLATION LEVEL        READ COMMITTED; 

 BEGIN TRAN;      SELECT * FROM Test.TestTran      WHERE Col1 = 1;      /* Returns 2 rows. */ 

 - 

 - 

 UPDATE Test.TestTran SET Col1 = 2 WHERE Col1 = 1 AND Col2 = 1; 

 SELECT * FROM Test.TestTran      WHERE Col1 = 1; /* Still returns the same 2 rows with exactly the same values. */ 

 - 

 SELECT * FROM Test.TestTran      WHERE Col1 = 1; /* Still returns the same 2 rows with exactly the same values. */ 

 INSERT Test.TestTran (Col1, Col2)       VALUES (1, 3); 

 - 

 DELETE Test.TestTran; /* Deletes all rows in the table... */ 

 SELECT * FROM Test.TestTran WHERE Col1 = 1; /* Still returns the same 2 rows with exactly the same values. */ 

 - 

 SELECT * FROM Test.TestTran; /* Returns the 3 rows that existed in the table when the transaction was started. */ 

 - 

 COMMIT TRAN; 

 - 

 SELECT * FROM Test.TestTran; /* Now returns an empty result set because the alternate reality of the transaction has been discarded. */ 

 - 

 -- Drop the table used in this example. DROP TABLE Test.TestTran; 

To track snapshot transactions and row versions, you can query the dynamic management views sys.dm_tran_active_snapshot_database_transactions (which shows the current active snapshot transactions) and sys.dm_tran_version_store (which shows you the storage for row versions).
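For example, the following minimal queries against these two views show which sessions are running snapshot transactions and what is currently stored in the version store:

```sql
-- Currently active snapshot transactions.
SELECT session_id,
       transaction_id,
       elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions;

-- Row versions currently stored in tempdb.
SELECT transaction_sequence_num,
       version_sequence_num,
       database_id
FROM sys.dm_tran_version_store;
```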

Update Conflicts

There is an additional concurrency problem not yet mentioned because it is specific to the snapshot isolation level. If a specific row (or version of a row) is read in snapshot isolation, SQL Server guarantees that you will get the same row if you issue the query later in the transaction. What happens if the later query is an UPDATE or DELETE statement and the row has changed since it was read the first time? SQL Server can't use the current version of the row as the base for the update because it would break the promise of the row not changing while the snapshot transaction is active. And it can't use the row version used by the snapshot transaction as a base because the other transaction that updated or deleted the row would experience a lost update (which is not allowed or supported in SQL Server). Instead, the snapshot transaction is rolled back, and it receives the following error message:

 Msg 3960, Level 16, State 4, Line 1 Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'Test.TestTran' directly or indirectly in database 'TestDatabase' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement. 

Of course, this error can be caught in a try/catch block, but then the transaction state (XACT_STATE()) will be set to uncommittable, meaning that the transaction must be rolled back. To find concurrent snapshot transactions that might experience update conflicts, you can query the sys.dm_tran_transactions_snapshot dynamic management view. Here is an example of an update conflict occurrence:

ALTER DATABASE <current_database> SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE <current_database> SET READ_COMMITTED_SNAPSHOT OFF;

-- Table including some example rows used in this example.
CREATE TABLE Test.TestTran (Col1 INT NOT NULL, Col2 INT NOT NULL);
INSERT Test.TestTran (Col1, Col2) VALUES (1, 1);


Connection 1

Connection 2

 SET TRANSACTION ISOLATION LEVEL        SNAPSHOT; 

 SET TRANSACTION ISOLATION LEVEL        READ COMMITTED; 

 BEGIN TRAN;       SELECT * FROM Test.TestTran        WHERE Col1 = 1; /*  Returns the value 1 for the column Col2. */ 

 - 

 - 

 UPDATE Test.TestTran SET        Col2 = 10 WHERE Col1 = 1; 

 UPDATE Test.TestTran SET       Col2 = Col2 + 1 WHERE Col1 = 1; /* Receives the update conflict error and the transaction is rolled back. According to the snapshot, the value of Col2 should become 2, but that would cause the transaction in connection 2 to be lost (or not durable according to ACID). */ 

 - 

 SELECT * FROM Test.TestTran       WHERE Col1 = 1; /* Now returns the value 10 for the Col2 column because the snapshot transaction was rolled back. */ 

 - 

 -- Drop the table used in this example. DROP TABLE Test.TestTran; 

Because of update conflicts, the snapshot isolation level is not the best choice if you are reading and later updating a specific set of rows within a transaction. It is, however, interesting to know that, regardless of whether rows have changed, you will not experience an update conflict if you execute an INSERT...SELECT statement within the snapshot transaction.

If you must issue an UPDATE or DELETE statement within a snapshot transaction, consider issuing it within a try/catch block. You can also consider switching back to the read committed isolation level before you issue the update. This will block the transaction if an open transaction has updated or deleted the rows that need to be accessed and, if the transaction is not blocked, the update will use the data from the current row version rather than the one in your snapshot. Remember that the update might experience phantom rows, so you need to specify exactly which rows to update (by referencing the primary key column in your WHERE clause).
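A minimal sketch of such a try/catch block, using the Test.TestTran table from the previous example, might look like this:

```sql
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    SELECT * FROM Test.TestTran WHERE Col1 = 1;
BEGIN TRY
    -- This update fails with error 3960 if another
    -- transaction changed the row after the snapshot
    -- transaction read it.
    UPDATE Test.TestTran SET Col2 = Col2 + 1 WHERE Col1 = 1;
    COMMIT TRAN;
END TRY
BEGIN CATCH
    -- XACT_STATE() returns -1 when the transaction is
    -- uncommittable and must be rolled back.
    IF ERROR_NUMBER() = 3960 AND XACT_STATE() = -1
        ROLLBACK TRAN;
END CATCH;
```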

You can also combine the two isolation levels by reading from the snapshot isolation level and writing to the read committed isolation level. If a transaction is implemented this way, other transactions cannot be considered durable, because they can have their committed data overwritten by old data. (This is also an alternative way of interpreting the lost update concurrency problem.) Here is an example of this scary but feasible implementation. In this case, the transaction isolation level is set for the UPDATE statement using a table locking hint.

ALTER DATABASE <current_database> SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE <current_database> SET READ_COMMITTED_SNAPSHOT OFF;

-- Table including some example rows used in this example.
CREATE TABLE Test.TestTran (
    PKCol INT IDENTITY(1,1) PRIMARY KEY
    ,Col1 INT NOT NULL
    ,Col2 INT NOT NULL
);
INSERT Test.TestTran (Col1, Col2) VALUES (1, 1);


Connection 1

Connection 2

 SET TRANSACTION ISOLATION LEVEL       SNAPSHOT; 

 SET TRANSACTION ISOLATION LEVEL        READ COMMITTED; 

 BEGIN TRAN;       SELECT * FROM Test.TestTran       WHERE Col1 = 1; /* Returns 1 row with the value 1 for the Col2 column. */ 

 - 

 - 

 INSERT Test.Testtran        (Col1, Col2) VALUES (1, 2); 

 SELECT * FROM Test.TestTran WHERE Col1 = 1; /* Still returns the 1 row with the value 1 for the Col2 column.  */ 

 - 

 - 

 UPDATE Test.TestTran SET        Col2 = 10 WHERE Col1 = 1; /* Sets the Col2 column of both rows to the value 10. */ 

 SELECT * FROM Test.TestTran WHERE Col1 = 1; /* Still returns the 1 row with the value 1 for the Col2 column. */ 

 SELECT * FROM Test.TestTran WHERE Col1 = 1; /* Returns 2 rows with the value 10 for the Col2 column in both rows. */ 

 UPDATE currRow SET       Col2 = snapshotRow.Col2 + 1 FROM Test.TestTran AS currRow       WITH(READCOMMITTED) INNER JOIN Test.TestTran        AS snapshotRow        ON snapshotRow.PKCol              = currRow.PKCol; /* This update is now executed in read committed isolation (because of the locking hint) but reads the data from the snapshot isolation. This causes the data to be overwritten without an update conflict being issued. If the transaction in connection 2 was still open, this query would have been blocked until it finished. But when it is no longer blocked, it will just overwrite whatever data had been put in the row by the other transaction. */ 

 - 

 SELECT * FROM Test.TestTran  WHERE Col1 = 1; /* Still returns the 1 row, but now with the value 2 for the Col2 column (because of the update above). */ 

 SELECT * FROM Test.TestTran  WHERE Col1 = 1; /* This SELECT statement is now blocked because it is being executed in read committed isolation and the transaction in connection 1 has not been completed yet. */ 

 COMMIT TRAN; 

The SELECT statement above now returns the 2 rows, but for the first row it returns the value 2 for the Col2 column.

 -- Drop the table used in this example. DROP TABLE Test.TestTran; 

Read Committed Snapshot Isolation Level

To the four isolation levels defined by the SQL-99 standard, SQL Server 2005 adds the read committed snapshot isolation level. This isolation level is a bit different from the previous isolation levels because it cannot be set using the SET TRANSACTION ISOLATION LEVEL statement; instead, it is enabled by setting the database option READ_COMMITTED_SNAPSHOT to ON. You could also say that this isn't really its own transaction isolation level, but rather a variation of the read committed isolation level. By changing this database option, the behavior of the read committed isolation level is changed to read committed snapshot. The read committed isolation level described earlier no longer exists for the specific database (although it can be requested by using a table locking hint, described later in this lesson). For more information about the read committed snapshot isolation level, see the article "Using Snapshot Isolation" at http://msdn2.microsoft.com/en-us/library/tcbchxcb.aspx.

Important 

The READ_COMMITTED_SNAPSHOT database setting

Setting the database option READ_COMMITTED_SNAPSHOT to ON for an existing database changes the behavior of the read committed isolation level. It is very important to thoroughly test the database after setting this option to avoid introducing bugs into your application.
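You can verify which databases have either row versioning option enabled by querying the sys.databases catalog view:

```sql
SELECT name,
       snapshot_isolation_state_desc,   -- OFF, ON, or a transition state
       is_read_committed_snapshot_on    -- 1 if READ_COMMITTED_SNAPSHOT is ON
FROM sys.databases;
```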

The read committed snapshot isolation level is a combination of the isolation levels read committed and snapshot. Because of this, the database option READ_COMMITTED_SNAPSHOT will also cause SQL Server to start generating and storing row versions in the tempdb system database. As opposed to the read committed isolation level, in this isolation level, readers are not blocked by writers, and vice versa. This is implemented through the reading of the latest available version of a row, in case the row is locked by another transaction that updated or deleted the row but had not yet completed.

Isn't this what snapshot isolation does? No, not exactly. In the read committed snapshot case, SQL Server always reads the latest committed version of the row; it doesn't matter when the reading transaction started. This means that it never returns dirty rows, but it can still encounter nonrepeatable reads and phantom reads. In the read committed snapshot isolation level, shared locks are not acquired for read operations because row versions are used instead.

So, if you already pay the price of generating row versions, why not just use the snapshot isolation level? First, under read committed snapshot, SQL Server doesn't need to find a particular version of a row; it only needs to find the last committed version. Also, update conflicts don't occur in the read committed snapshot isolation level. Because this level never guarantees that it will only return data as it appeared when the transaction started, SQL Server breaks no guarantee by using the newly committed version of a row. Instead of raising an update conflict error, SQL Server simply blocks the connection until the other connection releases its locks, and then executes the update. Here is an example:

 ALTER DATABASE <current_database> SET ALLOW_SNAPSHOT_ISOLATION ON;
 ALTER DATABASE <current_database> SET READ_COMMITTED_SNAPSHOT ON;

 -- Table including some example rows used in this example.
 CREATE TABLE Test.TestTran (Col1 INT NOT NULL, Col2 INT NOT NULL);
 INSERT Test.TestTran (Col1, Col2) VALUES (1, 1);


Connection 1:

     SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

Connection 2:

     SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

Connection 1:

     BEGIN TRAN;
         SELECT * FROM Test.TestTran
         WHERE Col1 = 1; /* Returns 1 row. */

Connection 2:

     BEGIN TRAN;
         UPDATE Test.TestTran SET
             Col2 = 10
         WHERE Col1 = 1;
         INSERT Test.TestTran
             (Col1, Col2)
         VALUES (1, 2);

Connection 1:

     SELECT * FROM Test.TestTran
     WHERE Col1 = 1;
     /* Returns the same row as above, still with the value 1 for the
        Col2 column. It does not return the row inserted by connection 2. */

Connection 2:

     COMMIT TRAN;

Connection 1:

     SELECT * FROM Test.TestTran
     WHERE Col1 = 1;
     /* Now returns both rows, including the row inserted by connection 2.
        The first row now includes the updated value of the Col2 column. */

     UPDATE Test.TestTran SET
         Col2 = Col2 + 1
     WHERE Col1 = 1;
     /* Does not encounter an update conflict. Rather, it updates both
        rows' Col2 column and sets it to the values 3 and 11. */
     SELECT * FROM Test.TestTran WHERE Col1 = 1;

     COMMIT TRAN;

 -- Drop the table used in this example.
 DROP TABLE Test.TestTran;

Exam Tip 

The difference in behavior between snapshot isolation and read committed snapshot is an important part of this exam.

Avoiding Extensive Blocking

There are a few easy rules for reducing the risk of blocking issues caused by extensive locking:

  • Use the least-restrictive isolation level possible. Consider whether you can make use of the read uncommitted isolation level.

  • Avoid using the repeatable read and serializable isolation levels.

  • Keep your transactions as short as possible (measured in execution time, not number of T-SQL code lines).

  • If you mostly read data, consider using the snapshot and read committed snapshot isolation levels.
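As a sketch of the first two rules (the table name Test.TestTable is hypothetical here), a report query that can tolerate dirty reads might lower the isolation level for its duration and then restore the default:

```sql
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- This scan reads uncommitted data and acquires no shared locks,
-- so it neither blocks nor is blocked by concurrent writers.
SELECT COUNT(*) FROM Test.TestTable;

-- Restore the default isolation level for the rest of the session.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```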

Lab: Comparing Transaction Isolation Levels

This lab contains two exercises. In the first exercise, you investigate how locks are acquired in the read committed isolation level, which is the default isolation level. In the second exercise, you perform the same tests against the read committed snapshot isolation level.

Important 

Lab requirements

You will need to have SQL Server 2005 and the Adventure Works database installed before you can complete this lab. Refer to the Introduction for setup instructions.

Exercise 1: Acquire Locks by Using the Read Committed Isolation Level

  1. Open Microsoft SQL Server Management Studio, and connect to an instance of SQL Server 2005.

  2. In a new query window, which will be referred to as Connection 1, type and execute the following SQL statements to create the TestDB database, the Test schema, and the table that you will use in this exercise:

     -- Connection 1 - Session ID: <put @@SPID result here>
     /* Leave the above line to easily see that this query window
        belongs to Connection 1. */
     SELECT @@SPID;
     GO
     CREATE DATABASE TestDB;
     GO
     USE TestDB;
     GO
     CREATE SCHEMA Test;
     GO
     CREATE TABLE Test.TestTable (
         Col1 INT NOT NULL
         ,Col2 INT NOT NULL
     );
     INSERT Test.TestTable (Col1, Col2) VALUES (1,10);
     INSERT Test.TestTable (Col1, Col2) VALUES (2,20);
     INSERT Test.TestTable (Col1, Col2) VALUES (3,30);
     INSERT Test.TestTable (Col1, Col2) VALUES (4,40);
     INSERT Test.TestTable (Col1, Col2) VALUES (5,50);
     INSERT Test.TestTable (Col1, Col2) VALUES (6,60);

  3. Open another query window, which will be referred to as Connection 2, and type and execute the following SQL statement to prepare the connection:

     -- Connection 2 - Session ID: <put @@SPID result here>
     /* Leave the above line to easily see that this query window
        belongs to Connection 2. */
     SELECT @@SPID;
     GO
     USE TestDB;

  4. Open a third query window, which will be referred to as Connection 3, and type and execute the following SQL statement to prepare the connection:

     -- Connection 3
     /* Leave the above line to easily see that this query window
        belongs to Connection 3. */
     USE TestDB;

  5. In Connection 1, execute the following SQL statements to start a transaction in the read committed transaction isolation level, and read a row from the test table (but do not commit the transaction!).

     -- Connection 1
     SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
     BEGIN TRAN;
         SELECT * FROM Test.TestTable
         WHERE Col1 = 1;

  6. To see which locks have been acquired by the transaction in Connection 1, open Connection 3, and execute the following SELECT statement. In the line of code that contains <@@SPID of Connection 1>, be sure to replace this with the ID value returned by the code executed in step 2 of this exercise.

     SELECT
         resource_type
         ,request_mode
         ,request_status
     FROM sys.dm_tran_locks
     WHERE resource_database_id = DB_ID('TestDB')
         AND request_session_id = <@@SPID of Connection 1>
         AND request_mode IN ('S', 'X')
         AND resource_type <> 'DATABASE';

    Why doesn't Connection 1 have a shared lock on the row that it read using the SELECT statement?

  7. In Connection 1, execute the following SQL statement to end the started transaction:

     -- Connection 1
     COMMIT TRAN;

  8. In Connection 2, execute the following SQL statements to start a transaction, and acquire an exclusive lock on one row in the test table.

     -- Connection 2
     BEGIN TRAN;
         UPDATE Test.TestTable SET Col2 = Col2 + 1
         WHERE Col1 = 1;

  9. In Connection 1, execute the following transaction to try to read the row that has been updated (but not committed) by Connection 2. After you execute the code in this step, move on to the next step, as this connection will now be blocked.

     -- Connection 1
     SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
     BEGIN TRAN;
         SELECT * FROM Test.TestTable
         WHERE Col1 = 1; -- This SELECT statement will be blocked!

  10. To see which locks have been acquired by the transaction in Connection 1, open Connection 3, and execute the following SELECT statement. In the line of code that contains <@@SPID of Connection 1>, be sure to replace this with the ID value returned by the code executed in step 2 of this exercise.

     SELECT
         resource_type
         ,request_mode
         ,request_status
     FROM sys.dm_tran_locks
     WHERE resource_database_id = DB_ID('TestDB')
         AND request_session_id = <@@SPID of Connection 1>
         AND request_mode IN ('S', 'X')
         AND resource_type <> 'DATABASE';

    Here you can see that Connection 1 is trying to acquire a shared lock on the row; its request_status of WAIT shows that the request is blocked by Connection 2's exclusive lock.

  11. In Connection 2, execute the following SQL statements to end the transaction started earlier.

     -- Connection 2
     COMMIT TRAN;

  12. Now, first have a look in Connection 1 and note that the SELECT statement has been completed. Switch to Connection 3, and execute its SELECT statement again to see which locks are now acquired by the transaction in Connection 1. In the line of code that contains <@@SPID of Connection 1>, be sure to replace this with the ID value returned by the code executed in step 2 of this exercise.

     SELECT
         resource_type
         ,request_mode
         ,request_status
     FROM sys.dm_tran_locks
     WHERE resource_database_id = DB_ID('TestDB')
         AND request_session_id = <@@SPID of Connection 1>
         AND request_mode IN ('S', 'X')
         AND resource_type <> 'DATABASE';

    You should now see that no locks are acquired by Connection 1. This is because, in the read committed isolation level, shared locks are held only while a row is actually being read; after acquiring the shared lock and reading the row, Connection 1 released the lock immediately.

  13. Close the three query windows for Connections 1, 2, and 3. Open a new query window, and execute the following SQL statement to clean up after this exercise:

     USE master;
     DROP DATABASE TestDB;


Exercise 2: Acquire Locks by Using the Read Committed Snapshot Isolation Level


In this exercise, you execute the same type of transactions as in the previous exercise, but use the read committed snapshot transaction isolation level.

  1. Open SQL Server Management Studio, and connect to an instance of SQL Server 2005.

  2. In a new query window, which will be referred to as Connection 1, type and execute the following SQL statements to create the TestDB database, the Test schema, and the table that will be used in this exercise:

     -- Connection 1
     /* Leave the above line to easily see that this query window
        belongs to Connection 1. */
     CREATE DATABASE TestDB;
     GO
     ALTER DATABASE TestDB SET READ_COMMITTED_SNAPSHOT ON;
     GO
     USE TestDB;
     GO
     CREATE SCHEMA Test;
     GO
     CREATE TABLE Test.TestTable (
         Col1 INT NOT NULL
         ,Col2 INT NOT NULL
     );
     INSERT Test.TestTable (Col1, Col2) VALUES (1,10);
     INSERT Test.TestTable (Col1, Col2) VALUES (2,20);
     INSERT Test.TestTable (Col1, Col2) VALUES (3,30);
     INSERT Test.TestTable (Col1, Col2) VALUES (4,40);
     INSERT Test.TestTable (Col1, Col2) VALUES (5,50);
     INSERT Test.TestTable (Col1, Col2) VALUES (6,60);

  3. Open another query window, which will be referred to as Connection 2, and type and execute the following SQL statement to prepare the connection:

     -- Connection 2
     /* Leave the above line to easily see that this query window
        belongs to Connection 2. */
     USE TestDB;

  4. Open a third query window, which will be referred to as Connection 3, and type and execute the following SQL statement to prepare the connection:

     -- Connection 3
     /* Leave the above line to easily see that this query window
        belongs to Connection 3. */
     USE TestDB;

  5. In Connection 2, execute the following SQL statements to start a transaction, and acquire an exclusive lock on one row in the test table.

     -- Connection 2
     BEGIN TRAN;
         UPDATE Test.TestTable SET Col2 = Col2 + 1
         WHERE Col1 = 1;

  6. In Connection 1, execute the following transaction to try to read the row that has been updated (but not committed) by Connection 2.

     -- Connection 1
     SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
     BEGIN TRAN;
         SELECT * FROM Test.TestTable
         WHERE Col1 = 1;

    Why wasn't the SELECT statement blocked by Connection 2?

    Which values were returned by the query, the values that existed before or after the update?

  7. To see which locks have been acquired by the transaction in Connections 1 and 2, open Connection 3, and execute the following SELECT statement:

     SELECT
         resource_type
         ,request_mode
         ,request_status
     FROM sys.dm_tran_locks
     WHERE resource_database_id = DB_ID('TestDB')
         AND request_mode IN ('S', 'X')
         AND resource_type <> 'DATABASE';

  8. To see if any row versions are available for the TestDB database, execute the following query in Connection 3:

     SELECT * FROM sys.dm_tran_version_store
     WHERE database_id = DB_ID('TestDB');

  9. In Connection 2, execute the following SQL statements to end the transaction started earlier.

     -- Connection 2
     COMMIT TRAN;

  10. In the open transaction in Connection 1, execute the SELECT statement again.

     -- Connection 1
     SELECT * FROM Test.TestTable
     WHERE Col1 = 1;

    Which values are now returned, the values that existed before or after the update?

    Did this SELECT statement return dirty reads?

    Did the first SELECT statement in Connection 1 return dirty reads?

  11. Close the three query windows for Connections 1, 2, and 3. Open a new query window, and execute the following SQL statement to clean up after this exercise:

     USE master;
     DROP DATABASE TestDB;





MCITP Self-Paced Training Kit (Exam 70-442): Designing and Optimizing Data Access by Using Microsoft SQL Server 2005
ISBN: 073562383X
Year: 2007
Pages: 162
