Lesson 3: Managing SQL Server Locking


SQL Server 2000 uses locking to ensure transactional integrity and database consistency. Locking prevents users from reading data being changed by other users and prevents multiple users from changing the same data at the same time. If locking is not used, data within the database might become logically incorrect, and queries executed against that data might produce unexpected results. Although SQL Server enforces locking automatically, you can design applications that are more efficient by understanding and customizing locking in your applications. This lesson provides information about locking and concurrency in a SQL Server database. The lesson also discusses how to customize locking.


After this lesson, you will be able to:

  • Identify the various types of concurrency problems.
  • Describe optimistic and pessimistic concurrency.
  • Set isolation levels and customize locking.

Estimated lesson time: 35 minutes


Types of Concurrency Problems

If locking is not available and several users access a database concurrently, problems might occur if their transactions use the same data at the same time. Concurrency problems can include any of the following situations:

  • Lost or buried updates
  • Uncommitted dependency (dirty read)
  • Inconsistent analysis (non-repeatable read)
  • Phantom reads

Lost Updates

Lost updates occur when two or more transactions select the same row and then update the row based on the value originally selected. Each transaction is unaware of other transactions. The last update overwrites updates made by the other transactions, which results in lost data.

For example, two editors make an electronic copy of the same document. Each editor changes the copy independently and then saves the changed copy, thereby overwriting the original document. The editor who saves the changed copy last overwrites the changes made by the first editor. This problem could be avoided if the second editor could not make changes until the first editor had finished.
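The same situation can be expressed in Transact-SQL. The following sketch is illustrative only and assumes a hypothetical Inventory table with ItemID and Qty columns; the comments describe how the work of two connections interleaves.

 -- One-time setup for the hypothetical table
 CREATE TABLE Inventory (ItemID int PRIMARY KEY, Qty int)
 INSERT INTO Inventory (ItemID, Qty) VALUES (1, 10)

 -- Connection 1: reads the current quantity (sees Qty = 10)
 DECLARE @Qty int
 SELECT @Qty = Qty FROM Inventory WHERE ItemID = 1

 -- Connection 2: reads the same row (also sees 10) and subtracts 3, writing 7
 UPDATE Inventory SET Qty = 7 WHERE ItemID = 1

 -- Connection 1: subtracts 2 from the value it read earlier and writes 8,
 -- overwriting connection 2's change; the subtraction of 3 is lost
 UPDATE Inventory SET Qty = @Qty - 2 WHERE ItemID = 1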

Uncommitted Dependency (Dirty Read)

Uncommitted dependency occurs when a second transaction selects a row that is already being updated by a transaction. The second transaction is reading data that has not been committed yet and might be changed by the transaction updating the row.

For example, an editor is making changes to an electronic document. During the changes, a second editor takes a copy of the document that includes all of the changes made so far and distributes the document to the intended audience. The first editor then decides the changes made so far are wrong and removes the edits and saves the document. The distributed document contains edits that no longer exist and should be treated as if they never existed. This problem could be avoided if no one could read the changed document until the first editor determined that the changes were final.
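In Transact-SQL terms, a dirty read occurs when a connection reads at the READ UNCOMMITTED isolation level (or with the NOLOCK hint) while another connection holds uncommitted changes. The sketch below reuses the hypothetical Inventory table from the earlier example.

 -- Connection 1: changes a row inside an open transaction, but does not commit
 BEGIN TRANSACTION
 UPDATE Inventory SET Qty = 999 WHERE ItemID = 1

 -- Connection 2: reads the uncommitted value
 SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
 SELECT Qty FROM Inventory WHERE ItemID = 1   -- returns 999 (a dirty read)

 -- Connection 1: discards the change; connection 2 has read data that
 -- logically never existed
 ROLLBACK TRANSACTION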

Inconsistent Analysis (Nonrepeatable Read)

Inconsistent analysis occurs when a second transaction accesses the same row several times and reads different data each time. Inconsistent analysis is similar to uncommitted dependency in that another transaction is changing the data that a second transaction is reading. In inconsistent analysis, however, the data read by the second transaction was committed by the transaction that made the change. Also, inconsistent analysis involves multiple reads (two or more) of the same row and each time the information is changed by another transaction (hence the term non-repeatable read).

For example, an editor reads the same document twice, but between each reading, the writer rewrites the document. When the editor reads the document for the second time, it has changed. The original read was not repeatable. This problem could be avoided if the editor could read the document only after the writer has finished writing it.
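The following sketch shows how a nonrepeatable read can surface at the default READ COMMITTED isolation level, again using the hypothetical Inventory table.

 -- Connection 1: reads the same row twice within one transaction
 SET TRANSACTION ISOLATION LEVEL READ COMMITTED
 BEGIN TRANSACTION
 SELECT Qty FROM Inventory WHERE ItemID = 1   -- first read returns 10

 -- Connection 2: commits a change between the two reads
 UPDATE Inventory SET Qty = 25 WHERE ItemID = 1

 -- Connection 1: repeats the read inside the same transaction
 SELECT Qty FROM Inventory WHERE ItemID = 1   -- now returns 25
 COMMIT TRANSACTION
 -- Running connection 1 at REPEATABLE READ would block connection 2's update
 -- until connection 1 committed, so both reads would return the same value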

Phantom Reads

Phantom reads occur when an insert or delete action is performed against a row that belongs to a range of rows being read by a transaction. The transaction's first read of the range of rows shows a row that no longer exists in the second or succeeding read as a result of a deletion by a different transaction. Similarly, as the result of an insert by a different transaction, the transaction's second or succeeding read shows a row that did not exist in the original read.

For example, an editor makes changes to a document submitted by a writer, but when the changes are incorporated into the master copy of the document by the production department, they find that new, unedited material has been added to the document by the author. This problem could be avoided if no one could add new material to the document until the editor and production department finish working with the original document.
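A phantom read can be sketched the same way; the example below assumes the hypothetical Inventory table and shows why even REPEATABLE READ is not enough to prevent phantoms.

 -- Connection 1: reads a range of rows twice within one transaction
 SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
 BEGIN TRANSACTION
 SELECT COUNT(*) FROM Inventory WHERE ItemID BETWEEN 1 AND 100   -- returns 1

 -- Connection 2: inserts a new row that falls inside the range
 INSERT INTO Inventory (ItemID, Qty) VALUES (2, 5)

 -- Connection 1: the same range query now sees an extra (phantom) row
 SELECT COUNT(*) FROM Inventory WHERE ItemID BETWEEN 1 AND 100   -- returns 2
 COMMIT TRANSACTION
 -- Only the SERIALIZABLE isolation level takes the key-range locks that
 -- block such inserts until the reading transaction completes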

Optimistic and Pessimistic Concurrency

SQL Server 2000 offers both optimistic and pessimistic concurrency control. Optimistic concurrency control is available through cursors. Pessimistic concurrency control is the default for SQL Server.

Optimistic Concurrency

Optimistic concurrency control works on the assumption that resource conflicts between multiple users are unlikely (but not impossible) and enables transactions to execute without locking any resources. Only when attempting to change data are resources checked to determine whether any conflicts have occurred. If a conflict occurs, the application must read the data and attempt the change again.
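In SQL Server 2000, this model is exposed through cursors declared with the OPTIMISTIC concurrency option. The sketch below uses the hypothetical Inventory table from the earlier examples; if another connection changes the fetched row before the positioned update runs, the update fails and the application must re-read the row and try again.

 DECLARE InvCursor CURSOR SCROLL OPTIMISTIC FOR
     SELECT ItemID, Qty FROM Inventory

 OPEN InvCursor
 FETCH NEXT FROM InvCursor

 -- No locks are held on the fetched row while the application works with it.
 -- At update time, SQL Server checks whether the row has changed since the
 -- fetch; if it has, the positioned update fails and must be retried.
 UPDATE Inventory SET Qty = Qty - 1 WHERE CURRENT OF InvCursor

 CLOSE InvCursor
 DEALLOCATE InvCursor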

Pessimistic Concurrency

Pessimistic concurrency control locks resources as they are required, for the duration of a transaction. Unless deadlocks occur, a transaction is assured of successful completion.

Isolation Levels

When locking is used as the concurrency control mechanism, it solves concurrency problems. This enables all transactions to run in complete isolation from one another, although more than one transaction can be running at any time.

Serializability is achieved when running a set of concurrent transactions produces the same database state that would result if the transactions were executed serially.

SQL-92 Isolation Levels

Although serialization is important to transactions to ensure that the data in the database is correct at all times, many transactions do not always require full isolation. For example, several writers are working on different chapters of the same book. New chapters can be submitted to the project at any time; however, after a chapter has been edited, a writer cannot make any changes to the chapter without the editor's approval. This way, the editor can be assured of the accuracy of the book project at any point in time, despite the arrival of new, unedited chapters. The editor can see both previously edited chapters and recently submitted chapters.

The level at which a transaction is prepared to accept inconsistent data is termed the isolation level. The isolation level is the degree to which one transaction must be isolated from other transactions. A lower isolation level increases concurrency, but at the expense of data correctness. Conversely, a higher isolation level ensures that data is correct but can negatively affect concurrency. The isolation level required by an application determines the locking behavior that SQL Server uses.

SQL-92 defines the following isolation levels, all of which are supported by SQL Server:

  • Read uncommitted (the lowest level, at which transactions are isolated only enough to ensure that physically corrupt data is not read)
  • Read committed (SQL Server default level)
  • Repeatable read
  • Serializable (the highest level, at which transactions are completely isolated from one another)

If transactions are run at an isolation level of serializable, any concurrent, overlapping transactions are guaranteed to be serializable.

The following isolation levels enable different types of behavior:

Isolation Level       Dirty Read    Non-Repeatable Read    Phantom Read
Read uncommitted      Yes           Yes                    Yes
Read committed        No            Yes                    Yes
Repeatable read       No            No                     Yes
Serializable          No            No                     No

Transactions must be run at an isolation level of repeatable read or higher to prevent lost updates that can occur when two transactions each retrieve the same row and update the row later based on the originally retrieved values. If the two transactions update rows by using a single UPDATE statement and do not base the update on the previously retrieved values, lost updates cannot occur at the default isolation level of read committed.
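The difference between the two patterns can be shown with the hypothetical Inventory table used earlier. The first pattern needs REPEATABLE READ or higher to be safe; the second is safe at the default READ COMMITTED level because the read and the write happen in a single atomic statement.

 -- Vulnerable at READ COMMITTED: another connection can change the row
 -- between the SELECT and the UPDATE
 DECLARE @Qty int
 SELECT @Qty = Qty FROM Inventory WHERE ItemID = 1
 UPDATE Inventory SET Qty = @Qty - 2 WHERE ItemID = 1

 -- Safe at READ COMMITTED: the value is read and written in one statement,
 -- so no other update can slip in between
 UPDATE Inventory SET Qty = Qty - 2 WHERE ItemID = 1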

Customizing Locking

Although SQL Server implements locking automatically, it is possible to customize this feature in applications by performing the following tasks:

  • Handling deadlocks and setting the deadlock priority
  • Handling timeouts and setting the lock timeout duration
  • Setting the transaction isolation level
  • Using table-level locking hints with the SELECT, INSERT, UPDATE, and DELETE statements
  • Configuring the locking granularity for an index

Managing Deadlocks

A deadlock occurs when there is a cyclic dependency between two or more threads for a set of resources. Deadlock can occur on any system that has multiple threads, not just on a relational database management system. A thread in a multi-threaded system can acquire one or more resources (locks, for example). If the resource being acquired is currently owned by another thread, the first thread might have to wait for the owning thread to release the target resource. The waiting thread is said to have a dependency on the owning thread for that particular resource.

If the owning thread wants to acquire another resource that is currently owned by the waiting thread, the situation becomes a deadlock. Neither thread can release the resources that it owns until its transaction is committed or rolled back, and the transactions cannot be committed or rolled back because each is waiting on a resource that the other owns.

Figure 12.1 provides an example of two transactions attempting to access data in two tables. Thread T1 running transaction 1 has an exclusive lock on the Supplier table. Thread T2 running transaction 2 obtains an exclusive lock on the Part table and then wants a lock on the Supplier table. Transaction 2 cannot obtain the lock because transaction 1 has it. Transaction 2 is blocked because it is waiting on transaction 1. Transaction 1 then wants a lock on the Part table but cannot obtain it because transaction 2 has it locked. The transactions cannot release the locks that they are holding until the transaction is committed or rolled back. The transactions cannot commit or roll back because, in order to continue, they require a lock held by the other transaction.
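The scenario in Figure 12.1 can be reproduced with two connections, as sketched below; the Supplier and Part tables and their columns are assumed for illustration only.

 -- Connection 1 (transaction 1): takes an exclusive lock on Supplier
 BEGIN TRANSACTION
 UPDATE Supplier SET Name = 'Contoso' WHERE SupplierID = 1

 -- Connection 2 (transaction 2): takes an exclusive lock on Part, then
 -- blocks waiting for connection 1's lock on Supplier
 BEGIN TRANSACTION
 UPDATE Part SET Description = 'Widget' WHERE PartID = 1
 UPDATE Supplier SET Name = 'Fabrikam' WHERE SupplierID = 1

 -- Connection 1: now blocks waiting for connection 2's lock on Part.
 -- The dependency is cyclic; SQL Server detects the deadlock, rolls back
 -- one transaction as the victim, and returns error 1205 to that connection.
 UPDATE Part SET Description = 'Gadget' WHERE PartID = 1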

NOTE


Deadlocking is often confused with normal blocking. When one transaction has a lock on a resource that another transaction wants, the second transaction waits for the lock to be released. By default, SQL Server transactions do not time out (unless LOCK_TIMEOUT is set). The second transaction is blocked, not deadlocked.

In Figure 12.1, thread T1 has a dependency on thread T2 for the Part table lock resource. Similarly, thread T2 has a dependency on thread T1 for the Supplier table lock resource. Because these dependencies form a cycle, there is a deadlock between threads T1 and T2.

Minimizing Deadlocks

Although deadlocks cannot be avoided completely, the number of deadlocks can be minimized. Minimizing deadlocks can increase transaction throughput and reduce system overhead because fewer transactions are rolled back, undoing all of the work performed by the transaction. In addition, fewer transactions are resubmitted by applications because they were rolled back when they were deadlocked.


Figure 12.1  A deadlock on two transactions accessing the Supplier table and the Part table.

You should adhere to the following guidelines to help minimize deadlocks:

  • Access objects in the same order.
  • Avoid user interaction during transactions.
  • Keep transactions short and in one batch.
  • Use a low isolation level.
  • Use bound connections.

NOTE


Bound connections enable two or more connections to share the same transaction and locks. Bound connections can work on the same data without lock conflicts. Bound connections can be created from multiple connections within the same application or from multiple applications with separate connections. Bound connections also make coordinating actions across multiple connections easier. For more information about bound connections, refer to SQL Server Books Online.
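A minimal sketch of binding two connections follows. It uses the sp_getbindtoken and sp_bindsession system stored procedures; the token string must be passed from the first connection to the second by the application, and the placeholder below is illustrative only.

 -- Connection 1: start a transaction and obtain a bind token for it
 DECLARE @Token varchar(255)
 BEGIN TRANSACTION
 EXEC sp_getbindtoken @Token OUTPUT
 SELECT @Token   -- the application passes this string to connection 2

 -- Connection 2: bind to connection 1's transaction space
 EXEC sp_bindsession '<token string from connection 1>'
 -- Both connections now share the same transaction and locks, so they can
 -- work on the same data without blocking each other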

Customizing Timeouts

When SQL Server cannot grant a lock to a transaction on a resource because another transaction already owns a conflicting lock on that resource, the first transaction becomes blocked while waiting on that resource. If this situation causes a deadlock, SQL Server terminates one of the participating transactions (with no timeout involved). If there is no deadlock, the transaction requesting the lock is blocked until the other transaction releases the lock. By default, there is no mandatory timeout period and no way to test whether a resource is locked before locking it, except to attempt to access the data (and potentially get blocked indefinitely).

NOTE


The sp_who system stored procedure can be used to determine whether a process is being blocked and who is blocking it.
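For example, a nonzero value in the blk column of the sp_who output identifies the process that holds the blocking lock; the same information is available in the blocked column of master..sysprocesses.

 -- List all processes; the blk column shows the SPID of the blocker (0 = not blocked)
 EXEC sp_who

 -- Alternatively, list only the processes that are currently blocked
 SELECT spid, blocked, waitresource, cmd, loginame
 FROM master..sysprocesses
 WHERE blocked <> 0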

The LOCK_TIMEOUT setting enables an application to set a maximum time that a statement will wait on a blocked resource. When a statement has waited longer than the LOCK_TIMEOUT setting, the blocked statement is canceled automatically, and error message 1222, "Lock request time-out period exceeded," is returned to the application.

However, any transaction containing the statement is not rolled back or canceled by SQL Server. Therefore, the application must have an error handler that can trap error message 1222. If an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction might depend on the statement that was never executed.

Implementing an error handler that traps error message 1222 enables an application to handle the timeout situation and take remedial action (for example, automatically resubmitting the statement that was blocked or rolling back the entire transaction).
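Because SQL Server 2000 does not provide a TRY...CATCH construct, Transact-SQL batches typically test @@ERROR after each statement. The following sketch, which reuses the hypothetical Inventory table, shows one way to trap the timeout and roll back the transaction.

 SET LOCK_TIMEOUT 5000   -- wait no more than 5 seconds for any lock

 BEGIN TRANSACTION
 UPDATE Inventory SET Qty = Qty - 1 WHERE ItemID = 1

 IF @@ERROR = 1222
 BEGIN
     -- The statement timed out waiting for a lock. SQL Server leaves the
     -- transaction open, so the application decides whether to retry the
     -- statement or abandon the work.
     ROLLBACK TRANSACTION
     RAISERROR ('Update abandoned: lock request timed out.', 16, 1)
 END
 ELSE
     COMMIT TRANSACTION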

You can use the SET LOCK_TIMEOUT statement to specify the number of milliseconds that a statement will wait for a lock to be released. A value of -1 (the default) means that statements wait indefinitely. The following example restores the default setting and then returns the current value:

 SET LOCK_TIMEOUT -1
 SELECT @@LOCK_TIMEOUT

The SET LOCK_TIMEOUT statement enables an application to set the maximum time that a statement will wait for a blocked resource. When a statement has waited longer than the LOCK_TIMEOUT setting, the blocked statement is automatically canceled and an error message is returned to the application.

To determine the current lock timeout setting (in milliseconds) for the current session, you can use the @@LOCK_TIMEOUT function, as shown in the following example:

 SELECT @@LOCK_TIMEOUT 

If a LOCK_TIMEOUT value has not been set for a session, the @@LOCK_TIMEOUT function will return a value of –1.

Setting Transaction Isolation Levels

By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. An application might have to operate at a different isolation level, however. To make use of either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement.

When the isolation level is specified, the locking behavior for all SELECT statements in the SQL Server session operates at that isolation level and remains in effect until the session terminates or until the isolation level is set to another level.

In the following example, the transaction isolation level is being set to SERIALIZABLE, which ensures that no phantom rows can be inserted into the Authors table by concurrent transactions:

 USE Pubs
 SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

NOTE


The isolation level can be overridden, if necessary, for individual SELECT statements by specifying a table-level locking hint. Specifying a table-level locking hint does not affect other statements in the session. You should use table-level locking hints to change the default locking behavior only if absolutely necessary.

To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, as shown in the following example:

 DBCC USEROPTIONS

Implementing Table-Level Locking Hints

A range of table-level locking hints can be specified with the SELECT, INSERT, UPDATE, and DELETE statements to direct SQL Server 2000 to the types of locks to be used. Use table-level locking hints for finer control of the types of locks acquired on an object. Locking hints override the current transaction isolation level for the session.

NOTE


The SQL Server query optimizer automatically makes the correct determination. You should use table-level locking hints to change the default locking behavior only when necessary. Disallowing a locking level can affect concurrency adversely.

The following table provides a description of the available locking hints:

Locking Hint       Description
HOLDLOCK           Hold a shared lock until completion of the transaction instead of releasing the lock as soon as the required table, row, or data page is no longer required. HOLDLOCK is equivalent to SERIALIZABLE.
NOLOCK             Do not issue shared locks, and do not honor exclusive locks. When this option is in effect, it is possible to read an uncommitted transaction or a set of pages that are rolled back in the middle of a read. Dirty reads are possible. This hint only applies to the SELECT statement.
PAGLOCK            Use page locks where a single table lock would usually be taken.
READCOMMITTED      Perform a scan with the same locking semantics as a transaction running at the READ COMMITTED isolation level. By default, SQL Server 2000 operates at this isolation level.
READPAST           Skip locked rows. This option causes a transaction to skip rows locked by other transactions that would ordinarily appear in the result set, rather than block the transaction waiting for the other transactions to release their locks on these rows. The READPAST lock hint applies only to transactions operating at READ COMMITTED isolation and will read only past row-level locks. Applies only to the SELECT statement.
READUNCOMMITTED    Equivalent to NOLOCK.
REPEATABLEREAD     Perform a scan with the same locking semantics as a transaction that is running at the REPEATABLE READ isolation level.
ROWLOCK            Use row-level locks instead of the coarser-grained page- and table-level locks.
SERIALIZABLE       Perform a scan with the same locking semantics as a transaction that is running at the SERIALIZABLE isolation level. Equivalent to HOLDLOCK.
TABLOCK            Use a table lock instead of the finer-grained row- or page-level locks. SQL Server holds this lock until the end of the statement. If you also specify HOLDLOCK, however, the lock is held until the end of the transaction.
TABLOCKX           Use an exclusive lock on a table. This lock prevents others from reading or updating the table and is held until the end of the statement or transaction.
UPDLOCK            Use update locks instead of shared locks while reading a table, and hold locks until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.
XLOCK              Use an exclusive lock that will be held until the end of the transaction on all data processed by the statement. This lock can be specified with either PAGLOCK or TABLOCK, in which case the exclusive lock applies to the appropriate level of granularity.

In the following example, the transaction isolation level is set to SERIALIZABLE, and the table-level locking hint NOLOCK is used with the SELECT statement:

 USE Pubs
 GO
 SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
 GO
 BEGIN TRANSACTION
 SELECT Au_lname FROM Authors WITH (NOLOCK)
 GO

When the SELECT statement is executed, the key-range locks typically used to maintain serializable transactions are not taken.
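As another illustration, the UPDLOCK hint supports the read-then-update pattern described in the table above. The following sketch assumes the Pubs sample database:

 USE Pubs
 GO
 BEGIN TRANSACTION

 -- Update locks let other connections continue reading these rows, but
 -- prevent anyone else from changing them before this transaction does
 SELECT Price
 FROM Titles WITH (UPDLOCK)
 WHERE Type = 'business'

 -- The rows are guaranteed not to have changed since the SELECT above
 UPDATE Titles
 SET Price = Price * 1.10
 WHERE Type = 'business'

 COMMIT TRANSACTION
 GO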

Customizing Locking for an Index

The SQL Server 2000 dynamic locking strategy automatically chooses the best locking granularity for queries in most cases. In cases where access patterns are well understood and consistent, limiting the locking levels available for an index can be beneficial.

For example, a database application uses a lookup table that is refreshed weekly in a batch process. The most efficient locking strategy is to turn off page and row locking and to enable all concurrent readers to get a shared (S) lock on the table, thereby reducing overhead. During the weekly batch update, the update process can take an exclusive (X) lock and then update the entire table.

The granularity of locking used on an index can be set using the sp_indexoption system stored procedure. To display the current locking option for a given index, use the INDEXPROPERTY function. Page-level locks, row-level locks, or a combination of page-level and row-level locks can be disallowed for a given index, as described in the following table:

Disallowed Locks            Index Accessed by
Page level                  Row-level and table-level locks
Row level                   Page-level and table-level locks
Page level and row level    Table-level locks

For example, when a table is known to be a point of contention, it can be beneficial to disallow page-level locks, thereby allowing only row-level locks. Or, if table scans are always used to access an index or table, disallowing page-level and row-level locks can help by allowing only table-level locks.
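A minimal sketch follows, assuming a hypothetical lookup table named ZipCodes with an index named PK_ZipCodes; the option and property names are the ones documented for sp_indexoption and INDEXPROPERTY in SQL Server 2000.

 -- Disallow row-level and page-level locks so that the index is always
 -- accessed with table-level locks
 EXEC sp_indexoption 'ZipCodes.PK_ZipCodes', 'DisAllowRowLocks', 'TRUE'
 EXEC sp_indexoption 'ZipCodes.PK_ZipCodes', 'DisAllowPageLocks', 'TRUE'

 -- Verify the current settings (1 means the lock type is disallowed)
 SELECT INDEXPROPERTY(OBJECT_ID('ZipCodes'), 'PK_ZipCodes', 'IsRowLockDisallowed')  AS RowLocksDisallowed,
        INDEXPROPERTY(OBJECT_ID('ZipCodes'), 'PK_ZipCodes', 'IsPageLockDisallowed') AS PageLocksDisallowed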

IMPORTANT


The SQL Server query optimizer automatically makes the correct determination. You should not override the choices the optimizer makes. Disallowing a locking level can affect the concurrency for a table or index adversely. For example, specifying only table-level locks on a large table accessed heavily by many users can affect performance significantly. Users must wait for the table-level lock to be released before accessing the table.

Exercise 3:  Configuring Transaction Properties

In this exercise, you will use Transact-SQL statements to configure transaction timeouts, isolation levels, and locking hints. To perform this exercise, you should be logged into your Windows 2000 Server computer as Administrator.

To use Transact-SQL to configure a session timeout

  1. Open Query Analyzer and connect to your local server.
  2. In the Editor pane of the Query window, enter the following Transact-SQL code:
 SELECT @@LOCK_TIMEOUT 

In this statement, you are requesting the LOCK_TIMEOUT setting for the current session.

  3. Execute the Transact-SQL statement.

A value of –1 appears in the Grids tab of the Results pane. This value is returned if no LOCK_TIMEOUT setting has been implemented for this session.

  4. In the Editor pane of the Query window, enter the following Transact-SQL code:
 SET LOCK_TIMEOUT 1800
 SELECT @@LOCK_TIMEOUT

In this statement, you are first using the SET LOCK_TIMEOUT statement to set the timeout value to 1800 milliseconds. You are then using the @@LOCK_TIMEOUT function to view the new setting.

  5. Execute the Transact-SQL statement.

A value of 1800 appears in the Grids tab of the Results pane.

To use Transact-SQL to set the session isolation level

  1. In the Editor pane of the Query window, enter the following Transact-SQL code:
 USE BookShopDB
 SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

In this statement, you are setting the isolation level to SERIALIZABLE for the current session.

  2. Execute the Transact-SQL statement.

A message appears in the Messages tab of the Results grid, stating that the command has been successfully completed.

  3. In the Editor pane of the Query window, enter and execute the following Transact-SQL code:
 DBCC USEROPTIONS 

The Grids tab of the Results pane displays the SET option values. The isolation level value is now set to SERIALIZABLE.

To use Transact-SQL to set a table-level locking hint

  1. In the Editor pane of the Query window, enter the following Transact-SQL code:
 USE BookShopDB
 SELECT TitleID, Title FROM Books WITH (NOLOCK)

In this statement, you are setting a table-level locking hint (NOLOCK) for the SELECT statement. The NOLOCK locking hint does not issue shared locks and does not honor exclusive locks.

  2. Execute the Transact-SQL statement.

The result set appears in the Grids tab of the Results pane.

  3. Close Query Analyzer.

Lesson Summary

Locking prevents users from reading data being changed by other users and prevents multiple users from changing the same data at the same time. If locking is not available and several users access a database concurrently, problems might occur if their transactions use the same data at the same time. Concurrency problems can include lost or buried updates, uncommitted dependency (dirty reads), inconsistent analysis (non-repeatable reads), or phantom reads. SQL Server 2000 offers both optimistic and pessimistic concurrency control. Optimistic concurrency control works on the assumption that resource conflicts between multiple users are unlikely (but not impossible) and enables transactions to execute without locking any resources. Pessimistic concurrency control locks resources as they are required for the duration of a transaction. Unless deadlocks occur, a transaction is assured of successful completion. The level at which a transaction is prepared to accept inconsistent data is termed the isolation level. The isolation level is the degree to which one transaction must be isolated from other transactions. SQL Server supports the following isolation levels: read uncommitted, read committed, repeatable read, and serializable. Although SQL Server implements locking automatically, you can customize locking by handling deadlocks and setting the deadlock priority, handling timeouts and setting the lock time-out duration, setting the transaction isolation level, using table-level locking hints, and configuring locking granularity for an index.


