SQL Server 2005 Lock Management


A lock is an object that holds information about the dependency that a process has on data. Locks are managed by the lock manager, which is responsible for setting and releasing locks, the order of priority of the locks, and so on. The lock manager controls how sessions access data that is locked. It doesn’t just prevent access; it assesses how to manage access in such a way that the data is not compromised by the session that “owns” the lock.

SQL Server 2005 automatically locks data that is being accessed by multiple users. It also automatically decides when data gets locked, who gets the benefit of the lock, the duration of the lock, and so on. SQL Server employs a dynamic locking architecture, which means that the DBA does not have to manage locking thresholds for transactions and the developer can concentrate on writing good query code without having to worry about locking. We will return to dynamic locking later in this chapter.

All locks are issued to connections on a first-come, first-served basis. The locking architecture is pessimistic by default. In terms of locking, “pessimistic” means that SQL Server assumes that the data is subject to concurrent access and could thus be compromised. The lock manager manages locks on a transaction basis and on a session basis. You cannot, for example, obtain a lock in one session and then, even as the same user, access the locked data from another session (connection). For example, you cannot execute a stored procedure in one connection that obtains a lock on a row and then execute another stored procedure, even another instance of the same procedure, in another session in the hope that the two sessions can share access to the data. This is only permitted through a bound connection, as is discussed later.

Lock Granularity

Locks can be placed at many different levels in your database. SQL Server can obtain a lock on a single row at one end of the “granularity scale” and on the entire database at the other end of the scale. The various levels of granularity are, from lowest level to highest level, row, key or key-range, page, extent, table, and database. Table 17–2 lists the actual objects that are protected from clashing interests.

The lock manager manages the level at which locks are acquired; the user or administrator is not required to perform any specific manual configuration unless an exceptional situation calls for it. Likewise, an application does not need to specify the granularity for a session. If a table is very small and there is only one session attached to it, the manager may place a table lock on behalf of the session. However, on a very large table that has many sessions attached to it, the manager will automatically lock rows and not tables. And depending on the isolation level, SQL Server may resort to a key-range lock.

A key-range lock locks index rows and the ranges between the index rows. SQL Server uses a key-range lock to solve the phantom read concurrency problem and to support the serialization of transactions: the lock prevents phantom insertions into, and deletions from, a range of records accessed by the transaction. If you want more detail on how the key-range lock works, consult Books Online. SQL Server pretty much handles the key-range lock for you automatically as part of its end-to-end lock management technology.

Table 17–2: Lock Granularity

Object Name      Object Description

RID              Row identifier
Key              A row lock within an index, which can protect a key or key range in serializable transactions
Page             The 8KB data or index page
Extent           The eight data or index pages that make up an extent
Table            All data pages and indexes that make up a table
Database         The entire database and its objects

Lock Mode

The manager also ensures that sessions respect the locks, and that a lock at one level is respected by locks at other levels. How locks respect each other depends on the lock mode. For example, if a session has a shared-mode lock on a row, no other session can acquire an exclusive lock on the same row until the shared lock is released. The various modes are shared, update, exclusive, intent, schema, and bulk update. Table 17–3 describes the benefit of each lock mode.

Table 17–3: Lock Modes

Lock Mode        Benefit

Shared           Used for operations that do not change data (read only). Can be acquired by multiple sessions at the same time.
Update           Used for operations that read data that may later be changed. Can be acquired while shared locks are in place, but only one session at a time can hold it.
Exclusive        Used for operations that insert, update, or delete data. The lock has exclusive use of the object.
Intent           (Intent shared, intent exclusive, shared with intent exclusive.) Used to establish a lock at one level in the hierarchy to signal the intent to lock at another level.
Schema           (Schema modification, schema stability.) Used for operations that change the schema or that depend on schema stability.
Bulk Update      Used for bulk copy with TABLOCK specified.

Shared Locks

A shared lock (S) is issued to allow transactions to read data concurrently. For example, multiple concurrent SELECT statements can all read the same data concurrently. However, the data cannot be updated by any transaction for the duration of shared locks on the object. (See “Lock Duration” later in this chapter for information peculiar to each type of lock.)

Update Locks

A common form of deadlock occurs when more than one transaction has acquired a shared lock on the same data. When a transaction (T1) is ready to change data it has just read under a shared lock, it attempts to convert the shared lock to an exclusive lock. But if another transaction (T2) holds a shared lock on the same data and has the same idea, T1 must wait for T2 to release its shared lock before the conversion can complete. T2, in turn, is waiting for T1 to release its shared lock. The two transactions wait indefinitely until a time-out occurs or the DBA decides to throw the server out the window (if it is not SQL Server).

To avoid such a problem, SQL Server supports the concept of an update lock (U). Only one transaction at a time can acquire an update lock on a resource, which means that even if another transaction wants to convert its shared lock, it cannot. When the transaction holding the update lock is ready to modify the object, the update lock converts to an exclusive lock. The update lock is akin to a shared lock with a red flag.

Exclusive Locks

Exclusive locks (X) give a transaction exclusive access to the object, and no other transaction can gain access to the same object.

Intent Lock

The best way to explain an intent lock is with an analogy from physical reality. When you walk up to the deli counter at a supermarket, you will often have to pull a numbered tag out of a machine. Everyone holding a number can keep a place in the queue without having to stand in line, and the tag gives them the right to step forward for service when their number is called. By holding the tag, they signify an intent to be served. An intent lock is similar: instead of standing in line to place a lock on an object, a transaction can place an intent lock on an object higher up the hierarchy when it actually needs to lock an object lower down. It is reserving its position for the service of a more stringent lock. Intent locks are placed on table objects. They improve performance because the lock manager needs to check for intent locks only at the table level, rather than examining every row or page lock, to determine whether a transaction can safely lock the entire table.

There are three types of intent locks in SQL Server 2005:

  • Intent Shared (IS)   This lock, placed higher up in the hierarchy, indicates the intention to read resources lower down in the hierarchy (where the shared locks are actually placed).

  • Intent Exclusive (IX)   This lock, placed higher up in the hierarchy, indicates the intention to lock resources lower down in the hierarchy exclusively (for updates or changes).

  • Shared with Intent Exclusive (SIX)   A shared lock on the table combined with intent exclusive; other transactions can still place intent shared (IS) locks on the table while this transaction holds the option to lock objects lower down the hierarchy exclusively.

Schema Lock

A schema modification lock (Sch-M) is acquired by a transaction that issues DDL statements against a table, such as adding, changing, or dropping a column. This lock prevents access to the schema until the transaction has completed the schema update.

A second type of schema lock is the schema stability lock (Sch-S), which is used to prevent any operation from making changes to the schema while a query is in process. For lengthy statements that make several trips to the table or take a long time to compute, a schema stability lock is essential. The last thing you want to happen while executing the query of the millennium is for a DBA to make changes to the very table you are working on.

Bulk Update Lock

The bulk update lock (BU) is used when you need to perform a bulk copy or bulk insert and the TABLOCK hint is specified or the “table lock on bulk load” option is set using the sp_tableoption stored procedure.
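The following is a minimal sketch of both ways of requesting bulk update locking; the dbo.SalesStaging table and the data file path are hypothetical.

  -- Option 1: make bulk loads into this table take a BU lock by default
  EXEC sp_tableoption 'dbo.SalesStaging', 'table lock on bulk load', 'ON';

  -- Option 2: request the table lock explicitly for a single bulk insert
  BULK INSERT dbo.SalesStaging
  FROM 'C:\load\sales.dat'
  WITH (TABLOCK);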

Lock Duration

The manager holds locks in place for the amount of time needed to ensure the protection of the data. This duration is known as the lock duration. SQL Server supports several lock duration scenarios.

The transaction isolation level (discussed later in this chapter) governs shared lock duration. Under the default isolation level of READ COMMITTED, a shared lock is held only as long as it takes to read the page and is released when the read completes. If the HOLDLOCK hint is specified or the transaction isolation level is set to REPEATABLE READ or SERIALIZABLE, the shared locks are held until the end of the transaction, when COMMIT TRANSACTION or ROLLBACK TRANSACTION is issued.
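For example, the following sketch (against a hypothetical dbo.Orders table) shows shared locks being held beyond the individual read, first by raising the isolation level and then with the HOLDLOCK hint.

  -- Shared locks held until the end of the transaction
  SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  BEGIN TRANSACTION;
      SELECT OrderTotal FROM dbo.Orders WHERE CustomerID = 42;
      -- the shared locks taken by the SELECT remain until COMMIT or ROLLBACK
  COMMIT TRANSACTION;

  -- The same effect for a single statement, using the HOLDLOCK hint
  -- (HOLDLOCK gives the statement SERIALIZABLE behavior for that table)
  BEGIN TRANSACTION;
      SELECT OrderTotal FROM dbo.Orders WITH (HOLDLOCK) WHERE CustomerID = 42;
  COMMIT TRANSACTION;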

Cursors, which are coming up after the discussion on transactions, can fetch one row or block of rows at a time and may acquire shared-mode scroll locks to protect the fetching of additional data. Scroll locks are held until the next fetching of rows or the closing of the cursor. If HOLDLOCK is specified for the cursor, the scroll locks are held until the end of the transaction.

Locks acquired as exclusive are held until the transaction has completed.

Lock Blocks

No connection can acquire a lock if the lock it is trying to acquire conflicts with another lock already in place. This situation is known as lock blocking. For example, if a session attempts to acquire a table lock on a table in which another session holds an exclusive row lock, the session attempting the lock is blocked. The block persists until either the in-place lock is released or a time-out set for the blocked connection expires. Time-outs are not normally set for locks, but setting one may become necessary to prevent an indefinite wait on a transaction that has not completed within a reasonable amount of time.

Lock Compatibility

You cannot place a lock on an object if the lock is not compatible with other locks that are already in place on the object. If a shared lock is in place on an object, only other shared locks or update locks can also be acquired. If an exclusive lock is in place on an object, no lock of any other kind can be placed on the same object; conversely, an exclusive lock cannot be acquired on a resource until the shared locks on it have been released. The matrix in Table 17–4 illustrates the compatibility of each lock type.

Note that an IX lock can be placed on the table that already has another IX lock on it. This is permitted because the IX signals intent to update one or more rows, but not all of them. However, if one IX lock wants to jump the gun, so to speak, and update the same rows as another IX lock, it is blocked.

The schema locks do not feature in the matrix illustrated in Table 17–4. The schema stability lock is compatible with all of the lock modes shown; the schema modification lock, however, cannot be placed on a table while any other lock, including a schema stability lock, is in place, and vice versa.

Table 17–4: Lock Compatibility Matrix. Key: IS=Intent Shared, S=Shared, U=Update, IX=Intent Exclusive, SIX=Shared with Intent Exclusive, X=Exclusive

Requested Mode (rows) / Granted Mode (columns)

        IS    S     U     IX    SIX   X
IS      Yes   Yes   Yes   Yes   Yes   No
S       Yes   Yes   Yes   No    No    No
U       Yes   Yes   No    No    No    No
IX      Yes   No    No    Yes   No    No
SIX     Yes   No    No    No    No    No
X       No    No    No    No    No    No

The bulk update lock is only compatible with the schema stability lock and other bulk update locks.

Lock Escalation

SQL Server automatically escalates fine-grain locks into coarse-grain locks as it needs to. This helps reduce system overhead: the finer the lock granularity, the harder SQL Server has to work. A key-range or row lock, for example, requires more resources to maintain than a page or table lock.

SQL Server escalates row locks and page locks into table locks when a transaction exceeds an escalation threshold. In other words, when the number of locks held by the transaction exceeds its threshold, SQL Server attempts to convert the intent lock on the table to a stronger lock (an IX to an X, for example). If it succeeds, it can release the row or key-range locks.

These lock escalation thresholds are handled automatically by SQL Server, and you cannot configure their usage or reference them in T-SQL code.

Obtaining Information about Locks

You can obtain and read information about locks by executing the system stored procedure sp_lock in your code (see Appendix). The procedure returns a result set containing a mass of information. The result set, fully documented in Books Online, returns information about lock modes, locked objects, lock types, status, and so forth.
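For example (a minimal sketch; the session ID of 52 is purely illustrative, and sys.dm_tran_locks is the dynamic management view that SQL Server 2005 offers as a newer alternative to sp_lock):

  -- All locks currently held or requested on the instance
  EXEC sp_lock;

  -- Locks held or requested by a single session
  EXEC sp_lock 52;

  -- The SQL Server 2005 dynamic management view alternative
  SELECT request_session_id, resource_type, request_mode, request_status
  FROM sys.dm_tran_locks;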

Examining the information about the locking activity undertaken by SQL Server can tell you a lot about your code and the optimization of queries (especially what goes into your stored procedures and triggers). SQL Server employs a dynamic locking algorithm that determines the most cost-effective locks. These decisions are made on the fly by the optimizer, which considers, among other things, the characteristics of the schema and the nature of the query. The sp_lock stored procedure gives you information about the lock activity SQL Server has been undertaking, and getting lock information can also help you troubleshoot deadlocks, as discussed shortly. Dynamic locking eases the administrative burden because you do not need to keep adjusting lock escalation thresholds, and it means SQL Server can choose, on the fly, the locks appropriate to the query at hand and the nature of the transaction.

Deadlocks

A deadlock occurs when a transaction cannot complete because another process has a lock on a resource it needs, while the other process cannot complete because the former process has a lock on a resource that it, in turn, needs. For example, Transaction A (TA) has a lock on the credit table but requires a lock on the debit table before it can complete; it first updates the credit table and then needs to update the debit table. Transaction B (TB) is working in the reverse order and has a lock on the debit table, exactly where TA wants a lock, while it waits for a lock on the credit table. A tug of war ensues, and you now have a deadlock: neither TA nor TB can complete (commit), because each has a lock on the resource the other requires. SQL Server will have to decide which transaction yields.

While SQL Server takes many measures to prevent deadlocks, a deadlock is still entirely possible. SQL Server thus employs a sophisticated lock monitor thread that is devoted to deadlock detection and resolution. The lock monitor handles deadlocks as follows:

  1. It is alerted to a thread that has been waiting for a resource for a specified amount of time. The lock monitor identifies all threads queued to the resource in the event that it needs to take further action.

  2. If the scenario persists, the lock monitor begins to determine where the deadlock exists and initiates what is termed an “eager deadlock search.” The eager deadlock search is a process whereby the lock monitor thread traces the thread activity to the point the deadlock occurred.

  3. SQL Server then chooses a deadlock victim. This is usually the thread that is the least expensive to kill (in terms of resources).

When the deadlock victim is chosen, SQL Server rolls back its transaction and sends error message 1205 back to the application in the open session that led to the deadlock in the first place. The application will get the following message:

 Your transaction (process ID #52) was deadlocked on {lock | communication buffer | thread} resources with another process and has been chosen as the deadlock victim. Rerun your transaction.

This information is also placed in the error log, and thus an application does not have to store the error message, just be aware of it. You should code an error handler in the client application to deal with the error message because it means that the transaction just initiated did not complete. So you might have to ask the user to repeat the exercise or the application might have to take some special covert steps. Pushing the message up to the GUI usually serves no purpose because the user of your application will have no idea what it means. Just the words, “deadlock,” “victim,” and “Rerun your transaction” are enough to cause a heart attack.

The message is returned to the client as soon as the lock monitor “shoots” the deadlock victim, so you should wait about 100 milliseconds for the deadlock mess to be cleaned up. Resubmitting immediately could end up repeating the deadlock.
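The following is a minimal sketch of such a retry handler written in T-SQL using the TRY…CATCH support introduced in SQL Server 2005; the credit and debit table names, the account number, and the retry count are hypothetical.

  DECLARE @retries int, @done bit;
  SELECT @retries = 3, @done = 0;

  WHILE @retries > 0 AND @done = 0
  BEGIN
      BEGIN TRY
          BEGIN TRANSACTION;
          UPDATE dbo.Credit SET Balance = Balance - 100 WHERE AccountID = 1;
          UPDATE dbo.Debit  SET Balance = Balance + 100 WHERE AccountID = 1;
          COMMIT TRANSACTION;
          SET @done = 1;                      -- success; leave the retry loop
      END TRY
      BEGIN CATCH
          IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
          IF ERROR_NUMBER() = 1205            -- this transaction was the deadlock victim
          BEGIN
              SET @retries = @retries - 1;
              WAITFOR DELAY '00:00:00.100';   -- pause briefly before resubmitting
          END
          ELSE
          BEGIN
              SET @retries = 0;               -- some other error; stop retrying
              RAISERROR('Transaction failed for a reason other than a deadlock.', 16, 1);
          END
      END CATCH
  END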

You can take certain steps to minimize deadlocks because there is always a chance a deadlock will occur. Consider the following:

  • Acquire your resources late and release them early. Don’t code transactions that lock objects early and then keep them locked while you perform the mother of all subqueries. Intelligent code keeps an object locked only for the time it is needed and then immediately releases it. Also ensure your code accesses and releases objects in the same order.

  • Avoid asking for user input in a transaction. Gather up the data from the user before you start the transaction and avoid asking for data from the user in the middle of a transaction. If there is one thing you cannot control, it is the user. What would happen in the middle of a transaction when you need user input and he or she has fallen asleep, switched to eBay to place a bid, or decided to leap off the roof?

  • Keep transactions short and code them in a single batch. The longer and more complex the transaction, the higher the chance of a deadlock, because the resources are locked for longer. Keeping transactions short also means you can code them in a single small batch. Minimizing round trips means you can complete the transaction quickly and avoid deadlocks.

  • Use bound connections. This allows two or more connections opened by the same application to cooperate and not block each other, so any locks acquired by the secondary connections are held as if they were acquired by the primary connection. (A minimal sketch of binding connections appears after this list.)

  • Use a low isolation level in your transactions. First decide if the transaction can run at a low isolation level. For example, if you use "read committed" for the isolation level, the transaction is allowed to read data previously read but not modified by another transaction without waiting for the other transaction to complete. Read committed maintains the lock for less time than a higher isolation level such as serializable, which means less of a chance for lock contention. (Working with the isolation levels is discussed further in the section "Transaction Processing" of this chapter.)
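As mentioned in the bound connections item above, here is a minimal sketch of how an application might bind two connections; the token value is obtained at run time and passed between the connections by the application.

  -- In the primary connection: start a transaction and obtain a bind token
  DECLARE @token varchar(255);
  BEGIN TRANSACTION;
  EXEC sp_getbindtoken @token OUTPUT;
  SELECT @token AS BindToken;   -- the application passes this value to the second connection

  -- In the secondary connection: bind to the primary connection's transaction space
  -- EXEC sp_bindsession '<token value received from the primary connection>';
  -- Locks acquired on this connection are now treated as if the primary connection held them.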

Note 

A block is a mechanism by which one process prevents another from obtaining access (a lock) to a resource. A block is not the same thing as a deadlock and should not be confused with one.

Working with the Lock Time-Out Setting

SQL Server permits you to change the LOCK_TIMEOUT setting in your application. If your application is blocked from a resource and has waited longer than the LOCK_TIMEOUT setting, SQL Server sends it message 1222, "Lock request time-out period exceeded."

This situation needs to be handled explicitly by your application with the same diligence described for the deadlock event. SQL Server does not automatically roll back your transaction as it does after selecting a deadlock victim; instead, it cancels the statement that was waiting on the resource. You then need to determine what to do next, such as resubmitting the statement or rolling back the entire transaction.

To check the current lock time-out setting (in milliseconds), execute the @@LOCK_TIMEOUT function.
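A minimal sketch of setting and checking the lock time-out for a session follows; the 5-second value is arbitrary.

  -- Wait at most 5,000 milliseconds for any lock before message 1222 is returned
  SET LOCK_TIMEOUT 5000;

  -- Check the current setting for this session (-1 means wait indefinitely)
  SELECT @@LOCK_TIMEOUT AS CurrentLockTimeoutMs;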

Locking Hints

You can specify locking hints to SQL Server within your SELECT and DML statements. Although you can use the hints to influence SQL Server, it is recommended that you leave the locking process entirely to SQL Server, because the hints override the current transaction isolation level for the tables they reference. Unless you know exactly what you are doing and why you are doing it, hints can have a negative impact on concurrency and cause problems for you.
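If you do use a hint, it goes in a WITH clause on the table reference. The following sketch uses a hypothetical dbo.Orders table.

  -- Read without taking shared locks (dirty reads become possible)
  SELECT OrderID, OrderTotal
  FROM dbo.Orders WITH (NOLOCK)
  WHERE CustomerID = 42;

  -- Read with row-level update locks held to the end of the transaction,
  -- signaling the intention to modify the same rows later in the transaction
  BEGIN TRANSACTION;
  SELECT OrderID, OrderTotal
  FROM dbo.Orders WITH (UPDLOCK, ROWLOCK)
  WHERE CustomerID = 42;
  -- ... an UPDATE of the same rows would follow here ...
  COMMIT TRANSACTION;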

In the same fashion, you can customize locking for an index if you have a very special reason to do so. In most cases, you are better off leaving SQL Server to make the determinations for you.



