The values listed earlier in Table 38.1 represent all the various levels, or granularities, of locks from which the SQL Server Lock Manager can choose when processing queries and transactions. Lock granularity is essentially the minimum amount of data that is locked as part of a query or update to provide complete isolation and serialization for the transaction. The Lock Manager needs to balance concurrent access to resources against the overhead of maintaining a large number of lower-level locks. For example, the smaller the lock size, the greater the number of concurrent users who can access the same table at the same time, but the greater the overhead in maintaining those locks. The greater the lock size, the less overhead required to manage the locks, but the lower the concurrency. Figure 38.8 demonstrates the tradeoffs between lock size and concurrency. Figure 38.8. Tradeoffs between performance and concurrency depending on lock granularity.
Currently, SQL Server balances performance and concurrency by locking at the row level or higher. Based on a number of factors, such as key distribution, number of rows, row density, search arguments (SARGs), and so on, the query optimizer makes lock granularity decisions internally, and the programmer does not have to worry about such issues. SQL Server 2000 does provide a number of T-SQL extensions that give you better control over query behavior from a locking standpoint. These optimizer overrides are discussed in the "Table Hints for Locking" section later in this chapter. The following list describes the locking levels in SQL Server 2000 in more detail:
Serialization and Key-Range Locking

As mentioned in the previous section, SQL Server provides serialization (isolation level 3) through the SET TRANSACTION ISOLATION LEVEL SERIALIZABLE command. One of the guarantees provided by this isolation level is the prevention of phantom reads. Preventing phantom reads means that a query run multiple times within the same transaction must return the same resultset each time. That is, while a transaction is active, another transaction should not be allowed to insert new rows that would appear in the resultset of a query but were not in the resultset originally retrieved by the transaction. SQL Server 2000 provides this capability through key-range locking. As described earlier, key-range locking within SQL Server isolates a transaction from data modifications made by other transactions, so the transaction returns the same recordset each time. In this section, you will see how key-range locking works with various lock modes. Key-range locking covers both range searches that return a resultset and searches against nonexistent rows.

Key-Range Locking for Range Search

In this scenario, SQL Server places locks on the index pages for the range of data covered in the WHERE clause of the query. (For a clustered index, the locked rows are the actual data rows in the table.) Because the range is locked, no other transaction will be able to insert new rows that fall within the range. In Figure 38.9, Transaction B tries to insert a row with a key value ( stor_id = 7200 ) that falls within the range being used by Transaction A ( stor_id between 6000 and 7500 ). Figure 38.9. Key-range locking with range search.
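The range search performed by Transaction A in Figure 38.9 can be sketched in T-SQL as follows. This is a minimal sketch, assuming the pubs sample database and its stores table; the key values are taken from the figure:

```sql
-- Transaction A: a serializable range search (sketch based on Figure 38.9)
USE pubs
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
    SELECT stor_id, stor_name
    FROM stores
    WHERE stor_id BETWEEN '6000' AND '7500'
    -- Key-range (RangeS-S) locks are now held on the matching index keys.
    -- A concurrent INSERT of stor_id = '7200' by another session will
    -- block here until this transaction completes.
COMMIT TRAN   -- releases the key-range locks
```

To observe the blocking behavior described in the text, run the SELECT without the COMMIT, attempt the INSERT from a second connection, and then examine sp_lock output before committing.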
Now take a look at the locks acquired, using the sp_lock stored procedure (in this sample output, spid 52 is executing the SELECT statement, and spid 54 is attempting the INSERT ):

exec sp_lock
go

spid   dbid   ObjId       IndId  Type Resource         Mode     Status
------ ------ ----------- ------ ---- ---------------- -------- ------
52     5      117575457   1      KEY  (36000050901c)   RangeS-S GRANT
52     5      117575457   1      KEY  (3700560a5b33)   RangeS-S GRANT
52     5      117575457   1      KEY  (ffffffffffff)   RangeS-S GRANT
52     5      117575457   1      PAG  1:105            IS       GRANT
52     5      117575457   1      KEY  (3700f04c0158)   RangeS-S GRANT
52     5      0           0      DB                    S        GRANT
52     5      117575457   0      TAB                   IS       GRANT
52     5      117575457   1      KEY  (370087018ad1)   RangeS-S GRANT
52     5      117575457   1      KEY  (370011318da6)   RangeS-S GRANT
52     5      117575457   1      KEY  (38004ab7b2bc)   RangeS-S GRANT
54     5      117575457   0      TAB                   IX       GRANT
54     5      0           0      DB                    S        GRANT
54     5      117575457   1      KEY  (3700f04c0158)   RangeIn- WAIT
54     5      117575457   1      PAG  1:105            IX       GRANT

To provide key-range isolation, SQL Server places RangeS-S locks (a shared lock on the key range and a shared lock on the key at the end of the range) on the index keys for the rows with the matching values. It also places intent shared (IS) locks on the page(s) and the table that contain the rows. The insert process acquires intent exclusive (IX) locks on the destination page(s) and the table. In this case, the insert process is waiting for a RangeIn-Null lock on the key range until the RangeS-S locks in the key range are released. As described earlier in this chapter, the RangeIn-Null lock is an exclusive lock on the range between keys with no lock on the key itself. This lock is acquired because the insert process is attempting to insert a new store ID that has no associated key value.

Key-Range Locking When Searching Nonexistent Rows

In this scenario, if a transaction tries to delete or retrieve a row that does not exist in the database, it still should not find any rows at a later stage in the same transaction with the same query.
For example, in Figure 38.10, Transaction A is trying to fetch a nonexistent row with the key value 7200 , and another concurrent transaction (Transaction B) is trying to insert a record with the same key value ( stor_id = 7200 ). Figure 38.10. Key-range locking with a nonexistent dataset.
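The scenario in Figure 38.10 can be sketched as follows, again assuming the pubs stores table, where no row with stor_id = 7200 exists:

```sql
-- Transaction A: a serializable search for a nonexistent key (Figure 38.10)
USE pubs
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
    SELECT stor_id
    FROM stores
    WHERE stor_id = '7200'   -- no matching row exists
    -- A RangeS-S lock on the range between the adjacent keys (7100 and
    -- 7300 in the figure) now prevents another session from inserting
    -- stor_id = '7200' until this transaction completes.
COMMIT TRAN
```

Even though the SELECT returns no rows, the key-range lock guarantees that rerunning the same query later in the transaction still returns no rows.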
SQL Server in this mode will prevent Transaction B ( spid 54 ) from inserting a new row by using a RangeS-S lock for Transaction A ( spid 52 ). This lock is placed on the index key rows for the range between MAX(stor_id) < 7200 (key value 7100 in Figure 38.10) and MIN(stor_id) > 7200 (key value 7300 in Figure 38.10). Transaction B will request a RangeIn-Null lock and wait for the RangeS-S lock to be released. Following is the sample output of the sp_lock command for these two transactions:

Exec sp_lock
Go

spid   dbid   ObjId       IndId  Type Resource         Mode     Status
------ ------ ----------- ------ ---- ---------------- -------- ------
52     5      117575457   1      KEY  (36000050901c)   RangeS-S GRANT
52     5      117575457   1      KEY  (3700560a5b33)   RangeS-S GRANT
52     5      117575457   1      KEY  (ffffffffffff)   RangeS-S GRANT
52     5      117575457   1      PAG  1:105            IS       GRANT
52     5      117575457   1      KEY  (3700f04c0158)   RangeS-S GRANT
52     5      0           0      DB                    S        GRANT
52     5      117575457   0      TAB                   IS       GRANT
52     5      117575457   1      KEY  (370087018ad1)   RangeS-S GRANT
52     5      117575457   1      KEY  (370011318da6)   RangeS-S GRANT
52     5      117575457   1      KEY  (38004ab7b2bc)   RangeS-S GRANT
54     5      117575457   0      TAB                   IX       GRANT
54     5      0           0      DB                    S        GRANT
54     5      117575457   1      KEY  (3700f04c0158)   RangeIn- WAIT
54     5      117575457   1      PAG  1:105            IX       GRANT

Using Application Locks

The SQL Server Lock Manager knows nothing about the object or the structure of the object it is locking. The actual resources are represented only as strings. (This information can be seen in the rsc_text field in syslockinfo .) The Lock Manager simply checks to see whether two processes are trying to obtain incompatible locks on the same resource. If so, blocking will occur. SQL Server 2000 allows you to extend the resources that can be locked beyond the ones automatically provided. You can define your own custom locking resources and let the Lock Manager control access to those resources as it would for any resource in a database. This essentially allows you to choose to lock anything you want.
These user-defined lock resources are called application locks. To define an application lock, you use the sp_getapplock stored procedure and specify a name for the resource you are locking, a mode, an optional lock owner, and an optional lock timeout interval. The syntax for sp_getapplock is as follows:

sp_getapplock [ @Resource = ] 'resource_name',
              [ @LockMode = ] 'lock_mode'
              [ , [ @LockOwner = ] { 'transaction' | 'session' } ]
              [ , [ @LockTimeout = ] value ]

Two resources are considered to be the same resource, and are subject to lock contention, if they have the same name and the same lock owner in the same database. The resource name used in these procedures can be any identifier up to 255 characters long. The lock owner can be specified as either transaction or session . Multiple requests for locks on the same resource can be granted only if the locking modes of the requests are compatible. (See the "Lock Compatibility" section later in this chapter for a lock compatibility matrix.) The possible lock modes allowed are Shared, Update, Exclusive, IntentExclusive, and IntentShared. For what purpose can you use application locks, and how do you use them? Suppose you have a table that contains a queue of items to be processed by the system. You need a way to serialize the retrieval of the next item from the queue so that multiple concurrent processes do not grab the same item at the same time. In the past, one way this could be accomplished was by forcing an exclusive lock on the table. (The use of table hints to override default locking behavior is covered in the "Table Hints for Locking" section later in this chapter.) Only the first process to acquire the exclusive lock would be able to retrieve the next item from the queue. The other processes would have to wait until the exclusive lock was released. The problem with this approach is that the exclusive lock would also block other processes that might need to simply retrieve data from the table.
You could make use of application locks to avoid having to place an exclusive lock on the entire table. Using sp_getapplock , you can define and lock a custom lock resource for a transaction or session. Locks that are owned by the current transaction are released when the transaction commits or rolls back. Locks that are owned by the session are released when the session is closed. Locks can also be explicitly released at any time with the sp_releaseapplock stored procedure. The syntax for sp_releaseapplock is as follows:

sp_releaseapplock [ @Resource = ] 'resource_name'
                  [ , [ @LockOwner = ] { 'transaction' | 'session' } ]
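In production code, you would normally check the return status of sp_getapplock rather than assume the lock was granted. The following is a sketch of that pattern; a return value of 0 or greater indicates the lock was acquired (0 immediately, 1 after waiting), while negative values indicate a timeout, cancellation, or deadlock. The timeout value shown is illustrative:

```sql
-- Acquire an application lock, guarding the critical section on success
DECLARE @result int
EXEC @result = sp_getapplock @Resource    = 'QueueLock',
                             @LockMode    = 'Exclusive',
                             @LockOwner   = 'session',
                             @LockTimeout = 5000   -- wait up to 5 seconds
IF @result >= 0
BEGIN
    -- ... retrieve the next queue item here ...
    EXEC sp_releaseapplock @Resource = 'QueueLock', @LockOwner = 'session'
END
ELSE
    PRINT 'Could not acquire QueueLock'
```

Checking the return code prevents a process that timed out from running the critical section, and from attempting to release a lock it never held.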
In the following example, you first request an exclusive lock on an application lock called 'QueueLock' by using sp_getapplock . You then invoke the procedure to get the next item in the queue. After the procedure returns, you call sp_releaseapplock to release the application lock called 'QueueLock' to let another session acquire it:

sp_getapplock 'QueueLock', 'Exclusive', 'session'
exec get_next_item_from_queue
sp_releaseapplock 'QueueLock', 'session'

As long as all processes that need to retrieve items from the queue execute this same sequence of statements, no other process can execute the get_next_item_from_queue procedure until the application lock is released. The other processes will block when they attempt to acquire the exclusive lock on the resource 'QueueLock' . For example, the following output from sp_lock shows one process ( spid 55 ) holding an exclusive lock on QueueLock (the hash value generated internally for QueueLock is shown as Queu1e2eefa9 in the Resource field), while another process ( spid 52 ) is waiting for an exclusive lock on QueueLock :

sp_lock
go

spid   dbid   ObjId       IndId  Type Resource         Mode     Status
------ ------ ----------- ------ ---- ---------------- -------- ------
51     4      0           0      DB                    S        GRANT
52     8      0           0      DB                    S        GRANT
52     8      0           0      APP  Queu1e2eefa9     X        WAIT
53     1      85575343    0      TAB                   IS       GRANT
53     8      0           0      DB                    S        GRANT
55     8      0           0      DB                    S        GRANT
55     8      0           0      APP  Queu1e2eefa9     X        GRANT
Index Locking

Similar to locks on data pages, SQL Server manages locks on index pages internally. Compared to data pages, there is the opportunity for greater locking contention on index pages. Contention at the root page of the index is the highest because the root is the starting point for all searches via the index. Contention usually decreases as you move down the various levels of the B-tree, but it is still higher than contention at the data page level because there are typically more index rows per index page than data rows per data page. If locking contention in the index becomes an issue, SQL Server provides a system stored procedure called sp_indexoption that allows expert users to manage locking behavior at the index level. The syntax of this stored procedure is as follows:

Exec sp_indexoption {[ @IndexNamePattern = ] 'index_name'}
     [, [ @OptionName = ] 'option_name']
     [, [ @OptionValue = ] 'value']

The following describes the parameter values:
SQL Server usually makes good choices for the index locks, but based on the distribution of data and the nature of the application, you might want to force a specific locking option on a selective basis. For example, if you are experiencing a high level of locking contention on the index pages of an index, you might want to force SQL Server to use row-level locks by turning off page locks. If you turn off both row and page locks, only table-level locks will be acquired. The following example turns off page-level locking for index pages on an index named aunmind for the authors table:

Exec sp_indexoption 'authors.aunmind', 'AllowPageLocks', false
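To confirm that the option took effect, you can check the index with the INDEXPROPERTY function. This is a sketch; it assumes the index from the preceding example and checks the IsPageLockDisallowed property:

```sql
-- Returns 1 if page locks have been disallowed for the index,
-- 0 if they are still allowed
SELECT INDEXPROPERTY(OBJECT_ID('authors'), 'aunmind',
                     'IsPageLockDisallowed') AS PageLocksOff
```

The companion property IsRowLockDisallowed reports the corresponding setting for row locks.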
Row-Level Versus Page-Level Locking

For years, it was often debated whether row-level locking was better than page-level locking. That debate still goes on in some circles. Many people will argue that if databases and applications are well designed and tuned, row-level locking is unnecessary. This can be borne out somewhat by the number of large and high-volume applications that were developed when row-level locking wasn't even an option. (Prior to version 7, the smallest unit of data that SQL Server could lock was the page.) However, at that time, the page size in SQL Server was only 2K. With page sizes expanded to 8K, a greater number of rows (four times as many) can be contained on a single page. Page-level locks on 8K pages could lead to greater page-level contention because it is more likely that data rows requested by different processes reside on the same page. Using row-level locking increases the concurrent access to the data. On the other hand, row-level locking consumes more resources (memory and CPU) than page-level locking, simply because there are a greater number of rows than pages in a table. If a process needed to access all rows on a page, it would be more efficient to lock the entire page than to acquire a lock for each individual row. This would reduce the number of lock structures in memory that the Lock Manager would have to manage. Which is better: greater concurrency or lower overhead? As shown earlier in Figure 38.8, it's a tradeoff. As lock size decreases, concurrency improves, but performance degrades due to the extra overhead. As lock size increases, performance improves due to less overhead, but concurrency degrades. Depending on the application, the database design, and the data, either page-level or row-level locking can be shown to be better than the other in different circumstances.
SQL Server makes the determination at runtime, based on the nature of the query, the size of the table, and the estimated number of rows affected, whether to initially lock rows, pages, or the entire table. In general, SQL Server attempts to lock at the row level more often than at the page level, in an effort to provide the best concurrency. With the speed of today's CPUs and the large memory support, the overhead of managing row locks is not as expensive as it was in the past. However, as a query runs and the actual number of resources locked exceeds certain thresholds, SQL Server might escalate locks from a lower level to a higher level as appropriate.

Lock Escalation

When SQL Server detects that the locks acquired by a query are using too much memory and consuming too many system resources for the Lock Manager to manage them efficiently, it automatically escalates row, key, or page locks to table-level locks. For example, if a query on a table continues to acquire row locks and every row in the table eventually will be accessed, it makes more sense for SQL Server to escalate the row locks to a table-level lock. After the table-level lock is acquired, the row-level locks are released. This helps reduce locking overhead and keeps the system from running out of available lock structures. Recall from earlier sections in this chapter that the potential need for lock escalation is reflected in the intent locks acquired on the table by the process locking at the row or page level.
What are the lock escalation thresholds? They are determined dynamically and do not require user configuration. When the number of locks acquired within a transaction exceeds 1,250, or when the number of locks acquired by an index or table scan exceeds 765, the Lock Manager examines how much memory is currently being used by all locks in the system. When more than 40 percent of the available memory is being used for locks, the Lock Manager attempts to escalate multiple page, key, or row locks to table locks. SQL Server attempts to identify all tables locking at the row or page level that are capable of escalation (that is, no other processes hold incompatible locks on other pages or rows in the table) and for which escalation has not yet been performed. This continues until all possible lock escalations have taken place or the total memory used for locks drops below 40 percent. At times, SQL Server might choose to do both row and page locking for the same query. For example, if a query returns multiple rows, and enough contiguous keys in a nonclustered index page are selected to satisfy the query, SQL Server might place page locks on the index while using row locks on the data. This reduces the need for lock escalation.

The locks Configuration Setting

As mentioned previously, the total number of locks available in SQL Server depends on the amount of memory available for the lock structures. This is controlled by the locks configuration option for SQL Server. By default, this option is set to 0, which allows SQL Server to allocate and deallocate lock structures dynamically based on ongoing system requirements. Initially, SQL Server allocates 2 percent of the total memory allocated to SQL Server to the lock structure pool. Each lock structure consumes 64 bytes of memory, and each held or waiting lock on a resource requires an additional 32 bytes of memory.
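When you suspect a process is approaching the escalation thresholds, a quick way to see how many locks each process holds is to count rows in syslockinfo, the system table that sp_lock reads. This is a sketch, not a reproduction of sp_lock's full logic:

```sql
-- Count locks held or requested per process, busiest first
SELECT req_spid AS spid, COUNT(*) AS lock_count
FROM master..syslockinfo
GROUP BY req_spid
ORDER BY lock_count DESC
```

A spid whose count climbs past the transaction threshold of 1,250 (or 765 for a single scan) is a candidate for escalation on its next lock request.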
As the pool of locks is exhausted, additional lock structures are allocated, up to a maximum of 40 percent of the memory currently allocated to SQL Server. If more memory is required for locks than is currently available to SQL Server, and more server memory is available, SQL Server will dynamically allocate additional memory from the operating system to satisfy the request for locks, as long as allocating the additional memory does not cause paging at the operating system level. If paging would occur, more lock space is not allocated, and SQL Server essentially runs out of locks. In that case, the transaction is aborted, and the user sees a message like the following:

Server: Msg 1204, Level 19, State 1, Line 1
The SQL Server cannot obtain a LOCK resource at this time. Rerun your
statement when there are fewer active users or ask the system administrator
to check the SQL Server lock and memory configuration.

It is recommended that you leave the locks configuration setting at 0 to allow SQL Server to allocate lock structures dynamically. If you repeatedly receive error messages indicating that you have exceeded the number of available locks, you might want to override SQL Server's ability to allocate lock resources dynamically by setting the locks configuration option to a value large enough for the number of locks needed. Because each lock structure requires 96 bytes of memory, be aware that setting the locks option to a high value might result in an increase in the amount of memory dedicated to the server. For more information on changing SQL Server configuration options, see Chapter 40, "Configuring, Tuning, and Optimizing SQL Server Options."
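If you do decide to override the dynamic behavior, the locks option is changed with sp_configure. This is a sketch; locks is an advanced option, and the value 20000 shown here is purely illustrative:

```sql
-- Display the current setting (a run_value of 0 means dynamic allocation)
EXEC sp_configure 'locks'

-- Set a fixed maximum number of lock structures
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'locks', 20000
RECONFIGURE
```

Remember that at 96 bytes per lock, a setting of 20000 would reserve roughly 2MB of server memory for lock structures.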