Lock Duration


Before you learn about the various types of locks that can be acquired by DB2, you should understand lock duration, which refers to the length of time that a lock is maintained.

The duration of a lock is based on the BIND options chosen for the program requesting locks. Locks can be acquired either immediately when the plan is requested to be run or iteratively as needed during the execution of the program. Locks can be released when the plan is terminated or when they are no longer required for a unit of work.

The BIND parameters affecting DB2 locking are covered in detail in Chapter 13, "Program Preparation." They are summarized in the following sections as a reminder.

BIND Parameters Affecting Table Space Locks

ACQUIRE(ALLOCATE) versus ACQUIRE(USE): The ALLOCATE option specifies that locks will be acquired when the plan is allocated, which normally occurs when the first SQL statement is issued. The USE option indicates that locks will be acquired only as they are required, SQL statement by SQL statement.

RELEASE(DEALLOCATE) versus RELEASE(COMMIT): When you specify DEALLOCATE for a plan, locks are not released until the plan is terminated. When you specify COMMIT, table space locks are released when a COMMIT is issued.
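
Both of these parameters are specified when the plan is bound. The following is a minimal sketch of a BIND subcommand combining the options for maximum concurrency; the plan and member names (SAMPPLAN, SAMPPROG) are hypothetical:

 BIND PLAN(SAMPPLAN) -
      MEMBER(SAMPPROG) -
      ACQUIRE(USE) -
      RELEASE(COMMIT)

ACQUIRE(USE) with RELEASE(COMMIT) takes table space locks only as statements need them and frees them at each COMMIT point, whereas ACQUIRE(ALLOCATE), which is paired with RELEASE(DEALLOCATE), holds the locks for the life of the plan.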

BIND Parameters Affecting Page and Row Locks

ISOLATION level (CS, RR, RS, UR): There are four choices for isolation level.

  • ISOLATION(CS), or Cursor Stability, acquires and releases page locks as pages are read and processed. CS provides the greatest level of concurrency at the expense of potentially different data being returned by the same cursor if it is processed twice during the same unit of work.

  • ISOLATION(RR), or Repeatable Read, holds page and row locks until a COMMIT point; no other program can modify the data. If data is accessed twice during the unit of work, the same exact data will be returned.

  • ISOLATION(RS), or Read Stability, holds page and row locks until a COMMIT point, but other programs can INSERT new data. If data is accessed twice during the unit of work, new rows may be returned, but old rows will not have changed.

  • ISOLATION(UR), or Uncommitted Read, is also known as dirty read processing. UR avoids locking altogether, so data can be read that never actually exists in the database.

Regardless of the ISOLATION level chosen, all page locks are released when a COMMIT is encountered.
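
To see the practical difference among these levels, consider the same query issued twice within a single unit of work. This sketch uses the sample DSN8810.EMP table; the WORKDEPT predicate is illustrative only:

 -- First read within the unit of work
 SELECT COUNT(*)
 FROM   DSN8810.EMP
 WHERE  WORKDEPT = 'D11';

 -- ...additional processing, but no COMMIT is issued...

 -- Second read of the same data in the same unit of work
 SELECT COUNT(*)
 FROM   DSN8810.EMP
 WHERE  WORKDEPT = 'D11';

Under RR the two counts are guaranteed to match; under RS existing qualifying rows cannot change, but newly inserted rows can appear in the second count; under CS the counts can differ because locks are released as pages are processed; under UR the counts can even reflect changes that other programs have not yet committed.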

Implementing Dirty Reads Using ISOLATION(UR)

Programs that read DB2 data typically access numerous rows during their execution and are thus quite susceptible to concurrency problems. DB2 provides read-through locks, also known as "dirty reads" or "uncommitted reads," to help overcome concurrency problems. When using uncommitted reads, an application program can read data that has been changed but is not yet committed.

Dirty read capability is implemented using the UR isolation level (UR stands for uncommitted read). When an application program uses the UR isolation level, it reads data without taking locks. This way, the application program can read data contained in the table as it is being manipulated.
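
At the plan or package level, dirty reads are enabled simply by binding with ISOLATION(UR). The following sketch shows such a BIND; the collection and member names (COLLUR, RPTPROG) are hypothetical:

 BIND PACKAGE(COLLUR) -
      MEMBER(RPTPROG) -
      ISOLATION(UR) -
      ACTION(REPLACE)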

How does "dirty read" impact data availability and integrity? Consider the following sequence of events:

  1. At 9:00 a.m., a transaction is executed containing the following SQL to change a specific value:

     

     UPDATE DSN8810.EMP
       SET FIRSTNME = 'MICHELLE'
     WHERE EMPNO = '010020';

    The transaction, which is a long-running one, continues to execute without issuing a COMMIT.

  2. At 9:01 a.m., a second transaction attempts to SELECT the data that was changed but not committed.

If the UR isolation level were specified for the second transaction, it would read the changed data even though it had yet to be committed. Obviously, a program that does not wait to take locks and simply reads the data in whatever state it happens to be at that moment will execute faster than one that must wait for locks to be taken and resources to be freed before processing.
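
For example, if the second transaction ran as a dirty read (shown here with the statement-level WITH UR clause described later in this section), it would immediately return the uncommitted value MICHELLE rather than waiting for the first transaction's lock to be released:

 SELECT FIRSTNME
 FROM   DSN8810.EMP
 WHERE  EMPNO = '010020'
 WITH   UR;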

However, you must carefully examine the implications of reading uncommitted data before implementing such a plan. Several types of "problems" can occur. A dirty read can cause duplicate rows to be returned where none exist. Also, a dirty read can cause no rows to be returned when one (or more) actually exists. Dirty reads might even cause rows to be returned out of order even when using an ORDER BY . Obviously, you must take these "problems" into consideration before using the UR isolation level.

ISOLATION(UR) Requirements

The UR isolation level applies to read-only operations: SELECT, SELECT INTO, and FETCH from a read-only result table. Any application plan or package bound with an isolation level of UR uses uncommitted read functionality for read-only SQL. Operations that are contained in the same plan or package that are not read-only use an isolation level of CS.

You can override the isolation level defined at the plan or package level during BIND or REBIND for individual SQL statements in the program by using the WITH clause, as shown in the following SQL:

 

 SELECT EMPNO, FIRSTNME, LASTNAME
 FROM   DSN8810.EMP
 WITH   UR;

The WITH clause allows an isolation level to be specified at the statement level in an application program. However, the restriction still applies: the UR isolation level can be used only with read-only SQL statements.

CAUTION

If you are running on a pre-V6 DB2 subsystem, be aware that dirty read processing requires type-2 indexes. The UR isolation level is incompatible with type-1 indexes. If the plan or package is rebound to change to UR isolation, DB2 does not consider any access paths that use a type-1 index. If an acceptable type-2 index cannot be found, DB2 chooses a table space scan.


When to Use Dirty Reads

When is using UR isolation appropriate? The general rule of thumb is to avoid UR whenever the results must be 100% accurate. Examples would be when

  • Calculations that must balance are performed on the selected data

  • Data is retrieved from one source to insert to or update another

  • Production, mission-critical work that cannot contain or cause data-integrity problems is performed

In general, most current DB2 applications are not candidates for dirty reads. However, in a few specific situations, the dirty read capability is of major benefit. Consider the following cases in which the UR isolation level could prove to be useful:

  • Access is required to a reference, code, or lookup table that is basically static in nature. Because the data is nonvolatile, a dirty read is no different from a normal read the majority of the time. In the cases in which the code data is being modified, any application reading the data would incur minimal problems, if any.

  • Statistical processing must be performed on a large amount of data. For example, your company may want to determine the average age of female employees within a certain pay range. The impact of an uncommitted read on an average computed over many rows is minimal because a single changed value usually does not alter the result significantly (see the sketch following this list).

  • Dirty reads can prove invaluable in a DB2 data warehousing environment. A data warehouse is a time-sensitive, subject-oriented store of business data that is used for online analytical processing. Refer to Chapter 45, "Data Warehousing with DB2," for more information on DB2 data warehouses. Other than periodic data propagation and/or replication, access to a data warehouse is read only. An uncommitted read is well suited to this read-only environment because the data is generally not changing, so a dirty read can cause little damage. More and more data warehouse projects are being implemented in corporations worldwide, and DB2 with dirty read capability can be a wise choice for data warehouse implementation.

  • In the rare cases in which a table, or set of tables, is used by a single user only, UR can make a lot of sense. If only one individual can modify the data, the application programs can be coded so that all (or most) reads are done using UR isolation level, and the data will still be accurate.

  • Dirty reads can be useful in pseudo-conversational transactions that use the save and compare technique. A program using the save and compare technique saves data for later comparison to ensure that the data was not changed by other concurrent transactions.

    Consider the following sequence of events: Transaction 1 changes customer A on page 100, taking a page lock that covers every row on the page. Transaction 2 then requests customer C, which is also on page 100, and must wait for transaction 1 to finish, even though the wait is unnecessary. With UR, transaction 2 could read the data immediately; even if both transactions were processing the same row, the save and compare technique would catch the conflict.

  • Finally, if the data being accessed is already inconsistent, little harm can be done by using a dirty read to access the information.
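
As a sketch of the statistical processing scenario mentioned earlier, the following query computes the average age of female employees within a salary range as a dirty read. It assumes the SEX, SALARY, and BIRTHDATE columns of the sample DSN8810.EMP table, and the salary range is arbitrary:

 SELECT AVG(YEAR(CURRENT DATE - BIRTHDATE))
 FROM   DSN8810.EMP
 WHERE  SEX = 'F'
 AND    SALARY BETWEEN 50000 AND 90000
 WITH   UR;

Even if a handful of rows are being updated by concurrent transactions while this query runs, the effect on the computed average is negligible.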
