Isolation Levels

All DBMSs except for IBM support the SQL SET TRANSACTION statement, though they may have differing optional clauses. The essential syntax, though, always looks like this:

 SET TRANSACTION
    [ ISOLATION LEVEL
        { READ UNCOMMITTED
        | READ COMMITTED
        | REPEATABLE READ
        | SERIALIZABLE } ]
    [ READ ONLY ]

If you're using ODBC, you can set the isolation level for a connection using this function call, which works with all of the Big Eight:

 SQLSetConnectAttr(...SQL_ATTR_TXN_ISOLATION,...) 

With JDBC, this is the call:

 Connection.setTransactionIsolation(...) 
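If you're experimenting from Python's built-in sqlite3 module, the knob is coarser: SQLite has no SET TRANSACTION, but its read_uncommitted pragma gives roughly READ UNCOMMITTED behavior for connections that share a cache. A minimal sketch (SQLite-specific; other DBMSs use SET TRANSACTION or the ODBC/JDBC calls above):

```python
import sqlite3

# SQLite's closest analog to an isolation-level setting: the
# read_uncommitted pragma (it only matters for connections sharing a
# cache; shown here just to demonstrate setting and reading the option).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA read_uncommitted = 1")   # roughly READ UNCOMMITTED
level = conn.execute("PRAGMA read_uncommitted").fetchone()[0]
print(level)  # prints 1
```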

The SQL Standard states that the isolation level of a transaction determines the type of concurrency problem that will be tolerated for the transaction. In order from lowest level of isolation to highest, the options are READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE. The Standard does not specify how a DBMS should enforce these rules; that decision is left up to the vendor. We've already noted that most DBMSs use locking to resolve concurrency problems; in this chapter, therefore, we discuss the isolation levels only in terms of locking.

READ UNCOMMITTED is the lowest level of transaction isolation; it also gives you the highest concurrency level. No locks are issued and no locks are checked during the transaction. READ UNCOMMITTED tells the DBMS you want it to allow reading of rows that have been written (but not committed) by other transactions, and it always implies a READ ONLY transaction, that is, no data changes are allowed.

The next level of transaction isolation, READ COMMITTED, still allows for a fairly high concurrency level. Shared locks are mandatory but can be released before the transaction ends. READ COMMITTED tells the DBMS you want it to allow reading of rows that have been written by other transactions only after they have been committed.

REPEATABLE READ is the next level; concurrency drops sharply. Shared locks are mandatory and will not be released until the transaction ends. REPEATABLE READ tells the DBMS it must not allow a situation where one transaction gets two sets of data from two reads of the same set of rows because a second transaction changed that set of rows between the two reads.

SERIALIZABLE is the final and highest level of transaction isolation and thus gives you the lowest concurrency level. The DBMS may lock whole tables, not just rows (or paths to objects rather than just the objects) during the course of a transaction. SERIALIZABLE tells the DBMS you want it to execute concurrent transactions in a manner that produces the same effect as a serial execution of those transactions.

Of course, supporting the SQL statement syntax and doing something with the statement are different things! The SQL Standard allows a DBMS to auto-upgrade isolation levels. For example, if you specify an isolation level of REPEATABLE READ, you'll probably actually get SERIALIZABLE. Furthermore, any DBMS can ignore the READ ONLY clause, because it's just an optimizer hint. The main point, however, is that you can change the transaction isolation level. When you do, certain effects happen, and those effects are the same no matter which DBMS you use.

That may sound surprising, because we all know that the SQL Standard doesn't say how SET TRANSACTION should work; it only specifies what the effect of the statement must be. So you may think of the sameness of DBMSs as a lucky coincidence. We think it's conformist: faced with many alternatives, all implementors decided to support transaction isolation by turning locks on or off at appropriate moments in the transaction. Table 15-4 shows the SQL Standard requirements, and the level of support the Big Eight have for the various transaction isolation levels.

Table 15-4. ANSI/DBMS Isolation Level Support

            READ ONLY   R-U   R-C   R-R   S     Default Isolation Level
ANSI SQL    Yes         Yes   Yes   Yes   Yes   S
IBM         Yes         Yes   Yes   Yes   Yes   R-C
Informix    Yes         Yes   Yes   No    Yes   R-C
Ingres      Yes         No    Yes   Yes   Yes   S
InterBase   No          No    Yes   Yes   Yes   R-C
Microsoft   Yes         Yes   Yes   Yes   Yes   R-C
MySQL       No          No    Yes   No    No    R-C
Oracle      Yes         No    Yes   No    Yes   R-C
Sybase      No          Yes   Yes   Yes   Yes   R-R

Notes on Table 15-4:

  • The following abbreviations are used for the various isolation levels:

    R-U for READ UNCOMMITTED

    R-C for READ COMMITTED

    R-R for REPEATABLE READ

    S for SERIALIZABLE

  • Default Isolation Level column

    This column shows the isolation level the DBMS will provide if you don't specify one. Note that the default is important because the DBMS optimizer assumes that the default will be true when an SQL statement is executed. It doesn't know the actual isolation level in advance.

We mentioned earlier that the lock rules for read and/or write permission are not simple. This is because the isolation level chosen also affects read-write permissions. Now that we've looked at the transaction isolation levels, we can go into this in greater detail. First, recall that "read" means either "access" or "add to result set," while "write" means "data change"; it doesn't imply "physical write to disk," though that can certainly happen. Table 15-5 shows the lock rules for read/write permission on READ UNCOMMITTED transactions, Table 15-6 shows the rules for READ COMMITTED transactions, Table 15-7 shows the rules for REPEATABLE READ transactions, and Table 15-8 shows the rules for SERIALIZABLE transactions.

Table 15-5. Lock Modes and Read/Write Permissions for READ UNCOMMITTED Transactions

                Shared Lock       Update Lock       Exclusive Lock
When acquired   Never             Never             Never
When released   N/A               N/A               N/A

Table 15-6. Lock Modes and Read/Write Permissions for READ COMMITTED Transactions

                Shared Lock       Update Lock       Exclusive Lock
When acquired   Before read       Before read       Before update
When released   After read        After read        Transaction end

Table 15-7. Lock Modes and Read/Write Permissions for REPEATABLE READ Transactions

                Shared Lock       Update Lock       Exclusive Lock
When acquired   Before read       Before read       Before update
When released   Transaction end   Transaction end   Transaction end

Table 15-8. Lock Modes and Read/Write Permissions for SERIALIZABLE Transactions

                Shared Lock       Update Lock       Exclusive Lock
When acquired   Before read       Before read       Before update
When released   Transaction end   Transaction end   Transaction end

Notes on Tables 15-5 through 15-8:

  • Except in the READ UNCOMMITTED level, you must have a shared lock before reading.

  • The locks acquired for the SERIALIZABLE level are more extensive than the locks for the REPEATABLE READ level. For example, Microsoft locks a range of index keys as well as the rows to which they point.
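Tables 15-5 through 15-8 condense into a small lookup, which is handy when reasoning about what a locking DBMS holds at any moment. A sketch in Python (the table data is from the text; the encoding as a dictionary is ours):

```python
# Lock-release rules from Tables 15-5 through 15-8, as data. None means
# the level never acquires that kind of lock at all (READ UNCOMMITTED).
RELEASE = {
    "READ UNCOMMITTED": {"shared": None, "update": None,
                         "exclusive": None},
    "READ COMMITTED":   {"shared": "after read", "update": "after read",
                         "exclusive": "transaction end"},
    "REPEATABLE READ":  {"shared": "transaction end",
                         "update": "transaction end",
                         "exclusive": "transaction end"},
    "SERIALIZABLE":     {"shared": "transaction end",
                         "update": "transaction end",
                         "exclusive": "transaction end"},
}

def lock_released(isolation_level, lock_mode):
    """When does this isolation level release this kind of lock?"""
    return RELEASE[isolation_level][lock_mode]

print(lock_released("READ COMMITTED", "shared"))   # after read
print(lock_released("REPEATABLE READ", "shared"))  # transaction end
```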

Concurrency Problems and Isolation Levels

Recall that transactions are concurrent when they have overlapping start or end times. There are four general problems that can arise from concurrent transactions. In order of seriousness, they are: the Lost Update, the Dirty Read, the Non-repeatable Read, and the Phantom. Here are examples of each, using timeline points (the lower the event in the chart, the later in time the event takes place).

Lost Update
  Transaction #1           Transaction #2
  ...                      Read Row #1
  Read Row #1              ...
  Data change Row #1       ...
  ...                      Data change Row #1
  COMMIT                   ...
  ...                      COMMIT

The Lost Update problem occurs when two transactions read and make a change to the same row. When the first transaction COMMITs, its change is lost because the second transaction's change supersedes it. The result is as if Transaction #1's change never happened. You can avoid Lost Update by using an isolation level of READ UNCOMMITTED or higher.

Dirty Read
  Transaction #1           Transaction #2
  Read Row #1              ...
  Data change Row #1       ...
  ...                      Read Row #1
  ROLLBACK                 ...
  ...                      COMMIT

The Dirty Read problem occurs when a transaction reads a row that has been changed but not committed by another transaction. The result is that Transaction #2's work is based on a change that never really happened. You can avoid Dirty Read by using an isolation level of READ COMMITTED or higher.

Non-repeatable Read
  Transaction #1           Transaction #2
  ...                      Read Row #1
  Read Row #1              ...
  ...                      Data change Row #1
  Read Row #1              ...
  ...                      COMMIT
  COMMIT                   ...

The Non-repeatable Read problem occurs when a transaction reads a set of rows twice, once before and once after another transaction does a data change that affects those rows. The result is that Transaction #1 gets conflicting data from its reads. You can avoid Non-repeatable Read by using an isolation level of REPEATABLE READ or higher.

Phantom
  Transaction #1           Transaction #2
  Read set of Rows         ...
  ...                      Data change same set of Rows
  Re-read set of Rows      ...

The Phantom problem occurs when a transaction reads multiple rows twice, once before and once after another transaction does a data change that affects the search condition in the first transaction's reads. The result is that Transaction #1 gets a different (larger) result set back from its second read. We call this the "now-you-don't-see-it-now-you-do" phenomenon. You can avoid the Phantom only by using an isolation level of SERIALIZABLE.

A Phantom can affect transactions that contain at least two search conditions that overlap or depend on one another, as the following example shows.

  • Transaction #1 asks for all rows with a column1 value equal to 15. Assume the result set returned has 12 rows.

  • Then Transaction #2 inserts three new rows where column1 has a value of 15.

  • Then Transaction #1 repeats its read. The result is that Transaction #1 gets 15 rows the second time around. The three extra rows are the phantoms.
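The scenario above can be reenacted with two connections to one shared in-memory SQLite database. SQLite, the database name, and the table layout are stand-ins for the demo; the point is only that the second read picks up the three phantom rows:

```python
import sqlite3

# Two connections to one shared in-memory database play Transaction #1
# and Transaction #2 (names and values are made up for the demo).
uri = "file:phantom_demo?mode=memory&cache=shared"
txn1 = sqlite3.connect(uri, uri=True)
txn2 = sqlite3.connect(uri, uri=True)
txn1.execute("CREATE TABLE Table1 (column1 INTEGER)")
txn1.executemany("INSERT INTO Table1 VALUES (?)", [(15,)] * 12)
txn1.commit()

first = txn1.execute(
    "SELECT COUNT(*) FROM Table1 WHERE column1 = 15").fetchone()[0]
txn2.execute("INSERT INTO Table1 VALUES (15), (15), (15)")
txn2.commit()
second = txn1.execute(
    "SELECT COUNT(*) FROM Table1 WHERE column1 = 15").fetchone()[0]
print(first, second)  # 12 15 -- the three extra rows are the phantoms
```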

Table 15-9 recaps the concurrency problems tolerated under each isolation level.

Table 15-9. Concurrency Problems Tolerated under Each Isolation Level

Isolation Level    Lost Update   Dirty Read   Non-repeatable Read   Phantom
READ UNCOMMITTED   No            Yes          Yes                   Yes
READ COMMITTED     No            No           Yes                   Yes
REPEATABLE READ    No            No           No                    Yes
SERIALIZABLE       No            No           No                    No
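Table 15-9 also follows from the ordering of the levels: each problem has a lowest level that avoids it, and anything below that tolerates it. A sketch (the Lost Update entry relies on READ UNCOMMITTED implying READ ONLY, as discussed earlier):

```python
# Levels in order, and the lowest level that avoids each problem.
LEVELS = ["READ UNCOMMITTED", "READ COMMITTED",
          "REPEATABLE READ", "SERIALIZABLE"]
MIN_LEVEL_TO_AVOID = {
    "Lost Update": "READ UNCOMMITTED",   # R-U forces READ ONLY: no updates to lose
    "Dirty Read": "READ COMMITTED",
    "Non-repeatable Read": "REPEATABLE READ",
    "Phantom": "SERIALIZABLE",
}

def tolerates(level, problem):
    """True if a transaction at `level` can still suffer `problem`
    (i.e., the 'Yes' cells of Table 15-9)."""
    return LEVELS.index(level) < LEVELS.index(MIN_LEVEL_TO_AVOID[problem])

print(tolerates("READ COMMITTED", "Non-repeatable Read"))  # True
print(tolerates("SERIALIZABLE", "Phantom"))                # False
```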

READ UNCOMMITTED Transactions

As Table 15-9 shows, sometimes a READ UNCOMMITTED transaction can return data that came from a Dirty Read; this sounds upsetting. But do you use an Internet search engine? If so, you tolerate result sets where at least 1% of the results either have nothing to do with your query or refer to nonexistent pages. But at least you didn't have to wait long. Besides, if the other transactions are deferring writes, the chances of a Dirty Read are negligible.

READ UNCOMMITTED transactions must be READ ONLY, so that a Lost Update won't happen. And that's all! If your DBMS supports READ UNCOMMITTED, then

 SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED READ ONLY 

is the same as saying, "Don't bother checking any locks, I'd ignore them anyway. And don't bother setting any locks, I won't be changing anything that will affect somebody else. And, oh yeah, turn off that versioning stuff too."

During the rare times when all transactions are READ UNCOMMITTED, all locks are off and throughput is at its peak. As an example, we ran a test with one of the Big Eight DBMSs, consisting of 10,000 SELECT statements, each looking for a random key value in an index. The first time we ran the test, we set all transactions to READ UNCOMMITTED using an ODBC function. Then we ran it again, leaving the transactions at the InterBase default of READ COMMITTED. The READ UNCOMMITTED test was 200% faster than the READ COMMITTED test. To be fair, we then tried the READ COMMITTED test once again using a LOCK TABLE SHARED statement, but it was still slower.

READ UNCOMMITTED is a good isolation level to specify for (a) transactions that are slow and (b) transactions that are likely to have small errors or errors that will cancel one another out. Typical READ UNCOMMITTED situations are letting patrons look up a book in a library catalog or letting Internet browsers see an occasionally changing list of the Top Ten song hits.

READ COMMITTED Transactions

For READ COMMITTED transactions, just ask yourself: Is a Non-repeatable Read tolerable? Sometimes this is a no-brainer. If your transaction will never try to repeat a read, then of course Non-repeatable Reads are tolerable. So what kind of transaction never tries to repeat a read? Here's an example.

 (transaction start)
 SELECT * FROM Table1 WHERE column1 = 15;
 for (;;) {
   ...
   FETCH NEXT ...
   if (NOT_FOUND) break;
   ...
 }
 (transaction end)

For READ COMMITTED transactions, assume row locks, no versioning, and no updating. We can guarantee that, for every row that the FETCH NEXT in this example fetches, the row still exists and contains a column1 value equal to 15. No other transaction can interrupt and change those facts, because during the SELECT a shared lock is set by the DBMS. So it is safe to use READ COMMITTED here.

Things would be different if a scroll cursor were used and the fetch wasn't always FETCH NEXT. In such cases, there is no guarantee that a row will be the same the second time it's fetched. Why? Because once you finish with Row #1 and ask for the next row, the DBMS releases the shared lock on Row #1. Clearly that's good; the sooner a lock goes away, the more concurrency there is. It has to be said, though, that when there's only one SQL statement in the transaction, we couldn't find cases where READ COMMITTED yields any more throughput than REPEATABLE READ. Gains appear only when result sets are large and several SQL statements are in the transaction.

READ COMMITTED is a good isolation level to specify for transactions that contain only one SQL statement. Typical READ COMMITTED situations are changing a name or adding a club member.

REPEATABLE READ Transactions

In earlier days REPEATABLE READ was the highest possible isolation level. For some DBMSs, it still is. REPEATABLE READ won't let through Lost Updates, Dirty Reads, or Non-repeatable Reads; it allows only Phantoms, which are rare.

With regard to locking, the philosophy of REPEATABLE READ transactions is to keep locks until the transaction ends. In this respect they differ from READ COMMITTED transactions, which free locks after finishing with a row. In general, it's a bad idea to access the same row twice in a READ COMMITTED transaction, because you're (a) getting a lock, then (b) freeing the lock, and then (c) getting the same lock again.

When two SQL statements in the same transaction have overlapping WHERE search conditions, or no WHERE clauses at all, there is a danger of a Phantom appearing. For example, consider these two SQL statements:

 SELECT * FROM Table1
   WHERE column1 = 5

 UPDATE Table1            /* there might be a Phantom here */
   SET column1 = 5
   WHERE column1 = 6

Even programmers who understand nothing of Phantoms can obey this rule of thumb: Don't write overlapping transactions. In the previous example, we should have done the UPDATE first.

If your WHERE clause always searches for a single row on a unique key, and the fetch succeeds, Phantoms can't happen with the REPEATABLE READ isolation level.

If you need the security of REPEATABLE READ, but you would still like locks to get an early release, here are some tricks to force the DBMS to do what you want.

 Example #1:

 (transaction start)
 SELECT * FROM Table1
 SAVEPOINT savepoint1
 SELECT * FROM Table2
 ROLLBACK TO SAVEPOINT savepoint1
 (transaction end)

In Example #1, the ROLLBACK TO SAVEPOINT statement will eliminate the locks on Table2 only (assuming the DBMS supports this SQL:1999 instruction). ROLLBACK TO SAVEPOINT is also useful if a lock attempt fails and you want to try a different tack.

 Example #2:

 SELECT * FROM Table1;
 for (n=0; n<#-of-rows; ++n) {
   FETCH ... INTO :x[n];
   if (NOT_FOUND) break;
 }
 ROLLBACK;
 for (n=0; n<#-of-rows; ++n) {
   printf("x[n]=%d\n", x[n]);
 }

In Example #2, the ROLLBACK statement comes before the display. If we'd used a scalar variable for the fetch instead, we would also have had to call printf() from within the fetch loop, thus delaying the transaction end. If you can make big arrays, you can end transactions early. By the way, we don't bother to close the cursor. ROLLBACK handles that.
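The same fetch-into-an-array trick translates directly to any client language: pull the whole result set into memory, end the transaction to release locks, and only then do the slow display work. A Python sketch against SQLite (table and data are illustrative):

```python
import sqlite3

# Fetch everything into an in-memory array, end the transaction, then
# display -- the display work happens with no locks held.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (x INTEGER)")
conn.executemany("INSERT INTO Table1 VALUES (?)", [(i,) for i in range(3)])
conn.commit()

rows = conn.execute("SELECT x FROM Table1").fetchall()  # the "big array"
conn.rollback()           # transaction ends here; locks (if any) are gone
for (x,) in rows:         # display happens outside the transaction
    print(x)
```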

REPEATABLE READ is a good isolation level to specify for transactions that contain multiple SQL statements involving data changes. Typical REPEATABLE READ situations are transferring money from one bank account to another or producing a customer invoice.

SERIALIZABLE Transactions

SERIALIZABLE is the default isolation level specified by the SQL Standard and is supported by all the Big Eight except MySQL. But all of the Big Eight except Ingres recommend either READ COMMITTED or REPEATABLE READ for the default. That tells us that nobody else has found a practical way to implement SERIALIZABLE and look good at the same time. You must avoid the SERIALIZABLE level unless it's utterly necessary but, alas, it sometimes is.

For example, suppose you're coding an application for a bank and want to retrieve and display all bank account balances, followed by a total. The total shown must really equal what you'd get by adding up all the individual account lines, so Phantoms are intolerable. This means you can't use REPEATABLE READ for your transaction. Before DBMSs supported SERIALIZABLE, there was still a way to ensure serialization, namely with a non-standard SQL-extension LOCK statement such as

 LOCK TABLE Accounts IN SHARE MODE 

(or whatever the LOCK statement is for your DBMS). In fact, some DBMSs will internally support SERIALIZABLE through the simple device of huge-grain locks, while others try to be more sophisticated. Here is a sampling of what DBMSs do to support this isolation level.

  • Microsoft keeps locks on a range of index keys, thus blocking transactions that use overlapping WHERE clause search conditions. Unlike ordinary index page locks, these range locks are not released until the transaction ends.

  • MySQL allows LOCK TABLES ... UNLOCK TABLES to span statements (statements in MySQL constitute transactions unless you install an optional package).

SERIALIZABLE is a good isolation level to specify for transactions that contain quickly executed SQL statements that affect only a few rows at a time. Typical SERIALIZABLE situations are master/detail, member/aggregate, or account-balance/total-balance reports.

Bog-standard MySQL won't support SERIALIZABLE transactions (recall that our policy is to ignore non-default installation options; see Chapter 1, "Facilis Descensus Averni"). The usual solution involves MySQL's non-standard SQL-extension LOCK TABLES statement, but here's another example involving the dread "bank transfer." This transaction should be serializable, and it involves two updates on the same table.

 (transaction start)
 UPDATE Customer1
    SET balance = balance - 5
    WHERE custid = 1
 UPDATE Customer1
    SET balance = balance + 5
    WHERE custid = 2
 (transaction end)

If you're using MySQL, you could cause this transaction to be SERIALIZABLE by using a single UPDATE statement with CASE, like this:

 UPDATE Customer1
    SET balance =
        CASE WHEN custid = 1 THEN balance - 5
             WHEN custid = 2 THEN balance + 5
        END
    WHERE custid IN (1, 2)

This works because (a) MySQL doesn't have atomic transactions, and (b) MySQL does have atomic statements, therefore (c) transactions must be statements. It's not smart, because MySQL handles ORs badly,[2] but it might do in a pinch.

[2] Recall that DBMSs will process an IN list as a series of ORs.
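To check that the single-statement transfer really moves the money, you can run the CASE UPDATE against any SQL engine; here it is against SQLite, with made-up starting balances:

```python
import sqlite3

# Run the single-statement CASE transfer and verify the arithmetic
# (table name is from the text; the starting balances are invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customer1 (custid INTEGER, balance INTEGER)")
conn.executemany("INSERT INTO Customer1 VALUES (?, ?)",
                 [(1, 100), (2, 100)])
conn.execute("""
    UPDATE Customer1
       SET balance = CASE WHEN custid = 1 THEN balance - 5
                          WHEN custid = 2 THEN balance + 5
                     END
     WHERE custid IN (1, 2)""")
balances = conn.execute(
    "SELECT balance FROM Customer1 ORDER BY custid").fetchall()
print(balances)  # [(95,), (105,)]
```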

READ ONLY or FOR UPDATE

You can specify that a transaction will be READ ONLY, or its opposite, FOR UPDATE, using the SET TRANSACTION statement, the ODBC SQLSetConnectAttr function call, the JDBC setReadOnly method, or the embedded SQL DECLARE CURSOR FOR UPDATE clause. (If you don't specify the option you want, the DBMS probably assumes FOR UPDATE rather than READ ONLY.) It's important to make this specification, as the value affects locking strategy drastically.

FOR UPDATE transactions cause update locks. Recall that update locks are less concurrent than shared locks because they won't coexist with other update locks on the same object. So why are they there? Well, follow this scenario with our old pals Transaction #1 and Transaction #2.

  • Transaction #1 gets a shared lock.

  • Transaction #2 gets a shared lock on the same object.

  • Transaction #1 tries to upgrade to an exclusive lock, but it must wait (because of Transaction #2's lock).

  • Transaction #2 tries to upgrade to an exclusive lock, but it must wait too (because of Transaction #1's lock).

  • Deadlock!

A deadlock is a condition that arises when two or more transactions are waiting for one another to release locks. Now consider the same scenario using update locks rather than shared locks.

  • Transaction #1 gets an update lock.

  • Transaction #2 tries to get an update lock on the same object, but it must wait (because of Transaction #1's lock).

  • Transaction #1 upgrades to an exclusive lock.

  • Transaction #1 continues until end of transaction, when it releases all locks.

  • Transaction #2 tries again; gets the update lock it needs.

  • Transaction #2 upgrades to an exclusive lock.

  • No deadlock!

FOR UPDATE transactions have fewer deadlocks.
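The difference between the two scenarios comes down to lock compatibility: two shared locks on the same object can coexist (setting up the mutual upgrade), while two update locks cannot. A toy model of that one rule:

```python
# Can two transactions hold this lock mode on the same object at once?
SELF_COMPATIBLE = {
    "shared": True,       # shared locks coexist with other shared locks
    "update": False,      # an update lock excludes other update locks
    "exclusive": False,   # exclusive locks exclude everything
}

def upgrade_deadlock_possible(first_lock_mode):
    """The mutual-upgrade deadlock needs both transactions to hold their
    first lock simultaneously before trying to upgrade to exclusive."""
    return SELF_COMPATIBLE[first_lock_mode]

print(upgrade_deadlock_possible("shared"))  # True: both hold, both wait
print(upgrade_deadlock_possible("update"))  # False: second txn queues instead
```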

Now here's the bad news about update locks.

  • Update locks aren't released as quickly as shared locks when transactions are READ COMMITTED.

  • Update locks force more waiting because, by definition, they are exclusive when transactions are READ COMMITTED, REPEATABLE READ, or SERIALIZABLE.

Unfortunately, you always get more update locks than you need. Consider these two examples.

 Example #1:

 UPDATE Table1
   SET unindexed_column = 1
   WHERE indexed_column = 2

In Example #1, the search is via the index on indexed_column. But because the index won't be updated, update locks are made on the index pages unnecessarily.

 Example #2:

 UPDATE Table1
   SET unindexed_column = 1
   WHERE unindexed_column = 2

In Example #2, the search is via a table scan. But many rows won't match the search condition, so update locks are made on table pages unnecessarily.

Our conclusion is that READ ONLY transactions help concurrency more than you might at first expect. READ ONLY is particularly "lock friendly" when combined with versioning or with the READ COMMITTED isolation level.

Deadlocks

We've just seen that update locks are useful for deadlock avoidance. Because deadlock avoidance is something everybody wants, here are some tips to gain that end.

  • If your DBMS supports only shared and exclusive locks (not update locks), then the most common deadlocks occur when a shared lock is upgraded to an exclusive lock. Avoid this by making locks exclusive at the start, with dummy UPDATE statements containing non-updating clauses like SET column1 = column1.

  • Similarly, deadlocks occur if the DBMS tries to escalate. When eventual escalation is a certainty, accelerate it to the start of the transaction with a dummy UPDATE or a LOCK statement.

  • Don't change index columns or, conversely, don't index changeable columns. If all accesses are via an index but all changes are of unindexed columns, then a single UPDATE statement can never ask for more than one exclusive lock (for the table's row or page).

Deadlocks are rare if all transactions access objects in a fixed order. For example, the following situation is bad.

  • Transaction #1 exclusive locks Table1.

  • Transaction #2 exclusive locks Table2.

  • Transaction #1 tries to get a lock on Table2, and must wait.

  • Transaction #2 tries to get a lock on Table1, and must also wait.

  • Deadlock!

Here's another example, this time of a good situation.

  • Transaction #1 exclusive locks Table1.

  • Transaction #2 tries to get an exclusive lock on Table1, and must wait.

  • Transaction #1 gets an exclusive lock on Table2.

  • Transaction #1 is unblocked, so it proceeds to end of transaction and releases all locks.

  • Transaction #2 tries again and gets an exclusive lock on Table1.

  • Transaction #2 gets an exclusive lock on Table2.

  • Transaction #2 is unblocked, so it proceeds to end of transaction and releases all locks.

The only difference between the "bad" example and the "good" example is that, in the good example, Transaction #1 and Transaction #2 both asked for locks in the same order: first on Table1, then on Table2. You can ensure such situations happen by always following this policy: when all else is equal, master tables before detail tables, DELETE before INSERT, and UPDATE ... WHERE x = 1 before UPDATE ... WHERE x = 2.
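The fixed-order policy can even be checked mechanically: deadlock between two transactions requires that they acquire some pair of objects in opposite orders. A toy checker for the two examples above (timing is ignored; this tests only the ordering precondition):

```python
from itertools import combinations

def opposite_order_exists(schedules):
    """True if some pair of transactions acquires two common objects in
    opposite orders -- the precondition for the deadlock shown above."""
    for (_, a), (_, b) in combinations(schedules.items(), 2):
        common = [r for r in a if r in b]       # in a's acquisition order
        for r1, r2 in combinations(common, 2):
            if b.index(r1) > b.index(r2):       # b acquires them reversed
                return True
    return False

bad  = {"T1": ["Table1", "Table2"], "T2": ["Table2", "Table1"]}
good = {"T1": ["Table1", "Table2"], "T2": ["Table1", "Table2"]}
print(opposite_order_exists(bad))   # True: deadlock possible
print(opposite_order_exists(good))  # False: fixed order, no deadlock
```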

Avoid exclusive index-page locks by including the index column in your data-change statement. For example, instead of executing Statement #1, use Statement #2:

 Statement #1:

 UPDATE Table1
   SET indexed_column = 5
   WHERE column1 = 77

 Statement #2:

 UPDATE Table1
   SET indexed_column = 5
   WHERE column1 = 77
     AND indexed_column <> 5
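Statement #2's extra predicate has a second, visible benefit: rows that already hold the target value are never touched, so the affected-row count drops. This shows up even in SQLite (table and values are illustrative):

```python
import sqlite3

# One of the two matching rows already has indexed_column = 5; the
# extra predicate keeps the UPDATE from rewriting (and locking) it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (column1 INTEGER, indexed_column INTEGER)")
conn.execute("CREATE INDEX i1 ON Table1 (indexed_column)")
conn.executemany("INSERT INTO Table1 VALUES (?, ?)",
                 [(77, 5), (77, 9)])

affected = conn.execute(
    "UPDATE Table1 SET indexed_column = 5 "
    "WHERE column1 = 77 AND indexed_column <> 5").rowcount
print(affected)  # 1: only the row that actually needs changing
```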

Now here is a different kind of tip: split transactions up however you can. For example, instead of writing a single transaction to get a ROWID and UPDATE it, code a read transaction to get the ROWID and then code a write transaction to do the data change for that ROWID. Or instead of writing a single transaction to change two columns of a table or to change many rows of a table, code separate passes to change first column1 and then column2, or code separate passes to change only the first 20 rows and then the next 20 rows and so on. Splitting transactions is a fearsome tool for avoiding deadlocks, although at the cost of both throughput and response time.

Finally, because you can't be sure that you'll always be able to avoid deadlocks, make sure your application includes code that will handle a "deadlock" error return from the DBMS.
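Handling the deadlock error usually means retrying the whole transaction, ideally with a jittered backoff so the former deadlock partners don't collide again. A sketch; DeadlockError stands in for whatever exception your driver actually raises:

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for the driver-specific 'chosen as deadlock victim' error."""

def with_deadlock_retry(run_transaction, attempts=3):
    """Run the transaction; on deadlock, wait a jittered backoff and retry."""
    for attempt in range(attempts):
        try:
            return run_transaction()
        except DeadlockError:
            if attempt == attempts - 1:
                raise                      # give up after the last attempt
            time.sleep(random.uniform(0, 0.01 * 2 ** attempt))

calls = []
def flaky_transfer():
    calls.append(1)
    if len(calls) < 2:                     # first try hits a deadlock
        raise DeadlockError()
    return "committed"

result = with_deadlock_retry(flaky_transfer)
print(result)  # committed (after one retry)
```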

The Bottom Line: Isolation Levels

The isolation level of a transaction determines the type of concurrency problem that will be tolerated for the transaction. The sooner a lock is released, the better.

You can set a transaction's isolation level with the SQL SET TRANSACTION statement, with the ODBC function call SQLSetConnectAttr(...SQL_ATTR_TXN_ISOLATION,...), or with the JDBC method setTransactionIsolation(...).

READ UNCOMMITTED gives you the lowest isolation and the highest concurrency level. No locks are issued or checked during the transaction; the DBMS will allow reading of rows that have been written but not committed by other transactions. No data changes are allowed at this level.

READ COMMITTED gives you medium isolation and a fairly high concurrency level. Shared locks are mandatory but can be released before the transaction ends; the DBMS will allow reading of rows that have been written by other transactions only after they have been committed.

REPEATABLE READ gives you fairly high isolation, but concurrency drops sharply. Shared locks are mandatory and will not be released until the transaction ends; the DBMS will not allow a situation where one transaction gets two sets of data from two reads of the same set of rows because a second transaction has changed that set of rows between the two reads.

SERIALIZABLE gives you the highest isolation and the lowest concurrency level. The DBMS might lock whole tables, not just rows, during a transaction. In effect, the DBMS will execute concurrent transactions in a manner that produces the same effect as a serial execution of those transactions.

Transactions are concurrent when they have overlapping start or end times. There are four general problems that can arise from concurrent transactions: the Lost Update, the Dirty Read, the Non-repeatable Read, and the Phantom.

To avoid Lost Updates, set your transaction isolation level to READ UNCOMMITTED or higher. READ UNCOMMITTED is a good isolation level to specify for (a) transactions that are slow and (b) transactions that are likely to have small errors, or errors that will cancel one another out.

To avoid Dirty Reads, set your transaction isolation level to READ COMMITTED or higher. For READ COMMITTED transactions, assume row locks, no versioning, and no updating. READ COMMITTED is a good isolation level to specify for transactions that contain only one SQL statement.

To avoid Non-repeatable Reads, set your transaction isolation level to REPEATABLE READ or higher. REPEATABLE READ is a good isolation level to specify for transactions that contain multiple SQL statements involving data changes.

Use savepoints to force REPEATABLE READ transactions to release locks early.

To avoid Phantoms, set your transaction isolation level to SERIALIZABLE. SERIALIZABLE is a good isolation level to specify for transactions that contain quickly executed SQL statements that affect only a few rows at a time.

If your WHERE search condition always searches for a single row on a unique key, and the fetch succeeds, Phantoms can't happen with REPEATABLE READ.

If you can make big arrays, you can end transactions (and release locks) early.

You can specify that a transaction will be READ ONLY, or its opposite, FOR UPDATE, using the SET TRANSACTION statement, the ODBC function call SQLSetConnectAttr, the JDBC method setReadOnly, or the embedded SQL DECLARE CURSOR FOR UPDATE clause. It's important to be explicit about this, as the specification affects locking strategy drastically.

READ ONLY transactions help concurrency. READ ONLY is particularly lock friendly when combined with versioning or with the READ COMMITTED isolation level.

FOR UPDATE transactions cause update locks and thus have fewer deadlocks.

Update locks aren't released as quickly as shared locks when transactions are READ COMMITTED.

Update locks force more waiting because they are exclusive when transactions are READ COMMITTED, REPEATABLE READ, or SERIALIZABLE.

If your DBMS doesn't support update locks, force exclusive locks at the start of your transactions with dummy UPDATE statements or use your DBMS's LOCK statement.

Don't change indexed columns.

Don't index frequently changed columns.

Deadlocks are rare if all transactions access objects in a fixed order. Follow this policy: when all else is equal, master tables before detail tables, DELETE before INSERT, and UPDATE ... WHERE x = 1 before UPDATE ... WHERE x = 2 .

Avoid exclusive index-page locks by including the index column in the data-change statement.

Splitting transactions is a fearsome tool for avoiding deadlocks, but costs both throughput and response time.

SQL Performance Tuning
ISBN: 0201791692
Year: 2005
Pages: 125