Fundamentals of Transaction Processing

A typical command in an OLTP application often must modify several items of data to complete its work. Suppose you're building an application that lets users submit orders to purchase products. To keep things simple, let's say that each order is submitted on behalf of a single customer for a specific quantity of one type of product. The application must make three modifications to a database to properly record the purchase:

  • Subtract the purchased quantity from the product's inventory level in the Products table.

  • Add the purchase price to the customer's account balance in the Customers table.

  • Add a new order record to the Orders table.

These three modifications make up a single unit of work. In essence, they make up a single transaction. A transaction can be defined as a related set of operations that read and modify data. As you'll see, quite a few requirements must be met when transactions are run in a high-volume multiuser environment.

Many OLTP systems are written against relational database management systems (DBMSs) such as SQL Server and Oracle, but this isn't a requirement. You can build an OLTP system using many other types of data sources, including mainframe applications, message queues, and the file system. However, the examples in this chapter primarily use SQL Server as the data source.
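
To keep the later examples concrete, here is a minimal sketch of the three tables the purchase scenario assumes. The table and column names match the code shown later in this chapter; the data types and key columns are assumptions made for illustration only.

-- Assumed schema for the purchase example (types and keys are illustrative).
CREATE TABLE Products (
    Product   varchar(50) NOT NULL PRIMARY KEY,
    Quantity  int         NOT NULL     -- current inventory level
)

CREATE TABLE Customers (
    Customer        varchar(50) NOT NULL PRIMARY KEY,
    AccountBalance  money       NOT NULL DEFAULT 0
)

CREATE TABLE Orders (
    OrderID   int         IDENTITY PRIMARY KEY,   -- assumed surrogate key
    Customer  varchar(50) NOT NULL REFERENCES Customers(Customer),
    Product   varchar(50) NOT NULL REFERENCES Products(Product),
    Quantity  int         NOT NULL,
    Price     money       NOT NULL
)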

Every transaction must meet four essential requirements. It must be atomic, consistent, isolated, and durable. These requirements are known as the ACID rules. When you write a transaction for an OLTP system, all participants must follow these rules. Let's look at what these requirements mean:

  • Atomic A transaction must be an all-or-nothing proposition. Everything must be successfully updated or nothing should be updated.

  • Consistent Individual operations within a transaction may temporarily leave data in violation of the system's integrity constraints. However, when an OLTP system completes a transaction, the data as a whole must be left in a valid state.

  • Isolated One transaction can't view the uncommitted changes of another transaction. This means that the uncommitted changes of a transaction must be isolated from all other transactions.

  • Durable When a transaction is committed, the data sources involved must store all changes in stable storage and these changes must be recoverable in the event of a system failure.

When an application successfully completes all the modifications associated with a transaction, it commits the transaction. This is obviously the most desirable outcome. After a transaction has been committed, the data source must be able to account for all changes, even in the face of a system failure. This makes the transaction durable.

If an application can't complete all of its work, it must abort the transaction. When an application aborts a transaction, it tells the data source to roll back any work that has been completed. This means that the data source must provide the ability to roll back uncommitted changes. Most data sources do this through the use of a logging mechanism. For example, some DBMSs undo changes made by a transaction by playing the logged updates in reverse. However, many data sources use rollback mechanisms that are completely different. All that's required is that the data source be able to undo any changes that haven't been committed. This makes the transaction atomic.

Now let's look at what it takes to meet the requirement of consistency. The data in a system typically has a set of integrity constraints. These constraints are defined so as to keep the data for the entire system in a valid state. For example, one constraint could be that no two customers in a Customers table can have the same primary key. Another constraint could be that a customer record can never have an account balance that exceeds the credit limit. These are just two of the many types of constraints that can be used.
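
Both of these constraints can be declared to SQL Server so that the DBMS itself rejects data that violates them. The first is exactly what a PRIMARY KEY constraint on the Customer column enforces. The second can be written as a CHECK constraint; the CreditLimit column in the sketch below is an assumption, since the running example doesn't define one.

-- Assumes a CreditLimit column on Customers (not part of the running example).
ALTER TABLE Customers
    ADD CONSTRAINT CK_Customers_CreditLimit
    CHECK (AccountBalance <= CreditLimit)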

It's sometimes permissible to make changes while a transaction is running that violate the data integrity constraints of the system as a whole. However, when the system completes the transaction, it must make sure that all the data integrity constraints are met. Much of this burden falls on the programmer who's writing the transaction. When you're writing business logic and data access code for a transaction, you must have an understanding of what the constraints are. You should never commit the changes for a transaction unless you're sure that you're returning the database to a consistent state.

As you can see, you're required to write a transaction so that when it ends it leaves the data in a consistent state. However, your code needs additional help to maintain consistency when multiple users are involved. A change made by one transaction isn't real (stable) until it has been committed. Therefore, one transaction shouldn't be able to see the uncommitted changes of another transaction. If this isn't the case, the system's data can easily get into an inconsistent state.

Imagine two transactions, one running for user A and one running for user B. If transaction B can see uncommitted changes made by transaction A, transaction B might use values from these uncommitted changes when making its own changes to the database. If transaction A is rolled back, a problem occurs: Transaction B has used one or more values that never really existed. As you can see, a transaction can't read uncommitted data if it is to remain consistent.
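
Here's a sketch of that scenario in Transact-SQL, run from two connections. It relies on the Read Uncommitted isolation level described later in this chapter; under SQL Server's default settings, transaction B would simply block instead of reading the dirty value.

-- Connection A: make a change but don't commit it yet.
BEGIN TRANSACTION
UPDATE Customers
SET AccountBalance = AccountBalance + 20
WHERE Customer = 'Bob'

-- Connection B: deliberately read uncommitted data (a dirty read).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT AccountBalance FROM Customers WHERE Customer = 'Bob'
-- B now holds a balance that includes A's uncommitted change.

-- Connection A: change its mind.
ROLLBACK TRANSACTION
-- The value B read never really existed.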

A transaction must be isolated from the uncommitted changes of every other transaction. The application therefore needs some locking help from the data source. Locks allow transactions to run without compromising consistency. Each data source that supports transactions in a multiuser environment must provide some type of lock management. A data source's lock manager acts as a blocking agent to make sure that transactions wait while other transactions use shared resources such as a record in a database.

A lock manager places locks on different types of data items. A data item can be a table, a page, a record, or a file. Each type of data item provides a different level of granularity. Some levels of locking granularity provide better performance, while others provide higher levels of concurrency.

Most data sources provide locking at more than one level of granularity. Microsoft SQL Server 6.5 provides locking at the page level and at the table level. Locking at the page level is better for concurrency because many users can access different parts of a table at the same time. On the other hand, what if a transaction must lock every page in a table? In that case page-level locking offers no concurrency advantage, because the entire table ends up locked anyway, and a single table lock can be acquired much faster than hundreds or thousands of page locks. SQL Server automatically determines which types of locks to use and escalates page locks to a table lock when a certain threshold is met.

SQL Server 7 and many other relational DBMSs, such as Oracle, provide row-level locking, in which two different transactions can independently lock individual rows on the same physical page. Row-level locking can provide greater concurrency than page-level locking in an OLTP system.
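
If you want to see which granularity the lock manager chose, or override it for a particular statement, SQL Server exposes both. The sketch below uses the sp_lock system procedure and the ROWLOCK and TABLOCK table hints; the hint syntax shown is the form accepted by later releases of SQL Server, so check the documentation for the release you're running. In most cases the lock manager's automatic choice is the right one.

-- Inspect the locks currently held by all sessions.
EXEC sp_lock

-- Ask for row-level locks on this statement (hint form used by later releases).
UPDATE Products WITH (ROWLOCK)
SET Quantity = Quantity - 1
WHERE Product = 'Dog'

-- Ask for a single table-level lock instead of many finer-grained locks.
SELECT COUNT(*) FROM Products WITH (TABLOCK)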

From here on, I won't differentiate among these levels of granularity. While each level offers trade-offs in performance and concurrency, all of the levels work the same way in terms of lock management. I'll use the generic term data item, which can mean any level of granularity.

When a transaction accesses a data item, the lock manager places a lock on it. However, if a conflicting lock has been placed on the data item by another transaction, the lock manager blocks the transaction until the conflicting lock is removed. The transaction must wait its turn until the other transaction has ended. Most of the time transactions simply wait their turn and then complete their work. In some rare cases, transactions are aborted when the system detects a deadlock. We'll revisit deadlocks later in this chapter.

Write Locks and Read Locks

Lock managers use two primary lock types: write locks and read locks. The data source places a write lock (also called an exclusive lock) on a data item when it's modified by a transaction. Write locks isolate uncommitted changes from other transactions. A read lock (also called a shared lock) is placed on a data item while it's being read by a transaction.

A write lock conflicts with other write locks and with read locks. A transaction that has a write lock blocks all other transactions from reading from or writing to the data item in question. The data item remains locked until the transaction is committed or rolled back. This makes sense because the system must isolate uncommitted changes to ensure data consistency. However, this isolation has a price: The blocking reduces overall system concurrency and throughput.

Read locks don't conflict with read locks. Many transactions can obtain a read lock on the same data item concurrently. However, a transaction can't obtain a write lock on a data item that has outstanding read locks. This ensures that a transaction doesn't overwrite a data item while another transaction is reading it. The following table summarizes how the lock manager handles lock requests when locks are already in place.

                        No Lock Held      Read Lock Held     Write Lock Held
Read Lock Requested     Lock acquired     Lock acquired      Request blocked
Write Lock Requested    Lock acquired     Request blocked    Request blocked
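
You can observe this blocking yourself by opening two connections to the same database and running something like the sketch below. It assumes SQL Server's default isolation level (Read Committed, described in the next section) and the Products table from the running example; the exact data item that gets locked (row, page, or table) depends on the DBMS and version.

-- Connection 1: acquire a write lock and hold it by leaving the transaction open.
BEGIN TRANSACTION
UPDATE Products
SET Quantity = Quantity - 1
WHERE Product = 'Dog'

-- Connection 2: this read requests a read lock on the same data item.
-- The outstanding write lock conflicts with it, so the query blocks here...
SELECT Quantity FROM Products WHERE Product = 'Dog'

-- Connection 1: ...until the transaction ends and the write lock is released.
COMMIT TRANSACTION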

Finding the Correct Isolation Level

While locks help enforce data consistency and transaction isolation, they also can degrade concurrency and throughput. You can increase or decrease the level of blocking by adjusting the transaction's isolation level. The lock manager uses the isolation level of a transaction when determining how to handle locking. The four commonly supported isolation levels are the following:

  • Read Uncommitted The transaction can read any data items regardless of whether they have outstanding write locks. Additionally, the transaction will not acquire read locks. This isolation level provides the fastest access but is highly vulnerable to data inconsistency.

  • Read Committed Only data items that have been committed can be read. A transaction running at this isolation level must wait until a write lock is released before accessing a data item. Read locks are acquired while the transaction reads a data item, but they aren't held for the transaction's lifetime. A read lock is released once the data item has been read. This is the default isolation level used by both SQL Server and ADO when you run a transaction without MTS.

  • Repeatable Read This level is like Read Committed except that all read locks are held for the lifetime of the transaction. Any data item that's read in any stage of a transaction can be read later without any change. That is, all data read by the transaction remains in the same state until the transaction is committed or rolled back. This level isn't supported by SQL Server 6.5, which automatically escalates it to Serializable.

  • Serializable This level is like Repeatable Read with one extra qualification: A query that runs multiple times in a serializable transaction should always have the same results. Phantom data items can't be inserted by other transactions until the OLTP system has completed the transaction. For example, if a transaction running at this level runs a query to determine the number of records in a table, no other transaction can insert a row in the table until the transaction is completed. This is the default (and only) isolation level used by SQL Server when called by an MTS transactional object.

Every level except Serializable compromises isolation in order to improve concurrency and throughput. When you run transactions with an isolation level of Serializable, the data source attempts to provide the best level of concurrency and throughput without compromising consistency and isolation. A transaction is said to be serializable if it isn't affected by other transactions. When every transaction is serializable, running them concurrently has the same effect on the application's data as running them one at a time.
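
The practical difference between these levels is easiest to see with two connections, as in the sketch below, which contrasts Read Committed with Repeatable Read. Remember that SQL Server 6.5 escalates the Repeatable Read setting to Serializable, which behaves the same way for this particular test.

-- Connection 1: read the same row twice inside one transaction.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED   -- rerun with REPEATABLE READ to compare
BEGIN TRANSACTION
SELECT Quantity FROM Products WHERE Product = 'Dog'

-- Connection 2: runs between the two reads (autocommit).
UPDATE Products SET Quantity = Quantity - 1 WHERE Product = 'Dog'

-- Connection 1: under Read Committed the read lock was already released,
-- so this can return a different value (a nonrepeatable read). Under
-- Repeatable Read, connection 2 is blocked until this transaction ends,
-- so both reads return the same value.
SELECT Quantity FROM Products WHERE Product = 'Dog'
COMMIT TRANSACTION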

Programming Simple Transactions

It's not complicated to write a transaction against a single data source. Let's say you're creating a two-tier application using a SQL Server database. You can control a transaction programmatically using Transact-SQL. For example, you can write the following code to run a transaction:

-- Set isolation level to Serializable.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

-- Run transaction to purchase a dog for $20.
BEGIN TRANSACTION

    -- Decrement product quantity from inventory.
    UPDATE Products
    SET Quantity = Quantity - 1
    WHERE Product = 'Dog'

    -- Charge price to customer account.
    UPDATE Customers
    SET AccountBalance = AccountBalance + 20
    WHERE Customer = 'Bob'

    -- Add order record.
    INSERT Orders(Customer, Product, Quantity, Price)
        VALUES('Bob', 'Dog', 1, 20)

COMMIT TRANSACTION

This example doesn't include any code to deal with problems and transaction rollback, but it illustrates how to execute a series of SQL statements as a single transaction. When you execute this code, SQL Server uses its own internal transaction manager to enforce the ACID rules. All of the changes are committed or no changes are committed. Because the transaction's isolation level is set to Serializable, SQL Server also makes sure that the uncommitted changes of one transaction don't affect any other transactions.
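
Here is one way to add that rollback path, as a sketch. It checks @@ERROR after each statement, which works on SQL Server releases of this era (later releases also offer TRY...CATCH); the label name and the decision to abort on any error are illustrative choices.

-- Purchase transaction with a rollback path (sketch).
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION

UPDATE Products
SET Quantity = Quantity - 1
WHERE Product = 'Dog'
IF @@ERROR <> 0 GOTO AbortTran

UPDATE Customers
SET AccountBalance = AccountBalance + 20
WHERE Customer = 'Bob'
IF @@ERROR <> 0 GOTO AbortTran

INSERT Orders(Customer, Product, Quantity, Price)
    VALUES('Bob', 'Dog', 1, 20)
IF @@ERROR <> 0 GOTO AbortTran

COMMIT TRANSACTION
RETURN

AbortTran:
    -- Undo every uncommitted change so the transaction stays atomic.
    ROLLBACK TRANSACTION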

You can also manage transactions from a database API such as ADO, RDO, ODBC, or OLE DB. These APIs let you create and manage a transaction within the scope of an open connection. For example, here's how to run a transaction against SQL Server using ADO:

Dim conn As Connection
Set conn = New Connection
conn.Open "DSN=Market;UID=sa;PWD="
conn.IsolationLevel = adXactSerializable
conn.BeginTrans
    ' Assume the same SQL statements from the Transact-SQL example.
    conn.Execute SQL_DecrementInventory
    conn.Execute SQL_ChargeCustomer
    conn.Execute SQL_AddOrder
conn.CommitTrans

As you can see, it's not difficult to program a transaction with either Transact-SQL or ADO. All you do is set the appropriate isolation level and then use three commands to begin, commit, and roll back the transaction. The DBMS does its part to enforce the ACID rules. As long as you understand isolation levels and locking, you can find opportunities to optimize concurrency without sacrificing data consistency. If you are programming a two-tier OLTP application against a single DBMS, this might be all you need.


