Programming Local Transactions


Writing a simple transaction against a single DBMS isn't very complicated. Let's say you're creating an application that keeps all of its data in one SQL Server database. One way to start and control a local SQL Server transaction is to use methods exposed by the ADO Connection object. Look at the following code:

 Dim conn As Connection
 Set conn = New Connection
 conn.Open MyConnectString
 conn.IsolationLevel = adXactSerializable
 conn.BeginTrans
     ' SQL statements omitted for clarity.
     conn.Execute sqlDecrementInventory
     conn.Execute sqlChargeCustomer
     conn.Execute sqlAddOrder
 conn.CommitTrans

While this might be the easiest approach, controlling a transaction through an ADO connection object results in inefficient locking because it takes at least three round trips to the DBMS to complete a transaction. You hold on to locks while calls are sent back and forth between your ADO code in the middle tier and the DBMS. Whenever possible, you should run each transaction in a single round trip to the DBMS, especially if your ADO code and the DBMS run on separate computers.

There are two common ways to run a local transaction in a single round trip to the DBMS. You can submit a SQL batch that includes a set of statements, or you can invoke a stored procedure that holds the equivalent SQL logic. Both approaches require you to control the transaction programmatically by using Transact-SQL. Here's an example of a batch that runs a transaction:

 -- Set isolation level to Serializable.
 SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

 -- Run transaction to purchase five dogs for $100.
 BEGIN TRANSACTION
     -- Decrement product quantity from inventory.
     UPDATE Products SET Quantity = Quantity - 5
     WHERE Product = 'Dog'
     -- Charge price to customer account.
     UPDATE Customers SET AccountBalance = AccountBalance + 100
     WHERE Customer = 'Bob'
     -- Add order record.
     INSERT Orders(Customer, Product, Quantity, Price)
     VALUES('Bob', 'Dog', 5, 100)
 COMMIT TRANSACTION

This simple example doesn't include any code to validate data or deal with transaction rollback, but it illustrates how to execute a series of SQL statements in a single transaction. When you execute this batch using an ADO Connection or Command object, SQL Server uses its own internal transaction manager to enforce the ACID rules. Note that this batch also lets you run and release the transaction in a single round trip.
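To give you an idea of what rollback handling might look like, here is one possible version of the same batch that checks @@ERROR after each statement and rolls back on failure. (The OnError label and the error message are illustrative names only; this is a sketch, not the book's code.)

 BEGIN TRANSACTION
     UPDATE Products SET Quantity = Quantity - 5
     WHERE Product = 'Dog'
     IF @@ERROR <> 0 GOTO OnError
     UPDATE Customers SET AccountBalance = AccountBalance + 100
     WHERE Customer = 'Bob'
     IF @@ERROR <> 0 GOTO OnError
     INSERT Orders(Customer, Product, Quantity, Price)
     VALUES('Bob', 'Dog', 5, 100)
     IF @@ERROR <> 0 GOTO OnError
 COMMIT TRANSACTION
 RETURN
 OnError:
     -- Undo any work completed so far and report the failure.
     ROLLBACK TRANSACTION
     RAISERROR('Purchase transaction failed.', 16, 1)

Because @@ERROR is reset after every statement, it must be tested immediately after each statement it is meant to guard.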

SQL Server transactions use a default isolation level of Read Committed. The batch in the previous example raises the isolation level to Serializable, and all statements in the transaction run at that level. SQL Server 7 also allows you to add locking hints that adjust the isolation level on a statement-by-statement basis.
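For example, assuming the Products table from the batch above, a table-level locking hint lets a single statement run with Serializable semantics while the rest of the session stays at Read Committed (HOLDLOCK is the SQL Server 7 hint equivalent to SERIALIZABLE):

 -- Hold shared locks until the end of the transaction,
 -- giving this one statement Serializable behavior.
 SELECT Quantity
 FROM Products (HOLDLOCK)
 WHERE Product = 'Dog'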

An alternative to including SQL batches in your data access code is to create stored procedures. Stored procedures can hold the same Transact-SQL logic, and they typically yield the best performance because of the way they're cached in memory in the DBMS. When you define a stored procedure, you can include parameters defined for input, input/output, and output, as well as a return value. If you call a stored procedure that has output parameters or a return value, you should call it using an ADO Command object.
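As a quick sketch, suppose the purchase logic shown earlier lived in a stored procedure named PurchaseProduct (a hypothetical name) that takes three input parameters and reports success through its return value. Calling it with an ADO Command object might look like this:

 Dim cmd As Command
 Set cmd = New Command
 Set cmd.ActiveConnection = conn
 cmd.CommandType = adCmdStoredProc
 cmd.CommandText = "PurchaseProduct"
 ' Parameter 0 always holds the return value.
 cmd.Parameters.Append cmd.CreateParameter("Return", adInteger, adParamReturnValue)
 cmd.Parameters.Append cmd.CreateParameter("@Customer", adVarChar, adParamInput, 20, "Bob")
 cmd.Parameters.Append cmd.CreateParameter("@Product", adVarChar, adParamInput, 20, "Dog")
 cmd.Parameters.Append cmd.CreateParameter("@Quantity", adInteger, adParamInput, , 5)
 cmd.Execute , , adExecuteNoRecords
 If cmd.Parameters("Return").Value <> 0 Then
     ' The procedure reported a failure; handle it here.
 End If

The adExecuteNoRecords option tells ADO not to build a Recordset, which saves a little overhead when the procedure returns no rows.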

Many people have suggested that stored procedures don't fit into multitier application design. They argue that the logic for validation and business rules should be maintained in a language such as Visual Basic instead of Transact-SQL. If you agree with this argument, you're more concerned with code maintainability than with application performance. Stored procedures will always run faster. If you decide to write most or all your logic in Visual Basic, you really need to pay attention to how many round trips you're making to the DBMS.

For example, if you try to maintain too much logic in Visual Basic, you might end up with a design in which you make one round trip to start a transaction, a second to get some data for validation, a third to execute an UPDATE statement, and a fourth to commit the transaction. In a larger, more complex transaction, this approach can lead to 20 to 30 round trips for each transaction. My point is that this approach simply doesn't scale. While calls are going back and forth between your data access code and the DBMS, locks are being held that degrade your application's throughput and scalability. Reducing round trips is a very important aspect of scalability.

Catching Errors from SQL Server's Native OLE DB Provider

Database programmers often encounter a problem the first time they submit batches or execute stored procedures against SQL Server. If you're using the native OLE DB provider, it's easy to miss errors that are raised by a batch or a stored procedure. For example, if you write a batch with multiple SELECT, INSERT, UPDATE, and DELETE statements, the underlying data access layer returns a separate result for each statement, even for an action query that doesn't return any rows. Each result carries an HRESULT, a RowsAffected value, possibly a recordset, and any error message associated with the execution of its statement. Because the native OLE DB provider returns multiple results, you must process each one to make sure you handle errors raised by statements defined later in the batch. Here's a simple example of what your code should look like:

 Dim rs As Recordset
 Set rs = conn.Execute(sqlMyBigBatch)
 Do Until rs Is Nothing
     ' Process the current result here, then move to the next one.
     Set rs = rs.NextRecordset
 Loop



Programming Distributed Applications with COM+ and Microsoft Visual Basic 6.0
ISBN: 1572319615
Year: 2000
Authors: Ted Pattison
