MTS and Declarative Transactions

MTS lets you create distributed transactions with the DTC while hiding many of the low-level details. It does this by providing a programming model based on declarative transactions. You are responsible for creating objects inside the scope of an MTS transaction. When you do this, MTS creates the distributed transaction with the DTC and silently enlists all your RM connections for you.

After you create your objects inside a transaction, you use a handful of methods supplied by the ObjectContext interface to control the flow of the transaction and to make sure the transaction is released as soon as possible. Let's examine how you create one or more objects inside an MTS transaction.

Every MTS component has a transaction support attribute. Figure 10-3 shows the Transaction tab of the component's Properties dialog box in the MTS Explorer. This dialog box lets you see and modify the transaction support attribute for each component in an MTS package. If you're using Visual Basic 5, you must install your components first and then adjust the transaction support attribute setting by hand.

Figure 10-3. Each MTS component has a transaction support attribute. When the MTS run time creates an object from a component, it looks at this attribute to determine whether the object should be created in a new or in an existing transaction.

Visual Basic 6 added the MTSTransactionMode setting to the property sheet of each public class module inside an ActiveX DLL project. When you set this property, as shown in Figure 10-4, Visual Basic publishes the corresponding MTS transaction support attribute in your server's type library. When you install your DLL with the MTS Explorer, MTS automatically configures your components with the appropriate transaction support attributes. The person installing a DLL created with Visual Basic 6 in an MTS package doesn't have to set these attributes by hand, as is the case with DLLs created with Visual Basic 5.

Figure 10-4. The MTSTransactionMode setting is published in the server's type library. When you install your DLL with the MTS Explorer, each component is configured with the transaction support attribute that corresponds to its MTSTransactionMode setting.

Notice that the wording of the MTS transaction support settings differs slightly from that of the MTSTransactionMode settings for a Visual Basic class module. Don't let this bother you; it's just a minor inconsistency in the terminology used by MTS and Visual Basic. Every transaction support setting has a corresponding MTSTransactionMode setting. Here's how they map to one another:

MTSTransactionMode Setting     Equivalent MTS Transaction Support Setting
RequiresTransaction            Requires a transaction
RequiresNewTransaction         Requires a new transaction
UsesTransaction                Supports transactions
NoTransactions                 Doesn't support transactions
NotAnMTSObject                 N/A

Creating Objects Inside a Transaction

There's only one way to place an object inside a transaction. You must ask the MTS run time to place the object inside the transaction when the object is created. When the MTS run time creates a new object, it examines the component's setting to see whether the new object should be created inside a transaction. You use the transaction support setting to tell the MTS run time what to do.

You can't add an object to a transaction after the object has been created. Likewise, you can never disassociate an object from the transaction in which it was created. Once an object is created inside a transaction, it spends its entire lifetime there. When the transaction is committed or aborted, the MTS run time destroys all the objects inside it. As you'll see, this cleanup activity is essential to achieving the proper semantics of a transaction. The objects must be destroyed to enforce the consistency requirement of the ACID rules.

Both base clients and MTS objects can create new MTS objects. When the MTS run time receives an activation request to create a new object, it determines whether the object's creator is running inside an existing transaction. The MTS run time also inspects the transaction support attribute of the component. From these two pieces of information, it can determine how to proceed. A component can have one of the following transaction support settings:

  • Requires A Transaction. The object is created within the context of a transaction. The object is placed inside the transaction of its creator if one exists. If the creator isn't running in a transaction, the MTS run time creates a new transaction for the object.

  • Requires A New Transaction. The MTS run time always creates a new transaction for the object, whether or not the creator is running in one.

  • Supports Transactions. The object is placed inside the transaction of its creator if one exists. If the creator isn't running in a transaction, the object is created without a transaction.

  • Doesn't Support Transactions. The object isn't created within the context of a transaction.

A base client can initiate a transaction by activating an MTS object from a component marked as Requires or Requires New. The MTS run time determines that the new object must run inside a transaction and that the creator isn't running inside a transaction. The MTS run time therefore creates a new MTS transaction and then creates the new object inside it. The first object created inside a transaction is known as the root of the transaction.

The root object can create additional objects inside its transaction. If you're writing a method for an MTS object and you want to propagate another object inside that transaction, you must follow two rules. First, you must create the object from components marked as either Requires or Supports. The MTS run time will create the new object in the same transaction as its creator with either setting. Second, you must create additional objects with CreateInstance. The scope of every transaction is always contained within the scope of a single activity. As you'll recall from Chapter 9, CreateInstance always creates the new object inside the same activity and gives the new object its own context wrapper.
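One point worth underscoring: creation must be routed through the object context. A minimal sketch follows; the component name MyDll.CSecondary and class name CSecondary are hypothetical.

```vb
' Inside a method of an object already running in a transaction.
Dim ObjCtx As ObjectContext
Set ObjCtx = GetObjectContext()

Dim Secondary As CSecondary
' Wrong: New on a class in the same project bypasses COM activation
' altogether, so the new object gets no context wrapper and doesn't
' join its creator's transaction.
'Set Secondary = New CSecondary

' Right: CreateInstance keeps the new object in the same activity
' and, when the component is marked Requires or Supports, places it
' in the same transaction as its creator.
Set Secondary = ObjCtx.CreateInstance("MyDll.CSecondary")
```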

A Tale of Two Transactions

In most cases, you create several objects inside an MTS transaction. However, let's look at a transaction that involves only the root object. Once you understand how the root object works, you'll be ready to examine what happens when you have several objects running inside a transaction.

When a base client creates an MTS object from a component marked as Requires, the MTS run time creates a new MTS transaction in which to place the new object. The MTS run time also calls down to the DTC to create a new distributed transaction on your behalf. You should think of the MTS transaction as the logical transaction and the DTC transaction as the physical transaction. Note that the MTS run time doesn't create the physical transaction when it creates the logical transaction.

The MTS run time creates the logical transaction when it creates the root object, but it defers creating the physical transaction until the root object is activated. As Chapter 9 explained, an MTS object isn't activated until the client invokes the first method call. Thus there can be a short or a long delay between the creation of these two transactions. The MTS run time delays creating the physical transaction as long as possible.

When the root object is activated, the MTS run time calls down to the DTC to begin the transaction by obtaining an ITransactionDispenser reference and calling BeginTransaction. The MTS run time creates the physical transaction with an isolation level of Serializable and a timeout interval of 60 seconds. This means that you work with the highest level of isolation and have one minute to complete your work. (The timeout interval is adjustable; you can modify this machinewide setting by selecting a computer from the MTS Explorer's tree view and opening its Properties dialog box.)

When the MTS run time calls down to create the physical transaction, the DTC creates a transaction object and passes the MTS run time an ITransaction reference. The MTS run time holds onto this reference for the lifetime of the logical MTS transaction. The ITransaction interface lets the MTS run time call Commit or Abort. As you can see, the MTS run time can control the outcome of the transaction.

As a Visual Basic programmer, you cannot obtain an ITransaction reference to the transaction object and call Commit or Abort yourself. So now you're asking, "How do I control the transaction?" The answer lies in knowing when the MTS run time calls Commit and Abort and seeing how you can influence its decision. The MTS run time decides to commit or abort the transaction when the root object is deactivated. Before this time, you can do quite a lot to change the state of the transaction. You can invoke methods on the object context that allow you to vote on whether you think the transaction should succeed.

The Three Flags: Happy, Done, and Doomed

Figure 10-5 shows a diagram of the root object, its context wrapper, and the MTS transaction in which they are running. It also shows some important internal flags that are maintained by MTS. These variables aren't directly exposed to you, but you should understand how they influence the MTS run time's decision to complete or abort the transaction.


Figure 10-5. The doomed flag tells the MTS transaction whether to commit or abort the transaction when the root object is deactivated. The happy flag indicates whether the object wants to commit its changes. The done flag indicates whether the object is ready to be deactivated.

The transaction as a whole maintains a Boolean value that indicates whether the transaction must be aborted. We'll call this the doomed flag. This value is initially set to False when the MTS transaction is created. When this flag is set to True, the MTS run time knows to abort the transaction. Now you must answer two important questions:

  • When does the MTS run time inspect the doomed flag?

  • How can you change the value of the doomed flag?

To answer these questions, you must understand that the root object plays an important role in every MTS transaction. There are two cases in which the MTS run time inspects the doomed flag. Let's look at these cases to see what the MTS run time does after inspecting the doomed flag.

In the first case, the MTS run time inspects the doomed flag whenever the root object returns control to its caller. Remember that the flow of execution always passes through the context wrapper first. When control passes through the context wrapper, the MTS run time checks the doomed flag. If the doomed flag is False, the MTS run time doesn't do anything. However, if the MTS run time finds that the doomed flag has been set to True by one of the objects inside the transaction, it aborts the transaction and deactivates all its objects except for the root object. As you will see, this situation is undesirable. The transaction has been aborted, but the root object remains activated in a futile state. You can prevent this from happening by following the rules outlined later in this chapter.

In the second case, the MTS run time inspects the doomed flag when the root object is deactivated. When the root object is deactivated during an active transaction, the MTS run time inspects the doomed flag and releases the transaction by calling either Commit or Abort. If the doomed flag is set to False, the MTS run time calls Commit. If the flag is set to True, the MTS run time calls Abort. Note that the deactivation of the root object should always cause the end of the transaction's life cycle. When the root is deactivated, the transaction will always be released. However, as long as the root object remains activated, the transaction can remain alive and can hold all of its locks.

Now let's look at the two flags maintained inside the context wrapper for every MTS transactional object. The first one is the happy flag. It has an initial value of True when the context wrapper is created. When an object running inside an MTS transaction is deactivated, the MTS run time examines its happy flag. If the happy flag is set to False, the MTS run time sets the transaction's doomed flag to True. Once the transaction's doomed flag is set to True, it can't be reversed. This has a powerful implication. If the root object or any other object inside a transaction is deactivated in an unhappy state, the transaction is doomed to failure.

Let's look at a few scenarios to see how all this works. First, imagine that a base client creates an object from a component marked as Requires A Transaction and invokes a method call. This results in the creation of an MTS transaction and triggers the MTS run time to call down to the DTC to create the physical transaction as well. What happens if the base client simply releases its reference to the MTS object? When the MTS object is deactivated, the MTS run time inspects its happy flag. The happy flag still has its initial value of True. Therefore, the MTS run time doesn't change the doomed flag. The doomed flag remains False, and the MTS run time calls Commit on the transaction. You can run a simple example and confirm these results by examining the Transaction Statistics in the MTS Explorer.

So that's pretty easy. You create an object inside a transaction, activate it, and then release it. All it takes is three steps to successfully begin and commit a transaction with the DTC. Even though we haven't done anything interesting yet, such as writing to a database, this example shows how and when the MTS run time interacts with the DTC. We didn't write any code to explicitly begin or commit the transaction. The MTS model of declarative transactions does all of that for you.

Now let's abort a transaction. All you do is set the happy flag inside the root object's context wrapper to False. One way to do this is by calling DisableCommit in a method of the root object, as shown in the code below.

    Dim ObjCtx As ObjectContext
    Set ObjCtx = GetObjectContext()
    ObjCtx.DisableCommit

When the base client invokes a method on the root object with this code, the MTS run time changes the value of the happy flag to False. Now, when the base client releases its connection, the root object is deactivated. During the object's deactivation, the MTS run time sees that the happy flag is False and changes the value of the transaction's doomed flag to True. When the root object is deactivated, the MTS run time calls Abort on the transaction.

DisableCommit is complemented by another method named EnableCommit, which simply returns the happy flag to True. You can call each of these methods repeatedly. The happy flag is just a Boolean value, and the MTS run time examines it only when the object is deactivated. You can therefore call EnableCommit and DisableCommit as many times as you like within a given method; whichever call your object makes last before it is deactivated casts its vote on whether the transaction should succeed.
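Here's a brief sketch of that voting behavior inside a single method. The validation routine StateIsConsistent is a hypothetical helper; the point is that the last call wins.

```vb
Dim ObjCtx As ObjectContext
Set ObjCtx = GetObjectContext()

' Vote no while this object's state is in flux.
ObjCtx.DisableCommit

' ... perform partial updates against the database ...

If StateIsConsistent() Then   ' hypothetical validation routine
    ' Last call before deactivation wins: the object is happy again.
    ObjCtx.EnableCommit
End If
```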

Calling SetComplete and SetAbort

In addition to DisableCommit and EnableCommit, two other important methods let you control your transaction: SetComplete and SetAbort. Like the other two methods, these cast a vote by modifying the happy flag. SetComplete sets the happy flag to True, while SetAbort sets it to False. However, SetComplete and SetAbort are different from the other two methods because they set the done flag to True. As you'll see, the done flag has a dramatic effect.

As you'll recall from Chapter 9, the context wrapper provides a basis for interception. Interception lets the MTS run time conduct preprocessing and postprocessing on a call-by-call basis. When a method call returns from an MTS object through the context wrapper, the MTS run time inspects the done flag. If the done flag is set to True, the MTS run time deactivates the object. Therefore, when an MTS object calls SetComplete or SetAbort, the MTS run time deactivates it as soon as the method call returns. Because SetAbort also sets the happy flag to False, it has the effect of dooming the transaction. If, on the other hand, the root object calls SetComplete, the MTS run time no longer waits for the client to release its reference; it tries to commit the transaction immediately. The run time inspects the other flags and decides whether to commit or abort the transaction just as it does when you call EnableCommit and DisableCommit. SetComplete and SetAbort simply force the MTS run time to end the transaction much sooner, which means the expensive locks your transaction holds are released earlier, allowing greater scalability. You don't have to wait for the client to release the root object.

It's important to use SetComplete and SetAbort at the appropriate times. When you cast your vote by calling DisableCommit or EnableCommit, the transaction and all its locks are held until the base client releases the root object. Calling SetComplete or SetAbort is much better because the root object forces the MTS run time to release the transaction. It's faster and more reliable. When you work in an OLTP environment, the most important thing you can do to improve concurrency and throughput is to reduce the amount of time that any transaction holds its locks.

You've seen that the MTS run time deactivates the root object and releases the transaction when you call SetComplete or SetAbort. This leads to another important question: How does the deactivation of the root object affect the client? When a base client invokes a method that includes a call to SetComplete or SetAbort, the object is deactivated and destroyed. If the client had to deal with the fact that the object has died, you'd have a messy problem. Fortunately, the MTS run time can hide the object's demise from the client. Let's look at how this is possible.

The base client thinks that it's connected directly to the object, but it's really holding a reference to the context wrapper. This allows the MTS run time to perform a little trickery. When the object is deactivated, the client remains connected to the context wrapper. When the base client invokes another method call, the MTS run time creates another instance from the same component and activates it inside the context wrapper. This just-in-time activation policy makes the base client think that it's dealing with a single object across method calls. However, behind the context wrapper, the MTS run time creates and destroys objects on every call.

If a base client continues to call methods that call either SetComplete or SetAbort, the MTS run time creates and releases a new root object and a new transaction each time. All of this occurs within the window of time that starts with the method call's preprocessing and ends with the postprocessing. This is how you should write your MTS transactions. An MTS transaction and the objects inside it should be flashes in the pan. In the world of OLTP, shorter transactions result in better concurrency and throughput.

Next we'll examine how to add secondary objects to an MTS transaction. The core concepts are the same. The root object still calls SetComplete or SetAbort. This forces the MTS run time to release the transaction and all the locks that it holds before returning control to the base client. When you program in an OLTP environment, you must adopt a "get in and get out" philosophy. Acquire your locks late and release them early. The MTS programming model gives you plenty of support for this new mind-set.

Multiobject Transactions

As you know, the root object can create secondary objects inside the same transaction by calling CreateInstance on components that are marked as either Requires or Supports. Figure 10-6 shows a root object and three secondary objects inside an MTS transaction. If you create these secondary objects correctly, each will have its own context wrapper and thus its own happy flag and done flag.


Figure 10-6. The root object can propagate secondary objects into its transaction by calling CreateInstance on components that are marked as Requires or Supports.

An MTS transaction is a democratic community in which each object gets to vote on whether the transaction should succeed. A secondary object follows the same rules as the root object. When a secondary object is deactivated, the MTS run time inspects its happy flag. If the happy flag is set to False, the MTS run time sets the transaction's doomed flag to True. When this happens, the transaction is doomed to failure; there is nothing that any other object can do to set the doomed flag back to False. As you can see, any object inside a transaction can prevent the transaction from committing.

You'll often write methods in the root object to create secondary objects. Once the secondary objects have been created, the root object can call their methods to complete the work for the transaction. You need to understand how things work inside a transaction so that you can coordinate communication among all these objects.

Let's look at the case in which the root object makes several successful method calls on a secondary object. As long as the secondary object doesn't call SetComplete or SetAbort, it remains activated until it's released by the root object. The root object can make several successful calls on the secondary object and then call SetComplete to commit the transaction. When the root calls SetComplete, both objects are deactivated. The secondary object is deactivated first, followed by the root object. As long as neither object sets its happy flag to False by calling SetAbort or DisableCommit, the transaction is committed.

What happens if the secondary object wants to abort the transaction? If the secondary object calls SetAbort and returns, the MTS run time deactivates the object and sets the doomed flag to True. At this point, the root object gets control back and must decide how to deal with the situation. Since the doomed flag is set to True, the root object can't save the transaction. Once a secondary object has been deactivated and has set the doomed flag to True, the root object shouldn't call SetComplete. If it does, the base client receives an mtsErrCtxAborted error from the MTS run time. Moreover, if the root object tries to activate another object inside the transaction after the doomed flag is set to True, it experiences an mtsErrCtxAborting error.

If the root object simply returns control to its caller at this point without calling SetComplete or SetAbort, the MTS run time sees that the doomed flag has been set to True and it releases the transaction and deactivates all the secondary objects. This results in an unfortunate situation. The root object is left activated in a crippled state. Future method calls on the root object will more than likely result in mtsErrCtxAborted and mtsErrCtxAborting errors being raised by the MTS run time.

These problems should lead you to one conclusion. When a secondary object calls SetAbort, the root object should also call SetAbort and halt any attempt to complete additional work. However, there's one small hitch. The root object cannot examine the doomed flag. It can't ask the MTS run time whether a secondary object has called SetAbort. Therefore, your secondary objects must tell the root object when they call SetAbort. You can use return values or output parameters in the methods of your secondary objects to indicate whether they called SetAbort, or you can raise errors from the secondary objects back to the root object.

For example, if a secondary object decides to roll back the entire transaction in Method1, it can use the following sequence of calls:

    Dim ObjCtx As ObjectContext
    Set ObjCtx = GetObjectContext()
    ObjCtx.SetAbort

    Dim ErrorCode As Long, Description As String
    ErrorCode = myErrorEnum1   ' Something like (vbObjectError + 3)
    Description = "The requested quantity is not available"
    Err.Raise ErrorCode, , Description

If you follow this convention, a method in the root object can assume that all secondary objects raise an error after calling SetAbort. This means that an error handler in the root object should call SetAbort and raise its own error to forward the secondary object's error message back to the base client. If the root object can call methods on the secondary objects without experiencing an error, it can assume that everything is fine and call SetComplete. You can see what a method call in the root object looks like below.

    Sub RootMethod1()
        On Error GoTo MyRootMethod_Err
        Dim ObjCtx As ObjectContext
        Set ObjCtx = GetObjectContext()
        Dim Secondary1 As CSecondary
        Set Secondary1 = ObjCtx.CreateInstance("MyDll.CSecondary")
        Secondary1.Method1
        ' Commit transaction if call completes successfully.
        ObjCtx.SetComplete
        Exit Sub
    MyRootMethod_Err:
        ' Roll back transaction and get out ASAP on error.
        ObjCtx.SetAbort
        ' Forward error information back to base client.
        Err.Raise Err.Number, , Err.Description
    End Sub

Of course, this code shows one of many possible approaches. If you take a different approach, be sure you carefully coordinate the communication between the secondary objects and the root object.
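For instance, instead of raising errors, you could adopt a Boolean return-value convention, sketched below. The method and helper names are hypothetical; the essential point is that the secondary object reports its SetAbort call explicitly.

```vb
' In the secondary object: the return value reports the vote.
Public Function DebitAccount(ByVal Amount As Currency) As Boolean
    Dim ObjCtx As ObjectContext
    Set ObjCtx = GetObjectContext()
    If Amount > AvailableBalance() Then   ' hypothetical helper
        ObjCtx.SetAbort
        DebitAccount = False
        Exit Function
    End If
    ' ... perform the debit ...
    ObjCtx.SetComplete
    DebitAccount = True
End Function
```

The root object then tests the return value after each call and calls SetAbort itself as soon as any secondary object reports failure.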

At times, you might call DisableCommit in a secondary object and return control to the root. If you do this, the secondary object indicates that it can't commit its work in its present state. However, DisableCommit doesn't deactivate the secondary object upon return to the root object. This is different from a call to SetAbort, in which the MTS run time deactivates the secondary object before the root object gets control back. A call to SetAbort dooms the transaction to failure. When a secondary object calls DisableCommit, it says, "I am currently unhappy, but perhaps the root object can invoke another method and make me change my mind."

As you might imagine, using DisableCommit requires you to design a more elaborate communication protocol among the objects inside a transaction. When a secondary object calls DisableCommit, the root object can try to persuade the object to change its mind by executing additional methods. However, the root object must ultimately call SetComplete or SetAbort. Therefore, the root object must find a way to make the secondary object happy or determine that the transaction can't be saved.

You should avoid calling DisableCommit from the root object. You don't want to pass control back to the base client when a transaction is pending. Never allow the base client to control when the locks are released. This will probably result in locks being left on data items for longer than necessary. It also makes the transaction vulnerable to the 60-second timeout used by MTS. This conflicts with your newfound "get in and get out" mind-set. OK, enough said.

Enlisting Connections to Resource Managers

Now that we've covered the basics of how to commit and roll back an MTS transaction, let's discuss how to write your changes. You must connect one or more of the objects that you create inside a transaction to a resource manager, as you see illustrated in Figure 10-7. Once these connections are established, you must enlist them with the DTC. This involves setting up a line of communication between the DTC for your MTS application and the DTC of each resource manager. As you saw earlier in this chapter, these lines of communication are used to execute the two-phase commit protocol.

Enlisting a connection in an MTS application is actually quite simple. You just have to follow two rules. First, make sure you establish the connection from an object running inside a transaction. Second, make sure that the RM and the RM proxy you're using support OLE Transactions. If you follow these rules, the MTS run time will automatically interact with the RM proxy and the DTC to enlist the connection.


Figure 10-7. When you create a connection to a resource manager such as SQL Server or Oracle from an object inside an MTS transaction, the MTS run time and the RM proxy automatically enlist the resource manager with the DTC. When the root object is deactivated, the MTS run time calls Abort or Commit to begin executing the two-phase commit protocol.

When you make a typical connection using ADO, RDO, or ODBC, your write and read operations are automatically part of a distributed transaction. It couldn't be easier. Let's look at an example. If you're working with SQL Server 6.5 and connecting through ADO with an ODBC driver, you can connect with this code:

    Dim conn As ADODB.Connection, sConnectString As String
    Set conn = New ADODB.Connection
    sConnectString = "DSN=MyDatabase;UID=BusAccount1;PWD=rosebud"
    conn.Open sConnectString

As you can see, this is the same code that you'd write to create any ADO connection. The MTS run time has prepared the ODBC Driver Manager to auto-enlist any ODBC connection made from inside an MTS transaction. The RM proxy (the ODBC driver in this case) interacts with the DBMS to establish a connection between the DTCs running on two different nodes, as shown in Figure 10-7. You simply make your connections and begin accessing data. All of your changes are charged against a single distributed transaction that's being controlled by the MTS run time.
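From this point, any statement you execute through the enlisted connection is charged against the distributed transaction. For example (the table and column names are invented for illustration):

```vb
' This update is part of the distributed transaction; it becomes
' permanent or is rolled back when the MTS run time ends the transaction.
conn.Execute "UPDATE Inventory SET Quantity = Quantity - 1 " & _
             "WHERE ProductID = 7"
conn.Close          ' return the enlisted connection to the pool
Set conn = Nothing
```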

ODBC Connection Pooling

The ODBC Driver Manager is an MTS resource dispenser. The MTS programming model uses the abstraction of resource dispensers to model plug-in modules that work with the MTS run time to efficiently share resources among MTS objects. The MTS SDK has a specification that details the requirements of writing resource dispensers for MTS applications. The ODBC Driver Manager has been written to pool both standard connections and enlisted connections and to share them across MTS objects.

Here's how standard connection pooling works. When you initially establish an ODBC connection in one of your MTS objects, the ODBC Driver Manager goes through a handshake with the DBMS that involves passing a user ID and password. When the application logs on, the DBMS verifies the security profile of the user and returns a connection handle. This connection handle allows the application to submit SQL statements to the database. Establishing an ODBC connection consumes processing cycles on both computers and requires at least one round-trip between the connecting application and the DBMS.

When your object releases a connection, the ODBC driver doesn't drop the connection to the DBMS. Instead, it places the connection handle in a pool for future use by other objects. When another object attempts to connect to the same DBMS, the ODBC Driver Manager looks through the pool to see whether a connection is available. It compares the connect string of the requested connection with the connect string of the connections in the pool. If it finds a match, it allocates a connection without going through the logon process again. This speeds things up and significantly improves the scalability of any middle-tier application that is handling concurrent requests from multiple clients. You should note that this standard connection pooling isn't unique to MTS. The ODBC Driver Manager offers this capability to other applications, such as the Web server process for Internet Information Server (IIS).

The ODBC Driver Manager manages the connection pool dynamically. It creates additional connections as the traffic in your application picks up. It also dynamically releases connections to the DBMS when things slow down. It's great that all this happens behind the scenes. Just remember that objects can share a pooled connection only if they all agree on the same server, user name, and password. Be sure that every component in an MTS application uses the same connect string when it opens a connection to a particular DBMS.

The ODBC resource dispenser provides one other feature in the MTS run-time environment that is not available to other applications that use ODBC connection pooling. The ODBC resource dispenser is capable of pooling enlisted connections. As you have already seen, when a connection is established, it requires a round-trip to and from the DBMS. Likewise, when a connection is enlisted in a distributed transaction, it requires another round-trip to and from the DBMS. The pooling of enlisted connections is a further optimization to reduce round-trips. And as you remember from Chapter 6, round-trips are inherently evil. You should always be looking for ways to reduce the number of round-trips it takes to complete an operation. The pooling of enlisted connections is a great way to enhance performance.

When you close an enlisted connection from an object running inside a transaction, the ODBC resource dispenser returns it to the pool. If another object inside the same transaction requests a connection using the same connect string, the ODBC resource dispenser can use the same preenlisted connection. This means that several objects inside a single transaction can open and close ODBC connections without incurring redundant round-trips to and from the same DBMS for enlistment. The MTS run time takes care of cleaning up the enlisted connection when the transaction is released.

All this affects the way you write data access code: the MTS style is very different from the two-tier style. Two-tier code promotes conducting as many operations as possible through a single open connection because, in the absence of connection pooling, opening a connection is very expensive. In the MTS programming model, you'll be opening and closing connections much more frequently, often within the scope of a single method.

A secondary object usually opens and closes a connection inside each method that is called from the root object. The secondary object should always close the connection before returning control to the root object. Here's what can happen if you don't follow this rule: When one secondary object returns to the root object without closing its connection, the connection will not be returned to the pool. If the root object calls upon another secondary object to perform other operations in the same database, another physical connection must be used. The second connection will require at least one unnecessary round-trip to and from the DBMS for enlistment and possibly another to establish the connection.
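To make this rule concrete, here is a minimal sketch of a method in a secondary object. The connect string, table name, and method name are hypothetical, not taken from the Market sample:

```vb
' Sketch of a method in a secondary (enlisted) object.
Public Sub DebitInventory(ByVal ProductID As Long, ByVal Quantity As Long)
    Dim conn As ADODB.Connection
    Set conn = New ADODB.Connection
    ' Every component should use the same connect string so that
    ' pooled (and preenlisted) connections can be reused.
    conn.Open "DSN=Market;UID=MyUser;PWD=MyPassword"
    conn.Execute "UPDATE Products" & _
                 " SET Quantity = Quantity - " & Quantity & _
                 " WHERE ProductID = " & ProductID
    ' Close the connection before returning control to the root
    ' object so that it goes back to the pool for reuse.
    conn.Close
    Set conn = Nothing
End Sub
```

Because the connection is closed before the method returns, another secondary object in the same transaction can pick up the same preenlisted connection from the pool.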

Partitioning Business Logic and Data Access Code

Figure 10-7, shown earlier, provides an example of two secondary objects that have each enlisted a connection in an MTS transaction. One connection is to a SQL Server database and the other to an Oracle database. Such a design lets you package the data access code for one connection in one Visual Basic class module and the data access code for the other connection in a second class module. This is a common way to write an MTS application, but it is only one of many possibilities. The approach you take is largely a matter of programming style.

You can also establish a connection to an RM directly from the root object. The root object can enlist a connection and make changes just as secondary objects can. This might result in a simpler design. However, if you do this, you'll create a single class that contains both your business logic and your data access code. In larger designs, it is common to run your business logic in the root object and run your data access code in secondary objects. The root object models the high-level composer of a transaction's workflow. The root object composes a transaction by delegating data access to secondary objects.

If you use this style of programming, each secondary object represents a connection to a DBMS or possibly a more granular entity inside the DBMS, such as a table. Each secondary object directly connects to and enlists with a resource manager. With this approach, secondary objects are also known as enlisted objects.

In an MTS application, each enlisted object should acquire and release its own connection. Don't try to share an open connection by passing an ADO connection reference between objects. Acquiring and releasing connections doesn't impose a performance penalty because the ODBC resource dispenser will reuse a preenlisted connection if one is available. Also keep in mind that no problems will arise because of concurrency. As you remember, each MTS transaction lives inside a single activity, which means that there's a single flow of control. No two objects inside the same transaction will ever be running a method call at the same time. As long as all secondary objects close their connections before returning control, the transaction as a whole will never require more than a single connection to any data source.

While MTS can help you avoid round-trips to and from the DBMS when you're establishing and enlisting connections, it doesn't automatically optimize the data access code that you write. When you open a connection and begin to perform read and write operations, your code will result in round-trips to and from the DBMS. As you design your MTS applications, you should be aware of how many round-trips you're making to the DBMS and back. Reducing round-trips in data access code is one of the best ways to increase the scalability of a distributed application.

Let's look at an example. Suppose you're writing a method in a secondary object to remove a requested quantity of a certain product from inventory. It's natural for a secondary object to accomplish this in two discrete database operations. First, the secondary object can run a SELECT statement to open an ADO recordset on the Products table. This allows the secondary object to determine whether the quantity currently in inventory is enough to satisfy the order request. If the requested quantity is greater than the inventory quantity, the secondary object should call SetAbort and raise an error back to the root object. Then, if the requested quantity is less than or equal to the inventory quantity, the secondary object can complete its mission by running an UPDATE statement against the Products table. The upside to this approach is that it allows you to maintain your logic inside your MTS component by using Visual Basic. The downside to this approach is that it takes two round-trips to and from the DBMS to complete an operation that could be completed in one.
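A hedged sketch of this two-round-trip approach might look like the following. The connect string, table layout, and error number are illustrative assumptions:

```vb
' Two discrete database operations: a SELECT to check inventory,
' then an UPDATE to remove the requested quantity.
Public Sub RemoveFromInventory(ByVal Product As String, ByVal Requested As Long)
    Dim conn As ADODB.Connection, rs As ADODB.Recordset
    Set conn = New ADODB.Connection
    conn.Open "DSN=Market;UID=MyUser;PWD=MyPassword"
    ' Round-trip 1: check the quantity on hand.
    Set rs = conn.Execute("SELECT Quantity FROM Products" & _
                          " WHERE Product = '" & Product & "'")
    If rs.Fields("Quantity") < Requested Then
        GetObjectContext.SetAbort
        conn.Close
        Err.Raise vbObjectError + 512, , "Insufficient inventory"
    End If
    ' Round-trip 2: remove the quantity from inventory.
    conn.Execute "UPDATE Products" & _
                 " SET Quantity = Quantity - " & Requested & _
                 " WHERE Product = '" & Product & "'"
    conn.Close
End Sub
```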

So how can you conduct multiple operations in a single round-trip to and from the DBMS? Unfortunately, the answer is usually writing more logic in SQL and less logic in Visual Basic. You can either submit SQL batches or execute stored procedures to conduct multiple operations in a single round-trip. This means that stored procedures are as important to an MTS application as they are to a two-tier application. Some people claim that stored procedures don't have a place in three-tier development. They don't understand the importance of reducing round-trips between the middle-tier application and the DBMS.
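For example, a hypothetical stored procedure named RemoveInventory could perform both the quantity check and the UPDATE on the server side. The Visual Basic side of such a design might look like this sketch (the procedure name and parameters are assumptions, not part of the Market sample):

```vb
' Calling a hypothetical RemoveInventory stored procedure that
' performs the quantity check and the UPDATE in one round-trip.
Dim cmd As ADODB.Command
Set cmd = New ADODB.Command
Set cmd.ActiveConnection = conn   ' Assume conn is an open connection.
cmd.CommandType = adCmdStoredProc
cmd.CommandText = "RemoveInventory"
cmd.Parameters.Append cmd.CreateParameter("@Product", adVarChar, _
                                          adParamInput, 40, "Dog")
cmd.Parameters.Append cmd.CreateParameter("@Requested", adInteger, _
                                          adParamInput, , 5)
cmd.Execute
' Assume the procedure uses RAISERROR when the requested quantity
' exceeds the inventory quantity; trap the error and call SetAbort.
```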

Let's look at one more example. Suppose you create a separate MTS component for each table in the database. If you are dealing with the Products table, the Customers table, and the Orders table, you can create a separate Visual Basic class for each table. This provides great modularity and a pure object-oriented design because it creates a natural one-to-one mapping between tables and classes. However, because each component is conducting its operations independently, your transaction will require at least one round-trip to and from the DBMS per component.

You can further reduce the number of your round-trips by modeling a single Visual Basic class for each database connection. This will give you the ability to conduct all your operations in as few round-trips as possible. If you take this to the extreme, you'll have one stored procedure for each connection. The upside to this approach is that it will give you the best performance and scalability. The downside is that you'll write and maintain more of your logic in SQL and less in Visual Basic. Additionally, you'll have to deal with raising errors in SQL and propagating them back to your Visual Basic code. This is more complicated than raising errors between two Visual Basic objects.

As you can see, you must make some difficult decisions when you design your data access code. From the perspective of application design and code maintenance, it's desirable to create an MTS component for each table in the database and write as much logic as possible in Visual Basic. From a scalability point of view, it's better to conduct multiple operations inside stored procedures. Unfortunately, there's no single answer that will lead to object-oriented design purity and scalability. Then again, scalability is usually an important requirement of an OLTP application, while a pure object-oriented design is not.

Avoiding Deadlocks

Creating an OLTP application typically requires balancing two antagonistic forces. On one hand, you want the highest possible levels of concurrency. On the other hand, you must have isolation to guarantee the consistency of your data. This means that the system must place locks on data items to block certain transactions while other transactions perform their work. Unfortunately, in an OLTP application concurrency and consistency are always in conflict.

As you've seen, an MTS application runs its transactions with an isolation level of Serializable. This means that the integrity of the system's data is the primary concern. If you write your business logic and data access code correctly, the two-phase commit protocol will make sure that your data stays in a consistent form. Most of the time when a transaction is blocked, it simply waits its turn and then gets to do its work. However, sometimes the locks acquired by a transaction don't have the desired effect.

In some situations, the locks held by two or more transactions conflict in such a way that the conflict can't be resolved by waiting. This is known as a deadlock. For example, suppose transaction A has acquired a write lock on data item X and is attempting to acquire a write lock on data item Y to complete its work. If transaction B has acquired a write lock on data item Y and is waiting on the write lock held on data item X, the two transactions have hit a stalemate. Without intervention, both transactions will wait indefinitely for the other to release its locks.

A single DBMS such as SQL Server can detect a deadlock situation. SQL Server resolves a deadlock by terminating one of the transactions and sending an error message to the victim. When the locks of the victim are released, the other transaction can complete its work. In a distributed transaction, some deadlock situations can't be detected by any single resource manager. This is why MTS transactions have a timeout of 60 seconds. If a transaction cannot be completed within a minute, the DTC assumes that a deadlock has occurred. In either case, your code must be ready to deal with deadlock errors when they arise.

To prevent deadlocks in your MTS applications, you can employ a few techniques. A cyclic deadlock can occur when two programmers have written separate transactions for the same system. For example, one programmer might write a transaction that modifies the Products table and then the Customers table. If another programmer writes a second transaction that accesses the same tables in reverse order, the chance of a deadlock increases. You can reduce the chances of cyclic deadlocks by maintaining a consistent order of data access across all the transactions that run in a single system.

Deadlocks resulting from lock conversion are also common and require your attention. Let's say that you're writing a transaction that removes a specific quantity of a product from inventory. Your transaction starts by running a SELECT statement against the Products table to find out whether the requested quantity is in stock. If the inventory quantity is equal to or greater than the requested quantity, you then run an UPDATE statement to remove the quantity from inventory. We'll assume that the SELECT statement and the UPDATE statement are both run against the same record in the Products table.

In a high-volume OLTP application, there's a good chance that two separate transactions will both run the SELECT statement and acquire read locks before either can acquire the write lock required for the UPDATE statement. The problem is that each transaction first acquires a read lock on the data item and then tries to convert that lock to a write lock. If two transactions hold the read lock at the same time, neither can convert it: each waits for the other to release its read lock. The result is a deadlock.

SQL Server supports another type of lock called an update lock, which solves this problem. You should use an update lock whenever you need to escalate a read lock to a write lock during a transaction. An update lock conflicts with write locks and with other update locks. A transaction will block while trying to acquire an update lock if another transaction has already acquired a write lock or an update lock. However, unlike a write lock, an update lock doesn't conflict with read locks. If a transaction holds a read lock on a data item, another transaction can acquire an update lock. Likewise, if one transaction holds an update lock, other transactions can acquire read locks.

An update lock can solve the lock conversion problem without having as dramatic an impact on concurrency as a write lock. You can explicitly ask for an update lock by using the (UPDLOCK) hint in a Transact-SQL SELECT statement. Here's an example of using an update hint in a SELECT statement with ADO:

 Dim sSQL As String
 sSQL = "SELECT Quantity" & _
        " FROM Products (UPDLOCK)" & _
        " WHERE Product = 'Dog'"
 Dim rs As ADODB.Recordset
 Set rs = New ADODB.Recordset
 rs.CursorLocation = adUseClient
 rs.CursorType = adOpenForwardOnly
 rs.LockType = adLockReadOnly
 ' Assume conn is an open connection.
 rs.Open sSQL, conn
 Dim Quantity As Long
 Quantity = rs.Fields("Quantity")
 ' The update lock is now in place.
 ' Execute UPDATE statement if appropriate.

Be sure to use parentheses when you use a hint with SQL Server. If you forget them, SQL Server will interpret the hint as an alias for the table name.

The ODBC driver for SQL Server 6.5 also uses update locks if you use pessimistic locking with server-side cursors. The following code doesn't use the (UPDLOCK) hint, but it has the same effect:

 Dim sSQL As String
 sSQL = "SELECT Quantity" & _
        " FROM Products " & _
        " WHERE Product = 'Dog'"
 Dim rs As ADODB.Recordset
 Set rs = New ADODB.Recordset
 rs.CursorLocation = adUseServer
 rs.CursorType = adOpenDynamic
 rs.LockType = adLockPessimistic
 ' Assume conn is an open connection.
 rs.Open sSQL, conn
 Dim Quantity As Long
 Quantity = rs.Fields("Quantity")
 ' The update lock is now in place.
 ' Execute UPDATE statement if appropriate.

Both of these examples show how to eliminate deadlocks caused by lock conversion. While you can write your applications to eliminate some types of deadlocks, other types are unavoidable in a large OLTP system. Some deadlocks will be caught by the DBMS; others will be caught when an MTS transaction reaches its timeout value.

You must plan to trap deadlock errors when they occur and handle them as gracefully as possible. Sometimes you can handle a deadlock by resubmitting the transaction. This can hide the deadlock from the user of the application. At other times, you might be forced to return an error to the caller along with instructions to "try again later."
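One hedged way to sketch such a retry loop in a base client follows. The Broker object, its SubmitOrder method, and the retry count are all hypothetical assumptions for illustration:

```vb
' Resubmit a transaction a few times before giving up.
Const MAX_RETRIES As Long = 3
Dim Attempt As Long, Done As Boolean
On Error Resume Next
For Attempt = 1 To MAX_RETRIES
    Err.Clear
    ' Assume Broker is a reference to an installed MTS component.
    Broker.SubmitOrder CustomerID, ProductID, Quantity
    If Err.Number = 0 Then
        Done = True
        Exit For
    End If
    ' The transaction was aborted (perhaaps as a deadlock victim
    ' or by the DTC timeout); loop around and resubmit it.
Next Attempt
On Error GoTo 0
If Not Done Then
    MsgBox "The system is busy. Please try again later."
End If
```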

Much Ado About Statelessness

As I've said, to reach higher levels of system concurrency and performance, your locks must be released as quickly as possible. With MTS, transactions stay alive, locks and all, until you release them by calling SetComplete or SetAbort in the root object. When you call one of these methods, MTS also destroys all the objects associated with the transaction as part of the cleanup process. Therefore, the root object and any enlisted objects live for only the duration of a single method call. This programming model is much different from those of classic object-oriented programming (OOP) and COM and requires a new way of thinking.

The OOP and COM programming paradigms fail to address scenarios in which state is discarded at the completion of each method call. Object-oriented clients assume that they can obtain a persistent reference to a long-lived object. If a client modifies state within an object, object-oriented programmers assume that the object will hold those changes across multiple method calls. This isn't the case with transactional objects in the MTS environment. Each object must die as part of the transaction's cleanup, and its state must go along with it.

This transparent destruction of transactional objects is known as stateless programming. In the short history of MTS, there has been a good deal of confusion about why statelessness is an essential aspect of the programming model. Quite a few books and articles have suggested that stateless programming is about reclaiming memory on the computer running the MTS application. They argue that destroying objects and reclaiming memory results in higher levels of scalability because of more efficient resource usage. This argument is both confusing and inaccurate.

MTS supports stateless programming to ensure the semantics of a transaction. This has nothing to do with reclaiming memory on the computer running your MTS application. The idea is that an object in an MTS application can see a data item in a consistent state only while the RM is holding a lock. If an MTS object holds a copy of this data item after the lock has been released, another transaction can modify the original data item inside the RM. The original data item and the copy can thus get out of sync and violate the ACID rules of a transaction. The point of stateless programming is that any copy of a data item must be thrown away when the RM releases its locks. This is why MTS requires that all transactional objects be destroyed when a transaction is released.

If you have an object that isn't involved in a transaction and doesn't acquire locks on an RM, having the object retain state across method calls is no problem. If you set a component's transaction support property to Does Not Support Transactions and don't call SetComplete or SetAbort, you can create stateful MTS objects.

Consider the disadvantages of statelessness: If you can't maintain state in an object, you must pass initialization parameters in every method call. For example, if you use a customer object, you must pass the primary key or some other logical ID for the customer in each method call. This isn't only tedious when it comes to defining methods but can actually decrease a system's scalability. Method calls require more parameters, which results in more network traffic. Every call requires additional processing cycles from the server for the creation, initialization, and activation of a physical object. If just-in-time activation requires complex calculations or database access, statelessness can reduce scalability.

As you can see, you must carefully weigh the pros and cons when you decide whether you really need a stateless object. Objects that are stateful definitely have a place in MTS programming as long as they aren't part of a transaction. Stateful objects can retain their logical identity and save processing cycles that are required to create and initialize new instances. You can also use stateful objects to accomplish many programming tasks that are impossible with stateless objects.

Writing Transactions for MTS

This book's companion CD contains a sample MTS project named the Market application, which demonstrates many of the principles presented in this chapter. The application includes an ActiveX DLL with a few MTS components that you can install with the MTS Explorer. It also includes a form-based Visual Basic application that runs as a base client and the required setup scripts to create the Market database in SQL Server. This application should help reinforce the concepts in this chapter and give you ideas about creating your own OLTP applications.

You should try to gain a solid understanding of the database APIs and resource managers that you will use to create your applications. Visual Studio includes many productivity features for writing data access code with ADO and for creating SQL Server databases. For instance, you can create and debug stored procedures inside the Visual Basic IDE. To learn what's available with Visual Basic 6, pick up a copy of the sixth edition of The Hitchhiker's Guide to Visual Basic and SQL Server, by Bill Vaughn (Microsoft Press). This book also has excellent coverage of both ADO and SQL Server. I highly recommend it as a desktop reference.

You should also take the time to learn how to optimize your applications for each resource manager you will work with. You'll find many ways to tune a DBMS such as SQL Server and Oracle to gain significantly higher levels of concurrency and throughput. In most cases, tuning a DBMS requires that you understand how the lock manager works and how to tune your data access code and stored procedures to find the best balance between isolation and concurrency.



Programming Distributed Applications with COM and Microsoft Visual Basic 6.0
Author: Ted Pattison
Microsoft Press, 1998. ISBN: 1-57231-961-5
