Transactional Programming Model


Regardless of the transacted resource you are working with, your application can utilize a common programming model. This model consists of types found in the System.Transactions namespace. Those familiar with existing Microsoft transaction technologies, such as COM+ and the Windows Distributed Transaction Coordinator (DTC), should feel at home. For those who aren't familiar with these technologies, I will explain the terminology and get you up to speed in this chapter.

Transactional Scopes

The first question that probably comes to mind when you think of using transactions is how to declare the scope of a transaction over a specific block of code. You'll probably also wonder how to enlist specific resources in the transaction. In this section, we discuss the explicit transactions programming model. It's quite simple. This example shows a simple transactional block:

 using (TransactionScope tx = new TransactionScope())
 {
     // Work with transacted resources...
     tx.Complete();
 }

With the System.Transactions programming model, manual enlistment is seldom necessary. Instead, transacted resources that participate with TMs will detect an ambient transaction (meaning, the current active transaction) and enlist automatically through the use of their own RM. An alternate programming model, called declarative transactions, facilitates interoperability with Enterprise Services. A discussion of this feature can be found below.

Once you've declared a transaction scope in your code, you, of course, need to know how commits and rollbacks are triggered. Before discussing mechanics, there are some basic concepts to understand. A transaction may contain multiple nested scopes. Each transaction has an abort bit, and each scope has two important state bits: consistent and done. These names are borrowed from COM+.

The abort bit may be set to indicate that the transaction may not commit (i.e., it must be rolled back). The consistent bit indicates that the effects of a scope are safe to be committed by the TM, and done indicates that the scope has completed its work. If a scope ends while its consistent bit is false, the abort bit gets automatically set to true and the entire transaction must be rolled back. This general process is depicted in Figure 15-3.

Figure 15-3: A simple transaction with two inner scopes.

In summary, if just one scope fails to set its consistent bit, the abort bit is set for the entire transaction, and the effects of all scopes inside of it are rolled back. Because of the poisoning effect of setting the abort bit, it is often referred to as the doomed bit. With that information in mind, the following section will discuss how to go about constructing scopes and manipulating these bits.

Explicit Transaction Scopes

An instance of the TransactionScope class is used to mark the duration of a transaction. Its public interface is extremely simple, offering just a set of constructors, a Dispose, and a Complete method. You saw a brief snippet of code above showing how to use these via the default constructor, the C# using statement (to automatically call Dispose), and an explicit call to Complete.

When a new transaction scope is constructed, any enlisted resource will participate with the enclosing transaction until the end of the scope. Constructing a new scope installs an ambient transaction in Thread Local Storage (TLS), which can be accessed programmatically through the Transaction.Current property. We'll discuss the various uses of the Transaction object throughout this text. For now, you can imagine that whenever a new scope is constructed, an associated transaction is also constructed; this is not strictly true of flatly nested scopes (the most common model), which share their parent's ambient transaction rather than creating a new one.
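As a minimal sketch of the ambient transaction in action (assuming using directives for System and System.Transactions; the output noted in the comments is illustrative):

 using (TransactionScope tx = new TransactionScope())
 {
     // The scope has installed an ambient transaction in TLS; any code
     // running on this thread can retrieve it:
     Transaction ambient = Transaction.Current;
     Console.WriteLine(ambient.TransactionInformation.Status);          // Active
     Console.WriteLine(ambient.TransactionInformation.LocalIdentifier);
     tx.Complete();
 }
 // Outside of the scope, Transaction.Current reverts to its previous
 // value (null if there was no enclosing transaction).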

Calling Complete on the TransactionScope sets its consistent bit to true, indicating that the transaction has successfully completed its last operation and is safe to commit. When Dispose gets called, it inspects consistent; if it is false, the transaction's abort bit is set. In simple cases, this is precisely when the effects of the commit or rollback are processed by the TM and its enlisted RMs. In addition to setting the various bits, it instructs the RMs to perform any necessary actions for commitment or rollback. In nested scenarios, however, a child scope does not actually perform the commit or rollback; rather, the root scope (the first scope created inside a transaction) is responsible for that.

Scope Construction

When you instantiate a new TransactionScope, there are a few constructor overloads to consider. The simplest, .ctor() shown above, takes no arguments: If there is no ambient transaction, it creates one and a new top-level scope. If there is an ambient transaction already active, it simply erects a new nested scope. There is also an overload that takes a Transaction object so that you can manually nest within another active transaction that the code isn't lexically contained within. The way that a transaction flows across a scope boundary can be controlled with the TransactionScopeOption parameter, permitting you to generate a new transaction or suppress the existing one, for example. We talk about nesting and flowing in detail below.
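For example, here is a hypothetical sketch of the overload that accepts a Transaction ('someTx' stands in for a transaction obtained elsewhere, in the style of the other placeholder snippets in this chapter):

 Transaction someTx = /* a transaction captured elsewhere */ null;
 using (TransactionScope tx = new TransactionScope(someTx))
 {
     // This scope nests within 'someTx', even though this code isn't
     // lexically contained within the scope that created it.
     tx.Complete();
 }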

There are three other bits of information you can supply to the constructor: (1) a TimeSpan for transaction timeout and deadlock detection purposes; (2) a TransactionOptions object that provides both a way to specify timeout, and also isolation level; and (3) an EnterpriseServicesInteropOption for Enterprise Services interoperability. We discuss the first two in the next sections and the Enterprise Services integration in the "Declarative Transactions" section below.

Commit and Rollback

Using our previous example of a block, consider what happens if an exception is thrown inside the transactional block before Complete is called. Control transfers to the Dispose method without the consistent bit having been set. The result is an aborted transaction and the associated rollback activities.

Calling Complete more than once in a single transaction is a program error, the result of which is an InvalidOperationException. Completing a transaction should only be done after the last transactional operation, so clearly calling it more than once is a mistake. Furthermore, attempting to access the TransactionScope after the Dispose method has been called (e.g., by trying to log a new operation against a resource protected by a participating RM) will result in an ObjectDisposedException.
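To make the failure mode concrete, a deliberately erroneous sketch:

 using (TransactionScope tx = new TransactionScope())
 {
     // Work with transacted resources...
     tx.Complete();
     tx.Complete(); // Program error: throws InvalidOperationException.
 }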

We've already seen how a rollback occurs automatically if consistent is false when Dispose is invoked. You can also manually request that the abort bit be set for the transaction by a call to the Rollback method on the Transaction object. This not only sets the bit, but also generates a TransactionException. The Rollback override which takes an Exception object enables you to embed an inner exception in the TransactionException that gets generated:

 using (TransactionScope tx = new TransactionScope())
 {
     // Work with transacted resources...
     // ...
     Transaction.Current.Rollback(new Exception("Something bad happened"));
     // ...
     tx.Complete();
 }

This has the benefit that somebody can't catch the exception, accidentally swallow it, and then commit the transaction anyhow. Because the abort bit is set, the entire transaction and its scopes are doomed.

Deadlock Prevention

For all transactions, there is always the possibility of a deadlock. This is the case where multiple transactions attempt to use the same resources in opposite orders, such that each ends up waiting for the other. The result is a so-called deadly embrace; unless that embrace is broken, no forward progress will be made and an entire system of applications could come to a grinding halt. (The general idea of a deadlock was discussed in the context of multi-threading in Chapter 10.)

One way of responding to a deadlock is to use a timeout to govern the total execution time of a transaction. If a transactional operation occurs after the transaction has been running for longer than its timeout, the transaction will be aborted.

The default timeout of 60 seconds can be overridden through the TransactionManager.DefaultTimeout property or using configuration. A timeout can be specified on a per-transaction basis using the TransactionScope constructor overloads that take a TimeSpan:

 using (TransactionScope tx =
     new TransactionScope(
         TransactionScopeOption.RequiresNew,
         new TimeSpan(0, 0, 5)))
 {
     // Transactional operations... (must complete in under 5s)
 }

Alternatively, changing the global default timeout for those transactions that don't manually override it can be done through an addition to your application's configuration file:

 <system.transactions>
     <defaultSettings
         distributedTransactionManagerName="name"
         timeout="00:00:05" />
     <machineSettings
         maxTimeout="02:00:00" />
 </system.transactions>

Modifying the timeout on a per-transaction basis can be useful to avoid cancellation in long-running transactions and/or to shorten the timeout in cases where short transactions are under high contention and the likelihood of deadlock is high. Changing the timeout globally can be useful for testing purposes. For example, if you set the timeout to 1 millisecond, you can more easily test your deadlock- and rollback-handling code. You should obviously test carefully when you begin changing such settings.

Note that setting the timeout to 0 has the effect of an infinite timeout, usually not a good idea unless you're trying to generate deadlocks (or if you've convinced yourself that a deadlock isn't possible, for example by using disciplined resource acquisition orderings).
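For instance, a minimal sketch using the same constructor overload shown earlier:

 using (TransactionScope tx =
     new TransactionScope(
         TransactionScopeOption.Required,
         TimeSpan.Zero)) // Zero requests an infinite timeout; use with caution.
 {
     // Transactional operations you've convinced yourself can't deadlock...
     tx.Complete();
 }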

Isolation Level

An isolation level can be specified when constructing a new TransactionScope, through the use of a TransactionOptions object. The TransactionOptions type has an IsolationLevel property, which takes a value from the IsolationLevel enumeration. A transaction's isolation level describes the way in which reads and writes are visible (or not) to other transactions accessing the same resources concurrently, and it must match the isolation levels of all parent nested scopes. Note that isolation is a very complex topic: choosing the wrong isolation level can quickly lead to nasty and difficult-to-debug correctness, scalability, and deadlock bugs, so any change from the default should be made only after lots of research and testing.
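As a brief illustration (the ReadCommitted level and 30-second timeout here are arbitrary choices for the sketch, not recommendations), specifying options looks like this:

 TransactionOptions opts = new TransactionOptions();
 opts.IsolationLevel = IsolationLevel.ReadCommitted;
 opts.Timeout = new TimeSpan(0, 0, 30);
 using (TransactionScope tx =
     new TransactionScope(TransactionScopeOption.Required, opts))
 {
     // All transactional operations in this scope run at ReadCommitted
     // isolation and must finish within 30 seconds.
     tx.Complete();
 }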

The default isolation level is Serializable, the highest level of isolation possible. It means that transactions accessing the same resources must do so in an entirely serialized fashion, one after the other. It's as if each takes a big lock for the duration of the transaction when each resource is accessed, both for read and write access. The lowest level of isolation, ReadUncommitted, permits transactions to execute in a highly concurrent fashion but at the risk of noticing invalid state that eventually gets rolled back. The former is pessimistic, while the latter is optimistic.

There are some situations that are likely to occur with ReadUncommitted that absolutely cannot happen with Serializable: Read/write conflicts can happen if your transaction reads some data that is then changed by another transaction before your transaction commits. The other transaction might have modified or even deleted the data. Write/write conflicts can occur if two transactions are modifying the same data simultaneously. In either case, only one transaction can win, and the other must be rolled back.

As noted already, choosing the right isolation level is tricky. There are several options between the two extremes illustrated above. On one hand, you guarantee correctness at the cost of lost concurrency (pessimistic); on the other, you gain scalability at the risk of lost correctness (optimistic). This is a classic tradeoff, and only careful analysis will reveal which level is right for your scenario. Some conflicts are not cause for concern, such as reading data that is constantly in flux (e.g., a stock ticker). Many of the tradeoffs are the same ones you'd need to make when using locks in multi-threaded code.

Forgetting to Dispose

As with any set of paired operations, most people erect TransactionScopes inside C# using blocks to eliminate the chance that Dispose won't be called, for example:

 using (TransactionScope tx = new TransactionScope())
 {
     // Work with transacted resources...
 }

This body of code, of course, expands to the logical equivalent of the following C# in the emitted IL:

 {
     TransactionScope tx = new TransactionScope();
     try
     {
         // Work with transacted resources...
     }
     finally
     {
         tx.Dispose();
     }
 }

If the programmer fails to call Dispose altogether, the transaction completes when the scope becomes unreachable and its Finalize method is run by the CLR's finalizer thread. For a variety of reasons, it is a very bad practice to rely on finalization for cleanup, not the least of which is that finalization occurs at some indeterminate point in the future, possibly holding resource locks (in pessimistic cases) for longer than necessary. Worse, the transaction will likely time out before the finalizer is able to commit it, which can lead to an exception on the finalizer thread (which will crash the process).

Transactional Database Access Example (ADO.NET)

As a brief example, this code wraps some calls to a database inside a transaction. ADO.NET's SQL Server database provider automatically looks for an ambient transaction, instead of you having to call BeginTransaction and associated methods on the connection manually:

 using (TransactionScope tx = new TransactionScope())
 {
     IDbConnection cn = /*...*/;
     cn.Open();

     // ADO.NET detects the Transaction erected by the TransactionScope
     // and uses it for the following commands automatically.
     IDbCommand cmd1 = cn.CreateCommand();
     cmd1.CommandText = "INSERT ...";
     cmd1.ExecuteNonQuery();

     IDbCommand cmd2 = cn.CreateCommand();
     cmd2.CommandText = "UPDATE ...";
     cmd2.ExecuteNonQuery();

     // A call to Complete indicates that the ADO.NET Transaction is safe
     // for commit. It doesn't actually commit until Dispose is called.
     tx.Complete();
 }

Similar things were possible with version 1.x of the Framework, but of course it required a different programming model for each type of transacted resource you worked with. And it didn't automatically span transactions across multiple resource enlistments.

Nesting and Flowing

Just as COM+ transactions do, the System.Transactions infrastructure supports a variety of transaction nesting options. The manner in which this occurs can be indicated through one of the TransactionScopeOption enumeration values. There is a subtle difference between a transaction scope and a transaction itself, but nonetheless it is crucial to understand.

Let's first take a look at the three possible values for TransactionScopeOption and then see some examples that show precisely the difference between nested scopes and transactions:

  • Required: A transaction must be present for the duration of the scope. If a transaction exists at the time of the call, a new scope will be constructed inside the existing transaction. All reads and writes will participate with the containing transaction, and the commit or rollback is processed by the existing root scope. If no transaction exists, a new transaction and root scope are generated and used. This is the default value if a specific value is not supplied.

  • RequiresNew: A new transaction and root scope are always created, regardless of the existence of an ambient transaction. The new transaction is "nested" only in a lexical sense: once it is processed, its reads and writes are no longer isolated from other code inside the outer transaction. The previous transaction and scope are restored once the new scope completes. This is sometimes called an orthogonal transaction; it may abort without forcing its parent to abort.

  • Suppress: If a transaction exists when called, the scope suppresses it entirely. This has the effect of turning off the transaction for the duration of the new transaction scope. For operations that do their own compensation in the face of failure, this is an appropriate setting. But it should be used with care in your programs; it is primarily for systems-level code that must (for some reason) step explicitly outside of the protection of a transaction.

The following code demonstrates the various types of scopes. We have a method A that uses a Required transaction scope. Assuming that it is called from a body of code not already inside a transaction, this constructs a new ambient transaction, T1, and root scope. A then constructs a new nested scope, again using the default of Required. We can see through the lexical layout of the code that this will result in reusing the existing ambient transaction T1. We then make calls to B, which uses RequiresNew to create a new transaction T2 regardless of its caller, and then C, which uses Suppress to temporarily run outside of the context of any active transaction (T1 in this case, since T2 has already completed by the time C runs):

 void A()
 {
     using (TransactionScope scope1 = new TransactionScope())
     {
         // TransactionScopeOption.Required is implied.
         // Ambient tx T1 is erected and is active here.
         using (TransactionScope scope2 = new TransactionScope())
         {
             // Required is again implied, reusing the existing ambient
             // tx T1. All tx activity is logged to T1.
             // A call to Complete "votes" for T1 to commit.
             // If it isn't called, T1 is doomed and will roll back.
             // Dispose doesn't physically process the tx, since 'scope2'
             // is not T1's root tx scope.
         }

         B(); // B constructs a new tx T2 inside its tx scope.
         // Ambient tx T1 is active here.
         C(); // C suppresses T1 inside its scope.
         // Ambient tx T1 is active again.

         // If we call Complete here, we vote for T1 to commit.
         // Dispose on 'scope1' causes the tx to be processed physically,
         // since 'scope1' is the root tx scope for T1.
     }
 }

 void B()
 {
     using (TransactionScope scope2 =
         new TransactionScope(TransactionScopeOption.RequiresNew))
     {
         // Always generates a new tx.
         // No nested tx's were found, so 'scope2' is responsible for
         // sole voting and physical processing of the new tx.
     }
 }

 void C()
 {
     using (TransactionScope scope3 =
         new TransactionScope(TransactionScopeOption.Suppress))
     {
         // No ambient tx is active here.
     }
 }

Figure 15-4 might help to illustrate some of the constituent parts and how flow occurs.

Figure 15-4: Transaction flow and scope nesting.

You should note a few things based on the figure: First, each transaction has a root scope that is responsible for working with the TM to physically commit the effects of the entire transaction. It will only do so provided that all child scopes have successfully completed by calling Complete, sometimes referred to as voting. Second, each transaction has an abort bit, and each scope has its own consistent and done bits. If a single scope fails to become consistent, the abort bit gets set and the entire transaction will abort. In other words, each scope must become consistent and complete itself independently for the root scope to physically commit the effects; a single scope that fails to vote results in a rolled back transaction.

There is a common misconception that nested transactions participate in and enjoy the isolation protection of their immediate parent transaction, much like the way nested scopes work. This is incorrect. A root scope inside a new transaction is responsible for physically committing or rolling back its transacted operations and those of all of its nested scopes. But outer transactions do not inspect child transactions and their scopes for voting or abort status before processing. Using the default of Required often leads to the expected semantics, while RequiresNew is reserved for special circumstances where the transaction must execute in an orthogonal fashion.
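To make this orthogonality concrete, here is a minimal sketch (assuming the scope bodies throw no exceptions): dooming an inner RequiresNew transaction does not doom its lexical parent.

 using (TransactionScope outer = new TransactionScope())
 {
     using (TransactionScope inner =
         new TransactionScope(TransactionScopeOption.RequiresNew))
     {
         // This scope roots a new, orthogonal transaction. Deliberately
         // not calling Complete dooms only the inner transaction...
     }

     // ...so the outer transaction may still commit successfully.
     outer.Complete();
 }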

Dependent Transactions

The Transaction class enables you to generate a dependent transaction with its DependentClone method. This permits you to form more complex dependencies between the commit protocols of multiple transactions and also to coordinate work with the transaction itself. For example, it can be used to ensure that a set of other work completes before the transaction itself completes.

DependentClone takes as input an enumeration value of type DependentCloneOption, whose only two values are BlockCommitUntilComplete and RollbackIfNotComplete. This tells the original transaction how to respond to an incomplete dependent transaction at commit time. BlockCommitUntilComplete instructs the original transaction to wait for the dependent transaction to be completed before it proceeds, while RollbackIfNotComplete indicates that the parent transaction should roll back if it reaches the end of its execution and the dependent transaction is not yet done.

The resulting DependentTransaction object can then be passed to the constructor of TransactionScope to form a nesting inside of it. A brief example of where this might be useful is the following scenario where a single transaction is shared among more than one thread:

 void ParentWorker()
 {
     using (TransactionScope tx = new TransactionScope())
     {
         DependentTransaction dtx = Transaction.Current.DependentClone(
             DependentCloneOption.BlockCommitUntilComplete);
         System.Threading.ThreadPool.QueueUserWorkItem(ThreadPoolWorker, dtx);

         // Some transactional work...

         // If we reach the end of this block before the ThreadPool worker
         // completes 'dtx', the commit (during Dispose) will block.
         tx.Complete();
     }
 }

 void ThreadPoolWorker(object state)
 {
     DependentTransaction dtx = (DependentTransaction)state;
     using (TransactionScope tx = new TransactionScope(dtx))
     {
         // We are operating inside the same transactional context as the
         // one from which the dtx was created.

         // Some transactional work...

         tx.Complete();
     }
     // Completing the dependent clone unblocks the parent's commit.
     dtx.Complete();
 }

Coordinating transactions across multiple threads is difficult to do correctly. But it's a very powerful feature.

Enterprise Services Integration

Most applications will use the explicit programming model described above. It is incredibly simple, and coupled with direct programmatic access to the ambient Transaction object, it offers all of the flexibility and functionality you likely need. Prior to 2.0, however, the Enterprise Services (ES) feature provided a way to perform declarative transaction management using custom attributes. The System.EnterpriseServices namespace offers a managed extension of COM+ and, in a nutshell, does things like:

  • Ensures that each ES component instance — defined as an object of a managed class deriving from the ServicedComponent class — abides by COM apartment rules. For example, an ES object created on a thread inside a Single Threaded Apartment (STA) can only be accessed from that STA. Other threads trying to access it will be automatically transitioned by the CLR, reducing concurrency but helping to write correct thread-safe code. See Chapter 10, on threading, for more details.

  • Pools and shares ES components through just-in-time activation and retirement. Instances can furthermore be hosted and accessed across remoting boundaries using a decoupled communication protocol, just as COM+ does over DCOM.

  • Provides transacted access to ES instances. Prior to 2.0, if you wanted transacted objects, ES was the only option that provided easy integration with existing transaction managers, including the Distributed Transaction Coordinator (DTC) for cross-machine transactions. This is generally still the case unless you author your own custom transaction provider. We discuss this in more detail later on.

On this last point, consider this type as an example:

 [Transaction(TransactionOption.RequiresNew,
     Isolation = TransactionIsolationLevel.Serializable,
     Timeout = 30)]
 class MyComponent : ServicedComponent
 {
     [AutoComplete]
     public void Foo()
     {
         // Do some transacted operation; assuming we complete
         // successfully, the COM+ transaction will commit.
     }

     public void Bar()
     {
         // Some transacted operations. We manually commit this time.
         ContextUtil.SetComplete();
     }
 }

Notice that we use the ES TransactionAttribute type to declare that all of our type's operations are protected by a transaction. It uses the same set of options to indicate flow as the explicit transactions described above, the same set of isolation levels, and also provides the capability for a deadlock timeout. Notice also that the Foo method has an AutoCompleteAttribute; this indicates that the transaction should be set to consistent and done upon successful exit of the method (i.e., as long as an unhandled exception is not generated). Much like the explicit transactions examples above, the Bar method uses the ContextUtil class to set the consistent and done bits.

Integration Options

The System.Transactions feature has been written to integrate seamlessly with ES transactions. This means that constructing a new TransactionScope and then making a method call on a transacted ES component will abide by the same nesting and flow rules that would ordinarily take place. It's conceptually as if each of the ServicedComponent's methods is wrapped in a using (TransactionScope tx = new TransactionScope(...)) {} block automatically for you, meaning that calling into it from within an existing scope will enlist the ES's RM into your existing transaction.

For TransactionScopes generated inside of an existing ES transactional context, you may use an enumeration value from the EnterpriseServicesInteropOption type to specify precisely how the new scope functions with respect to the ES transaction. Specifying a value of None means that the existing ES transaction will not be used if one exists; instead, the scope acts as though you used RequiresNew in such cases and generates a completely new transaction. The Full and Automatic values enable you to choose the level of participation with ES contexts: Full means that the block acts just as though it were an ordinary ES transactional context, while Automatic acts like None when called from the default ES context, and like Full when called from any other transactional context.
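For example, a minimal sketch using the three-argument constructor overload (the choice of Full here is purely illustrative):

 using (TransactionScope tx = new TransactionScope(
     TransactionScopeOption.Required,
     new TransactionOptions(),
     EnterpriseServicesInteropOption.Full))
 {
     // This scope is fully synchronized with the COM+ (ES) context, so
     // ES components called from here share the ambient transaction.
     tx.Complete();
 }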

Declarative Transaction Compromises

Using declarative transactions does unfortunately imply a few compromises. The most noteworthy is that a declarative transaction always incurs the overhead of a distributed transaction: even if the object is local to the AppDomain in which the transaction is executed, the DTC comes into play. Please refer to the "Distributed Transactions" section below for the full implications of this. Furthermore, the type of transaction used may only be accessed from a single thread. While most transactions are not accessed from multiple threads, this can be limiting in scenarios that call for it.

A complete discussion of ES is outside of the scope of this book. There are other resources recommended in the "Further Reading" section below should you care to learn more. While its architecture is very COM-based, ES represents a great way to build distributed, scalable systems with today's technology. Admittedly, web services are becoming the more appropriate way to do this moving forward, for example with the Windows Communication Foundation, but COM will certainly be around for some time to come.

Transaction Managers

The transactional programming model uses two styles of transaction managers (TMs) to coordinate with the respective resource managers (RMs). When a new transaction is generated using a TransactionScope, it begins life under the Lightweight Transaction Manager (LTM). As long as the transaction only deals with volatile resources (enlisted via EnlistVolatile) and at most one durable resource (enlisted via EnlistDurable) that supports single-phase notifications (ISinglePhaseNotification), the transaction stays under the control of the LTM. This is an extremely fast, low-overhead transaction manager that relies on ordinary in-process CLR method calls to do its job. If you stay inside the LTM, you should be extremely happy with the performance, especially if you've worked with the overhead of distributed transactions imposed onto local resources in the past.
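To give a feel for what enlistment looks like, here is a minimal sketch of a volatile RM (a hypothetical logger with no durable state to protect). Its Prepare and Commit/Rollback callbacks correspond to the two phases of the 2PC protocol discussed below, and enlisting only volatile resources like this keeps the transaction under the LTM:

 using System;
 using System.Transactions;

 class VolatileLogger : IEnlistmentNotification
 {
     public void Prepare(PreparingEnlistment e)
     {
         // Phase one: vote to commit. Calling e.ForceRollback() here
         // would veto the transaction instead.
         e.Prepared();
     }
     public void Commit(Enlistment e)
     {
         Console.WriteLine("Committed");   // Phase two: make effects real.
         e.Done();
     }
     public void Rollback(Enlistment e)
     {
         Console.WriteLine("Rolled back"); // Undo any tentative effects.
         e.Done();
     }
     public void InDoubt(Enlistment e) { e.Done(); }
 }

 // Enlisting the RM in the ambient transaction:
 using (TransactionScope tx = new TransactionScope())
 {
     Transaction.Current.EnlistVolatile(
         new VolatileLogger(), EnlistmentOptions.None);
     tx.Complete();
 }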

On the other hand, if a transaction (1) uses resources not local to the AppDomain (meaning they can be in other parts of the process, other processes on the same machine, or even on another machine altogether), (2) enlists a durable resource that doesn't support single-phase notifications, or (3) enlists more than one durable resource, the distributed OleTx Transaction Manager is used. Furthermore, enlistment of resources from more than one RM in the same transaction requires two-phase commit (2PC) to guarantee reliable and indivisible commit of both resources; in these cases, the OleTx TM is used as well. This TM masks all of the difficulties of communicating across distributed machines, dealing with failures, and ensuring that the ACID properties of your transactions are preserved among multiple remote parties.

The OleTx TM is built on top of the Windows Distributed Transaction Coordinator (DTC) and uses RPC to communicate among multiple RMs. (This is because the DTC runs inside of its own process.) The OleTx TM is less efficient than the LTM, simply because the LTM is able to use in-memory method calls to work with RMs, while the OleTx TM must use remote procedure call mechanisms (even for RMs local to the AppDomain). If you are using Enterprise Services, all transactions will start out using the OleTx TM, even those working with a single RM (even a single component!) local to the AppDomain. Furthermore, any child scopes that work with resources usually handled by the LTM will simply join the transaction using the OleTx TM.

Promotion

A benefit of System.Transactions is that you rarely need to think about TMs or RMs at all. All of the magic that goes into creating TMs, enlisting and coordinating with RMs, and deciding which type of TM is necessary for a given set of RMs is encapsulated under the covers of the TransactionScope programming model. Life is usually simple. Understanding how it all works underneath is often useful, however; when things go wrong, for example, understanding the involved components can give you a head start on debugging and fixing them. Unfortunately, we don't have space to discuss custom RMs here (which are a perfect way to learn the ins and outs of the infrastructure); please look at the "Further Reading" section below for some interesting follow-ups.

One thing the infrastructure does transparently is transaction promotion. When a simple transaction begins life in the LTM and suddenly encounters an enlistment for a resource controlled by, say, an RM that must be coordinated with the OleTx TM, it will promote the transaction, moving it from under the LTM's control into the OleTx TM's control. Here is a brief example of this:

 using (TransactionScope txScope1 = new TransactionScope())
 {
     // Right now, we are in the LTM.
     IDbConnection dbc = /*...*/null;
     IDbCommand cmd = dbc.CreateCommand();
     // Work with 'cmd'...

     MyComponent2 component = new MyComponent2();
     component.Foo();
     // The call to 'Foo' causes us to be promoted to use the OleTx TM.

     IDbCommand cmd2 = dbc.CreateCommand();
     // Work with 'cmd2'...

     // We're still in the OleTx TM. It needs to coordinate the commit
     // with both the IDbCommand's and 'MyComponent2's RMs for a
     // successful 2PC.
     txScope1.Complete();
 }

If you were to step through this code while watching the DTC's monitoring utility, you could see the promotion happening. This monitoring utility is an MMC snap-in and can be found in your computer's Administrative Tools, under Component Services.

After being promoted to the OleTx TM, the remainder of the transaction executes as though it had always lived inside of it. The commitment or rollback must be coordinated with all enlisted RMs at the end of the transaction. For long-running transactions that enlist a single OleTx TM-protected resource for a short period of time, for example, this can have a negative impact on the overall transaction performance.
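If you want to detect promotion programmatically, a hypothetical helper like the following works, because a transaction's DistributedIdentifier remains Guid.Empty while it is still under the LTM's control:

 static bool IsPromoted(Transaction tx)
 {
     // Hypothetical helper: the identifier becomes non-empty once the
     // transaction has been promoted to the DTC (OleTx TM).
     return tx.TransactionInformation.DistributedIdentifier != Guid.Empty;
 }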

Two-Phase Commit (2PC)

The so-called two-phase commit (2PC) protocol is not specific to the System.Transactions technology, nor is it even specific to the DTC. The problem that 2PC solves is ensuring that each RM has a chance to reliably vote in the transaction — for example, to detect inconsistencies that would prohibit a successful commit — before the overall TM permits any one RM to physically commit its changes. Otherwise, some RMs might commit, while others later on have to reject the transaction (at which point the other RMs would need to compensate somehow or else the ACID properties do not hold). Moreover, some protocol must be established to ensure that — once all RMs have been told to commit — they have been successful in doing so.

The general algorithm is as follows: The TM first permits all transactional activity to complete. Assuming that all scopes have voted that the transaction may be processed, the first (prepare) phase begins: the TM contacts each RM and asks it to prepare, durably recording whatever it needs to either commit or roll back, and to vote on the outcome. If every RM votes to commit, the second (commit) phase begins: the TM instructs each RM to commit, and each RM is responsible for contacting the TM afterward to acknowledge a successful commitment. Assuming that each RM acknowledges, the transaction can be marked completed. If a single RM votes to roll back during the prepare phase, or fails to acknowledge within a timeout period (usually configurable in the TM, for example with DTC), the TM contacts each RM and asks it to roll back the changes.



