At first glance, MTS can be difficult to categorize. It doesn't seem to map neatly to an equivalent product or technology used by existing distributed computing environments. MTS represents a new breed of technology that combines features from traditional distributed object technologies and traditional online transaction processing (OLTP) to provide a range of services to applications.
An ORB is quite simply a broker of objects. When a call comes in to a server requesting an object, the ORB handles the call, checks for availability, and ultimately gives the caller an object. A TP-Monitor is basically an environment that inserts itself between clients and server resources so that it can manage transactions and system resources. A traditional TP-Monitor knows nothing about objects—it only knows how to optimize the use of system resources. An obvious evolutionary path is to combine the features of ORBs and TP-Monitors. This new TP-Monitor is sometimes called an Object Transaction Monitor (OTM). An OTM treats objects as just another kind of server resource that can be managed within a transaction boundary. By itself, an OTM isn't very interesting. Ultimately, the OTM is just a tool that developers can use to build scalable distributed applications. MTS is the world's first publicly available OTM.
MTS provides its object brokering services by intercepting object creation. As we saw in Chapter 2, COM looks in the system registry to determine where a component is located. So to intercept object creation, all MTS needs to do is modify the registry settings to point to code that belongs to MTS.
All components that will use MTS services are required to be in-process components. When a component is registered using the MTS administrative tools, the InprocServer32 registry key for each COM class in the component is replaced by a LocalServer32 key that points to the MTS Surrogate, MTX.EXE, and identifies the component's package ID. We will talk about packages in more detail in the section "Packages" later in this chapter. For now, you can think of the package ID as a key that identifies additional configuration information about how the Surrogate should be run and which components should be loaded into the running process.
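The registry change described above can be sketched roughly as follows. This is an illustrative fragment only: the CLSID and package ID are placeholders, the paths are assumptions, and the exact command-line syntax of the Surrogate may vary by MTS version.

```
; Before MTS registration: COM loads the component in-process
HKEY_CLASSES_ROOT\CLSID\{component-clsid}\InprocServer32
    (Default) = C:\MyApp\MyComp.dll

; After MTS registration: COM launches the MTS Surrogate instead,
; passing it the package ID so it knows what to load
HKEY_CLASSES_ROOT\CLSID\{component-clsid}\LocalServer32
    (Default) = C:\WINNT\System32\mtx.exe /p:{package-id}
```

The component's DLL is untouched; only the registration changes, which is what lets MTS insert itself without any modification to the component's code.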
MTS intercepts object creation so that it can associate an object context with every object under its control. The object context keeps track of information about the current activity, which is simply a logical thread of execution through a set of objects that starts when a client calls into an MTS process. MTS also creates a context wrapper for each object. The context wrapper lets MTS sit between an object and its client(s) and observe all calls to the object. MTS provides its resource management services via the object context and context wrapper.
Figure 4-1 illustrates how object creation works under MTS. A client application, C, requests a new object in the usual way—for example, by calling CoCreateInstance. The COM run time looks in the registry to determine the location of the required component, M. It sees the LocalServer32 key and launches MTX.EXE, passing it a command-line parameter identifying the package ID. When the surrogate process is launched, it loads the MTS Executive, MTXEX.DLL, to initialize MTS services and the specified package. Then COM looks for the class factory for the object. However, the class factory COM gets is actually provided by the MTS Executive, not by the component M. When COM calls IClassFactory::CreateInstance to create the new object, the MTS-provided class factory uses information provided by component M to create a context wrapper and passes the wrapper back to COM (and ultimately to the client) as the new object. As far as client C is concerned, the context wrapper is the new object. The context wrapper exposes all the interfaces exposed by the real object. The MTS-provided class factory also creates an object context, which is associated with the context wrapper.
Figure 4-1. Object creation in MTS.
Notice that nowhere in this description is anything said about using the component to create the actual object. In fact, it isn't necessary for the real object to be created until a client calls it. MTS uses this fact to manage resource usage by objects in its server processes.
The first time a client makes a method call other than QueryInterface, AddRef, or Release, the context wrapper will create a real object. This process is called activating the object. The real object stays in memory until the object itself indicates that it can be deactivated. An object indicates that it can be deactivated by calling one of two special methods exposed by the object context, IObjectContext::SetComplete or IObjectContext::SetAbort. We'll see why two methods are provided in the section "The Application Server Programming Model" later in this chapter. For now, all we need to know is that both methods simply indicate that the object has finished its work. When an object is deactivated, MTS can choose to reclaim all resources associated with the object—including the memory allocated for the object itself. The next time a client makes a method call on the object, if the real object has been destroyed, the context wrapper creates another real object. This process is known as Just-In-Time (JIT) activation.
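The mechanics of JIT activation can be modeled with a small sketch. This is not real MTS or COM code—the Account class, the wrapper, and its counters are invented for illustration—but it shows the essential behavior: the real object is not created until the first method call, and it is torn down as soon as it reports completion.

```cpp
#include <memory>

// Hypothetical stand-in for a real MTS object; not an actual COM class.
class Account {
public:
    int Balance() const { return 100; }
};

// Toy model of an MTS context wrapper: the real object is created on the
// first method call (activation) and destroyed when the object reports
// that it has finished its work (deactivation).
class ContextWrapper {
    std::unique_ptr<Account> real_;   // null until the first method call
    int activations_ = 0;
public:
    int Balance() {
        if (!real_) {                 // Just-In-Time activation
            real_ = std::make_unique<Account>();
            ++activations_;
        }
        int result = real_->Balance();
        SetComplete();                // object says "done"; MTS may reclaim it
        return result;
    }
    void SetComplete() { real_.reset(); }  // deactivation: memory reclaimed
    int Activations() const { return activations_; }
};
```

Calling Balance twice on the same wrapper activates the real object twice; between the calls no memory is held for it, which is exactly how MTS keeps many logical objects alive with few physical ones.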
As mentioned, all components using MTS services must be in-process components so that all objects can be created within MTS-managed server processes. A server process uses the MTS Executive to provide the run-time support required by component-based, scalable application servers.
As a surrogate, MTS provides context management and JIT activation, as described earlier. It manages threads for the process and correctly synchronizes access to objects, so developers do not need to write multi-threaded components. It determines when the process should terminate. It performs security checks on calls into the process to prevent unauthorized clients from using components. In addition, MTS helps manage other resources, such as database connections, using resource pooling so that applications scale and perform well.
MTS also provides transparent access to a distributed transaction manager known as the Microsoft Distributed Transaction Coordinator (MS DTC). Components can be registered with the MTS administrative tools as supporting or requiring transactions. When an object is created, MTS will check whether transaction support is needed. If so, MTS creates a new transaction if necessary and enlists the object in the transaction automatically.
Because MTS is managing all the objects used in its environment, it imposes certain rules on components. In addition to being in-process, each component must provide a class object that exposes the IClassFactory interface. Objects running in MTS cannot be aggregated, nor can they create their own threads.
Components in MTS cannot use custom marshaling. MTS cannot create a context wrapper for a custom-marshaled object. Components that expose Automation or dual interfaces must provide a type library. MTS will use the information in the type library to generate the context wrapper. Components that expose interfaces that cannot be marshaled by the Automation marshaler must provide a proxy/stub DLL built in a special way so that MTS can determine what the interface looks like even if no type library is available.
A TP-Monitor is system software for creating, executing, and managing transaction processing applications. Since transaction processing might be unfamiliar to developers who've worked primarily on PCs, let's do a quick review before we look at how transactions are handled by MTS.
A transaction is simply a way to coordinate a set of operations on multiple resources. Transactions have the following four properties, known as the ACID properties:
Atomicity means that a single transaction has all-or-nothing behavior. In the bank account example mentioned earlier, Atomicity is the property that ensures that a transfer from your credit card to your checking account has one of two results. If the transfer succeeds, your credit card is debited and your checking account is credited with the amount of the transfer. If the transfer fails, your credit card is not debited and your checking account is not credited. Debiting the credit card without crediting the checking account is not permitted, nor is crediting the checking account without debiting the credit card. The transfer is all-or-nothing. A transaction that succeeds is said to commit. A transaction that fails is said to abort.
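The all-or-nothing behavior of the transfer can be sketched in a few lines. This is only an analogy—real transaction managers use logging, not copy-and-swap, and the integrity check here is invented—but it shows the two possible outcomes: commit applies both updates, abort applies neither.

```cpp
// Illustrative sketch only: an all-or-nothing transfer implemented with
// copy-and-swap, standing in for what a real transaction manager does.
struct Accounts {
    int credit_card_owed;
    int checking;
};

// Attempt the transfer on a scratch copy; overwrite the original only if
// every step succeeds (commit), otherwise leave it untouched (abort).
bool Transfer(Accounts& acct, int amount) {
    Accounts scratch = acct;               // work on a private copy
    scratch.credit_card_owed += amount;    // debit the credit card
    if (amount > 5000)                     // pretend an integrity check fails
        return false;                      // abort: original untouched
    scratch.checking += amount;            // credit the checking account
    acct = scratch;                        // commit: both updates or neither
    return true;
}
```

No execution path debits the card without crediting the account or vice versa, which is the Atomicity property in miniature.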
Consistency ensures that at the end of a transaction, no integrity constraints on the resources it updates are violated. For example, an ATM won't let you withdraw more money than you have in your bank account.
Isolation ensures that concurrent transactions do not interfere with each other. If transactions T1 and T2 are concurrent, when both have completed the results will be the same as if T1 completed before T2 started or as if T2 completed before T1 started. No other results are acceptable. In our bank account example, the Isolation property ensures that neither the results of the credit card transfer nor the results of the check processing disappear.
Durability means that changes to resources survive system failures, including process faults, network failures, or even hardware failures. Once a transaction has completed successfully, the results will never be lost.
TP-Monitors ensure that transactions are Atomic, Isolated, and Durable. TP-Monitors work with transaction processing programs to ensure that transactions are also Consistent.
Let's look now at a generic model for a distributed transaction processing (DTP) system. In this model, applications use a transaction manager to coordinate (create, commit, abort, or monitor) transactions. The transaction manager is responsible for ensuring that all parties in the transaction are notified of the outcome (commit or abort) and coordinating recovery from failures. Each server machine in a distributed system can have a transaction manager. Transaction managers can talk to other transaction managers or to local resource managers, as shown in Figure 4-2.
Figure 4-2. A generic transaction processing model.
A resource manager is responsible for maintaining the ACID properties for a particular resource. The resource manager consists of some server code that knows how to commit updates to a resource, roll back changes in the case of an aborted transaction, and recover from system failures. The most common example of a resource manager is a database management system (DBMS), but this is not the only kind of resource manager, as we'll see in the section "Resource managers in MTS" later in this chapter.
You might recognize this model as a simple form of the X/Open DTP model. X/Open is part of The Open Group, an international consortium of hardware and software vendors that defines vendor-independent standards for distributed computing. The X/Open DTP model was introduced in 1991 and encompasses the standard features of most TP-Monitors.
When all the resources modified in a transaction are owned by a single resource manager, the resource manager itself can perform all the work required to commit the transaction. The resource manager ensures that either all the resource modifications occur or none of them occur, and it notifies the transaction manager of the outcome. For example, if two tables in the same Microsoft SQL Server database are updated, SQL Server, the resource manager, coordinates the updates.
When a transaction updates resources owned by two or more resource managers and perhaps located on different machines, the transaction manager needs to get into the act to ensure the Atomicity property of the transaction. This requirement is difficult to meet because machines can fail and recover independently. The solution is a special protocol called two-phase commit that is coordinated by the transaction manager. The two-phase commit protocol is illustrated in Figure 4-3. For simplicity, we will look at the single-machine case, with one transaction manager (known as the coordinator) coordinating multiple local resource managers (known as the participants).
Figure 4-3. The two-phase commit protocol.
The protocol consists, appropriately enough, of two phases: prepare and commit. In the first phase, the transaction manager notifies each resource manager enlisted in the transaction to prepare to commit. At this point, each resource manager tries to save its results in durable storage, without actually committing the changes. Because the resource values have not yet been changed, the resource manager is now prepared to either commit the changes or roll back to the previous values. If the resource manager was able to save its results, it sends a message back to the transaction manager voting to commit the transaction. Otherwise, the resource manager sends a message to the transaction manager voting to abort the transaction.
When the transaction manager receives votes from all the resource managers, it proceeds to the second phase. (If a transaction manager does not receive a vote within a reasonable amount of time, it assumes that the resource manager has voted to abort the transaction.) At this point, the transaction manager tallies the votes. If everyone votes to commit, the transaction will be committed. If anyone votes to abort, the transaction will be aborted. The transaction manager sends a second round of messages to all the resource managers, informing them of the transaction outcome. The resource managers are responsible for honoring the decision of the transaction manager. If the transaction is committed, the resource manager commits the changes to its resources. If the transaction is aborted, the resources are rolled back to their prior values. If a resource manager fails before it is notified of the outcome of a transaction, it contacts the transaction manager as part of error recovery, asking for the outcome of its pending transactions. At this point, it can continue with its part of the two-phase commit protocol.
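The two rounds of messages described above can be condensed into a short single-machine simulation. The Participant and coordinator function below are invented names for illustration, and real protocols must also handle timeouts, durable logging, and recovery, which this sketch omits.

```cpp
#include <vector>

// Minimal sketch of the two-phase commit protocol: one coordinator,
// several local participants (resource managers).
class Participant {
    bool can_prepare_;   // whether this participant can save its results
public:
    bool prepared = false, committed = false, rolled_back = false;
    explicit Participant(bool ok) : can_prepare_(ok) {}
    // Phase 1: try to save results in durable storage; vote commit or abort.
    bool Prepare() { prepared = can_prepare_; return prepared; }
    // Phase 2: honor the coordinator's decision.
    void Commit()   { committed = true; }
    void Rollback() { rolled_back = true; }
};

// The coordinator tallies the votes; a single abort vote aborts everyone.
bool RunTwoPhaseCommit(std::vector<Participant>& parts) {
    bool all_voted_commit = true;
    for (auto& p : parts)                    // phase 1: prepare
        if (!p.Prepare()) all_voted_commit = false;
    for (auto& p : parts) {                  // phase 2: broadcast the outcome
        if (all_voted_commit) p.Commit();
        else p.Rollback();
    }
    return all_voted_commit;
}
```

Note that the decision is made once, by the coordinator, after all votes are in; the participants never decide the outcome themselves, which is what makes the protocol safe when machines fail independently.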
The protocol works in essentially the same way across multiple machines, except that multiple transaction managers are involved, usually one per machine. One transaction manager is selected as the transaction coordinator. The remaining transaction managers act as intermediaries between the transaction coordinator and the local resource managers that are enlisted in the transaction.
If you are interested in a more comprehensive overview of transaction processing, see Bernstein and Newcomer, "Principles of Transaction Processing," listed in the bibliography.
Now that you have a general idea of transaction processing, let's examine how MTS handles transactions. We've already looked at the MTS Surrogate, MTX.EXE, and the MTS Executive, MTXEX.DLL. These parts of MTS provide the server process that MTS components run in. They also provide the process for auxiliary components such as resource dispensers (more on these in a moment) and in-process COM components that are not controlled by MTS, such as ADO. Multiple server processes can run on a single machine. The MTS Executive is also responsible for creating the context wrappers and object contexts for its objects, so it can monitor calls to the object and manage system resources appropriately.
The MS DTC

The transaction manager in an MTS environment is the MS DTC. The MS DTC runs as a Windows NT service and is usually started at system start-up. A copy of the MS DTC runs on each machine that will participate in transactions, but only one copy coordinates a particular transaction.
The MS DTC uses a COM-based protocol, OLE Transactions, to communicate with resource managers. OLE Transactions defines the interfaces that applications, resource managers, and transaction managers use to perform transactions. Applications use OLE Transactions interfaces to initiate, commit, abort, and inquire about transactions. Resource managers use OLE Transactions interfaces to enlist in transactions, to propagate transactions from process to process or from system to system, and to participate in the two-phase commit protocol.
The X/Open Distributed Transaction Processing group has also defined protocols for communicating with transaction managers. The TX standard defines the API that an application uses to communicate with the transaction manager. Applications use the API to initiate, commit, and abort transactions. The XA standard defines an API for resource managers to communicate with transaction managers. The XA API enables resource managers to enlist in transactions, to perform two-phase commits, and to recover in-doubt transactions following a failure.
The MS DTC uses OLE Transactions instead of the X/Open protocols for several reasons. First, Microsoft's computing model is based on distributed, transaction-protected, object-based components that communicate using COM interfaces. To fit this model, the transaction interfaces needed to be object-based, unlike the X/Open protocols. Second, Microsoft intends to extend the transaction model to support a wide variety of transaction-protected resources beyond the usual database resources. Since OLE Transactions is based on COM, it can easily be extended to provide richer transaction capabilities. Third, OLE Transactions supports multi-threaded programs, whereas XA is oriented toward a single thread of control. Finally, unlike OLE Transactions, the X/Open standard does not support recovery that is initiated by the resource manager; therefore, all recovery must be initiated by the transaction manager.
Resource managers in MTS

Resource managers in MTS fulfill the same role as resource managers in the X/Open model—they are services that manage durable state and understand how to participate in the two-phase commit protocol. Resource managers are responsible for logging all changes, so they can handle rollbacks when a transaction aborts or recovers from a system failure. The most commonly used resource managers for MTS are DBMSs such as SQL Server. However, MTS does not impose any constraints on the type of resource, as long as there is a resource manager that meets the criteria defined by OLE Transactions. For example, Microsoft Message Queue Server (MSMQ) provides a resource manager.
Because the X/Open protocols are widely supported by other transaction and resource managers, the MS DTC provides some interoperability with products that comply with the XA standard. OLE Transactions provides a mapping layer to convert XA functions to OLE Transactions functions. OLE Transactions-compliant resource managers can be controlled by XA-based transaction managers, such as Tuxedo or Encina, and XA-based resource managers can be controlled by the MS DTC.
Resource dispensers in MTS

Resource dispensers are another part of MTS that can choose to participate in transactions. Resource dispensers manage nondurable state that can be shared—for example, database connections, network connections, and connections to queues as well as threads, objects, and memory blocks. A resource dispenser is implemented as a DLL that exposes two sets of interfaces. One set of interfaces is an API that applications can call to use the resources. The other set is used to connect the resource dispensers with an MTS component called the Resource Dispenser Manager (DispMan). Figure 4-4 illustrates the position of DispMan in the MTS architecture.
Figure 4-4. The MTS architecture.
DispMan provides resource pooling for the resource dispensers. Resource pooling improves performance, since applications usually don't need to wait for a new resource to be created from scratch. Instead, they just get an existing resource from the pool. Resource pools can also improve performance by limiting the number of resources that exist at a given time, which can help keep the server from getting bogged down. In MTS, resource pools are managed per-process.
If DispMan is running in an MTS environment, it works with the MTS Executive and the resource dispensers to ensure that resources supplied by the resource dispensers are correctly enlisted in transactions. Resource dispensers are not required to support transactions, but if they do, they must be able to enlist in an OLE Transactions transaction with the MS DTC. The MTS Executive notifies DispMan when a transaction is complete so that DispMan can move resources used by the transaction back into the resource pool. The MTS Executive also notifies DispMan whenever an object is destroyed so that DispMan can reclaim any resources used by the object, eliminating the possibility of resource leaks.
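The pooling behavior DispMan provides can be modeled with a simple sketch. The ConnectionPool class and its counters are invented for illustration—the real dispenser interfaces are COM-based and far richer—but the core idea is the same: releasing a resource returns it to a pool, and a later request reuses it instead of paying the cost of creating one from scratch.

```cpp
#include <string>
#include <vector>

// Toy model of resource pooling: connections go back to the pool on
// release and are reused rather than re-created.
struct Connection { std::string dsn; };

class ConnectionPool {
    std::vector<Connection> idle_;   // released connections awaiting reuse
public:
    int created = 0, reused = 0;     // counters for illustration only
    Connection Acquire(const std::string& dsn) {
        if (!idle_.empty()) {        // reuse: skip expensive setup
            Connection c = idle_.back();
            idle_.pop_back();
            ++reused;
            return c;
        }
        ++created;                   // otherwise create from scratch
        return Connection{dsn};
    }
    // In MTS, the Executive's end-of-transaction notification is what
    // triggers the equivalent of this call, so resources are never leaked.
    void Release(Connection c) { idle_.push_back(std::move(c)); }
};
```

A second Acquire after a Release hits the pool, so only one connection is ever physically created—the effect that makes pooled database connections so much cheaper than fresh ones.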
MTS 2.0 supplies two resource dispensers that are visible to developers. The ODBC Driver Manager is a resource dispenser for ODBC database connections. MTS also supplies the Shared Property Manager (SPM, pronounced "spam"), which is used to manage shared data. We will look at these resource dispensers in further detail when we discuss building components in Chapters 8 and 9.
In MTS, objects (the real objects, not the context wrappers the client is talking to) can participate in only one transaction at a time. Objects are enlisted in transactions on activation and are recycled when the transaction commits or aborts. Objects call IObjectContext::SetComplete to indicate that they have finished their work and vote to commit the transaction. Objects call IObjectContext::SetAbort to indicate that they have finished their work and vote to abort the transaction. Either way, MTS will reclaim an object's resources—including its internal memory—when the transaction ends.
Failfast behavior

MTS-managed processes exhibit a behavior known as failfast. Failfast behavior means that if the MTS Executive detects an error in its internal data structures or a problem with one of the objects under its control, it immediately terminates the process. This design decision protects MTS and its components from data corruption. When the MTS process is restarted, MTS does not attempt to re-create the state of objects that might have been running when the process shut down. Instead, MTS relies on the fact that transactions using those objects will time out and then abort. Applications that detect a transaction failure can elect to retry the transaction at a later time, using new objects in the new process.
Nested Transactions or Chained Transactions
Developers familiar with other TP-Monitors often ask whether MTS supports nested or chained transactions. In transaction processing theory, nested transactions permit transactions to have subtransactions. The parent transaction does not commit until all subtransactions have committed, and subtransaction results are not made durable until the parent commits. Effectively, objects in a subtransaction would belong to both the parent transaction and the subtransaction. MTS does not support nested transactions. MTS does support starting a new transaction from within an existing transaction, but the transactions succeed or fail independently.
Chained transactions permit a transaction program to commit a transaction and immediately start another, maintaining its internal state across the transaction boundary. MTS does not support chained transactions either. In practice, this usually isn't a problem, but it might require a different way of thinking about transaction boundaries than you have used with other TP-Monitors.
In addition to the MTS run-time components, MTS also provides administrative tools for deploying and monitoring applications. We will examine the administrative tools in detail (well, in as much detail as developers can probably stand) in Part Two, when we talk about packaging, debugging, and testing applications.