"Now let's look at the major features that help expand your range of design choices: Queued Components, Loosely Coupled Events (LCE), the In-Memory Database (IMDB), the Transactional Shared Property Manager, object pooling, and dynamic load balancing.
"Queued Components is a new feature of COM+ that wraps message queuing behind the normal COM programming model. Instead of explicitly writing code to send a message to a queue, parse that message, and dispatch it to the correct receiving application, you simply mark a designated component's interface as queued. When a client calls a method on a queued component, the call is recorded and sent as a message; there is no time dependence between making the call and receiving a response. Eventually the receiving application picks up the message, understands how to process it, and performs the work, whenever it happens to be available. This is how you get asynchronous, or time-independent, behavior in COM+.
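The idea can be sketched in a few lines of Python. This is not the Queued Components API (the real recorder and player are COM+ infrastructure, and the `QueuedProxy` and `OrderServer` names here are made-up stand-ins); it only shows the shape of the mechanism: a method call is captured as a message, and the receiver replays it whenever it runs.

```python
import queue

# Sketch only: the client "calls" a method, but the call is recorded as a
# message on a queue; a separate receiver replays it later. No time
# dependence exists between the caller and the receiver.

class QueuedProxy:
    """Records method calls as messages instead of executing them."""
    def __init__(self, q):
        self._q = q
    def __getattr__(self, name):
        def record(*args):
            self._q.put((name, args))   # the "call" becomes a message
        return record

class OrderServer:                      # hypothetical receiving component
    def __init__(self):
        self.orders = []
    def place_order(self, item, qty):
        self.orders.append((item, qty))

q = queue.Queue()
client_view = QueuedProxy(q)
client_view.place_order("widget", 3)    # returns immediately; no server needed yet

# Later (perhaps much later), the receiving application drains the queue.
server = OrderServer()
while not q.empty():
    method, args = q.get()
    getattr(server, method)(*args)      # play the recorded call back

print(server.orders)                    # [('widget', 3)]
```

The client never waits on the server; the queue is the only thing the two sides share.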
"If you need to work through a lengthy operation, or reduce the total time it takes a client request to return a response, you should try to move part of the processing out of the real-time client-and-application interaction timeline. We recommend that you send a message to a queued component, have it perform some work, and perhaps update a data store with some status information. Alternatively, you could send another message on to a specialist application that knows how to complete the operation you're performing out of line.
"At this point, you'll have to make a development decision about whether or not you want to change the message format. If you want to maintain the format, Queued Components is advantageous because you don't have to learn the MSMQ API. You would simply use the familiar COM programming model. On the other hand, if you want to change the message format—perhaps you need to communicate with an external application that has a defined message format—then you would need to use the MSMQ APIs directly.
"Another way to combine applications that weren't designed to work together is LCE, Microsoft's publish-and-subscribe event model. This model works well with Queued Components. In the typical implementation of LCE, your application is tied up from the time it fires an event until the recipients receive the message and return a response. If you don't want to wait for all subscribers to finish processing a particular event, you can use Queued Components to take that processing out of line: you fire the event as a message, which allows the event system to do the work at its leisure.
"One advantage of LCE is that, unlike connection points in ActiveX controls, LCE allows persistent subscriptions, which don't have to be implemented within every component. So there's no logic managing the subscription within your components; the system performs subscription management for you.
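A minimal publish-and-subscribe sketch makes the point about subscription management. This is not the LCE API (the `EventSystem` class and its method names are invented for illustration); it shows the key idea that subscriptions live in a registry managed outside the components, so neither publisher nor subscriber carries any subscription logic.

```python
# Sketch only: the "system" owns the subscription list, standing in for the
# COM+ event store. Publisher and subscriber never know about each other.

class EventSystem:
    def __init__(self):
        self._subscriptions = {}     # event name -> list of handlers
    def subscribe(self, event, handler):
        self._subscriptions.setdefault(event, []).append(handler)
    def fire(self, event, *args):
        for handler in self._subscriptions.get(event, []):
            handler(*args)           # publisher never sees the subscriber list

received = []
events = EventSystem()
events.subscribe("stock_changed", lambda sym, px: received.append((sym, px)))

# The publisher just fires; it has no idea who, if anyone, is listening.
events.fire("stock_changed", "MSFT", 91.25)
print(received)                      # [('MSFT', 91.25)]
```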
"However, if you consider using LCE, you should keep in mind that its one limitation is its inability to perform true multicast. For example, if you have tens of thousands of subscribers to every event, LCE is probably not the mechanism that you want to use. On the other hand, if you have a handful or a fairly limited number of subscribers, then you can use LCE to broadcast information efficiently. Microsoft also provides a means for parallel firing, in which separate threads fire the event to each subscriber as quickly as possible. Although this is not true multicast, it does give you a way to send information to a number of subscribers without waiting for each one to complete processing. Again, this is an ideal way to combine elements that were developed independently.
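The effect of parallel firing can be sketched with ordinary threads (the function names here are mine, not the COM+ API): the event is delivered to each subscriber on its own thread, so total delivery time is roughly one subscriber's processing time rather than the sum of all of them.

```python
import threading
import time

# Sketch only: deliver one event to every subscriber concurrently instead of
# one after another.

def fire_parallel(handlers, *args):
    threads = [threading.Thread(target=h, args=args) for h in handlers]
    for t in threads:
        t.start()                 # all subscribers start working at once
    for t in threads:
        t.join()                  # here we wait; a real publisher need not

results = []
lock = threading.Lock()

def slow_subscriber(msg):
    time.sleep(0.05)              # simulate per-subscriber work
    with lock:
        results.append(msg)

start = time.monotonic()
fire_parallel([slow_subscriber] * 5, "alert")
elapsed = time.monotonic() - start

# Five subscribers finish in roughly 0.05s, not the 0.25s sequential firing
# would take.
print(len(results), round(elapsed, 2))
```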
"One of the key uses for this parallel firing is monitoring. In monitoring, you instrument your components to fire explicit events about unusual occurrences, such as access violations. If you have elements that need to be audited, you might want to fire those off. If you see that resources are being constrained, or that a problem is causing the system to perform worse than normal, you might want to fire off an event. Then a separately developed monitoring application can watch for particular events so that you can try to solve any problems that arise. The component doesn't care what performs the work, or when, or how; it only alerts you to a potential problem.
"Another feature in COM+ that was previously unavailable in MTS is IMDB, which is essentially a data cache that helps you retrieve data from data stores and move it closer to your business objects. IMDB is geared toward read-mostly scenarios, where you have a table of data such as ZIP codes, cities, and states—information that doesn't change often. You retrieve the information from the data store, get it onto your application server—your middle tier where some business logic is happening—and then access the data out of memory quickly. Because memory is inexpensive, you have a readily available means to reduce network traffic back to your data store. For items that are read-mostly, rather than perform a query multiple times, you would simply cache the information into main memory and use it from there. IMDB is optimized for read-mostly scenarios because a distributed cache coordinator is not available in the version that ships with Windows 2000. So a separate cache on every computer reads a set of data out of the data store. Data changes have to go through the IMDB layer.
"It's important to note that you can't update the database and have the change propagate automatically to every cache that holds a copy. If you intend to change data, you have to make sure that a single computer handles all changes to that data. Alternatively, you could provide a mechanism that refreshes the cached information, regardless of how many copies there are. But you'd have to implement the refresh mechanism yourself, because it's not built into IMDB in the initial version. You can, however, use IMDB as a write-through cache. So, if you can work on one computer (you've partitioned your data and your clients so that every data change happens on only one computer), you can run the change through IMDB and then pass it on to the data store at a later time.
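The read-mostly, write-through behavior can be sketched like this. This is not the IMDB API; the classes are stand-ins that show why the pattern pays off: after the first fetch, reads are served from memory, and writes go through the cache to the backing store so the cache never disagrees with the store on that one computer.

```python
# Sketch only: a write-through cache in front of a (fake) data store.

class BackingStore:                  # stands in for the real database
    def __init__(self):
        self.rows = {"98052": "Redmond, WA"}
        self.reads = 0
    def read(self, key):
        self.reads += 1              # count round trips to the store
        return self.rows[key]
    def write(self, key, value):
        self.rows[key] = value

class WriteThroughCache:
    def __init__(self, store):
        self._store = store
        self._cache = {}
    def get(self, key):
        if key not in self._cache:          # miss: one trip to the store
            self._cache[key] = self._store.read(key)
        return self._cache[key]             # hit: served from memory
    def put(self, key, value):
        self._store.write(key, value)       # write through to the store...
        self._cache[key] = value            # ...then update the cache

store = BackingStore()
cache = WriteThroughCache(store)
for _ in range(1000):
    cache.get("98052")                      # read-mostly workload
print(store.reads)                          # 1: only the first read hit the store
```

A thousand lookups cost one database round trip; that saving is the whole point of a read-mostly cache.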
"In MTS, the Shared Property Manager provided a way to manage the shared transient state that you might need to maintain across transaction boundaries, between components, or between clients. In COM+, we introduced the Transactional Shared Property Manager, which is built on top of IMDB: it uses the familiar Shared Property Manager interfaces to expose IMDB functionality. You now have a choice of using IMDB directly, through either OLE DB or ADO, for a database-style approach, or using the object-based Shared Property Manager interfaces to access your state information. A benefit of the Shared Property Manager in COM+ is that it's computer-wide, not per-process. So in COM+, you can share information across several COM+ applications, whereas in MTS, all information was specific to a particular process or server package.
"You'll often need to manage data that's maintained across transactions but doesn't necessarily need to be kept in a persistent store. The Transactional Shared Property Manager is ideal for data hot spots such as Web page counters or IDs that you need to generate frequently. These would otherwise require a database update every time a new number was needed or a piece of information was stored away, and the updates could overwhelm the data store and hurt performance. The Transactional Shared Property Manager lets you cache the information in your computer's main memory, perform the updates there, and write them out to the database on an as-needed basis. This way, you avoid updating your database every time you want to touch a particular piece of data.
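The hot-spot pattern can be sketched as follows. The class names are invented, not the real Shared Property Manager interfaces; the sketch only shows why batching matters: the counter is incremented in memory on every hit and flushed to the database only occasionally.

```python
# Sketch only: a page counter that updates in memory and writes to the
# database once per batch, instead of once per hit.

class FakeDatabase:
    def __init__(self):
        self.value = 0
        self.writes = 0
    def save(self, value):
        self.value = value
        self.writes += 1            # count expensive database updates

class SharedCounter:
    def __init__(self, db, flush_every=100):
        self._db = db
        self._flush_every = flush_every
        self.count = 0
    def increment(self):
        self.count += 1                       # fast in-memory update
        if self.count % self._flush_every == 0:
            self._db.save(self.count)         # occasional write-out

db = FakeDatabase()
counter = SharedCounter(db)
for _ in range(1000):
    counter.increment()                       # 1000 page hits
print(counter.count, db.writes)               # 1000 hits, only 10 database writes
```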
"At one time, MTS was going to provide object pooling. The idea was that MTS would create objects before applications needed them and keep them on reserve; when an object was needed, MTS would retrieve it quickly rather than forcing the application to create and initialize a new one. Unfortunately, this feature never materialized. Every time an application deactivated an object, MTS destroyed it, and the next time a client tried to access the object, a new physical object was created on the server.
"COM+ now supports object pooling, which allows applications to build a pool of similar objects, per-process. When an application asks for a particular object, or calls a method on an object that has been deactivated, COM+ first looks in the pool. If a suitable object is available, COM+ quickly pulls it from the pool rather than creating a new one.
"If you have objects that are expensive to create, object pooling is a great way to create them in the background before they are required and then access them quickly. Object pooling also lets you control the maximum number of objects that can be created. Otherwise, if 10,000 clients requested a particular object, COM+ would try to create 10,000 objects, whether or not the physical computers could support them. By designating a maximum pool size, you can manage resource usage on your server.
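A pool with a hard ceiling can be sketched in a few lines (this is not the COM+ implementation, and the class names are mine): expensive objects are built up front, activation pulls one from the pool, and deactivation returns it instead of destroying it.

```python
import queue

# Sketch only: an object pool that caps total objects and recycles them.

class ExpensiveObject:
    created = 0
    def __init__(self):
        ExpensiveObject.created += 1      # pretend this is costly setup

class ObjectPool:
    def __init__(self, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):             # build objects before they're needed
            self._pool.put(ExpensiveObject())
    def activate(self):
        return self._pool.get()           # blocks if the ceiling is reached
    def deactivate(self, obj):
        self._pool.put(obj)               # recycle rather than destroy

pool = ObjectPool(size=3)
for _ in range(10):                       # 10 activations, but only 3 objects
    obj = pool.activate()
    pool.deactivate(obj)
print(ExpensiveObject.created)            # 3
```

Ten activations touch only three physical objects, and an eleventh concurrent client would wait at the ceiling instead of forcing a fourth creation.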
"A disadvantage of object pooling is that objects in a pool can't have thread affinity, and perhaps 90 percent of the objects written to date do have thread affinity. For example, every Visual Basic component has thread affinity, so if you use Visual Basic version 5.0 or 6.0 to build your components, your components can't support object pooling. In future versions of COM+, this restriction will hopefully be eliminated so that you can use any language to build poolable components. Until then, you can use the 6.0 versions of Visual C++ or Visual J++ to build objects that can be pooled.
"When implementing object pooling, you should remember three important conditions. First, if you're familiar with threading models, a component must support the free-threaded model, the neutral model, or both for its objects to be poolable. Second, object pooling is not beneficial if you apply it blindly. Third, object pooling does not always guarantee optimal performance.
"Considering these conditions, you should research your performance requirements to see if they are met without object pooling, and then evaluate whether you derive any benefits at all from activating object pooling. You may want to build a simple version of your component, particularly if you're a Visual Basic programmer (recall that Visual Basic doesn't support object pooling), to see if the component meets your performance requirements. If it doesn't, consider implementing the component using another language and then enabling pooling to see if that gives you the performance benefits that you need.
"Object pooling will most likely benefit organizations in which object creation costs (and initialization costs for particular components) are extremely high, or in which resources are scarce enough that you need to limit how many objects are created.
"With MTS, when you want to scale up to support large numbers of clients, you need additional computers. Multiple copies of a particular application or package run on separate computers, and load is balanced across those computers in a static fashion: particular clients always target one computer, or DNS round robin arbitrarily associates a particular request with a particular computer. This configuration is difficult to maintain, particularly as the number of clients grows. Also, MTS is not sensitive to issues such as a particular computer failing. So if a client constantly targets one computer and that computer fails or goes offline, the client is stuck; there's no way for it to determine which other computers it could target to finish its work.
"COM+ adds genuine dynamic load balancing, where you specify a group of computers that all have identical components installed and then enable the client to access any of those computers based on an algorithm that runs on a router computer. The algorithm used in COM+ in Windows 2000 is a response-time algorithm. This algorithm collects statistics in the background and determines which computer is likely to give optimal performance when an object is created.
"Load balancing doesn't happen every time a component or an object is activated and deactivated. This is actually beneficial, because a load-balancing operation on every activation of a stateless component results in excessive overhead per call. If you do the load balancing only when you create the object, from that point on the client and the particular computer where the object was created can communicate directly. Every method call then goes to the same computer, activates an object there, does its work, and returns. That tends to give better performance than load balancing every single method call.
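The shape of response-time routing can be sketched like this. The details of the actual COM+ algorithm aren't given here, so this is only an assumption-laden illustration with invented names: the router gathers response-time statistics in the background, picks the server with the best average when an object is created, and all later calls go straight to that server.

```python
import random

# Sketch only: route object *creation* to the server with the best recent
# average response time; method calls after that bypass the router.

class Server:
    def __init__(self, name, latency):
        self.name = name
        self.latency = latency                # simulated per-call cost
        self.samples = []
    def call(self):
        observed = self.latency + random.uniform(0, 0.001)
        self.samples.append(observed)         # statistics gathered over time
        return observed

def create_object(servers):
    """Route creation to the server with the best average response time."""
    def avg(s):
        return sum(s.samples) / len(s.samples) if s.samples else 0.0
    return min(servers, key=avg)

servers = [Server("A", 0.010), Server("B", 0.002), Server("C", 0.030)]
for s in servers:
    for _ in range(20):                       # warm up the statistics
        s.call()

home = create_object(servers)                 # load balanced once, at creation
for _ in range(100):
    home.call()                               # direct calls thereafter
print(home.name)                              # B: the fastest server
```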
"The main reasons you perform load balancing in this manner are, first, to improve scalability, and second, to give additional availability when a particular computer is offline. Having this computer offline is acceptable because you can simply use another computer in the COM+ load-balancing cluster. Most COM+ components that you write are going to be load-balanceable. What you must watch for are any implicit computer affinities—in other words, dependence on a particular path, a particular server name, or state information that you're keeping on a particular computer. You need to make sure that, if the application gets routed to a different computer, it can still operate if state information is not available.
"At this point, we've covered the different Application Services and the new features of COM+ at a high level. For the remainder of my presentation, we'll examine the things you need to consider and the questions you need to ask when designing your applications.