The Distributed Systems patterns cluster focuses on two primary concepts: remote invocation and coarse-grained interfaces.
The Broker pattern describes how to locate a remote object and invoke one of its methods without introducing the complexities of communicating over a network into the application. This pattern establishes the basis for most distributed architectures, including .NET remoting.
Figure 5.1: Patterns in the Distributed Systems cluster
One of the guiding principles of the .NET Framework is to simplify complex programming tasks without taking control away from the programmer. In accordance with this principle, .NET remoting allows the developer to choose from a number of remoting models, as described in the following paragraphs.
The simplest remoting model creates a local copy of the remote object in the client process. Any subsequent method invocations on this object are truly local calls. This model avoids many of the complications inherent in distributed computing but has a number of shortcomings. First, computing is not really distributed because you are running a local copy of an object in your own process space. Second, any updates you make to the object’s state are lost because they occur only locally. Finally, an object is usually remote because it requires a remote resource or because the provider of the remote object wants to protect access to its internals. Copying the object instance to the local process not only defeats both of these goals but also adds the overhead of shipping a complete object over a remote channel. Because of these limitations, the only application of object copying that this chapter discusses is the Data Transfer Object pattern.
Invoking the methods directly on the remote object is a better model than working on a local copy. However, you can invoke a method on a remote object only if you have a reference to it. Obtaining a reference to the remote object requires the object to be instantiated first. The client asks the server for an instance of the object, and the server returns a reference to a remote instance. This works well if the remote object can be viewed as a service. For example, consider a service that verifies credit card numbers. The client object submits a credit card number and receives a positive or negative response, depending on the customer’s spending (and payment) habits. In this case, you are not really concerned with the instance of the remote object. You submit some data, receive a result, and move on. This is a good example of a stateless service, a service in which each request leaves the object in the same state that it was in before.
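The stateless-service idea can be sketched without any remoting machinery. The following is a minimal, language-neutral illustration in Python; the class name is invented, and a Luhn checksum stands in for the real account lookup that an actual verification service would perform:

```python
class CreditCardVerifier:
    """Stateless service: no instance fields, so every call leaves the
    object in the same state it was in before."""

    def verify(self, card_number: str) -> bool:
        # Stand-in check (Luhn checksum); a real service would consult
        # the customer's spending and payment history instead.
        digits = [int(d) for d in card_number if d.isdigit()]
        checksum = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:      # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            checksum += d
        return len(digits) > 0 and checksum % 10 == 0
```

Because the object holds no state, it does not matter which instance handles a given request; any instance (or a single shared one) produces the same answer.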
Not all remote object collaborations follow this model, though. Sometimes you want to call the remote object to retrieve some data that you can then access in subsequent remote calls. You must be sure that you call the same object instance during subsequent calls. Furthermore, when you are finished examining the data, you would like the object to be deallocated to save memory on the server. With server-activated objects, you do not have this level of control over object instances. Server-activated objects offer a choice of only two alternatives for lifetime instance management:
Create a new instance of the object for each call.
Use only a single instance of the remote object for all clients (effectively making the object a Singleton).
Neither of these options fits the example where you want to access the same remote instance for a few function calls and then let the garbage collector have it.
Client-activated objects give the client control over the lifetime of the remote objects. The client can instantiate a remote object almost as it would instantiate a local object, and the garbage collector removes the remote object after the client releases all references to it. This level of control comes at a price, though. To use client activation, you must make the assembly that contains the compiled remote object available to the client process. This contradicts the idea that a variety of clients should be able to access the remote objects without further setup requirements.
You can have the best of both worlds, though, by creating a server-activated object that is a factory object for server objects. This factory object creates instances of other objects. The factory itself is stateless; therefore, you can easily implement it as a server-activated singleton. All client requests then share the same instance of the factory. Because the factory object runs remotely, all objects it instantiates are remote objects, but the client can determine when and where to instantiate them.
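A minimal sketch of this factory arrangement, with all names invented for illustration (in the .NET case, the factory would be the server-activated singleton and the objects it creates would live on the server):

```python
class ShoppingCart:
    """A stateful object whose lifetime the client wants to control."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


class CartFactory:
    """Stateless factory: safe to share one instance among all clients."""
    def create_cart(self) -> ShoppingCart:
        return ShoppingCart()


# One factory instance plays the role of the server-activated singleton.
factory = CartFactory()

# Each client asks the shared factory for its own object; state is never
# shared through the factory itself.
cart_a = factory.create_cart()
cart_b = factory.create_cart()
cart_a.add("book")
```

The factory stays stateless, so sharing it is harmless, while each created object is a distinct instance whose lifetime the client effectively controls.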
Invoking a method across process and network boundaries is significantly slower than invoking a method on an object in the same operating system process.
Object-oriented design practices typically lead to objects with fine-grained interfaces. These objects may have many fields with associated getters and setters and many methods, each of which encapsulates a small, cohesive piece of functionality. Because of this fine-grained nature, many methods must be called to achieve a desired result. This fine-grained interface approach is ideal for stand-alone applications because it supports many desirable application characteristics such as maintainability, reusability, and testability.
Working with an object that exposes a fine-grained interface can greatly impede application performance, because a fine-grained interface requires many method calls across process and network boundaries. To improve performance, remote objects must expose a more coarse-grained interface. A coarse-grained interface is one that exposes a relatively small set of self-contained methods. Each method typically represents a high-level piece of functionality such as Place Order or Update Customer. These methods are considered self-contained because all the data that a method needs is passed in as a parameter to the method.
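The contrast can be sketched as follows; the class and method names are invented for illustration, and a plain dictionary stands in for whatever storage a real service would use:

```python
class FineGrainedCustomer:
    """Fine-grained interface: one call per field. Updating a customer
    costs three boundary crossings if this object is remote."""
    def set_name(self, name):
        self.name = name

    def set_street(self, street):
        self.street = street

    def set_city(self, city):
        self.city = city


class CoarseGrainedCustomerService:
    """Coarse-grained interface: one self-contained call carries all the
    data the operation needs."""
    def __init__(self):
        self.saved = {}

    def update_customer(self, customer_id, name, street, city):
        self.saved[customer_id] = (name, street, city)
```

Locally the difference is negligible; across a network, the fine-grained version multiplies the per-call overhead by the number of fields touched.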
The Data Transfer Object pattern applies the coarse-grained interface concept to the problem of passing data between components that are separated by process and network boundaries. It suggests replacing many parameters with one object that holds all the data that a remote method requires. The same technique also works quite well for data that the remote method returns.
There are several options for implementing a data transfer object (DTO). One technique is to define a separate class for each different type of DTO that the solution needs. These classes usually have a strongly typed public field (or property) for each data element they contain. To transfer these objects across networks or process boundaries, these classes are serialized. The serialized object is marshaled across the boundary and then reconstituted on the receiving side. Performance and type safety are the key benefits of this approach. It has the least marshaling overhead, and the strongly typed fields of the DTO ensure that type errors are caught at compile time rather than at run time. The downside to this approach is that a new class must be created for each DTO. If a solution requires a large number of DTOs, the effort associated with writing and maintaining these classes can be significant.
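A sketch of the strongly typed variant, using invented field names and JSON as a stand-in serialization format (the .NET equivalent would be a serializable class marshaled by the remoting infrastructure):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CustomerDTO:
    """One strongly typed field per data element the remote call needs."""
    customer_id: int
    name: str
    city: str

def marshal(dto: CustomerDTO) -> str:
    # Serialize the whole object so it crosses the boundary in one trip.
    return json.dumps(asdict(dto))

def unmarshal(payload: str) -> CustomerDTO:
    # Reconstitute a typed object on the receiving side.
    return CustomerDTO(**json.loads(payload))
```

Misusing a field (for example, treating `name` as an order number) shows up where the DTO is written, not deep inside the receiving code.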
A second technique for creating a DTO is to use a generic container class for holding the data. A common implementation of this approach is to use something like the ADO.NET DataSet as the generic container class. This approach requires two extra translations. The first translation on the sending side converts the application data into a form that is suitable for use by the DataSet. The second translation happens on the receiving side when the data is extracted from the DataSet for use in the client application. These extra translations can impede performance in some applications. Lack of type safety is another disadvantage of this approach. If a customer object is put into a DataSet on the sending side, attempting to extract an order object on the receiving side results in a run-time error. The main advantage to this approach is that no extra classes must be written, tested, or maintained.
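The generic-container trade-off can be sketched with a plain dictionary standing in for the DataSet; the helper names are invented, and the point is only that the type mismatch surfaces at run time, on the receiving side:

```python
def pack(kind: str, data: dict) -> dict:
    """Sending side: wrap application data in a generic container."""
    return {"kind": kind, "data": data}

def extract(container: dict, expected_kind: str) -> dict:
    """Receiving side: the mismatch is discovered only here, at run time."""
    if container["kind"] != expected_kind:
        raise TypeError(
            f"expected {expected_kind}, got {container['kind']}")
    return container["data"]
```

Putting a customer in and asking for an order out compiles (or loads) without complaint and fails only when `extract` actually runs.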
ADO.NET offers a third alternative: the typed DataSet. ADO.NET provides a mechanism that automatically generates a type-safe wrapper around a DataSet. This approach has the same potential performance issues as the plain DataSet approach, but it allows the application to benefit from type safety without requiring the developer to write, test, and maintain a separate class for each DTO.
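The typed-wrapper idea can be sketched as a thin facade over a generic row; the class and field names are invented, and in the ADO.NET case this facade would be generated rather than hand-written:

```python
class CustomerRecord:
    """Typed facade over a generic row dictionary: callers use named,
    typed accessors instead of string keys."""
    def __init__(self, row: dict):
        self._row = row

    @property
    def name(self) -> str:
        return self._row["name"]

    @property
    def city(self) -> str:
        return self._row["city"]
```

The underlying container stays generic, so no per-DTO marshaling code is needed, while callers get a typed surface to program against.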