Benefits of Distributed Application Development
Because you've chosen to read a book on .NET Remoting, you probably have some specific scenarios in mind for distributing an application. For completeness, we should mention a few of the general reasons for choosing to distribute applications.
Fault Tolerance
One benefit of distributed applications (which is arguably also one of the challenges of using them) is the notion of fault tolerance. Although the concept is simple, a wide body of research is centered on fault-tolerant algorithms and architectures. Fault tolerance means that a system should remain resilient when failures occur within it. One cornerstone of building a fault-tolerant system is redundancy. For example, an automobile has two headlights. If one headlight burns out, it's likely that the second headlight will continue to operate for some time, allowing the driver to reach his or her destination. We can hope that the driver replaces the malfunctioning headlight before the remaining one burns out!
By its very nature, distributed application development affords the opportunity to build fault-tolerant software systems by applying the concept of redundancy to distributed objects. Distributing duplicate code functionality (or, in the case of object-oriented application development, copies of objects) to various nodes increases the probability that a fault occurring on one node won't affect the redundant objects running on the other nodes. When a failure does occur, one of the redundant objects can take over the work of the failed node, allowing the system as a whole to continue operating.
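To make the idea concrete, here is a minimal sketch (not taken from any particular framework) of a client that fails over among redundant copies of a service. The QuoteService interface and the notion of a replica list are hypothetical stand-ins for proxies to remote objects; the point is simply that when one node fails, a redundant node can answer instead.

```java
import java.util.List;

public class FailoverClient {

    // Stand-in for a remote service interface; in a real system this would be
    // a proxy to an object hosted on another node.
    interface QuoteService {
        String getQuote(String symbol) throws Exception;
    }

    private final List<QuoteService> replicas;

    FailoverClient(List<QuoteService> replicas) {
        this.replicas = replicas;
    }

    // Try each redundant replica in turn; the first one that answers wins.
    String getQuote(String symbol) throws Exception {
        Exception lastFailure = null;
        for (QuoteService replica : replicas) {
            try {
                return replica.getQuote(symbol);
            } catch (Exception e) {
                lastFailure = e;   // remember the failure, fall through to the next replica
            }
        }
        throw new Exception("All replicas failed", lastFailure);
    }
}
```

Real fault-tolerant systems add considerably more machinery (health checks, state replication, failback), but the redundancy principle is the same: more than one node is able to do the work.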
Scalability
Scalability is the ability of a system to handle increased load with only an incremental change in performance. Just as distributed applications enable fault-tolerant systems, they allow for scalability by distributing various functional areas of the application to separate nodes. This reduces the amount of processing performed on a single node and, depending on the design of the application, can allow more work to be done in parallel.
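The following sketch, with hypothetical names, illustrates the "more work in parallel" point: a client fans chunks of work out to several worker nodes at once and gathers the partial results. Each WorkerNode stands in for a proxy to a remote object running on a separate machine.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ParallelDispatcher {

    // Stand-in for a remote worker hosted on another node.
    interface WorkerNode {
        int process(int chunk) throws Exception;
    }

    // Send one chunk of work to each node and collect the partial results.
    static List<Integer> dispatch(List<WorkerNode> nodes, List<Integer> chunks)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nodes.size());
        try {
            List<Future<Integer>> pending = new ArrayList<>();
            for (int i = 0; i < nodes.size(); i++) {
                WorkerNode node = nodes.get(i);
                int chunk = chunks.get(i);
                pending.add(pool.submit(() -> node.process(chunk)));
            }
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : pending) {
                results.add(f.get());   // block until each node replies
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

Adding capacity then becomes a matter of adding nodes (and entries to the list) rather than buying a single, ever-larger machine.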
The most powerful cutting-edge hardware is always disproportionately expensive compared to slightly less powerful machines. As mentioned earlier in the three-tier design discussion, partitioning a monolithic application into separate modules running on different machines usually gives far better performance relative to hardware costs. This way, a few expensive and powerful server machines can service many cheaper, less powerful client machines. The expensive server CPUs can stay busy servicing multiple simultaneous client requests while the cheaper client CPUs idle, waiting for user input.
Administration
Few IT jobs are as challenging as managing the hardware and software configurations of a large network of PCs. Maintaining duplicate code across many geographically separated machines is labor-intensive and failure-prone. It's much easier to migrate most of the frequently changing code to a centralized repository and provide remote access to it.
With this model, changes to business rules can be made on the server with little or no interruption of client service. The prime example is the thin-client revolution. Thin-client architectures (usually browser-based clients) keep most, if not all, of the business rules centrally located on the server. With browser-based systems, deployment costs are virtually negligible because the Web servers house even the presentation-layer code, which clients download on every access.
The principle of reduced administration for server-based business rules holds true for traditional thick-client architectures as well. If thick clients are primarily responsible for presenting data and validating input, the application can be partitioned so that the server houses the logic most likely to change, as the sketch below illustrates.
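Here is a brief, hypothetical example of that partitioning: the client performs cheap input validation locally, while the discount policy (the rule most likely to change) lives behind a remote interface on the server. The PricingService interface and its method are illustrative names, not part of any real API.

```java
public class OrderClient {

    // Stand-in for a remotely hosted business-rule object.
    interface PricingService {
        double discountedTotal(double amount, String customerTier) throws Exception;
    }

    private final PricingService pricing;

    OrderClient(PricingService pricing) {
        this.pricing = pricing;
    }

    double priceOrder(double amount, String customerTier) throws Exception {
        // Cheap validation stays on the client ...
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        // ... while the frequently changing discount rule is evaluated on the server.
        return pricing.discountedTotal(amount, customerTier);
    }
}
```

When the discount policy changes, only the server-side implementation behind PricingService needs to be updated; the installed clients keep working unchanged.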