Distributed Architecture

Distributed architecture rose to prominence as a way to respond to the scalability challenges inherent in the client/server model. The basic concept behind distributed applications is that at least some of the functionality runs remotely, in the process of another computer. The client-side application communicates with this remote component, sending instructions or retrieving information. Figure 1-3 shows a common design pattern, which uses a remote data access component. Notice that it incorporates three computers: the client and two other computers called servers, although the distinction between client and server here is really only skin-deep. In some distributed systems, these other computers might be ordinary workstations, in which case they're usually called peers.

Figure 1-3. One example of a distributed application

graphics/f01dp03.jpg

Depending on the type of application, you can add multiple layers in this fashion. For example, you can add separate components for various tasks and spread them over different machines. Alternatively, you can just create several layers of components, which enables you to separate them to dedicated computers later. Similarly, you can decide how much of the work you want to offload to a given component. If you need to support a resource-starved client such as a mobile device, you can move all the logic to one or more servers, as illustrated in Figure 1-4. Generally, components are clearly divided into logical tiers that separate user services from business logic, which is in turn separated from data processing services (although this is not always the case in more freely structured peer-to-peer applications). Determining the best design for your needs is a difficult task and one we return to in the second part of this book.

Figure 1-4. One way to implement three-tier design

graphics/f01dp04.jpg

Don't make the mistake of thinking you need to subdivide your program into dozens of layers to use multiple computers. Microsoft provides a clustering service with its Application Center 2000 product. This service allows a component to be hosted on a cluster of computers, which appear like one virtual computer to the client. When the client uses a component, the request is automatically routed to an available computer in the cluster. This design requires some additional configuration (mainly the installation and configuration of Application Center 2000), but it enables you to provide ironclad scalability for extremely high-volume applications. You can also use partitioned views and federated database servers to split database tables over a group of servers, which can then work together (simulating a single larger machine with the combined resources of all the constituent servers).

Advantages of Distributed Architecture

Allowing components to run on other computers provides some new capabilities, such as support for thin clients, cross-platform code integration, and distributed transactions. However, the key benefit is scalability. New computers can be incorporated into the system when additional computing power is needed. Stability is generally better (particularly if you're using clustering) because one computer can fail without derailing the entire application. The most obvious benefit appears when you need to access limited resources, such as database connections. With the help of some prebuilt Microsoft plumbing, you can enable features such as just-in-time activation and object pooling for your components. These features are part of the COM+ component services built into the Windows operating system. With pooling, an expensive resource such as a database connection isn't destroyed when the client finishes using it. Instead, it's maintained and provided to the next client, saving you the work of having to rebuild it from scratch.

Object Pools and Taxicabs

By far the best analogy I've heard to describe object pooling comes from Roger Sessions in his classic book on COM and DCOM. You can think of limited components as taxicabs with the sole goal of transporting you to the airport. If you dedicate a single taxicab to every passenger who needs to get to the airport, you'll quickly hit a traffic jam as the parking lot fills and streets become crowded with traffic. This situation is the equivalent of a poorly written client/server program that maintains a database connection over the life of the application. Other users need a taxi, but the application refuses to release them.

Another approach might be to create single-use taxis. You use the taxi to get to the airport and immediately destroy it when you arrive there. The parking lot remains empty, and new cars can continue to arrive. However, cars can be built only so quickly, and eventually the demand outstrips the ability to create and send new cars. This scenario is the equivalent of the well-written client/server application that releases database connections properly as soon as it doesn't need them.

The best approach, and the one closest to real life, is to create a pool of taxis. When a taxi finishes driving a passenger, it travels back to serve a new client. A dispatcher takes care of tracking new requests and matching them to available taxis. This dispatcher plays the same role as COM+ component services.
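The taxi dispatcher maps directly onto the object pool pattern. As a rough sketch (in Python, purely to illustrate the pattern; COM+ pooling itself is configured declaratively, and all the names here are invented), a pool hands out an existing resource instead of building a new one:

```python
class Connection:
    """Stand-in for an expensive resource, such as a database connection."""
    def __init__(self, conn_id):
        self.conn_id = conn_id  # imagine a costly connection-setup step here

class Pool:
    """The dispatcher: hands out pooled connections instead of building new ones."""
    def __init__(self, size):
        self._available = [Connection(i) for i in range(size)]

    def acquire(self):
        # Hand an existing connection to the next client (the taxi dispatcher).
        if not self._available:
            raise RuntimeError("no pooled connection is free")
        return self._available.pop()

    def release(self, conn):
        # The connection is kept for the next client, not destroyed.
        self._available.append(conn)

pool = Pool(size=2)
c = pool.acquire()
pool.release(c)
d = pool.acquire()
assert d is c  # the same object is recycled rather than rebuilt from scratch
```

Because the pool recycles the connection, the second `acquire` returns the very object the first client released, skipping the expensive setup step entirely.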

Traditionally, distributed programming also introduces programmers to the benefits of writing and reusing components in different languages, using tiers to separate data services from business logic, and creating components that can serve a variety of different platforms (such as Web applications and desktop applications). As the client/server example shows, however, all these features are available in client/server programs. They become more critical in distributed applications, but good application design is always worthwhile.

DCOM and the History of Distributed Applications

In most cases, the programming model for a distributed application is similar to the programming model for a client/server application because the remote communication process is abstracted away from the programmer. Microsoft has gone to extraordinary lengths to ensure that a programmer can call a method in a remote component just as easily as a method in a local component. The low-level network communication is completely transparent. The disadvantage is that the programmer can easily forget about the low-level reality and code components in a way that isn't well suited to distributed use.

The most important decision you make when creating a distributed application is determining how these components talk together. Traditionally, remote components are exposed through COM, and the network communication is handled by the DCOM standard, a protocol that carries its own baggage. DCOM has at least two fundamental problems: its distributed garbage collection system and its binary communication protocol.

Distributed Garbage Collection

DCOM uses keep-alive pinging, which is also known as distributed garbage collection. Under this system, the client automatically sends a periodic message to the distributed component. If this message isn't received after a certain interval of time, the object is automatically deallocated.

This system is required because DCOM uses ordinary COM components, which use reference counting to determine their lifetime. Without distributed garbage collection, a client could disconnect, fail to destroy the object, and leave it orphaned in server memory. After a while, these abandoned objects can drain precious memory from the server, gradually bringing it to a standstill.

In short, DCOM needs keep-alive pinging because it's based on ordinary COM objects, and COM doesn't provide any other way to manage object lifetime. However, distributed garbage collection increases network traffic needlessly and makes it difficult to scale to large numbers of clients, particularly over slower wide area networks (WANs).
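The pinging scheme itself is simple to model. The following sketch (a simplified illustration, not the actual DCOM wire protocol; the timeout value and class names are invented) shows why every client must keep sending messages just to keep its objects alive:

```python
import time

class RemoteObject:
    """Server-side object whose lifetime depends on client keep-alive pings."""
    def __init__(self, timeout):
        self.timeout = timeout            # seconds of silence before deallocation
        self.last_ping = time.monotonic()

    def ping(self):
        # Each periodic keep-alive message from the client resets the countdown.
        self.last_ping = time.monotonic()

    def is_expired(self, now=None):
        now = time.monotonic() if now is None else now
        return (now - self.last_ping) > self.timeout

def collect_garbage(objects, now=None):
    """Deallocate any object whose client has stopped pinging."""
    return [obj for obj in objects if not obj.is_expired(now)]

obj = RemoteObject(timeout=5.0)
assert not obj.is_expired()  # the client is alive and pinging
# Simulate a client that disconnected without releasing the object:
abandoned_at = obj.last_ping + 10.0
assert obj.is_expired(now=abandoned_at)
assert collect_garbage([obj], now=abandoned_at) == []  # the orphan is reclaimed
```

The cost is visible in the model: keeping the object alive requires a steady stream of `ping` calls from every client, which is exactly the traffic that hampers scaling over a WAN.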

Complexity

DCOM is based on an intimidating multilayered protocol that incorporates services such as transport-level security. Microsoft programming languages and tools do a remarkable job of shielding developers (and their applications) from these complexities, but they still exist. Distributed DCOM components often require careful configuration on multiple computers, and they usually can't communicate across firewalls. For these reasons, DCOM is poorly suited to widely distributed networks or heterogeneous networks that incorporate computers from other platforms.

.NET Distributed Technologies

.NET addresses the problems that distributed application programmers have been grappling with for several years. However, .NET also recognizes that, depending on the application, there are several possible best approaches. For that reason, .NET includes a variety of distinct ways to use remote components. Developers adopting the .NET Framework for the first time often face a good deal of confusion about where each technology fits into the overall picture of application design.

.NET Remoting

One of the most important distributed technologies in .NET is called .NET Remoting. .NET Remoting is really the .NET replacement for DCOM. It allows client applications to instantiate components on remote computers and use them like local components. However, .NET Remoting introduces a slew of much-needed refinements, including the capability to configure a component in code or through simple XML files, communicate using compact binary messages or platform-independent SOAP, and control object lifetime using flexible lease-based policies. .NET Remoting is also completely customizable and extensible; if you choose, you can write your own pluggable channels and sinks that allow .NET Remoting to communicate according to entirely different standards.

.NET Remoting is the ideal choice for intranet scenarios. It typically provides the greatest performance and flexibility, especially if you want remote objects to maintain state or you need to create a peer-to-peer application. With .NET Remoting, you face a number of design decisions, including the protocol you want to use to communicate and the way your objects are created and destroyed. Figure 1-5 and Figure 1-6 show two sides of .NET Remoting: one in which components are created on a per-client basis and the other where a single component handles all client requests, which would be an impossible feat for a client/server application to duplicate.

Figure 1-5. .NET Remoting with per-client objects

graphics/f01dp05.jpg

Figure 1-6. .NET Remoting with a singleton object

graphics/f01dp06.jpg
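The two activation models shown in Figures 1-5 and 1-6 amount to a difference in how the host maps clients to object instances. A minimal sketch (in Python, only to illustrate the dispatch logic; the real choice is made through .NET Remoting configuration, and the class names here are invented):

```python
class Counter:
    """A toy remote object that tracks how many calls it has received."""
    def __init__(self):
        self.calls = 0

    def increment(self):
        self.calls += 1
        return self.calls

class PerClientHost:
    """Each client gets its own object instance (Figure 1-5)."""
    def __init__(self):
        self._objects = {}

    def get(self, client_id):
        if client_id not in self._objects:
            self._objects[client_id] = Counter()
        return self._objects[client_id]

class SingletonHost:
    """One shared instance handles every client request (Figure 1-6)."""
    def __init__(self):
        self._object = Counter()

    def get(self, client_id):
        return self._object

per_client = PerClientHost()
assert per_client.get("A").increment() == 1
assert per_client.get("B").increment() == 1  # B gets a fresh, private object

singleton = SingletonHost()
assert singleton.get("A").increment() == 1
assert singleton.get("B").increment() == 2  # B sees the state A left behind
```

With per-client activation, each caller sees only its own state; with a singleton, state set by one client is visible to every other client, which is the behavior a client/server application can't duplicate.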

.NET Remoting can also be used across the Internet and even with third-party clients. It's at this point, however, that .NET Remoting starts to become blurred with another .NET technology: XML Web services. The following sections describe some of the differences.

XML Web Services

XML Web services are among the most wildly hyped features of Microsoft .NET. They allow a type of object communication that differs significantly from .NET Remoting (although it shares some of the low-level infrastructure):

  • XML Web services are designed with the Internet and integration across multiple platforms, languages, and operating systems in mind.

    For that reason, it's extremely easy to call an XML Web service from a non-.NET platform.

  • XML Web services are designed with interapplication and interorganization use in mind.

    In other words, you'll often expose a component through .NET Remoting to support your own applications. An XML Web service is more likely to provide basic functionality for other businesses to "plug in" to their own software. Toward that end, XML Web services support basic discovery and documentation. You can also use the Universal Description, Discovery, and Integration (UDDI) registry to publish information about XML Web services to the Internet and make them available to other interested businesses.

  • XML Web services are designed with simplicity in mind.

    XML Web services are easy to write and don't require as much developer investment in planning and configuring how they work. This simplicity also means that XML Web services are more limited; for example, they're best suited to stateless solutions and don't support client notification or singleton use.

  • XML Web services always use SOAP messages to exchange information.

    This means they can never communicate as efficiently as an object using .NET Remoting and a binary channel. It also means they can be consumed by clients on other platforms.

Figure 1-7 shows a remote XML Web service in use. Notice that the outline indicates that XML Web service objects are single-use only. They're created with each client call and destroyed at the end of a call. .NET Remoting objects can also work in this fashion; unlike XML Web services, however, they don't have to.

Figure 1-7. XML Web service calls

graphics/f01dp07.jpg

ASP.NET Applications

Finally, there are also ASP.NET Web applications, which are inherently distributed. ASP.NET applications rely on the ASP.NET worker process, which creates the appropriate Web page object with each client request and destroys it after it has finished processing. In that way, ASP.NET pages follow the exact same pattern as XML Web services. The only difference is that XML Web services are designed for middle-tier use and require another application to consume them. You can design this application, or a third-party developer can create it. With ASP.NET applications, however, the client is a simple Web browser that receives an ordinary HTML page.

The basic process for an ASP.NET application is as follows:

  1. Microsoft Internet Information Services (IIS) receives a Web request for an ASP.NET (.aspx) file and passes the request on to the ASP.NET worker process. The worker process compiles the file (if needed), caches a copy, and executes it.

  2. The compiled file acts like a miniature program. It starts and runs all appropriate event handlers. At this stage, everything works together as a set of in-memory .NET objects.

  3. When the code is finished, ASP.NET asks every control in the Web page to render itself.

  4. The final HTML output is sent to the client.
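These four steps describe a strict per-request lifecycle: a page object lives only long enough to produce its HTML. The flow can be modeled in a few lines (an illustrative sketch, not the ASP.NET API; the request format and names are invented):

```python
class Page:
    """A miniature model of an ASP.NET page: created, executed, rendered, discarded."""
    def __init__(self, request):
        self.request = request
        self.output = []

    def run_event_handlers(self):
        # Step 2: event handlers execute as in-memory objects.
        self.output.append("<h1>Hello, %s</h1>" % self.request["user"])

    def render(self):
        # Step 3: every control renders itself to HTML.
        return "<html><body>%s</body></html>" % "".join(self.output)

def handle_request(request):
    # Steps 1-4: a fresh page object per request; nothing survives the call.
    page = Page(request)
    page.run_event_handlers()
    return page.render()  # Step 4: only the final HTML reaches the browser

html = handle_request({"user": "Alice"})
assert html == "<html><body><h1>Hello, Alice</h1></body></html>"
```

Nothing about the `Page` object survives `handle_request`, which is exactly why state management has to be provided as a separate platform service.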

ASP.NET programming does the best job of hiding the underlying reality of distributed programming from the developer. Compilation takes place automatically, and the platform provides built-in services for handling state and improving performance through caching. In essence, an ASP.NET application is just a set of compiled ASP.NET pages that are continually started, executed, and completed on the Web server. The client has the illusion of interacting with a continuously running application but in reality sees only the final HTML output returned by the code after it has completed.

This book doesn't cover ASP.NET coding directly because many excellent books are dedicated to ASP.NET (including my own ASP.NET: The Complete Reference). Instead, this book concentrates on how to create components for a distributed system whether it's a Web or desktop application. If you're creating a high-volume ASP.NET Web site that requires a transactional component to commit an order to a sales database, for example, you'll need to consider issues such as connection pooling and COM+ services, which this book examines. However, this book does not examine how to write the upper layer of ASP.NET user interface code.

Combining Distributed Technologies

Of course, you're also free to combine .NET Remoting, XML Web services, and ASP.NET pages according to your needs. For example, you might create an XML Web service that communicates through .NET Remoting with another component or create an application that uses XML Web services for some of its features and remote components for others. You'll see several examples of these blended solutions in the case study portion of this book (Part III).

Note

Unfortunately, the DCOM discussion in this chapter isn't just a history lesson. Depending on your .NET needs, you might need to rely on DCOM again. The most common reason is that you're using COM+ services, which require a COM/DCOM wrapper around your component. This situation is far from ideal and will likely change in the future as Microsoft creates a .NET version of COM+. Until then, you need to pay careful attention to these issues. Chapter 9 discusses when you can avoid these problems and how you can manage them.




Microsoft® .NET Distributed Applications: Integrating XML Web Services and .NET Remoting (Pro-Developer)
ISBN: 0735619336
Year: 2005
Pages: 174
