Physical Architecture

One topic we haven't examined in much detail is designing the physical infrastructure for a distributed application. The focus of this book is on distributed application programming, not networking, firewalls, or Web servers, although the best programmers will know more than a little about all of these topics. If you're interested in a closer look at the hardware side of things, you might want to refer to a book such as Deploying and Managing Microsoft .NET Web Farms by Barry Bloom (Sams, 2001) or one of the titles from Microsoft Press. Here, we'll just look at the absolute basics.

Scaling

When improving a system's hardware, system architects generally distinguish between the following two types of scaling:

  • Scaling up (vertical scaling)

    This is the process of upgrading the server hardware (for example, adding memory or upgrading the CPU).

  • Scaling out (horizontal scaling)

    This is the process of adding additional computers to the system, usually as load-balanced servers that share part of the workload.

Scaling up can improve performance and scalability. Scaling out can improve only scalability, because each individual client still makes use of only one computer. Nonetheless, scaling out is generally the better approach and is the Holy Grail of distributed applications, because scaling up is expensive and constrained by absolute limits. (For example, you can't yet purchase a 4000 GHz CPU.) Scaling up also leaves a single point of failure: if the server fails, there might be no other server to offer even reduced capacity. Scaling out, on the other hand, is comparatively cheap. You just need to add another inexpensive Windows server.

To support horizontal scaling, you need to implement some sort of load-balancing solution. You also need to use .NET Remoting or XML Web services to move the logic of your application out of the client and to the server, where it can be load-balanced.
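
For instance, logic that might otherwise run inside the client can be exposed through an XML Web service so that any server in the farm can execute it. The following is only a rough sketch; the OrderService class and its method are hypothetical names, not examples from this chapter.

// A minimal sketch (hypothetical names) of business logic hosted on the
// server as an XML Web service, where it can be load-balanced.
using System.Web.Services;

public class OrderService : WebService
{
    [WebMethod]
    public decimal GetOrderTotal(int orderId)
    {
        // The real work (database access, calculations) happens here on the
        // server; the client holds only a thin proxy. Placeholder result:
        return 0m;
    }
}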

Load Balancing

Enterprise systems use load balancing to distribute the burden of an application over multiple servers. You can implement load balancing in countless ways, and some are more efficient than others. For instance, you can implement a crude form of load balancing by deploying two versions of a client application that differ only by a single configuration file setting. One group of clients is directed to connect to server A, and the other uses server B. This is a type of static load balancing, and it tends to perform poorly because there is no guarantee that the workload will be evenly shared. It's possible that a large number of clients in the first group will connect at the same time, taxing server A while server B remains underused. Even worse, if one server fails, its clients won't be redirected to the computer that's still available.
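
To illustrate the idea, here's a rough sketch of how such a client might read its hard-coded server choice from its configuration file. The ServerUrl setting and the server addresses are hypothetical.

// A minimal sketch of static load balancing (hypothetical key and URLs):
// clients in group A ship with a .config file that sets ServerUrl to server A,
// and clients in group B ship with one that points to server B.
using System;
using System.Configuration;

class ServiceClient
{
    static void Main()
    {
        // Read the server address chosen at deployment time.
        string serverUrl = ConfigurationSettings.AppSettings["ServerUrl"];
        Console.WriteLine("Connecting to " + serverUrl);

        // ... create the Web service or Remoting proxy and point it at serverUrl ...
    }
}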

Using a server-side list, you can implement an improved form of load balancing called round-robin load balancing. In round-robin load balancing, the client retrieves the name of the server to use when it initializes and then uses that server for the duration of the current session. The server is chosen randomly from a list of available servers. This approach gives you the freedom to modify the server configuration dynamically, adding or removing computers. It doesn't guarantee that the load will always be evenly distributed, however, because some clients might perform much more intensive tasks or hold a connection much longer than others. In addition, as with static load balancing, this system has a single point of failure: the computer that hosts the server lookup service.
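
A rough sketch of the lookup service follows; the ServerLookup class, its server list, and the GetServer method are hypothetical names used only to illustrate the idea.

// A minimal sketch (hypothetical names) of a server-side lookup service that
// hands out one address from its list; the client calls it once at startup
// and then uses the returned server for the rest of the session.
using System;
using System.Web.Services;

public class ServerLookup : WebService
{
    // In a real system this list would come from a configuration file or
    // database so that servers can be added or removed dynamically.
    private static string[] servers =
    {
        "http://serverA/OrderService.asmx",
        "http://serverB/OrderService.asmx"
    };

    private static Random random = new Random();

    [WebMethod]
    public string GetServer()
    {
        lock (random)
        {
            // Choose a server at random from the current list.
            return servers[random.Next(servers.Length)];
        }
    }
}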

Other forms of load balancing require dedicated hardware or software. Hardware load balancing works by analyzing network traffic; it's effective but expensive. Software load balancing is most commonly implemented with Microsoft Application Center 2000. This product provides a robust clustering service that allows a group of computers to be exposed under a single IP address. The Application Center software dynamically routes requests to the server with the lightest load. The only concern you should have when programming a server component for an Application Center environment is ensuring that it is stateless, because subsequent requests from the same client are likely to be routed to different computers. For more information (or to download a trial version), you can visit the Application Center site at http://www.microsoft.com/applicationcenter.
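
As a rough illustration of what stateless means in this context, the following sketch (hypothetical names) shows a method that depends only on its parameters, never on information cached from a previous call, so it doesn't matter which server in the cluster receives each request.

// A minimal sketch of a stateless component (hypothetical names): no Session
// state or member variables carry information between calls, so consecutive
// requests from the same client can safely land on different servers.
using System.Web.Services;

public class CatalogService : WebService
{
    [WebMethod]
    public decimal GetDiscountedPrice(int productId, decimal discountPercent)
    {
        // The result depends only on the parameters (and shared database data).
        decimal basePrice = LookUpPrice(productId);
        return basePrice * (1 - discountPercent / 100);
    }

    private decimal LookUpPrice(int productId)
    {
        // Placeholder for a database lookup in this sketch.
        return 100m;
    }
}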

Finally, you can also use features in your RDBMS to scale out the database server. The current version of SQL Server (SQL Server 2000) can't be scaled out in a manner analogous to Application Center load balancing. Instead, you must use federated database servers to partition the database over multiple servers. After they have been configured, these servers work together to satisfy a query and retrieve information. Ideally, the data will be partitioned in such a way that related information is kept together. You might simply divide tables (for example, putting a product catalog on one server and an employee list on another), or you might partition large database tables according to criteria such as geographic region or date. Microsoft has achieved extremely good performance results using federated database servers (see, for example, http://www.microsoft.com/sql/evaluation/compare/benchmarks.asp), but these systems are still exposed to a single point of failure because none of the data is shared between servers: if one member server fails, its portion of the data becomes unavailable. Microsoft also promises that subsequent SQL Server versions will add support for shared clustering as well as partitioning.
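
SQL Server implements federation with distributed partitioned views rather than client code, but the routing idea can be sketched at the application level. In the hypothetical example below, a partition key (the customer's region) determines which member server holds the corresponding slice of data; the server names and connection strings are invented for illustration.

// A minimal sketch of partitioning by region (hypothetical servers and
// connection strings). This only illustrates the routing idea; SQL Server's
// federated servers use distributed partitioned views to do this work.
using System.Collections;

class PartitionRouter
{
    // Map each partition key (geographic region) to the server that stores
    // that portion of the data.
    private static Hashtable regionServers = new Hashtable();

    static PartitionRouter()
    {
        regionServers["NorthAmerica"] =
            "server=SQL1;database=Sales;Trusted_Connection=yes";
        regionServers["Europe"] =
            "server=SQL2;database=Sales;Trusted_Connection=yes";
    }

    public static string GetConnectionString(string region)
    {
        return (string)regionServers[region];
    }
}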


