11.4 Efficient Distributed Computing Architecture


High-performance architecture for enterprise systems consists of one or more frontend load balancers (the Front Controller design pattern) distributing requests to a cluster of middle-tier web application servers running J2EE, with a database behind this middle tier. Components designed to operate asynchronously with message queues holding requests for the components optimize the throughput of the system (the Message Façade design pattern). Server management of client socket connections should use the java.nio package classes, and in particular sockets should be multiplexed with the Selector class. Load balancing is efficiently supported using multiplexed I/O (the Reactor design pattern). (Typically, the application server manages sockets transparently in the application, but if your application manages sockets directly, do it with NIO multiplexing.)
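As a minimal sketch of the Selector-based multiplexing just described, the following example runs a single select loop that accepts a loopback connection and echoes its bytes back. The class and method names (NioEchoSketch, echoOnce) are illustrative only; a real server would keep the loop running and handle partial reads and writes.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

class NioEchoSketch {

    // Starts a non-blocking echo server, sends one message from a client,
    // and returns what the server echoed back.
    static String echoOnce(String message) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
        client.write(ByteBuffer.wrap(message.getBytes(StandardCharsets.UTF_8)));

        boolean echoed = false;
        while (!echoed) {
            selector.select(); // blocks until some channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    // One thread multiplexes accepts and reads via the Selector.
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    ch.read(buf);
                    buf.flip();
                    ch.write(buf); // echo the bytes back
                    echoed = true;
                }
            }
        }

        ByteBuffer reply = ByteBuffer.allocate(256);
        client.read(reply); // blocking read on the client side
        reply.flip();
        String result = StandardCharsets.UTF_8.decode(reply).toString();
        client.close();
        server.close();
        selector.close();
        return result;
    }
}
```

The point of the design is that one thread services many connections; readiness, not one-thread-per-socket, drives the work.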

In addition, these main architectural components should be supported with caches, resource pools, optimized database access, and a performance-monitoring subsystem, and should have no single point of failure. Caches and resource pools should be made tunable by altering configuration parameters; i.e., they shouldn't require code tuning. Resource pools are recommended only for resources that are expensive to replace or need limiting, such as threads and database connections. Database access optimization is made simpler if there is a separate database access layer (the Data Access Object design pattern).
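A bounded pool of the kind recommended here can be sketched in a few lines with a blocking queue. This is illustrative only (the ResourcePool name and its methods are assumptions, not a real API); in practice the container's connection pooling would be used, tuned through configuration rather than code.

```java
import java.util.Collection;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal bounded resource pool: pre-created resources are handed out and
// must be returned. The pool both limits concurrency and avoids re-creating
// expensive resources such as database connections.
class ResourcePool<T> {
    private final BlockingQueue<T> idle;

    ResourcePool(Collection<T> resources) {
        this.idle = new LinkedBlockingQueue<>(resources);
    }

    // Blocks up to timeoutMillis for a free resource; null signals exhaustion.
    T acquire(long timeoutMillis) throws InterruptedException {
        return idle.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    // Callers must return every acquired resource, typically in a finally block.
    void release(T resource) {
        idle.offer(resource);
    }

    int idleCount() {
        return idle.size();
    }
}
```

The pool size would be a tunable configuration parameter, consistent with the advice above that pools should not require code tuning.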

11.4.1 Know Distributed Computing Restrictions

All forms of distributed computing have two severe restrictions:

Bandwidth limitations

The amount of data that can be carried per second across the communications channel is limited. If your application transfers too large a volume of data, it will be slow. You can either reduce the volume transferred by compression or redesign, or accept that the application will run slowly. Some enterprise applications provide the additional option of upgrading the bandwidth of the communications channel, which is often the cheapest tuning option.


Any single communication is limited in speed by two factors:

  • How fast the message can be transferred along the hardware communications channel

  • How fast the message can be converted to and from the electrical signals that are carried along the hardware communications channel

The first factor, transferring the actual signals, is limited ultimately by the speed of light (about 3.33 milliseconds for every 1,000 kilometers), but routers and other factors can delay the signal further. The second factor tends to dominate the transfer time, as it includes software data conversions, data copying across buffers, conversion of the software message to and from electrical signals, and, potentially, retransmissions to handle packets lost from congestion or error.

The performance of most enterprise applications is constrained by latency. Data volumes are a concern, but mainly because of the cost of serializing large amounts of data rather than limited bandwidth. The primary mechanism for minimizing latency is to reduce the number of messages that need to be sent across the network, normally by redesigning interfaces to be coarser. That is, you should try to make each of your remote calls do a lot of work rather than requiring many remote calls to do the same work.
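The effect of coarsening an interface can be simulated by counting round trips. In this sketch each method call stands for one network round trip; the names (CustomerService, CustomerSummary) are illustrative, not from any real API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// A value object bundling the results of several fine-grained calls.
class CustomerSummary {
    final String name;
    final String address;
    final double balance;

    CustomerSummary(String name, String address, double balance) {
        this.name = name;
        this.address = address;
        this.balance = balance;
    }
}

// Simulated remote service: every method call counts as one round trip.
class CustomerService {
    final AtomicInteger roundTrips = new AtomicInteger();

    // Fine-grained accessors: fetching one customer costs three round trips.
    String getName(long id)    { roundTrips.incrementAndGet(); return "Alice"; }
    String getAddress(long id) { roundTrips.incrementAndGet(); return "1 Main St"; }
    double getBalance(long id) { roundTrips.incrementAndGet(); return 42.0; }

    // Coarse-grained accessor: the same data in a single round trip.
    CustomerSummary getSummary(long id) {
        roundTrips.incrementAndGet();
        return new CustomerSummary("Alice", "1 Main St", 42.0);
    }
}
```

With per-message latency dominating, the coarse call is roughly three times cheaper here regardless of bandwidth.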

Some common techniques help to reduce the number of network transfers:

  • Combine multiple remotely called methods into single wrapper methods (Session Façade and Composite Entity design patterns).

  • Cache objects (Service Locator design pattern and its variations), in particular JNDI lookups (EJBHome Factory design pattern and variations).

  • Batch messages and data.

  • Move execution to the location of the data.

The Value Object design pattern also helps to reduce the number of message transfers by combining multiple results into one object that requires only one transfer. You can reduce serialization overheads by using transient fields and implementing the java.io.Externalizable interface for classes causing serialization bottlenecks.
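A minimal sketch of the Externalizable technique follows. The Quote class and its fields are hypothetical; the point is that writeExternal/readExternal define an exact wire format, and the transient cache field never crosses the network.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

class Quote implements Externalizable {
    private String symbol;
    private double price;
    private transient String cachedDisplay; // rebuilt on demand, never serialized

    public Quote() { }                      // public no-arg ctor required by Externalizable
    public Quote(String symbol, double price) {
        this.symbol = symbol;
        this.price = price;
    }

    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(symbol);               // explicit format, no reflective field walk
        out.writeDouble(price);
    }

    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        symbol = in.readUTF();
        price = in.readDouble();
    }

    public String getSymbol() { return symbol; }
    public double getPrice()  { return price; }

    // Helper for the example: serialize and deserialize one instance in memory.
    static Quote roundTrip(Quote q) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(q);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (Quote) in.readObject();
        }
    }
}
```

Because the class controls its own format, it avoids the reflective metadata that default serialization writes for every field.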

There is also one common case in which data volume is an issue: when large amounts of data can be returned by a query. In this case, the Page-by-Page Iterator design pattern (and variations of that pattern, such as the ValueListHandler pattern) should be used to ensure that only the data that is actually needed is transferred.
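The paging idea can be sketched as follows. The PageFetcher class and its methods are illustrative assumptions, with an in-memory list standing in for a large query result; in a real deployment only the returned sublist would cross the network, and the query itself would be limited server-side.

```java
import java.util.Collections;
import java.util.List;

// Minimal sketch of the Page-by-Page Iterator idea: the server holds the
// full result set and ships one fixed-size page per client request.
class PageFetcher<T> {
    private final List<T> results;  // stands in for a large query result
    private final int pageSize;

    PageFetcher(List<T> results, int pageSize) {
        this.results = results;
        this.pageSize = pageSize;
    }

    // Returns page 'pageIndex' (0-based); an empty list means no such page.
    List<T> getPage(int pageIndex) {
        int from = pageIndex * pageSize;
        if (from >= results.size()) {
            return Collections.emptyList();
        }
        int to = Math.min(from + pageSize, results.size());
        return results.subList(from, to);
    }

    boolean hasPage(int pageIndex) {
        return pageIndex * pageSize < results.size();
    }
}
```

The client pulls pages only as the user asks for them, so a query matching thousands of rows transfers only the handful actually displayed.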


The O'Reilly Java Authors - Java™ Enterprise Best Practices
Year: 2002
Pages: 96
