Design Patterns and Best Practices

As application developers, we have control over some aspects of the system and no control whatsoever over other aspects. For example, even though XML introduces a lot of network and processing overhead, a developer cannot unilaterally decide to use another language for data representation and still interoperate with other Web services. On the other hand, the developer is free to use any XML parser (or even implement her own parsing scheme).

In this section, we discuss design patterns and best practices for developing Web services and applications with high measures of QoS. Our intention is not to suggest replacements to the technologies that make up the Web services platform. Instead, our goal is to describe common or recurring issues in the development of Web services and Web services-based applications, as well as the preferred solutions.

Before we begin, one thing to keep in mind is that applications as well as the environments in which they are deployed must be analyzed within the context of an entire end-to-end system before trying to optimize any one subsystem. As we have already seen, a variety of technologies and systems contribute to the overall QoS measures of an application. We must first understand where the problems lie before we can start to address them.

After all, Amdahl's Law holds true for Web services as well. Amdahl's Law states that the amount of speedup achieved by a system by optimizing any one subsystem is bounded by the percentage of time that subsystem is used by the overall system. That is, in developing an application that consumes Web services, do not spend months optimizing the performance of a specific part of the application until you are certain of the need.
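Amdahl's Law is easy to state quantitatively: if a subsystem accounts for a fraction p of total execution time and is sped up by a factor s, the overall speedup is 1 / ((1 - p) + p / s). The following minimal Python sketch (our own illustration, not from any Web services toolkit) shows how quickly the returns diminish:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of total execution time
    is accelerated by a factor of s (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / s)

# If XML parsing is only 10% of end-to-end latency, a tenfold
# faster parser yields less than a 10% overall improvement.
parser_gain = amdahl_speedup(0.10, 10)

# Even an infinitely fast parser is bounded by 1 / (1 - 0.10),
# roughly an 11% improvement.
upper_bound = 1.0 / (1.0 - 0.10)
```

This is why profiling the whole system comes first: the fraction p, not the cleverness of the optimization, dominates the result.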

One mistake common to developers of Web services is that they often assume that the much talked-about weighty nature of XML is the source of their performance problems. After spending weeks or months re-architecting their application and incorporating more efficient XML parsers, they find that their application's performance has only marginally improved.

When analyzing an application, look at not only the application but also the infrastructure on which the application is deployed (e.g., application server, hardware), the communications network, as well as other systems that are on the critical path, such as the Web server. Optimizing the implementation of one part of an application while neglecting the slow network connection or the overloaded Web server will only have marginal benefit. It is imperative to first understand where the inefficiencies lie.

Having said this, let's now turn to some important design patterns and best practices for developing Web services and applications that have a high QoS measure from the get-go.

Use Coarse-Grained Web Services

As we have seen, there is significant overhead to invoking remote Web services using SOAP. Incurring that large amount of overhead to retrieve just a few bits of information is not efficient and results in reduced measures of QoS.

For example, consider the classic example of using Web services to retrieve stock price information. A Web service is first invoked to get the stock ticker symbol from the company name. Then a second Web service is invoked to get the latest stock price of a company given its stock ticker symbol. Ironically, this commonly cited example results in low performance. The large amount of overhead of a SOAP invocation, weighed against the relatively small amount of information that is returned by each Web service (simply a single ticker symbol or a single number representing the current stock price), results in reduced application performance.

Mobile applications that consume such fine-grained Web services additionally increase their wireless network usage, resulting in increased network usage cost. Moreover, since the amount of power consumed in wireless communications is proportional to the number of bits transmitted or received, the use of fine-grained Web services results in wasted energy through the overhead of fine-grained Web service invocations.

The consumption of coarse-grained Web services by applications can increase their QoS measures along a variety of metrics, including performance, cost, and power consumption.

A coarse-grained Web service is one that implements an interface with operations, each of which performs considerable work and minimizes the need for other related operation invocations. This could mean that each operation performs a lot of computation or work. Alternately, each operation may return a large amount of relevant information, thus minimizing the need for follow-on calls.
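The contrast can be sketched with two hypothetical service interfaces (the class names, method names, and data below are illustrative stand-ins for remote services, not any real stock quote API):

```python
# Illustrative in-memory stand-ins for remote services; the names and
# data are hypothetical, used only to contrast the two styles.
_COMPANIES = {"Hewlett-Packard": "HPQ"}
_PRICES = {"HPQ": 17.12}

class FineGrainedStockService:
    """Fine-grained: the client pays one invocation per datum."""
    def get_ticker_symbol(self, company_name):
        return _COMPANIES[company_name]
    def get_price(self, symbol):
        return _PRICES[symbol]

class CoarseGrainedStockService:
    """Coarse-grained: one invocation returns all related data."""
    def get_quote(self, name_or_symbol):
        symbol = _COMPANIES.get(name_or_symbol, name_or_symbol)
        return {"symbol": symbol, "price": _PRICES[symbol]}

# Fine-grained: two round trips to answer one question.
fine = FineGrainedStockService()
price_fine = fine.get_price(fine.get_ticker_symbol("Hewlett-Packard"))

# Coarse-grained: a single round trip for the same answer.
coarse = CoarseGrainedStockService()
price_coarse = coarse.get_quote("Hewlett-Packard")["price"]
```

The fine-grained style incurs two sets of SOAP overhead to answer one question; the coarse-grained style answers it in one.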

Consider developing a stock price calculator application. The application allows a user to input the name of a company and then displays its current stock price. The application relies on one or more Web services for the actual stock price data.

Figure 9-3 illustrates three different architectures for implementing the stock quote calculator application. The first architecture, depicted in Figure 9-3(a), uses two fine-grained Web services and requires the application to make two separate invocations. The first Web service is needed to map the actual company name to its stock ticker symbol, which the second Web service requires in order to access the stock price data.

Figure 9-3. Using coarse-grained Web services increases an application's QoS measures along a variety of metrics, including performance, cost, and power consumption.


The second architecture, depicted in Figure 9-3(b), uses a single coarse-grained Web service that requires only a single call by the application. This Web service can handle either company names or ticker symbols as input, and removes the need for a separate company-name-to-ticker-symbol lookup invocation.

Finally, the third architecture, depicted in Figure 9-3(c), again uses two fine-grained Web services but requires only a single call by the application. This is accomplished by using an intermediary server that accepts incoming calls from the application and in turn invokes a number of Web services to fulfill the application's request. The intermediary could itself be a Web service that wraps a number of underlying Web services. Alternatively, it could be a proxy that calls Web services on behalf of the application. In many cases, this proxy may be a J2EE servlet that accepts HTML or XML data (not necessarily SOAP), parses the data, calls the required Web services, and then returns the results again as HTML or XML.

This type of intermediary architecture is particularly useful in mobile environments where the connection between the mobile device and the intermediary is over a wireless network, while the connection between the intermediary and the Web services is over wire line networks. By minimizing the data transmitted over the wireless network (by using a coarse-grained interface and by using messaging that is less verbose than SOAP), the power consumed by the mobile application, the application latency, and the network usage costs are all reduced. Chapter 10 has more information on developing efficient mobile systems with Web services.

Avoid the pitfall of making fine-grained remote procedure calls (RPCs).

  • As a developer of Web services, do not expose all of your application's methods as Web services.

  • Many utilities that take existing applications and make them available as Web services automatically expose all the operations.

  • When using an application-to-Web-service utility, configure it to expose only a few of the application's operations. If such configuration is not possible, build a "wrapper" application that combines the operations of the underlying application(s) into a few coarse-grained operations and then run the application-to-Web-service utility on the wrapper application.

Build the Right Client Application

Using Web services within an application is not an excuse to neglect developing any business logic for the client application itself. Web services should be used to augment the application or to provide functionality that is difficult to implement within the client application. Web services should not be used to replace the client application's business logic.

As a first-order analysis, look at the frequency of Web service calls from the client application. A high frequency may suggest that the application is over-utilizing remote Web services instead of local application business logic. For the most frequent Web service invocations, ask yourself whether it is practical to implement the functionality within the application. Can it be implemented correctly? Are the algorithms well known? How long will it take to implement it?

For example, consider an application that uses Web services to calculate the square root of a number. If the application continuously invokes this Web service, it will usually be worthwhile to implement the square root function within the client application itself. If implementing it locally is not possible (for time-to-market reasons or a lack of development resources), then locate a Web service that takes as input a list of numbers and returns another list of the square root of each number of the input list. You will have to re-architect and re-code parts of the client application to support such a Web service. But either way, the frequency of Web service invocations will be reduced.
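As a sketch of this refactoring, the two hypothetical services below simulate the fine-grained (one call per number) and coarse-grained (one call per list) styles; the call counter stands in for the per-invocation SOAP overhead:

```python
import math

call_count = 0  # tracks simulated remote invocations

def sqrt_service(x):
    """Hypothetical fine-grained service: one remote call per number."""
    global call_count
    call_count += 1
    return math.sqrt(x)

def sqrt_batch_service(xs):
    """Hypothetical coarse-grained service: one call for a whole list."""
    global call_count
    call_count += 1
    return [math.sqrt(x) for x in xs]

numbers = [1.0, 4.0, 9.0, 16.0]

# Fine-grained: one invocation (and its overhead) per number.
fine_results = [sqrt_service(x) for x in numbers]
fine_calls = call_count

call_count = 0
# Coarse-grained: a single invocation for the same results.
batch_results = sqrt_batch_service(numbers)
batch_calls = call_count
```

Either re-architecting choice, local implementation or batching, reduces the invocation count from one-per-number to at most one.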

For Web services that do not provide functionality but instead return data (stock quotes or the latest quarterly sales data), consider using coarse-grained Web services that return a large amount of data, which is cached by the client application. Prior to making subsequent Web service calls, the client application first checks the cached data. If the data is available and is valid (not expired), that data is used. Only if the data is not found locally is a remote Web service invoked. This technique is similar to synchronizing data between the client application and the back-end data store.
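A minimal sketch of such a client-side cache, assuming a fixed time-to-live and a pluggable invoke callable that stands in for the actual remote Web service call, might look like this:

```python
import time

class WebServiceCache:
    """Minimal time-to-live (TTL) cache for Web service results.
    A sketch: 'invoke' stands in for the actual remote invocation."""
    def __init__(self, invoke, ttl_seconds):
        self._invoke = invoke
        self._ttl = ttl_seconds
        self._entries = {}  # key -> (expiry_time, value)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is not None and time.time() < entry[0]:
            return entry[1]            # fresh cached result; no network call
        value = self._invoke(key)      # miss or expired: invoke the service
        self._entries[key] = (time.time() + self._ttl, value)
        return value

calls = []
def fake_quote_service(symbol):
    calls.append(symbol)  # records each simulated remote invocation
    return 17.12

# Quotes that update every fifteen minutes can carry a 900-second TTL.
cache = WebServiceCache(fake_quote_service, ttl_seconds=900)
first = cache.get("HPQ")   # remote invocation
second = cache.get("HPQ")  # served from cache; no second invocation
```

The TTL should match the update interval of the underlying data: fifteen-minute-delayed quotes tolerate a long TTL; real-time prices do not.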

Consider a client application that allows a manager to analyze the quarterly sales results for his organization. The application provides multiple views of the data, including graphical illustrations of the percentage of revenue from each vertical market, profit-loss analyses, and so on. Based on what the user wants to see, the application invokes a Web service that returns the appropriate information and the application simply displays it. For instance, if the manager wants to view a pie chart of the percentages of revenue from each vertical market, the application invokes a Web service that returns the vertical names and the percentage contribution to overall revenue. The application simply displays this information as a pie chart.

Alternately, the application can invoke a Web service operation to return all of the latest sales data for the organization. Then, the client application itself can compute the required percentages for each vertical's contribution to overall revenue, as well as additional analyses that the user may want. Each of these additional analyses is completed by the client application itself without accessing any network-based Web service. By using a coarse-grained Web service and accessing a larger amount of data, the application makes more efficient use of the network, reduces application latency, and enables network-disconnected usage of the application (e.g., on an airplane where network connectivity may not be easily available).

Do not use Web services as an excuse to not develop any business logic for the client application.

  • A high frequency of Web service calls may suggest that the application is over-utilizing remote Web services instead of local application business logic.

  • For Web services that provide functional capabilities, ask yourself whether implementing the functionality locally within the client application is possible. Alternately, look for more coarse-grained Web services that implement a larger functionality within a single service, and build local application business logic to accommodate the new Web service.

  • For Web services that provide data, consider using Web services that return a large amount of data, which can be cached by the client application and processed for later use.

Cache Web Service Results

Whether you are consuming coarse- or fine-grained Web services within your application, there is a chance that other applications (or even your application at a later time) will need to make the same Web service call. If the information returned by the Web service is not time-sensitive or only changes after a set time period, the return data from Web services can be cached and reused.

Continuing with our example of stock price information, real-time stock prices fluctuate constantly and are not suitable for caching. However, many stock price Web services only update their stock quotes every fifteen to thirty minutes. This data can easily be cached by a server on the local network and eliminate the need for remote Web service calls. Mutual fund prices are usually calculated and updated daily. Again, this information can be cached somewhere on the local network close to the application.

Caching data from Web service invocations not only increases the performance of Web services-based applications but also reduces the load on already congested networks, freeing valuable network bandwidth for other applications.

Web service caches can be utilized anywhere along the application fulfillment chain. Figure 9-4 illustrates that caches that temporarily store the results from previous Web service invocations can be deployed on the Local Area Network (LAN) where the application resides, on the networks (e.g., the Internet) that connect the application to the Web service, or on the network close to the Web service endpoint.

Figure 9-4. Caching can be utilized anywhere along the Web services-based application fulfillment chain to reduce network traffic and application latencies.


Caches that are located closer to the application provide increased performance and eliminate unnecessary network traffic. However, such co-located or close-proximity caches are not always available or are cost prohibitive. Caches located on networks with a high volume of traffic can cache results from a large number of Web services and can amortize their costs over a high volume of application calls.

Caching for improved performance of Web service invocations is not limited only to the caching of results of Web services. If the business logic underlying the Web service is complex or time-intensive, the results of portions of the business logic or database accesses can be cached. This cached data can be used to rapidly compute the result of a Web service invocation. The caching in this case is not of the results of the Web service, but instead of portions of the implementation of the Web service. This approach improves the latency of the Web service, but does not reduce the traffic on the networks.

A Web services caching product that addresses this need is provided by Chutney Technologies. Chutney's Apptimizer product provides caching mechanisms not only for Web services, but also for business logic components, database accesses, as well as HTML documents.

Use Resources Efficiently

It may seem like common sense to use resources efficiently, but a number of QoS measures of an application can be immediately improved by simply using the right infrastructure or the right tool for the job.

When discussing issues surrounding the performance of Web services and the applications that consume them, XML is on the tip of everybody's tongue. But XML, used properly and supported with the right infrastructure, does not have to significantly reduce the QoS measures of an application.

An XML parser is a key infrastructure component for applications that deal with XML data. A variety of parsers exist today, each of which has different characteristics and application models. The most common flavors of XML parsers are the Document Object Model (DOM) and the Simple API for XML (SAX) types.

The SAX type of parser provides a fast, forward-only, low-memory-footprint method of processing XML documents. SAX parsers traverse an XML document one element at a time and hand the data back to the application through a defined interface. SAX parsers are lightweight and have low memory requirements. As a consequence of this lightweight nature, applications that use SAX parsers may end up parsing a single document multiple times to access various document nodes.

A DOM parser parses the entire XML document all at once and builds a tree-based object model representation of the entire document. When programming with a DOM parser, our application code interacts with an in-memory tree representation of the XML document. As such, DOM parsers parse the XML document only once. After the object representation is in memory, DOM parsers also allow a tremendous amount of navigational capability throughout the document. As a result of creating the in-memory representation of the entire document as well as the additional navigational facilities, DOM parsers are usually more heavyweight than their SAX counterparts and require more memory.
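The two models can be illustrated with Python's standard xml.sax and xml.dom.minidom modules (the tiny stock document below is our own example): the SAX handler reacts to a stream of parse events, while the DOM version navigates an in-memory tree.

```python
import xml.sax
from xml.dom.minidom import parseString

DOC = b"<portfolio><stock><sym>HPQ</sym><price>17.12</price></stock></portfolio>"

# SAX: the parser pushes events (start tag, text, end tag) to a handler.
class PriceHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self._in_price = False
        self._buf = []
        self.prices = []
    def startElement(self, name, attrs):
        self._in_price = (name == "price")
    def characters(self, content):
        if self._in_price:
            self._buf.append(content)  # text may arrive in chunks
    def endElement(self, name):
        if name == "price":
            self.prices.append(float("".join(self._buf)))
            self._buf = []
        self._in_price = False

handler = PriceHandler()
xml.sax.parseString(DOC, handler)
sax_price = handler.prices[0]

# DOM: parse once into an in-memory tree, then navigate freely.
dom = parseString(DOC)
dom_price = float(dom.getElementsByTagName("price")[0].firstChild.data)
```

Both extract the same value; the difference is that the SAX version never holds more than one event's worth of state, while the DOM version can revisit any node without reparsing.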

Other important differences between XML parsers are whether the parser supports data validation and the language in which they are implemented. Parsers also differ in their speed and runtime footprint.

Selecting the right XML parser for each application is often a difficult decision. The general rules of thumb that can be used to help the decision process are:

  • If the application only needs to locate tags and then extract the tag values, use a non-validating parser.

  • If the XML document is large, but your application only needs to access a few elements of the document, use a SAX parser.

  • If a large number of the elements of the XML document will be accessed, or if the document itself must be accessed multiple times, use a DOM parser.

Even with DOM and SAX parsers, for some applications and environments an even more stripped-down XML parser may be required. For example, consider an application that calculates the real-time value of a mutual fund by repeatedly invoking a single Web service to find the latest price of each stock in the mutual fund portfolio. In this case, a general-purpose XML parsing and document manipulation infrastructure is unnecessary. Since the same SOAP request message (except for the company name) is generated and the same SOAP response message is parsed for each invocation, the XML can be hard coded within the application. The company name, which is the only part of the message that changes, can be generated for each invocation. Similarly, the price of each stock can be extracted from each response message based on a hard-coded pattern.
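A sketch of this hard-coding technique (the envelope below is illustrative, not any real service's contract): the request is a string template with one substitution point, and the response is mined with a fixed pattern instead of a general-purpose parser.

```python
import re

# Hard-coded request template; only the company symbol varies.
# The envelope shape and element names here are hypothetical.
REQUEST_TEMPLATE = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body><getQuote><sym>{symbol}</sym></getQuote></soap:Body>"
    "</soap:Envelope>"
)

# Hard-coded extraction pattern; no XML infrastructure required.
PRICE_PATTERN = re.compile(r"<price>([0-9.]+)</price>")

def build_request(symbol):
    """Generate the (otherwise fixed) SOAP request for one symbol."""
    return REQUEST_TEMPLATE.format(symbol=symbol)

def extract_price(response_text):
    """Pull the single changing value out of the fixed response shape."""
    match = PRICE_PATTERN.search(response_text)
    return float(match.group(1)) if match else None

request = build_request("HPQ")
price = extract_price("<soap:Envelope>...<price>17.12</price>...</soap:Envelope>")
```

The approach is brittle if the message shape ever changes, which is exactly why the surrounding text recommends keeping the system modular so a standard parser can be swapped back in.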

For those situations where the same or a similar XML document is generated or parsed repeatedly, consider hardcoding the XML elements within the application itself or use a stripped down XML parsing and generation infrastructure that supports your application's needs. This technique is useful not only for client applications but also for Web services. A popular Web service that is the only XML service deployed on a server can use a more bare-bones XML infrastructure (or handle the XML parsing and generation within the application itself). The only caveat here is to architect the system in a modular fashion so the system is easy to maintain and more standard infrastructure can be swapped in if necessary.

For applications that interact with XML, think carefully about your choice of XML handling infrastructure. Some good rules of thumb to remember are:

  • Use a SAX type of parser if the XML document is large, but your application only needs to access a few elements and speed is important.

  • Use a DOM type of parser if the XML document will be manipulated repeatedly, a large number of the XML elements needs to be accessed, or random access into the XML document is necessary.

  • XML data binding infrastructures, such as the Java APIs for XML (JAX), provide development convenience, but often at the cost of performance.

  • If an application generates or parses the same or similar XML document repeatedly, consider hard coding the XML elements into the application itself.

Large XML documents can be more efficiently transported over networks by simply using compression. The tradeoff is between bloating the network (e.g., power and transmission time) versus bloating the processor (on both the server and client sides) to uncompress the document.

As we discuss in Chapter 10, Mobile and Wireless, the amount of energy required to transmit or receive a bit of information over a wireless network is quickly reaching its theoretical minimum. What this means is that over time and with improvements in technology, the amount of energy expended in transmitting or receiving a bit of information will be roughly constant.

On the other hand, improvements in transistor technologies continue to drive down the amount of energy expended per unit work by processors and other integrated circuits. What this means is that over time, the amount of energy used in performing a unit of work (e.g., compressing messages) will continue to decrease. A graphical illustration of this is shown in Figure 10-2 in Chapter 10.

Given that the amount of energy expended by integrated circuits such as processors continues to drop while the amount of energy expended by wireless networks is roughly constant, it behooves mobile application developers to compress messages (even with a fast and simple compression scheme) prior to sending them over the network. Depending on the actual message as well as the compression technique used, compression ratios of 20% to 70% are not uncommon. One thing to keep in mind is that if the client compresses Web service requests, the Web service must support decompression using the same scheme.
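As a simple illustration, the sketch below compresses an illustrative batch SOAP request (our own example message, not from any real service) with Python's standard zlib module; XML's verbosity and internal repetition typically make even a fast, generic compressor worthwhile:

```python
import zlib

# An illustrative, repetitive batch request; element names are hypothetical.
symbols = ["HPQ", "IBM", "SUNW", "ORCL", "MSFT"]
body = "".join("<sym>%s</sym>" % s for s in symbols)
message = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body><getQuotes>%s</getQuotes></soap:Body>"
    "</soap:Envelope>" % body
).encode("utf-8")

# Compress before transmission; the receiver applies the matching
# decompression (both sides must agree on the scheme).
compressed = zlib.compress(message, 6)
restored = zlib.decompress(compressed)

# Fraction of bytes saved on the wire.
ratio = 1.0 - len(compressed) / len(message)
```

The exact ratio depends on the message, but the repeated tag names and namespace strings in SOAP messages usually deflate well; the processor cycles spent here are traded for bits not sent over the radio.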

Although our discussion of SOAP message compression has dealt with energy efficiency, message compression also affects other QoS metrics. Uncompressed messages transmitted over already overloaded networks can result in decreased performance (increased latency), both from the transmission time of the larger messages and from retries caused by dropped packets. Accordingly, although compression is useful for mobile applications communicating with Web services over low-bandwidth wireless networks, it is relevant for non-mobile environments as well.

Additional techniques can be employed to further improve QoS measures, including energy efficiency and performance of Web services-based applications. One such technique can be used when Web services are not directly accessed by a client application, but instead are accessed indirectly through an intermediary server. Rather than use XML-based SOAP messaging between the client application and the intermediary server, a more succinct data encoding and representation can be used.

Figure 9-5 illustrates the use of HTTP parameters between the client application and the intermediary server, which in this case is a J2EE servlet. Instead of generating and transmitting a SOAP request message, the client application simply performs an HTTP POST to the URL with the form parameter sym and the parameter value HPQ.

Figure 9-5. Improving QoS by eliminating the use of XML between the client application and an intermediary server, which proxies Web services invocations between the client application and the actual Web service.


The servlet simply interprets this as an invocation of the getQuote operation of the StockQuoteService Web service with the argument sym and argument value of HPQ. The servlet then maps the unique identifier StockQuoteService to an actual Web service (shown in this case as FastStockQuote). Then the servlet parses the WSDL description of the FastStockQuote Web service to determine the input argument type of the getQuote operation. Assuming that the required argument type for getQuote is String, the servlet processes the argument name and value (sym and HPQ, respectively) and generates a SOAP request message. The SOAP request message is then sent to the actual endpoint of the FastStockQuote Web service.

By using this technique, we are able to effectively compress the entire SOAP request message (in this case 849 bytes) to a few additional characters appended to the end of the URL. The client application not only saves precious network bandwidth and transmission time, but also does not require infrastructure for XML parsing and generation.
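The size difference is easy to see in a sketch (the envelope and parameter names below are illustrative, not the actual 849-byte message from the text): the client sends only a form-encoded parameter list to the intermediary, rather than a full SOAP envelope.

```python
from urllib.parse import urlencode

# The SOAP request the client would otherwise have to generate
# and transmit itself (an illustrative envelope).
soap_request = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body><getQuote><sym>HPQ</sym></getQuote></soap:Body>"
    "</soap:Envelope>"
)

# What the client actually sends to the intermediary servlet:
# a plain HTTP POST body of form-encoded parameters.
post_body = urlencode({"service": "StockQuoteService",
                       "operation": "getQuote",
                       "sym": "HPQ"})

bytes_saved = len(soap_request) - len(post_body)
```

The servlet reconstitutes the SOAP message on the wire-line side, so the expensive XML generation and the larger payload never touch the client or the wireless link.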

This technique of using an intermediary server between the client application and the remote Web service is quite useful in mobile environments. More information about this and other architectures for indirect access to Web services from mobile applications is provided in Chapter 10.

For improved measures of QoS for Web services-based application environments, carefully utilize limited resources such as network bandwidth and processor cycles. Key issues to keep in mind are:

  • Compress SOAP messages whenever possible, and remember that compression (and the actual algorithm used) must be supported by both the client application as well as the Web service.

  • Use intermediary servers to offload processing and network bandwidth utilization by the client application.

  • For severely resource-limited applications (e.g., mobile applications), use a servlet as the intermediary server and pass Web service invocation information to the servlet as HTTP parameters.

Developing Enterprise Web Services: An Architect's Guide
ISBN: 0131401602
Year: 2003
Pages: 141