Caching to Improve Performance

Data caching is an important architectural issue in many J2EE applications, especially as web applications tend to read data much more often than they update it.

Caching is especially important in distributed applications, where without it the overhead of remote invocation is likely to prove a serious problem. However, it's valuable even in collocated web applications such as our sample application, especially when we need to access data from persistent stores.

Thus, caching is more than just an optimization; it can be vital to making an architecture work.

Caching can offer quick performance wins and significantly improve throughput by reducing server load. This will benefit the entire application, not just those use cases that use cached data.

However, caching can also raise complex issues of concurrency and cluster-wide synchronization. Do not implement caching (especially if such implementation is non-trivial) without evidence that it is required to deliver adequate performance; it's easy to waste development resources and create unwarranted complexity by implementing caching that delivers no real business value.

Caching Options

The sample application requires a moderate amount of heavily accessed reference data. We will definitely want to cache this data, rather than run new Oracle queries every time any of it is used. Let's consider the caching options available to this type of application, and some common issues surrounding caching.

Caching will deliver greater performance benefit the closer the cache is to the user. We have the following choices in the sample application (moving from the RDBMS, from which data is obtained, towards the client):

  • Rely on RDBMS caching
    Databases such as Oracle cache data if configured correctly, and hence can respond to queries for reference data very fast. However, this alone will not produce the desired performance or reduction in server load: there is still a network hop between the J2EE server and the database; there is still the overhead of talking to the database through a database connection; and the application server must allocate a pooled connection while each request passes through all the layers of our J2EE application.

  • Rely on entity bean caching implemented by the J2EE server
    This requires that we use entity beans to implement data access, and that we choose to access reference data through the EJB container. It also caches data relatively far down the application call path: we'll still need to call into the EJB server (which involves overhead, even if we don't use RMI) to bring the cached data back to the web tier. Furthermore, the effectiveness of entity bean caching will vary between J2EE servers. Most implement some kind of "read-only" entity support, but this is not guaranteed by the EJB specification.

  • Use another O/R mapping solution, such as JDO, that can cache read-only data
    This doesn't tie us to data access in the EJB tier and offers a simpler programming model than entity beans. However, it's only an option if we choose to adopt an O/R mapping approach to data access.

  • Cache data in the form of Java objects in the web tier
    This is a high-performance option, as it enables requests to be satisfied without calling far down the J2EE stack. However, it requires more implementation work than the strategies already discussed. The biggest challenge is accommodating concurrent access if the data is ever invalidated and updated. Presently there is no standard Java or J2EE infrastructure to help in cache implementation, although we can use packages such as Doug Lea's util.concurrent that simplify the concurrency issues involved. JSR 107 (JCache – Java Temporary Caching API) may deliver standard support in this area (see http://www.jcp.org/jsr/detail/107.jsp). A minimal sketch of such a cache appears after this list.

  • Use JSP caching tags
    We could use a JSP tag library that offers caching tags for caching fragments of JSP pages. Several such libraries are freely available; we'll discuss some of them in Chapter 13. This is a high-performance option, as many requests can be satisfied with a minimum of activity within the J2EE server. However, this approach presupposes that we present data using JSP. If we rely on caching tags for acceptable performance, we're predicating the viability of our architecture on one view technology, subverting our MVC approach.

  • Use a caching filter
    Another high-performance option is to use a caching Servlet 2.3 filter to intercept requests before they even reach the application's web tier components. Again, a variety of free implementations is available to choose from. If a whole page is known to consist of reference data, the filter can return a cached version. (This approach could be used for the first two screens of the sample application – the most heavily accessed – which can be cached for varying periods of time.) Filter-based caching is less finely grained than JSP tags: we can only cache whole pages, unless we restructure our application's view layer to enable a filter to compose responses from multiple cached components. If even a small part of the page is truly dynamic, filtering isn't usually viable. A simplified filter sketch also follows this list.

  • Use HTTP headers to minimize requests to unaltered pages
    We can't rely solely on this approach, as we can't control the configuration of proxy caches and browser caches. However, it can significantly reduce the load on a web application, and we should combine it with whichever of the above approaches we choose. A trivial header-setting filter is sketched below.
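
To make the "cache data in Java objects in the web tier" option concrete, here is a minimal sketch of a reference data cache. The class name and its API are our own invention, not part of any standard; a production implementation might instead build on util.concurrent or a future JCache implementation. As reads vastly outnumber invalidations, simple synchronization is adequate:

    import java.util.HashMap;
    import java.util.Map;

    /**
     * Minimal sketch of a web-tier cache for read-mostly reference data.
     * (Hypothetical class: the name and API are illustrative only.)
     */
    public class ReferenceDataCache {

        /** Maps keys, such as show ids, to cached value objects. */
        private final Map cache = new HashMap();

        /** Returns the cached object for this key, or null if it isn't cached. */
        public synchronized Object get(Object key) {
            return cache.get(key);
        }

        /** Caches an object, replacing any previously cached value. */
        public synchronized void put(Object key, Object value) {
            cache.put(key, value);
        }

        /** Discards all cached data, forcing a reload on next access. */
        public synchronized void invalidateAll() {
            cache.clear();
        }
    }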
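
The caching filter option can be illustrated by the following simplified sketch, which caches whole rendered pages by request URI for a fixed period. It assumes views write character output via getWriter(), and it ignores headers, content types, and encodings, all of which the freely available implementations handle properly:

    import java.io.CharArrayWriter;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    /**
     * Simplified sketch of a Servlet 2.3 caching filter: whole rendered
     * pages are cached by request URI for a fixed period.
     */
    public class SimpleCachingFilter implements Filter {

        private static final long TIMEOUT_MILLIS = 60 * 1000;

        /** Maps request URIs to CachedPage objects. */
        private final Map cache = Collections.synchronizedMap(new HashMap());

        public void init(FilterConfig filterConfig) {
        }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            String key = ((HttpServletRequest) req).getRequestURI();
            CachedPage page = (CachedPage) cache.get(key);
            if (page == null || page.isExpired()) {
                // Let the application render the page, capturing the output
                BufferingResponseWrapper wrapper =
                        new BufferingResponseWrapper((HttpServletResponse) res);
                chain.doFilter(req, wrapper);
                page = new CachedPage(wrapper.getOutput());
                cache.put(key, page);
            }
            res.getWriter().write(page.content);
        }

        public void destroy() {
        }

        /** Response wrapper that diverts character output into a buffer. */
        private static class BufferingResponseWrapper extends HttpServletResponseWrapper {
            private final CharArrayWriter buffer = new CharArrayWriter();
            BufferingResponseWrapper(HttpServletResponse response) {
                super(response);
            }
            public PrintWriter getWriter() {
                return new PrintWriter(buffer);
            }
            String getOutput() {
                return buffer.toString();
            }
        }

        /** A cached page body plus the time at which it was rendered. */
        private static class CachedPage {
            final String content;
            final long created = System.currentTimeMillis();
            CachedPage(String content) {
                this.content = content;
            }
            boolean isExpired() {
                return System.currentTimeMillis() - created > TIMEOUT_MILLIS;
            }
        }
    }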
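
Finally, a sketch of the HTTP header approach: a trivial filter that adds expiry headers so that browsers and well-behaved proxies can satisfy repeat requests without contacting the server at all. The 60-second lifetime is our assumption, matching the staleness budget discussed below:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    /**
     * Sketch of a filter that adds expiry headers to responses, allowing
     * browsers and proxies to reuse them for up to 60 seconds.
     */
    public class ExpiryHeaderFilter implements Filter {

        public void init(FilterConfig filterConfig) {
        }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse response = (HttpServletResponse) res;
            // Both headers express the same 60-second lifetime; Expires
            // is understood by older HTTP 1.0 caches as well
            response.setHeader("Cache-Control", "max-age=60");
            response.setDateHeader("Expires", System.currentTimeMillis() + 60 * 1000);
            chain.doFilter(req, res);
        }

        public void destroy() {
        }
    }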

JSP tags and caching filters have the disadvantage that caching will benefit only a web interface. Neither type of cache "understands" the data it holds; it merely stores the information necessary to respond to an incoming request for content. This won't be a problem for the sample application's initial requirements, as no other interfaces are called for. However, another caching strategy, such as cached Java objects in the web tier, might be required if we need to expose a web services interface. On the positive side, "front caching" doesn't care about the origin of the data it caches, and will benefit all view technologies: it doesn't matter whether data was generated using XML/XSLT, Velocity, or JSP, for example. It will also work for binary data such as images and PDFs.

Whatever caching strategy we use, it's important that we can disable caching to verify that we aren't covering up appalling inefficiency. Architectures that rely on caching to conceal severe bottlenecks are likely to encounter other problems.

A Caching Strategy for the Sample Application

To implement a successful caching strategy in our sample application, we need to distinguish between reference data, which doesn't change or changes rarely, and dynamic data, which must always be up-to-date. In this application, reference data (which changes rarely, but should be no more than one minute out of date) includes:

  • Genres.

  • Shows.

  • Performance dates.

  • Information about the types of seats for each show, including their names and prices.

  • The seating plan for each performance: which seats are adjacent, how many seats there are in total, etc.

The most heavily requested dynamic data will be the availability of seating for each type of seat for each performance of a show. This is displayed on the "Display Show" screen. The business requirements mandate that this availability information be no more than one minute out of date, meaning that we can cache it only briefly. However, if we don't cache it at all, the "Display Show" screen will require many database queries, leading to heavy load on the system and poor performance. Thus we need to implement a cache that can accommodate relatively frequent updates, along the lines of the sketch below.
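
As an illustration, the following sketch caches availability per performance and requeries when a cached copy is older than a timeout. The AvailabilityCache class and its AvailabilityLoader callback are hypothetical; in the sample application, the loader would delegate to the data access layer:

    import java.util.HashMap;
    import java.util.Map;

    /**
     * Sketch of a web-tier cache for seat availability, which may be served
     * up to one minute stale. (Hypothetical class: the AvailabilityLoader
     * callback would delegate to the sample application's data access layer.)
     */
    public class AvailabilityCache {

        /** Callback used to (re)query availability when a cached copy expires. */
        public interface AvailabilityLoader {
            Object loadAvailability(int performanceId);
        }

        /** A cached availability object plus the time at which it was loaded. */
        private static class Entry {
            final Object availability;
            final long loadedAt = System.currentTimeMillis();
            Entry(Object availability) { this.availability = availability; }
        }

        private final Map entries = new HashMap();
        private final AvailabilityLoader loader;
        private final long timeoutMillis;

        public AvailabilityCache(AvailabilityLoader loader, long timeoutMillis) {
            this.loader = loader;
            this.timeoutMillis = timeoutMillis;
        }

        /** Returns availability for a performance, requerying if the copy is too old. */
        public synchronized Object getAvailability(int performanceId) {
            Integer key = new Integer(performanceId);
            Entry entry = (Entry) entries.get(key);
            if (entry == null ||
                    System.currentTimeMillis() - entry.loadedAt > timeoutMillis) {
                entry = new Entry(loader.loadAvailability(performanceId));
                entries.put(key, entry);
            }
            return entry.availability;
        }

        /** Called when our own application takes a booking for this performance. */
        public synchronized void refresh(int performanceId) {
            entries.put(new Integer(performanceId),
                    new Entry(loader.loadAvailability(performanceId)));
        }
    }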

We could handle caching entirely by using "front caching": the "Welcome" and "Display Show" screens could be protected by a caching filter with a one-minute timeout. This approach is simple to implement (we don't need to write any code) and would work whatever view strategy (such as JSP or XSLT) we use. However, it has serious drawbacks:

  • While the requirements state that data should be no more than one minute out of date, there is business value in ensuring that it is more up to date where possible. With a filter approach we can't do this; the whole cached page will be refreshed at once, regardless of what has changed. With a finer-grained caching strategy, we may be able to ensure that when we do hold more recent data for part of this page (for example, when a booking attempt reveals that a performance is sold out), this information appears immediately on the "Display Show" screen.

  • A web services interface is a real possibility in the future. A filter approach won't benefit it.

These problems can be addressed by implementing caching in Java objects in the web tier.

There are two more issues to consider before finalizing a decision: updates from other applications and behavior in a cluster. We know from the requirements that no other processes can modify the database (the one exception is when an administrator updates the database directly; we can provide a special internal URL that administrators must request to invalidate application caches after making such a change, as sketched below). Thus we can assume that, unless our application creates a booking, any availability data it holds is valid. If we are running the application on a single server and caching data in Java objects, we therefore never need to requery cached data: we merely need to refresh the cached data for a performance when a booking is made.
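
The invalidation URL might be served by a trivial servlet along the following lines. The CacheManager registry is hypothetical, and the servlet's URL must be protected by a security constraint in web.xml so that only administrators can request it:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    /**
     * Sketch of the special internal URL administrators request after
     * updating the database directly. (CacheManager is a hypothetical
     * registry of the application's caches.)
     */
    public class CacheInvalidationServlet extends HttpServlet {

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Discard all cached reference and availability data;
            // each cache will reload from the database on next access
            CacheManager.getInstance().invalidateAll();
            response.getWriter().write("Application caches invalidated");
        }
    }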

If the application runs in a cluster, this optimization is impossible unless the cache is cluster-wide: the server that made the booking will immediately reflect the change, but other servers will not see it until their cached data times out. To get round this problem, we considered sending a JMS message to be processed by all servers when a booking is made. However, we decided that caching with a timeout offered sufficiently good performance that the overhead of JMS message publication was hard to justify.

Such data synchronization issues in a cluster of servers are a common problem (they apply to many data access technologies, such as entity beans and JDO). Ideally we should design our sample application so that it can achieve optimum performance on a single server (there is no requirement that it ever run in a cluster if a single server proves fast enough), yet be configurable to work correctly in a cluster. Thus we should support both "never requery" and "requery on timeout" options, and allow the choice to be specified on deployment, without the need to modify Java code. As the one-minute timeout value may change, we should parameterize it too. A configuration sketch follows.
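
Building on the AvailabilityCache sketched earlier, the following illustrates how both decisions might be read from web.xml context parameters at startup rather than hard-coded. The parameter names and the CacheConfigurer class are our own invention:

    import javax.servlet.ServletContext;

    /**
     * Sketch of configuring the caching strategy on deployment, without
     * modifying Java code. Assumes web.xml <context-param> entries such as
     * "cache.neverRequery" (true on a single server, false in a cluster)
     * and "cache.timeoutSeconds" (60 at present). All names are illustrative.
     */
    public class CacheConfigurer {

        public static AvailabilityCache createAvailabilityCache(
                ServletContext servletContext, AvailabilityCache.AvailabilityLoader loader) {
            boolean neverRequery =
                "true".equals(servletContext.getInitParameter("cache.neverRequery"));
            long timeoutSeconds =
                Long.parseLong(servletContext.getInitParameter("cache.timeoutSeconds"));

            // On a single server we never need to requery: cached data is
            // refreshed only when our own application makes a booking.
            // In a cluster we must requery once the timeout expires.
            long timeoutMillis = neverRequery ? Long.MAX_VALUE : timeoutSeconds * 1000;
            return new AvailabilityCache(loader, timeoutMillis);
        }
    }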


