3.2 Caching with Servlets


Here are some tips to consider that will help things move quickly with your servlets.

3.2.1 Pregenerate Content Offline and Cache Like Mad

Pregeneration and caching of content can be key to providing your site visitors with a quality experience. With the right pregeneration and caching, web pages pop up rather than drag, and loads are reduced, sometimes dramatically, on the client, server, and network. In this section I'll provide advice for how best to pregenerate content and cache at the client, at the proxy, and at the server. By the end of this section you'll feel compelled to generate new content during request handling only in worst-case scenarios.

There's no need to dynamically regenerate content that doesn't change between requests. Yet such regeneration happens all the time because servlets and JSPs provide an easy way to template a site by pulling in headers, footers, and other content at runtime. Now this might sound like strange guidance in a chapter on servlets, but in many of these situations servlets aren't the best choice. It's better to "build" the content offline and serve it as static content. When the content changes, you can build the content again. Pull the content together once, offline, rather than during every request.

Take, for example, an online magazine, newspaper, or weblog ('blog). How do the pros handle templatization without burdening the server? By pregenerating the content. Articles added to a site are written and submitted in a standard format (often XML-based) which, when run through a build process, produces a comprehensive update to the web site. The build reformats the article into HTML, creates links to the article from other pages, adds the content into the search engine (before the HTML reformatting), and ultimately prepares the site to handle heavy loads without extreme resources. You can see this in action with 'blog tools such as MovableType. It's a Perl application, but it generates content statically, so Perl doesn't even need to run on the production server.

As another example, think of an e-tailer with millions of catalog items and thousands of visitors. Clearly, the content should be database-backed and regularly updated, yet because much of the content will be identical for all visitors between updates, the site can effectively use pregeneration. The millions of catalog item description pages can be built offline, so the server load will be greatly reduced. Regular site builds keep the content fresh.

The challenge comes where the static and dynamic content meet. For example, the e-tailer might need servlets to handle the dynamic aspects of its site, such as an item review system or checkout counter. In this example, the item review can invoke a servlet to update the database, but the servlet doesn't necessarily need to immediately update the page. In fact, you see just such a delay with Amazon.com updates: a human review and subsequent site build must take place before you see new comments. The checkout pages in our example, however, can be implemented as a fully servlet-based environment. Just make sure the look and feel used by the servlets matches the template look and feel produced by the offline build.

3.2.1.1 Pregeneration tools

Unfortunately, few professional-quality, reasonably priced standard tools are available to handle the offline build process. Most companies and webmasters either purchase high-end content management systems or develop custom tools that satisfy their own needs. Perhaps that's why more people don't practice offline site building until their site load requires it.

For those looking for a tool, the Apache Jakarta project manages its own content using something called Anakia. Built on Apache Velocity, Anakia runs XML content through an XSL stylesheet to produce static HTML offline. Apache Ant, the famous Java build system, manages the site build. Others have had success with Macromedia Dreamweaver templates. Dreamweaver has the advantage of viewing JSP, WebMacro, Velocity, and Tea files as simple template files whose HTML contents are autoupdated when a template changes, providing a helpful bridge between the static and the dynamic.

There's a need here for a good common tool. If you think you have the right tool, please share it or evangelize it. Maybe it's out there and we just haven't heard of it yet.

3.2.1.2 Cache on the client

Pregeneration and caching go hand in hand because caching is nothing more than holding what you previously generated. Browsers (a.k.a. clients) all have caches, and it behooves a servlet developer to make use of them. The Last-Modified HTTP header provides the key to effective caching. Attached to a response, this header tells the browser when the content last changed. This is useful because if the browser requests the same content again, it can attach an If-Modified-Since header with the previous Last-Modified time, telling the server it needs to issue a full response only if the content has changed since that time. If the content hasn't changed, the server can issue a short status code 304 response, and the client can pull the content from its cache, avoiding the doGet( ) or doPost( ) methods entirely and saving server resources and bandwidth.

A servlet takes advantage of Last-Modified by implementing the getLastModified( ) method on itself. This method returns, as a long, the time at which the content was last changed, as shown in Example 3-6. That's all a servlet has to do. The server handles issuing the HTTP header and intercepting If-Modified-Since requests.

Example 3-6. The getLastModified( ) method
 public long getLastModified(HttpServletRequest req) {
     // dataModified is a java.util.Date recording when the underlying
     // data last changed. Round down to the nearest second because
     // HTTP dates have only one-second resolution.
     return dataModified.getTime() / 1000 * 1000;
 }

The getLastModified( ) method is easy to implement and should be implemented for any content that has a lifespan of more than a minute. For details on getLastModified( ), see my book, Java Servlet Programming, Second Edition (O'Reilly).
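
To see where getLastModified( ) fits, here is a minimal sketch, not taken from the book's examples, of a servlet that tracks a dataModified timestamp; the BulletinServlet name and the posting logic are illustrative assumptions. The container compares the returned time against any If-Modified-Since header and sends the 304 response on your behalf.

 import java.io.*;
 import java.util.Date;
 import javax.servlet.*;
 import javax.servlet.http.*;

 public class BulletinServlet extends HttpServlet {

     // Records when the underlying data last changed.
     private Date dataModified = new Date();

     public void doGet(HttpServletRequest req, HttpServletResponse res)
             throws ServletException, IOException {
         res.setContentType("text/html");
         PrintWriter out = res.getWriter();
         out.println("<html><body>Last updated: " + dataModified + "</body></html>");
     }

     public void doPost(HttpServletRequest req, HttpServletResponse res)
             throws ServletException, IOException {
         // A posting changes the content, so record a new modification time.
         dataModified = new Date();
         doGet(req, res);
     }

     public long getLastModified(HttpServletRequest req) {
         // Round down to the nearest second; HTTP dates have
         // one-second resolution.
         return dataModified.getTime() / 1000 * 1000;
     }
 }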

3.2.1.3 Cache at the proxy

While implementing getLastModified( ) to make use of client caches is a good idea, there are bigger caches to consider. Oftentimes, especially at companies or large ISPs, browsers use a proxy to connect to the Web. These proxies commonly implement a shared cache of the content they're fetching so that if another user (or the same user) requests it again, the proxy can return the response without needing to access the original web server. Proxies help reduce latency and improve bandwidth utilization. The content might come from across the world, but it's served as if it's on the local LAN, because it is.

The Last-Modified header helps web caches as it does client caches, but web caches can be helped more if a servlet hints to the cache when the content is going to change, giving the cache a timeframe during which it can serve the content without even connecting to the server. The easiest way to do this is to set the Expires header, indicating the time when the content should be considered stale. For example:

 // This content will expire in 24 hours.
 response.setDateHeader("Expires",
                        System.currentTimeMillis() + 24*60*60*1000);

If you take my earlier advice and build some parts of your site on a daily basis, you can set the Expires header on those pages accordingly and watch as the distributed proxy caches take the load off your server. Some clients can also use the Expires header to avoid refetching content they already have.

A servlet can set other headers as well. The Cache-Control header provides many advanced dials and knobs for interacting with a cache. For example, setting the header value to only-if-cached requests the content only if it's cached. For more information on Cache-Control , see http://www.servlets.com/rfcs/rfc2616-sec14.html#sec14.9. A great overview of caching strategies is also available at http://www.mnot.net/cache_docs. This site also includes tips for how to count page accesses even while caching.
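
As a rough sketch of putting both headers to work, a small helper like the hypothetical setCacheable( ) below marks a response as cacheable for a given number of seconds, covering HTTP/1.1 caches with Cache-Control: max-age and older HTTP/1.0 caches with Expires. The method name and the public directive are my choices here, not something the servlet API provides.

 // Hypothetical helper: mark a response as cacheable for the given
 // number of seconds by both HTTP/1.1 and HTTP/1.0 caches.
 public static void setCacheable(HttpServletResponse res, long seconds) {
     res.setHeader("Cache-Control", "public, max-age=" + seconds);
     res.setDateHeader("Expires",
                       System.currentTimeMillis() + seconds * 1000);
 }

A page regenerated by a daily site build, for example, might call setCacheable(res, 24*60*60).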

3.2.1.4 Cache on the server

Caching at the client and proxy levels helps when requests come from the same person or organization, but what about the multitude of requests from all the different browsers? This is why the last level of caching needs to happen on the server side. Any content that takes significant time or resources to generate but doesn't generally change between requests is a good candidate for server-side caching. In addition, server-side caching works for both full pages and (unlike other caching technologies) parts of pages.

Take, for example, an RSS text feed. In case you're not familiar, RSS stands for Rich Site Summary and is an XML-based file format with which publishers advertise new content. Affiliates can pull the RSS files and display links between sites. Servlets.com (http://www.servlets.com), for example, pulls and links to servlet-related articles, using RSS with O'Reilly's Meerkat (http://meerkat.oreillynet.com) as the RSS newsfeed hub.

Suppose you want to display an RSS feed for an affiliate site, updated every 30 minutes. This data should absolutely be cached on the server side. Not only would it be slow to pull an RSS feed for every request, but it's also terribly poor form. The cache can be implemented using an internal timer or, more easily, a simple date comparison. The code that pulls the stories can check on each access whether it's time to refetch and reformat the display. Because the data is small, the formatted content can be held in memory. Example 3-7 demonstrates sample code that would manage the story cache. On top of this cache might be another cache holding the actual String (or bytes) to be displayed.

Example 3-7. Caching RSS feeds
 public Story[] getStories(String url) {
     Story[] stories = (Story[]) storyCache.get(url);
     Long lastUpdate = (Long) timeCache.get(url);
     long halfHourAgo = System.currentTimeMillis() - 30*60*1000;

     // Refetch when nothing is cached or the cached copy is more
     // than 30 minutes old.
     if (stories == null || stories.length == 0 ||
         lastUpdate == null || lastUpdate.longValue() < halfHourAgo) {
         refetch(url);  // repopulates storyCache and timeCache for this URL
         stories = (Story[]) storyCache.get(url);
     }
     return stories;
 }

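Example 3-7 assumes storyCache and timeCache fields and a refetch( ) helper that aren't shown. One way to flesh them out, as a sketch rather than the book's actual code, is with synchronized maps keyed by feed URL; the fetchAndParseRss( ) stub stands in for whatever RSS fetching and parsing code you already have.

 // Fields in the same class as getStories( ), mapping feed URL to
 // Story[] and to the Long timestamp of the last fetch.
 // (Requires java.util.Map, java.util.HashMap, and java.util.Collections.)
 private final Map storyCache = Collections.synchronizedMap(new HashMap());
 private final Map timeCache = Collections.synchronizedMap(new HashMap());

 private synchronized void refetch(String url) {
     Story[] fresh = fetchAndParseRss(url);  // hypothetical helper
     storyCache.put(url, fresh);
     timeCache.put(url, new Long(System.currentTimeMillis()));
 }
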
As a second example, take a stock chart diagram as found on any financial site. This presents a more significant challenge because such a site can support thousands of stock symbols, each offering charts of different sizes with different time spans and sometimes even allowing graphical comparisons between stocks. In this application caching will be absolutely necessary because generating a chart takes significant time and resources, and dynamically generating every chart would balloon the server requirements.

A good solution would be multifaceted. Some charts can be statically built offline (as discussed earlier in this section). These charts would be served as files. This technique works for most commonly accessed charts that don't change more than once a day. Other charts, perhaps ones that are accessed heavily but change frequently, such as a day's activity charts for popular stocks, would benefit from being cached in memory and served directly. They might be stored using a SoftReference. Soft references free the memory if the Java Virtual Machine (JVM) runs low on memory. Still other charts, perhaps ones that are less popular or actively changing, would benefit from being cached by servlets to the filesystem, stored in a semirandom temporary file whose contents can be pulled by a servlet instead of generated by the servlet. The File.createTempFile( ) method can help manage such files.
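
As a rough illustration of the in-memory piece, the names below (ChartCache, getChart( ), renderChart( )) are assumptions rather than anything from the book. The cache holds rendered image bytes behind SoftReferences so the garbage collector can reclaim them when memory runs low, in which case the chart is simply rendered again on the next request.

 import java.lang.ref.SoftReference;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.Map;

 public class ChartCache {

     // Maps a chart key (symbol + size + time span) to a SoftReference
     // around the rendered image bytes.
     private final Map cache = Collections.synchronizedMap(new HashMap());

     public byte[] getChart(String key) {
         SoftReference ref = (SoftReference) cache.get(key);
         byte[] image = (ref == null) ? null : (byte[]) ref.get();
         if (image == null) {
             // Either never rendered or reclaimed by the garbage collector.
             image = renderChart(key);
             cache.put(key, new SoftReference(image));
         }
         return image;
     }

     private byte[] renderChart(String key) {
         // Hypothetical stub: real code would generate the chart image.
         return new byte[0];
     }
 }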

Many potential solutions exist, and this shouldn't be taken as gospel. The main point is that memory caches, temporary file caches, and prebuilt static files are good components of any design for caching on the server.

Beyond the server cache, it's important to remember the client and proxy caches. The chart pages should implement getLastModified( ) and set the Expires and/or Cache-Control headers. This will reduce the load on the server and increase responsiveness even more.

3.2.1.5 . . . Or don't cache at all

Even though caching makes sense most of the time and should be enabled whenever possible, some types of content are unsuitable for caching and must always be refreshed. Take, for example, a current status indicator or a "Please wait . . . " page that uses a Refresh tag to periodically access the server during the wait. But because of the sheer number of locations where content might be cached, at the client, proxy, and server levels, and because of several notorious browser bugs (see http://www.web-caching.com/browserbugs.html), it can be difficult to effectively turn off caching across the board.

After spending hours attempting to disable caching, programmers can feel like magicians searching in vain for the right magic spell. Well, Harry Potter, Example 3-8 provides that magic spell, gathered from personal experience and the recommendations of the Net Wizards.

Example 3-8. The magic spell to disable caching
 // Set to expire far in the past.
 res.setHeader("Expires", "Sat, 6 May 1995 12:00:00 GMT");

 // Set standard HTTP/1.1 no-cache headers.
 res.setHeader("Cache-Control", "no-store, no-cache, must-revalidate");

 // Set IE extended HTTP/1.1 no-cache headers (use addHeader).
 res.addHeader("Cache-Control", "post-check=0, pre-check=0");

 // Set standard HTTP/1.0 no-cache header.
 res.setHeader("Pragma", "no-cache");

The Expires header indicates that the page expired long ago, thus making the page a poor cache candidate. The first Cache-Control header sets three directives that each disable caching. One tells caches not to store this content, another not to use the content to satisfy another request, and the last to always revalidate the content on a later request if it's expired (which, conveniently, it always is). One directive might be fine, but in magic spells and on the Web, it's always good to play things safe.

The second Cache-Control header sets two caching "extensions" supported by Microsoft Internet Explorer. Without getting into the details of nonstandard directives, suffice it to say that setting pre-check and post-check to 0 indicates that the content should always be refetched. Because it's adding another value to the Cache-Control header, we use addHeader( ), introduced in Servlet API 2.2. For servlet containers supporting earlier versions, you can combine the two sets of directives into a single setHeader( ) call.

The last header, Pragma, is defined in HTTP/1.0 and supported by some caches that don't understand Cache-Control. Put these headers together, and you have a potent mix of directives to disable caching. Some programmers also add a getLastModified( ) method that returns a time in the past.
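
If several servlets need this spell, it may be convenient to collect the headers into a small utility. The CacheUtil class and disableCaching( ) method below are my names for such a helper, not part of the servlet API.

 import javax.servlet.http.HttpServletResponse;

 public class CacheUtil {

     /** Applies the full set of no-cache headers from Example 3-8. */
     public static void disableCaching(HttpServletResponse res) {
         // Expire far in the past.
         res.setHeader("Expires", "Sat, 6 May 1995 12:00:00 GMT");

         // Standard HTTP/1.1 no-cache directives.
         res.setHeader("Cache-Control", "no-store, no-cache, must-revalidate");

         // IE-specific extensions, added as a second header value.
         res.addHeader("Cache-Control", "post-check=0, pre-check=0");

         // Standard HTTP/1.0 no-cache header.
         res.setHeader("Pragma", "no-cache");
     }
 }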


