Caching in Web Farms


Caching is inherently problematic in a multiserver environment such as a Web farm. The cache mechanism is specific to an individual application instance and cannot be shared between servers. Cached items may have application-level scope, but that scope is limited to a single server. Whether this is an issue depends on two main factors:

  • The Web application architecture

  • The type of information being stored in the cache

Internet Protocol (IP) redirectors, such as Cisco's LocalDirector and F5 Networks' BIG-IP, are popular components for managing requests to servers in a Web farm. An IP redirector can ensure that a client's requests get routed to the same server for the duration of the session if you enable the option for so-called sticky sessions. However, if that server crashes or becomes unavailable, the user will be routed to another server that has no record of their cached information. This should not really be a problem if you consider the philosophy behind cached content. As pointed out earlier, cached content is, by definition, re-creatable. The Application object and the Web.config file store information that must be available at all times for the application to function correctly. The Cache object, by contrast, stores transient information in order to improve performance. Cached items are not guaranteed to remain in the cache, so the application should never assume that they will always be there. This point was clearly illustrated in the code examples for the Cache API: An application may attempt to retrieve a DataSet from the cache before making a database call, but the code must check that the cached item exists before referencing it.
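To recap that pattern, here is a minimal sketch of the null check described above. The cache key name and the `GetEmployeesFromDatabase` helper are illustrative placeholders, not names from the original listings:

```csharp
// Attempt to retrieve a DataSet from the cache; fall back to the database.
// "Employees" and GetEmployeesFromDatabase() are hypothetical names.
DataSet ds = Cache["Employees"] as DataSet;
if (ds == null)
{
    // Cache miss: the item expired or was evicted, so rebuild and re-insert it.
    ds = GetEmployeesFromDatabase();
    Cache.Insert("Employees", ds, null,
        DateTime.Now.AddMinutes(10),                    // absolute expiration
        System.Web.Caching.Cache.NoSlidingExpiration);  // no sliding expiration
}
// Only after the null check is it safe to use ds.
```

The `as` cast combined with the null check handles both a missing item and an item of an unexpected type in a single step.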

On a side note, keep in mind that SQL Server provides its own caching mechanism that is independent of ASP.NET. SQL Server caches data pages in memory in order to speed up its response to queries. For example, if multiple users execute the same query, SQL Server will attempt to assemble the resultsets from data pages already in the cache. This approach can have significant performance benefits: cached data pages may be served in nanoseconds, whereas disk reads may take milliseconds. It is not uncommon to run a SQL trace on a stored procedure call and observe a reported response time of zero milliseconds. Of course, the exact response times depend greatly on the type and amount of information you are retrieving.

Regardless of how fast SQL Server may be, the important point is this: SQL Server typically plays a central role in a multiserver Web farm environment. Effectively, SQL Server is a centralized cache. Multiple servers may run their own application instances, but they retrieve their information from a central SQL Server database. The database server maintains a cache that is influenced by all user requests, not just those that originate from a single server. Essentially, Web farm architectures that are built around a centralized database already benefit from a centralized cache. (This is not to ignore other, non-Microsoft database servers, many of which provide similar benefits.)

If you prefer to centralize your cache within ASP.NET itself, you can design a centralized cache using a Web service component, as follows:

  1. Determine what kind of information needs to be cached. For example, let's say your application generates a large DataSet to assemble the same start page for every user.

  2. Create a Web service that contains a method for generating the common DataSet.

  3. Compile the Web service and install it on a server that is accessible to the Web farm. Ideally, this server should be in the same domain, and on the same physical network, in order to minimize communication time.

  4. Add a Web reference to the application code to retrieve the DataSet from the Web service.
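The steps above can be sketched as a simple caching Web service. This is a hypothetical example, assuming a service named `StartPageService` with a `GetStartPageData` method; the cache key, expiration, and `BuildStartPageDataSet` helper are all illustrative:

```csharp
using System;
using System.Data;
using System.Web.Services;

// Hypothetical centralized-cache Web service for the common start page.
public class StartPageService : WebService
{
    [WebMethod]
    public DataSet GetStartPageData()
    {
        // Check this server's own Cache first, with the usual null check.
        DataSet ds = Context.Cache["StartPage"] as DataSet;
        if (ds == null)
        {
            ds = BuildStartPageDataSet();   // the expensive database work
            Context.Cache.Insert("StartPage", ds, null,
                DateTime.Now.AddMinutes(5),
                System.Web.Caching.Cache.NoSlidingExpiration);
        }
        return ds;   // the DataSet serializes to XML in the SOAP response
    }

    private DataSet BuildStartPageDataSet()
    {
        // Placeholder for the actual database call.
        return new DataSet("StartPage");
    }
}
```

Because every Web server in the farm calls the same service, the expensive DataSet is generated once per expiration window rather than once per server.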

Keep in mind that this approach works only for individual cached items, which are managed using the Cache API. Page-level output caching cannot be centralized in a Web service because the Page object cannot be marshaled in a SOAP envelope. There is a cost associated with marshaling large SOAP envelopes across the wire and with serializing the data, especially for complex data. On the positive side, SOAP envelopes are, after all, just text, so an envelope would have to be very large to generate a significant number of bytes.

The critical part of this workaround approach is installing the Web service in close proximity to the Web servers to minimize response times. (Alternatively, you could write a .NET component in place of the Web service and communicate with it using binary formatting over TCP. This approach would eliminate the parsing penalties associated with calling a conventional Web service. However, you also lose the flexibility of a Web service, which may be called from a wider variety of consumers.)
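The remoting alternative mentioned above could be sketched as follows. This is an assumption-laden outline, not the book's implementation: the `CacheProvider` class, port 8085, and the `cacheserver` host name are all hypothetical.

```csharp
using System;
using System.Data;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// Hypothetical remoted cache component; inherits MarshalByRefObject so it
// can be called across the TCP channel by reference.
public class CacheProvider : MarshalByRefObject
{
    public DataSet GetStartPageData()
    {
        // Same caching logic as in the Web service version would go here.
        return new DataSet("StartPage");
    }
}

public class CacheHost
{
    public static void Main()
    {
        // Server side: register a TCP channel (binary formatting by default)
        // and expose CacheProvider as a singleton endpoint.
        ChannelServices.RegisterChannel(new TcpChannel(8085));
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(CacheProvider), "CacheProvider.rem",
            WellKnownObjectMode.Singleton);

        // Client side (on each Web server) would activate a proxy:
        // CacheProvider cache = (CacheProvider)Activator.GetObject(
        //     typeof(CacheProvider), "tcp://cacheserver:8085/CacheProvider.rem");
    }
}
```

The TCP channel uses a binary formatter by default, which avoids the XML parsing overhead of SOAP, at the cost of restricting callers to .NET clients.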

This chapter has presented much of what you need to develop the Web service and to plug it in to the Web client. The next step is for you to run performance tests on your application to determine if the resulting performance is acceptable. Chapter 7, "Stress Testing and Monitoring ASP.NET Applications," provides a detailed discussion for how to run application performance tests.

In summary, you have three options for implementing caching in a Web farm environment:

  • Take no action: You can allow each server to maintain its own cache.

  • Use an IP redirector: You can enable sticky sessions so that each user's requests are handled by the same server.

  • Implement a workaround: You can use a Web service to maintain a centralized cache for every application instance in the Web farm.




Performance Tuning and Optimizing ASP.NET Applications
ISBN: 1590590724
Year: 2005
Pages: 91
