Whereas previous sections explained which settings might prevent Apache from scaling, this section presents some techniques for proactively increasing the performance of your server.
Mapping Files to Memory
As explained previously, accesses to disk affect performance significantly. Although most modern operating systems keep a cache of the most frequently accessed files, Apache also enables you to explicitly map a file into memory so that access to disk isn't necessary. The module that performs this mapping is mod_file_cache. You can specify a list of files to memory map by using the MMapFile directive, which applies to the server as a whole. An additional directive in Apache 2.0, CacheFile, takes a list of files, caches the file descriptors at startup, and keeps them around between requests, saving time and resources for frequently requested files.
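As a sketch, a configuration using these two directives might look like the following (the file paths here are hypothetical examples; substitute paths to your own frequently requested files):

```apache
# Requires mod_file_cache to be loaded.
# Map frequently requested static files into memory at startup:
MMapFile /usr/local/apache2/htdocs/index.html
MMapFile /usr/local/apache2/htdocs/images/logo.gif

# Apache 2.0 only: cache open file descriptors between requests
# instead of mapping the file contents into memory:
CacheFile /usr/local/apache2/htdocs/styles/site.css
```

Note that because the files are processed at server startup, you must restart Apache for changes to the underlying files to become visible.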
Distributing the Load
Another way to increase performance is to distribute the load among several servers. This can be done in a variety of ways:
The fastest way to serve content is not to serve it at all! This can be achieved by using appropriate HTTP headers that inform clients and proxies how long the requested resources remain valid. In this way, resources that appear in multiple pages but don't change frequently, such as logos or navigation buttons, are transmitted only once for a certain period of time.
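One common way to set such expiration headers is with the mod_expires module (not covered in this section, so treat this as an illustrative sketch, assuming that module is compiled in or loaded):

```apache
# Requires mod_expires. Tell clients and proxies that images
# remain valid for one month after they are first accessed,
# so they need not be requested again during that period.
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/gif "access plus 1 month"
    ExpiresByType image/png "access plus 1 month"
</IfModule>
```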
Additionally, you can use mod_cache in Apache 2.0 to cache dynamic content so that it doesn't need to be created for every request. This is potentially a big performance boost because dynamic content usually requires accessing databases, processing templates, and so on, which can take significant resources.
By the Way
As of this writing, mod_cache is still experimental. You can read more about it at http://httpd.apache.org/docs-2.0/mod/mod_cache.html.
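Because mod_cache is still experimental, directive names may change, but as of this writing a minimal sketch of enabling its in-memory backend (mod_mem_cache) looks like this:

```apache
# Requires mod_cache and mod_mem_cache. Cache generated
# responses in memory so dynamic content is not rebuilt
# for every request.
<IfModule mod_cache.c>
    <IfModule mod_mem_cache.c>
        CacheEnable mem /
        MCacheSize 4096            # total cache size, in KBytes
        MCacheMaxObjectCount 100   # maximum number of cached objects
    </IfModule>
</IfModule>
```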
Reducing Transmitted Data
Another method for reducing server load is to reduce the amount of data being transferred to the client. This in turn makes Web site access faster for your clients, especially those on slow links. You can do a number of things to achieve this:
HTTP 1.1 allows multiple requests to be served over a single connection. HTTP 1.0 enables the same thing with keep-alive extensions. The KeepAliveTimeout directive enables you to specify the maximum time in seconds that the server will wait before closing an inactive connection. Increasing the timeout increases the chance that a connection will be reused. On the other hand, it also ties up the connection and its Apache process during the waiting time, which can hurt scalability, as discussed earlier in the chapter.
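A typical persistent-connection configuration balances these two concerns by allowing keep-alive but closing idle connections quickly; the values below are illustrative starting points, not recommendations:

```apache
# Allow persistent connections, but close idle ones quickly so
# that waiting connections don't tie up server processes.
KeepAlive On
KeepAliveTimeout 5        # seconds to wait for the next request
MaxKeepAliveRequests 100  # requests allowed per connection
```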