Recipe 13.3. Hosting Rails with Apache 2.2, mod_proxy_balancer, and Mongrel
You want to run the latest stable version of Apache (currently 2.2.2) to serve your Rails application. For performance reasons you want to use some kind of load balancing. Because of financial limitations, or just preference, you're willing to go with a software-based load balancer.
Use the latest version of Apache (currently 2.2.2) along with the mod_proxy_balancer module, and proxy requests to a cluster of Mongrel processes running on a single server, or on several physical servers. Start by downloading the latest version of Apache from a local mirror and unpacking it into your local source directory. (See http://httpd.apache.org/download.cgi for details.)
tar xvzf httpd-2.2.2.tar.gz
A useful convention when installing Apache (or any software where you anticipate working with different versions) is to create an installation directory named after the Apache version, and then create symbolic links to the commands in the bin directory of the version you are currently using. Another timesaver is to create a build script in each Apache source directory; this script should contain the specifics of the configure command that you used to build Apache. The script allows you to recompile quickly and also serves as a reminder of which options were used for your most recent Apache build.
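The version-directory-plus-symlink convention can be sketched as follows. The paths here are illustrative, and a scratch directory stands in for /usr/local/www so the sketch is safe to run as-is:

```shell
# Create a version-named install directory in a scratch location.
base=$(mktemp -d)
mkdir -p "$base/apache2.2.2/bin"

# Stand-in for the real apachectl that `make install` would place in bin/.
printf '#!/bin/sh\necho "apachectl for 2.2.2"\n' > "$base/apache2.2.2/bin/apachectl"
chmod +x "$base/apache2.2.2/bin/apachectl"

# A stable symlink you repoint when you upgrade Apache:
ln -s "$base/apache2.2.2" "$base/current"
"$base/current/bin/apachectl"
```

After an upgrade, repointing the single `current` symlink switches every command over to the new version at once.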
To enable proxying of HTTP traffic, install mod_proxy and mod_proxy_http. For load balancing, install mod_proxy_balancer. For flexibility, you can choose to compile these modules as dynamic shared objects (DSOs) by adding =shared to each --enable option. This allows you to load or unload these modules at runtime. Here's an example of a build script:
./configure --prefix=/usr/local/www/apache2.2.2 \
            --enable-proxy=shared \
            --enable-proxy-http=shared \
            --enable-proxy-balancer=shared
Remember to make this script executable:
chmod +x 1-BUILD.sh
Make sure that the directory used with the --prefix option exists (/usr/local/www in this case). Then proceed with building Apache by running this script. When configuration finishes, run make, and then:
sudo make install
Once Apache is compiled and installed, you configure it by editing the httpd.conf file. First, make sure the modules you enabled during the build are loaded when Apache starts. Do this by adding the following to your httpd.conf (the comments in this file make it clear where these directives go, if you're unsure):
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
You'll need to define a balancer cluster directive that lists the members that will share the load with each other. In this example, the cluster is named blogcluster, and consists of four Mongrel processes, all running on the local host but listening on different ports (4000 through 4003). To specify a member, specify its URL and port number:

<Proxy balancer://blogcluster>
  # cluster members
  BalancerMember http://127.0.0.1:4000
  BalancerMember http://127.0.0.1:4001
  BalancerMember http://127.0.0.1:4002
  BalancerMember http://127.0.0.1:4003
</Proxy>
Note that the members of the cluster may be on different servers, as long as each member's IP address and port are reachable from the server hosting Apache.
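For example, a cluster spread across two hypothetical backend hosts (the addresses here are illustrative) would look like:

```apache
<Proxy balancer://blogcluster>
  # two Mongrel processes on each of two backend servers
  BalancerMember http://192.168.0.10:4000
  BalancerMember http://192.168.0.10:4001
  BalancerMember http://192.168.0.11:4000
  BalancerMember http://192.168.0.11:4001
</Proxy>
```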
Next, create a VirtualHost directive that contains ProxyPass directives to forward incoming requests to the cluster:

<VirtualHost *:80>
  ProxyPass /balancer-manager !
  ProxyPass /server-status !
  ProxyPass / balancer://blogcluster/
  ProxyPassReverse / balancer://blogcluster/
</VirtualHost>
The two optional ProxyPass directives provide some status information about the server, as well as a management page for the cluster. To access these status pages without the catchall ProxyPass (/) attempting to forward these requests to the cluster, use a ! after the path to indicate that these are exceptions to the proxying rules (these exceptions also need to be defined before the / catchall).
Now configure the cluster. You can do that with one command; the following creates a configuration for a cluster of four Mongrel processes, listening on consecutive ports starting with port 4000:
mongrel_rails cluster::configure -e production -p 4000 -N 4 \
-c /var/www/blog -a 127.0.0.1
This command generates the following Mongrel cluster configuration file (config/mongrel_cluster.yml):

---
port: "4000"
environment: production
address: 127.0.0.1
cwd: /var/www/blog
servers: 4

Start the cluster with:

mongrel_rails cluster::start
Then start Apache with:

sudo /usr/local/www/apache2.2.2/bin/apachectl start
Once you have Apache running, test it from a browser or view the balancer-manager to verify that you have configured your cluster as expected and that the status of each node is "OK."
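For the balancer-manager and server-status pages to respond at those URLs, the corresponding handlers must be mapped in httpd.conf; a minimal sketch:

```apache
<Location /balancer-manager>
  SetHandler balancer-manager
</Location>

<Location /server-status>
  SetHandler server-status
</Location>
```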
The balancer-manager is a web-based control center for your cluster. You can disable and re-enable cluster nodes, or adjust the load factor to send more or less traffic to specific nodes. Figure 13-1 shows the status of the cluster configured in the Solution.
Figure 13-1. Apache's balancer-manager cluster administration page
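The load factor can also be set statically in httpd.conf via the loadfactor parameter on each BalancerMember; the weights below are illustrative:

```apache
<Proxy balancer://blogcluster>
  # this member receives roughly twice the traffic of the other
  BalancerMember http://127.0.0.1:4000 loadfactor=2
  BalancerMember http://127.0.0.1:4001 loadfactor=1
</Proxy>
```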
While the balancer-manager and server-status utilities are informative for site administrators, the same information can be used against you if they are publicly available. It's best to disable or restrict access to these services in a production environment.
To restrict access to balancer-manager and server-status to a list of IP addresses or a network range, modify the Location directives for each service to include network access control (using Order, Deny, and Allow directives):

<Location /balancer-manager>
  SetHandler balancer-manager
  Order deny,allow
  Deny from all
  # allow requests from localhost and one other IP
  Allow from 127.0.0.1 192.168.0.50
</Location>

<Location /server-status>
  SetHandler server-status
  Order deny,allow
  Deny from all
  # allow requests from an IP range
  Allow from 192.168.0
</Location>