5.5. Configuration

5.5.1. Mongrel[7]

[7] Much of this section is based on documentation donated by Austin Godber to the Mongrel project.

We're going to use mongrel_cluster to manage your small set of Mongrel applications. We'll need the mongrel_cluster gem, and we'll first just set up the testapp so you can see how it's done. This will lay the groundwork for your real production application.

5.5.2. A Simple Test Rails Application

There's a bit of a chicken-and-egg problem with this kind of setup. In order to test that your web1 machine works with your app1 machine, you need a working application on app1. But you can't really get your application working without web1 and db1 working. The best thing to do is to create a small "test" application and serve it manually until you're ready for the big time:

$ cd /var/www/apps
$ rails testapp
$ cd testapp
$ mongrel_rails start

You should then be able to visit http://localhost:3000/ and see the Rails test page. That means your Mongrel setup is working, but you'll want something that also shows Rails is working. Stop Mongrel with CTRL-C and do:

$ script/generate controller test

Edit app/controllers/test_controller.rb and add this to the TestController class:

def index
  render :text => "test"
end

Then start up Mongrel again and go to http://localhost:3000/test to see your little tester Rails application.

5.5.3. mongrel_cluster

If you haven't yet, stop your Mongrel instance with CTRL-C and install the mongrel_cluster gem:

$ gem install mongrel_cluster

Next we'll want to configure our testapp instance so that it runs three Mongrel processes starting at port 8000:

$ sudo mongrel_rails cluster::configure -e production \
    -p 8000 -N 3 -c /var/www/apps/testapp -a \
    --user mongrel --group mongrel
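The configure command above writes a YAML file describing the cluster. Roughly, the generated file looks like the following sketch; the exact keys can vary with the mongrel_cluster version, so treat the values shown here as illustrative rather than definitive:

```yaml
---
cwd: /var/www/apps/testapp
port: "8000"
environment: production
pid_file: log/mongrel.pid
servers: 3
user: mongrel
group: mongrel
```

If you ever need to change the port range or the number of servers, you can edit this file directly instead of re-running cluster::configure.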

This creates a configuration file mongrel_cluster.yml and puts it in /var/www/apps/testapp/config. Now you can start this cluster using:

$ cd /var/www/apps/testapp
$ mongrel_rails cluster::start

And of course the command cluster::stop will stop it. You should now (with the cluster still running) try to hit each port (8000, 8001, 8002) with your browser and make sure the test pages are present.
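Rather than clicking through each port in a browser, you can probe all three at once. The following is a small Ruby sketch, not part of the book's toolchain; the hostname is passed on the command line, and the ports mirror the cluster::configure flags used above:

```ruby
#!/usr/bin/env ruby
# Quick sanity check for the cluster: hits each Mongrel port on the given
# host and reports the HTTP status code (or DOWN if the port is unreachable).
require 'net/http'
require 'uri'

# Build the URL for each of `count` Mongrels starting at `base_port`.
def cluster_urls(host, base_port, count)
  (0...count).map { |i| "http://#{host}:#{base_port + i}/" }
end

# Usage: ruby check_cluster.rb app1
host = ARGV[0]
if host
  cluster_urls(host, 8000, 3).each do |url|
    begin
      res = Net::HTTP.get_response(URI.parse(url))
      puts "#{url} -> #{res.code}"
    rescue StandardError => e
      puts "#{url} -> DOWN (#{e.class})"
    end
  end
end
```

Run it as `ruby check_cluster.rb localhost` while the cluster is up; any port that reports DOWN needs investigating before you move on.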

We're going to leave this running for now while we set up the web1 Apache server. Then we'll come back and finalize the install by putting your application on, configuring it, and adding it to the boot process for app1.

5.5.4. Apache

Refer to Section 4 to learn how to get your Apache 2.2.x install up and running. If you're using virtual hosts then make sure they are configured correctly and serving the files for your domain name.

Once you have everything running you'll need to change your configuration to point your virtual host at the app1 servers:

<Proxy balancer://mongrel_cluster>
  BalancerMember http://app1:8000
  BalancerMember http://app1:8001
  BalancerMember http://app1:8002
</Proxy>

Change app1 to be either your real hostname or the IP address. Restart your Apache instance according to your system's process restart method and hit http://web1/ to see if your application comes up. If not, double-check the following:

  1. Can you access ports 8000, 8001, 8002 from web1 to app1? Use curl http://app1:8000/ to test it quickly.

  2. Are you using fancy rewrite rules? Disable or remove your rewrite rules for now. They aren't doing you any good at this stage since the app1 server is in a different location and they only add complexity.

  3. Can you even access http://web1/ normally?

  4. Is it plugged in? No, seriously, are all the machines turned on and networked properly? Hopefully you aren't trying to do this in the midst of a firewall configuration, so try to ssh to each machine and make sure they are accessible.

  5. Did you check the Apache server logs for errors?

  6. Did you check the Mongrel server mongrel.log and production.log files? There are usually clues in there.

  7. If you get really desperate, refer to Dash-Bee Logging in Section 7, then shut down your testapp cluster. With the cluster down you can start a single Mongrel instance on port 8000 in debug mode with: mongrel_rails start -e production -p 8000 -B. Run this and then try accessing your Apache server again, and see if log/mongrel_debug/rails.log records any information about the requests.
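Putting the pieces together, a minimal virtual host for web1 might look like the following sketch. The ServerName and the assumption that mod_proxy and mod_proxy_balancer are loaded are specifics of your setup, not something this book prescribes:

```apache
<VirtualHost *:80>
  ServerName web1

  <Proxy balancer://mongrel_cluster>
    BalancerMember http://app1:8000
    BalancerMember http://app1:8001
    BalancerMember http://app1:8002
  </Proxy>

  # Send everything to the Mongrel cluster on app1.
  ProxyPass / balancer://mongrel_cluster/
  ProxyPassReverse / balancer://mongrel_cluster/
</VirtualHost>
```

The ProxyPass lines are what actually route requests to the balancer; the Proxy block alone only defines the pool of backends.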

5.5.5. MySQL

You now have your web1 server balancing against your app1 Mongrel cluster, and you're ready to get your database configured on db1. First, though: you should have been using Rails migrations this whole time. It was mentioned in Section 5.1 as something you should do, but at this point it's pretty much a requirement. Later you'll have to deploy your schema to MySQL and then manage versioning of that schema, so if you want to be a hero and do it manually, make sure your SQL scripts are set up properly.

If you don't have migrations, you're on your own (and probably very much in trouble). You'll have to go through the MySQL configuration and just make it up as you go.

What you need to do is first make sure MySQL is running and configured properly and that you can use the mysql tool to connect to localhost on db1. Once you can do that, follow these steps:

  1. Set up the MySQL user you plan to use for your production deployment on db1.

  2. From app1 use the mysql tool to try connecting to db1 and run a few queries. You may have to install this tool on app1 if you went the purist's route and only installed the mysql client libraries.

  3. Once these two machines are talking mysql protocol, you're ready to install your production application and get it working.

Until you can connect manually from app1 to db1, you shouldn't continue. Permissioning and user management in MySQL are rather weird and painful (well, as with any DBMS), so we usually break down and install phpMyAdmin or webmin to manage it. Yeah, we're evil.
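As a starting point for step 1, the grants below are a minimal sketch you could run from the mysql prompt on db1. The database name, username, and password are placeholders of our own, not names from this book:

```sql
-- Create the production database and a user that may connect from app1.
CREATE DATABASE myapp_production;
GRANT ALL PRIVILEGES ON myapp_production.*
  TO 'myapp'@'app1' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```

Note the host qualifier ('myapp'@'app1'): MySQL treats the same username from different hosts as different accounts, which is the usual stumbling block when app1 can't connect but localhost can.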

5.5.6. Last Step: Your Application in Production

These instructions are intended to get you up and running quickly with a basic best-practice deployment, while also teaching you how everything is installed and managed. In reality, everyone who's half smart about anything uses Capistrano to do their deployments. Capistrano is excellent software that uses Rake and ssh to automate application deployments. It can manage fairly large installations, restart servers, roll back failed deployments, and report status as well.

The problem with Capistrano is that until you've done one deployment manually it's difficult to use Capistrano to automate your installation. We usually do the first deployment by hand in a new location, and then we step back and plan the automation with Capistrano. As you gain experience doing this you'll be able to start off right away with Capistrano, but for now we're going to do this first deployment by hand.

It is a sin of the highest degree to keep deploying manually once you've done your first deployment by hand. Capistrano is a fantastic tool that is easy to understand if you know even a small amount about build tools (Rake, Make, SCons, nmake). Taking the time after this deployment to automate your work will save you much pain and anger in the future.

Get Your Application On

You've got two approaches to follow to get your code onto the app1 server:

  1. From your development system just copy it over to app1:/var/www/apps/myapp using scp or rsync. You should really only do this if your app1 machine cannot access your version control repository because of firewall issues.

  2. Log onto app1 and check out your application from version control directly into the /var/www/apps/myapp directory. This is the preferred way, since it is how Capistrano does things. In fact, only use option #1 in desperation, since you'll have to change things later to accommodate Capistrano.

Let us assume that you've got your code in Subversion and you're going to set up your application just like your testapp:

$ cd /var/www/apps
$ svn co http://mysvn/repository/myapp
$ sudo mongrel_rails cluster::configure -e production \
    -p 8000 -N 3 -c /var/www/apps/myapp -a \
    --user mongrel --group mongrel
$ cd testapp; mongrel_rails cluster::stop
$ cd ../myapp; mongrel_rails cluster::start

Your application should start up and you should see tons of errors. What you must do now is configure this deployment so that it uses your db1 database, runs your rake db:migrate task, and completes any configurations your application specifically needs. It might be a good idea during this stage to not use mongrel_cluster but to just run the application manually with mongrel_rails start -p 8000 and test against http://app1:8000/ until the application works.
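The first file to look at is config/database.yml. A production stanza pointing at db1 might look like the sketch below; the database name, username, and password are placeholders of our own, and the adapter name reflects the MySQL driver of that Rails era:

```yaml
# config/database.yml -- production settings pointing at the db1 machine.
production:
  adapter: mysql
  database: myapp_production
  host: db1
  username: myapp
  password: secret
```

Once this connects, running rake db:migrate RAILS_ENV=production from the application directory should build your schema on db1.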

Halt! Before you start to change the files you just checked out, take a step back. You have a revision control tool that helps you keep track of your source changes, but if you edit files outside of this process then you'll run into conflicts later. Since you've checked the source out, you have three choices that make sense:

  1. Consider your checkout a "development action" and plan on checking your changes in when you are done. This means you have to coordinate with the other developers and make sure your changes don't break their build.

  2. Consider this a "deployment action" and that it should remain untouched. In this case you would do development on another machine, check the changes in, and then svn update on the app1 machine.

  3. Don't do a svn co and instead do an svn export to get a copy of the source that can't be changed. This is the safest approach but also kind of a pain.

Our general process is to do #1 for the first deployment since it's quicker and more direct, then when we automate with Capistrano we use #3 to keep things sane and safe.[8]

[8] When you svn co your source, it leaves .svn directories lying around where your Web server can serve them to the world. This can lead to security problems, especially if you do silly things like put passwords into your source or have other sensitive items viewable.

Systemize Your Deployment

The mongrel_cluster gem comes with an /etc/init.d startup script that works on most Linux systems. There's a small process you can follow so that your newly minted and functioning application is started whenever the machine reboots.

$ mkdir /etc/mongrel_cluster
$ ln -s /var/www/apps/testapp/config/mongrel_cluster.yml \
    /etc/mongrel_cluster/testapp.yml
$ cp /path/to/mongrel_cluster_gem/resources/mongrel_cluster \
    /etc/init.d/
$ chmod +x /etc/init.d/mongrel_cluster

Then configure your system to run /etc/init.d/mongrel_cluster on start and stop (for example, with update-rc.d on Debian-style systems or chkconfig on Red Hat-style systems) and you're set. You should try rebooting your machine to make sure the application actually does start properly.

The Final Get-Together

The only remaining task is to actually make sure all the pieces work. It's a good idea to test each component in order from back to front, and then test them as one whole. If you were smart you'd also automate this test process and include it in your post-production deployment process.

Cheap Simple Caching

Normally you'll need to set up Apache mod_rewrite rules of varying complexity levels to take advantage of Rails' page caching. Since you have a front-facing web1 server you'll have to do some research to find a way to share disk with app1. This is fairly complex but there is a quick and dirty way around it for many people's applications: mod_cache.

<IfModule mod_cache.c>
  LoadModule mem_cache_module modules/
  CacheEnable mem /
  MCacheSize 4096
  MCacheMaxObjectCount 100
  MCacheMinObjectSize 1
  MCacheMaxObjectSize 2048
</IfModule>

This configuration is only a sample taken from the official mod_cache documentation, but it demonstrates the trick. It will cache the majority of your content and will work as long as your content doesn't require immediate updating. There are also ways to exclude some locations, to cache to disk, and to control the cache organization. Using the memory cache often works well, since you can just bounce the server after a deployment to reset it, but the details of your application will determine whether it is a good fit for you.

Another trick you should investigate is using an asset server in Ruby on Rails. You make this setting in config/environments/production.rb:

config.action_controller.asset_host = "http://assets.web1"

This tells Rails to write URIs for assets (JavaScript, images, stylesheets, etc.) so that they point to a server named "assets.web1". If you then set up a virtual host on web1 to answer for this name, you will be able to put a majority of your assets on that server and have them served directly rather than from app1.

Mongrel. Serving, Deploying, and Extending Your Ruby Applications
ISBN: 9812836357
Year: 2006
Pages: 48