9.1. Mongrel's Security Design

9.1.1. Strict HTTP 1.1 Parsing

Today's Web servers use hand-written parsers that are incredibly lax about processing their input, despite HTTP 1.1 having a fully specified grammar. These Web servers are in effect using a blacklist to determine what's a valid request: "We accept everything, oh, except that, oh, and that, oh, that too."

Mongrel uses a strict HTTP 1.1 parser generated by the fantastic Ragel generator. This parser is not so strict that slightly wrong clients are rejected, so don't worry about losing customers. It is strict, however, in the sizes of each element allowed, the grammar of the important parts, and its grammar is directly comparable with the HTTP 1.1 specification.[19]

[19] You can actually compare them side-by-side to make sure Mongrel is following the specification. That's not possible with a hand-written parser.

Mongrel's use of a parser changes the Web server input security policy from a blacklist to a whitelist. Mongrel is telling the world, "I reject everything except this exact grammar specification." Not only is Mongrel's parser able to reject large numbers of malicious requests without any special "security lists," it is also very fast thanks to Ragel.[20]
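The whitelist idea can be sketched in a few lines. This is purely illustrative Ruby, not Mongrel's actual parser (which is C code generated by Ragel from a grammar): a request line either matches the exact expected shape or is rejected outright, with no list of known-bad patterns to maintain.

```ruby
# Whitelist-style parsing sketch: define what is allowed, reject everything else.
# The real grammar is far richer; this regex is a toy stand-in for it.
REQUEST_LINE = %r{\A([A-Z]+) (/[^ ]*) HTTP/1\.[01]\r\n\z}

def parse_request_line(line)
  m = REQUEST_LINE.match(line) or raise "parse error: malformed request line"
  { method: m[1], uri: m[2] }
end

parse_request_line("GET /index.html HTTP/1.1\r\n")       # accepted
parse_request_line("GET /x HTTP/9.9\r\n") rescue nil      # rejected: not in the grammar
```

The point is the policy, not the regex: a blacklist grows forever as new attacks appear, while a whitelist derived from the specification never needs a "security list" update.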

[20] Using Ragel also helped cut down development time, made it easier to validate against the RFC, made it possible for others to reuse the parser in their projects, and was just a really great idea.

Zed Sez

All right, pretty people, listen up. You went to school one day (or maybe you didn't, lucky you) and some professor told you that a parser written by hand is faster than what a parser generator (like Ragel) creates. You assumed he was right and never ever, ever questioned him again and will probably die still thinking, "Gee, parsers written by hand are super fast." Parser generators have been under twenty or thirty years of continuous development and research, and can now beat the pants off nearly anything some dip at an Apache group can write by hand. Why? Not because they are faster but because they are correct. Mongrel has shown that the speed is only marginally different, that the parser is not the bottleneck (it's IO, dumbass), and that having the ability to explicitly kill badly behaving clients is the best way to protect against malicious attacks. Most important, though, every hand-written parser (and HTTP clients or servers) starts off really simple, then blossoms into a giant morass of horribly twisted crap that houses all the major security holes. So, take your panties out of their tightly coiled bunch and move on.

9.1.2. Request Size Limitations

The HTTP 1.1 standard doesn't set any limits on anything except maybe the size of allowed cookies (and even that is very loose). You could put a full-length DVD in a header and still technically be correct (the best kind of correct). For a little server running in an interpreted language like Ruby, this is a murderous path to destruction. Without limits on requests, a malicious attacker could easily craft requests that exceed the server's available memory and potentially cause buffer overflows.

In addition to strictly processing all input in exactly sized elements, Mongrel also has a set of hard-coded size limits. These limits are fairly large for practical purposes, but some people really like to stretch the limits of the standard. The limitations as of 0.3.x are:

  • field names: 256 bytes

  • field values: 80k bytes

  • request URI: 2k bytes

  • request path: 12k bytes

  • query string: 10k bytes

  • total header: 112k bytes

Since Mongrel's creation nobody has complained about these limits. It's quite possible someone may hit them in the future, but simply not using these elements to store their data is much easier than trying to get Mongrel's limits changed.

When you do exceed one of these limits, Mongrel reports an error message to the console or mongrel.log telling you what limit was triggered.
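A minimal sketch of what such hard-coded limit checks look like, using the 0.3.x numbers listed above. The constant names and the error class here are assumptions for illustration; in Mongrel itself the checks happen inside the generated parser as each element is read, and a violation is reported as a parser error in the log.

```ruby
# Hard-coded size limits in the spirit of Mongrel 0.3.x (illustrative only).
LIMITS = {
  field_name:   256,          # bytes
  field_value:  80 * 1024,
  request_uri:  2 * 1024,
  request_path: 12 * 1024,
  query_string: 10 * 1024,
  total_header: 112 * 1024,
}

class HttpParserError < StandardError; end

def check_limit(element, value)
  max = LIMITS.fetch(element)
  if value.bytesize > max
    raise HttpParserError, "#{element} is longer than the #{max} allowed length"
  end
  value
end

check_limit(:request_uri, "/products/42")            # fine
check_limit(:field_name, "X-" + "A" * 300) rescue nil  # over 256 bytes: rejected
```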

9.1.3. Limiting Concurrent Processors

Mongrel places a limit on the number of concurrently connected clients due mostly to a limitation in Ruby's ability to handle files. If a client connects and there are already too many connected, then Mongrel closes that client and starts trying to kill off any currently running threads that may be too old. It also logs messages saying this was necessary so that you can update your deployment to handle the new load factor.

The alternative is to simply let Mongrel die. By rejecting some clients and telling you that there's an overload problem, Mongrel lets you keep serving a portion of your user base until you can scale your deployment up.
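The policy can be sketched as follows. This is hypothetical code, not Mongrel's actual internals: the ceiling and the "too old" timeout are assumed values, chosen here to stay under Ruby's 1,024 open-file limit discussed below.

```ruby
MAX_CONCURRENT = 950   # assumed: kept below Ruby's 1,024 open-file ceiling
DEATH_TIME     = 60.0  # assumed: seconds before a worker counts as too old

# Past the limit: close the new client, reap long-running worker threads,
# and log so the operator knows it's time to scale the deployment.
def accept_client(workers, client, max: MAX_CONCURRENT, now: Time.now, &handler)
  if workers.size >= max
    client.close                           # shed load instead of dying
    workers.delete_if do |t|               # kill workers that may be wedged
      (now - t[:started] > DEATH_TIME).tap { |old| t.kill if old }
    end
    warn "server overloaded: rejected a client, reaped old threads"
    return nil
  end
  t = Thread.new(client, &handler)
  t[:started] = now
  workers << t
  t
end
```

Shedding individual clients this way degrades service gracefully; the alternative, accepting everything until the process runs out of file descriptors, takes the whole server down for everyone.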

9.1.4. No HTTP Pipelining or Keep-Alives

In HTTP 1.1 a feature called "pipelining" was introduced, and the request/response model was changed to a "persistent connection" model using keep-alives. This was useful back when setting up a connection over a phone line was expensive, but in modern times these features mostly place extra load on the HTTP server, tying up precious concurrent processors so other clients can't use them.

Mongrel is also at the mercy of Ruby's file I/O limitations. There are plans to improve how Ruby handles files, but right now it can only keep 1,024 files open at a time on most systems (even fewer on some). Allowing clients to hold connections open indefinitely means that a malicious attacker can simply make a series of "trickled connects" until your Mongrel processes run out of available files. It literally takes seconds to do this.

Another questionable point about HTTP pipelining and keep-alives is that there seem to be no statistically significant performance benefits with modern networking equipment or localhost configurations (which Mongrel is almost always deployed in). The complexity increase implementing this part of the standard simply doesn't justify the almost non-existent benefit in today's deployment scenarios.

Rather than implement more and more complexity, Mongrel simply uses the special "Connection: close" response header to indicate to the client and any proxy servers that it's done. This reverts Mongrel back to the HTTP 1.0 behavior of one request per connection, and actually does improve its performance under heavy load for the above reasons. It's possible that this might change in the future, but for now it works out great and only HTTP RFC Nazis seem to care.[21]

[21] More than a few giant companies (including one that does-no-evil) and several proxy server vendors all were caught not honoring the "Connection: close" response in their proxy server software. Ironically they came to me claiming Mongrel didn't follow the RFC, when it was actually their proxy servers that didn't.

Zed Sez

Your application is slow as dirt and you are convinced that you need keep-alives and pipelining to make it go faster. You've got to have it or nothing will work, the world will crumble to dust, and we'll all be replaced by intelligent squids after the human race is long gone. It's that serious.

Look, you don't need any of that simply because this isn't 1996. Back in the day keep-alives and pipelining were added so that people on phone lines didn't have to wait, but the assumption that this is necessary in all networking situations has never been tested heavily. These parts of the RFC only add complexity, are ambiguous, and don't really make things fast enough to justify the development and maintenance overhead for a server that's run on localhost or controlled networks.

Most important, though, is that allowing them makes it so that a malicious attacker just has to start up more than 500 keep-alive or pipelined requests and your whole Ruby application eats it. Why? Because Ruby 1.8.x uses the select() function to do its IO and that only supports 1,024 open files on most systems.

Another problem, though, is that people use keep-alives and pipelining as a Band-Aid for horribly designed systems. I ran into one fellow who desperately needed keep-alives so he could send three characters per request to a client at 2,000 requests/second. Yes, just three characters. Here's a clue: batched processing. In almost every instance that people have claimed to need these features, a simple design change to batched processing or an improved network design removed the problem and simplified everything.

9.1.5. No SSL

Mongrel does not support SSL. Typically you will be running Mongrel behind a static Web server (see Section 4), in which case you'll have SSL support from there. Make sure you are sending headers to Mongrel properly so that it will respond with HTTPS instead of HTTP.
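In practice "sending headers to Mongrel properly" means having the front-end server stamp each proxied request with a header saying the original connection was HTTPS, and having your application check it when building URLs. The sketch below assumes the de-facto "X-Forwarded-Proto" header; your front end may spell it differently, so treat the header name as an assumption, not gospel.

```ruby
# SSL is terminated at the front-end server; the app only sees a hint header.
def https?(env)
  env["HTTP_X_FORWARDED_PROTO"] == "https"   # assumed proxy header
end

def absolute_url(env, path)
  scheme = https?(env) ? "https" : "http"
  "#{scheme}://#{env["HTTP_HOST"]}#{path}"
end
```

If the front end forgets to set the header, redirects and generated links silently come out as http://, which is usually the first symptom people notice.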

Zed Sez

You don't need SSL either. Well, you need it at some point in your deployment configuration, but why would you want a slow language like Ruby to do the SSL when there are already really great, fast Web servers? Ruby is a precious, slow commodity that you should use only when you absolutely must.

9.1.6. No [Name Your Must-Have Feature]

As you have probably noticed, Mongrel says "No" in many places where most Web servers say "Yes, OK." Sometimes this is because no one using Mongrel has needed the feature yet; sometimes it's because there's a better, simpler way to accomplish the same goal. Mongrel is a different kind of Web server, and frequently you can solve your problem with a different solution.

Ready for another catchphrase? Constraints are liberating. When you are given a billion different options, you can become paralyzed by choice. When you are forced to work within reasonable constraints, you can focus on the problem of getting your task done rather than deciding how to configure your Web server. Mongrel's feature set and limitations work perfectly for about 95% of the current users. Those people who have additional requirements can easily extend Mongrel using the plugins, commands, and handlers. The rest should find a tool better-suited to their job or change their requirements.

Mongrel: Serving, Deploying, and Extending Your Ruby Applications
ISBN: 9812836357
Year: 2006
Pages: 48