Chapter 4. Open Source and Security


Ben Laurie

More than two years ago, in a fit of frustration over the state of open source security, I wrote my first and only blog entry[1] (for O'Reilly's Developer Weblogs):

[1] http://www.oreillynet.com/pub/wlg/2004.

June and July were bad months for free software. First came the Apache chunked encoding vulnerability,[2] and just when we'd finished patching that, we got the OpenSSH hole.[3] Both of these are pretty scary: the first makes every single web server potentially exploitable, and the second makes every remotely managed machine vulnerable.

But we survived that, only to be hit just days later with the BIND resolver problems.[4] Would it ever end? Well, there was a brief respite, but then, at the end of July, we had the OpenSSL buffer overflows.[5]

All of these were pretty agonising, but it seems we got through it mostly unscathed, by releasing patches widely as soon as possible. Of course, this is painful for users and vendors alike, who have to scramble to patch systems before exploits become available. I know that pain only too well: at The Bunker,[6] we had to use every available sysadmin for days on end to fix the problems, each of which seemed to arrive before we'd had time to catch our breath from the previous one.

But I also know the pain suffered by the discoverer of such problems, so I thought I'd tell you a bit about that. First, I was involved in the Apache chunked encoding problem. That one was pretty straightforward, because the vulnerability was released without any consultation with the Apache Software Foundation, a move I consider most ill-advised, but one that did at least simplify our options: we had to get a patch out as fast as possible. Even so, we thought we could take a little bit of time to produce a fix, since all we appeared to be looking at was a denial-of-service attack, and let's face it, Apache doesn't need bugs to suffer denial of service; all this did was make it a little cheaper for the attacker to consume your resources.

That is, until Gobbles[7] came out with an exploit for the problem. Now, this really is the worst possible position to be in: not only is there an exploitable problem, but the first you know of it is when you see the exploit code. Then we really had to scramble. First we had to figure out how the exploit worked, which I did by attacking myself and running Apache under gdb. I have to say that the attack was rather marvellously cunning, and for a while I forgot the urgency of the problem while I unravelled its inner workings. Having worked that out, we were finally in a position to fix the problem and, perhaps more importantly, to prevent it more generically from occurring again through a different route. Once we had done that, it was just a matter of writing the advisory, releasing the patches, and posting the advisory to the usual places.
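As an aside for the technically curious: the chunked encoding hole (CVE-2002-0392) boiled down to signed/unsigned confusion over an attacker-supplied chunk length. What follows is a minimal sketch of that bug class in C, with invented names and structure; it is emphatically not Apache's actual code.

    /*
     * A hypothetical illustration of the signed/unsigned bug class
     * behind the chunked encoding hole. NOT Apache's actual code;
     * all names and structure here are invented.
     */
    #include <stdio.h>
    #include <string.h>

    #define BUFSIZE 4096

    static void read_chunk(long chunk_len, const char *body)
    {
        char buf[BUFSIZE];

        /* chunk_len was parsed from an attacker-supplied hex
         * chunk-size line; with signed arithmetic, a huge value
         * can wrap around to a negative one. */
        if (chunk_len > BUFSIZE) {
            fprintf(stderr, "chunk too big, rejected\n");
            return;           /* a negative length sails through */
        }

        /* memcpy's size argument is size_t (unsigned), so a negative
         * chunk_len becomes an enormous length and the copy smashes
         * the stack far beyond the end of buf. */
        memcpy(buf, body, (size_t)chunk_len);
        printf("copied %ld bytes, first byte '%c'\n", chunk_len, buf[0]);
    }

    int main(void)
    {
        read_chunk(16, "0123456789abcdef");  /* a benign chunk */
        /* read_chunk(-16, ...) is the attack: it passes the signed
         * check, then memcpy is called with a gigantic length. */
        return 0;
    }

The generic fix is the obvious one: parse and carry lengths in an unsigned type, and reject anything larger than the buffer before copying.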

The OpenSSL problems were a rather different story. I found these whilst working on a security review of OpenSSL commissioned by DARPA[8] and the USAF.[9] OpenSSL is a rather large and messy piece of code that I had, until DARPA funded it, hesitated to do a security review of, partly because it was a big job, but also partly because I was sure I was going to find stuff. And sure enough, I found problems (yes, I know this flies in the face of conventional wisdom: many eyes may be a good thing, but most of those eyes are not trained observers, and the ones that are do not necessarily have the time or energy to check the code in the detail that is required). Not as many as I expected, but then, I haven't finished yet (and perhaps I never will; it does seem to be a never-ending process). Having found some problems, which were definitely exploitable, I was then faced with an agonising decision: release them and run the risk that I would find more, forcing the world to go through the process of upgrading again, or sit on them until I'd finished, and run the risk that someone else would discover them and exploit them.
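For concreteness, the most serious of the problems in the eventual advisory (CAN-2002-0656) were of the classic trust-the-wire-length variety: an attacker-supplied length field was used in a copy into a fixed-size buffer without being checked. The sketch below shows that pattern and its fix; it is an invented example, assuming a small fixed-size key_arg field purely for illustration, and is not OpenSSL's actual code.

    /*
     * A hypothetical illustration of the unchecked-length bug class
     * behind the OpenSSL overflows. NOT OpenSSL's actual code; the
     * structure and names are invented for illustration.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define KEY_ARG_MAX 8     /* fixed-size field in the session */

    struct session {
        uint8_t key_arg[KEY_ARG_MAX];
    };

    /* Vulnerable pattern: the length field comes straight off the
     * wire and is trusted, so a large value overflows key_arg. */
    static void store_key_arg_vulnerable(struct session *s,
                                         const uint8_t *msg,
                                         size_t wire_len)
    {
        memcpy(s->key_arg, msg, wire_len);
    }

    /* Fixed pattern: validate the attacker-supplied length first. */
    static int store_key_arg_fixed(struct session *s,
                                   const uint8_t *msg,
                                   size_t wire_len)
    {
        if (wire_len > KEY_ARG_MAX)
            return -1;        /* reject an oversized field */
        memcpy(s->key_arg, msg, wire_len);
        return 0;
    }

    int main(void)
    {
        struct session s;
        const uint8_t msg[KEY_ARG_MAX] = {0};

        store_key_arg_vulnerable(&s, msg, sizeof(msg)); /* benign here */
        if (store_key_arg_fixed(&s, msg, sizeof(msg)) == 0)
            puts("key_arg accepted");
        return 0;
    }

The worm that appears at the end of this entry spread by exploiting an unchecked length of exactly this kind in OpenSSL's SSL2 key-exchange handling.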

In fact, I dithered on this question for at least a month. Then one of the problems I'd found was fixed in the development version without even being noted as a security fix, and another was reported as a bug. I decided life was getting too dangerous and released the advisory, complete or not. Now, you might think that not being under huge time pressure is a good thing, but in some ways it is not. The first problem came because various other members of the team thought I should involve various other security alerting mechanisms: for example, CERT[10] or a mailing list operated by most of the free OS vendors.[11] But there's a problem with this: CERT's process is slow and cumbersome, and I was already nervous about delay. Vendor security lists are also dangerous, because you can't really be sure who is reading them and what their real interests are. And, more deeply, I have to wonder why vendors should have the benefit of early notification, when it is my view that they should arrange things so that their users can use my patches as easily as I can. I build almost everything from original source, so patches tend to be very easy to use. RPMs[12] and ports[13] make this harder, and vendors who release no source at all clearly screw up their customers completely. Why should I help people who are getting in the way of the people who matter (i.e., the users of the software)?

Then, to make matters worse, one of the more serious problems was reported independently to the OpenSSL team by CERT, who had been alerted by DefCon.[14] I was going, and there was no way I was delaying release of the patches until after DefCon. So, the day before I got on a plane, I finally released the advisory. And the rest is history.

So, what's the point of all this? Well, the point is this: it was a complete waste of time. I needn't have agonised over CERT or delay or any of the rest of it. Because half the world didn't do a damn thing about the fact that they were vulnerable, and because of that, as of yesterday, a worm is spreading through the Net like wildfire.

Why do I bother?

[2] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2002-0392.

[3] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2002-0639.

[4] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2002-0651.

[5] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CAN-2002-0656.

[6] Back in those days, The Bunker belonged to A.L. Digital Ltd., and it wasn't called The Bunker Secure Hosting.

[7] A hacker (or group of hackers, it is not known which).

[8] The United States Defense Advanced Research Projects Agency, responsible for spending a great deal of money on national security; in this case, for a thing known as CHATS, or Composable High Assurance Trusted Systems.

[9] Yes, I do mean the United States Air Force.

[10] CERT is an organization funded to characterize security issues and alert the appropriate parties, a job they do not do very well, in my opinion.

[11] Apparently, I'm not one, so I'm not on this list.

[12] One of those recursive definitions programmers love: RPM Package Manager, a widely used system for distributing packaged open source software, particularly for various flavors of Linux.

[13] FreeBSD's package management system. Also used by other BSDs.

[14] DefCon is a popular hackers' convention, held annually in Las Vegas.

Two years later, I am still bothering, so I suppose that I do think there's some point. But there are interesting questions to ask about open source security: is it really true that "many eyes" doesn't work? How do we evaluate claims about the respective virtues of open and closed source security? Has anything changed in those two years? What is the future of open source security?


