4.2. Open Versus Closed Source

Since I wrote my rant, Microsoft has decided that security is important (at least for sales), and as a result, there's been a sudden increase in interest in the truth of the claim that open source is "more secure" than closed source (and, of course, in the counterclaim of the opposite).

But this claim is not easy to examine, for all sorts of reasons. First, what do we mean by "more secure"? We could mean that there are fewer security bugs, but surely we have to take severity of the bugs into account, and then we're being subjective. We could mean that when bugs are found, they get fixed faster, or they damage fewer people. Or we might not be talking about bugs at all. We might mean that the security properties of the system are better in some way, or that we can more easily evaluate our exposure to security problems.

I expect that, at some point, almost everyone with a serious interest in this question will choose one of these definitions, and at some other point a completely different one.

4.2.1. Who Is the Audience?

It is also important to recognize that there are at least two completely different reasons to ask the question "is A more secure than B?" One is that you are trying to sell A to an audience that just wants to tick the "secure" box on their checklist, and the other is that you actually care about whether your product/web site/company/whatever is secure, and are in a position to have an informed opinion.

It is, perhaps, unkind to split the audience in this way but, sadly, it appears to be a very real split. Most people, if asked whether they think the software they use should be secure, will say, "Oh yeah, security, that's definitely a good thing, we want that." But this does not stop them from clicking Yes to the dialog box that says "Would you like me to install this Trojan now?" or from running products with a widely known and truly dismal security record.

However, it is a useful distinction to make. If you are trying to sell to an audience that wants to tick the security box, you will use quite different tactics than if the audience truly cares about security. This gives rise to the kind of analysis I see more and more. For example, http://dotnetjunkies.com/WebLog/stefandemetz/archive/2004/10/11/28280.aspx has an article titled "Myth debunking: SQL Server vs. MySQL security 2003-2004 (SQL Server has less bugs!!)." The first sentence of the article gives the game away: "Seems that yet again a MS product has less bugs that (sic) the corresponding LAMP[17] product." What is this telling us? Someone found an example of a closed source product that is "better" at security than the corresponding open source one. Therefore, all closed source products are "better" at security than open source products. If we keep on saying it, it must be true, right?

[17] LAMP stands for Linux, Apache, MySQL, Perl (or PHP) and is common shorthand for the cluster of open source software commonly used to develop web sites.

Even if I ignore the obviously selective nature of this style of analysis, I still have to question the value of simply counting vulnerabilities. I know that if you do that, Apache appears to have a worse record than IIS recently (though not over longer periods).

But I also know that the last few supposed vulnerabilities in Apache have been either simple denial-of-service (DoS) attacks[18] or vulnerabilities in obscure modules that very few people use. Certainly I didn't even bother to upgrade my servers for any of the last half-dozen or so; they simply weren't affected.

[18] In a DoS attack, the attacker prevents access by legitimate users of a service by loading the service so heavily that it cannot handle the demand. This is often achieved by a distributed denial of service (DDoS) attack, in which the attacker uses a network of "owned" (i.e., under the control of the attacker and not the legitimate owner) machines to simultaneously attack the victim's server.

So, for this kind of analysis to be meaningful, you have to get into classifying vulnerabilities for severity. Unfortunately, there's not really any correct way to do this. Severity is in the eye of the beholder. For example, my standard threat model (i.e., the one I use for my own servers, and generally advise my clients to use, at least as a basis) is that all local users[19] have root,[20] whether you gave it to them or not. So, local vulnerabilities[21] are not vulnerabilities at all in my threat model. But, of course, not everyone sees it that way. Some think they can control local users, so to them, these holes matter.

[19] That is, people with user accounts on the machine, rather than visitors to web pages or people with mail accounts, for example.

[20] Root is the all-powerful administrative account on a Unix machine.

[21] A local vulnerability is one that only a local user can exploit.

Incidentally, you might wonder why I dismiss DoS attacks; that is because it is essentially impossible to prevent DoS attacks, even on perfectly functioning servers, since their function is to provide a service available to all, and simply using that service enough will cause a DoS. They are unavoidable, as people subject to sustained DoS attacks know to their pain.

4.2.2. Time to Fix

Another measure that I consider quite revealing is "time to fix": that is, the time between a vulnerability becoming known and a fix for it becoming available. There are really two distinct measures here, because we must differentiate between private and public disclosure. If a problem is disclosed only to the "vendor,"[22] the vendor has the leisure to take time fixing it, bearing in mind that if one person found it, so will others (meaning "leisure" is not the same as "forever," as some vendors tend to think). The time to fix then becomes a matter of negotiation between vendor and discloser (an example of a reasonably widely accepted set of guidelines for disclosure can be found at http://www.wiretrip.net/rfp/policy.html, though the guidelines are not, by any means, universally accepted) and really isn't of huge significance in any case, because the fix and the bug will be revealed simultaneously.

[22] A term I am not at all fond of, since, although I am described as a "vendor" of Apache, OpenSSL, and so forth, I've never sold any of them.

What is interesting to measure is the time between public disclosures (also known as zero-days) and the corresponding fixes. What we find here is quite interesting. Some groups care about security a lot more than others! Apache, for example, has never, to my knowledge, taken more than a day to fix such a problem, but Gaim[23] recently left a widely known security hole open for more than a month. Perhaps the most interesting thing is that whenever time to fix is studied, we see commercial vendors (Sun and Microsoft, for example) pitted against open source packagers (Red Hat, Debian, and the like), but this very much distorts the picture. Packagers will almost always be slower than the authors of the software, for the obvious reason that they can't make their packages until the authors have released the fix.

[23] A popular open source instant messaging client.
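To be clear about what is being measured: in its simplest form, "time to fix" is just the interval between the disclosure timestamp and the fix timestamp for a given vulnerability. The fragment below is a trivial C sketch of that arithmetic, using invented dates purely for illustration; any real comparison, of course, stands or falls on consistent definitions of "disclosed" and "fixed," and on attributing the fix to the authors rather than to the packagers, as discussed above.

    /* Trivial sketch of the "time to fix" measure: days between an
     * (invented) public disclosure date and an (invented) fix date. */
    #include <stdio.h>
    #include <time.h>

    static time_t to_time(int year, int month, int day)
    {
        struct tm tm = {0};
        tm.tm_year = year - 1900;   /* struct tm counts years from 1900 */
        tm.tm_mon  = month - 1;     /* and months from 0 */
        tm.tm_mday = day;
        return mktime(&tm);
    }

    int main(void)
    {
        /* Hypothetical vulnerability: disclosed 2004-08-26, fixed 2004-10-08. */
        time_t disclosed = to_time(2004, 8, 26);
        time_t fix       = to_time(2004, 10, 8);

        double days = difftime(fix, disclosed) / (60.0 * 60.0 * 24.0);
        printf("time to fix: %.0f days\n", days);
        return 0;
    }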

This leads to another area of debate. A key difference between open and closed source is the number of "vendors" a package has. Generally, closed source has but a single vendor, but because of the current trend towards packagers of open source, any particular piece of software appears, to the public anyway, to have many different vendors. This leads to an unfortunate situation: open source packagers would like to be able to release their packages at the same time as the authors of the packages. I've never been happy with this idea, for a variety of reasons. First, there are so many packagers that it is very difficult to convince myself that they will keep the details of the problem secret, which is critical if the users are not to be exposed to the Bad Guys. Second, how do you define what a packager is? It appears that the critical test I am supposed to apply is whether they make money from packaging or not![24] This is not only blatantly unfair, but it also flies in the face of what open source is all about. Why should the person who participates fully in the open source process by building from source be penalized in favor of mere middlemen who encourage people not to participate?[25]

[24] Of course, not all packagers make money, but I've only experienced this kind of pressure from those that do.

[25] This is because vendors tend to encourage users to treat them as traditional closed source businesses (with their own support, their own versions of software, and so forth) instead of engaging the users with the actual authors of the software they are using.

Of course, the argument, then, is that I should care more about packagers because if they are vulnerable, it affects more people. I should choose whom I involve in the release process on the basis of how many actual users will be affected by my choice, either positively or negatively, depending on whether I include the packager or not. I should also take into account the importance of these users. A recent argument has been that I should involve organizations such as the National Infrastructure Security Co-ordination Centre (NISCC), a UK body that does pretty much what it says on the tin, and runs the UK CERT (see http://www.niscc.gov.uk for more information), because they represent users of more critical importance than mere mortals. This is an argument I actually have some sympathy with. After all, I also depend on our infrastructure. But in practice, we soon become mired in vested interests and commercial considerations because, guess what? Our infrastructure uses software from packagers of various kinds, so obviously I must protect the bottom line by making sure they don't look to be lagging behind these strange people who give away security fixes to just anyone.

If these people really cared about users, they would be working to find ways that enable the users to get the fixes directly from the authors, without needing the packager to get its act together before the user can have a fix. But they don't, of course. They care about their bank balance, which is the saddest thing about security today: it is seen as a source of revenue, not an obligation.

Incidentally, a recent Forrester Research report claims that packagers are actually quite slow (as slow as or slower than closed source companies) at getting out fixes. This doesn't surprise me, because a packager generally has to wait for the (fast!) response of the authors before doing its own thing.

4.2.3. Visibility of Bugs and Changes

There is an argument that lack of source is actually a virtue for security. Potential attackers can't examine it for bugs, and when vulnerabilities are found, they can't see what, exactly, was changed.

The idea that vulnerabilities are found by looking at the source is an attractive one, but it is not really borne out by what we see in the real world. For a start, reading the source to anything substantial is really hard work. I know; I did it for OpenSSL, as I said earlier. In fact, vulnerabilities are usually found when software misbehaves, given unusual input or an unusual environment. The attacker follows up, investigating why that misbehavior occurred and using the bug thus revealed for their own evil ends. The "chunked encoding" bug I mentioned earlier is a great example of this. It was found by the common practice of feeding programs large numbers of the same character repeatedly. When Apache was fed the character A a great many times, it ended up treating the run of As as a character count in hex, and that count came out negative, which turns out to be a Bad Thing (the sketch after the footnote illustrates the general class of bug). In this case, all that was needed was eight characters, but the problem was found by feeding Apache several thousand.[26]

[26] This particular method is popular because it is so easy: perl -e "print 'A'x10000" | target.
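To make the nature of that bug concrete, here is a simplified sketch in C of the general class of error: a hexadecimal chunk-size field parsed into a signed integer. This is not the actual Apache code; the function, the buffer size, and the check are invented for illustration. Eight As parse as 0xAAAAAAAA, which is larger than a signed 32-bit integer can hold, so on a typical two's-complement machine the value comes out negative and sails past the size check.

    /* Simplified sketch of a signed chunk-size bug; NOT the actual
     * Apache code. Names and sizes are invented for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BUF_SIZE 4096

    static void handle_chunk(const char *size_field, const char *body)
    {
        char buf[BUF_SIZE];

        /* "AAAAAAAA" parses as 0xAAAAAAAA (2863311530), which does not
         * fit in a signed 32-bit int; the cast typically yields
         * -1431655766 on two's-complement machines. */
        int len = (int) strtoul(size_field, NULL, 16);

        /* The sanity check passes, because a negative len compares as
         * smaller than BUF_SIZE... */
        if (len > BUF_SIZE) {
            printf("chunk rejected: too large\n");
            return;
        }

        /* ...but a copy routine takes a size_t, so the negative value
         * would wrap to an enormous unsigned number and run far past
         * the end of buf. The copy is left commented out so the sketch
         * is safe to compile and run. */
        printf("accepted chunk of %d bytes (as size_t: %zu)\n",
               len, (size_t) len);
        /* memcpy(buf, body, (size_t) len); */

        (void) buf;
        (void) body;
    }

    int main(void)
    {
        handle_chunk("AAAAAAAA", "payload");
        return 0;
    }

The point of the toy is not its details but the order of events: nobody had to read the source to find the real bug. Hammering the server with long runs of As exposed the misbehavior first, and the analysis came afterwards.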

So, not having the source might slow down an attacker slightly, but given the availability of excellent tools like IDA (a very capable disassembler) and Ollydbg (a powerful [and free] debugger), not by very much.

What about updates? The argument is that when source is available, the attacker can compare the old and new versions of the source to see what has changed, and then use that to craft software that can exploit unfixed versions of the package. In fact, because most open source uses version control software, and often has an ethos of checking in changes that are as small as possible, usually the attacker can find just the exact changes that fixed the problem without any clutter arising from unrelated changes.

But does this argument hold water? Not really, as, for example, Halvar Flake has demonstrated very clearly with his Binary Difference Analysis tool. What this does is take two versions of a program, before and after a fix, disassemble them, and then use graph isomorphisms to work out what has changed. I've seen this tool in action, and it is very impressive. Halvar claims (and I believe him) that he can have an exploit out for a patched binary in one to eight hours from seeing the new version.

4.2.4. Review

Another important aspect to security is the ability to assess the risks. With closed source, this can be done only on the basis of history and reputation, but with open source, it is possible to go and look for yourself. Although you are not likely to find bugs this way, as I stated earlier, you can get a good idea about the quality of the code, the way it has been written, and how careful the author is about security. And, of course, you still have history and reputation to aid you.

4.2.5. Who's the Boss?

Finally, probably the most important thing about open source is the issue of who is in control. When a security problem is found, what happens if the author doesn't fix it? If the product is a closed source one, that generally is that. The user is doomed. He must either stop using it, find a way around the problem, or remain vulnerable. In contrast, with open source, users are never at the mercy of the maintainer. They can always fix the problem themselves.

It is often argued that this isn't a real choice for end users; usually end users are not programmers, so they cannot fix these problems themselves. This is true, but it completely misses the point. Just as the average driver isn't a car mechanic but still has a reasonably free choice of who fixes his car,[27] he can also choose a software maintainer to fix his software for him. In practice, this is rarely needed because (at least for any widely used software) there's almost always someone willing to take on the task.

[27] This is a metaphor that is rapidly going out-of-date, as car manufacturers make cars more and more computerized and harder and harder for anyone not sanctioned by the manufacturer to work on. Who knows? Perhaps this will lead to an open source culture in the car world.


