3.3 Special Design Issues

   

Apart from the security design process that we've outlined previously, there are several additional design issues that you're likely to face.

3.3.1 Retrofitting an Application

Although we have concentrated so far on how you can enhance security in an application as you develop it, we do not mean to imply that without access to source code you are powerless. In fact, several significant security techniques can be applied to existing applications. Some effectively allow you to "retrofit" security into an application as an element of overall system design.

The reasons for wanting to (or having to) retrofit security into an application are varied. Sometimes the reason can be as simple as necessity. Suppose that you're running an application found to have security flaws or perhaps lacking some important security features. You don't have access to its source code, and you have an overwhelming business need to continue to run it. If you find yourself in that position, the best solution is probably to engineer a security retrofit to the application. (A word of caution, though: you must be sure to treat the retrofit itself with the same level of scrutiny and care that you would for any business-critical software development effort.)

Although many of the approaches we present have been in use for decades, they have gained popularity and importance in the last few years. We think that is because of a widening awareness of just how hard it is to write vulnerability-free code.

The following sections describe several techniques of this kind, starting with the simplest and cleanest: wrappers.

3.3.1.1 Wrappers

One way to make an existing application more secure is to use a wrapper. To do this, you first move the existing application to a special location (where it is, perhaps, invisible to ordinary users or unavailable for invocation in the usual way). You then replace the old application with a small program or script that:

  • Checks (and perhaps sanitizes) command-line parameters.

  • Prepares a restricted runtime environment for the target application to run (by trimming, for example, unnecessary or unsafe environment variables that will be passed to it).

  • Invokes the target application from its "new" location, supplying the sanity-checked command line, and then exits. (On some operating systems the wrapper can "chain to" the target and doesn't need to explicitly exit.)

When a user tries to invoke an application, the operating system starts up the wrapper instead. After its job is done, it vanishes, leaving the user running the intended application. If all goes well, the substitution is invisible; but the security work is done. Figure 3-2 shows program invocation with and without a wrapper.

Figure 3-2. A simple program wrapper for "FooBar"
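
To make the idea concrete, here is a minimal sketch of such a wrapper in C. It is illustrative only, not a production implementation: the "new" location of the real program (/usr/lib/foobar/foobar.real), the particular argument checks, and the trimmed environment are all hypothetical choices you would adapt to your own situation.

 /* foobar_wrapper.c -- a minimal, illustrative wrapper (not production code). */
 #include <stdio.h>
 #include <string.h>
 #include <unistd.h>

 #define REAL_PROGRAM "/usr/lib/foobar/foobar.real"  /* hypothetical new home */
 #define MAX_ARG_LEN  512

 int main(int argc, char *argv[])
 {
     int i;

     /* 1. Check (and here, simply refuse) suspicious command-line arguments. */
     for (i = 1; i < argc; i++) {
         if (strlen(argv[i]) > MAX_ARG_LEN || strpbrk(argv[i], "`;|") != NULL) {
             fprintf(stderr, "%s: argument %d rejected\n", argv[0], i);
             return 1;
         }
     }

     /* 2. Prepare a restricted environment: pass along only what is needed. */
     char *clean_env[] = { "PATH=/bin:/usr/bin", NULL };

     /* 3. Invoke the real program from its "new" location.  On Unix, execve
      *    chains to the target, so no explicit exit is needed on success.   */
     execve(REAL_PROGRAM, argv, clean_env);
     perror(REAL_PROGRAM);   /* reached only if the exec itself fails */
     return 1;
 }

Installed in place of the original program, a wrapper like this is what the operating system actually starts; the user never notices the indirection unless an argument is rejected.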

We have used program wrappers for many years. We like the method because it facilitates not only constraining a runtime environment, but also:

  • Logging the invocation of individual programs.

  • Adding to the existing operating-system mechanisms a new way to decide whether the application should be allowed (by this user, at this time, with the command line supplied, etc.).

  • Adding additional "prologue" and "postlogue" code to the application.

  • Intercepting, at a central point, the startup of an application. (This is a great advantage if you suddenly need to disable a program across the enterprise.)

And you get all these benefits without changing a line of the application itself!

Of course, wrappers can be much more complicated than we have described here. We have seen wrappers that were hundreds of lines long. To give you a better idea of what is possible (and some pitfalls), we describe two wrapper programs in the case studies at the end of this chapter.

The first, the program overflow_wrapper.c, was written by the AusCERT incident response team. It performs a simple function very well: it trims command lines to prevent buffer overflows.

The second was written as a security adjunct to Sendmail, the popular Unix mail transfer agent. It's called smrsh , for "Sendmail restricted shell." The job it sets out to do is much more complex, and as you will see, this complexity leads (eventually) to further security problems.

Here's a final thought: though the details are beyond the scope of this book, note that in many operating environments it is possible for a single unchanging canonical wrapper to front-end every individual application on your network! This can be a great security tool, and we encourage you to look into this possibility if the need arises in your environment.

3.3.1.2 Interposition

As an alternative to wrappers, a technique we call interposition inserts a program we control between two other pieces of software we cannot control. The object is to institute safety checks and other constraints, a process known as filtering in this context. Figure 3-3 shows a simple interposition.

Figure 3-3. Interposing a filter in front of program "FooBar"

Network proxies are a good example of the interposition technique. They relay protocol-based requests once those requests have passed some requisite policy-based tests intended to ensure that the request is not harmful, and they can do a bit of additional "sanity checking" if you need them to. [4] Consider how a network proxy might be used as a security retrofit to guard against SYN flood attacks on a critical system for which no patch against the flaw is available. (It's an older legacy system, say, that is vital to the business but no longer adequately maintained by its original producer.) In such a case, a network proxy can be interposed between the legacy system and the open network to which it provides service, retrofitting a security feature to a system in a completely indirect manner.

[4] Of course, if you decide to have your proxies and other relays start examining the content of the packets they are sending on, you need to be prepared for the increased performance demands you will be placing on the system that does the examination.

Once, we successfully employed interposition to protect an important central database from malevolent SQL queries. In that case, we grafted together an older, "one-tier" GUI application (on the front end) with a fairly sophisticated database server (on the back end), and interposed a translator and sanity checker in the middle to make sure they got along. In what was almost a parody of a man-in-the-middle attack, communication between the two programs was intercepted, and potential security problems were defused by the custom filter code.
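
To illustrate the filtering idea, here is a minimal sketch in C, under the simplifying (and hypothetical) assumption that queries arrive as newline-delimited text on the filter's standard input and are forwarded on its standard output, as they might under inetd or in a pipeline. A real interposed filter would have to speak the actual wire protocol; the particular rejection rules shown are illustrative only.

 /* query_filter.c -- a minimal, illustrative interposed sanity checker. */
 #include <stdio.h>
 #include <string.h>
 #include <ctype.h>

 /* Reject queries containing characters we never expect the GUI to send. */
 static int query_is_safe(const char *q)
 {
     if (strpbrk(q, ";'\"\\") != NULL)       /* statement separators, quotes */
         return 0;
     for (; *q != '\0'; q++)                 /* printable ASCII (plus newline) only */
         if (!isprint((unsigned char)*q) && *q != '\n')
             return 0;
     return 1;
 }

 int main(void)
 {
     char line[1024];

     while (fgets(line, sizeof(line), stdin) != NULL) {
         if (query_is_safe(line)) {
             fputs(line, stdout);            /* pass it through to the server */
             fflush(stdout);
         } else {
             fprintf(stderr, "query_filter: rejected suspicious query\n");
             /* a real filter might also log, alert an operator, or drop
              * the connection entirely at this point                    */
         }
     }
     return 0;
 }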

3.3.2 Performing Code Maintenance

Although many people don't consider code maintenance to be design work, our experience (remember that Mark coordinated Sun's security patches for several years) is that the way maintenance is carried out can make or break the security of a design. As with retrofitting security enhancements onto existing software, code maintenance should be handled with due care, applying the same level of design scrutiny and attention that you would apply to new code.

Opportunities for missteps abound. Here are some of the more common mistakes we've seen:

  • Race conditions introduced because a maintainer decided to store intermediate results in a temporary file in a world-writable directory (see the sketch after this list).

  • Database passwords hard-coded into a program (opening it to sniffer and memory-analysis attacks) during maintenance, because it seemed "too risky" to code up an encrypted, protocol-based authentication exchange.

  • Resource exhaustion attacks suddenly facilitated by the introduction of a large new cluster of data in memory.
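
To make the first of these mistakes concrete, here is a minimal sketch in C (the file names are hypothetical) contrasting the race-prone shortcut with the safer mkstemp approach:

 /* tmpfile_sketch.c -- illustrates the temporary-file race described above. */
 #include <stdio.h>
 #include <stdlib.h>
 #include <unistd.h>

 int main(void)
 {
     /* UNSAFE: a predictable name in a world-writable directory.  An attacker
      * who wins the race can pre-create or symlink this path before we do.   */
     /* FILE *f = fopen("/tmp/app-scratch.dat", "w"); */

     /* Safer: let mkstemp create and open a unique file atomically. */
     char tmpl[] = "/tmp/app-scratch-XXXXXX";
     int fd = mkstemp(tmpl);
     if (fd == -1) {
         perror("mkstemp");
         exit(1);
     }
     unlink(tmpl);     /* the file disappears as soon as fd is closed */
     /* ... write intermediate results through fd ... */
     close(fd);
     return 0;
 }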

How can you avoid such mistakes? We know of only one method, and that is to treat it as a (possibly) miniature software development effort and follow these steps:

  1. Do your best to understand the security model and measures that are in place already.

  2. Take the time to learn how the program you are maintaining actually works. Track its operation with profiling software. Find out what files it opens, how much memory it uses, and how it handles errors.

  3. Armed with that knowledge, proceed carefully as best you can along the lines of the original designer's intent.

This approach can be quite useful in another context, too. Suppose you have been charged with folding existing library code or third-party packages into your application. It's a good idea to find out how that software actually works (as opposed to what the manual, or your colleagues, may advise). Remember: setting aside the mentality and assumptions of a program's users is an important step in design.

Similarly, here are two key errors that we would recommend taking great care to avoid:

Don't violate the spirit of the design

Unfortunately, this is easy to do. For example, many security issues we've observed in web software have arisen because web authors have grafted mechanisms that require the keeping of "state" information onto a stateless design.

Don't introduce a new trust relationship

Another mistake that we have often seen is to compromise security by introducing, during maintenance, a new trust relationship. For example, suppose that you are working in Unix and need to execute a system-related function, such as setting the access permissions on a file. Sure, you can look up the arguments and return codes for the chmod library call, but you'd find it a lot easier just to spawn the program itself or maybe use a command shell. Taking this route, however, now means that your program has to "trust" the spawned program as well. You may have introduced a new implementation-time, design-level risk that the original designers never contemplated.
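
A minimal sketch in C shows the difference (the file argument and permission bits here are arbitrary examples): the commented-out shortcut spawns a shell and an external binary that your program must now trust, while the direct library call stays within the original trust boundary.

 /* chmod_sketch.c -- illustrating the "new trust relationship" shortcut. */
 #include <sys/stat.h>
 #include <stdio.h>

 int main(int argc, char *argv[])
 {
     if (argc != 2) {
         fprintf(stderr, "usage: %s file\n", argv[0]);
         return 1;
     }

     /* Tempting shortcut: spawn a shell plus the external chmod program.
      * Your program would then have to trust the shell, the PATH, and that
      * binary as well -- a brand-new trust relationship.                   */
     /* system("chmod 600 somefile"); */

     /* Staying within the original trust boundary: call the library
      * function directly and check its return code.                        */
     if (chmod(argv[1], S_IRUSR | S_IWUSR) != 0) {
         perror("chmod");
         return 1;
     }
     return 0;
 }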

Let's look at an example that highlights some of the difficulties maintainers face. In Chapter 1, we talked about the "Sun tarball" vulnerability and recounted how Mark located a vulnerability in a line of code such as:

 char *buf = (char *) malloc(BUFSIZ); 

and needed to modify it to ensure that the previous contents of the buffer were wiped out. What's the best approach to accomplish this? What would you do? Well, Mark chose something like this:

 char *buf = (char *) calloc(BUFSIZ, 1); 

This change is certainly economical. Only a few characters are changed, leaving the general structure and flow of the code alone. (The state of the program's stack, for example, would be virtually identical during and after the execution of this call, compared to the existing code.) But there are issues.

First of all, calloc is designed to be used in allocating space for arrays. The way it's used here works out because the number of array elements (the first argument) is set to the number of bytes in the buffer, and the size of each element (the second argument) is set to a single byte. But that's unnecessarily confusing to future maintainers. A more standard way to zero out the contents of a buffer is to use the memset function. A line such as:

 memset(buf, 0, BUFSIZ); 

should do it. (Of course, you would have to separate the declaration of buf from its initialization at the same time.) That would give:

 char *buf;
 buf = malloc(BUFSIZ);
 memset(buf, 0, BUFSIZ);

But wait. We neglected to check the return code from the malloc call! What if there is no more memory on the heap, so that the allocation fails? memset would try to "dereference the null pointer." An ungainly and intermittent crash would be the best result we could expect in that case. Here is a common approach, which we call "complain and die." (Of course, in real life, you would prefer to use whatever error-handling routines were defined for the program):

 char *buf;
 buf = malloc(BUFSIZ);
 if (buf == NULL) {
     perror(argv[0]);
     exit(1);
 }
 memset(buf, 0, BUFSIZ);

Which code fragment would you choose? We would prefer some variant of the final example. Compared to the fix we first exhibited, it leaves the program both more secure and easier to maintain.

Our main point is this: frequently, code maintainers will strain to make minimal textual changes for the sake of perceived simplicity. That approach, instinctively, feels safer. But the only reliable way to make code safer is to take the time to understand the context you are working in, and then write good code that is consistent with the program's design.

3.3.3 Using Compartmentalization

Another tool at your disposal in designing your code is compartmentalization. Although compartmentalization means different things to different people, we're going to discuss it in the context of some of the designs and tools that are commonly found in production computing environments. The concept common to the approaches we discuss here is to place untrustworthy users, programs, or objects into a virtual box, whether to protect the rest of the system from compromise or to observe and analyze the untrustworthy parties.

While compartmentalization does not entail modifying the program you are trying to secure, it may require significant involvement with the operating system of the host on which the application is running. At the very least, compartmentalization tools will be highly dependent on the capabilities offered by the operating system you are using.

3.3.3.1 Jails

The key idea with a software jail is to allow a user to use a program that may be (or, perhaps, is known to be) compromisable. Safety is preserved because the user, who must, alas, be considered a potential attacker, is kept in a so-called jail. The impact of the program's actions is constrained. File references, streams of characters moving in and out of serial lines, and especially the invocation of other programs are all tightly controlled. For this reason, attempts to modify system files or to execute forbidden programs fail, because references to the files are not resolved in the manner the user intends. The user has access only to a carefully selected subset of the entire filesystem hierarchy, a subset that has been prepared with "known-safe" versions of permissible programs. It's as if the user has been placed in a room from which the system designer has removed all sharp objects.

The two best examples we know of how to achieve this state of affairs are the Java "jail" and the Unix chroot mechanism. Java effects the limitation by having the runtime interpreter enforce a security management policy. The chroot facility works because Unix allows a system user with appropriate privileges to relocate the directory tree available to the managed process, that is, to "change root" for that tree. Figure 3-4 illustrates this concept.

Figure 3-4. A chroot jail
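
Here is a minimal sketch in C of entering a chroot jail before running untrusted work. The jail directory, the unprivileged user and group IDs, and the confined program are all hypothetical, and the jail must already have been populated with whatever files the confined program needs.

 /* jail_sketch.c -- a minimal, illustrative chroot jail (must start as root). */
 #include <stdio.h>
 #include <stdlib.h>
 #include <unistd.h>

 int main(void)
 {
     const char *jail = "/var/jail/foobar";     /* hypothetical jail root */

     if (chroot(jail) != 0 || chdir("/") != 0) {
         perror("chroot");
         exit(1);
     }

     /* Give up root before doing anything else; a process still running as
      * root can often break out of a chroot jail.                          */
     if (setgid(100) != 0 || setuid(1001) != 0) {
         perror("drop privileges");
         exit(1);
     }

     /* From here on, "/" refers to /var/jail/foobar, so references outside
      * the jail simply cannot be resolved.                                 */
     execl("/bin/foobar", "foobar", (char *)NULL);
     perror("execl");
     return 1;
 }
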
3.3.3.2 Playpens

In a playpen [5] environment, an attacker is not only constrained but also actively misled. The malevolent user, once identified, is shunted to the side, much as an asylum must cordon off a violent patient in a rubber room. (In our favorite example, described in the ACE case study later in this chapter, the attacker could be kept busy for days.) The application system rewards attempts to compromise security or bring down the system with simulated success, while in reality, behind the scenes, alarms are sent to operators and further defenses are deployed.

[5] We have also seen this mechanism referred to as a sandbox. We prefer playpen because of the constraining bars the image conjures up.

3.3.3.3 Honey pots

Like playpens, honey pots are deceptive, but the intent is different. As we employ the term, a honey pot is a dummy system or application specially designed to be attractive to miscreants, who are lured away from systems of real operational value. Mired in the honey pot, attackers poke, prod, and twist the prepared software; they accomplish no evil end, but rather pointlessly demonstrate their interests and techniques to the security experts secretly observing their maneuvers.

A note of caution is warranted here, particularly with regard to both playpens and honey pots. Although they may sound like fun and fascinating technologies to play with, they are serious tools that should be deployed for serious purposes or not at all.

   

