2.2 Principles of Security Architecture

   

We've defined 30 basic principles of security architecture:

  1. Start by asking questions

  2. Select a destination before stepping on the gas

  3. Decide how much security is "just enough"

  4. Employ standard engineering techniques

  5. Identify your assumptions

  6. Engineer security in from day one

  7. Design with the enemy in mind

  8. Understand and respect the chain of trust

  9. Be stingy with privileges

  10. Test any proposed action against policy

  11. Build in appropriate levels of fault tolerance

  12. Address error-handling issues appropriately

  13. Degrade gracefully

  14. Fail safely

  15. Choose safe default actions and values

  16. Stay on the simple side

  17. Modularize thoroughly

  18. Don't rely on obfuscation

  19. Maintain minimal retained state

  20. Adopt practical measures users can live with

  21. Make sure some individual is accountable

  22. Self-limit program consumption of resources

  23. Make sure it's possible to reconstruct events

  24. Eliminate "weak links"

  25. Build in multiple layers of defense

  26. Treat an application as a holistic whole

  27. Reuse code known to be secure

  28. Don't rely on off-the-shelf software for security

  29. Don't let security needs overwhelm democratic principles

  30. Remember to ask, "What did I forget?"

The following sections define these principles in greater detail, and subsequent chapters explain them in the context of different phases of software development.

2.2.1 Start by Asking Questions

Whether you're starting from scratch with a clean sheet of paper or have been handed a complex piece of software that needs fixing or updating, your first step on the road toward software security should be to ask questions. Here are several that have served us well in our own projects. It's just a start, of course; once you get the feel for this work, the questions will just keep on coming.

About our worries:

  1. What can go wrong?

  2. What are we trying to protect?

  3. Who do we think might be trying to compromise our security, and what might they be after?

  4. What is the weakest point in our defense?

About our resources:

  1. Do we have a security architecture? Is it really being used?

  2. Do we have access to a reusable software library or repository?

  3. What guidelines and standards are available to us?

  4. Who has some good examples we can use for inspiration?

About the software itself:

  1. Where does this piece of software stand in the chain of trust? Are there downstream critical applications that will rely on us for authentication? Are there upstream libraries or utilities that may or may not feed us reliable input?

  2. Who are the legitimate users of this software?

  3. Who will have access to the software, in both source and executable form?

  4. Do we see the usage and/or the number of users of this software expanding or contracting in the foreseeable future? What impact would such changes have on our initial assumptions?

  5. What is the environment in which this software will run? That is, will it run on the big bad Web or inside a tightly-controlled enterprise network?

About our goals:

  1. What impact would a security compromise of this software have? How much money would we lose, directly and indirectly? What would be the impact on corporate operations, reputation, and morale?

  2. Who are we willing to irritate or slow down with security measures, and to what degree?

  3. Do we have the support of the right "higher-ups" in the company for our security precautions?

  4. If throngs of our users decide to work around or ignore our security measures, what are we prepared to do about it? And how will we know?

Some of these questions might be described as a kind of elementary risk analysis. Others are simply the kinds of questions any good software engineer might ask when beginning work on an architectural description. Think of them as a starting point. Any good book on software construction (McConnell's, for example) will give you many more ideas and possible topics to consider.

2.2.2 Select a Destination Before Stepping on the Gas

Of all our rules, this one may be the hardest for you to put into practice. Engineers tend to be problem solvers by nature. For that reason, if no other, it can be very hard to wait until you are sure you understand fully what needs to be done before you start making decisions.

Peter Stephenson makes the point well:

When we decide to build a house, for example, we talk to architects. They'll want to know what we are going to use the building for, say a single-family dwelling or an apartment building. How much space and what kind do we need? Where will we build it, just in case we need to make it, for example, hurricane-proof? In other words, architects will want to know a host of things. They won't just call the builder and say, "Get some bricks and wood over here and build these folks a house!" [4]

[4] Review by Peter Stephenson of the book, Information Security Architecture, 2001, http://www.scmagazine.com/scmagazine/sc-online/2001/review/005/product_book.html.

An even worse request is: "What kind of house can you build these folks out of this left-over heap of bricks and wood?" And yet, we've seen it many times.

2.2.3 Decide How Much Security Is "Just Enough"

We'll say it again. The degree of assurance required in your applications is very strongly related to the size and nature of your unique risks, as well as to the cost of the countermeasures you might program in. How secure does your application need to be? Just secure enough.

The idea is not to make an application as secure as possible. That's a guaranteed design failure, because it means you've invested too much of your resources. Furthermore, if you operate in a competitive, commercial environment, you will need to understand what your competition is doing by way of security and just how secure your application needs to be as compared to your competition. If you are fortunate, your decision may well be guided by standards or due-diligence guidelines such as those available to the financial sector.

Identify trade-offs and costs explicitly. Bring them in at the architecture level. If you have to compromise security somewhat for the sake of usability, be explicit about the issues, and make a decision that's appropriate to the business context.

2.2.4 Employ Standard Engineering Techniques

Speaking of software construction techniques, we believe that they are critical to developing secure software. We're not going to recapitulate the material found in the several good textbooks on the subject (see Appendix A for references). However, we will make the argument that, more than most other concerns we are aware of, good security requires good design and good design techniques.

A great many of the attacks over the last few years have been rooted in one or more of the following factors:

  • Lack of any design

  • Simple human weakness

  • Poor coding practices

Good security architecture can eliminate or mitigate the first and second factors, but if the code is an unprofessional mess, security is sure to suffer.

We mentioned homebuilders earlier. We envy them, as well as folks in similar trades and disciplines who have "building codes" (or other engineering aids) to consult and adhere to. We hope that someday the security profession will have such codes as well. Our cover design, depicting a bridge under construction, is a tip of the hat to that notion.

2.2.5 Identify Your Assumptions

One key part of any security engineering analysis is to decide on assumptions.

A trait that many good engineers have in common is the ability to stand apart from the problem at hand. They look objectively at such elements as the mental model in use, the level of system resources (e.g., disk space or memory) assumed to be available, and the possibility of processing being interrupted or suspended.

This principle relates to our discussion in Chapter 1 of the TCP SYN flood attacks that occurred back in 1996. How's this for an assumption:

If we receive a TCP packet with the SYN flag set, it means that the sender wants to start a dialog with us.

At least two assumptions in that sentence contributed to the vulnerability. The simplest one to see has to do with the sender's intent; today, it's certainly not safe to assume that the sender's goal is constructive. There is a second-order assumption present here as well: how do we know who is really doing the sending?

Here is a less technical example:

The users of this software will be human beings.

We're serious! We've worked with many applications that accept input emitted from other software agents upstream in the execution chain. Before we can intelligently select authentication methods (that is, decide which mechanisms are going to be used to decide whether someone is who he or she claims to be), we need to identify and resolve these assumptions. For example, we might choose a "shared secret" when authenticating a programmatic entity but a biometric method when dealing with flesh and blood.

Analyses such as these are also useful in avoiding software flaws such as resource exhaustion. For example, if you know that only human beings will be feeding input to your software, you might not work too hard on minimizing the amount of disk space or memory tied up while a command executes. On the other hand, if you think that now or in the future other pieces of software might cascade yours with several thousand commands a second, you might make different design decisions. For example, you might make it a design goal not to have to retain any data while a command executes. You might build in a mechanism to stop the flow of commands until the current one is completed. Or you might develop a way of passing on commands to an alternate processor or even diverting the flow of incoming commands until demand drops below a specified threshold. We have employed each of these techniques. In our own projects that we consider successful, we identified the need for them at the outset by performing a careful up-front analysis of assumptions.
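To make that contrast concrete, here is a minimal sketch of the "stop the flow of commands" option in C. It is our own illustration, not drawn from any particular project: the MAX_PENDING ceiling and the process_command() routine are hypothetical names. The point is simply that the handler refuses new work once an assumed limit on outstanding commands is reached, instead of letting an automated upstream feeder exhaust memory.

    /*
     * Sketch: bound the work accepted from upstream instead of assuming a
     * polite human is typing the commands. MAX_PENDING and
     * process_command() are illustrative names, not from the book.
     */
    #include <stdio.h>

    #define MAX_PENDING 128           /* assumed ceiling; tune for your system */

    static int pending;               /* commands accepted but not yet finished */

    static int process_command(const char *cmd)
    {
        /* ... the real work would happen here ... */
        (void)cmd;
        return 0;
    }

    int handle_command(const char *cmd)
    {
        if (pending >= MAX_PENDING) {
            /* Degrade rather than exhaust memory: ask the caller to retry. */
            fprintf(stderr, "busy: try again later\n");
            return -1;
        }
        pending++;
        int rc = process_command(cmd);
        pending--;
        return rc;
    }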

2.2.6 Engineer Security in from Day One

We have seen many cases of software compromises resulting from the failure of last-minute security fix-ups. It may seem obvious to you that in order to be effective, security measures such as cryptography must be integrated into an application's design at an early stage. But remember: just as there are many folks who proclaim, "Of course our software is secure; we used encryption!" there are many who believe that throwing in a set of security gimcracks late in development really is helpful.

In the real world, of course, retrofits can be necessary. But please, be careful. It's our experience that changes made in this spirit often reduce security in the long run because of their complicating and obscuring impact on future maintenance of the software. As Frederick P. Brooks points out in The Mythical Man-Month, lack of "conceptual integrity" is one of the main reasons for software failure.

Grafting on half-baked, unintegrated security technologies is asking for trouble. In Chapter 3, however, we do present some sound approaches to security retrofitting.

2.2.7 Design with the Enemy in Mind

Design your software as if your keenest adversary will attack it. J. H. Saltzer, whose work we cited earlier in this chapter, called this the adversary principle. The GASSP (Generally Accepted System Security Principles) group addressed it, too, saying that designers should anticipate attacks from "intelligent, rational, and irrational adversaries."

As we suggested in Chapter 1, it's important to try to anticipate how an attacker might approach solving the puzzle your security represents. Try to stand the software "on its head" to get a fresh perspective.

While you are doing all this, remember another point too: you can't design with the enemy in mind without a realistic sense of who might wish to attack your software. Attacks can come from either outside or inside your "trusted" network, or both. The more you think about who might attack your software or your enterprise, the better equipped you will be to design securely.

2.2.8 Understand and Respect the Chain of Trust

Don't invoke untrusted programs from within trusted ones. This principle is often violated when an application program, wishing to perform a chore such as sending a mail message, invokes a system utility or command to do the job. We have done it, and we expect that most experienced developers have, too. But this approach can easily introduce a critical security flaw, unless the program yours is "passing the torch" to is secure.

Here is an example that came to light while we were writing this chapter. In spring 2003, an announcement began circulating on the Internet about a newly discovered vulnerability relating to the popular (and venerable) Unix man utility. This program, which runs with privileges on some systems, has the weird undocumented "feature" of emitting the string "UNSAFE" in certain circumstances when it is presented with erroneous input. As a result, it's possible in some circumstances (the trick involves creating a file named, yes, "UNSAFE") for a malefactor to cause a set of arbitrary commands to be executed with privilege and compromise host security. If an otherwise secure application were to invoke this utility (perhaps to perform some housekeeping chore deemed "too tedious" to code directly), the security of the invoking application could be circumvented as well.
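One way to reduce that kind of exposure is to delegate a chore without delegating your trust along with it. The sketch below shows the pattern on a Unix-like system: the helper is invoked by a fixed, fully qualified path, with a scrubbed environment and a validated argument, and never through a shell. The path /usr/lib/example-mailer and the accepted character set are assumptions for illustration, not a recommendation of any particular helper.

    /*
     * Sketch: invoke an external helper without handing it your authority.
     * No shell is involved, the environment is emptied, and the one
     * externally supplied value is validated first. The helper's path and
     * the accepted character set are illustrative assumptions.
     */
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int send_report(const char *recipient)
    {
        static const char ok[] =
            "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.@-_";

        if (recipient == NULL || *recipient == '\0' ||
            strspn(recipient, ok) != strlen(recipient))
            return -1;                          /* refuse suspicious input */

        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {                         /* child */
            char *const argv[] = { "example-mailer", (char *)recipient, NULL };
            char *const envp[] = { NULL };      /* no inherited environment */
            execve("/usr/lib/example-mailer", argv, envp);
            _exit(127);                         /* exec failed */
        }
        int status;
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
    }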

The general rule is that one program should never delegate its authority to take an action without also delegating the responsibility to check if the action is appropriate.

The chain of trust extends in the other direction as well. You will want to cleanse all data that comes from outside the program before using it: initialization and configuration files, command-line arguments, filenames, and URLs. We cover those details in later chapters.

Note as well that your application is not really "respecting the chain of trust" unless it validates what is presented to it; does not pass tasks on to less-trusted entities; and is careful to only emit information that is as valid and as safe as can be produced from your software's resources.

2.2.9 Be Stingy with Privileges

Like individual users, a program must operate with just enough privilege and access to accomplish its objectives. If you only need to read a record, don't open the file for write access. If you need to create a file in a shared directory, consider making use of a "group access" feature or an access control list to manage file access, instead of just having the program run with elevated privileges. This idea is sometimes called the principle of least privilege.

If your operating system can handle it, make sure that you programmatically drop privileges to a low level when possible, then raise them back to a higher level if needed. We'll have more to say about this operation in subsequent chapters.
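A minimal sketch of that drop-and-raise pattern on a Unix-like system follows. The unprivileged user ID is an assumption for illustration; a real program would take it from configuration rather than a constant, and the pattern relies on the saved set-user-ID retaining the privileged identity.

    /*
     * Sketch: a set-user-id program drops its effective privileges at
     * startup, raises them only around the one step that needs them, and
     * drops them again immediately. UNPRIV_UID is an assumed account id.
     */
    #include <stdio.h>
    #include <unistd.h>

    #define UNPRIV_UID 1000               /* assumed unprivileged user id */

    int main(void)
    {
        uid_t priv = geteuid();           /* the privileged id we started with */

        if (seteuid(UNPRIV_UID) != 0) {   /* drop as early as possible */
            perror("seteuid(drop)");
            return 1;
        }

        /* ... the bulk of the program runs here without privilege ... */

        if (seteuid(priv) != 0) {         /* raise only for the privileged step */
            perror("seteuid(raise)");
            return 1;
        }
        /* ... privileged operation, kept as short as possible ... */

        if (seteuid(UNPRIV_UID) != 0) {   /* and give it back right away */
            perror("seteuid(drop)");
            return 1;
        }
        return 0;
    }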

2.2.10 Test Any Proposed Action Against Policy

For stringent security, you must be sure to test every attempted action against policy, step by step, before you carry it out. Saltzer called this idea the principle of complete mediation.

For example, if your application is a front end to a web shopping cart service, complete mediation would require that, before you add an item to the cart, you check, every time, to ensure that the cart belongs to the person operating the purchase form. As we interpret it, this does not mean that you need to reauthenticate the user over and over. But it does require that you check to make sure that the session has not expired, that the connection has not been broken and reasserted, and that no change has been made to the rules governing use of the shopping cart since the last transaction was entered.

The best example of a violation of this rule that we are aware of is the Unix operating system. When a file is opened, access rights are checked. If the agent seeking access (the "user") has appropriate rights, the operating system grants access, supplying as the return value of the system call a so-called "file handle." Subsequent references to the file by the program use the handle, not the name. [5]

[5] This arrangement is good in one way: the operating system is resistant to the trick that attackers like to pull of changing a file after it has been opened.

Because the access check is performed only at the time the file handle is created, subsequent requests to use the file will be honored. This is true even if, say, access permissions were tightened moments after the initial check was done (at which point, the user may no longer have legitimate rights to use the file).

Complete mediation is necessary to ensure that the moment-to-moment "decisions" made by software are in accordance with the up-to-date security settings of the system. It's not just a function of the operating system, of course: applications using databases constantly mediate access as well.
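Here is a sketch of what complete mediation looks like at the application level, using the shopping-cart example. The data structures and the policy_allows() routine are our own illustrations, not a real API; the point is that the check runs on every operation, so a rule change or an expired session takes effect immediately rather than being bypassed by a decision cached earlier.

    /*
     * Sketch of complete mediation: the policy test is repeated before
     * every operation instead of being cached at "open" time. The types
     * and the policy_allows() lookup are illustrative, not a real API.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct session { const char *user; bool expired; };
    struct cart    { const char *owner; };

    /* Consult the *current* rules and session status, every time. */
    static bool policy_allows(const struct session *s, const struct cart *c)
    {
        return !s->expired && s->user != NULL && c->owner != NULL &&
               strcmp(s->user, c->owner) == 0;
    }

    int add_item(struct session *s, struct cart *c, int item_id)
    {
        if (!policy_allows(s, c))     /* checked on every call, not once */
            return -1;
        printf("added item %d to %s's cart\n", item_id, c->owner);
        return 0;
    }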

2.2.11 Build in Appropriate Levels of Fault Tolerance

We like what the CERT Survivability Project has to say about an approach to fault tolerance and redundancy. They recommend that your enterprise should:

First, identify mission-critical functionality. Then, use the Three Rs:

  • Resistance (the capability to deter attacks)

  • Recognition (the capability to recognize attacks and the extent of damage)

  • Recovery (the capability to provide essential services and assets during attack and recover full services after attack)

You need to carefully think through the role your software is to play in your company's fault tolerance and continuity planning. Suppose that the software mediates access to a key corporate database, and that the company would lose significant revenues (or suffer other significant loss) if the application or resource were unavailable for even a short period of time. The design for the software's use must take this into account. This could mean including plans to:

  • Limit access only to key authorized users, and limit the demands even those people can make on the resources of your network, to defend against a determined attempt to deny service.

  • Make use of alternate resources, such as falling back to paper reports and manual record keeping.

  • Reproduce or restore information during the outage. (Remember that such a restoration may be complicated by such things as structural changes to the data schema over the range of time covered by the backup tapes.)

2.2.12 Address Error-Handling Issues Appropriately

We've lost count of the number of systems we've seen compromised as a result of improper handling of unexpected errors.

This is actually a mistake that can be made at any phase of the software lifecycle, as part of a flawed architecture, flawed design, flawed implementation, or even flawed operations, as follows:

Architect

The architect should decide on a general plan for handling errors. For example, you might stop on really bizarre, unimaginable ones and log others, or complain and die at the first hint of trouble. These are two arrangements we've used.

Designer

The designer should devise a rule about how the application will detect failures; how it will discriminate between cases; and the mechanisms it will use to respond. For example, a Unix designer might choose to check the result of all system calls and make syslog entries about failures.

Coder

The coder should be careful to capture the decision-triggering conditions and actually carry out the design.

Operator

Finally, operations folks come into play because they may need to check to see if critical processes have, in fact, stopped, or whether console messages have appeared or log files have filled up.

In our experience, the best plan for typical applications is to stop when an unexpected error occurs. Of course, there are cases when that's a really bad idea! (See the upcoming discussion in Section 2.2.14.) Figuring out what to do in such cases can be very difficult, but some architectural-level plan about error handling is essential to application security.
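As a concrete illustration of the "complain and die" arrangement and the designer's check-every-call rule above, here is a small sketch. The must_succeed() wrapper, the "exampled" program tag, and the configuration path are our own illustrative choices.

    /*
     * Sketch: check the result of every system call, record failures via
     * syslog, and stop on conditions the program has no plan for. The
     * must_succeed() wrapper, the "exampled" tag, and the file path are
     * illustrative choices.
     */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <syslog.h>
    #include <unistd.h>

    static int must_succeed(int rc, const char *what)
    {
        if (rc < 0) {
            syslog(LOG_ERR, "%s failed: %m -- stopping", what);
            exit(EXIT_FAILURE);       /* the "complain and die" plan */
        }
        return rc;
    }

    int main(void)
    {
        openlog("exampled", LOG_PID, LOG_DAEMON);

        int fd = must_succeed(open("/etc/exampled.conf", O_RDONLY),
                              "open configuration file");
        /* ... read and use the configuration ... */
        must_succeed(close(fd), "close configuration file");

        closelog();
        return 0;
    }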

2.2.13 Degrade Gracefully

Properly engineered and secure systems exhibit behavior known as graceful degradation. [6] This simply means that when trouble happens (for example, when a program runs out of memory or some other critical system resource), it doesn't just stop or panic, but rather continues to operate in a restricted or degraded way.

[6] This is an absolutely fundamental security engineering principle that, remarkably enough, is simply not included in much of the course material we have seen over the years.

The SYN flood attacks point out a great example of a design (in most TCP stacks, at least) that did not originally include graceful degradation. The reason the floods were so deadly is that the malformed TCP packets with which many networked systems were cascaded caused those systems to complain and die. Simply put, the fix implemented on many systems was to temporarily clamp down, or "choke," the number of simultaneous network connection attempts the system allowed. When the cascade attack stopped, the number of connection attempts to the system was "opened up" again. That's graceful degradation. Note, however, that this particular capability was not built into popular operating systems as part of the architecture. Rather, it had to be shoehorned in as a retrofit in response to yet another Internet security panic.

As another example, consider the use of "crumple zones" in cars. Any unfortunate soul who drives a car into a wall and manages to walk away because the front of the car has been designed to collapse and dissipate the kinetic energy has seen a kind of graceful degradation (or, at least, pre-selected failure points) at very close range.

A third example of good engineering was exhibited in the VMS operating system developed by Digital Equipment Corporation. As we heard the story, architects of that system (one was a friend of ours) made the deliberate decision that the factor limiting processing performance in almost any situation would be the amount of memory available to the CPU. As processes ran out of memory, the system would slow down but (hardly ever) crash. There were several results of this design: (a) Digital performance engineers could answer almost any performance complaint with the same answer, "Buy more memory"; (b) VMS was nicely crash-resistant; and (c) Digital sold a lot of memory.

2.2.14 Fail Safely

Is program failure a special case of degradation? Perhaps, but we think it deserves its own rule.

A good example of the need for safe failure is your corporate firewall. If the server that hosts the firewall dies, you want it to leave the network closed, not open, don't you? How about the program that controls the time lock on a bank vault? Surely the safe action in that case, too, is to fail "closed."

But as you might suspect, we have some counter-cases on our minds. Imagine that you work in a computer room with one of those push-button digitally coded electronic locks. In the case of a power failure, would you want the door to be frozen in the locked or the unlocked position? (In truth, even in the firewall case, we can imagine an enterprise that would prefer to keep selling when the firewall failed, rather than having to shut their virtual doors.)

Our favorite example involves a consulting job one of us turned down once to implement the Harvard "brain death" criteria on intensive care ward equipment. The idea was to turn off an iron lung machine automatically when certain sensor inputs indicated that the patient was clinically dead. You might, in writing software for such a device, decide that it should shut down the equipment altogether if it encountered an unexpected error condition. What do you think?

It's these kinds of complications that keep security architects in business. You will need to decide which way, "fail open" or "fail closed," is safer in your particular case, and then figure out how to make the safe case the default result.

2.2.15 Choose Safe Default Actions and Values

The "fail-safe" considerations discussed in the previous sections lead us to a broader point: the need to provide for safe default actions in general.

First, let's look at a simple example. When you think about authorization, your first thought might be that your software will naturally decide that a user does not have access rights until such time as you can determine that he does. That way, if your application sends a request for an authorization over to the local server and gets no convincing reply within a reasonable time, you don't need to take any special action.
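In code, "default deny" usually means that only an explicit, well-formed "yes" grants access; every other outcome, including silence, refuses it. A minimal sketch follows; the reply codes and the ask_auth_server() stub are our own inventions standing in for whatever lookup your environment really uses.

    /*
     * Sketch: authorization that defaults to "deny". Only an explicit yes
     * grants access; a timeout, an error, or an unrecognized reply all
     * refuse it. ask_auth_server() is a stub standing in for a real lookup.
     */
    #include <stdbool.h>

    enum auth_reply { AUTH_YES, AUTH_NO, AUTH_TIMEOUT, AUTH_ERROR };

    static enum auth_reply ask_auth_server(const char *user, const char *action)
    {
        (void)user; (void)action;
        return AUTH_TIMEOUT;          /* pretend the server never answered */
    }

    bool allowed(const char *user, const char *action)
    {
        switch (ask_auth_server(user, action)) {
        case AUTH_YES:
            return true;              /* the only path that grants access */
        default:
            return false;             /* "no", timeouts, and errors all deny */
        }
    }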

Fair enough. But can you think of a case where it's better to say "yes" unless you know you need to say "no"? Well, here's a case similar to the iron lung sensor failure scenario. What if you're writing firmware for a machine that delivers air to an incubator or water to a cooling tower in a nuclear plant? Suppose, further, that policy requires operators (or nurses) to sign in with a password when they come on shift, and your software receives several bad password tries in a row. Is the "safe" action to stop the flow of materials while you straighten out what's going on?

2.2.16 Stay on the Simple Side

If the essence of engineering is to transform problems we can't solve into problems that we can, the essence of security engineering is to build systems that are as simple as possible. Simple systems are easiest to design well and test thoroughly. Moreover, features that do not exist cannot be subverted, and programs that do not need to be written have no bugs.

We'll give the last word on this topic to Albert Einstein. "A good theory," he said, "should be as simple as possible, but no simpler."

2.2.17 Modularize Thoroughly

Modularize carefully and fully. Define with specificity the points of interface between modules: arguments, common memory structures, and so forth. Then limit privileges and resource usage to the modules that really need them.

Program functions that require exceptional privilege or access should routinely be held in separate logical design compartments and coded in separate, simple modules. The idea is to isolate the functions that need privileges (like special file access) from other parts of the code. In that way, functions are executed with privilege only when a particular operation requires it, and for only as long as it is needed.
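A sketch of what that compartmentalization can look like in source form appears below. The file names, the function, and the protected path are hypothetical; the point is that the rest of the program sees only one narrow entry point, and everything that touches the sensitive resource lives in a small module that can be reviewed on its own.

    /*
     * Sketch: privileged work confined to one small module behind a
     * narrow interface. File names, the function, and the path are
     * hypothetical.
     */

    /* ---- privops.h: the only doorway to privileged operations ---- */
    #include <stddef.h>
    int privops_read_secret(char *buf, size_t buflen);

    /* ---- privops.c: short, separately reviewable, and the only code
            that touches the protected file ---- */
    #include <fcntl.h>
    #include <unistd.h>

    int privops_read_secret(char *buf, size_t buflen)
    {
        if (buf == NULL || buflen == 0)
            return -1;

        int fd = open("/etc/example-secret", O_RDONLY);
        if (fd < 0)
            return -1;

        ssize_t n = read(fd, buf, buflen - 1);
        close(fd);
        if (n < 0)
            return -1;

        buf[n] = '\0';
        return 0;
    }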

For an excellent case study of the benefits of modularization, see the discussion of Wietse Venema's Postfix program in Chapter 3.

2.2.18 Don't Rely on Obfuscation

Back in the 1990s, the Internet security community engaged in quite a debate about the value of obfuscation as a design element. "Security through obscurity doesn't work!" was one rallying cry. To be sure, some of the heat derived from the aspect of the debate that touched on whether closed source or open source software tends to be safer. Many engineers argued passionately that concealing how something (such as an encryption algorithm) works, or where in a registry a particular policy parameter is stored, can be short-sighted and dangerous.

We certainly agree that reliance upon concealment of design details is generally misplaced. Security should be intrinsic. On the other hand, we place a positive reliance on the value of deception. So, if you can mislead an attacker into misapplying energies, do so, by all means. But don't rely on secrecy as a sole defense. And don't forget the value of simplicity. We have seen many clever twists and turns ("Oh, we don't keep it in that directory, the software looks up an environment variable that tells it...") that resulted in operational chaos after the application had been in production a while.

2.2.19 Maintain Minimal Retained State

As we discussed earlier in this chapter, the decision of how much information your software retains while a transaction or command is executed can often turn out to be an important design element. In the case of the SYN flood attacks, it was mostly the fact that the system had to retain information about incomplete requests that made the attack possible. Security engineers often refer to such information (somewhat inaccurately, from a computer scientist's point of view) as the program's state. Will that TCP handler you are writing be "stateful" or "stateless"? It's an important consideration. Our experience says it's best to strive for statelessness.

If a program retains minimal state, it's harder for it to get into a confused, disallowed state. More importantly, it's harder for a bad guy to modify state variables and create a disallowed program state, thus generating or facilitating anomalous program actions. Some of the easiest attacks against CGI scripts (see Chapter 4) work just this way. If you take a kind of sideways look at "stack-smashing" buffer overflows, you might conclude that they also operate by attacking the program's state.
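One common way to keep retained state minimal, in the spirit of the SYN-cookie fix, is to hand the state back to the other party in a token that only your program can validate. The sketch below is ours, not the book's; toy_mac() is only a placeholder, and a real design would use a proper HMAC with a randomly generated secret key.

    /*
     * Sketch, in the spirit of SYN cookies: instead of keeping a table
     * entry for every half-finished exchange, issue a token that encodes
     * a keyed check value and verify it when the client returns.
     * toy_mac() is NOT cryptographically sound -- it stands in for a real
     * HMAC with a randomly generated secret key.
     */
    #include <stdbool.h>
    #include <stdint.h>

    static const uint32_t secret_key = 0x5eedf00dU;   /* placeholder secret */

    static uint32_t toy_mac(uint32_t client_id, uint32_t seq)
    {
        uint32_t x = client_id ^ (seq * 2654435761U) ^ secret_key;
        x ^= x >> 16;
        return x * 2246822519U;
    }

    /* Step 1: issue a token instead of allocating server-side state. */
    uint32_t issue_token(uint32_t client_id, uint32_t seq)
    {
        return toy_mac(client_id, seq);
    }

    /* Step 2: when the client comes back, recompute and compare. */
    bool token_valid(uint32_t client_id, uint32_t seq, uint32_t token)
    {
        return token == toy_mac(client_id, seq);
    }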

2.2.20 Adopt Practical Measures Users Can Live With

In theory, there should be no difference between theory and practice. But in practice, there is. And we think that security engineering is like medical practice in this respect: it tends to reward those who think realistically. Folks who practice self-deception in our field can often fail in spectacular ways. Engineers who fail to take into account the way users think and work make themselves part of the problem instead of the solution.

So, select a user interface that makes it easy to do the right thing. Use mental models and paradigms drawn from the real world and familiar to everyone. (Banks, safes, locks, and keys are useful exemplars.) This is sometimes called the principle of psychological acceptability.

If the security measures you build into a system are so onerous or so irritating that users simply work around them (and trust us, they will if they need to), you won't have accomplished anything useful. Chapter 3 recounts a few of the impractical measures (such as "idle-terminal timeouts") we championed in the days before (just before) we learned this lesson.

2.2.21 Make Sure Some Individual Is Accountable

A successful architecture ensures that it's possible to hold individuals responsible for their actions. This requirement means that:

  • Each user must have and use an individual account, not a "group" account.

  • It must be reasonably difficult for one person to pose as another.

  • Responsibility for the security of the assets involved must be clearly assigned. Everyone must be able to answer such questions as "Who is responsible for the integrity of this database?" There should never be a doubt.

Oh, and to be explicit: accountable individuals must be aware that they are being held accountable.

2.2.22 Self-Limit Program Consumption of Resources

A very common method of attack against application systems involves attempting to exhaust system resources such as memory and processing time. The best countermeasure is to use whatever facilities the operating system makes available to limit the program's consumption of those resources.

In general, programs that use a system's resources gently contribute to overall security. One reason is that it's often when those hard limits are reached that seldom-tested exceptions and combinations are invoked, whether the system is about to run out of slots for open files, running programs, or open TCP connections.

Still, resource-consumption limitations must be combined with meaningful error recovery to be most effective. Suppose you decide to limit how much of the system's memory your program can consume. Well done! Now, remember to ensure that you design and implement measures that will detect and rationally handle the fact that the limit has been reached. Even better, of course, would be graceful degradation measures that check whether memory is beginning to run low, and take steps to stem the flow of demands before the hard threshold is reached.
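Here is a minimal, Linux-flavored sketch of that combination: the process asks the kernel to cap its own address space with setrlimit(), and then treats a refused allocation as a condition to handle rather than a surprise. The 64 MB figure is an arbitrary illustration.

    /*
     * Sketch: self-limit memory consumption via setrlimit(), then handle
     * the resulting allocation failures deliberately. The 64 MB ceiling
     * is an arbitrary illustration; RLIMIT_AS is the Linux-style limit on
     * total address space.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit lim;
        lim.rlim_cur = 64UL * 1024 * 1024;
        lim.rlim_max = 64UL * 1024 * 1024;

        if (setrlimit(RLIMIT_AS, &lim) != 0) {
            perror("setrlimit");
            return 1;
        }

        void *p = malloc(128UL * 1024 * 1024);   /* exceeds our own ceiling */
        if (p == NULL) {
            fprintf(stderr, "allocation refused; shedding load instead\n");
            /* ... degrade gracefully: refuse new work, flush caches ... */
            return 0;
        }
        free(p);
        return 0;
    }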

2.2.23 Make Sure It's Possible to Reconstruct Events

It must be possible to reconstruct the sequence of events leading up to key actions (for example, changes to data). This requirement means that the application must make and retain audit logs. The host system must make and retain event logs as well. Such a feature is often referred to as auditability.
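As a small illustration, a single audit routine that every sensitive operation calls is usually enough to make reconstruction possible. The "appaudit" tag, the syslog facility, and the field names below are our own choices, not a prescribed format.

    /*
     * Sketch: one choke point that records who did what to which object,
     * and with what outcome, so events can be reconstructed later. The
     * "appaudit" tag, the facility, and the field names are our choices.
     */
    #include <syslog.h>

    void audit_init(void)
    {
        openlog("appaudit", LOG_PID, LOG_AUTH);
    }

    void audit_event(const char *user, const char *action,
                     const char *object, const char *outcome)
    {
        syslog(LOG_NOTICE, "audit user=%s action=%s object=%s outcome=%s",
               user, action, object, outcome);
    }

    /*
     * Example call sites:
     *   audit_event("alice", "update", "payroll.db", "success");
     *   audit_event("mallory", "delete", "payroll.db", "denied");
     */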

We'll discuss audit logs in much more detail as an element of the operational security of software, described in Chapter 5.

2.2.24 Eliminate "Weak Links"

There is no use barricading the front door if you are going to leave the back door open and unattended. Everyone can see the sense of this. Why, then, do we see so many applications and systems with "weak links" that open gaping holes?

We think the answer is part economic, part psychological, and part practical. We discussed many of these elements in Chapter 1 and won't repeat them here. Whatever the cause, a tendency to leave gaping holes certainly is part of the common current-day development experience.

What this principle tells you to strive for is to provide a consistent level of defensive measures over the entire range of your program. This point is reinforced by the two following principles, which approach the problem from slightly different perspectives.

2.2.25 Build in Multiple Layers of Defense

Providing defense in depth is better than relying on a single barrier. In other words, don't put all your security eggs in one basket. Require a user to have both proper permissions and a password before allowing him to access a file.

We regard this principle as a point of common sense requiring little elaboration. Why else would some gentlemen of ample girth and a certain age (one of your authors surely qualifies) make it a habit to wear both a belt and suspenders, but to guard against a well-understood risk scenario with defense in depth?

2.2.26 Treat an Application as a Holistic Whole

One factor commonly overlooked by software engineers trying to write secure applications is that the application system as a whole needs to be secured. [7]

[7] We both learned this best from Tim Townsend of Sun Microsystems and thank him for the lesson.

We are not just singing our old song here about building security in at every stage of development. We are arguing, in addition, that you need to consider the entire collection of interoperating application software, support software, network connectivity, and hardware in analyzing threats, the chain of trust, and so forth.

Imagine that you use your web browser to look at a file built by Microsoft Word. Imagine, further, that the Word file has a macro in it that makes use of a Visual Basic program to perform some undesired action on your computer. If that were to happen (we understand that some browsers may have code in them to disable or warn against this case, but let's follow the argument through), in which software piece would you say the security flaw exists?

Our point is that a set of reasonable design decisions may well combine in a way that is unreasonable. A necessary step in avoiding this outcome is to look at all the pieces, all the steps, and work through them all, with a worrying eye, considering what can go wrong during each step.

2.2.27 Reuse Code Known to Be Secure

Steal from your friends! That's what we used to say when promoting the DECUS (Digital Equipment Corporation User Society) code library in the old days. As the folks at CPAN (Comprehensive Perl Archive Network, the Perl code repository) well know, it's still good advice these days. If you can get your hands on existing code that does what you need to do (or illustrates how to do it) securely, waste no time in getting permission to make thorough use of it.

We have a lot of sympathy for readers who are not inclined to adopt this particular practice immediately. The experience (which we've endured more than once) of trying to fix one's own engineering mistakes some years after they were committed is all that's necessary to convince folks on this point. And, after all, why should you bother solving problems that have already been solved?

How to find "secure" code extracts to adapt: now that's a challenge. Many large corporations (and, surprisingly, a good percentage of small ones) have "code reuse" policies and repositories that you should scavenge around in. Of course, if you are in the position of being able to reuse open source or publicly available software, you're in much better shape than folks locked in to the proprietary software world. News lists, cooperative coding projects, books and articles, and word of mouth are all sources that you or someone on your project team should be carefully sifting through.

On this subject, we'll say a word about open source software. As you may know, the source code for much of the software that runs the Internet is publicly available. (CGI scripts and the Linux operating system are two good examples.) Security experts have been debating for many years whether open source software is generally more secure than closed source (proprietary) software. The argument is that because open source is open to inspection by many eyes, vulnerabilities are likely to be found and fixed sooner than in the proprietary case.

We are inclined to agree with that proposition. But please, be careful. It would be clearly irresponsible for you to assume that a particular program or algorithm is secure from attack just because it has been publicly available for many years. The Kerberos Version 4 "random number" vulnerability [8] and a number of faults in well-known encryption schemes over the years make this point very convincingly. Be sure to do your own security analysis of borrowed software before putting it into operation.

[8] For details about this bug, see http://www.ieee-security.org/Cipher/Newsbriefs/1996/960223.kerbbug.html; we also discuss it briefly in Chapter 4.

2.2.28 Don't Rely on Off-the-Shelf Software for Security

There is another consideration about software reuse that applies specifically to commercial third-party or "off-the-shelf" products.

You should be especially careful about relying on such software or services for critical operations. To be sure, sometimes an outsourced solution is the secure choice; but be aware of any dependencies or additional risks to confidentiality you create by relying on outside technology or services. This concern is especially relevant to proprietary software, where you may have little control over the pace of fixes and updates, the course of future development, or a decision to declare an "end of life" for it at a time inopportune for you.

And, as we've said before, be sure to assess the security aspects of any solution you bring in-house carefully before (a) deciding to make use of it, or (b) deploying it.

2.2.29 Don't Let Security Needs Overwhelm Democratic Principles

In implementing security measures, we believe that the principles of individual privacy and the laws governing the storage, transmission, and use of information must be respected. Security architects today can and should be held responsible for obeying the law and for helping to lay the foundation for the future of freedom. The decisions we make concerning how to protect private information, detect tampering, or anticipate and defend against attack can have an impact far beyond their technical effect.

This is a new and different kind of architectural principle, one that transcends the technical. We find its emergence and acceptance (in GASSP, for example), as the Internet grows to become an essential part of our everyday lives, quite encouraging.

Please understand that your authors, who are both Americans, make no claim that the views of the United States on these matters should prevail. Privacy laws and their interaction with cyber security vary considerably from country to country. What we are arguing is that engineers should keep these laws and societal values in mind as they work.

2.2.30 Remember to Ask, "What Did I Forget?"

Just as your authors are gratified that we remembered to include this principle here at the end, we also hope that you'll strive always to ask yourself and your colleagues what you might have forgotten to take into account on any particular project. This practice is perhaps the most important habit for members of our profession to adopt.

   

