3.5 Controls Against Program Threats
The picture we have just described is not pretty. There are many ways a program can fail and many ways to turn the underlying faults into security failures.
In this section we look at three types of controls: developmental, operating system, and administrative. We discuss each in turn.
Many controls can be applied during software development to detect and correct the kinds of flaws we have described.
The Nature of Software Development
Software development is often considered a solitary effort, but it usually involves people with several different skills: people who specify the system, design it, implement it, test it, review it, document it, manage it, and maintain it.
One person could do all these things. But more often than not, a team of developers works together to perform these tasks. Sometimes a team member does more than one activity; a tester can take part in a requirements review, for example, or an implementer can write documentation. Each team is different, and team dynamics play a large role in the team's success.
We can examine both product and process to see how each contributes to quality and, in particular, to security.
Modularity, Encapsulation, and Information Hiding
Code usually has a long shelf-life, and it is enhanced over time as needs change and faults are found and fixed. For this reason, a key principle of software engineering is to create a design or code in small, self-contained units, called components or modules; when a system is built this way, we say that it is modular.
If a component is isolated from the effects of other components, then it is easier to trace a problem to the fault that caused it and to limit the damage the fault causes. It is also easier to maintain the system, since changes to an isolated component do not affect other components. And it is easier to see where vulnerabilities may lie if the component is isolated. We call this isolation encapsulation.
Information hiding is another characteristic of modular software. When information is hidden, each component hides its precise implementation or some other design decision from the others. Thus, when a change is needed, the overall design can remain stable even though the implementation details change.
Let us look at these characteristics in more detail.
Figure 3-16. Modularity.
Often, other characteristics, such as having a single input and single output or using a limited set of programming constructs, help a component be modular. From a security standpoint, modularity should improve the likelihood that an implementation is correct.
In particular, smallness is an important quality that can help security analysts understand what each component does. That is, in good software, design and program units should be only as large as needed to perform their required functions. There are several advantages to having small, independent components.
Security analysts must be able to understand each component as an independent unit and be assured of its limited effect on other components.
A modular component usually has high cohesion and low coupling. By cohesion, we mean that all the elements of a component have a logical and functional reason for being there; every aspect of the component is tied to the component's single purpose. A highly cohesive component has a high degree of focus on the purpose; a low degree of cohesion means that the component's contents are an unrelated jumble of actions, often put together because of time dependencies or convenience.
Coupling refers to the degree to which a component depends on other components in the system. Thus, low or loose coupling is better than high or tight coupling, because the loosely coupled components are free from unwitting interference from other components, as Figure 3-17 illustrates.
Figure 3-17. Coupling.
Encapsulation hides a component's implementation details, but it does not necessarily mean complete isolation. Many components must share information with other components, usually with good reason. However, this sharing is carefully documented so that a component is affected only in known ways by other components in the system.
An encapsulated component's protective boundary can be translucent or transparent, as needed. Berard [BER00] notes that encapsulation is the "technique for packaging the information [inside a component] in such a way as to hide what should be hidden and make visible what is intended to be visible."
Developers who work where modularization is stressed can be sure that other components will have limited effect on the ones they write. Thus, we can think of a component as a kind of black box, with certain well-defined inputs and outputs and a well-defined function. Other components' designers do not need to know how the module completes its function; it is enough to be assured that the component performs its task in some correct manner.
Figure 3-18. Information Hiding.
These three characteristics (modularity, encapsulation, and information hiding) are fundamental principles of software engineering. They are also good security practices because they lead to modules that can be understood and analyzed independently of the rest of the system.
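To make these ideas concrete, here is a minimal sketch in C (all names hypothetical, not from the text) of a component that encapsulates its implementation: callers see only the narrow interface declared in the header, while the data representation stays hidden in the implementation file and can change without affecting any caller.

    /* account.h -- the visible interface; the struct is declared but
     * never defined here, so callers cannot reach inside it. */
    typedef struct account account;          /* opaque type */
    account *account_open(double initial_balance);
    int      account_withdraw(account *a, double amount);  /* 0 = success */
    double   account_balance(const account *a);
    void     account_close(account *a);

    /* account.c -- the hidden implementation; only this file knows
     * how an account is represented. */
    #include <stdlib.h>
    #include "account.h"

    struct account { double balance; };      /* may change freely later */

    account *account_open(double initial_balance) {
        account *a = malloc(sizeof *a);
        if (a) a->balance = initial_balance;
        return a;
    }

    int account_withdraw(account *a, double amount) {
        if (amount < 0 || amount > a->balance)
            return -1;                       /* reject improper requests */
        a->balance -= amount;
        return 0;
    }

    double account_balance(const account *a) { return a->balance; }
    void   account_close(account *a)         { free(a); }

Because other components can use an account only through these four calls, a security analyst can verify the component's limited effect on the rest of the system by reading a single file.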
Several software development practices contribute to the quality and security of the finished product, among them peer reviews, hazard analysis, testing, good design, prediction, static analysis, configuration management, and analysis of mistakes. Here, we look at each practice briefly.
You have probably been doing some form of review for as many years as you have been writing code: desk-checking your work or asking a colleague to look over a routine to ferret out any problems.
A wise engineer who finds a fault can deal with it in at least three ways:
learning how, when, and why errors occur
taking action to prevent mistakes
scrutinizing products to find the instances and effects of errors that were missed
Peer reviews address this problem directly.
But there are compelling reasons to do reviews. An overwhelming amount of evidence suggests that various types of peer review in software engineering can be extraordinarily effective. For example, early studies at Hewlett-Packard in the 1980s revealed that those developers performing peer review on their projects enjoyed a very significant advantage over those relying only on traditional dynamic testing techniques, whether black-box or white-box. Figure 3-19 compares the fault discovery rates of several of these techniques.
Figure 3-19. Fault Discovery Rate, in Faults Found per Thousand Lines of Code.
The inspection process involves several important steps: planning, individual preparation, a logging meeting, rework, and follow-up.
During the review process, it is important to keep careful track of what each reviewer discovers and how quickly he or she discovers it. This log suggests not only whether particular reviewers need training but also whether certain kinds of faults are harder to find than others. Additionally, a root cause analysis for each fault found may reveal that the fault could have been discovered earlier in the process. For example, a requirements fault that surfaces during a code review should probably have been found during a requirements review. If there are no requirements reviews, you can start performing them. If there are requirements reviews, you can examine why this fault was missed and then improve the requirements review process.
The fault log can also be used to build a checklist of items to be sought in future reviews. The review team can use the checklist as a basis for questioning what can go wrong and where. In particular, the checklist can remind the team of security breaches, such as unchecked buffer overflows, that should be caught and fixed before the system is placed in the field.
Hazard analysis is a set of systematic techniques intended to expose potentially hazardous system states.
Although hazard analysis is generally good practice on any project, it is required in some regulated and critical application domains, where it can provide early warning of safety and security problems.
A variety of techniques support the identification and management of potential hazards. Among the most effective are:
hazard and operability studies (HAZOP)
failure modes and effects analysis (FMEA)
fault tree analysis (FTA)
HAZOP is a structured analysis technique originally developed for the process control and chemical plant industries. Over the last few years it has been extended to discover potential hazards in safety-critical software systems.
Each of these techniques is clearly useful for finding and preventing security breaches. We decide which technique is most appropriate by understanding how much we know about causes and effects. For example, Table 3-7 suggests that when we know both the cause and the effect of a given problem, we can strengthen the description of how the system should behave. This clearer picture will help requirements analysts understand how a potential problem is linked to other requirements. It also helps designers understand exactly what the system should do and helps testers know how to test to verify that the system is behaving properly. If we can describe a known effect with unknown cause, we use deductive techniques such as fault tree analysis to help us understand the likely causes of the unwelcome behavior. Conversely, we may know the cause of a problem but not understand all the effects; here, we use inductive techniques such as failure modes and effects analysis to help us trace from cause to all possible effects. For example, suppose we know that a subsystem is vulnerable to failure but we do not know how such a failure would ripple through the rest of the system; failure modes and effects analysis lets us enumerate the possible effects. Finally, when neither cause nor effect is known, exploratory techniques such as hazard and operability studies help us discover problems we did not suspect.
Table 3-7. Perspectives for Hazard Analysis.

                     Known cause                            Unknown cause
  Known effect       Description of system behavior         Deductive analysis, including
                                                            fault tree analysis
  Unknown effect     Inductive analysis, including          Exploratory analysis, including
                     failure modes and effects analysis     hazard and operability studies
We see in Chapter 8 that hazard analysis is also useful for determining vulnerabilities and mapping them to suitable controls.
Testing is a process activity that homes in on product quality: making the product failure free or failure tolerant.
Testing usually involves several stages. First, each program component is tested on its own, isolated from the other components in the system. Such testing, known as module testing, component testing, or unit testing, verifies that the component functions properly with the types of input expected from a study of the component's design. Unit testing is done in a controlled environment whenever possible so that the test team can feed a predetermined set of data to the component being tested and observe what output actions and data are produced. In addition, the test team checks the internal data structures, logic, and boundary conditions for the input and output data.
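As a minimal sketch of such a unit test (the component and the test are hypothetical examples, not from the text), the test feeds predetermined inputs to one small component and checks both normal behavior and boundary conditions:

    #include <assert.h>
    #include <string.h>

    /* Component under test: copy at most n-1 characters and always
     * null-terminate, so the destination buffer cannot overflow. */
    static void bounded_copy(char *dst, const char *src, size_t n) {
        if (n == 0) return;                  /* boundary: no room at all */
        strncpy(dst, src, n - 1);
        dst[n - 1] = '\0';
    }

    int main(void) {
        char buf[4];

        bounded_copy(buf, "ok", sizeof buf);        /* expected input */
        assert(strcmp(buf, "ok") == 0);

        bounded_copy(buf, "overflow", sizeof buf);  /* boundary: input too long */
        assert(strcmp(buf, "ove") == 0);            /* truncated but terminated */

        bounded_copy(buf, "x", 0);                  /* boundary: empty buffer */
        return 0;                                   /* all checks passed */
    }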
When collections of components have been subjected to unit testing, the next step is ensuring that the interfaces among the components are defined and handled properly. Indeed, interface mismatch can be a significant security vulnerability. Integration testing is the process of verifying that the system components work together as described in the system and program design specifications.
Once we are sure that information is passed among components in accordance with the design, we test the system to ensure that it has the desired functionality. A function test evaluates the system to determine whether the functions described by the requirements specification are actually performed by the integrated system.
The function test compares the system being built with the functions described in the developers' requirements specification. Then, a performance test compares the system with the remainder of these software and hardware requirements. It is during the function and performance tests that security requirements are examined, and the testers confirm that the system is as secure as it is required to be.
When the performance test is complete, developers are certain that the system functions according to their understanding of the system description. The next step is conferring with the customer to make certain that the system works according to customer expectations. Developers join the customer to perform an acceptance test, in which the system is checked against the customer's requirements description. Upon completion of acceptance testing, the accepted system is installed in the environment in which it will be used. A final installation test is run to make sure that the system still functions as it should. However, security requirements often state that a system should not do something. As Sidebar 3-6 explains, it is much harder to demonstrate the absence of undesired behavior than the presence of desired behavior.
The objective of unit and integration testing is to ensure that the code implemented the design properly; that is, that the programmers have written code to do what the designers intended. System testing has a very different objective: to ensure that the system does what the customer wants it to do. Regression testing, an aspect of system testing, is particularly important for security purposes. After a change is made to enhance the system or fix a problem, regression testing ensures that all remaining functions are still working and performance has not been degraded by the change.
Each of the types of tests listed here can be performed from two perspectives: black box and clear box (sometimes called white box).
Black-box testing treats a system or its components as black boxes; testers cannot "see inside" the system, so they apply particular inputs and verify that they get the expected output.
Clear-box testing allows visibility. Here, testers can examine the design and code directly, generating test cases based on the code's actual construction. Thus, clear-box testing can exercise internal paths and conditions, such as seldom-taken branches, that black-box testing might never reach.
Sidebar 3-6 Absence vs. Presence
Pfleeger [PFL97] points out that security requirements resemble those for any other computing task, with one seemingly insignificant difference. Whereas most requirements say "the system will do this," security requirements add the phrase "and nothing more." As we pointed out in Chapter 1, security awareness calls for more than a little caution when a creative developer takes liberties with the system's specification. Ordinarily, we do not worry if a programmer or designer adds a little something extra. For instance, if the requirement calls for generating a file list on a disk, the "something more" might be sorting the list into alphabetical order or displaying the date it was created. But we would never expect someone to meet the requirement by displaying the list and then erasing all the files on the disk!
If we could determine easily whether an addition was harmful, we could simply disallow harmful additions. But in general we cannot, so we must insist on "and nothing more" and verify it during review and testing.
It is natural for programmers to want to exercise their creativity in extending and expanding the requirements. But apparently harmless additions can turn out to be anything but harmless.
In another instance, an enthusiastic programmer added parity checking to a cryptographic procedure. Because the keys were generated randomly, many perfectly good keys failed the parity check and were rejected as erroneous.
No technology can automatically distinguish between malicious and benign code. For this reason, we have to rely on a combination of approaches, including human-intensive review and careful testing, to raise our confidence that the code does only what it should.
The mix of techniques appropriate for testing a given system depends on the system's size, application domain, amount of risk, and many other factors. But understanding the effectiveness of each technique helps us know what is right for each particular system. For example, Olsen [OLS93] describes the development at Contel IPC of a system containing 184,000 lines of code. He tracked faults discovered during various activities, and found differences:
17.3 percent of the faults were found during inspections of the system design
19.1 percent during component design inspection
15.1 percent during code inspection
29.4 percent during integration testing
16.6 percent during system and regression testing
Only 0.1 percent of the faults were revealed after the system was placed in the field. Thus, Olsen's work shows the importance of using different techniques to uncover different kinds of faults during development; it is not enough to rely on a single method for catching all problems.
Who does the testing? From a security standpoint, independent testing is highly desirable; it may prevent a developer from attempting to hide something in a routine, or keep a subsystem from controlling the tests that will be applied to it. Thus, independent testing increases the likelihood that a test will expose the effect of a hidden feature.
We saw earlier in this chapter that modularity, information hiding, and encapsulation are characteristics of good design. Several design-related process activities are particularly helpful in building secure software:
using a philosophy of fault tolerance
having a consistent policy for handling failures
capturing the design rationale and history
using design patterns
We describe each of these activities in turn.
Designs should try to anticipate faults and handle them in ways that minimize disruption and maximize safety and security. Ideally, we want our system to be fault free. But in reality, we must assume that the system will fail, and we make sure that unexpected failure does not bring the system down, destroy data, or destroy life. For example, rather than waiting for the system to fail (called passive fault detection), we might construct the system so that it reacts in an acceptable way to a failure's occurrence. Active fault detection could be practiced by, for instance, adopting a philosophy of mutual suspicion. Instead of assuming that data passed from other systems or components are correct, a mutually suspicious component checks that incoming data are reasonable and of the right type and format, as the sketch below illustrates.
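Here is a minimal sketch of that mutual suspicion in C (the message format and function names are hypothetical): the receiving component revalidates a length field supplied by another component instead of trusting it, which also happens to block a classic buffer overflow.

    #include <string.h>

    #define MAX_MSG 512

    /* A record handed to us by another component. Under mutual suspicion
     * we treat its claimed length as unverified input. */
    struct message {
        size_t claimed_len;
        char   data[MAX_MSG];
    };

    int store_message(char *out, size_t out_size, const struct message *m) {
        if (m == NULL || m->claimed_len > MAX_MSG || m->claimed_len > out_size)
            return -1;               /* detect the fault at the interface */
        memcpy(out, m->data, m->claimed_len);
        return 0;
    }

    int main(void) {
        struct message m = { 3, "hi" };
        char out[8];
        return store_message(out, sizeof out, &m);
    }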
If correcting a fault is too risky, inconvenient, or expensive, we can choose instead to practice fault tolerance: isolating the damage caused by the fault and minimizing disruption to users. Although fault tolerance is not always thought of as a security technique, it supports the idea, discussed in Chapter 8, that our security policy allows us to choose to mitigate the effects of a security problem instead of preventing it. For example, rather than install expensive security controls, we may choose to accept the risk that important data may be corrupted. If in fact a security fault destroys important data, we may decide to isolate the damaged data set and automatically revert to a backup data set so that users can continue to perform system functions.
More generally, we can design or code defensively, just as we drive defensively, by constructing a consistent policy for handling failures. Typically, failures include
failing to provide a service
providing the wrong service or data
corrupting data
We can build into the design a particular way of handling each problem, selecting from one of three ways:
Retrying : restoring the system to its previous state and performing the service again, using a different strategy
Correcting : restoring the system to its previous state, correcting some system characteristic, and performing the service again, using the same strategy
Reporting : restoring the system to its previous state, reporting the problem to an error-handling component, and not providing the service again
This consistency of design helps us check for security vulnerabilities; we look for instances that are different from the standard approach.
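The sketch below shows what such a consistent policy might look like in C; the service and its failure modes are hypothetical, but every failure funnels through the same three-way choice of retrying, correcting, or reporting.

    #include <stdio.h>

    typedef enum { SVC_OK, SVC_TRANSIENT, SVC_BAD_STATE, SVC_FATAL } svc_status;

    /* Stubs standing in for real system operations (illustrative only). */
    static void restore_previous_state(void) { /* roll back side effects */ }
    static void report_error(const char *what) { fprintf(stderr, "error: %s\n", what); }
    static svc_status perform_service(int strategy) { (void)strategy; return SVC_OK; }

    /* The single, consistent policy: every failure is retried, corrected,
     * or reported; none is handled in an ad hoc way. */
    static svc_status handle_request(void) {
        switch (perform_service(0)) {
        case SVC_TRANSIENT:                 /* Retrying: new strategy */
            restore_previous_state();
            return perform_service(1);
        case SVC_BAD_STATE:                 /* Correcting: fix state, same strategy */
            restore_previous_state();
            /* ...correct the offending system characteristic here... */
            return perform_service(0);
        case SVC_FATAL:                     /* Reporting: log, do not retry */
            restore_previous_state();
            report_error("service failed");
            return SVC_FATAL;
        default:
            return SVC_OK;
        }
    }

    int main(void) { return handle_request() == SVC_OK ? 0 : 1; }

An analyst looking for vulnerabilities can then concentrate on any failure path that bypasses handle_request, since that path departs from the standard approach.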
Design rationales and history tell us the reasons the system is built one way instead of another. Such information helps us as the system evolves, so we can integrate the design of our security functions without compromising the integrity of the original design.
Moreover, the design history enables us to look for patterns, noting what designs work best in which situations. For example, we can reuse patterns that have been successful in preventing buffer overflows, in ensuring data integrity, or in implementing other security-critical functions.
Among the many kinds of prediction we do during software development, we try to predict the risks involved in building and using the system. As we see in depth in Chapter 8, we must postulate which unwelcome events might occur and then make plans to avoid them or at least mitigate their effects. Risk prediction and management are especially important for security, where we are always dealing with unwanted events that have negative consequences. Our risk analysis helps us decide which security controls are worth their cost and where they should be applied.
Before a system is up and running, we can examine its design and code to locate and repair security flaws. We noted earlier that the peer review process involves this kind of scrutiny. But static analysis is more than peer review, and it is usually performed before peer review. We can use tools and techniques to examine the characteristics of design and code to see if the characteristics warn us of possible faults lurking within. For example, a large number of levels of nesting may indicate that the design or code is hard to understand and hence hard to review, giving faults (or malicious code) a place to hide.
To this end, we can examine several aspects of the design and code:
control flow structure
data flow structure
data structure
The control flow is the sequence in which instructions are executed, including iterations and loops. This aspect of design sometimes helps us locate faults that could create security problems. The data flow follows the trail of a data item as it is accessed and modified by the system; data flow analysis shows us how and when each data item is written, read, and changed.
The data structure is the way in which the data are organized, independent of the system itself. For instance, if the data are arranged as lists, stacks, or queues, the algorithms for manipulating them are likely to be well understood and well defined.
There are many approaches to static analysis, especially because there are so many ways to create and document a design or program. Automated tools are available to generate not only these structures but also measurements of design and code characteristics, flagging anomalies that deserve closer scrutiny.
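As an illustration of the idea (a toy, not a real analyzer; a real tool would parse the language rather than count characters), the following C program flags source files whose brace-nesting depth looks suspiciously deep:

    #include <stdio.h>

    int main(int argc, char *argv[]) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file.c\n", argv[0]);
            return 2;
        }
        FILE *f = fopen(argv[1], "r");
        if (f == NULL) { perror("fopen"); return 2; }

        int c, depth = 0, max_depth = 0;
        while ((c = getc(f)) != EOF) {       /* crude: ignores strings, comments */
            if (c == '{' && ++depth > max_depth) max_depth = depth;
            else if (c == '}' && depth > 0)      depth--;
        }
        fclose(f);

        printf("max nesting depth: %d%s\n", max_depth,
               max_depth > 6 ? "  (deep nesting: review this file)" : "");
        return 0;
    }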
When we develop software, it is important to know who is making which changes to what and when:
corrective changes: maintaining control of the system's day-to-day functions
adaptive changes: maintaining control over system modifications
perfective changes: perfecting existing acceptable functions
preventive changes: preventing system performance from degrading to unacceptable levels
We want some degree of control over the software changes so that one change does not inadvertently undo the effect of a previous change. And we want to control what is often a proliferation of different versions and releases of the system. Configuration management is the process by which we exercise that control.
Four activities are involved in configuration management:
configuration identification
configuration control and change management
configuration auditing
status accounting
Configuration identification sets up baselines to which all other code will be compared after changes are made. That is, we build and document an inventory of all components that comprise the system. The inventory includes not only the code you and your colleagues may have created, but also database management systems, third-party software, libraries, test cases, documents, and more. Then, we "freeze" the baseline and carefully control what happens to it. When a change is proposed and made, it is described in terms of how the baseline changes.
Configuration control and change management ensure we can coordinate separate, related versions. For example, there may be closely related versions of a system to execute on 16-bit and 32-bit processors. Three ways to control the changes are separate files, deltas, and conditional compilation. If we use separate files, we have different files for each release or version. For example, we might build an encryption system in two configurations: one that uses a short key length, to comply with legal restrictions on export, and another that uses a long key length for domestic use.
Alternatively, we can designate one version as the main version of the system and define the other versions as changes, or deltas, from it. A delta file contains only the differences needed to transform the main version into the variant.
Finally, we can do conditional compilation, whereby a single code component addresses all versions, relying on the compiler to determine which statements to apply to which versions. This approach seems appealing for security applications because all the code appears in one place. However, if the variations are very complex, the code may be very difficult to read and understand.
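A minimal sketch of the technique, using the short-key/long-key example above (the key lengths and macro name are illustrative assumptions):

    #include <stdio.h>

    #ifdef EXPORT_VERSION
    #define KEY_BITS 40            /* short key: export-restricted build */
    #else
    #define KEY_BITS 128           /* long key: domestic build */
    #endif

    unsigned char key[KEY_BITS / 8];

    int main(void) {
        printf("configured key length: %d bits\n", KEY_BITS);
        return 0;
    }

    /* Both versions come from the one baseline file:
     *   cc -DEXPORT_VERSION crypto.c    builds the short-key configuration
     *   cc crypto.c                     builds the long-key configuration
     */

The price, as noted, is readability: when many versions share one file, the #ifdef logic itself becomes a place for faults, or malice, to hide.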
Once a configuration management technique is chosen and applied, the system is subject to configuration auditing: periodic checks that the baseline is accurate and that all recorded changes have actually been made. Finally, status accounting records information about the components: where they came from (for instance, purchased, reused, or written from scratch), the current version, the change history, and pending change requests.
All four sets of activities are performed by a configuration and change control board, or CCB. The CCB contains representatives from all organizations with a vested interest in the system, perhaps including customers, users, and developers. The board reviews all proposed changes and approves changes based on need, design integrity, future plans for the software, cost, and more. The developers implementing and testing the change work with a program librarian to control and update relevant documents and components; they also write detailed documentation about the changes and test results.
Configuration management offers two advantages to those of us with security concerns: protecting against unintentional threats and guarding against malicious ones. Both goals are addressed when the configuration management processes protect the integrity of programs and documentation. Because changes occur only after explicit approval from a configuration management authority, all changes are also carefully evaluated for side effects. With configuration management, previous versions of programs are archived, so a developer can retract a faulty change when necessary.
Malicious modification is made quite difficult with a strong review and configuration management process in place. In fact, as presented in Sidebar 3-7, poor configuration control has resulted in at least one system failure; that sidebar also confirms the principle of easiest penetration from Chapter 1: an attacker uses whatever means is easiest, not necessarily the one we expect.
Sidebar 3-7 There's More Than One Way to Crack a System
In the 1970s the primary security assurance strategy was "penetration" or "tiger team" testing. A team of computer security experts would be hired to attack a system, probing for exploitable vulnerabilities, before the system was pronounced ready for use.
The U.S. Department of Defense was testing the Multics system, which had been designed and built under extremely high security quality standards. Multics was being studied as a base operating system for the WWMCCS command and control system. The developers from M.I.T. were justifiably proud of the strength of the security of their system, and the sponsoring agency invoked the penetration team with a note of haughtiness. But the developers underestimated the security testing team.
Led by Roger Schell and Paul Karger, the team analyzed the code and performed their tests without finding major flaws. Then one team member thought like an attacker. He wrote a small piece of trapdoor code and, exploiting the project's loose configuration control, slipped it into the system through the normal development and distribution process.
When it came time to demonstrate their work, the penetration team congratulated the Multics developers on generally solid security, but said they had found this one apparent failure, which the team member went on to show. The developers were aghast because they knew they had scrutinized the affected code carefully. Even when told the nature of the trapdoor that had been added, the developers could not find it. [KAR74, KAR02]
One of the easiest things we can do to enhance security is learn from our mistakes. As we design and build systems, we can document our decisions: not only what we decided to do and why, but also what we decided not to do and why. Then, after the system is up and running, we can use information about the failures (and how we found and fixed the underlying faults) to give us a better understanding of what leads to vulnerabilities and their exploitation.
From this information, we can build checklists and codify guidelines to help ourselves and others avoid repeating the same mistakes in future systems.
A security specialist wants to be certain that a given program computes a particular result, computes it correctly, and does nothing beyond what it is supposed to do. Unfortunately, results in computer science theory (see [PFL85] for a description) indicate that we cannot know with certainty, for an arbitrary program, that it behaves in a particular way; the general question is undecidable.
In spite of this disappointing general result, a technique called program verification can demonstrate formally the "correctness" of certain specific programs.
Proving program correctness, although desirable and useful, is hindered by several factors.
Correctness proofs depend on a programmer or logician to translate a program's statements into logical implications. Just as programming is prone to errors, so also is this translation.
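For example (a standard Hoare-style illustration, not taken from the text), even a single assignment statement must be translated into a precondition, a postcondition, and an implication for the prover to discharge:

    { x >= 0 }   y := x + 1   { y >= 1 }

    proof obligation:   x >= 0   implies   x + 1 >= 1

An error in this translation, proving an implication that does not actually correspond to the program statement, undermines the whole proof; that is exactly the risk noted above.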
Deriving the correctness proof from the initial assertions and the implications of statements is difficult, and the logical engine to generate proofs runs slowly. The speed of the engine degrades as the size of the program increases, so proofs of correctness are even less appropriate for large programs.
Sidebar 3-8 Formal Methods Can Catch Difficult-to-See Problems
The current state of program verification is less well developed than code production. As a result, correctness proofs have not been consistently and successfully applied to large production systems.
Program verification systems are being improved constantly. Larger programs are being verified in less time than before, but the technique remains practical mainly for small, critical portions of code.
None of the development controls described here can guarantee the security or quality of a system. As Brooks often points out [BRO87], the software development community seeks, but is not likely to find, a "silver bullet": a tool, technique, or method that will dramatically improve the quality of software developed. "There is no single development in either technology or management technique that by itself promises even one order-of-magnitude improvement in productivity, in reliability, in simplicity." He bases this conjecture on the fact that software is complex, it must conform to the infinite variety of human requirements, and it is abstract or invisible, leading to its being hard to draw or envision. While software development technologies (design tools, process improvement models, development methodologies) help the process, software development is inherently complicated and prone to error.
In the early 1970s Paul Karger and Roger Schell led a team to evaluate the security of the Multics system for the U.S. Air Force. They republished their original report [KAR74] thirty years later with a thoughtful analysis of how the security of Multics compares to the security of current systems [KAR02]. Among their observations were that buffer overflows were almost impossible in Multics because of support from the programming language, and security was easier to ensure because of the simplicity and structure of the Multics design. Karger and Schell argue that we can and have designed and implemented systems with both functionality and security.
Development controls are usually applied to large development projects in a variety of software production environments. However, not every system is developed in the ways we described above; sometimes projects are too small or too resource constrained to justify the extra resources needed for reviews and configuration control.
We examine operating systems in some detail in Chapters 4 and 5, in which we see what security features they provide for their users. In this chapter, we outline how an operating system can protect against some of the design and implementation flaws we have discussed here.
We say that software is trusted software if we know that the code has been rigorously developed and analyzed, giving us reason to trust that the code does what it is expected to do and nothing more. Typically, trusted code can be a foundation on which other, untrusted, code runs. That is, the untrusted system's quality depends, in part, on the trusted code; the trusted code establishes the baseline for security of the overall system. In particular, an operating system can be trusted software when there is a basis for trusting that it correctly controls the accesses of components or systems run from it. For example, the operating system might be expected to limit users' accesses to certain files.
To trust any program, we base our trust on rigorous analysis and testing, looking for certain key characteristics:
Functional correctness: The program does what it is supposed to, and it works correctly.
Enforcement of integrity: Even if presented with erroneous commands or commands from unauthorized users, the program maintains the correctness of the data with which it has contact.
Limited privilege: The program is allowed to access secure data, but the access is minimized, and neither the access rights nor the data are passed along to other untrusted programs or back to an untrusted caller.
Appropriate confidence level: The program has been examined and rated at a degree of trust appropriate for the kind of data and environment in which it is to be used.
Trusted software is often used as a safe way for general users to access sensitive data. Trusted programs are used to perform limited (safe) operations for users without allowing the users to have direct access to sensitive data.
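A minimal POSIX sketch of the limited-privilege idea (the file name is hypothetical): a trusted program performs its one privileged operation and then permanently drops its extra rights before doing anything else on the user's behalf.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* The one privileged step: open a protected file while we still
         * hold elevated rights (for example, when running setuid). */
        FILE *f = fopen("/etc/sensitive.conf", "r");

        /* Drop privileges permanently; refuse to continue if that fails. */
        if (setuid(getuid()) != 0) {
            perror("setuid");
            return 1;
        }

        /* From here on, the program runs with only the user's own rights,
         * and neither the access rights nor the raw file is passed along. */
        if (f != NULL) {
            /* ...return only the limited result the user is entitled to... */
            fclose(f);
        }
        return 0;
    }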
Programs are not always trustworthy. Even with an operating system to enforce access limitations, it may be impossible or infeasible to bound the access privileges of an untested program effectively. In this case, the user U is legitimately suspicious of a new program P. However, program P may be invoked by another program, Q. There is no way for Q to know that P is correct or proper, any more than a user knows that of P.
Therefore, we use the concept of mutual suspicion to describe the relationship between two programs. Mutually suspicious programs operate as if other routines in the system were malicious or incorrect: a calling program cannot trust its called subprocedures to be correct, and a called subprocedure cannot trust its calling program to be correct. Each protects its interface data so that the other has only limited access.
Confinement is a technique used by an operating system on a suspected program. A confined program is strictly limited in what system resources it can access. If a program is not trustworthy, the data it can access are strictly limited, so that a failure or an attack affects only the confined program's own compartment.
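A crude confinement sketch for POSIX/Linux systems (the program name is hypothetical, and RLIMIT_NPROC is a Linux/BSD extension): before handing control to a suspect program, cap the resources it may consume so that even misbehavior is bounded.

    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void) {
        struct rlimit lim;

        lim.rlim_cur = lim.rlim_max = 5;          /* at most 5 CPU seconds */
        setrlimit(RLIMIT_CPU, &lim);

        lim.rlim_cur = lim.rlim_max = 1 << 20;    /* files it writes: 1 MB cap */
        setrlimit(RLIMIT_FSIZE, &lim);

        lim.rlim_cur = lim.rlim_max = 0;          /* it may spawn no processes */
        setrlimit(RLIMIT_NPROC, &lim);

        execl("./untrusted", "untrusted", (char *)NULL);  /* run the suspect code */
        perror("execl");                          /* reached only if exec fails */
        return 1;
    }

Real confinement mechanisms (compartments, chroot jails, mandatory access controls) are stronger, but the principle is the same: the untrusted code can touch only what the operating system lets it touch.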
An access or audit log is a listing of who accessed which computer objects, when, and for what amount of time. Commonly applied to files and programs, this technique is less a means of protection than an after-the-fact means of tracking down what has been done.
Typically, an access log is a protected file or a dedicated output device (such as a printer) to which a log of activities is written. The logged activities can be such things as logins and logouts, accesses or attempted accesses to files or directories, execution of programs, and uses of other devices.
Failures are also logged. It may be less important to record that a particular user listed the contents of a permitted directory than that the same user tried to but was prevented from listing the contents of a protected directory. One failed login may result from a typing error, but a series of failures in a short time from the same device may result from the attempt of an intruder to break into the system.
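A minimal sketch of such a log writer (the file name and fields are hypothetical): it records who did what to which object, when, and, importantly, whether the attempt succeeded.

    #include <stdio.h>
    #include <time.h>

    static void audit(const char *user, const char *action,
                      const char *object, int succeeded) {
        FILE *log = fopen("audit.log", "a");   /* in practice, a protected file */
        if (log == NULL) return;

        char stamp[32];
        time_t now = time(NULL);
        strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", localtime(&now));

        fprintf(log, "%s user=%s action=%s object=%s result=%s\n",
                stamp, user, action, object, succeeded ? "ok" : "DENIED");
        fclose(log);
    }

    int main(void) {
        audit("alice",   "list", "/docs/public",    1);
        audit("mallory", "list", "/docs/protected", 0);  /* failures matter most */
        return 0;
    }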
Unusual events in the audit log should be scrutinized. For example, a new program might be tested in a dedicated, controlled environment. After the program has been tested, an audit log of all files accessed should be scanned to determine if there are any unexpected file accesses, the presence of which could point to a Trojan horse in the new program. We examine these two important aspects of operating system control in more detail in the next two chapters.
Not all controls can be imposed automatically by the computing system. Sometimes controls are applied instead by the declaration that certain practices will be followed. These controls, encouraged by managers and administrators, are called administrative controls. We look at them briefly here and in more depth in Chapter 8.
No software development organization worth its salt allows its developers to produce code at any time in any manner. The good software development practices described earlier in this chapter have all been validated by years of practice, so organizations prudently establish standards for how their programs are developed.
We can exercise some degree of administrative control over software development by considering several kinds of standards or guidelines.
standards of design, including using specified design tools, languages, or methodologies, using design diversity, and devising strategies for error handling and fault tolerance
standards of documentation, language, and coding style, including layout of code on the page, choices of names of variables, and use of recognized program structures
standards of programming, including mandatory peer reviews, periodic code audits for correctness, and compliance with standards
standards of testing, such as using program verification techniques, archiving test results for future reference, using independent testers, evaluating test thoroughness, and encouraging test diversity
standards of configuration management, to control access to and changes of stable or completed program units
Administrative controls can also separate duties: banks often break tasks into two or more pieces to be performed by separate employees, so that no single person can subvert a process undetected.
This section has explored how to control for faults during the program development process. Some controls apply to how a program is developed, and others establish restrictions on the program's use. The best is a combination, the classic layered defense.
Is one control essential? Can one control be skipped if another is used? There are no definitive answers; the right mix of controls depends on each system's risks and resources.
But creative humans can learn from their mistakes and shape their creations to account for fundamental principles. Just as a great painter will achieve harmony and balance in a painting, a good software developer who truly understands security will incorporate security into all phases of development. Thus, even if you never become a security professional, this exposure to the needs and shortcomings of security will influence many of your future actions. Unfortunately, many developers do not have the opportunity to become sensitive to security issues, which probably accounts for many of the unintentional security faults in today's programs.