All is not lost: security penetration testing can be used effectively. The best approach bases penetration testing activities on security findings discovered and tracked from the beginning of the software lifecycle: during requirements analysis, architectural risk analysis, and so on. To do this, a penetration test must be structured according to perceived risk and offer some kind of metric relating the security posture of the software at the time of the test to risk measurement. Results are less likely to be misconstrued and used to declare pretend security victory if they are related to business impact through proper risk management. (See Chapter 2, which describes a risk management framework amenable to feeding security testing.) Penetration testing is about testing a system in its final production environment. For this reason, penetration testing is best suited to probing configuration problems and other environmental factors that deeply impact software security. Driving tests that concentrate on these factors with some knowledge of risk analysis results is the most effective approach. Outside-in testing is great as long as it is not the only testing you do. The modern approach that I describe throughout the remainder of this chapter is much more closely aligned with risk-based security testing (see Chapter 7) than it is with application penetration testing as practiced by most consulting shops today. Be careful what you ask for!

Make Use of Tools

Tools (including the static analysis tools discussed in Chapter 4) should definitely be used in penetration testing. Tools are well suited to finding known security vulnerabilities with little effort. Static analysis tools can vet software code, either in source or binary form, in an attempt to identify common implementation-level bugs such as buffer overflows. Dynamic analysis tools can observe a system as it executes.
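To give a flavor of what implementation-level static analysis looks for, here is a deliberately simplified sketch in Python. Real static analysis tools build semantic models of the program; this toy merely pattern-matches C source text for library calls commonly associated with buffer overflows. The checker and its rule set are illustrative inventions, not any particular tool's behavior.

```python
import re

# Toy rule set: C functions frequently implicated in buffer overflows.
# Real tools do far more than pattern matching, but the spirit is similar.
RISKY_CALLS = {
    "gets":    "unbounded read into a fixed-size buffer",
    "strcpy":  "no length check on the destination buffer",
    "sprintf": "unbounded formatted write",
}

def scan_c_source(source: str):
    """Return (line_number, function, reason) for each risky call found."""
    findings = []
    pattern = re.compile(r"\b(%s)\s*\(" % "|".join(RISKY_CALLS))
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in pattern.finditer(line):
            func = match.group(1)
            findings.append((lineno, func, RISKY_CALLS[func]))
    return findings

example = """\
int main(int argc, char *argv[]) {
    char buf[16];
    gets(buf);
    strcpy(buf, argv[1]);
    return 0;
}
"""

for lineno, func, reason in scan_c_source(example):
    print(f"line {lineno}: {func}() - {reason}")
```

Even this naive approach finds real defects with essentially zero effort, which is why tool-driven review scales so well for the most basic bug classes.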
These tools can submit malformed, malicious, and random data to a system's entry points in an attempt to uncover faults, a process commonly referred to as fuzzing [Miller et al. 1995]. Faults are then reported to the tester for further analysis. When possible, use of these tools should be guided by risk analysis results and attack patterns. (See the following box, Tools for Penetration Testing.) Tool use carries two major benefits. First, when used effectively, tools can carry out a majority of the grunt work needed for basic software penetration testing (at the level of a fielded system). Of course, a tool-driven approach can't be used as a replacement for review by a skilled security analyst (especially since today's tools are by their nature not applicable at the design level), but a tool-based approach does help relieve the work burden of a reviewer and can thus drive down cost. Second, tool output lends itself readily to metrics. Software development teams can use these metrics to track progress over time as they move toward a security goal. Simple metrics in common use today do not offer a complete picture of the security posture of a system. Thus it is important to emphasize that a clean bill of health from an analysis tool does not mean that a system is defect free (recall the discussion of badness-ometers from Chapter 1). The value lies in relative comparison: If the current run of the tools reveals fewer defects than a previous run, progress has likely been made.
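The fuzzing idea can be sketched in a few lines. In this minimal illustration, `parse_record` is a hypothetical entry point under test (standing in for real parsing code); the fuzzer feeds it a handful of known-bad inputs followed by random byte strings, catches anything that faults, and reports the cases for the tester to analyze:

```python
import random

def parse_record(data: bytes):
    """Hypothetical entry point under test (stands in for real parsing code)."""
    text = data.decode("ascii")          # faults on non-ASCII input
    name, _, value = text.partition("=")
    return name, int(value)              # faults on non-numeric values

def fuzz(target, iterations=200, seed=1):
    """Throw malformed and random inputs at `target`; report raised faults."""
    rng = random.Random(seed)
    malformed = [b"", b"=", b"a=", b"\xff\xfe", b"a" * 10_000 + b"=1"]
    faults = []
    for i in range(iterations):
        case = (malformed[i] if i < len(malformed)
                else bytes(rng.randrange(256) for _ in range(rng.randrange(64))))
        try:
            target(case)
        except Exception as exc:          # record the fault for later analysis
            faults.append((case[:20], type(exc).__name__))
    return faults

for sample, fault in fuzz(parse_record)[:5]:
    print(fault, repr(sample))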
Test More Than Once

As it stands today, automated review is best suited to identifying the most basic of implementation defects. Human review is necessary to reveal flaws in the design or more complicated implementation-level vulnerabilities (of the sort that attackers can and will exploit). However, review by an expert is costly and, for reasons just described, can be ineffective if the "expert" is not. By leveraging the seven software security touchpoints described in this book, software penetration tests can be structured in such a way as to be cost effective and give a reasonable estimation of the security posture of the system. Penetration testing can benefit greatly from knowledge of the security risks built into a system. No design or implementation is perfect, and carrying risk is usually acceptable. Penetration testing can help you find out what this means to your fielded system. In fact, penetration testing in some sense collapses the "risk probability wave" into something much more tangible when testing clarifies ways that a risk can be exploited. That is, if you know what your likely risks are in the design, you can use penetration testing to figure out what impact this has on an actual fielded system. As noted earlier, static and dynamic analysis tools should be uniformly applied; this holds true at the subsystem level too. In most cases, no customization of basic static analysis tools is necessary for component-level tests. However, dynamic analysis tools will likely need to be written or modified for the target component. Such tools often involve data-driven tests that operate at the API level. Any tool should include data sets known to cause problems, such as long strings, strange encodings, and control characters [Hoglund and McGraw 2004]. Furthermore, the design of the tool should reflect the security test's goal: to misuse the component's assets, to violate intercomponent assumptions, or to probe risks. Customizations are almost always necessary.
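A data-driven component-level test of this kind might look like the following sketch. Here `set_display_name` is a hypothetical component API invented for illustration; the data sets follow the known-bad inputs named in the text (long strings, odd encodings, control characters), and the harness records how the component reacts to each:

```python
# Known-bad inputs of the kind cited in the text. The exact payloads are
# illustrative; a real harness would draw on much larger attack dictionaries.
PROBLEM_INPUTS = {
    "long string":        "A" * 100_000,
    "format string":      "%s%s%n",
    "control characters": "user\x00name\r\n",
    "odd encoding bytes": b"\xc0\xaf".decode("latin-1"),
}

def set_display_name(name: str) -> str:
    """Hypothetical component API under test: should reject hostile names."""
    if len(name) > 256 or any(ord(c) < 0x20 for c in name):
        raise ValueError("rejected")
    return name

def run_data_driven_tests(api, cases):
    """Drive the API with each known-bad input; record behavior per case."""
    results = {}
    for label, payload in cases.items():
        try:
            api(payload)
            results[label] = "accepted"   # possibly missing validation
        except ValueError:
            results[label] = "rejected"   # defensive behavior observed
        except Exception as exc:
            results[label] = f"fault: {type(exc).__name__}"
    return results

for label, outcome in run_data_driven_tests(set_display_name, PROBLEM_INPUTS).items():
    print(f"{label:20} -> {outcome}")
```

The "accepted" outcomes are the interesting ones: each marks an input the component swallowed silently, which the tester must then judge against the component's assets and the assumptions other components make about it.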
Penetration testing should focus at the system level and should be directed at properties of the integrated software system. For efficiency's sake, testing should be structured in such a way as to avoid repeating unit-level testing (as described in Chapter 7), and should therefore be focused on aspects of the system that could not be probed during unit testing. In order to be defined as penetration tests, system-level tests should analyze the system in its deployed environment. Such analysis may be targeted to ensure that suggested deployment practices are effective and reasonable, and that assumptions external to the system cannot be violated.
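One concrete way to check deployment assumptions from outside is to audit what the fielded system actually exposes. The sketch below checks observed HTTP response headers against an assumed hardening policy; the header names are standard HTTP, but the policy itself and the canned `observed` headers are illustrative assumptions, and in a real test the headers would come from probing the deployed system:

```python
# Assumed deployment policy (illustrative, not definitive): some headers
# must be present with given values; others leak version information.
EXPECTED_HEADERS = {
    "Strict-Transport-Security": None,    # must be present, any value
    "X-Content-Type-Options": "nosniff",  # must have this exact value
}
FORBIDDEN_HEADERS = {"Server", "X-Powered-By"}

def audit_response_headers(headers: dict) -> list:
    """Return findings where the deployment violates the assumed policy."""
    findings = []
    for name, required in EXPECTED_HEADERS.items():
        value = headers.get(name)
        if value is None:
            findings.append(f"missing header: {name}")
        elif required is not None and value.lower() != required:
            findings.append(f"unexpected {name}: {value!r}")
    for name in sorted(FORBIDDEN_HEADERS):
        if name in headers:
            findings.append(f"information leak: {name}: {headers[name]!r}")
    return findings

# Canned response headers standing in for a probe of the deployed system.
observed = {
    "X-Content-Type-Options": "nosniff",
    "Server": "Apache/2.4.41 (Ubuntu)",
}
for finding in audit_response_headers(observed):
    print(finding)
```

Checks like this are cheap to automate, and because they run against the deployed environment rather than the code, they catch exactly the class of configuration drift that unit-level testing cannot.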