Chapter 7. Risk-Based Security Testing[1]
Security testing has recently moved beyond the realm of network port scanning to include probing software behavior as a critical aspect of system behavior (see the box From Outside→In to Inside→Out on page 189). Unfortunately, testing software security is a commonly misunderstood task. Security testing done properly goes much deeper than simple black box probing on the presentation layer (the sort performed by so-called application security tools, which I rant about in Chapter 1) and even beyond the functional testing of security apparatus. Testers must carry out a risk-based approach, grounded in both the system's architectural reality and the attacker's mindset, to gauge software security adequately. By identifying risks in the system and creating tests driven by those risks, a software security tester can properly focus on areas of code where an attack is likely to succeed. This approach provides a higher level of software security assurance than is possible with classical black box testing.

Security testing has much in common with (the new approach to) penetration testing as covered in Chapter 6. The main difference between security testing and penetration testing is the level of approach and the timing of the testing itself. Penetration testing is by definition an activity that happens once software is complete and installed in its operational environment. Also, by its nature, penetration testing is focused outside→in and is somewhat cursory. By contrast, security testing can be applied before the software is complete, at the unit level, in a testing environment with stubs and pre-integration.[2]

This distinction is similar to the slippery distinction between unit testing and system testing. Security testing should start at the feature or component/unit level, prior to system integration. Risk analysis carried out during the design phase (see Chapter 5) should identify and rank risks and discuss intercomponent assumptions.
At the component level, risks to the component's assets must be mitigated within the bounds of contextual assumptions. Tests should be structured in such a way as to attempt both unauthorized misuse of and access to target assets as well as violations of the assumptions the system writ large may be making relative to its components. A security fault may well surface in the complete system if tests like these are not devised and executed. Security unit testing carries the benefit of breaking system security down into a number of discrete parts. Theoretically, if each component is implemented safely and fulfills intercomponent design criteria, the greater system should be in reasonable shape (though this problem is much harder than it may seem at first blush [Anderson 2001]).[3] By identifying and leveraging security goals during unit testing, the security posture of the entire system can be significantly improved.
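To make the idea concrete, here is a minimal sketch of an adversarial security unit test in Python. The `CredentialStore` component, its token scheme, and all names are hypothetical illustrations, not anything from the text: the point is that the test deliberately attempts unauthorized access to the component's asset and checks that the contextual assumption (callers hold a valid session token) actually holds at the unit level.

```python
class CredentialStore:
    """Hypothetical component under test (illustrative only)."""

    def __init__(self):
        self._secrets = {"db_password": "s3cret"}
        self._valid_tokens = {"trusted-session-token"}

    def read_secret(self, name, session_token):
        # Contextual assumption: only holders of a valid token may read.
        if session_token not in self._valid_tokens:
            raise PermissionError("unauthorized caller")
        return self._secrets[name]


def test_forged_token_is_rejected():
    """Adversarial unit test: attempt unauthorized access to the asset."""
    store = CredentialStore()
    try:
        store.read_secret("db_password", session_token="guessed-token")
    except PermissionError:
        return True   # the assumption held at the unit level
    return False      # security fault: asset exposed to an untrusted caller


assert test_forged_token_is_rejected()
```

Note that the test is driven by a risk ("an attacker forges or guesses a token") rather than by a functional requirement; a purely functional test suite would happily stop at verifying that valid tokens work.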
Security testing should continue at the system level and should be directed at properties of the integrated software system. This is precisely where penetration testing meets security testing, in fact. Assuming that unit testing has successfully achieved its goals, system-level testing should shift the focus toward identifying intercomponent failures and assessing security risk inherent at the design level. If, for example, a component assumes that only another trusted component has access to its assets, a test should be structured to attempt direct access to that component from elsewhere. A successful test can undermine the assumptions of the system and would likely result in a direct, observable security compromise. Data flow diagrams, models, and intercomponent documentation created during the risk analysis stage can be a great help in identifying where component seams exist. Finally, abuse cases developed earlier in the lifecycle (see Chapter 8) should be used to enhance a test plan with adversarial tests based on plausible abuse scenarios. Security testing involves as much black hat thinking as white hat thinking.
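The trusted-component example above can be sketched as a system-level test. Everything here is an illustrative assumption: a hypothetical `PaymentLedger` that is designed to trust only the `Checkout` component, and an adversarial test that bypasses `Checkout` and probes the component seam directly, just as an attacker would.

```python
class PaymentLedger:
    """Hypothetical back-end component (illustrative only)."""

    def __init__(self):
        self.entries = []
        self._trusted_ids = set()

    def authorize(self, component):
        self._trusted_ids.add(id(component))

    def record(self, caller, amount):
        # Design assumption: only authorized components reach this seam.
        if id(caller) not in self._trusted_ids:
            raise PermissionError("untrusted caller at component seam")
        self.entries.append(amount)


class Checkout:
    """The trusted front-end component in this hypothetical design."""

    def __init__(self, ledger):
        self.ledger = ledger
        ledger.authorize(self)

    def complete_sale(self, amount):
        self.ledger.record(self, amount)


def test_direct_access_from_untrusted_caller():
    """Bypass Checkout and hit the ledger directly, as an attacker might."""
    ledger = PaymentLedger()
    Checkout(ledger)   # normal integration wiring
    rogue = object()   # anything outside the trusted set
    try:
        ledger.record(rogue, 0)
    except PermissionError:
        return True    # the intercomponent assumption held
    return False       # observable compromise: the seam is exposed


assert test_direct_access_from_untrusted_caller()
```

The data flow diagrams and intercomponent documentation from risk analysis tell the tester exactly where seams like `PaymentLedger.record()` live; the abuse cases tell the tester which rogue callers are plausible.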