1. Code Review (Tools)

Artifact: Code
Example of risks found: Buffer overflow on line 42

All software projects produce at least one artifact: code. This fact moves code review to the number one slot on our list. At the code level, the focus is on implementation bugs, especially those that can be discovered by static analysis tools that scan source code for common vulnerabilities. A taxonomy of these bugs can be found in Chapter 12. Several tool vendors now address this space. Code review is a necessary but not sufficient practice for achieving secure software. Security bugs (especially in C and C++) are a real problem, but architectural flaws are just as big a problem. In Chapter 4 you'll learn how to review code with static analysis tools. Code review on its own is an extremely useful activity, but because this kind of review can identify only bugs, the best a code review can uncover is around 50% of the security problems. Architectural problems are very difficult (and mostly impossible) to find by staring at code. This is especially true for modern systems made of hundreds of thousands of lines of code. A comprehensive approach to software security involves holistically combining both code review and architectural analysis.

2. Architectural Risk Analysis

Artifact: Design and specification
Examples of risks found: Poor compartmentalization and protection of critical data; failure of a Web Service to authenticate calling code and its user and to make access control decisions based on proper context

At the design and architecture level, a system must be coherent and present a unified security front. Designers, architects, and analysts should clearly document assumptions and identify possible attacks. At both the specification-based architecture stage and the class-hierarchy design stage, architectural risk analysis is a necessity. At this point, security analysts uncover and rank architectural flaws so that mitigation can begin.
Disregarding risk analysis at this level will lead to costly problems down the road. Note that risks crop up during all stages of the software lifecycle, so a constant risk management thread, with recurring risk-tracking and monitoring activities, is highly recommended. Chapter 2 describes the RMF process and how to apply it. Chapter 5 teaches about architectural risk analysis and will help you ferret out flaws in software architecture.

3. Penetration Testing

Artifact: System in its environment
Example of risks found: Poor handling of program state in Web interface

Penetration testing is extremely useful, especially if an architectural risk analysis informs the tests. The advantage of penetration testing is that it gives a good understanding of fielded software in its real environment. However, any such testing that doesn't take the software architecture into account probably won't uncover anything interesting about software risk. Software that fails during the kind of canned black box testing practiced by prefab application security testing tools is truly bad. Thus, passing a low-octane penetration test reveals little about your actual security posture, but failing a canned penetration test indicates that you're in very deep trouble indeed (see Chapter 1). One pitfall with penetration testing involves who does it. Be very wary of "reformed hackers" whose only claim to being reformed is some kind of self-description.[2] Also be aware that network penetration tests are not the same as application or software-facing penetration tests. If you want to do penetration testing properly, see Chapter 6.
4. Risk-Based Security Testing

Artifact: Units and system
Example of risks found: Extent of data leakage possible by leveraging data protection risk

Security testing must encompass two strategies: (1) testing of security functionality with standard functional testing techniques and (2) risk-based security testing based on attack patterns, risk analysis results, and abuse cases. A good security test plan embraces both strategies. Security problems aren't always apparent, even when you probe a system directly, so standard-issue quality assurance is unlikely to uncover all critical security issues. QA is about making sure good things happen. Security testing is about making sure bad things don't happen. Thinking like an attacker is essential. Guiding security testing with knowledge of software architecture, common attacks, and the attacker's mindset is thus extremely important. Chapter 7 shows you how to carry out security testing given some insight into the system's construction.

5. Abuse Cases

Artifact: Requirements and use cases
Example of risks found: Susceptibility to well-known tampering attack

Building abuse cases is a great way to get into the mind of the attacker. Similar to use cases, abuse cases describe the system's behavior under attack; building abuse cases requires explicit coverage of what should be protected, from whom, and for how long. Underused but important, abuse and misuse cases are the subject of Chapter 8. Practitioners wondering how abuse cases might work for them will get lots of mileage out of that chapter.

6. Security Requirements

Artifact: Requirements
Example of risks found: No explicit description of data protection needs

Security must be explicitly worked into the requirements level. Good security requirements cover both overt functional security (say, the use of applied cryptography) and emergent characteristics (best captured by abuse cases and attack patterns).
The art of identifying and maintaining security requirements is a complex undertaking that deserves broad treatment. Interested readers are encouraged to check out the references in the Security Requirements box on the next page for pointers. A brief treatment of the subject is spread throughout Chapters 7 and 8.

7. Security Operations

Artifact: Fielded system
Example of risks found: Insufficient logging to prosecute a known attacker

Software security can benefit greatly from network security. Well-integrated security operations allow and encourage network security professionals to get involved in applying the touchpoints, providing experience and security wisdom that might otherwise be missing from the development team. Battle-scarred operations people carefully set up and monitor fielded systems during use to enhance the security posture. Attacks do happen, regardless of the strength of design and implementation, so understanding the software behavior that leads to successful attack is an essential defensive technique. Knowledge gained by understanding attacks and exploits should be cycled back into software development.

*. External Analysis

This is not really a touchpoint, but it's important enough to emphasize, so I've put it in the touchpoints picture anyway. External analysis (i.e., analysis by somebody outside the design team) is often a necessity when it comes to security. All software security touchpoints are best applied by people not involved in the original design and implementation of the system. Every programmer has been stuck for hours working on a bug, only to have a buddy (coming to drag you off for pizza) show up and point out the error: "How come you did that?!" This always warrants a huge groan. Argh! This phenomenon can happen in all stages of the software lifecycle, which is one reason why external analysis is a necessity.
Why Only Seven?

Some approaches to software security are way too bulky for most organizations to swallow. By limiting the touchpoints to seven best practices, I hope to make effective best practices easier to adopt while still making a huge impact on software security. The touchpoints are not only amenable to whatever process you already follow to make software (you do ship software already, right?) but also lightweight and easy to use. If you apply the seven terrific touchpoints outlined here, your software will be much more secure.