Risk management has two distinct flavors in software security. I use the term risk analysis to refer to the activity of identifying and ranking risks at some particular stage in the software development lifecycle. Risk analysis is particularly popular when applied to architecture and design-level artifacts. I use the term risk management to describe the activity of performing a number of discrete risk analysis exercises, tracking risks throughout development, and strategically mitigating risks. Chapter 2 is about the latter.

A majority of risk analysis process descriptions emphasize that risk identification, ranking, and mitigation is a continuous process and not simply a single step to be completed at one stage of the development lifecycle. Risk analysis results and risk categories thus drive both into requirements (early in the lifecycle) and into testing (where risk results can be used to define and plan particular tests).

Risk analysis, being a specialized subject, is not always best performed solely by the design team without assistance from risk professionals outside the team. Rigorous risk analysis relies heavily on an understanding of business impact, which may require an understanding of laws and regulations as much as the business model supported by the software. Also, human nature dictates that developers and designers will have built up certain assumptions regarding their system and the risks that it faces. Risk and security specialists can at a minimum assist in challenging those assumptions against generally accepted best practices and are in a better position to "assume nothing." (For more on this, see Chapter 9.)

A prototypical risk analysis approach involves several major activities that often include a number of basic substeps.

- Learn as much as possible about the target of analysis.
  - Read and understand the specifications, architecture documents, and other design materials.
  - Discuss and brainstorm about the target with a group.
  - Determine system boundary and data sensitivity/criticality.
  - Play with the software (if it exists in executable form).
  - Study the code and other software artifacts (including the use of code analysis tools).
  - Identify threats and agree on relevant sources of attack (e.g., will insiders be considered?).
- Discuss security issues surrounding the software.
  - Argue about how the product works and determine areas of disagreement or ambiguity.
  - Identify possible vulnerabilities, sometimes making use of tools or lists of common vulnerabilities.
  - Map out exploits and begin to discuss possible fixes.
  - Gain understanding of current and planned security controls.[3]

    [3] Note that security controls can engender and introduce new security risks themselves (through bugs and flaws) even as they mitigate others.
- Determine probability of compromise.
- Perform impact analysis.
- Rank risks (a minimal exposure-based ranking sketch follows this list).
- Develop a mitigation strategy.
- Report findings.
  - Carefully describe the major and minor risks, with attention to impacts.
  - Provide basic information regarding where to spend limited mitigation resources.
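To make the probability, impact, and ranking steps above concrete, here is a minimal sketch that orders candidate risks by a simple exposure score (estimated probability of compromise multiplied by estimated business impact). The risk entries, numeric scales, and scoring rule are illustrative assumptions, not values or formulas prescribed by any of the methodologies discussed in this chapter.

```python
# Illustrative only: rank candidate risks by exposure = probability x impact.
# The risks, scales (0.0-1.0 probability, 1-10 impact), and scores below are
# made-up examples, not figures mandated by any particular methodology.

from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    probability: float  # estimated likelihood of compromise, 0.0-1.0
    impact: int         # estimated business impact, 1 (low) to 10 (severe)

    @property
    def exposure(self) -> float:
        return self.probability * self.impact


risks = [
    Risk("SQL injection in reporting module", probability=0.6, impact=9),
    Risk("Weak session timeout policy", probability=0.8, impact=4),
    Risk("Unencrypted backup tapes", probability=0.2, impact=10),
]

# Highest exposure first: this ordering is what drives mitigation priorities.
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"{r.name}: exposure {r.exposure:.1f}")
```

However the numbers are derived, the point of the ranking step is the ordering itself: it tells the team where limited mitigation resources should go first.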
A number of diverse approaches to risk analysis for security have been devised and practiced over the years. Though many of these approaches were expressly invented for use in the network security space, they still offer valuable risk analysis lessons. The box Risk Analysis in Practice lists a number of historical risk analysis approaches that are worth considering. My approach to architectural risk analysis fits nicely with the RMF described in Chapter 2. For purposes of completeness, a reintroduction to the RMF is included in the box Risk Analysis Fits in the RMF.

Risk Analysis in Practice

A number of methods calculate a nominal value for an information asset and attempt to determine risk as a function of loss and event probability. Others rely on checklists of threats and vulnerabilities to determine a basic risk measurement. Examples of risk analysis methodologies for software fall into two basic categories: commercial and standards-based.

Commercial

- STRIDE from Microsoft <http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vbconOverviewOfWebApplicationSecurityThreats.asp> (also see [Howard and LeBlanc 2003])
- Security Risk Management Guide, also from Microsoft <http://www.microsoft.com/technet/security/topics/policiesandprocedures/secrisk/default.mspx>
- ACSM/SAR (Adaptive Countermeasure Selection Mechanism/Security Adequacy Review) from Sun (see [Graff and van Wyk 2003] for public discussion)
- Cigital's architectural risk analysis process (described later in this chapter), which is designed to fit into the RMF (see Chapter 2)

Standards-Based

- ASSET (Automated Security Self-Evaluation Tool) from the National Institute of Standards and Technology (NIST) <http://csrc.nist.gov/asset/>
- OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) from SEI <http://www.sei.cmu.edu/publications/documents/99.reports/99tr017/99tr017abstract.html>
- COBIT (Control Objectives for Information and Related Technology) from the Information Systems Audit and Control Association (ISACA) <http://www.isaca.org/Template.cfm?Section=COBIT_Online&Template=/ContentManagement/ContentDisplay.cfm&ContentID=15633>
Risk Analysis Fits in the RMF

Architectural risk analysis fits within a continuous risk management framework (RMF) just as the other touchpoint best practices do. The continuous risk management process we use at Cigital loops constantly and at many levels of description through several stages (Figure 5-1). A simplified version of the RMF shown here is described in gory detail in Chapter 2. In this approach, business goals determine risks, risks drive methods, methods yield measurement, measurement drives decision support, and decision support drives fix/rework and application quality.

Figure 5-1. Cigital's risk management framework typifies the fractal and continuous nature of risk analysis processes. Many aspects of frameworks like these can be automated; for example, risk storage, business risk to technical risk mapping, and display of status over time.

During the process of architectural risk analysis, we follow basic steps very similar to those making up the RMF. The RMF shown in Figure 5-1 has a clear loop, called the validation loop. This loop is meant to graphically represent the idea that risk management is a continuous process. That is, identifying risks only once in a project is insufficient. The idea of "crossing off a particular stage" once it has been executed and never doing those activities again is incorrect. Though the seven stages are shown in a particular serial order in Figure 5-1, they may need to be applied over and over again throughout a software development effort, and their particular ordering may be interleaved in many different ways.

Risk management is in some sense fractal. In other words, the entire continuous, ongoing process can be applied at several different levels. The primary level is the project level. Each stage of the validation loop clearly must have some representation during a complete development effort in order for risk management to be effective. Another level is the software lifecycle artifact level. The validation loop will most likely have a representation given requirements, design, architecture, test plans, and so on. The validation loop will have a representation during both requirements analysis and use case analysis, for example. Fortunately, a generic description of the validation loop as a serial looping process is sufficient to capture critical aspects at all of these levels at once. (See Chapter 2.)
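The automatable aspects mentioned in the figure caption (risk storage, mapping business risks to technical risks, and tracking status over time) can be sketched as a minimal risk register. This is a hypothetical illustration under stated assumptions, not Cigital's tooling or any prescribed part of the RMF; the class names, fields, and status values are mine.

```python
# A minimal, hypothetical risk register illustrating the automatable parts of an
# RMF-style process: storing risks, mapping business risks to technical risks,
# and recording status over time. Names, fields, and statuses are assumptions.

from dataclasses import dataclass, field
from datetime import date


@dataclass
class TechnicalRisk:
    description: str
    status: str = "open"                           # e.g., open, mitigated, accepted
    history: list = field(default_factory=list)    # (date, status) entries over time

    def update_status(self, new_status: str, when: date) -> None:
        self.history.append((when, new_status))
        self.status = new_status


@dataclass
class BusinessRisk:
    description: str
    technical_risks: list = field(default_factory=list)


register = [
    BusinessRisk(
        "Loss of customer trust after a data breach",
        technical_risks=[
            TechnicalRisk("Customer PII stored without encryption at rest"),
            TechnicalRisk("No audit logging on the account-export API"),
        ],
    ),
]

# Record a status change so the history reflects progress over time.
register[0].technical_risks[0].update_status("mitigated", date(2006, 3, 1))

# Display current status for each business risk and its mapped technical risks.
for br in register:
    print(br.description)
    for tr in br.technical_risks:
        print(f"  [{tr.status}] {tr.description} ({len(tr.history)} status change(s))")
```

Even a simple structure like this makes the business-to-technical mapping explicit and gives the validation loop something concrete to revisit on each pass.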