

Risk Analysis: A Tool for Testing

Risk analysis is a part of planning any development effort. It can also be critical in determining what to test in development and how much. In this section we describe some basic concepts in risk analysis and then apply those concepts to testing. We also compare risk-based testing with basing test case selection on how frequently the functionality is used.

Risks

In general, a risk is anything that threatens the successful achievement of a project's goals. Specifically, a risk is an event that has some probability of happening and that, if it occurs, will cause some loss. The loss may be downtime, financial loss, or even injury, depending on the type of system. Every project has a set of risks, some rated higher than others. This ordering takes into account both the likelihood that the loss will occur and how serious its impact will be. In the context of risk-based testing, a fundamental principle is to test most heavily those portions of the system that pose the highest risk to the project, so that the most harmful faults are identified.

Risks are divided into three general types: project, business, and technical risks.

Project risks include managerial and environmental risks (such as an insufficient supply of qualified personnel) that cannot directly be affected by the testing process.

Business risks are associated with domain-related concepts. For example, changes in IRS reporting regulations would be a risk to the stability of the requirements for an accounting system because the system's functionality must be altered to comply with new regulations. This type of risk is related to the functionality of the program and therefore to system-level testing. When a system under test addresses a volatile domain, the system test suite should investigate the extensibility and modifiability attributes of the system's architecture.

Technical risks include some implementation concepts. For example, the quality of code generated by the compiler or the stability of software components is a technical risk. This type of risk is related to the implementation of the program and hence is associated primarily with testing at the code level.

Risk Analysis

Risk analysis is a procedure for identifying risks and for identifying ways to prevent potential problems from becoming real. The output of risk analysis is a list of identified risks, ordered by level of risk, that can be used to allocate limited resources and to prioritize decisions. The definition of risk varies from one project to another and even over time within the same project because priorities and development strategies change. Typical risks on object-oriented projects relate to architectural features, areas of complex interaction among objects, complex behaviors associated with a class specification, and changing or evolving project requirements. A class being developed for inclusion in a library needs much more testing than one that is being developed for use in a prototype. Other definitions of risk might be the complexity of the class as measured by the size of its specification, or the number of relationships it has with other classes.

Sources of Risk

For system testing, the various uses of the system are prioritized based on the importance to the user and the proper operation of the system. Risk may also be evaluated based on the complexities of the concepts that must be implemented in different subsystems, the volatility of the requirements in a particular subsystem, or the maturity of domain knowledge within a particular subsystem.

Risks are also associated with the programming language and development tools used to implement the software. Programming languages permit certain classes of errors and inhibit others. For example, the strong typing in C++ and Java ensures that every message sent (member function called) in a program execution can be understood by its receiver. By contrast, the lack of strong typing in Smalltalk means "message not understood" exceptions can occur during program execution. Strong typing can make identifying test cases much easier because some kinds of inputs are eliminated as possibilities by the programming language itself.
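This contrast can be seen in a dynamically typed language such as Python, which behaves like Smalltalk in this respect: an ill-typed call surfaces only at run time, so the test suite must cover inputs that a C++ or Java compiler would reject outright. The `Account` class below is a hypothetical illustration, not an example from the text.

```python
class Account:
    """Hypothetical class that understands only the deposit 'message'."""

    def deposit(self, amount):
        self.balance = getattr(self, "balance", 0) + amount
        return self.balance


acct = Account()
acct.deposit(10)

# A call to a nonexistent method is Python's analog of Smalltalk's
# "message not understood" -- it is discovered only when executed.
try:
    acct.withdraw(5)
    understood = True
except AttributeError:
    understood = False

print(understood)  # False
```

A static type checker would flag `acct.withdraw(5)` before the program ever ran, which is exactly the class of test input that strong typing eliminates.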

Conducting the Analysis

Our approach to risk analysis identifies the risk that each use case poses to the successful completion of the project. Other definitions are possible for risk, but this definition fits our purpose of planning a testing effort.

The risk analysis technique includes three tasks:

  1. Identify the risk(s) each use case poses to the development effort.

  2. Quantify the risk.

  3. Produce a ranked list of use cases.
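The three tasks above can be sketched in code. This is a minimal illustration on an ordinal scale; the use-case names and ratings are invented for the example, not taken from the text.

```python
# Map the ordinal risk scale to numbers so use cases can be compared.
RISK_LEVELS = {"low": 1, "medium": 2, "high": 3}

# Task 1: identify the risk each use case poses (hypothetical ratings).
use_case_risks = {
    "Modify name": "low",
    "Save record": "high",
    "Delete record": "medium",
}

# Task 2: quantify each risk on the ordinal scale.
quantified = {uc: RISK_LEVELS[r] for uc, r in use_case_risks.items()}

# Task 3: produce a ranked list of use cases, highest risk first.
ranked = sorted(quantified, key=quantified.get, reverse=True)
print(ranked)  # ['Save record', 'Delete record', 'Modify name']
```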

The use case writer can assign a risk rating to an individual use case by considering how the risks identified at the project level apply to that specific use case. For example, requirements that are rated most likely to change are high risks; requirements that are outside the expertise of the development team are even higher risks; and requirements that rely on new technology, such as hardware being developed in parallel with the software, are high risks as well. In fact, it is usually harder to find low-risk use cases than high-risk ones.

The exact set of values used in the ranking scale can vary from one project to another. It should have enough levels to separate the use cases into reasonably sized groups, but not so many that some categories have no members. We usually start with three rankings: low, medium, and high. In a project with 100 use cases, this might result in approximately 40 in the high category, which is probably more than we have time to give special attention. Adding a very high category and reclassifying the uses might result in 25 high and 15 very high use cases. Those 15 will receive the most intense examination.
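The reclassification step can be sketched as follows. The numeric risk scores, category thresholds, and attention budget below are all invented for the illustration.

```python
def categorize(score):
    """Map a hypothetical numeric risk score onto a three-level scale."""
    if score >= 70:
        return "high"
    if score >= 40:
        return "medium"
    return "low"


# Twenty hypothetical use cases with evenly spread risk scores.
scores = {f"UC{i}": s for i, s in enumerate(range(5, 105, 5))}

buckets = {}
for uc, s in scores.items():
    buckets.setdefault(categorize(s), []).append(uc)

# If "high" holds more cases than we can examine intensely (budget of 5
# here), split off a "very high" category for the topmost scores.
if len(buckets.get("high", [])) > 5:
    buckets["very high"] = [uc for uc in buckets["high"] if scores[uc] >= 90]
    buckets["high"] = [uc for uc in buckets["high"] if scores[uc] < 90]

print({k: len(v) for k, v in buckets.items()})
# {'low': 7, 'medium': 6, 'high': 4, 'very high': 3}
```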

The assigned risks result in an ordering of the use cases. The ordering is used for a couple of project activities. First, managers can use it to assign use cases to increments (not our problem!). Second, the ordering can be used to determine the amount of testing applied to each item. Risk-based testing is used when the risks are very high, such as in life-critical systems. In our examples in the text, we will consider both risk-based and use profile approaches to test case selection.

Let us consider two examples. First, we will apply risk analysis to the Brickles game. Because Brickles is a very simple system, we will then present a second, more illustrative example.

For a game such as Brickles, the biggest risks are things that affect the player's satisfaction. In Figure 3.4, the analysis information for the two basic use cases is summarized. The "winning the game" use case is rated as more critical than the "losing the game" use case. Imagine winning the game but the software refuses to acknowledge it! The frequency of winning is rated as lower than the frequency of losing. There are n! sequences in which the bricks can be broken, where n is the number of bricks in the pile. There are many more sequences when the variability of wall and ceiling bounces is included. There are (n-1)+(n-2)+…+2+1 ways to lose the game with a given puck, but there are many more possibilities when misses are considered. There are many more ways to lose than ways to win. Since winning and losing are accomplished by the same code, there is the same amount of risk in implementing each use case, so the risk is rated the same. If we combine the frequency and criticality values using the scheme shown in Technique Summary-Creating Test Cases from Use Cases, on page 127, the two uses are both rated as medium. The program should be tested with roughly the same number of winning results as losing results.
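The counts mentioned above can be checked for a small pile; the pile size n = 6 below is a hypothetical choice, and the computation ignores the bounce variability and misses discussed in the text.

```python
from math import factorial

n = 6  # hypothetical number of bricks in the pile

# Orders in which all n bricks can be broken (a winning sequence).
winning_sequences = factorial(n)

# (n-1) + (n-2) + ... + 2 + 1 ways to lose with a given puck,
# i.e. losing after breaking anywhere from 1 to n-1 bricks.
losing_ways = sum(range(1, n))  # equals n * (n - 1) / 2

print(winning_sequences, losing_ways)  # 720 15
```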

Figure 3.4. Two Brickles use cases


Consider another example for an application in which personnel records are being modified, saved, and possibly deleted. The use cases are summarized in Figure 3.5. The use cases address modifying a record by changing an employee's name, committing (saving) that update, and deleting a record. An analysis of the use cases identifies domain objects of name, personnel, and security.

Figure 3.5. Three use cases for a personnel management system


The risk information indicates that deleting a record is a high risk. Being able to save is highly critical. The usual approach is to schedule high-risk uses for early delivery because then those uses can take longer than estimated without delaying the completion of the project. The criticality and frequency of uses are combined to determine which should be tested more heavily. Obviously we would want to test most heavily those uses that are both the most critical and the most frequent. But sometimes a critical operation is not very frequent in comparison to other uses. For example, logging on to your Internet Service Provider is critical, but it is done only once per session, whereas you might check e-mail many times during a single login. So the values of the frequency and criticality attributes are combined to determine the relative amount of testing.

The technique for combining these values varies from one project to another, but there are a couple of general strategies. A conservative strategy combines the two values by selecting the higher of the two. For example, the "Modify name" use case would have a combined value of medium using a conservative strategy. Alternatively, an averaging strategy chooses a value between the two. In this case there is no such value unless we invent a new category, such as medium-high. This should be done only if a large number of cases fall into one cell and better discrimination is needed.
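Both strategies can be sketched as operations on the ordinal scale. The ratings passed in below are illustrative inputs, not the actual values from Figure 3.5; the averaging function's round-up tie-breaking is one invented convention for the case where no middle value exists.

```python
SCALE = ["low", "medium", "high"]  # ordered from least to most risky


def conservative(frequency, criticality):
    """Combine two ratings by taking the higher (riskier) of the two."""
    return SCALE[max(SCALE.index(frequency), SCALE.index(criticality))]


def averaging(frequency, criticality):
    """Choose a value between the two; adjacent ratings round up."""
    i = (SCALE.index(frequency) + SCALE.index(criticality) + 1) // 2
    return SCALE[i]


print(conservative("medium", "low"))  # medium
print(conservative("high", "low"))    # high
print(averaging("high", "low"))       # medium
```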

By applying the selected strategy, you can make an ordered list of uses. For the three uses noted, using a conservative strategy, the list in order of increasing rank is Modify name, Delete record, and Save record. Thus, Save record would be tested more heavily than Delete record, which in turn would be tested more heavily than Modify name. Exactly how many test cases would be used is discussed later, when we consider techniques for selecting test cases.



A Practical Guide to Testing Object-Oriented Software
ISBN: 0201325640
Year: 2005
Pages: 126