The Basics of Guided Inspection


The guided inspection technique provides a means of objectively and systematically searching a work product for faults by using explicit test cases. This testing perspective means that reviews are treated as test sessions. The basic testing steps are as follows:

  1. Define the test space.

  2. Select values from the test space using a specific strategy.

  3. Apply the test values to the product being tested.

  4. Evaluate the results and the percentage of the model covered by the tests (based on some criteria).

These steps are specialized to the following steps (we will elaborate on each of these in this chapter):

  1. Specify the scope and depth of the inspection. The scope will be defined by describing a body of material or a specific set of use cases. For small projects, the scope may always be the entire model. The depth will be defined by describing the level of detail to be covered. It may also be defined by specifying the levels in aggregation hierarchies on certain UML diagrams in the model under test (MUT).

  2. Identify the basis from which the MUT was created. The basis for all but the initial model is the set of models from the previous development phase. For example, the application analysis model is based on the domain analysis model and the use case model. Initial models are based on the knowledge in the heads of select groups of people.

  3. Develop test cases for each of the evaluation criteria to be applied using the contents of the basis model as input (see Selecting Test Cases for the Inspection on page 123). The scenarios from the use case model are a good starting point for test cases for many models.

  4. Establish criteria for measuring test coverage. For example, a class diagram might be well covered if every class is touched by some test case.

  5. Perform the static analysis using the appropriate checklist. The MUT is compared to the basis model to determine consistency between the two models.

  6. "Execute" the test cases. We will describe the actual test session in detail later in this chapter.

  7. Evaluate the effectiveness of the tests using the coverage measurement. Calculate the coverage percentage. For example, if 12 of the classes in a class diagram containing 16 classes have been "touched" by the test cases, the test coverage is 75%. The testing of analysis or design models is so high-level that 100% coverage is necessary to achieve good results. (A sketch of this bookkeeping follows the list.)

  8. If the coverage is insufficient, expand the test suite and apply the additional tests; otherwise, terminate the testing. Usually the additional test cases cannot be written during the inspection session. The testers identify where the coverage is lacking and work with a developer to identify potential test cases that would touch the uncovered model elements. The tester then creates the full test cases, and another inspection session is held.
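
To make the coverage bookkeeping in steps 7 and 8 concrete, here is a minimal sketch in Java. The class and method names are our own invention for illustration; guided inspection prescribes the calculation, not any particular tool. The sketch records which classes in the class diagram each test case touches, computes the coverage percentage, and lists the uncovered classes that step 8 would target with additional test cases.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Hypothetical bookkeeping for guided-inspection coverage.
    public class ModelCoverage {
        private final Set<String> allClasses;            // every class in the class diagram
        private final Set<String> touched = new HashSet<>();

        public ModelCoverage(Set<String> allClasses) {
            this.allClasses = allClasses;
        }

        // Record the classes a single test case touched during "execution".
        public void recordTestCase(List<String> classesUsed) {
            touched.addAll(classesUsed);
        }

        // Coverage is the percentage of diagram classes touched by some test case.
        public double coveragePercent() {
            return 100.0 * touched.size() / allClasses.size();
        }

        // Step 8: the uncovered elements that need additional test cases.
        public Set<String> uncovered() {
            Set<String> missing = new HashSet<>(allClasses);
            missing.removeAll(touched);
            return missing;
        }
    }

With 12 of 16 classes touched, coveragePercent() returns 75.0, and uncovered() names the four classes for which additional test cases must be written.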

Coverage in Models

In the UML models we use, the model elements are the usual object-oriented concepts: classes, relationships, objects, and messages. A test case "covers" one of these elements if it uses that element during its execution. Of course, a single test case that uses a particular element probably does not exhaust all possible values of the attributes of that element. For example, using an object from a class to receive a single message does not exercise the other methods of that class.

As we move deeper into the development life cycle, the detail of the model increases and the detail at which coverage matters increases as well. For a domain analysis model, simply creating a single object from a class will be sufficient to consider that we have covered the class. Coverage for this level of model can be stated as a percentage of classes and relationships covered. At the design level, we would typically like to use every method in an interface before saying that a class is covered. Coverage for this level is more likely to be stated by counting all of the methods in the model rather than all of the classes.

The more abstract the classes, the higher the level of coverage that should be required. To omit a single abstract class from the coverage in testing overlooks the defects that could potentially be found in all of the concrete classes that eventually are derived from the abstract class. When testing at a concrete-class level, omitting a class during testing only overlooks the defects in that one class.

The higher the level of abstraction of the model, the higher the level of coverage that is required.

Reviews usually involve a discussion of the role of each piece of a model from a high level. The relationships between pieces are also explained in terms of the specified interfaces at the formal parameter level. The test cases created using this technique allow these same pieces and relationships to be examined at a much more concrete level that assigns specific values to the attributes of the objects. The test cases should be written at a level that is sufficiently specific to support tracing exact paths of execution through the logic of the algorithms, but not so specific that the code must be written first.
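
To illustrate that level of specificity, a guided-inspection test case can be captured as a simple record: a scenario, concrete attribute values for the participating objects, and an expected result stated up front. The format below is hypothetical (the field names and the banking example are ours), but it shows values concrete enough to trace a path through the model without requiring code.

    // Hypothetical record format for a guided-inspection test case.
    // The expected result is written down before the inspection session
    // so the evaluation is not biased by what the model actually produces.
    public record InspectionTestCase(
            String scenario,                                // e.g., a use case scenario
            java.util.Map<String, String> attributeValues,  // concrete object state
            String expectedResult) {

        public static InspectionTestCase example() {
            return new InspectionTestCase(
                    "Customer withdraws cash from a checking account",
                    java.util.Map.of("Account.balance", "250.00",
                                     "Withdrawal.amount", "100.00"),
                    "Account.balance becomes 150.00 and a receipt is issued");
        }
    }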

Note

Should test cases be available to developers prior to the inspection session?

There has to be a balance between allowing developers to program to the tests and having the developers duplicate the effort of the testers by coming up with their own use scenarios. If the testers were going to develop all possible scenarios, then giving those to the developers and sampling from them for model testing would be acceptable. Since the testers usually create only a small percentage of the possible scenarios, it is doubtful that they are duplicating the work of the developers, who independently will (we hope) identify other scenarios. So, our general approach is not to let the developers have the scenarios prior to the inspection session.


Many object-oriented software development methods discuss using one or more diagrams within a model to evaluate the other diagrams. For example, a sequence diagram traces a path through the class diagram in which the messaging arrows in the sequence diagram are supposed to correspond to associations found in the class diagram. However, these development methods do not ensure a systematic coverage of the model. One step in guided inspection checks the internal consistency and completeness of the diagrams using the diagrams created during test execution.
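
The cross-diagram check described above can be made systematic. The following sketch is our own illustration (the type names are invented): it walks the messages recorded while "executing" a test case against a sequence diagram and flags any message whose sender and receiver classes have no supporting association in the class diagram.

    import java.util.List;
    import java.util.Set;

    // Hypothetical cross-diagram consistency check: every message in a
    // sequence diagram should correspond to an association in the class diagram.
    public class ConsistencyCheck {

        // An association between two classes, order-insensitive for this sketch.
        public record Association(String classA, String classB) {
            boolean connects(String x, String y) {
                return (classA.equals(x) && classB.equals(y))
                    || (classA.equals(y) && classB.equals(x));
            }
        }

        // A message sent from one object's class to another's.
        public record Message(String senderClass, String receiverClass, String name) {}

        // Returns the messages with no supporting association -- each one is
        // either a missing association or an incorrect message.
        public static List<Message> inconsistencies(List<Message> sequenceDiagram,
                                                    Set<Association> classDiagram) {
            return sequenceDiagram.stream()
                    .filter(m -> classDiagram.stream()
                            .noneMatch(a -> a.connects(m.senderClass(), m.receiverClass())))
                    .toList();
        }
    }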

Note

Should testers only use test cases for the current increment in an inspection session?

No. Running a test scenario from a previous increment as a regression check on the model is a useful idea. The regression scenarios should be chosen to include those that failed in the previous increment and those that cover areas most likely to have been changed to incorporate the functionality of the current increment.


Evaluation Criteria

We are essentially trying to answer three questions as we inspect the MUT:

  • Is the model correct?

  • Is the model a complete representation of the information?

  • Is the model internally consistent and consistent with its basis model?

Correctness is a measure of the accuracy of the model. At the analysis level, it is the accuracy of the problem description. At the design level, it is how accurately the model represents the solution to the problem. At both levels, the model must also accurately use the notation. The degree of accuracy is judged with respect to a standard that is assumed to be infallible (referred to as "the oracle"), although it seldom is. The oracle often is a human expert whose personal knowledge is considered sufficient to be used as a standard. The human expert determines the expected results for each test case.

Testing determines that a model is correct with respect to a test case if the result of the execution is the result that was expected. (It is very important that each test case have an expected result explicitly stated before the test case writer becomes biased by the results of the inspection.) The model is correct with respect to a set of test cases if every test case produces the expected result.
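
In code terms, this correctness criterion is a simple conjunction over the suite. Continuing the hypothetical InspectionTestCase format sketched earlier, with a simulate function standing in for the human walk-through of the model:

    import java.util.List;
    import java.util.function.Function;

    // Hypothetical evaluation: the model is correct with respect to the suite
    // only if every test case's simulated execution produces exactly the
    // expected result that was recorded before the session.
    public class CorrectnessCheck {
        public static boolean modelCorrect(List<InspectionTestCase> suite,
                                           Function<InspectionTestCase, String> simulate) {
            return suite.stream()
                    .allMatch(tc -> simulate.apply(tc).equals(tc.expectedResult()));
        }
    }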

In the real world, we must assume that the oracle can be incorrect on occasion. We often separate the domain experts on a project into two teams who represent different perspectives or approaches within the company. One team constructs the model while, at the same time, the second team develops the test cases. This check and balance doesn't guarantee correct evaluations, but it does raise the probability of a correct evaluation. The same is true for every test case: any of them could specify an incorrect expected result. The testers and developers must work together to determine when this is the case.

Completeness is a measure of the inclusiveness of the model. Are any necessary, or at least useful, elements missing from the model? Testing determines whether there are test cases that pose scenarios that the elements in the model cannot represent. In an iterative, incremental process, completeness is considered relative to how mature the current increment is expected to be. This criterion becomes more rigorous as the increment matures over successive iterations.

One factor directly affecting the effectiveness of the completeness criterion is the quality of the test coverage. The model is judged complete if the results of executing the test cases can be adequately represented using only the contents of the model. For example, a sequence diagram might be constructed to represent a scenario. All of the objects needed for the sequence diagram must come from classes in the class diagram, or the model will be judged incomplete. However, if only a few test cases are run, the fact that some classes are missing may escape detection. For the early models, this inspection is sufficiently high level that coverage of 100% of the use cases is necessary.
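
This completeness check can be expressed in the same mechanical style as the consistency check. Again the sketch is purely illustrative: it collects the classes instantiated while executing the test cases and reports any that the class diagram fails to declare.

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Hypothetical completeness check: every object created while executing
    // the test cases must be an instance of some class in the class diagram.
    public class CompletenessCheck {

        // Returns the classes the scenarios needed but the model does not declare.
        public static Set<String> missingClasses(List<String> classesInstantiated,
                                                 Set<String> declaredClasses) {
            return classesInstantiated.stream()
                    .filter(c -> !declaredClasses.contains(c))
                    .collect(Collectors.toSet());
        }
    }

An empty result means only that the model is complete with respect to the test cases actually run, which is why the coverage requirement above matters.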

Consistency is a measure of whether there are contradictions within the model or between the current model and the model upon which it is based. Testing identifies inconsistencies by finding different representations within the model for similar test cases. Inconsistencies may also be identified during the execution of a test case when the current MUT is compared to its basis model or when two diagrams in the same model are compared. In an incremental approach, consistency is judged locally until the current increment is integrated with the larger system. The integration process must ensure that the new piece does not introduce inconsistencies into the integrated model.

Consistency checking can determine whether there are any contradictions or conflicts present either internal to a single diagram or between two diagrams. For example, a sequence diagram might require a relationship between two classes while the class diagram shows none. Inconsistencies will often initially appear as incorrect results in the context of one of the two diagrams and correct results in the other. Inconsistencies are identified by careful examination of the diagrams in a model during the simulated execution.

Additional qualities covers a number of system attributes that the development team might wish to verify. For example, architectural models usually have performance goals to meet. The guided inspection test cases can be used as the scenarios for testing performance: structural models used to compute performance can be applied to these scenarios, which are selected based on the usage profile, to estimate total performance and to identify potential bottlenecks.
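
As a rough illustration of applying a structural performance model to these scenarios (the cost table and names here are invented for the example), each message along a scenario's execution path is assigned an estimated cost, and the scenario total is compared against the architectural budget:

    import java.util.List;
    import java.util.Map;

    // Hypothetical performance walk-through over a guided-inspection scenario:
    // each message in the scenario carries an estimated cost, and the scenario
    // total is checked against the architecture's performance budget.
    public class PerformanceEstimate {

        public static double scenarioCostMs(List<String> messagePath,
                                            Map<String, Double> estimatedCostMs) {
            return messagePath.stream()
                    .mapToDouble(m -> estimatedCostMs.getOrDefault(m, 0.0))
                    .sum();
        }

        public static boolean withinBudget(List<String> messagePath,
                                           Map<String, Double> estimatedCostMs,
                                           double budgetMs) {
            return scenarioCostMs(messagePath, estimatedCostMs) <= budgetMs;
        }
    }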

If the architecture has an objective of facilitating change, test cases based on the change cases should be used to evaluate the degree of success in achieving this objective (see Testing Models for Additional Qualities on page 151).


