The guided inspection technique provides a means of searching a work product for faults objectively and systematically by using explicit test cases. From this testing perspective, a review is treated as a test session. The basic testing steps are as follows:
For guided inspection, these basic steps are specialized as follows (we will elaborate on each of them in this chapter):
Reviews usually involve a discussion of the role of each piece of a model from a high level. The relationships between pieces are also explained in terms of the specified interfaces at the formal parameter level. The test cases created using this technique allow these same pieces and relationships to be examined at a much more concrete level that assigns specific values to the attributes of the objects. The test cases should be written at a level that is sufficiently specific to support tracing exact paths of execution through the logic of the algorithms, but not so specific that the code must be written first.
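To make this concrete, a guided-inspection test case can be written down as a simple record: a scenario, specific attribute values for the objects involved, and an expected result stated before the inspection begins. The sketch below is illustrative only; the field names and the banking scenario are assumptions, not part of the technique's definition.

```python
from dataclasses import dataclass

# Illustrative sketch of a guided-inspection test case record.
# Field names and the example scenario are assumptions for illustration.
@dataclass
class InspectionTestCase:
    scenario: str          # prose description of the situation being tested
    inputs: dict           # concrete attribute values assigned to the objects
    expected_result: str   # stated BEFORE the inspection, to avoid bias
    actual_result: str = ""  # recorded while "executing" the model by hand

    def passed(self) -> bool:
        return self.actual_result == self.expected_result

tc = InspectionTestCase(
    scenario="Customer withdraws more than the account balance",
    inputs={"Account.balance": 100, "Withdrawal.amount": 250},
    expected_result="Withdrawal rejected; balance unchanged",
)
# During the inspection, the reviewers trace the scenario through the model
# and record what the model actually says should happen:
tc.actual_result = "Withdrawal rejected; balance unchanged"
print(tc.passed())
```

The point of the record is precision: the inputs are specific enough to trace an exact path through the logic of the algorithms, yet no code has to exist for the inspection team to "execute" the case against the model.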
Many object-oriented software development methods discuss using one or more diagrams within a model to evaluate the other diagrams. For example, a sequence diagram traces a path through the class diagram in which the messaging arrows in the sequence diagram are supposed to correspond to associations found in the class diagram. However, these development methods do not ensure a systematic coverage of the model. One step in guided inspection checks the internal consistency and completeness of the diagrams using the diagrams created during test execution.
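The cross-diagram check described above can be sketched as a small script. The representations here are minimal assumptions (pairs for associations, triples for messages), not the output of any real UML tool; the example classes are hypothetical.

```python
# Illustrative consistency check: every message sender -> receiver in a
# sequence diagram should be backed by an association between the
# corresponding classes in the class diagram.
# The data structures and example model elements are assumptions.

associations = {("ATM", "Account"), ("ATM", "CardReader")}  # from the class diagram
messages = [
    ("ATM", "Account", "debit"),          # from the sequence diagram
    ("ATM", "Printer", "printReceipt"),
]

def unsupported_messages(messages, associations):
    """Return messages with no corresponding association (in either direction)."""
    return [m for m in messages
            if (m[0], m[1]) not in associations
            and (m[1], m[0]) not in associations]

print(unsupported_messages(messages, associations))
# -> [('ATM', 'Printer', 'printReceipt')]  — a candidate fault to inspect
```

Running the test cases against the model drives exactly this kind of comparison: each message traced during "execution" either finds its supporting association or exposes an inconsistency.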
Evaluation Criteria

We are essentially trying to answer three questions as we inspect the model under test (MUT):
Correctness is a measure of the accuracy of the model. At the analysis level, it is the accuracy of the problem description. At the design level, it is how accurately the model represents the solution to the problem. At both levels, the model must also accurately use the notation. The degree of accuracy is judged with respect to a standard that is assumed to be infallible (referred to as "the oracle"), although it seldom is. The oracle is often a human expert whose personal knowledge is considered sufficient to serve as a standard. The human expert determines the expected results for each test case. Testing determines that a model is correct with respect to a test case if the result of the execution is the result that was expected. (It is very important that each test case have an expected result explicitly stated before the test case writer becomes biased by the results of the inspection.) The model is correct with respect to a set of test cases if every test case produces the expected result.

In the real world, we must assume that the oracle can be incorrect on occasion. We often separate the domain experts on a project into two teams who represent different perspectives or approaches within the company. One team constructs the model while, at the same time, the second team develops the test cases. This check and balance doesn't guarantee correct evaluations, but it does raise the probability. The same is true for every test case: any of them could specify an incorrect expected result. The testers and developers must work together to determine when this is the case.

Completeness is a measure of the inclusiveness of the model. Are any necessary, or at least useful, elements missing from the model? Testing determines whether there are test cases that pose scenarios that the elements in the model cannot represent. In an iterative, incremental process, completeness is considered relative to how mature the current increment is expected to be.
This criterion becomes more rigorous as the increment matures over successive iterations. One factor directly affecting the effectiveness of the completeness criterion is the quality of the test coverage. The model is judged complete if the results of executing the test cases can be adequately represented using only the contents of the model. For example, a sequence diagram might be constructed to represent a scenario. All of the objects needed for the sequence diagram must come from classes in the class diagram or the model will be judged incomplete. However, if only a few test cases are run, the fact that some classes are missing may escape detection. For the early models, this inspection is at a sufficiently high level that coverage of 100% of all use cases is necessary.

Consistency is a measure of whether there are contradictions within the model or between the current model and the model upon which it is based. Testing identifies inconsistencies by finding different representations within the model for similar test cases. Inconsistencies may also be identified during the execution of a test case when the current MUT is compared to its basis model or when two diagrams in the same model are compared. In an incremental approach, consistency is judged locally until the current increment is integrated with the larger system. The integration process must ensure that the new piece does not introduce inconsistencies into the integrated model. Consistency checking can determine whether there are any contradictions or conflicts present either internal to a single diagram or between two diagrams. For example, a sequence diagram might require a relationship between two classes while the class diagram shows none. Inconsistencies will often initially appear as incorrect results in the context of one of the two diagrams and correct results in the other. Inconsistencies are identified by careful examination of the diagrams in a model during the simulated execution.
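The completeness check mentioned above — every object in a sequence diagram must come from a class in the class diagram — can be sketched in a few lines. The `name:Class` notation for sequence-diagram objects follows UML convention; the example model elements are assumptions.

```python
# Illustrative completeness check: every object appearing in a sequence
# diagram must belong to a class present in the class diagram.
# The example classes and objects are assumptions for illustration.

class_diagram = {"ATM", "Account", "CardReader"}           # classes in the model
sequence_objects = ["anATM:ATM", "acct:Account", "log:TransactionLog"]

def missing_classes(sequence_objects, class_diagram):
    """Return classes used in the sequence diagram but absent from the class diagram."""
    classes_used = {obj.split(":")[1] for obj in sequence_objects}
    return sorted(classes_used - class_diagram)

print(missing_classes(sequence_objects, class_diagram))
# -> ['TransactionLog']  — the model would be judged incomplete
```

As the text notes, the strength of this check depends on coverage: a missing class is only detected if some test case's scenario actually needs an object of that class.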
Additional qualities covers a number of system attributes that the development team might wish to verify. For example, architectural models usually have performance goals to meet. The guided inspection test cases can be used as the scenarios for testing performance. Structural models used to compute performance can be applied to these scenarios, which are selected based on the use profile, to estimate total performance and to identify potential bottlenecks. If the architecture has an objective of facilitating change, test cases based on the change cases should be used to evaluate the degree of success in achieving this objective (see Testing Models for Additional Qualities on page 151).
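Weighting per-scenario performance estimates by the use profile can be sketched as a simple expected-value calculation. All of the numbers below are hypothetical; in practice the per-scenario estimates would come from the structural performance model.

```python
# Illustrative sketch: combine per-scenario performance estimates with
# use-profile frequencies to estimate overall expected response time.
# All frequencies and timings are hypothetical.

use_profile = {"withdraw": 0.6, "deposit": 0.3, "transfer": 0.1}   # relative frequencies
estimated_ms = {"withdraw": 120, "deposit": 90, "transfer": 200}   # from the structural model

expected_ms = sum(use_profile[s] * estimated_ms[s] for s in use_profile)
print(f"expected response time: {expected_ms:.0f} ms")

# Scenarios whose estimate exceeds a (hypothetical) goal flag potential bottlenecks.
goal_ms = 150
bottlenecks = [s for s in use_profile if estimated_ms[s] > goal_ms]
print(bottlenecks)
```

Because the scenarios are selected from the use profile, the same test cases that drive the correctness inspection also yield a usage-weighted performance estimate and point to the scenarios most worth optimizing.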