Testing Specific Types of Models


The basic guided inspection technique does not change from one development phase to another, but some characteristics of the model content and some aspects of the team do change.

  • The level of detail in the model becomes greater as development proceeds.

  • The amount of information also increases as development proceeds.

  • The exact interpretation of the evaluation criteria can be made more specific for a specific model.

  • The membership of the inspection team changes for different models.

We will now discuss models at several points in the life cycle.

Requirements Model

The requirements for an application are summarized by creating a model of the uses of the system. The UML construct used for this model is the use case developed by Jacobson [JCJO92], which is discussed in Chapter 2. Figure 4.13 is an abbreviated version of the text format used for a use case. Figure 4.14 shows the UML use case diagram for the Brickles example, and Figures 4.16 through 4.21 show the use-case text descriptions. The use case diagram captures relationships between the use cases; individual use cases are broken into "sub-use cases" using the uses and extends relationships. Later, in Chapter 9, we will use these relationships to structure the system test cases, while the text descriptions will provide most of the information for each test case.

Figure 4.13. An example of a use case

graphics/04fig13.gif

Figure 4.14. Brickles use case model

graphics/04fig14.gif

Figure 4.15. Criteria for requirements inspection

graphics/04fig15.gif

Figure 4.16. An example of use case #1

graphics/04fig16.gif

Acceptance testing often finds faults that result from problems with the requirements. The typical problems include missing requirements (an incomplete requirements model), requirements that contradict each other (an inconsistent model), and scenarios in which the system does not behave as the client intended (an incorrect model). Many of these problems can be identified much earlier than the acceptance test phase using guided inspection.

The criteria for evaluating the models are interpreted specifically for the requirements model in Figure 4.15. Completeness is a typical requirements problem for which the iterative, incremental process model is a partial solution. Guided inspection can offer further help by requiring a detailed examination by an independent group of domain experts and product definition people. This examination will identify many missing requirements much earlier than the typical process.

The detailed examination will also search for correctness faults. The act of writing the test cases for the guided inspection will identify many requirements that are not sufficiently precise to allow a test case to be written. Running the test cases will provide an opportunity for the independent group to identify discrepancies between the expected results in the test cases and the actual content of the requirements model.

The larger the system, the more problems there are with the consistency of the requirements. In addition to contradictions, there is often a need to identify places where one use case supersedes another. For example, one use case calls for an action to happen within ten seconds while another expects the same action to occur within seven seconds. The use of end-to-end scenarios that trace a complete action will help locate these inconsistencies.
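The kind of consistency check just described can be sketched mechanically: gather the timing constraints that different use cases place on the same action and flag any action given more than one deadline, so the inspection team can decide which requirement supersedes the other. The use-case names, actions, and deadlines below are illustrative, not taken from any actual requirements model.

```python
def find_timing_conflicts(requirements):
    """requirements: list of (use_case, action, deadline_seconds).

    Returns the actions that have been given more than one distinct
    deadline, together with the use cases that impose them."""
    deadlines = {}
    for use_case, action, deadline in requirements:
        deadlines.setdefault(action, []).append((use_case, deadline))
    return {action: entries
            for action, entries in deadlines.items()
            if len({d for _, d in entries}) > 1}

# Hypothetical requirements: two use cases constrain the same action.
reqs = [
    ("UC-1", "display alarm", 10),  # within ten seconds
    ("UC-7", "display alarm", 7),   # within seven seconds
    ("UC-2", "log event", 30),
]
conflicts = find_timing_conflicts(reqs)
# "display alarm" carries two different deadlines; an end-to-end
# scenario tracing that action would surface the inconsistency.
```

An end-to-end scenario performs the same search implicitly; the sketch simply shows why tracing one complete action across use cases exposes the conflict.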

One feature of the requirements model that affects how the inspection is organized is that there is no UML model on which the requirements are based. So comparisons to the basis model refer to documents produced by marketing, system engineering, or client organizations. Since this is a notorious source of defects, we will expend extra effort in verifying the requirements model.

The roles for this inspection are assigned as shown in Figure 4.22. You will want to adapt these to your situation. The domain expert provides the "correct" answers for test cases. In this case that means agreeing or disagreeing that a use case adequately represents the required functionality. Using the system testers in the tester role provides the system testers with an early look at the source of information for the system test cases and an opportunity to have input into improving the use cases. We also use a second group of domain experts and product definition people to work with the system testers. This provides a source of scenarios that is independent of the people who wrote the requirements. Some organizations will have the use cases written by developers rather than a separate organization of system engineers, and these developers will be the ones to execute test cases.

Figure 4.17. An example of use case #2

graphics/04fig17.gif

Figure 4.18. An example of use case #3

graphics/04fig18.gif

Figure 4.19. An example of use case #4

graphics/04fig19.gif

Figure 4.20. An example of use case #5

graphics/04fig20.gif

Figure 4.21. An example of use case #6

graphics/04fig21.gif

Figure 4.22. Roles in requirements inspection

graphics/04fig22.gif

Tip

When dividing the domain experts into two groups, don't divide based on ideology. That just precipitates theoretical debates. Divide the experts so that each team has representation from as many "camps" as possible.


The basic outline of "testing" the requirements model is given in the following list along with an example using Brickles. The example is given in italics.

  1. Develop the ranking of use cases by computing the combined frequency and criticality information for the use cases. Figure 4.23 gives the ranking for Brickles.

    Figure 4.23. Brickles use cases

    graphics/04fig23.gif

  2. Determine the total number of test cases that can be constructed given the amount of resources available. It should be possible to estimate this number from historical data. We will assume we have time for 15 test cases.

  3. Ration the tests based on the ranking. Note how in Figure 4.23 only 14 of the 15 are assigned since it is impossible to evenly split the number of tests. The 15th test would be allocated to the category showing the most failures in the initial round of testing.

  4. Write scenarios based only on the knowledge of those in the domain expert's role. The number of scenarios is determined by the values computed in Step 3. The player starts the game, moves the paddle, and has broken several bricks by the time he loses the puck. The system responds by providing a new puck.

  5. In a meeting of the producers of the requirements and the test scenario writers, each writer presents a scenario and the requirements modelers identify the use case that represents it, whether as a main scenario, an extension, an exception, or an alternative path. If no match is found, the scenario is listed as an incompleteness defect. If the scenario could be represented by two or more use cases (on the same level of abstraction), an inconsistency defect has occurred. In both of these cases, the first question asked is whether there is an incorrectness defect in the statement of a use case that, if corrected, would handle the scenario accurately. In the scenario provided in Step 4 there is no mention of the limited number of pucks. The system may not be able to provide a puck if the supply is exhausted. The requirement should be explicit about a fixed number of pucks.
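Steps 1 through 3 can be sketched as a small allocation routine. The use-case names and the 1-to-5 frequency and criticality ratings below are illustrative, not the actual values from Figure 4.23; the point is that proportional integer allocation naturally leaves part of the budget unassigned, just as 14 of 15 tests are assigned in the example.

```python
def allocate_tests(use_cases, budget):
    """Ration a fixed test budget proportionally to each use case's
    combined frequency + criticality score. Integer division means
    some of the budget may remain unassigned; the remainder goes to
    whichever category shows the most failures in the first round."""
    total = sum(freq + crit for _, freq, crit in use_cases)
    return {name: budget * (freq + crit) // total
            for name, freq, crit in use_cases}

# Hypothetical (name, frequency, criticality) ratings on a 1-5 scale.
use_cases = [
    ("Move paddle", 5, 5),
    ("Start game", 3, 4),
    ("Pause game", 2, 2),
]
tests = allocate_tests(use_cases, 15)
# With these ratings, 7 + 5 + 2 = 14 of the 15 tests are assigned.
```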

Much of this effort will be reused in the testing of other models. Both the ranking of use cases and construction of test cases will produce reusable assets. The requirements model will serve as the basis for testing several other models, and therefore, these test cases can be reused.

Analysis Models

We will be concerned with two types of analysis models: domain analysis and application analysis models. The two types model existing knowledge. One models the knowledge in a domain while the other models knowledge about the product.

Domain Analysis Model

The domain analysis model represents information about a domain of knowledge that pertains to the application about to be constructed. As such, it is derived from the literature and knowledge about the domain as opposed to another UML model. Although many projects are satisfied with creating a domain model that is only a simple class diagram, most domains encompass standard algorithms and many refer to states that are characteristic of the concepts being represented. Figure 4.24 shows the interpretation of the evaluation criteria for a domain model.

Figure 4.24. Criteria for domain model inspection

graphics/04fig24.gif

The domain model is a representation of the knowledge in a domain as seen through the eyes of a set of domain experts. As is to be expected, there can be differences of opinion between experts. For this reason, we have found it useful to divide the available set of experts into two groups. One group, the larger, creates the domain model while the second group serves as the testers of that model. In Figure 4.25, group one is referred to as the developers and group two is referred to as both testers and domain experts. This check and balance between the groups provides a thorough examination of the model.

Figure 4.25. Roles in domain model inspection

graphics/04fig25.gif

Figure 4.26 relates portions of the class diagram from the domain models for Brickles to its application analysis model. Note that there are two domains represented, Interactive Graphics and Games. The test cases for this model will come from the second group of domain experts. They consider how these concepts are used in the typical applications in which they have had experience. The test cases will be written by a team composed of a system tester who knows how to write test cases, and the second group of domain experts. A test case only states details down to the level of the domain concepts. Any actions are domain algorithms.

Figure 4.26. Mapping domain models onto application analysis models

graphics/04fig26.gif

A test case for the Interactive Graphics domain model would look like the following:

Assume that a canvas has been created and asked to display a shape. How will the canvas know where to locate the shape? It is expected that a mouseEvent would provide the coordinates to which a system user points.

Application Analysis Model

There will usually be multiple domain models for a large project. All of these contribute to the single application analysis model. Some parts of each domain model will be thrown away because they are outside the scope of this particular project. Some pieces of domain models will be merged to provide a single element in the application model. This makes judging completeness during the inspection more difficult since there is not a direct mapping from one model to another. Criteria and roles are shown in Figures 4.27 and 4.28.

Figure 4.27. Criteria for application analysis model inspection

graphics/04fig27.gif

Figure 4.28. Roles in application analysis model inspection

graphics/04fig28.gif

An analysis model can be too complete. That is, it can contain design information that the project team has erroneously made part of the requirements. This leads to an overly constrained design that may not be as flexible as possible. As the inspection team measures the test coverage of the model, they examine pieces that are not covered to determine whether they should be removed from the model.

Figure 2.13 shows the class diagram for the application analysis model for Brickles.

A test case for the application analysis model would look like the following:

Assume that a match has been started and the playfield has been constructed. How will a paddle prevent a puck from striking the floor boundary? It is expected that the paddle will move into the trajectory of the puck and collide with it. The collision will cause the puck to change direction by reflecting off the middle third of the paddle at the same angle from the point of impact.

Design Models

There are three levels of design in an object-oriented project: architectural, mechanistic, and detailed. We will focus on two basic design models that encompass those three levels: the architectural design model and the detailed class design model. The architectural model provides the basic structure of the application by defining how a set of interfaces are related. It also specifies the exact content of each interface. The detailed class model provides the precise semantics of each class and identifies the architectural interface to which the class corresponds.

Architectural Model

The architectural model is the skeleton of the entire application. It is arguably the most important model for the application so we will go into a fair amount of detail in this section. This is the model in which the nonfunctional requirements are blended with the functional requirements. This provides the opportunity to use the scenarios as a basis for modeling performance and other important architectural constraints.

An architectural design test case would look like the following:

Assume that the BricklesDoc and BricklesView objects have been constructed. A tick message is sent to every MovablePiece. How does the BricklesView receive the information necessary to update the bitmaps on the screen? It is expected that the BricklesDoc object will calculate the new position of each bitmap before it notifies the BricklesView that a change has occurred. The BricklesView object will call methods on the BricklesDoc object to obtain all of the information that it needs to update the display.
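The expected interaction in this test case can be sketched as a pull-style Document/View exchange: the document recomputes positions on a tick, notifies its views that a change has occurred, and each view calls back to fetch what it needs. The class names follow the Brickles example, but the methods and data shown are hypothetical stand-ins for the design.

```python
class BricklesDoc:
    def __init__(self):
        self.positions = {"puck": (0, 0)}
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def tick(self):
        # Calculate the new position of each piece *before*
        # notifying the views, as the expected result requires.
        x, y = self.positions["puck"]
        self.positions["puck"] = (x + 1, y + 1)
        for view in self.views:
            view.update()          # notify: "a change has occurred"

    def position_of(self, piece):
        # Views pull the information they need through methods
        # on the document, rather than having it pushed to them.
        return self.positions[piece]

class BricklesView:
    def __init__(self, doc):
        self.doc = doc
        self.drawn = {}
        doc.attach(self)

    def update(self):
        # Pull whatever is needed to update the bitmaps on screen.
        self.drawn["puck"] = self.doc.position_of("puck")

doc = BricklesDoc()
view = BricklesView(doc)
doc.tick()
```

A fault of the kind this test targets would appear as a mismatch between `view.drawn` and `doc.positions` after the tick.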

We will use the architecture of our game framework to illustrate the variety of techniques considered in this section. We first implemented the framework in C++ using the Microsoft Foundation Classes (MFC), which imposes an architecture known as Document/View, a variant of the canonical Model/View/Controller (MVC) architecture [Gold89]. The framework was then implemented in Java using the java.awt package, which supports a slightly different form of MVC. In each of these efforts the user interface classes present the state of the game to the user. To achieve this, the user interface has to maintain some state itself. A typical fault for these systems would be for the state in the user interface to differ from the state maintained in the classes implementing the model.

A software architecture is the basic structure that defines the system in terms of computational components and interactions among those components [ShGa96]. We will use the terms component and connector to describe the pieces of an architecture. In the UML notation, components of the architecture are represented as classes with interfaces. If the connectors between components do not have any explicit behavior, they can be represented by simple relationships between classes. If the connectors do have state and/or meaningful behavior then they are represented by objects.

Representations for Architectures

There are three types of information that are widely used to represent an architecture: relationships, states, and algorithms. The basic UML modeling language has the advantage that it can be used for all three design models as well as the analysis models. Using the same notation for all three levels of models eliminates the need to learn multiple notations. Notations such as the UML are sufficiently simple that no special tools are required, although for large models, tool support quickly becomes a necessity. Tools such as Rational Rose perform a variety of consistency checks on the static-relationship model. With this type of representation, the test cases are manually executed using the technique discussed previously. However, UML does not have specific syntax for describing architectures, so the concept/token mapping between the architecture and UML symbols is ad hoc.

Tools such as ObjectTime [Selic94] and BetterState [BetterState00] provide facilities for "animating" design diagrams and automatically check some aspects of the model. In particular, they support a simulation mechanism that can be used to execute scenarios. The diagrams are annotated with detailed scenario information as well as special simulation information. The developer can "play" scenarios and watch for a variety of faults to be revealed. This approach makes the creation of new scenarios (and test cases) easier by providing a generalized template, and it combines easy model creation with powerful simulation facilities. Usually, however, these tools support a limited set of diagram types. BetterState, for example, focuses on building a state model as the specification for the system, which leaves incomplete those static portions of the system that do not affect the state. The obvious benefit is that the scenarios are executed automatically, making it easy to run a wide range of scenarios at the price of more time needed to create the model initially. This approach is best suited to small, reactive systems or to systems whose requirements change very little during development.

Often these tools will assist in finding some types of faults as the model is entered into the tool. Consistency checks will prevent certain types of connections from being established. Scenarios are represented in some appropriate format such as a sequential file of input values that are read at appropriate times. The actions of the simulation are often represented by events. Event handlers can be used to "catch" and "generate" events at a high level without the need to write detailed algorithms. This level of execution is sufficient for verifying that required interfaces are provided. It obviously is not sufficient for determining whether the functionality is correctly implemented.
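The style of high-level event handling described above can be sketched in a few lines: handlers "catch" one event and "generate" the next without any detailed algorithm behind them, which is enough to verify that the required interfaces are exercised in order, but not that the functionality behind them is correct. The simulator and the event names here are illustrative, not any particular tool's API.

```python
class Simulator:
    def __init__(self):
        self.handlers = {}
        self.log = []

    def on(self, event, handler):
        # Register a handler that "catches" an event.
        self.handlers[event] = handler

    def emit(self, event):
        # Record the event, then dispatch it if anything catches it.
        self.log.append(event)
        handler = self.handlers.get(event)
        if handler:
            handler(self)

sim = Simulator()
# Each handler stubs out the real algorithm: catching one event
# simply generates the next one in the chain.
sim.on("buttonPressed", lambda s: s.emit("commandIssued"))
sim.on("commandIssued", lambda s: s.emit("displayUpdated"))
sim.emit("buttonPressed")
# The log shows that each required interface was exercised in order,
# which is all this level of execution can verify.
```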

Finally, architectural description languages provide the capability to represent a system at a high level of abstraction. Languages such as Rapide [Luckham95], which was developed at Stanford University, allow the modelers to be as specific as they would like to be. The flow of computation is modeled by events that flow between components. One advantage of this approach is the control it provides to the modeler. The language is sufficiently descriptive to support any level of detail that the modeler wishes to use, unlike the tools previously discussed, which have a fixed level of representation. The disadvantage is that these models are programs, with all of the problems associated with that level of detail.

When the model and test cases are represented in a programming language, the test execution can be performed automatically. The representation language may be a general purpose programming language used to implement a high-level prototype or a special purpose architectural description language such as Rapide, which is used to build a standard model. The level of detail represented in the prototype will determine how specific the testing can be.

Testing the Architecture

The Software Architecture Testing (SAT) [McGr96] technique is a special type of guided inspection that requires the following usual steps in testing any product: (1) test cases are constructed; (2) the tests are conducted on the product; and (3) the results of the test are evaluated for correctness. This technique is a "testing" technique because there are very specific test cases, and there is the concept of an execution even if the execution is sometimes manual. The team that is assigned to drive this activity is divided as shown in Figure 4.29. We will provide additional detail on each of the steps.

Figure 4.29. Roles in the architectural design model inspection

graphics/04fig29.gif

Constructing Test Cases

Test cases for the architecture are constructed from the use cases as described previously. Each use case describes a family of scenarios that specifies the different types of results that can occur during a specific use of the system. The test cases for the architecture are defined at a higher level than the more detailed design models. The results are used to evaluate the criteria shown in Figure 4.30.

Figure 4.30. Criteria for the architectural design model inspection

graphics/04fig30.gif

The test cases are essentially defined at a level that exercises the interfaces between subsystems. For example, for the game framework, the essential interface is between a model and a view. The model is divided among the Puck, Paddle, and BrickPile classes. The view is concentrated in the BricklesView class.

The Model/View architecture calls for most of the interaction to be from the view but with the model notifying the view when a change has occurred to the model. Since Brickles requires animation, we modified the architecture so that when the BricklesView object is created it is sent a series of messages that provide it with handles to the pieces of the model.

The basic architectural model is given in Figure 4.31. With the analysis out of the way, the test cases can be selected. The two basic operations are (1) setup of the system and (2) repainting the screen after a move has occurred. Unlike many systems built on Model/View, there is no need to consider the ability to add additional views. We could define a test case for each operation; however, a single grand tour[1] case can be defined in this case. Usually grand tours are too large and give little information if they fail, but in this case the second operation cannot be realized without the first so it is a natural conjunction.

[1] A grand tour is a test case that combines a number of separate test cases into one run.

Figure 4.31. An architectural model for Brickles

graphics/04fig31.gif

Test Execution

The tests are executed as described for each specific type of representation. We have used the UML notation so this will be an interactive session.

We execute the test case by constructing a message-sequence diagram. The diagram reflects preconditions for a test case. The BricklesView object is created followed by a BricklesGame object. As the BricklesGame object is created, it creates a PlayField object that in turn creates Puck, Paddle, and BrickPile objects. The messages across the architectural boundaries are shown in bold italics in Figure 4.32.

Figure 4.32. Test case execution

graphics/04fig32.gif

Verification of Results

Usually for architectures, this step is fairly simple, even though for the detailed functionality of the final application it can be very difficult. When the output from the test is in the form of diagrams, the resulting diagrams must be verified after each test execution by domain experts. When the output is the result of an execution, the test results can be verified by having those domain experts construct event sequences that would be produced by an architecture that performs correctly. The interpretation of the evaluation criteria is given in Figure 4.30.

An Additional Example

The architecture of Brickles is obviously very simple so let's consider the typical three-layer architecture. Although the diagram in Figure 4.33 is greatly simplified, we can consider the types of test cases that would be effective. The client is intended to interact with a user, do computations needed to format presentations, and interact with the business model residing on the application server. The application server is intended to be the primary computational engine, and it also handles interactions with the client and database components. Finally, the database component provides persistence for the business objects from the application server.

Figure 4.33. Three-tiered architecture

graphics/04fig33.gif

The most important scenarios for this type of architecture include multiple client/single server and multiple client/multiple server scenarios. The discussion in the next section provides a technique for structuring these tests so that they are repeatable and representative. Useful coverage of this architecture includes exercising various combinations of threads. Since these systems are usually distributed, we will defer further discussion until Chapter 8.

Evaluating Performance and Scalability

The architecture of a system should be evaluated beyond correctness, completeness, and consistency. Most architectures will have a specified set of quality attributes and these should also be evaluated. A system that presents animation, as does Brickles, must meet performance goals. The scenarios used as test cases for the basic inspection can also be used to analyze the expected performance for the architecture. The SAAM [Kazman94] approach uses a free-form analysis technique for analyzing performance. Software Architecture Testing (SAT) [McGr96] uses the testing perspective to ensure that the important features of the architecture are investigated.

The test cases are symbolically executed and the message-sequence diagrams can be analyzed from a performance perspective. For the analysis, each connection between components in the architecture is assigned a "cost" that reflects the type of communication used by the connection. The number of messages in each scenario gives an indication of the relative performance, although by itself the technique yields an order-of-magnitude value rather than a specific quantity. A more exact value can be computed by the following formula:

 time to compute = n1c1 + n2c2 + ... + nmcm 

in which each c is the cost of a connection type and each n is the number of connections of that type in the scenario. If a use profile (see Use Profiles on page 130) is used to select a representative set of test cases, an accurate approximation to a typical user session can be computed. Worst and best case approximations can also be constructed.
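The computation can be sketched directly from the formula: each connection type gets a relative cost, each scenario counts how many connections of each type its message-sequence diagram crosses, and a use profile weights the scenarios into an expected per-session cost. All of the costs, counts, and probabilities below are illustrative.

```python
def scenario_cost(counts, costs):
    """time to compute = n1*c1 + n2*c2 + ... + nm*cm"""
    return sum(n * costs[conn] for conn, n in counts.items())

# Hypothetical relative costs for three kinds of connections.
costs = {"local": 1, "ipc": 10, "remote": 100}

# Hypothetical connection counts taken from two scenarios'
# message-sequence diagrams.
scenarios = {
    "start match": {"local": 20, "ipc": 2},
    "lose puck":   {"local": 8, "ipc": 1, "remote": 1},
}

# A use profile weights the scenarios by how often they occur,
# giving an approximation to a typical user session.
use_profile = {"start match": 0.25, "lose puck": 0.75}
expected = sum(p * scenario_cost(scenarios[s], costs)
               for s, p in use_profile.items())
```

Comparing design alternatives is then a matter of rerunning the same test cases, with the same use profile, against each alternative's sequence diagrams and comparing the resulting totals.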

Design alternatives can be evaluated by comparing the message sequence diagrams and the relative quantities of messages. By using the same set of test cases, selected based on the use profile, a fair and realistic comparison can be made as to how the system will perform if constructed using each alternative.

The performance of distributed systems can also be analyzed in this manner by annotating those messages that will be interprocess and interprocessor. The test case approach using a use profile produces a representative performance measure.

The sequence diagrams can also be used to evaluate the scalability of the architecture. The following use profile indicates several types of users and different frequencies of operations in each:

 userType = p1s1, p2s2, ..., pnsn 
 useProfile = q1ut1, q2ut2, ..., qmutm 

in which the p's and q's are the probability that a particular scenario and user type, respectively, will be selected.

A scalability test case is a hypothetical mix of actors that is different from the current use profile, that is, a set of values for the q's in the useProfile equation. Usually, the different types of users will remain constant, but the relative number changes. The computation given previously is used for each scenario and for each user type. Then the number of each user type is used to aggregate further. The resulting values can identify the intensity of use for specific messages.
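The aggregation described above can be sketched in the same style as the performance computation: each user type mixes scenarios with probabilities (the p's), and a hypothetical population of each user type, standing in for the q's of a scalability test case, aggregates the per-session costs into a total load. All values are illustrative.

```python
def user_type_cost(scenario_probs, scenario_costs):
    # Expected cost of one session for a user type: sum of p * s.
    return sum(p * scenario_costs[s] for s, p in scenario_probs.items())

# Hypothetical per-scenario costs (from the previous computation).
scenario_costs = {"browse": 5, "purchase": 50}

# Each user type is a probability mix over scenarios (the p's).
user_types = {
    "casual": {"browse": 0.9, "purchase": 0.1},
    "buyer":  {"browse": 0.4, "purchase": 0.6},
}

# A scalability test case: the same user types as the current
# profile, but a different hypothetical mix of relative numbers.
population = {"casual": 900, "buyer": 100}

total_load = sum(n * user_type_cost(user_types[t], scenario_costs)
                 for t, n in population.items())
```

Rerunning the computation with a different `population` shows how the load, and the intensity of use for specific messages, shifts as the mix of actors changes.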

Detailed Class Design Model

The detailed class design model populates the architectural model with classes that will implement the interfaces defined in the architecture. This model typically includes a set of class diagrams, the OCL pre- and postconditions for every method of every class, activity diagrams of significant algorithms, and state diagrams for each class. The detailed design model for Brickles is shown in Figure 2.18, and additional detail is shown in Figure 2.15 for one specific class.

The model evaluation criteria are specialized in Figure 4.34. The focus is on compliance with the architecture. This reinforces the idea that the architecture is the keystone of the product. This is also the place where components will be reused and inserted into the system. The specification of the component should be included in the execution trace to ensure there is no need for an adapter between the component and the application.

Figure 4.34. Criteria for the class design model inspection

graphics/04fig34.gif

The roles are assigned in Figure 4.35. Notice that the architects have a role in testing the class diagram. The architects' responsibility to a project is to "enforce" the architecture. That is, the architect makes certain that developers do not violate the constraints imposed by the architecture. By selecting test cases and evaluating the results, the architects can gain detailed knowledge about the developers' implementation.

Figure 4.35. Roles in class design model inspection

graphics/04fig35.gif

A detailed class design test case would look like this:

Assume that a puck is moving to the left and up, but will hit the left wall before hitting a brick. How will the puck's direction and speed be changed when it hits the wall? It is expected that when the puck is found against the left wall, the wall will create a Collision object that will be passed to the puck. The puck will modify its velocity and begin to move to the right and up. It will be moving at the same speed.
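The expected result of this test case can be sketched as a small reflection computation: the wall creates a Collision object, passes it to the puck, and the puck reverses the horizontal component of its velocity while its speed stays the same. The classes below are hypothetical stand-ins for the design, and screen coordinates are assumed, so "up" is negative y.

```python
import math

class Collision:
    def __init__(self, normal):
        self.normal = normal  # unit normal of the surface that was hit

class Puck:
    def __init__(self, vx, vy):
        self.vx, self.vy = vx, vy

    def speed(self):
        return math.hypot(self.vx, self.vy)

    def hit(self, collision):
        # Reflect off the surface: a vertical wall flips the x
        # component, a horizontal surface flips the y component.
        nx, ny = collision.normal
        if nx:
            self.vx = -self.vx
        if ny:
            self.vy = -self.vy

# Puck moving left and up strikes the left wall.
puck = Puck(-3, -4)
before = puck.speed()
puck.hit(Collision(normal=(1, 0)))
# The puck is now moving right and up, at the same speed.
```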

The test cases at this level are very much like the final system test cases. There is so much detail available at this level that the testers have to be careful to record all the model elements that are touched by test cases. Figure 4.36 shows the diagram elements that must be coordinated during the guided inspection session. As the test progresses, the executors select methods that will be invoked, and the state model of the receiver is checked to be certain that the target object can receive each message. The messages are then added to the sequence diagram and the state models are updated to reflect changes in state. When a state in a diagram is shaded, there is additional detail to the state, but that information is not needed to evaluate the current test. Sequence diagrams will also have "dead-end" objects beyond which the testers will not attempt to examine the logic.
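The bookkeeping during execution can be sketched as follows: before a message is added to the sequence diagram, the receiver's state model is consulted to confirm the target object can accept that message in its current state, and the state is then updated. The state machine shown is an illustrative fragment, not the actual Brickles state model.

```python
class StateModel:
    def __init__(self, initial, transitions):
        self.state = initial
        # transitions: {(state, message): next_state}
        self.transitions = transitions

    def accepts(self, message):
        # Can the object receive this message in its current state?
        return (self.state, message) in self.transitions

    def receive(self, message):
        if not self.accepts(message):
            raise ValueError(
                f"'{message}' is invalid in state '{self.state}'")
        self.state = self.transitions[(self.state, message)]

# Hypothetical state model for a puck.
puck = StateModel("inPlay", {
    ("inPlay", "tick"): "inPlay",
    ("inPlay", "sink"): "lost",
})

# As each message is selected, check the receiver's state model,
# then record the message on the sequence diagram and update state.
sequence_diagram = []
for msg in ["tick", "tick", "sink"]:
    if puck.accepts(msg):
        puck.receive(msg)
        sequence_diagram.append(msg)
```

A message selected while the receiver is in a state that cannot accept it would be flagged at this point, before it ever reaches the sequence diagram.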

Figure 4.36. A test environment

graphics/04fig36.gif

This is the last step prior to implementation and code-based testing. Developers doing buddy testing of each other's code will benefit from coming back to the test cases created at this level of testing and translating these into class-level code tests.

Testing Again

We are assuming that you are using an iterative development process as we do. That means that these tests must be repeatable. We have tried to accomplish this by writing down formal test cases as opposed to simply thinking up scenarios during the inspection session.

On the second and succeeding iterations, we usually choose to reapply all of the tests that failed the last time and some of those that passed. Tests may be added to cover new features. If any problems were discovered after the inspection was conducted, tests should be added to check for those problems as well.

Tip

Use guided inspection to transfer knowledge about the model under test. On a recent project, when the developer responsible for a specific piece of the design was leaving, we used a series of inspection sessions to bring other developers up to speed on that piece. A presentation by the developer, as opposed to an inspection, would have addressed the design in the way he knew it best, not in the way the other developers were viewing it.




A Practical Guide to Testing Object-Oriented Software
ISBN: 0201325640
Year: 2005
Pages: 126