Suggestions


Organization and Process

Create a testing organization with two levels. The first level has responsibility for facilitating low-level testing among developers. This group provides high-level Tester classes and other reusable test assets to developers. The members of this group must be able to program and may split their assignments between a development team and the project testing team.

The second level supports system-wide testing activities. This group interacts with the development group throughout the entire course of a project. They write test cases from the use cases early in the project to identify requirements that are not testable. They participate in guided inspection sessions and ultimately test applications as complete entities.

Begin organizational improvement by sharing the testing assets you have created with others in your organization. When a decision is being discussed in a project meeting, ask how it will affect the team's ability to test the software. Basically, raise the visibility of testing and cast it in a positive light: "Look how many defects are not delivered to customers."

Write an explicit process description for each of these levels. Use the ones that we have provided as a starting point (see A Testing Process on page 78). Tailor those to your organization's level of maturity and type of system.

Data

Collect data over the lifetime of the project and use it as the basis for making decisions. "I think…" and "It seems to me that…" are not the bases on which business or technical decisions should be made. "This technique is better because it increases the defect/hour detection rate" is a much more successful strategy. Even rough numbers are better than no numbers. Figure 11.1 lists some test metrics and references places in the text where we have discussed them.

Figure 11.1. Test metrics

graphics/11fig01.gif
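Metrics such as the defect/hour detection rate and defects per thousand lines of code are simple ratios. As an illustration (the class name and the numbers below are hypothetical, not from the text), they can be computed like this:

```java
// Illustrative only: two simple test metrics discussed in the text.
public class TestMetrics {
    // Defect detection rate: defects found per hour of testing effort.
    static double defectsPerHour(int defectsFound, double testingHours) {
        return defectsFound / testingHours;
    }

    // Defect density: defects per thousand lines of code (KLOC).
    static double defectsPerKloc(int defectsFound, int linesOfCode) {
        return defectsFound * 1000.0 / linesOfCode;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: 12 defects found in 8 hours against 24,000 LOC.
        System.out.println(defectsPerHour(12, 8.0) + " defects/hour");
        System.out.println(defectsPerKloc(12, 24000) + " defects/KLOC");
    }
}
```

Tracking even rough versions of these numbers over several iterations is enough to compare techniques, as the text suggests.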

Include logging mechanisms in the Tester classes that make the detection of failures easy to quantify. Standardize the terminology and symbols used in the logging. Remember that a test case that should result in an "error" condition and the throwing of an exception passes if the error occurs and fails if not. Count those times when the software doesn't do what the specification says it should.
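A minimal sketch of such a logging mechanism follows (the class and method names are hypothetical, not from the text). Note how the expected-exception rule is encoded directly: the test case passes when the specified exception is thrown and fails when it is not.

```java
import java.util.logging.Logger;

// Sketch of a Tester class whose log records make failures easy to quantify.
// Terminology is standardized: every outcome is logged as PASS or FAIL.
public class AccountTester {
    private static final Logger LOG = Logger.getLogger("AccountTester");
    private int passed, failed;

    void report(String testCase, boolean pass) {
        if (pass) { passed++; LOG.info("PASS " + testCase); }
        else      { failed++; LOG.warning("FAIL " + testCase); }
    }

    // Run a test case whose specified outcome is a thrown exception:
    // it passes if the exception occurs and fails if it does not.
    void expectException(String testCase, Runnable stimulus,
                         Class<? extends Exception> expected) {
        try {
            stimulus.run();
            report(testCase, false);                  // no exception: fail
        } catch (Exception e) {
            report(testCase, expected.isInstance(e)); // right exception: pass
        }
    }

    int failures() { return failed; }
}
```

A developer-facing call might look like `tester.expectException("withdraw-negative", () -> account.withdraw(-1), IllegalArgumentException.class)`, where `account` is the object under test.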

Standards

Use standards as the starting point for any products that will be archived for continuing use. Even de facto standards have evolved through the discussions of a community of supporters and early adopters, which gives them a broader perspective than a few developers on a single project could provide. Be certain that the standards chosen for testing are compatible with those chosen for development.

Define a set of templates for the testing of work products. This reduces the effort required to develop these products. Standardize these templates throughout your organization.

Test Plans, Cases, and Reports

Use IEEE standards [IEEE87], as we did for the test plan, as a jump start for creating your test plan or test case formats. These should be tailored to your domain and type of system, but it is easier to delete or modify an existing item than to create it from scratch. Refine your formats based on your experience with them. To do this:

  1. Collect data on deviations from plans.

  2. Use test reports to collect data on live defect ranges, reliability, and number of defects per thousand lines of code.

Requirements

Create your own standard use case template by modifying our template and the template available on Alastair Cockburn's Web site [Cock00]. Write test scenarios for guided inspection as early as possible in the development cycle. This will identify vague requirements as early as possible.

Defect Categories

Use widely accepted de facto standards such as Orthogonal Defect Classification, which is based on a large body of data collected within IBM. These classifications can serve as the basis for reviewing your test cases and test case strategies. We have provided lists of some of these and illustrated their use in Orthogonal Defect Classification as a Test Case Selector on page 125 and ODC on page 314.

Software Infrastructure

Spend resources on infrastructure for testing. It takes the time of experienced designers to produce well-designed Tester classes and to use the parallel architecture for class testing (PACT) effectively. We have discussed test environments for C++ and Java that support various types of low-level testing. Versions of many of these are available on the Web site. Each will require resources to adapt the tool to your environment, but each will save you many person-hours of testing time.
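The core PACT idea can be sketched in a few lines (the `Shape`/`Circle` example is hypothetical, chosen only for illustration): each class under test has a parallel Tester class, and the Tester hierarchy mirrors the inheritance hierarchy of the classes under test, so test cases are inherited along with the methods they exercise.

```java
// Classes under test.
class Shape {
    double area() { return 0.0; }
}
class Circle extends Shape {
    double radius;
    Circle(double r) { radius = r; }
    @Override double area() { return Math.PI * radius * radius; }
}

// Parallel Tester hierarchy: CircleTester extends ShapeTester just as
// Circle extends Shape, inheriting ShapeTester's test cases.
class ShapeTester {
    // Factory method: subclasses override to supply the object under test.
    Shape createInstance() { return new Shape(); }

    // Inherited test case: an area must never be negative.
    boolean testAreaNonNegative() { return createInstance().area() >= 0.0; }
}
class CircleTester extends ShapeTester {
    @Override Circle createInstance() { return new Circle(2.0); }

    // Circle-specific test case added alongside the inherited ones.
    boolean testArea() {
        return Math.abs(createInstance().area() - Math.PI * 4.0) < 1e-9;
    }
}
```

The overridden factory method is the key design choice: every inherited test case automatically runs against an instance of the subclass.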

Take advantage of free or low-cost testing tools such as JUnit [Beck99]. It works well with PACT classes and provides an execution environment for them. Bring these into your project and use them to automate the routine parts of testing so that you have time for selecting appropriately diabolical test cases.

Techniques

We have presented a number of techniques that are applied at a variety of points in the development process. Let's consider these as a tester's toolbox.

Apply guided inspection from the first models until the models are no longer updated. Early on, this tests both the models and the requirements from which the tests are derived. As the requirements stabilize, there will be less need to question whether the tests themselves are correct, and it will be faster and easier to apply them and evaluate the results.

Use temporal logic as you create design specifications that involve concurrency. This will allow you to design the software more exactly and to test it more thoroughly. Where timing makes a difference, be certain that the specification expresses that difference.

Use SYN-path analysis as you create test cases to ensure coverage of possible synchronization defects. Identify critical points at which two threads must access the same state information. Create test cases by following each thread to other synchronization points in the code.
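A sketch of such a test case follows (the scenario and names are hypothetical). Two threads are released at the same critical point so that both reach the shared state together; if the access to that state is not properly synchronized, lost updates make the test fail.

```java
import java.util.concurrent.CountDownLatch;

// SYN-path style test sketch: force two threads through the same
// synchronization point to expose a lost-update defect.
public class SynPathTest {
    static int counter = 0;
    static final Object lock = new Object();

    static void increment() {
        synchronized (lock) {   // remove this block to observe the defect
            counter++;
        }
    }

    public static boolean run(int perThread) {
        counter = 0;
        CountDownLatch start = new CountDownLatch(1);  // the critical point
        Runnable body = () -> {
            try { start.await(); } catch (InterruptedException e) { return; }
            for (int i = 0; i < perThread; i++) increment();
        };
        Thread t1 = new Thread(body), t2 = new Thread(body);
        t1.start(); t2.start();
        start.countDown();      // both threads enter the SYN-path together
        try { t1.join(); t2.join(); } catch (InterruptedException e) { return false; }
        return counter == 2 * perThread;  // false if any updates were lost
    }

    public static void main(String[] args) {
        System.out.println(run(100_000) ? "PASS" : "FAIL");
    }
}
```

The latch is what makes this a SYN-path test rather than an ordinary one: it maximizes the chance that both threads contend at the identified synchronization point.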

Apply hierarchical increment testing (HIT) and the orthogonal array testing system (OATS) when there are more tests that could be run than there are resources to run them. Use HIT to determine which of the test cases that are inherited from a parent PACT class will be applied to the child class. If they are all fully automated and machine time is not a problem, run them all! If the resources required increase as the number of tests you run increases, then use HIT to eliminate those tests that are less likely to discover new defects.
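One simple way to mechanize that selection is sketched below (the mapping and test names are hypothetical): record which methods each inherited test case exercises, and rerun a test on the child class only if it touches a method the child overrides or newly defines.

```java
import java.util.*;

// Sketch of a HIT-style selector: rerun an inherited test case only when
// it exercises a method that changed in the child class.
public class HitSelector {
    // Which methods each inherited test case exercises (hypothetical data).
    static final Map<String, Set<String>> EXERCISES = Map.of(
        "testAreaNonNegative", Set.of("area"),
        "testToString",        Set.of("toString"),
        "testMoveBy",          Set.of("moveBy"));

    static List<String> select(Set<String> changedInChild) {
        List<String> rerun = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : EXERCISES.entrySet()) {
            // Keep the test if any method it exercises changed in the child.
            if (!Collections.disjoint(e.getValue(), changedInChild))
                rerun.add(e.getKey());
        }
        Collections.sort(rerun);
        return rerun;
    }
}
```

Tests that exercise only unchanged, inherited behavior are the ones least likely to discover new defects and are dropped first.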

Use OATS to sample from a population of possible, but not yet written, test cases. Look for places in the design where a large amount of flexibility is designed into the software.
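As an illustration of the sampling (the factor names are hypothetical), four configuration options with three settings each would require 3^4 = 81 exhaustive combinations; the standard L9(3^4) orthogonal array selects just nine test cases in which every pair of settings still appears together at least once:

```java
// Sketch of OATS sampling with the standard L9(3^4) orthogonal array.
public class OatsSample {
    // Rows of the L9 array; entries are level indices 0..2 for each factor.
    static final int[][] L9 = {
        {0,0,0,0}, {0,1,1,1}, {0,2,2,2},
        {1,0,1,2}, {1,1,2,0}, {1,2,0,1},
        {2,0,2,1}, {2,1,0,2}, {2,2,1,0}};

    // Map the array rows onto concrete factor levels to get test cases.
    static String[] testCases(String[][] levels) {
        String[] cases = new String[L9.length];
        for (int i = 0; i < L9.length; i++) {
            StringBuilder sb = new StringBuilder();
            for (int f = 0; f < 4; f++)
                sb.append(levels[f][L9[i][f]]).append(f < 3 ? " " : "");
            cases[i] = sb.toString();
        }
        return cases;
    }
}
```

Nine cases instead of 81 is exactly the kind of saving to look for at the flexible points in a design.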

Use test patterns that correspond to specific developmental design patterns as you design test cases and the supporting PACT classes. Where the design documentation refers to a specific design pattern, determine whether a corresponding test pattern exists. If it does, use it to speed the design of test cases and software. If it does not, write it and publish it, either within the company or at one of the many patterns conferences.

Risks

There are a number of risks associated with the testing roles of a project. Let's consider a few and how to mitigate them.

  1. Testing may be viewed as a secondary concern behind development rather than as an equal partner. This risk should be mitigated by collecting data to show the "worth" of testing. Be careful that this worth is not seen as being at the expense of developers.

  2. Testers may underestimate the amount of testing that the project is willing to support and allow serious faults to escape detection. Be a pain to managers and developers. Always test until you are told, "No more." At the same time, collect data so that you know the cost per defect of your testing. Use the reuse and automation techniques that we have described to keep this cost as low as possible.

  3. Traditional test strategies may not be effective at identifying the types of defects that occur in dynamically reconfigurable, distributed object systems. This is mitigated by modifying existing strategies to include some of the techniques listed in Chapter 8.



A Practical Guide to Testing Object-Oriented Software
ISBN: 0201325640
Year: 2005
Pages: 126
