Chapter 8: How Much is Enough?


Overview

The previous chapters dealt mainly with what should be tested, how, and why. If we were to turn all our testing ideas into automated test cases, the ratio of test effort to implementation effort would be at least 10 to 1. One important question is therefore: When have we tested sufficiently?

Let's be honest: only very few of us are addicted to testing. Most development teams suffer from the opposite phenomenon. There are too few tests for us to always deliver or restructure with a clear conscience. The other important question is therefore: When have we tested too little?

There are numerous factors that play a role in determining the optimal testing effort. The most important ones follow:

  • Complete testing with the declared intention to verify the correctness of a program is impossible to achieve for all nontrivial programs. The objective of our testing efforts, therefore, can only be to find as many faults as possible at a manageable effort level.

  • There is an acceptable error level for each system. How high this level is depends on the type of system: the software controlling a radiological unit should certainly not contain as many errors as a Web application for sock subscriptions. [1] The acceptable error level is normally specified by the customer in metrics like mean time between failures. The actual error level achieved by an implementation is usually hard to predict without running the application productively for some time.

  • The effort required to achieve a specific error level grows nonlinearly with the benefit. For this reason, an average error level can be reached with relatively little testing effort, but halving the number of remaining bugs costs far more than twice as much.

  • Not all faults are created equal—some are cosmetic, some are catastrophic. If possible, testing should concentrate on finding the severe bugs.

  • A suitable number of unit tests has a positive effect on development velocity as soon as the project lasts longer than a certain minimum duration.

  • Test-first development requires at least enough tests that all developers have sufficient confidence in their own work. When developers are forced to fall below their personal minimum quality standard, due to time pressure for instance, their identification with the result of their work suffers, and so do their motivation and productivity.

  • Unit tests are not only a quality-assurance technique. Above all, they steer the development of our evolutionary design.

  • Unit tests are not the only tests we use, so they don't have to guarantee the desired error level on their own. The acceptance tests specified by the customer share this responsibility. A life-critical system demands additional test steps and quality-assurance measures.

  • Too few tests hold another danger: they lull us into a false sense of security.
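To make the test-first point above concrete, here is a minimal sketch of the style in plain Java. The `ShoppingCart` class and its methods are hypothetical examples invented for illustration, not code from this book; the point is that the test states the expected behavior first, and the production code exists only to satisfy it.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical production class, written only after (and because)
// the test below demanded this interface.
class ShoppingCart {
    private final List<Double> prices = new ArrayList<>();

    void add(double price) {
        prices.add(price);
    }

    double total() {
        double sum = 0.0;
        for (double p : prices) {
            sum += p;
        }
        return sum;
    }
}

public class ShoppingCartTest {
    public static void main(String[] args) {
        // The test-first step: this expectation was written before
        // ShoppingCart existed and pins down its design.
        ShoppingCart cart = new ShoppingCart();
        cart.add(5.0);
        cart.add(7.5);
        if (cart.total() != 12.5) {
            throw new AssertionError("expected 12.5, got " + cart.total());
        }
        System.out.println("test passed");
    }
}
```

A handful of such tests per class is often enough to restructure with a clear conscience; the question of this chapter is how far beyond that handful it pays to go.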

For these reasons, we have to weigh two aspects against each other: the economic side (How much does a given error level cost?) and the technical side (How many tests will bring us maximum velocity, flexible design, and happy developers?).

[1] There is no such thing? Check it out at [URL:Soxabo].




Unit Testing in Java: How Tests Drive the Code (The Morgan Kaufmann Series in Software Engineering and Programming)
ISBN: 1558608680
EAN: 9781558608689
Year: 2003
Pages: 144
Authors: Johannes Link
