Chapter 4. Reliability and Knowing When to Stop Testing


Your product has been in final system test for days; or has it been weeks? Surely it must be time to stop testing and release it? It's the moment of decision, and you realize: damned if you do; damned if you don't. Release it too early, and you incur the wrath of customers afflicted with a buggy product, along with the high cost of fixing and retesting defects released to production. Hold the product in testing, and you incur the wrath of marketing as they remind you of the revenue being lost on top of the cost of too much testing. There is a sweet spot in testing, a point that strikes the perfect balance between releasing the product too early and releasing it too late.[1] But how do you know you are close to that sweet spot?

[1] Although examples from this chapter are couched in terms of final system test, in the Unified Software Development Process the question of whether to stop or continue testing for and fixing defects is pertinent throughout the construction and transition phases: during testing of increments of the system at the end of each iteration, to determine whether moving to the next iteration is warranted; during final system test at the end of the construction phase, to determine whether the product is reliable enough for beta test; and during beta test in the transition phase, to determine whether the system is ready for full commercial release.

The second idea Software Reliability Engineering (SRE) brings to use case development is a quantitative way to talk about reliability, providing a sound basis for determining when a product's reliability goal has been reached, when testing can terminate, and when the product can be released.

In this chapter, we'll do the following:

  • Talk about reliability, how to define it, measure it, set goals in terms of it, and track it in the testing of use case-driven development projects.

  • Look at a spreadsheet-based dashboard that lets you track three key factors in knowing whether it's time to stop testing and release a product: failure intensity, open defects, and test coverage as per the operational profile. This dashboard provides at-a-glance monitoring of reliability growth across a large package of use cases.

  • Learn how to measure the effectiveness of a use case-driven development process in terms of defect detection and removal, a measure touted by some as the single most important metric for improving the capability of a process to produce quality products. And we'll look at how to use this measure to determine whether your reliability goals can be raised or lowered for future releases.
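To make the ideas behind the dashboard concrete, here is a minimal sketch of the two calculations just described: failure intensity (failures observed per unit of test execution time, which should decline as reliability grows) and defect detection/removal effectiveness (the fraction of total defects caught before release). The function names, weekly data, and the 40-hour test week are illustrative assumptions, not figures from the book.

```python
def failure_intensity(failure_counts, hours_per_period):
    """Failures per hour of test execution for each period.

    A declining trend across periods is the signature of
    reliability growth as testing proceeds.
    """
    return [count / hours_per_period for count in failure_counts]


def defect_removal_effectiveness(found_before_release, found_after_release):
    """Fraction of all known defects detected before release."""
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0


# Hypothetical weekly failure counts during final system test,
# assuming 40 hours of test execution per week.
weekly_failures = [24, 15, 9, 5, 2]
print(failure_intensity(weekly_failures, 40))
# -> [0.6, 0.375, 0.225, 0.125, 0.05]  (falling: reliability is growing)

print(defect_removal_effectiveness(95, 5))
# -> 0.95  (95% of known defects were removed before release)
```

In practice these values would be recomputed per use case and compared against the reliability goal and the operational profile's coverage targets before deciding to stop testing.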

Let's begin by defining what we mean by reliability.



Succeeding with Use Cases: Working Smart to Deliver Quality
ISBN: 0321316436
Year: 2004
Pages: 109