In this chapter, we've talked about reliability: how to define it, measure it, set goals in terms of it, and track it in testing. Let's review some key points from this chapter.

Reliability is the probability of failure-free operation for a specified length of time in a specified environment. This definition is user-centric (failures are defined from the perspective of the user of the system) and dynamic (failures only happen when a product is being used). Having a quantifiable definition of reliability is the key to being able to measure and track reliability during testing, as a means of helping you decide when it's time to stop testing. Two common techniques for quantifying a reliability goal are:

- Failure intensity, which is the number of failures of a specified severity per unit of time, number of transactions, and so on.
- The exponential failure law, which gives the probability of failure-free operation for a specified length of time when the failure intensity is constant (i.e., you aren't patching the system or adding features, as would generally be the case for a system in use by a customer in the field).
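To make these two ideas concrete, here is a minimal sketch in Python of the exponential failure law, R(t) = e^(-λt), which converts a constant failure intensity λ into the probability of failure-free operation for a duration t. The function name and the sample numbers are illustrative assumptions, not values from this chapter.

```python
import math

def reliability(failure_intensity: float, duration: float) -> float:
    """Exponential failure law: the probability of failure-free
    operation for `duration` time units, given a constant
    `failure_intensity` (failures per time unit)."""
    return math.exp(-failure_intensity * duration)

# Hypothetical example: at a failure intensity of 0.002 failures per
# hour of operation, the chance of a failure-free 50-hour week is
# about 90%.
print(f"{reliability(0.002, 50.0):.3f}")  # -> 0.905
```

Note that the same failure intensity can be stated per hour, per transaction, or per any other unit of use; the duration just has to be expressed in the same unit.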
A reliability growth curve is a run chart that lets you visualize the failure intensity of a product during development and test. Used in conjunction with a failure intensity objective, the reliability growth curve lets you tell whether your system is getting close to being ready for release.

The Swamp Report is a spreadsheet-based dashboard that tracks three key factors for deciding whether it's time to stop testing and release a product: failure intensity, open defects, and test coverage as per the operational profile. It provides at-a-glance monitoring of reliability growth across a large number of "pieces" that form a whole (use cases of a component, components of a product, whole products that are part of a program).

Defect Detection Effectiveness (DDE) is a measure of how effective a testing process is at detecting defects. DDE provides a quantitative way to correlate failure intensity objectives with the success of the releases they produced. By tracking failure intensities during testing, followed by DDE analysis after the release, you can determine whether you can raise, or need to lower, failure intensity objectives for future releases.
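DDE is commonly computed as the percentage of all defects of a given severity, found either in testing or in the field over a fixed post-release window, that testing caught. The sketch below illustrates the arithmetic; the function name and the sample counts are assumptions for illustration, not data from the chapter.

```python
def defect_detection_effectiveness(found_in_test: int, found_in_field: int) -> float:
    """DDE as a percentage: defects caught in testing divided by all
    defects eventually found (testing plus field reports, counted over
    a fixed post-release window)."""
    total = found_in_test + found_in_field
    if total == 0:
        return 100.0  # nothing found anywhere, so testing missed nothing
    return 100.0 * found_in_test / total

# Hypothetical example: 47 severe defects found in test, 3 more
# reported by customers in the first 90 days after release.
print(defect_detection_effectiveness(47, 3))  # -> 94.0
```

Read against the failure intensity objective used during testing, a lower-than-expected DDE is a signal that the objective should be tightened for the next release.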