Based on a special study commissioned by the Department of Defense, Jones (Software Productivity Research, 1994; Jones, 2000) estimates the defect removal effectiveness for organizations at different levels of the Capability Maturity Model (CMM) for the development process:
These values can be used as comparison baselines for organizations to evaluate their relative capability with regard to this important parameter.
In a discussion of quantitative process management (a process area of Capability Maturity Model Integration, CMMI, level 4) and process capability baselines, Curtis (2002) shows estimated baselines for defect removal effectiveness by phase of defect insertion (or defect origin, in our terminology). The cumulative percentages of defects removed up through acceptance test (the last phase before the product is shipped), by phase of insertion, for CMMI level 4 are shown in Table 6.4. Based on historical and recent data from three software engineering organizations at General Dynamics Decision Systems, Diaz and King (2002) report the phase containment effectiveness by CMM level as follows:
Table 6.4. Cumulative Percentages of Defects Removed by Phase for CMMI Level 4
| Phase Inserted | Cumulative % of Defects Removed Through Acceptance Test |
|---|---|
| Requirements | 94% |
| Top-level design | 95% |
| Detailed design | 96% |
| Code and unit test | 94% |
| Integration test | 75% |
| System test | 70% |
| Acceptance test | 70% |
It is not clear how many key phases there are in the development process for these projects, or how much containment effectiveness varies across phases. These statistics appear to represent the average effectiveness of peer reviews and testing for a number of projects at each maturity level; they can therefore perhaps be roughly interpreted as overall inspection effectiveness or overall test effectiveness.
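To make the phase containment metric concrete, here is a minimal sketch of the usual calculation: the fraction of defects inserted in a phase that are also removed in that same phase, as opposed to escaping to later phases. The phase names and defect counts are hypothetical, chosen only for illustration.

```python
# Hypothetical defect counts by phase. "Caught" means removed in the
# same phase the defect was inserted; "escaped" means found later.
caught_in_phase = {"design": 40, "code": 120}
escaped_phase = {"design": 10, "code": 30}

def containment_effectiveness(phase):
    """Fraction of defects inserted in `phase` that were removed in it."""
    caught = caught_in_phase[phase]
    total = caught + escaped_phase[phase]
    return caught / total

print(f"{containment_effectiveness('design'):.0%}")  # 80%
print(f"{containment_effectiveness('code'):.0%}")    # 80%
```

A phase with high containment effectiveness sends fewer defects downstream, which is why the metric is tracked per phase rather than only in aggregate.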
According to Jones (2000), most forms of testing are, in general, less than 30% efficient; the cumulative efficiency of a sequence of test stages, however, can exceed 80%.
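The arithmetic behind this claim is that test stages compound: each stage removes a fraction of the defects that reach it, so the cumulative efficiency of a chain of stages is one minus the product of the per-stage escape rates. A short sketch, using hypothetical per-stage efficiencies that are each below Jones's 30% figure:

```python
from math import prod

# Six hypothetical test stages, each removing 25% of the defects
# that reach it. Cumulative efficiency = 1 - product of (1 - e_i).
stage_efficiency = [0.25] * 6

cumulative = 1 - prod(1 - e for e in stage_efficiency)
print(f"{cumulative:.1%}")  # 82.2%
```

Even modest stages, chained, can top the 80% cumulative figure cited above.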
These findings show a reasonable degree of consistency with one another and with the example in Figure 6.4. The Figure 6.4 example is based on a real-life project. No process maturity assessment was conducted for that project, but its process was mature and quantitatively managed; based on its key process practices and its excellent field quality results, the project would likely place at level 4 or level 5 of a process maturity scale.
More empirical studies and findings on this subject would surely produce useful knowledge. For example, reliable benchmark baselines are needed for test effectiveness and inspection effectiveness by process maturity, for the characteristics of the distributions at each maturity level, and for variations across types of software.