A Closer Look at Defect Removal Effectiveness

To define defect removal effectiveness clearly, we must first understand the activities in the development process that are related to defect injection and removal. Defects are injected into the product or into intermediate deliverables of the product (e.g., the design document) at various phases. It is wrong to assume that all software defects are injected at the beginning of development. Table 6.1 shows an example of the activities in which defects can be injected or removed for a development process.

For the development phases before testing, the development activities themselves are subject to defect injection, and the reviews or inspections at the end of each phase are the key vehicles for defect removal. For the testing phases, the testing itself is the vehicle for defect removal. When problems found by testing are fixed incorrectly, there is another chance to inject defects; in fact, even the inspection steps are subject to bad fixes. Figure 6.3 describes the detailed mechanics of defect injection and removal at each step of the development process. Defect removal effectiveness for each development step, therefore, can be defined as:

Defect removal effectiveness = Defects removed (at the step) / (Defects existing on step entry + Defects injected during the step) × 100%

Figure 6.3. Defect Injection and Removal During One Process Step

[Figure not reproduced: it shows one process step receiving the defects that escaped from the previous step plus the defects injected by the step's own activities; detection and correct fixes remove some of these, while undetected defects and bad fixes escape to the next step.]

 

Table 6.1. Activities Associated with Defect Injection and Removal

Development Phase   | Defect Injection                                                                             | Defect Removal
Requirements        | Requirements-gathering process and the development of programming functional specifications | Requirements analysis and review
High-level design   | Design work                                                                                  | High-level design inspections
Low-level design    | Design work                                                                                  | Low-level design inspections
Code implementation | Coding                                                                                       | Code inspections
Integration/build   | Integration and build process                                                                | Build verification testing
Unit test           | Bad fixes                                                                                    | Testing itself
Component test      | Bad fixes                                                                                    | Testing itself
System test         | Bad fixes                                                                                    | Testing itself

This is the conceptual definition. Note that defects removed equals defects detected minus incorrect repairs. If an ideal data tracking system existed, all elements in Figure 6.3 could be tracked and analyzed. In reality, however, it is extremely difficult to track incorrect repairs reliably. Assuming the percentage of incorrect repairs (bad fixes) is not high, defects removed can be approximated by defects detected. Based on my experience with the AS/400, about 2% of fixes during testing are bad fixes, so this assumption seems reasonable. If the bad-fix percentage is high and an estimate of it is available, one may want to adjust the effectiveness metric accordingly.
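This step-level calculation is easy to express in code. The following is a minimal Python sketch (the function and variable names are ours, not from the text), using the high-level design inspection figures from the example that follows.

```python
def step_effectiveness(defects_removed, escapes_at_entry, injected_in_step):
    """Defect removal effectiveness of one development step, as a percentage.

    defects_removed  -- defects detected and correctly fixed in this step
    escapes_at_entry -- defects that escaped from all previous steps
    injected_in_step -- defects injected by this step's own activities
    """
    defects_existing = escapes_at_entry + injected_in_step
    return 100.0 * defects_removed / defects_existing

# High-level design inspection (I0) from the example below:
# 730 defects removed, 122 escapes from requirements, 859 injected during design.
print(round(step_effectiveness(730, 122, 859)))  # -> 74
```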

To derive an operational definition, we propose a matrix approach by cross-classifying defect data in terms of the development phase in which the defects are found (and removed) and the phases in which the defects are injected. This requires that for each defect found, its origin (the phase where it was introduced) be decided by the inspection group (for inspection defects) or by agreement between the tester and the developer (for testing defects). Let us look at the example in Figure 6.4.

Figure 6.4. Defect Data Cross-Tabulated by Where Found (Phase During Which Defect Was Found) and Defect Origin

[Figure not reproduced: a matrix that cross-tabulates defect counts by the phase in which each defect was found (high-level design inspection I0, low-level design inspection I1, code inspection I2, unit test, component test, system test, and the field) and the phase in which it originated (requirements through system test). The cell values are the basis for the calculations that follow.]

Once the defect matrix is established, calculations of the various effectiveness measures are straightforward. The matrix is triangular because the origin of a defect is always at or prior to the phase in which it is found. In this example there were no formal requirements inspections, so we are not able to assess the effectiveness of the requirements phase; however, defects can be injected during the requirements phase and found later in the development cycle, so the requirements phase still appears in the matrix as one of the defect origins. The diagonal values for the testing phases represent the numbers of bad fixes. In this example all bad fixes are detected and then fixed correctly within the same phase; in some cases, however, bad fixes may go undetected until subsequent phases.

Based on the conceptual definition given earlier, we calculate the various effectiveness metrics as follows.

High-Level Design Inspection Effectiveness; IE (I0)

Defects removed at I0: 730

Defects existing on step entry (escapes from requirements phase): 122

Defects injected in current phase: 859

IE(I0) = 730/(122 + 859) = 730/981 = 74%

Low-Level Design Inspection Effectiveness; IE (I1)

Defects removed at I1: 729

Defects existing on step entry (escapes from requirements phase and I0):

(122 + 859) - 730 = 251

 

Defects injected in current phase: 939

IE(I1) = 729/(251 + 939) = 729/1190 = 61%

Code Inspection Effectiveness; IE (I2)

Defects removed at I2: 1095

Defects existing on step entry (escapes from requirements phase, I0 and I1):

(251 + 939) - 729 = 461

 

Defects injected in current phase: 1537

IE(I2) = 1095/(461 + 1537) = 1095/1998 = 55%

Unit Test Effectiveness; TE (UT)

Defects removed at UT: 332

Defects existing on step entry (escapes from all previous phases):

(461 + 1537) - 1095 = 903

 

Defects injected in current phase (bad fixes): 2

TE(UT) = 332/(903 + 2) = 332/905 ≈ 36%
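The chain of calculations above, in which the escapes from each step become part of the defects existing at entry to the next step, can be reproduced with a short Python sketch (the variable names are ours; the counts are those quoted from Figure 6.4).

```python
# Defects injected per phase in the example (requirements have no inspection,
# so all 122 requirements defects escape into high-level design).
injected = {"I0": 859, "I1": 939, "I2": 1537, "UT": 2}
removed = {"I0": 730, "I1": 729, "I2": 1095, "UT": 332}

escapes = 122  # escapes from the requirements phase
for phase in ["I0", "I1", "I2", "UT"]:
    existing = escapes + injected[phase]        # defects present during this step
    effectiveness = 100.0 * removed[phase] / existing
    print(f"{phase}: {removed[phase]}/{existing} = {effectiveness:.1f}%")
    escapes = existing - removed[phase]         # carried forward to the next step

# Prints 74.4%, 61.3%, 54.8%, and 36.7%, which the text rounds to 74%, 61%,
# 55%, and 36%.
```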

For the testing phases, the defect injection (bad fixes) is usually a small number. In such cases, effectiveness can be calculated by an alternative method (Dunn's formula or Jones's second formula, as discussed earlier). In cases with a high bad-fix rate, the original method should be used.

TE (alternative) = Defects found during the phase / (Defects found during the phase + Defects found afterward, including in the field) × 100%
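A sketch of this alternative calculation in Python follows (our own illustration; the component-test and system-test removal counts of 386 and 111 used in the example call are implied by the effectiveness figures and the 81 field defects in this example but are not stated explicitly).

```python
def alternative_effectiveness(found_in_phase, found_later):
    """Dunn-style alternative: the phase's share of all defects found from
    that phase onward (ignores the small number of bad fixes injected)."""
    return 100.0 * found_in_phase / (found_in_phase + found_later)

# Unit test: 332 defects found at UT; defects found later at component test,
# system test, and in the field (386 + 111 + 81, the first two inferred).
print(round(alternative_effectiveness(332, 386 + 111 + 81)))  # -> 36
```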

 

Component Test Effectiveness; TE (CT)

TE(CT) = 386/[(905 - 332) + 4] = 386/577 = 67%

 

System Test Effectiveness; TE (ST)

TE(ST) = 111/[(577 - 386) + 1] = 111/192 = 58%

 

Overall Inspection Effectiveness; IE

IE = (730 + 729 + 1095)/(122 + 859 + 939 + 1537) = 2554/3457 = 74%

 

or

IE = 2554/(2554 + 332 + 386 + 111 + 81) = 2554/3464 = 74%

 

Overall Test Effectiveness; TE

TE = (332 + 386 + 111)/(903 + 2 + 4 + 1) = 829/910 = 91%

 

Overall Defect Removal Effectiveness of the Process; DRE

DRE = (2554 + 829)/(3457 + 2 + 4 + 1) = 3383/3464 = 97.7%

 

To summarize, the values of defect removal effectiveness from this example are as follows:

I0: 74%

I1: 61%

I2: 55%

Overall Inspection Defect Removal Effectiveness: 74%

UT: 36%

CT: 67%

ST: 58%

Overall Test Defect Removal Effectiveness: 91%

Overall Defect Removal Effectiveness of the Process: 97.7%

From the matrix of Figure 6.4 it is easy to understand that the PCEi used by Motorola is somewhat different from phase defect removal effectiveness. PCEi refers to the ability of the phase inspection to remove defects introduced during a particular phase, whereas phase defect removal effectiveness as discussed here refers to the overall ability of the phase inspection to remove defects that were present at that time. The latter includes the defects introduced at that particular phase as well as defects that escaped from previous phases. Therefore, the phase containment effectiveness (PCE) values will be higher than the defect removal effectiveness values based on the same data. The PCEi values of our example are as follows.

I0: 681/859 = 79%

I1: 681/939 = 73%

I2: 941/1537 = 61%

UT: 2/2 = 100%

CT: 4/4 = 100%

ST: 1/1 = 100%
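The two views can be compared side by side with a small Python sketch (our own illustration; the counts are those quoted above).

```python
# For each inspection phase: defects injected during the phase, defects of the
# phase's own origin found by its inspection, total defects removed by the
# inspection, and defects present at inspection entry (escapes + injected).
phases = {
    "I0": {"injected": 859, "own_found": 681, "removed": 730, "present": 122 + 859},
    "I1": {"injected": 939, "own_found": 681, "removed": 729, "present": 251 + 939},
    "I2": {"injected": 1537, "own_found": 941, "removed": 1095, "present": 461 + 1537},
}
for name, d in phases.items():
    pce = 100.0 * d["own_found"] / d["injected"]  # phase containment effectiveness
    dre = 100.0 * d["removed"] / d["present"]     # phase defect removal effectiveness
    print(f"{name}: PCE = {pce:.0f}%, DRE = {dre:.0f}%")
# I0: PCE 79%, DRE 74%; I1: PCE 73%, DRE 61%; I2: PCE 61%, DRE 55%
```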

Assume further that the data in Figure 6.4 are the defect data for a new project with 100,000 lines of source code (100 KLOC). Then we can calculate a few more interesting metrics such as the product defect rate, the phase defect removal rates, phase defect injection rates, the percent distribution of defect injection by phase, and phase-to-phase defect escapes. For instance, the product defect rate is 81/100 KLOC = 0.81 defects per KLOC in the field (for four years of customer usage). The phase defect removal and injection rates are shown in Table 6.2.
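As a quick check (a sketch of our own; the names are illustrative), the product defect rate and the per-KLOC figures in Table 6.2 follow from the same counts and the 100 KLOC size; percentages computed from the raw counts can differ from the table in the last digit because of rounding.

```python
kloc = 100.0
injected = {"Requirements": 122, "High-level design": 859,
            "Low-level design": 939, "Code": 1537}
removed = {"High-level design": 730, "Low-level design": 729,
           "Code": 1095, "Unit test": 332}  # component/system test not shown
field_defects = 81

print(f"Product defect rate: {field_defects / kloc:.2f} defects/KLOC")  # 0.81
total_injected = sum(injected.values())
for phase, n in injected.items():
    print(f"{phase}: injected {n / kloc:.1f}/KLOC "
          f"({100.0 * n / total_injected:.1f}% of total injection)")
for phase, n in removed.items():
    print(f"{phase}: removed {n / kloc:.1f}/KLOC")
```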

Having gone through the numerical example, we can now formally state the operational definition of defect removal effectiveness. The definition requires data on all defects (including field defects) in terms of both the defect origin and the stage at which each defect is found and removed. The definition is based on the defect origin/where found matrix.

Let j = 1, 2, . . . , k denote the phases of the software life cycle.

Let i = 1, 2, . . . , k denote the inspection or testing types associated with the life-cycle phases, including the maintenance phase (phase k).

Table 6.2. Phase Defect Removal and Injection Rates from Figure 6.4

Phase             | Defect Removal (Defects/KLOC) | Defect Injection (Defects/KLOC) | Total Defect Injection (%)
Requirements      |                               | 1.2                             | 3.5
High-level design | 7.3                           | 8.6                             | 24.9
Low-level design  | 7.3                           | 9.4                             | 27.2
Code              | 11.0                          | 15.4                            | 44.5
Unit test         | 3.3                           |                                 |
Component test    | 3.9                           |                                 |
System test       | 1.1                           |                                 |
Total             | 33.9                          | 34.6                            | 100.1

Then matrix M (Figure 6.5) is the defect origin/where found matrix. In the matrix, only cells Nij where i ≥ j (the cells in the lower-left triangle) contain data. Cells on the diagonal (Nij where i = j) contain the numbers of defects that were injected and detected in the same phase; cells below the diagonal (Nij where i > j) contain the numbers of defects that originated in earlier development phases and were detected later. Cells above the diagonal are empty because it is not possible for an earlier development phase to detect defects that originate in a later phase. The row marginals (Ni.) of the matrix are defects by removal activity, and the column marginals (N.j) are defects by origin.

Figure 6.5. Defect Origin/Where Found Matrix (Matrix M)

[Figure not reproduced: the generic k × k matrix M, with rows i = where found and columns j = defect origin; only the lower-left triangle (i ≥ j) holds counts Nij, with row marginals Ni. and column marginals N.j.]

Phase defect removal effectiveness (PDREi) can be phase inspection effectiveness [IE(i)] or phase test effectiveness [TE(i)]

$$\mathrm{PDRE}_i = \frac{N_{i.}}{\left(\sum_{j=1}^{i-1} N_{.j} - \sum_{l=1}^{i-1} N_{l.}\right) + N_{.i}} \times 100\%$$

 

Phase defect containment effectiveness (PDCEi)

$$\mathrm{PDCE}_i = \frac{N_{ii}}{N_{.i}} \times 100\%$$

 

Overall inspection effectiveness (IE)

$$\mathrm{IE} = \frac{\sum_{i=1}^{I} N_{i.}}{\sum_{j=1}^{I} N_{.j}} \times 100\%$$

 

where I is the number of inspection phases.

Overall test effectiveness (TE)

$$\mathrm{TE} = \frac{\sum_{i=I+1}^{k-1} N_{i.}}{\sum_{j=1}^{k-1} N_{.j} - \sum_{i=1}^{I} N_{i.}} \times 100\%$$

 

where I + 1, I + 2, . . . , k - 1 are the testing phases.

Overall defect removal effectiveness (DRE) of the development process:

$$\mathrm{DRE} = \frac{\sum_{i=1}^{k-1} N_{i.}}{\sum_{j=1}^{k-1} N_{.j}} \times 100\%$$
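These matrix-based definitions translate directly into code. The following Python sketch is our own illustration (the function names and the small example matrix are made up); it assumes M is given as a list of rows, where row i is the phase in which defects were found, column j is the defect origin, and the last row holds the field defects.

```python
def row_totals(M):
    return [sum(row) for row in M]        # Ni. : defects removed by activity i

def col_totals(M):
    return [sum(col) for col in zip(*M)]  # N.j : defects originating in phase j

def pdre(M, i):
    """Phase defect removal effectiveness of activity i (0-based index)."""
    Ni, Nj = row_totals(M), col_totals(M)
    at_entry = sum(Nj[:i]) - sum(Ni[:i])  # escapes from all earlier phases
    return 100.0 * Ni[i] / (at_entry + Nj[i])

def pdce(M, i):
    """Phase defect containment effectiveness of phase i."""
    return 100.0 * M[i][i] / col_totals(M)[i]

def overall_dre(M):
    """Overall effectiveness: defects removed before the field (all rows but
    the last) divided by all defects ever found."""
    Ni = row_totals(M)
    return 100.0 * sum(Ni[:-1]) / sum(Ni)

# Made-up 3-activity example (design review, test, field) for illustration only.
M = [
    [40,  0, 0],   # found at design review: 40 defects of design origin
    [15, 20, 0],   # found in test: 15 design escapes plus 20 injected later
    [ 5,  5, 0],   # found in the field
]
print(round(pdre(M, 0)), round(pdre(M, 1)), round(overall_dre(M)))  # 67 78 88
```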

