
10.3 MONITORING AND CONTROL

The effectiveness of the review process depends on how well the process has been deployed. For example, if only two defects were found during the review of a 500-line program or a 20-page design document, clearly the review was not effective. The most common reason for a poor review is that it was not done with the proper focus and seriousness. Unless reviews are taken seriously, they are likely to be a waste of time that gives no commensurate return, or they may be reduced to a step that is checked off after being performed perfunctorily.

How does a project manager or a moderator evaluate whether a review has been effective so that she can decide the future course of action? One effective way of monitoring and controlling reviews is to use statistical process control (SPC) concepts implemented through control charts. Because the number of data points for reviews, particularly code reviews, can be large, statistical techniques can be applied with confidence and rigor. This section discusses how Infosys monitors and controls reviews using statistical techniques.

10.3.1 The Review Capability Baseline

How can SPC be applied to monitoring reviews? To apply SPC, project managers must identify critical performance parameters, determine control limits for them, and then monitor the actual performance. They can build control charts by plotting the performance parameters of reviews and then use the plots to evaluate the effectiveness of a review. Another approach is to set the control limits for the various parameters and then use that range to determine the effectiveness. Although this latter approach has limitations because the run chart is not available, it is easy to apply. At Infosys, this latter approach is followed.
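As a rough illustration of how control limits for a review parameter can be derived from past data, the sketch below applies the standard SPC convention of mean plus or minus three standard deviations. The sample data and function names are purely illustrative, not Infosys baseline values.

# Illustrative sketch: deriving 3-sigma control limits for a review
# parameter (here, overall defect density) from past review data.
from statistics import mean, stdev

def control_limits(samples):
    """Return (lower, upper) 3-sigma control limits for a parameter."""
    m, s = mean(samples), stdev(samples)
    return max(0.0, m - 3 * s), m + 3 * s   # a defect density cannot be negative

# Overall defect densities (defects/page) from past document reviews (made up)
past_densities = [0.9, 1.3, 1.1, 0.7, 1.5, 1.0, 1.2, 0.8]
lcl, ucl = control_limits(past_densities)

density = 0.2   # a new review that found very few defects
if not lcl <= density <= ucl:
    print(f"Out of range: {density} outside ({lcl:.2f}, {ucl:.2f}); investigate")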

At Infosys, control limits have been determined for the following performance parameters: the coverage rate during preparation, the coverage rate during the group review meeting, the defect density for minor or cosmetic defects, and the defect density for serious or major defects (the overall defect density is simply the sum of the two preceding defect densities). These limits are determined from past data and from the review capability baseline. Creating and maintaining this baseline are important reasons for collecting summary data on reviews. Table 10.2 shows the group review capability baseline.

The group review baseline in Table 10.2 gives, for the various types of work products, the coverage rate during preparation, the coverage rate during review, and the defect densities for minor and serious defects (the overall defect density is the sum of the two). The defect density is normalized with respect to size, where size is measured in the number of pages for all noncode work products and in lines of code for code work products. (For a design, size can also be measured in terms of the number of specification statements.) The coverage rate is stated in terms of size per unit effort, where effort is measured in person-hours. As you can see, for documents the coverage rates and defect densities are quite similar, but they are different for code (where the unit of size is also different).

The rates for one-person reviews can be expected to differ. Detailed design documents, test plans, and code undergo this form of review regularly. Hence, a one-person review baseline has been developed for these work products. In the baseline for one-person review of documents, the coverage rate per hour is about twice the corresponding coverage rate of group reviews; the defect detection rate per page is about half of that found with group reviews. For code, the coverage rate per hour is about the same, but the defect detection rate per LOC is about 30% less.
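These adjustments amount to simple scaling of the group review figures. The sketch below restates them in code, using the detailed design entry from Table 10.2 as sample input; the function itself is illustrative and not part of the Infosys baseline.

# Illustrative sketch: scaling a group review baseline entry to a
# one-person review entry, per the ratios described in the text.
def one_person_baseline(group_coverage, group_density, is_code=False):
    """Each argument is a (low, high) range; returns scaled ranges."""
    if is_code:
        coverage = group_coverage                           # about the same rate
        density = tuple(0.7 * d for d in group_density)     # ~30% fewer defects found
    else:
        coverage = tuple(2 * c for c in group_coverage)     # about twice as fast
        density = tuple(0.5 * d for d in group_density)     # about half the defects
    return coverage, density

# Detailed design (Table 10.2): 3-4 pages/hour, 0.7-2.1 total defects/page
print(one_person_baseline((3, 4), (0.7, 2.1)))  # -> ((6, 8), (0.35, 1.05))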

This baseline is the foundation for monitoring a review conducted in a project.

Table 10.2. Infosys Review Capability Baseline

Review Item | Preparation Coverage Rate (If Different from Group Review Rate) | Group Review Coverage Rate | Defect Density, Cosmetic/Minor | Defect Density, Serious/Major
Requirements | - | 5-7 pages/hour | 0.5-1.5 defects/page | 0.1-0.3 defects/page
High-level design | - | 4-5 pages/hour (or 200-250 specification statements/hour) | 0.5-1.5 defects/page | 0.1-0.3 defects/page
Detailed design | - | 3-4 pages/hour (or 70-100 specification statements/hour) | 0.5-1.5 defects/page | 0.2-0.6 defects/page
Code | 160-200 LOC/hour | 110-150 LOC/hour | 0.01-0.06 defects/LOC | 0.01-0.06 defects/LOC
Integration test plan | - | 5-7 pages/hour | 0.5-1.5 defects/page | 0.1-0.3 defects/page
Integration test cases | - | 3-4 pages/hour | - | -
System test plan | - | 5-7 pages/hour | 0.5-1.5 defects/page | 0.1-0.3 defects/page
System test cases | - | 3-4 pages/hour | - | -
Project management and configuration management plan | 4-6 pages/hour | 2-4 pages/hour | 0.6-1.8 defects/page | 0.1-0.3 defects/page

(A dash in the preparation column means the rate is the same as the group review rate; a dash in a defect density column means no norm has been established.)

10.3.2 Analysis and Control Guidelines

The ranges given in the baseline are used to determine whether the performance of the review falls within acceptable limits. This check is specified as an exit criterion for the review process. Project managers could define the exit criteria as requiring that all the various parameters be in range, but because detecting defects is the central purpose of reviews, the exit criterion is that the overall defect density should lie within the specified limits. (Another option is to check that the defect densities for the two types of defects are within their respective ranges.) If the density of defects found during the review is within the range given in the baseline, the review is considered effective, the exit criteria are satisfied, and no further action is needed for this review.
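A minimal sketch of this exit-criterion check follows. The density ranges are the totals (minor plus serious) implied by Table 10.2; the names and structure are illustrative, not the Infosys tooling.

# Illustrative sketch: exit criterion as an overall-defect-density check.
# Total (minor + serious) density ranges derived from Table 10.2.
BASELINE_DENSITY = {
    "requirements": (0.6, 1.8),     # defects/page
    "code": (0.02, 0.12),           # defects/LOC
    "project_plan": (0.7, 2.1),     # defects/page
}

def exit_criterion_met(work_product, defects_found, size):
    """True if the overall defect density lies within the baseline range."""
    lo, hi = BASELINE_DENSITY[work_product]
    return lo <= defects_found / size <= hi

# A 14-page project plan in which 19 defects were found:
print(exit_criterion_met("project_plan", 19, 14))   # True: 19/14 ~ 1.36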

Instead of using the review capability baseline, a project manager can monitor reviews using an SPC tool developed in-house. This tool is essentially a spreadsheet that has the capability baseline data built into it. It also has data about the defect injection rate, defect removal efficiencies, organization-wide quality capability, and so on. From these data, the tool determines the performance specifications for a review, that is, the range in which its outcome is expected to fall if the quality goal is to be met. The SPC tool gives warnings when a piece of review data falls outside either the control limits or the expected limits (defined shortly).

If the density of defects found in a review is not within the range given in the capability baseline, it does not automatically mean that the review failed. The project manager or the moderator critically evaluates the situation and decides on the next steps. The preparation rate and review rate are very useful here: if the review rate is "too fast" compared with the baseline rate, the reason for the ineffectiveness of the review is relatively clear. The defect densities for minor and serious defects can also be useful in this analysis. Although the moderator or the project manager can use any technique to determine the cause of the performance deviation and the corrective and preventive actions to take, a set of guidelines, shown in Table 10.3, helps in this endeavor.

Table 10.3 includes two groups of guidelines: one set that is applicable when the defect density is below the range, and another set that is applicable when the defect density is above the range. Both cases suggest that something abnormal may have taken place, and the situation must be examined carefully. Table 10.3 lists some possible causes; the project leader or moderator can use this information to identify the cause and then decide on the corrective actions for this review and preventive actions for future reviews.
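A first cut of this analysis can be mechanized. The sketch below encodes only the coverage-rate reasoning described above; the thresholds and messages are illustrative, not the actual Infosys guidelines.

# Illustrative sketch: first-cut diagnosis of out-of-range review results.
def diagnose(density, density_range, coverage, coverage_range):
    """Return a first-cut interpretation of a review's performance."""
    if density < density_range[0]:
        if coverage > coverage_range[1]:
            return "coverage rate too high: review may not be thorough; re-review"
        return "possibly a simple or high-quality work product: confirm, then adjust plans"
    if density > density_range[1]:
        return "possibly a low-quality or complex work product: plan extra reviews/testing"
    return "within norms: exit criteria satisfied"

# 0.3 defects/page against a 0.7-2.1 norm, covered at 9 pages/hour
# against a 3-4 pages/hour norm:
print(diagnose(0.3, (0.7, 2.1), 9, (3, 4)))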

Static control limits work well when the process is operating in a steady state. However, if changes are consciously made to the process, control charts must be used with care because the process performance is expected to change. If changes in the process are likely to be regular, it is best to have dynamic control limits. One way to do this is to reset the control limits to new values based on the past n performance data points (the value of n must be selected; it is typically at least 10 to 15). With this approach, if the process is changed, its performance will change, and after a few data points, the control limits will reflect the performance capability of the changed process.
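A minimal sketch of such a sliding-window scheme follows; the window size and data are illustrative.

# Illustrative sketch: control limits recomputed over the last n points,
# so the limits track a deliberately changed process.
from collections import deque
from statistics import mean, stdev

class DynamicLimits:
    def __init__(self, n=12):              # n chosen from roughly 10-15 or more
        self.window = deque(maxlen=n)      # keeps only the most recent n values

    def add(self, value):
        self.window.append(value)

    def limits(self):
        if len(self.window) < 2:
            return None                    # not enough data yet
        m, s = mean(self.window), stdev(self.window)
        return max(0.0, m - 3 * s), m + 3 * s

dl = DynamicLimits(n=12)
for d in [1.0, 1.2, 0.9, 1.1, 1.3, 0.8, 1.0, 1.1, 0.9, 1.2, 1.0, 1.1]:
    dl.add(d)
print(dl.limits())   # limits now reflect only the last 12 reviews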

Another approach is to adjust the performance data or control limits based on the expected impact of the process changes. For example, if the review process is changed and if it is expected that the reviews will detect 10% more defects, then when a point falls outside the control limits, this fact should be taken into account during the analysis.

Table 10.3. Analysis Guidelines for Review

If Defects Found Are Fewer Than the Norms

Possible reason: The work product was very simple.
Actions to consider:
- Convert group reviews of similar work products to one-person reviews.
- Combine reviews.

Possible reason: The review may not have been thorough.
Actions to consider:
- Check the coverage rate; if it is too high, schedule a re-review, perhaps with a different team.

Possible reason: Reviewers do not have sufficient training in group reviews or experience with the reviewed material.
Actions to consider:
- Schedule or conduct group review training.
- Re-review with a different team.

Possible reason: The work product is of very good quality.
Actions to consider:
- Confirm this by checking the coverage rate, the experience of the author and reviewers, and so on; see whether this quality can be duplicated in other parts of the project.
- Revise defect predictions for downstream activities; look for general process improvement lessons.

If Defects Found Are More Than the Norms

Possible reason: The work product is of low quality.
Actions to consider:
- Examine training needs for the author.
- Have the work product redone.
- Consider reassigning future tasks (e.g., assign only easier tasks to the author).

Possible reason: The work product is very complex.
Actions to consider:
- Ensure good reviews or testing downstream.
- Increase estimates for system testing.
- Break the work product into smaller components.

Possible reason: There are too many minor defects (and too few major defects).
Actions to consider:
- Identify the causes of the minor defects; prevent them in the future by suitably enhancing checklists and making authors aware of the common causes.
- If the reviewers' understanding of the work product appears insufficient, hold an overview meeting or conduct another review with different reviewers.

Possible reason: The reference document against which the review was done is not precise and clear.
Actions to consider:
- Have the reference document reviewed and approved.

Possible reason: The reviewed modules are the first ones in the project.
Actions to consider:
- Analyze the defects, update the review checklist, and inform developers. Schedule training.

This latter approach is used at Infosys when defect prevention (DP) is employed during reviews. With DP, the defect injection rate is expected to decline, and consequently the defect density detected in reviews is also likely to fall. If a project is using DP, the expected impact of DP is recorded during project planning. Using the defect injection rate, the expected reduction from using DP, and the defect detection rate, the expected limits for performance are established. If a review's performance falls outside these limits, it is carefully examined for causes.
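The following sketch shows how such expected limits might be derived from these three inputs; all of the rates below are hypothetical planning numbers, not Infosys baseline values.

# Illustrative sketch: expected limits for a code review under DP.
injection_rate = 0.05              # defects injected per LOC, from past data
dp_reduction = 0.20                # planned effect of DP: 20% fewer injected
review_effectiveness = (0.5, 0.7)  # fraction of present defects a review finds

expected_present = injection_rate * (1 - dp_reduction)          # 0.04 defects/LOC
lo, hi = (expected_present * e for e in review_effectiveness)
print(f"expected review yield: {lo:.3f}-{hi:.3f} defects/LOC")  # 0.020-0.028

# A review finding only 0.010 defects/LOC falls below the expected lower
# limit and would therefore be examined for causes.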

10.3.3 An Example

Consider the summary report for the group review of a project management plan given in Table 10.4. This summary covers a group review of a 14-page project management plan. The total number of minor and cosmetic defects found was 16, and the total number of major defects found was 3. Thus, the defect density found is 16/14 = 1.1 minor defects per page, and 3/14 = 0.2 major defects per page. Both rates are within the ranges given in the capability baseline, so the exit criteria are satisfied and it can be assumed that the review was conducted properly.

Table 10.4. Summary Report of a Review

Project: -
Work product type: Project plan, v. 1.0
Size of product: 14 pages
Moderator: Meera
Reviewer(s): Biju, Meera
Author: JC

Effort (Person-Hours)
a. Overview meeting: 0
b. Preparation: 10 person-hours
c. Group review meeting: 10 person-hours
Total effort: 20 person-hours

Defects
Number of critical defects: 0
Number of major defects: 3
Number of minor defects: 12
Number of cosmetic defects: 4
Number of defects detected during preparation: -
Number of defects detected during group review meeting: -
Number of open issues raised: 1
Total number of defects: 19

Result: Moderator reexamination

Recommendations for Next Phase
Units to undergo group review: N/A
Units to undergo one-person review: N/A

Comments (Moderator): The plan has been well documented and presented.

Prepared by: Meera; Date: xx-xx-xxxx

Although not needed for this review because the exit criteria are satisfied, other rates can also be checked. The review team had four members, each of whom spent 2.5 hours in individual review; the group review meeting also lasted 2.5 hours. Thus, the coverage rate during both preparation and the review meeting was 14/2.5 = 5.6 pages per hour, which is within the 4-6 pages/hour range for preparation but above the 2-4 pages/hour range for the group review meeting.
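These checks can be recomputed directly from the summary data. The small script below is illustrative, not the Infosys SPC tool.

# Illustrative recomputation of the checks in this example (Table 10.4 data)
pages = 14
minor_cosmetic, major = 16, 3       # defect counts from the summary report
prep_hours_per_reviewer = 2.5       # each of 4 reviewers; meeting also 2.5 hours

print(f"minor density: {minor_cosmetic / pages:.1f} defects/page (norm 0.6-1.8)")
print(f"major density: {major / pages:.1f} defects/page (norm 0.1-0.3)")
print(f"coverage rate: {pages / prep_hours_per_reviewer:.1f} pages/hour "
      f"(preparation norm 4-6, group review norm 2-4)")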

When the summary data are given to the SPC tool, the tool graphically displays the performance of this review, along with the control limits and expected limits.

 


