15.2 The Evaluation Phase

You have figured out which data are important to look at, and now you are ready to analyze and evaluate them. This is the phase in which you can apply the full range of data analysis and statistical techniques to extract the messages within the data.

15.2.1 Quantitative Data

For quantitative analysis, tools such as control charts, trend charts, histograms, Pareto diagrams, and scatter diagrams, as well as statistical techniques ranging from simple tabulation analysis to sophisticated multivariate methods, are all fair game. In our experience, simple techniques can be very powerful, and most of the time sophisticated statistical techniques are unnecessary. The key point is to garner useful information from the data. As discussed in previous chapters, we have found the effort/outcome paradigm particularly useful in assessing in-process metrics. Of course, the data gathered must include both effort indicators and outcome indicators in order to apply this approach, and this should be a consideration in the planning and preparation phase. At the least, to get from raw data to useful information, some meaningful comparisons with a relevant baseline, the plan, or a comparable previous product need to take place.
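
As a concrete illustration, the effort/outcome comparison can be reduced to a simple cross-tabulation of an effort indicator and an outcome indicator, each measured against a comparable baseline. The following Python sketch is our own minimal illustration, not a prescribed implementation; the function name, the 1.0 cutoffs, and the cell labels are assumptions made for the example:

```python
def effort_outcome_cell(effort_ratio, outcome_ratio):
    """Place a project in a simple effort/outcome matrix.

    effort_ratio:  current effort indicator divided by the baseline value
                   (e.g., test cases executed to date vs. a comparable project).
    outcome_ratio: current outcome indicator divided by the baseline value
                   (e.g., defect arrivals to date vs. the same baseline).
    """
    more_effort = effort_ratio >= 1.0    # at or above baseline effort
    more_defects = outcome_ratio >= 1.0  # at or above baseline defect arrivals
    if more_effort and not more_defects:
        return "positive: more testing effort, fewer defect arrivals"
    if more_effort and more_defects:
        return "more effort is surfacing more defects: watch the arrival curve"
    if not more_effort and more_defects:
        return "negative: less testing effort yet more defect arrivals"
    return "unclear: fewer defects may simply reflect less testing"
```

For example, effort_outcome_cell(1.2, 0.8) places a project that tested more than the baseline and saw fewer defect arrivals in the positive cell.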

When analyzing the data, it is always good practice to pay particular attention to anything unusual. Good questions to ask in such situations are, "What more can I learn about this?" and "How can I put this into perspective?" Figures 15.3, 15.4, and 15.5 include examples of data that bear further investigation. In Figure 15.3, Team A was significantly behind plan in its functional test, and Component X had not even started its testing. In Figure 15.4, the defect arrival pattern of the current project differed from that of previous comparable projects. Was the higher defect volume in the early part of the curve due to more effective testing and better progress? Or were testing effectiveness and progress about the same as in previous projects at this point in the development cycle? In Figure 15.5, the test plan S-curve shows an unusual and potentially unachievable pattern. (One way to quantify such deviations is sketched after the figures.)

Figure 15.3. Data on Functional Tests that Beg for Further Investigation

Figure 15.4. A Defect Arrival Pattern that Deviates from Historical Data

Figure 15.5. A Test Plan S Curve Showing an Unusual Pattern

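The questions raised by Figure 15.4 can be made concrete by comparing the current defect arrival curve point by point with the historical baseline. The sketch below is a hypothetical helper, not from the text; the week-by-week representation and the 15% tolerance are assumptions chosen for illustration:

```python
def flag_deviations(current, baseline, tolerance=0.15):
    """Return the weeks where current defect arrivals deviate from a
    comparable baseline project by more than the given tolerance.

    current, baseline: defect arrivals per week (lists of equal length).
    """
    flagged = []
    for week, (cur, base) in enumerate(zip(current, baseline), start=1):
        if base == 0:
            continue  # skip weeks with no baseline arrivals (avoid divide by zero)
        deviation = (cur - base) / base
        if abs(deviation) > tolerance:
            flagged.append((week, cur, base, deviation))
    return flagged
```

A flagged surplus early in the cycle then invites exactly the follow-up questions posed above: is testing more effective, or is the code simply buggier?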

15.2.2 Qualitative Data

For the qualitative evaluation, information from interviews and open-ended probing can be classified, grouped, and correlated with existing knowledge and with findings from the quantitative analyses. The strongest proponents of quantitative methods argue that without metrics, an assessment is just another opinion. While quantitative data are important, our experience indicates that effective quality assessments are characteristically based on cross-validation of findings and observations from both quantitative data and qualitative evaluation. Expert opinions also carry special weight. In that regard, the assessor must observe acutely enough to discern whether the input he or she is getting is true expert opinion or opinion clouded by other factors. For example, the development manager may be optimistic about the quality of the project while the testing manager is pessimistic. It is not uncommon at project checkpoint review meetings for the status of the project to go from excellent to poor, or vice versa, in just a few moments, depending on the order of presentations by the development, support, testing, and service groups.

15.2.3 Evaluation Criteria

Evaluation of qualitative data is based on expert judgment and cross-validation. For quantitative indicators, you may want to use predetermined criteria to ensure consistency. The following are sample criteria for the evaluation of quantitative indicators (a small sketch implementing these thresholds follows the list):

  • Green = actual within (<=) 5% behind or better than plan (model or a comparable previous project)
  • Yellow = actual is between 5% and (<=) 15% behind plan (model or a comparable previous project)
  • Red = actual is greater than 15% behind plan (model or a comparable previous project)
  • For some indicators, specific considerations apply. For example, for testing defect arrivals, higher is better in the earlier phases; after the peak, lower is better, provided that the testing effort is not compromised.
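
These thresholds translate directly into code. The following Python sketch is a minimal illustration of the three ratings, assuming an indicator for which higher actual values are better (such as cumulative test cases attempted) and a nonzero plan value; the function name and the deviation formula are ours:

```python
def plan_status(actual, plan):
    """Rate an in-process indicator against its plan (or model, or a
    comparable previous project): green within 5% behind plan or better,
    yellow more than 5% and up to 15% behind, red beyond 15% behind.
    Assumes plan > 0 and that higher actual values are better.
    """
    pct_behind = (plan - actual) / plan * 100.0  # positive means behind plan
    if pct_behind <= 5.0:
        return "green"
    if pct_behind <= 15.0:
        return "yellow"
    return "red"
```

For example, plan_status(92, 100) returns "yellow": the project is 8% behind plan.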

The following are sample criteria for a qualitative indicator (plan change):

  • Green = no or few plan changes after the commitment checkpoint of the project, and no additional risks involved.
  • Yellow = some plan changes after the commitment checkpoint, but not on critical line items of the project; risks identified and assessed, and plans in place to mitigate and control them.
  • Red = plan changes on critical line items after the project commitment checkpoint have put the project at high risk; the assumptions made at the commitment checkpoint are no longer valid.

The following shows sample criteria for an indicator that may require both qualitative and quantitative evaluation (design status); a sketch combining the two kinds of input follows the list:

  • Green = no major design issues; design review status within (<=) 5% behind, or ahead of, plan.
  • Yellow = design issues identified and plans being put in place to resolve them, or design review status between 5% and (<=) 15% behind plan.
  • Red = critical, project-gating design issues identified with no plans to resolve them, or design reviews more than 15% behind plan.
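
One way to combine the two kinds of input is to let the qualitative judgment gate the quantitative rating. The following sketch is our own illustration of the criteria above; the parameter names and the order of the checks are assumptions:

```python
def design_status(critical_issues, plans_in_place, pct_behind_plan):
    """Rate design status from issue judgment plus review progress.

    critical_issues: True if major or project-gating design issues exist.
    plans_in_place:  True if plans to resolve those issues are in place.
    pct_behind_plan: design review status, percent behind plan
                     (zero or negative means on or ahead of plan).
    """
    if critical_issues and not plans_in_place:
        return "red"     # gating issues with no resolution plan
    if pct_behind_plan > 15.0:
        return "red"     # reviews more than 15% behind plan
    if critical_issues or pct_behind_plan > 5.0:
        return "yellow"  # issues with plans, or reviews 5-15% behind
    return "green"
```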
