How Do You Know Your Product Is Good Enough to Ship?

Determining when a product is good enough to ship is a complex issue. It involves the type of product (e.g., shrink-wrap application software versus an operating system), the business strategy for the product, market opportunities and timing, customer requirements, and many other factors. The discussion here pertains to the scenario in which quality is an important consideration and on-time delivery with the desired quality is the major project goal.

A simplistic view is that one establishes a target for one or several in-process metrics, and if the targets are not met, the product is not shipped on schedule. We all know that this rarely happens in real life, and for legitimate reasons. Quality measurements, regardless of their maturity levels, are never as black and white as meeting or not meeting a delivery date. Furthermore, there are situations where some metrics meet their targets and others do not. There is also the question of how bad the situation is. Nonetheless, these challenges do not diminish the value of in-process measurements; they are also the reason for improving the maturity level of software quality metrics.

In our experience, indicators from at least the following dimensions should be considered together to get an adequate picture of the quality of the product.

  • System stability, reliability, and availability
  • Defect volume
  • Outstanding critical problems
  • Feedback from early customer programs
  • Other quality attributes that are of specific importance to a particular product and its customer requirements and market acceptance (e.g., ease of use, performance, security, and portability)
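
To make the combined view concrete, here is a minimal sketch in Python of how per-dimension statuses might be recorded and rolled up into a coarse ship-readiness signal along the lines described in the next paragraph. The Status enum, the dimension names, and the overall_signal rule are illustrative assumptions, not part of the assessment method itself.

    from enum import Enum

    class Status(Enum):
        POSITIVE = "positive"   # indicator meets or beats its target
        NEGATIVE = "negative"   # indicator misses its target

    # One status per quality dimension listed above (values are hypothetical).
    indicators = {
        "stability_reliability_availability": Status.POSITIVE,
        "defect_volume": Status.POSITIVE,
        "outstanding_critical_problems": Status.NEGATIVE,
        "early_customer_feedback": Status.POSITIVE,
        "product_specific_attributes": Status.POSITIVE,  # e.g., performance, security
    }

    def overall_signal(indicators):
        """Roll the per-dimension statuses up into a coarse ship-readiness signal."""
        statuses = list(indicators.values())
        if all(s is Status.POSITIVE for s in statuses):
            return "all positive: good chance of positive field quality"
        if all(s is Status.NEGATIVE for s in statuses):
            return "consistently negative: not good enough to ship"
        return "mixed signals: investigate before the GO/NO GO decision"

    print(overall_signal(indicators))  # -> mixed signals: investigate ...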

When the various metrics indicate a consistently negative message, the product is not good enough to ship. When all metrics are positive, there is a good chance that the product's quality will be positive in the field. Questions arise when some of the metrics are positive and some are not. For example, what does it mean for the field quality of the product when defect volumes are low and stability indicators are positive, but customer feedback is less favorable than that of a comparable release? What about when the number of critical problems is significantly higher and all other metrics are positive? In those situations, at least the following points have to be addressed:

  • Why is this the case, and what is the explanation?
  • What is the influence of the negative in-process metrics on field quality?
  • What can be done to control and mitigate the risks?
  • For the metrics that are not meeting targets, how bad is the situation?
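
One lightweight way to make sure these questions are answered for every metric that misses its target is to record the answers alongside the metric itself. The following sketch, again in Python, is a hypothetical illustration; the field names and example values are assumptions made for the purpose of the example, not data from any actual release.

    from dataclasses import dataclass

    @dataclass
    class MissedTarget:
        """A metric that missed its target, plus the required analysis."""
        metric: str
        target: float
        actual: float
        explanation: str = ""           # Why is this the case?
        field_quality_impact: str = ""  # Influence on field quality?
        mitigation: str = ""            # How to control and mitigate the risk?

        def severity(self) -> float:
            """How bad is the situation, as relative deviation from target?"""
            return (self.actual - self.target) / self.target

    critical = MissedTarget(
        metric="outstanding critical problems",
        target=10,
        actual=16,
        explanation="late-cycle testing exposed a defect cluster in one component",
        field_quality_impact="high if unresolved; all fixes planned before ship",
        mitigation="dedicated fix team with daily tracking until closure",
    )
    print(f"{critical.metric}: {critical.severity():.0%} over target")  # 60% over target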

Answers to these questions are always difficult, and they are seldom expressed in quantitative terms. There may not even be right or wrong answers. On the question of how bad the situation is for metrics that are not meeting targets, the key issue is not one of statistical significance testing (although that helps), but one of predictive validity and possible negative impact on field quality after the product is shipped. How adequate the assessment is and how good the decision is depend to a large extent on the nature of the product, the experience accumulated by the development organization, prior empirical correlations between in-process metrics and field performance, and the experience and observations of the project team and those who make the GO or NO GO decision. The point is that after going through all the metrics and models, measurements and data, and qualitative indicators, the team needs to step back, take a big-picture view, and subject all the information to its experience base in order to come to a final analysis. The final assessment and decision making should be analysis driven, not data driven. Metrics aid decision making, but they do not replace it.

Figure 10.15 is an example of an assessment of the in-process quality of a release of a systems software product when it was near its ship date. The summary table outlines the indicators used (column 1), key observations of the status of the indicators (column 2), release-to-release comparisons (columns 3 and 4), and an assessment (column 5). Some of the indicators and assessments are based on subjective information; many are based on in-process metrics and data. The assessment was done about two months before the product ship date, and actions were taken to address the areas of concern from this assessment. The release has been in the field for more than two years and has demonstrated excellent field quality.

Figure 10.15. A Quality Assessment Summary

