The Summarization Phase

This is the time to pull it all together. A good way to begin is to look for recurring themes in the qualitative and quantitative data. For example, if a test expert comments that the testers seem to be finding a lot of problems in a certain component, and that component also stands out in a Pareto analysis of defects, this is a good indication of a problem area.
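Such a Pareto analysis of defect counts by component can be sketched as follows; the defect log and the component names are hypothetical, standing in for data that would come from the project's defect-tracking tool.

```python
from collections import Counter

# Hypothetical defect log: each entry names the component in which a
# test defect was found.
defects = ["install", "kernel", "install", "ui", "install",
           "kernel", "install", "ui", "install", "install"]

counts = Counter(defects)
total = sum(counts.values())

# Rank components by defect count and report the cumulative percentage --
# the core of a Pareto analysis: a few components usually account for
# most of the defects.
cumulative = 0
for component, n in counts.most_common():
    cumulative += n
    print(f"{component:10s} {n:3d}  {100 * cumulative / total:5.1f}%")
```

A component that tops this ranking and is also flagged in interviews is a strong candidate for the summary's list of problem areas.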

15.3.1 Summarization Strategy

In summarizing the key issues and concerns, a quick analysis of the potential impact of each identified problem area can help rank the issues properly. For instance, the discovery of several low-severity problems in one area might not be a major concern, but a potential installation problem that customers will run into the first time they install the product could be a very big deal. To put the information into perspective, one might compare a potential problem to a similar problem that occurred with a competitor's product or to a discovery in a past beta test. Furthermore, in summarizing data, don't forget to identify what's done right. This information can be every bit as useful as the problem areas. If an incremental improvement in one component's code inspection process resulted in nearly problem-free testing for that component during functional test, that improvement could provide a major breakthrough for the quality improvement effort of the entire team.

We found the format in Table 15.1 useful for summarizing and displaying the results. Each row shows a different quality parameter, listed in the first column. The "Observations" column contains key findings from the metrics along with comments and information from the interviews. The final column shows an assessment for each parameter. At each interview, we ask for a "thumbs up" or "thumbs down" on the project compared with a previous similar project, and for an overall assessment with regard to the project's quality goals. However, it is the assessor's overall balanced judgment that determines the final assessment shown in the table.

Table 15.1 shows only a sample of the parameters and their assessment summary. The set of parameters for a quality assessment should include all pertinent attributes of the project's quality objectives and development activities associated with those attributes. Some of the parameters may be phase-specific and others applicable for most of the development cycle. (See Figure 15.2 for a list of parameters.)

15.3.2 The Overall Assessment

Each assessment culminates in an overall assessment as the "bottom line." The overall assessment should be developed with regard to the quality, function, and schedule objectives; in other words, "What is the likelihood that the product will meet its quality objectives with the current content and schedule?" The overall assessment should be an integrated element of the project risk management process.

Table 15.1. Example Format for Summarizing Data

| Indicator | Observations | Assessment |
|-----------|--------------|------------|
| Design reviews | 100% complete; earlier than comparison project relative to months to product ship date | Green |
| Code inspections | 95% complete; tracking close to plan | Green |
| Function integration (to system library) | 92% of function integrated by Driver Y; code integration and driver build (used for formal testing) executing to plan | Green |
| Function verification test | Test progress tracking close to a comparison project, but 6% behind plan; concern with a critical item (EY) being late; risk mitigation plans in place | Yellow |
| Test defect arrivals | Tracking close to a comparison project; concern with delayed defect arrivals because of the late start of testing of item EY | Yellow |
| Test defect backlog | Good early focus; expect level to grow as arrivals peak, but currently below plan | Yellow |
| Install testing | 98% of planned test cases attempted, and 95% successful; 60% into test cycle | Green |
| Late change | Late changes for tuning and scaling and for preventing performance degradation; plans to mitigate the impact on system stability not yet in place | Red |
| System test | Concern with availability of a key hardware product needed for the test environment to fully function | NA (too early) |

It is important to develop criteria for each level of the scale that you can clearly communicate along with your final assessment. It is useful to develop criteria that can be used over time and across multiple assessments. The following is an example of an overall quality assessment scale.

  • Red = high probability of not meeting product quality goals or customer quality expectations
  • Yellow = moderate risk of not meeting product quality goals or customer quality expectations
  • Green = likely to meet product quality goals and satisfy customer quality expectations
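A scale like the one above can be made repeatable across assessments by tying each level to explicit criteria. As an illustrative sketch only, suppose the assessor expresses the risk as a probability of missing the quality goals; the thresholds below (0.3 and 0.6) are hypothetical, not from the text.

```python
def overall_rating(p_miss: float) -> str:
    """Map an assessed probability of NOT meeting the product quality
    goals to the Red/Yellow/Green scale described above.
    The 0.6 and 0.3 cutoffs are illustrative assumptions."""
    if p_miss >= 0.6:
        return "Red"      # high probability of not meeting goals
    if p_miss >= 0.3:
        return "Yellow"   # moderate risk
    return "Green"        # likely to meet goals
```

Whatever the criteria, the point is that they are written down once and applied consistently over time and across assessments, so a "Yellow" at checkpoint 2 means the same thing as a "Yellow" at checkpoint 5.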

Figure 15.6 displays potential quality assessment ratings over the project checkpoint reviews for two scenarios. Clearly, the scenario of steadily improving assessment ratings (from red to green) is the more favorable one. This trend might occur when a company is developing a cutting-edge product. In any project, the risks and unknowns could be very high early on, resulting in an overall assessment of "Red." Ideally, as the project progresses, the risks are addressed and problems resolved, thus improving the product's potential for meeting quality objectives.

Figure 15.6. Scenarios of Quality Assessment Ratings of a Project over Time


The second scenario is undesirable not only because the final rating is poor, but also because the ratings worsen over time after the initial ratings suggested low risk. While it is entirely possible for project risk to increase (loss of key personnel is one example), one should examine early positive ratings closely. It can be difficult to identify risks early in a project, but failure to do so can result in falsely positive ratings. In the early phases of a project, there are few concrete indicators, much less quantitative metrics, and it is human to assume that no news is good news. The challenge for the quality professionals who conduct quality assessments is to make use of whatever fuzzy information and murky indicators are available to come up with a candid assessment.
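The two trends can be distinguished mechanically once the checkpoint ratings are recorded. The sketch below is a hypothetical helper, with illustrative checkpoint histories, that flags the undesirable second scenario: ratings that never improve and end worse than they started.

```python
RATING_ORDER = {"Green": 0, "Yellow": 1, "Red": 2}

def is_worsening(ratings):
    """Return True if the checkpoint ratings never improve and end
    worse than they started -- the undesirable second scenario."""
    scores = [RATING_ORDER[r] for r in ratings]
    never_improves = all(b >= a for a, b in zip(scores, scores[1:]))
    return scores[-1] > scores[0] and never_improves

# Illustrative checkpoint histories for the two scenarios:
improving = ["Red", "Yellow", "Yellow", "Green"]  # favorable trend
worsening = ["Green", "Green", "Yellow", "Red"]   # early ratings deserve scrutiny
```

A history matching the second pattern does not prove the early assessments were wrong, but it is a signal to revisit how those early "Green" ratings were justified.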

Metrics and Models in Software Quality Engineering (2nd Edition)
ISBN: 0201729156
Year: 2001
Pages: 176
