5.7 Implementing defect analysis

The creation of a metrics program starts with determining the requirements or reason for measuring something. Just as in the development of software, defining the problem we wish to solve is the first step. Once we know what information we might want, we can begin to determine what measures can lead us to our objective.

It is unfortunate that there are few, if any, metrics in common use in the industry. Even those measures that many organizations use, such as LOC, function points, error counts, time, and so on, are not defined industrywide. In effect, each organization that counts something counts it in its own way. For that reason, when one company claims to have one error per thousand delivered LOC, another company may have no idea as to how it compares. The reason is that there is no commonly accepted definition of either error or LOC. Even function point users know that there are variations in how function points are defined and counted.
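
To see why such comparisons break down, consider a minimal sketch (in Python) of the same defect-density calculation applied under two different counting conventions. The companies, counting rules, and figures are invented for illustration only.

    # Minimal sketch: the same style of calculation yields very different
    # "defects per KLOC" figures depending on how each company defines an
    # error and a line of code. All names and numbers here are invented.

    def defects_per_kloc(defect_count, loc):
        """Defect density per thousand delivered lines of code."""
        return defect_count / (loc / 1000.0)

    # Company A counts only executable statements and only post-release errors.
    print(defects_per_kloc(defect_count=12, loc=48_000))    # 0.25 per KLOC

    # Company B counts every physical line (comments and blanks included) and
    # every defect found after unit test, so both inputs are larger.
    print(defects_per_kloc(defect_count=85, loc=61_000))    # about 1.39 per KLOC

The two results are not comparable, even though both companies would describe them as "errors per thousand LOC."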

The solution may be for a given company to design its metrics program for its own situation. When the company has a metrics set that presents information about its software processes, it might offer those measures to the industry as guidelines. At some point, other companies may adopt and adapt those metrics, and a de facto standard may be born. The IEEE has published a standard covering the establishment of a metrics program (IEEE Standard 1061–1992) that could certainly be a starting point for a company just starting to develop its own program.

5.7.1 Rules

When preparing to design and implement a defect analysis and metrics program, a few simple but important rules should be observed. Many programs are started, and most fail in the first year or so. In order to have a good chance for success, the following should be considered:

  • The program must be instigated and supported from the top of the organization down.

  • The metrics must support quality as seen from the customer's perspective.

  • The measurements must not interfere with the performance of assigned work.

  • The people being measured must have a role in defining the measurements and methods.

The primary reason for failed programs is failure to observe these vital considerations.

Support from top management is necessary because, as measurements are begun, they must be seen to be of interest and value to top management. If management does not use the metrics, ignores the results of the program, does not provide for the costs of data collection and metrics development, and is not visibly committed to the success of the metrics program, the rest of the organization will soon conclude that metrics do not matter.

Metrics that are developed for the sake of metrics will usually not be used. Metrics that are not used become targets for elimination. The metrics developed must be based on defects and other data that will lead to better customer satisfaction. If the program does not result in increased customer satisfaction, the costs will eventually be determined to have been wasted. That is almost always the end of the program.

Even when top management supports the defect analysis and metrics program, if it gets in the way of job performance, the workers will not cooperate. The persons conducting the data gathering must remember that the rest of the people are busy with the jobs to which they are assigned. They are being paid to do their work, not the measurer's. When pressures mount, the assigned task gets attention, not additional side tasks that do not appear in the worker's job description.

It should not be a surprise that if you are going to measure my productivity, the defect history of my work, and things of that nature, I want some influence over, or at least a full understanding of, what is measured, how the data is collected, and what the metrics and their use will be.

Perhaps even worse than non-customer-focused metrics are those used for personnel evaluations and comparisons. Especially damaging to the metrics program is defect data that is construed to reflect workers' performance. When that is the case, the program will not survive as a useful activity. It must always be remembered that you get what you measure. If my defect data is going to be used against me, there will be very little accurate defect data available to the software quality practitioner or management.

5.7.2 Designing the program

A defect analysis or metrics program should be treated exactly the same as the development of software. It is a project with requirements and must be designed, coded, tested, implemented, and maintained. The following simple five-step approach can be used to define and start the program:

  1. Define the goals of the program.

  2. Ask questions about the use of the program and metrics.

  3. Identify the metrics to be developed and used.

  4. Identify the measures that must be made to gather the data for the metrics.

  5. Plan the data collection, metrics development, and metrics application.

It has been stated that the defect analysis or metrics program must have established goals before anything else is done. This is analogous to the setting of vision and mission statements for the organization. The goals of the program lead to questions about customer attitude, product quality, defect experience, process improvement opportunities, and the like. The answers to the questions give insight into what kinds of metrics will be of value. If we are just interested in defect analysis, one set of metrics may emerge. If we are interested in improved quality and processes, a larger set of metrics will be recognized. In every case, the organization must perform these steps in the context of its own maturity, business, and capabilities.
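
As an illustration only, the first four steps might be captured in a simple structure like the following Python sketch; the goal, questions, metrics, and measures shown are invented placeholders, not a recommended set.

    # Hypothetical capture of steps 1 through 4 as plain data; the goal,
    # questions, metrics, and measures are invented placeholders, not a
    # recommended set.
    program = {
        "goal": "Reduce the number of defects delivered to the customer",
        "questions": [
            "In which life-cycle phase are most defects introduced?",
            "In which phase are they detected?",
        ],
        "metrics": [
            "defects per KLOC by phase of origin",
            "percentage of defects found before system test",
        ],
        "measures": [
            "defect reports tagged with phase of origin and phase of detection",
            "size of each delivered component in LOC",
        ],
    }

    # Step 5 would plan how the measures above are collected and applied.
    for question in program["questions"]:
        print(question)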

Once the metrics that will be needed are defined, the data and required measurements can be defined as well. It was noted earlier that some data consists of hard numbers that are collectable directly. Other data is soft or subjective, in the form of opinions, guesses, feelings, and so on. The soft data must be quantified for use with the hard data. The organization must determine the quantification methods and how precise it believes the quantifications to be.
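
A minimal sketch of such quantification, assuming a hypothetical five-point satisfaction scale and invented survey answers, might look like this:

    # Hypothetical quantification of soft data: customer survey answers are
    # mapped onto a 1-to-5 scale so they can be used alongside hard counts
    # such as reported defects. The scale and answers are assumed.
    RATING_SCALE = {
        "very dissatisfied": 1,
        "dissatisfied": 2,
        "neutral": 3,
        "satisfied": 4,
        "very satisfied": 5,
    }

    survey_answers = ["satisfied", "neutral", "very satisfied", "satisfied"]
    scores = [RATING_SCALE[answer] for answer in survey_answers]
    average_satisfaction = sum(scores) / len(scores)  # soft data, quantified

    reported_defects = 14  # hard data, counted directly
    print(average_satisfaction, reported_defects)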

Throughout the process of defining goals, asking questions, and identifying metrics and measures, the people whose work and products will be the subjects of the measures must be involved. Acceptance of the program is not the only thing requiring the participation of the people being measured. The persons doing the work are the closest to all the things being measured—their effort, products, processes, defects, and so on. They often can suggest metrics and measures that have even more utility than those conceived by the software quality practitioners. If the program is to succeed, it is imperative that the voice of the workers be solicited and heard.

5.7.3 Metric characteristics

If the SQS and the metrics program have requirements, so do the metrics themselves. Measures and their resulting metrics must be easy to gather and develop. Measures that require extensive investigation or complicated collection methods will tend to be poorly collected (at least at the beginning of the program). Section 5.4 suggested that many useful metrics comprise easily collected measures. These measures and metrics should form the basis of the beginning metrics program. As experience and maturity are gained, more sophisticated metrics and measures can be adopted. In the beginning, "keep it simple" is a good motto.

Metrics must also be easy to understand and apply. It may be possible to determine the number of defects per thousand LOC written from 10 P.M. to 11 P.M. on a cloudy Friday the 13th by developers in Bangalore, India, compared to the same data for developers in Fort Wayne, Indiana. Whether there is useful information in that metric is another question. If there is information, of what use is it? As metrics become more sophisticated, they often become harder to understand. Many of the metrics being converted from hardware to software quality applications must be redefined for use on software. These metrics are generally applicable after their adaptation but frequently require very large sample sizes to be meaningful. Again, the new metrics program must be useful. Utility of the metrics being developed is more important than whether they comprise the most complete set of metrics.

Validity of the metrics is another key point. Do the metrics correctly reflect their target situation? An example was given in Section 5.5.2 of the need to consider the size of the project in a given situation. Metrics that are sensitive to parameters other than those in the direct equation may not reflect the real situation. The software is tested to determine if all its requirements have been correctly addressed. Metrics, too, need to be tested to ensure that they present the information we want and do so correctly and repeatably. Careful definition of each of the data terms being used, specification of exact data collection methods to be used, and precise equations for the metrics can only reduce, not eliminate, the likelihood that a metric is developed incorrectly. The real question being asked is whether the metric is the correct one for the desired application. It must be shown that the metric actually applies to the situation in which we are interested. Is the metric in support of the original goals of the program? Does it address the organization's concerns? Does it give us information that we need and do not have elsewhere? The comparison of defects between Bangalore and Fort Wayne may be available and precise, but if we really do not have a need for that comparison or know what to do with it, it is not beneficial to develop it.
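
A small, hypothetical illustration of this sensitivity to project size, using invented defect counts and sizes, shows how normalizing by KLOC can reverse the picture painted by raw counts:

    # Invented figures for two projects: raw defect counts make project B look
    # worse, but normalizing by size reverses the picture.
    projects = {
        "A": {"defects": 40, "kloc": 20},    # 2.0 defects per KLOC
        "B": {"defects": 60, "kloc": 120},   # 0.5 defects per KLOC
    }

    for name, data in projects.items():
        density = data["defects"] / data["kloc"]
        print(name, data["defects"], round(density, 2))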


