
Getting Started with a Software Metrics Program

Once a development organization begins collecting software data, there is a tendency toward overcollection and underanalysis. The amount of data collected and the number of metrics need not be overwhelming. It is more important that the information extracted from the data be accurate and useful. Indeed, a large volume of data may lead to low data quality and casual analysis, instead of serious study. It may also impose a sizable cost on the project and a burden on the development team. As discussed earlier, to transform raw data into meaningful information, and to turn information into knowledge, analysis is the key. Analysis and its results, understanding and knowledge, drive improvement, which is the payback of the measurement approach. Therefore, it is essential for a measurement program to be analysis driven instead of data driven.

By "analysis driven" I mean the data to be collected and the metrics used should be determined by the models we use for development (such as models for development process, quality management, and reliability assessment) and the analysis we intend to perform. Associated with the analysis-driven approach, a key to operating a successful metrics program is knowing what to expect from each metric. In this regard, measurement paradigms such as Basili's Goal/Question/Metrics (GQM) approach prove to be useful (Basili, 1989, 1995). In Chapters 1 and 4 we briefly discussed the GQM approach and gave examples of implementation. To establish effective in-process metrics, I recommend the effort/outcome model, which was discussed in Chapters 8 and 9.
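As an illustration of the GQM idea, the goal-to-metric chain can be sketched as a simple data structure. The goal, questions, and metrics below are hypothetical examples chosen to match this chapter's discussion, not entries prescribed by the GQM literature:

```python
# A minimal, hypothetical Goal/Question/Metric (GQM) hierarchy.
# The specific goal, questions, and metrics are illustrative only.
gqm = {
    "goal": "Improve delivered product quality",
    "questions": [
        {
            "question": "How many defects escape to the field?",
            "metrics": ["product defect rate (A)"],
        },
        {
            "question": "How effective is the final test phase?",
            "metrics": ["test defect rate (B)", "testing effectiveness (B/A) x 100%"],
        },
    ],
}

# Walking the hierarchy top-down keeps every metric tied to a question,
# and every question tied to the goal -- the "analysis driven" principle.
for q in gqm["questions"]:
    for m in q["metrics"]:
        print(f"{gqm['goal']} -> {q['question']} -> {m}")
```

The value of writing the hierarchy down, even this informally, is that any metric with no path back to a goal is a candidate for elimination.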

Metrics and measurements must progress and mature with the development process of the organization. If the development process is in the initial stage of the maturity spectrum, a heavy focus on metrics may be counterproductive. For example, if there is no formal integration control process, tracking integration defects will not be meaningful; if there is no formal inspection or verification, collecting defect data at the front end provides no help.

Suppose we draw a line on a piece of paper to represent the software life cycle, from the start of the development process to the maintenance phase, and put a mark on the line at about two-thirds from the start (one-third from the end) to represent the product delivery date. The starting metrics, in general, ought to center on the product delivery phase. From there, work backward into the development process to establish in-process metrics, and forward to track quality performance in the field. Usually field quality metrics are easier to establish and track because they are normally a part of the support and service process. Establishing and implementing effective in-process metrics is more challenging.

As an example, suppose we are to begin a simple metrics program with only three metrics. I would highly recommend these metrics be the size of the product, the number of defects found during the final phase of testing, and the number of defects found in the field (or other reliability measures). Assuming that the data collection process in place ensures high accuracy and reliability, here are a few examples of what can be done with these pieces of data:

  • Calculate the product defect rate (per specified time frame) (A).
  • Calculate the test defect rate (B).
  • Determine a desirable goal for A, and monitor the performance of the products developed by the organization.
  • Monitor B for the products in the same way as A.
  • Assess the correlation between A and B when at least several data points become available.
  • If a correlation is found between A and B, then form the metric of testing effectiveness (final phase), (B/A) x 100%. Or one can derive a simple regression model predicting A from B (a simple static reliability model).
  • Use the B/A metric to set the test defect removal target for new projects, given a predetermined goal for the product defect rate.
  • Monitor and use a control chart for the B/A metrics for all products to determine the process capability of the test defect removal of the organization's development process.
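Several of these steps can be sketched in a few lines of Python. The product sizes and defect counts below are invented for illustration; in practice they would come from the tracking system:

```python
# Hypothetical data for several delivered products: size in KLOC,
# defects found in the final test phase, and defects found in the
# field during a specified time frame.
products = [
    {"name": "P1", "kloc": 120.0, "test_defects": 360, "field_defects": 60},
    {"name": "P2", "kloc": 80.0,  "test_defects": 200, "field_defects": 48},
    {"name": "P3", "kloc": 150.0, "test_defects": 525, "field_defects": 60},
]

for p in products:
    a = p["field_defects"] / p["kloc"]   # product defect rate (A), defects/KLOC
    b = p["test_defects"] / p["kloc"]    # test defect rate (B), defects/KLOC
    effectiveness = (b / a) * 100        # testing effectiveness, (B/A) x 100%
    print(f'{p["name"]}: A={a:.2f} B={b:.2f} B/A={effectiveness:.0f}%')

# With several data points, a simple least-squares line predicting A
# from B serves as a crude static reliability model.
n = len(products)
bs = [p["test_defects"] / p["kloc"] for p in products]
as_ = [p["field_defects"] / p["kloc"] for p in products]
mean_b = sum(bs) / n
mean_a = sum(as_) / n
slope = (sum((b - mean_b) * (a - mean_a) for b, a in zip(bs, as_))
         / sum((b - mean_b) ** 2 for b in bs))
intercept = mean_a - slope * mean_b
print(f"predicted A = {intercept:.3f} + {slope:.3f} * B")
```

With so few points the fitted line is only a starting point; the correlation between A and B should be checked before the regression is trusted for target setting.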

This simple example illustrates that good use of simple data can be quite beneficial. Of course, in real life we would not stop at the B/A metrics. To improve the B/A value, a host of questions and metrics will naturally arise: Is the test coverage improving? How can we improve the test suite to maximize test defect removal effectiveness? Is a test-focused defect removal strategy good for us? What alternative methods would make us more cost effective in removing defects? The point is that for metrics programs to be successful, it is important to make good use of small amounts of data, then build on the proven metrics in order to maximize the benefits of quantitative software quality engineering. As more metrics are used and more data collected, they should progress in the reverse direction of the development process: from the end product to the back end of the process, then to the front end. Metrics and data are usually more clear-cut at the back end and more difficult to define and collect at the front end.
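The control chart mentioned in the list above can also be sketched simply. This is an XmR-style individuals chart with hypothetical B/A values; the 2.66 factor is the standard constant for individuals-chart limits based on the average moving range:

```python
# Hypothetical B/A testing-effectiveness values (%) for a series of
# completed products from the same organization.
values = [520, 610, 480, 555, 590, 430, 640, 505]

mean = sum(values) / len(values)
# Average moving range between consecutive points.
mrs = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = sum(mrs) / len(mrs)
# Individuals-chart control limits: center +/- 2.66 * average moving range.
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

print(f"center={mean:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
for i, v in enumerate(values, 1):
    flag = " <-- out of control" if v > ucl or v < lcl else ""
    print(f"product {i}: {v}{flag}")
```

Points inside the limits suggest the organization's test defect removal is a stable process whose capability is the center line; points outside prompt investigation of the individual product.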

For small teams, I have recommended, in various chapters, a couple of quality management models and a small set of metrics, including the metrics in the preceding example. These metrics can be implemented easily, and for those that require tools and statistical expertise, I provided quick and easy alternatives. For small and large teams alike, I recommend the following to jump-start a metrics program:

  • Start the metrics practice at the project level, not at the process level or the organizational level. Select one or more projects to get started.
  • Integrate the use of metrics as part of the project quality management process, which in turn, is a key element of the project management process. For small projects, especially when the project lead is interested in metrics and measurement, the ideal situation is for the project lead to do the metrics himself or herself, with support from the project team. For larger projects, a member of the project team can lead the measurement process. In either case, the practice of metrics and measurement has to be part of the project management activities, not a separate activity.
  • Most importantly, determine and select a very small number of metrics (for example, two or three) that are important to the project and start the tracking and reporting based on the existing infrastructure. The existing infrastructure may not be adequate to provide precise tracking. Even with rudimentary tracking and basic tools (e.g., 1-2-3 spreadsheet, pencil and paper), it is essential to get the practice started. As discussed in various chapters, many of the metrics can indeed be implemented via basic project management tools and software that are widely available.
  • Always make use of the visual element of the metrics, measurements, and models. The availability and prevalent use of graphic and presentation software makes it easy to show the project's status via metrics and measurement, to maintain the team's interest, and to incorporate metrics in project management activities.
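As a hypothetical illustration of such a visual element using only basic tools, a weekly defect-arrival trend tracked in a flat file or spreadsheet column can be rendered as a text bar chart:

```python
# Hypothetical weekly defect arrivals for one project, tracked with
# nothing more than a spreadsheet column or a flat file.
weekly_defects = {"wk1": 14, "wk2": 22, "wk3": 31, "wk4": 27, "wk5": 18, "wk6": 9}

# A text bar chart (one '#' per defect) is enough to show the
# rise-and-fall pattern and keep the team's attention on the trend.
for week, count in weekly_defects.items():
    print(f"{week:>4} {count:3d} {'#' * count}")
```

Even this crude chart makes the peak and the decline visible at a glance, which is the point: start the visual practice with whatever tools exist, and upgrade later.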

For long-term success at the organizational level, it is important to secure management commitment and to establish a data tracking system, which includes processes and tools. The point is that to jump-start a metrics program, it is essential to get started at the project level and to establish the relevance of some specific metrics to the success of the projects as soon as possible. A continual project-level focus is necessary for the continual success of a metrics program.

In addition to a tracking system, tools, and process, investment is also required to establish metrics expertise. When the development organization is small, data collection and analysis can be done by managers and project leaders. In large organizations, full-time metrics professionals are warranted for a successful program. I recommend that organizations with more than 100 members have at least one full-time metrics person. The metrics personnel design the metrics that support the organization's quality goals, design and implement the data collection and validation system, oversee the data collection, ensure data quality, analyze data, provide feedback to the development team, and engineer improvements in the development process. They can also provide training and support to the project teams. Or, for large projects they can be members of the project management team responsible for driving metrics, analysis, and quality into the mainstream of project management. The best candidates for a software metrics team are perhaps the members with training and experience in statistics (or related fields), software engineering, and quality. Large organizations can even form a metrics council, which could be called the software engineering metrics group (SEMG), to provide overall direction and consultations to specific projects. To be effective, the group's success must be measured by the success of specific projects that it is associated with, not by its high-level definition and process work. In other words, metrics and process definition should not be separated from implementation.

Developers play a key role in providing data. Experience indicates that it is essential that developers understand how the data are to be used. They need to know the relationship between the data they collect and the issues to be solved. Such an understanding enhances cooperation and, hence, the accuracy and completeness of the data. Of course, the best situations are those in which the metrics can be used by the developers themselves. Unless the data are collected automatically without human intervention, the development team's willingness and cooperation are the most important factors in determining data quality.

When the process is mature enough, the best approach is to incorporate software data collection into the project management and configuration management processes, preferably supported by automated tools. In contrast, analysis should never be fully automated. Tools are helpful for analysis, but the analyst ought to retain intellectual control of the process: the sources of the data, the techniques involved, and the meaning of each piece of data within the context of the product, the development process, the environment, and the outcome. This is the part of software quality engineering that cannot be relegated to machines. I have seen odd outcomes of analysis, and failures of metrics practices, when the analysts lost control over the automated analysis process.

Metrics and Models in Software Quality Engineering (2nd Edition)
ISBN: 0201729156
Year: 2001
Pages: 176
