Once a development organization begins collecting software data, there is a tendency toward overcollection and underanalysis. The amount of data collected and the number of metrics need not be overwhelming; it is more important that the information extracted from the data be accurate and useful. Indeed, a large volume of data may lead to low data quality and to casual analysis rather than serious study. It may also impose a sizable cost on the project and a burden on the development team. As discussed earlier, analysis is the key to transforming raw data into meaningful information and to turning that information into knowledge. Analysis and its results, understanding and knowledge, drive improvement, which is the payback of the measurement approach. Therefore, it is essential for a measurement program to be analysis driven rather than data driven.
By "analysis driven" I mean the data to be collected and the metrics used should be determined by the models we use for development (such as models for development process, quality management, and reliability assessment) and the analysis we intend to perform. Associated with the analysis-driven approach, a key to operating a successful metrics program is knowing what to expect from each metric. In this regard, measurement paradigms such as Basili's Goal/Question/Metrics (GQM) approach prove to be useful (Basili, 1989, 1995). In Chapters 1 and 4 we briefly discussed the GQM approach and gave examples of implementation. To establish effective in-process metrics, I recommend the effort/outcome model, which was discussed in Chapters 8 and 9.
Metrics and measurements must progress and mature with the development process of the organization. If the development process is in the initial stage of the maturity spectrum, a heavy focus on metrics may be counterproductive. For example, if there is no formal integration control process, tracking integration defects will not be meaningful; if there is no formal inspection or verification, collecting defect data at the front end provides no help.
Suppose we draw a line on a piece of paper to represent the software life cycle, from the start of development to the maintenance phase, and put a mark about two-thirds of the way from the start (one-third from the end) to represent the product delivery date. The starting metrics, in general, ought to center on the product delivery phase. From there, work backward into the development process to establish in-process metrics, and forward to track quality performance in the field. Field quality metrics are usually easier to establish and track because they are normally part of the support and service process; establishing and implementing effective in-process metrics is more challenging.
As an example, suppose we are to begin a simple metrics program with only three metrics. I would highly recommend that these be the size of the product, the number of defects found during the final phase of testing, and the number of defects found in the field (or other reliability measures). Assuming that the data collection process put in place ensures high accuracy and reliability, here are a few examples of what can be done with these pieces of data:

- Calculate the defect rate of the final phase of testing: the number of test defects normalized by product size (call this A).
- Calculate the defect rate of the product in the field: the number of field defects normalized by product size (call this B).
- Track the B/A ratio across releases as an indicator of the effectiveness of the final test phase; the lower the ratio, the fewer defects escaped testing into the field.
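The arithmetic involved is simple enough to automate from the start. Below is a minimal sketch in Python; the function and variable names are illustrative assumptions, and rates are expressed per thousand lines of code (KLOC) only as one example of a size normalization.

```python
def defect_rates(size_kloc, test_defects, field_defects):
    """Compute the three derived measures from the three raw metrics.

    size_kloc     -- product size in thousands of lines of code
    test_defects  -- defects found during the final phase of testing
    field_defects -- defects found in the field
    """
    a = test_defects / size_kloc   # test defect rate (A)
    b = field_defects / size_kloc  # field defect rate (B)
    return {"test_rate_A": a, "field_rate_B": b, "B/A": b / a}

# Example: a 250 KLOC product with 500 test defects and 60 field defects.
print(defect_rates(250, 500, 60))
# {'test_rate_A': 2.0, 'field_rate_B': 0.24, 'B/A': 0.12}
```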
This simple example illustrates that good use of simple data can be quite beneficial. Of course, in real life we would not stop at the B/A metric. To improve the B/A value, a host of questions and metrics naturally arises: Is the test coverage improving? How can we improve the test suite to maximize test defect removal effectiveness? Is a test-focused defect removal strategy good for us? What alternative methods would make us more cost-effective in removing defects? The point is that for metrics programs to be successful, it is important to make good use of small amounts of data and then build on the proven metrics to maximize the benefits of quantitative software quality engineering. As more metrics are used and more data are collected, they should progress in the reverse direction of the development process: from the end product to the back end of the process, and then to the front end. Metrics and data are usually more clear-cut at the back end and more difficult to define and collect at the front end.
For small teams, I have recommended, in various chapters, a couple of quality management models and a small set of metrics, including the metrics in the preceding example. These metrics can be implemented easily, and for those that require tools and statistical expertise, I have provided quick and easy alternatives. For small and large teams alike, the recommendations that follow can help jump-start a metrics program.
For long-term success at the organizational level, it is important to secure management commitment and to establish a data tracking system, including the supporting processes and tools. To jump-start a metrics program, however, it is essential to get started at the project level and to establish, as soon as possible, the relevance of specific metrics to the success of the projects. A continual project-level focus is necessary for the continual success of a metrics program.
In addition to a tracking system, tools, and process, investment is also required to establish metrics expertise. When the development organization is small, data collection and analysis can be done by managers and project leaders. In large organizations, full-time metrics professionals are warranted for a successful program; I recommend that organizations with more than 100 members have at least one full-time metrics person. The metrics personnel design the metrics that support the organization's quality goals, design and implement the data collection and validation system, oversee the data collection, ensure data quality, analyze the data, provide feedback to the development team, and engineer improvements in the development process. They can also provide training and support to the project teams, or, for large projects, serve as members of the project management team responsible for driving metrics, analysis, and quality into the mainstream of project management. The best candidates for a software metrics team are perhaps those with training and experience in statistics (or related fields), software engineering, and quality. Large organizations can even form a metrics council, which could be called the software engineering metrics group (SEMG), to provide overall direction and consultation to specific projects. To be effective, the group's success must be measured by the success of the specific projects it is associated with, not by its high-level definition and process work. In other words, metrics and process definition should not be separated from implementation.
Developers play a key role in providing data. Experience indicates that it is essential for developers to understand how the data are to be used; they need to know the relationship between the data they collect and the issues to be solved. Such an understanding enhances cooperation and, hence, the accuracy and completeness of the data. The best situations, of course, are those in which the metrics can be used by the developers themselves. Unless the data are collected automatically without human intervention, the development team's willingness and cooperation are the most important factors in determining data quality.
When the process is mature enough, the best approach is to integrate software data collection with the project management and configuration management processes, preferably supported by automated tools. Analysis, in contrast, should never be fully automated. Tools are helpful for analysis, but the analyst ought to retain intellectual control of the process: the sources of the data, the techniques involved, and the meaning of each piece of data within the context of the product, the development process, the environment, and the outcome. This is the part of software quality engineering that cannot be delegated to machines. I have seen analyses produce absurd results, and metrics practices fail, when the analysts lost control over the automated analysis process.
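As a concrete illustration of automating collection at the configuration management level, the sketch below tallies defect-fix commits from a version-control history. It is a minimal example assuming a hypothetical team convention of tagging defect fixes in commit messages with IDs of the form DEF-1234; the pattern and the reporting granularity would need to match an actual project's conventions.

```python
import re
import subprocess
from collections import Counter

# Hypothetical convention: defect-fix commits carry an ID such as "DEF-1234".
DEFECT_ID = re.compile(r"\bDEF-\d+\b")

def defect_fixes_by_month(repo_path="."):
    """Tally defect-fix commits per month from the git history."""
    log = subprocess.run(
        ["git", "log", "--date=format:%Y-%m", "--pretty=format:%ad|%s"],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in log.splitlines():
        month, _, subject = line.partition("|")
        if DEFECT_ID.search(subject):
            counts[month] += 1
    return counts

if __name__ == "__main__":
    for month, n in sorted(defect_fixes_by_month().items()):
        print(f"{month}: {n} defect fixes")
```

Note that such a tool only gathers the raw counts; interpreting the resulting trend, in the context of the product and its process, remains the analyst's job.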