Ensuring that the final software is of high quality is one of the prime concerns of a project manager. But how is software quality defined? The concept of software quality is not easily definable because software has many possible quality characteristics.1 In practice, however, quality management often revolves around defects. Hence, we use delivered defect density, that is, the number of defects per unit size in the delivered software, as the definition of quality. This definition is currently the de facto industry standard.2 Using it signals that the aim of a software project is to deliver the software with as few defects as possible.
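To make the definition concrete, here is a minimal sketch of the arithmetic, assuming size is measured in thousands of lines of code (KLOC); the function name and sample numbers are hypothetical, purely for illustration.

```python
def delivered_defect_density(delivered_defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code in the delivered software."""
    return delivered_defects / size_kloc

# For example, 15 defects reported against a 30 KLOC release:
density = delivered_defect_density(15, 30.0)
print(f"{density:.2f} defects/KLOC")
```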
What is a defect? Again, there can be no precise definition of a defect that is general and widely applicable (is software that misspells a word considered defective?). In general, we can say that a defect in software is something that causes the software to behave in a manner that is inconsistent with the requirements or needs of the customer.
Before considering techniques to manage quality, you must first understand the defect injection and removal cycle. Software development is a highly people-oriented activity and hence error-prone. Defects can be injected in software at any stage during its evolution. That is, during the transformation from user needs to software to satisfy those needs, defects can be injected in all the transformation activities undertaken. These injection stages are primarily the requirements specification, the high-level design, the detailed design, and coding.
For high-quality software, the final product should have as few defects as possible. Hence, for delivery of high-quality software, active removal of defects is necessary; this removal takes place through the quality control activities of reviews and testing. Because the cost of defect removal increases as the latency of defects (the time gap between the introduction of a defect and its detection) increases,3 any mature process will include quality control activities after each phase in which defects can potentially be injected. The activities for defect removal include requirements reviews, design reviews, code reviews, unit testing, integration testing, system testing, and acceptance testing (we do not include reviews of plan documents, although such reviews also help in improving the quality of the software). Figure 5.1 shows the process of defect injection and removal.
The task of quality management is to plan suitable quality control activities and then to properly execute and control them to achieve the project's quality goals.
As noted earlier, you detect defects by performing reviews or testing. Whereas reviews are structured, human-oriented processes, testing is the process of executing software (or parts of it) in an attempt to identify defects. In the procedural approach to quality management, procedures and guidelines for the review and testing activities are established. In a project, these activities are planned (that is, it is established which activity will be performed and when); during execution, they are carried out according to the defined procedures. In short, the procedural approach is the execution of certain processes at defined points to detect defects.
The procedural approach does not allow claims to be made about the percentage of defects removed or the quality of the software following the procedure's completion. In other words, merely executing a set of defect removal procedures does not provide a basis for judging their effectiveness or assessing the quality of the final code. Furthermore, such an approach is highly dependent on the quality of the procedure and the quality of its execution. For example, if the test planning is done carefully and the plan is thoroughly reviewed, the quality of the software after performance of the testing will be better than if testing was done but the test plan was not carefully thought out and the review was done perfunctorily. A key drawback in the procedural approach is the lack of quantitative means for project managers to assess the quality of the software produced; the only factor visible to project managers is whether the quality control tasks are executed.
To better assess the effectiveness of the defect detection processes, an approach is needed that goes beyond asking, "Has the method been executed?" and looks at metrics data for evaluation. Based on this analysis of the data, you can decide whether more testing or reviews are needed. If controls are applied during the project based on quantitative data to achieve quantitative quality goals, then we say that a quantitative quality management approach is being applied.
Quantitative quality management has two key aspects: setting a quantitative quality goal and then managing the software development process quantitatively so that this quality goal is met (with a high degree of confidence).
A good quality management approach should provide warning signs early in the project and not only toward the end, when the options are limited. Early warnings allow for timely intervention. To achieve this goal, it is essential to predict the values of some parameters at various stages so that controlling them during project execution will ensure that the final product has the desired quality. If such predictions can be made, you can use the actual data gathered to judge whether the process has been applied effectively. With this approach, a defect detection process does not terminate with the declaration that the process has been executed; instead, the data from process execution are used to ensure that the process has been performed in a manner that exploited its full potential.
One approach to quantitatively control the quality of the software is to work with software reliability models. Most such models use the failure data during the final stages of testing to estimate the reliability of the software. These models can indicate whether the reliability is acceptable or more testing is needed. Unfortunately, they do not provide intermediate goals for the early phases of the project, and they have other limitations. Overall, such models are helpful in estimating the reliability of a software product, but they have a limited value for quality management. (More information is available on reliability models.4,5,6)
Another well-known quality concept in software is defect removal efficiency. For a quality control (QC) activity, we define the defect removal efficiency (DRE) as the percentage of existing total defects that are detected by the QC activity.5 The DRE for the full life cycle of the project, that is, for all activities performed before the software is delivered, represents the in-process efficiency of the process. If the overall defect injection rate is known for the project, then DRE for the full life cycle also defines the quality (delivered defect density) of the software.
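The relationship between life-cycle DRE and delivered quality can be sketched as follows; the figures (500 injected defects, 480 removed before delivery, 50 KLOC) are hypothetical, chosen only to show the arithmetic.

```python
def defect_removal_efficiency(defects_removed: int, total_defects: int) -> float:
    """DRE: percentage of the defects present that a QC activity
    (or the whole pre-delivery process) actually detects."""
    return 100.0 * defects_removed / total_defects

# If 480 of an estimated 500 injected defects are removed before
# delivery of a 50 KLOC product:
dre = defect_removal_efficiency(480, 500)   # in-process efficiency, in percent
delivered_density = (500 - 480) / 50.0      # delivered defect density, defects/KLOC
print(f"DRE = {dre:.1f}%, delivered density = {delivered_density:.2f} defects/KLOC")
```

Note how the two metrics are tied together: given the injection estimate, fixing a target for delivered defect density is equivalent to fixing a target for life-cycle DRE.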
Although defect removal efficiency is a useful metric for evaluating a process and identifying areas of improvement, by itself it is not suitable for quality management. The main reason is that the DRE for a QC activity or the overall process can be computed only at the end of the project, when all defects and their origins are known. Hence, it provides no direct way to control quality during project execution.
Another approach to quantitative quality management is defect prediction. In this approach, you set the quality goal in terms of delivered defect density. You set the intermediate goals by estimating the number of defects that may be identified by various defect detection activities; then you compare the actual number of defects to the estimated defect levels.
This approach makes the management of quality closely resemble the management of effort and schedule, the two other major success parameters of a project. A target is first set for the quality of the delivered software. From this target, the values of chosen parameters at various stages in the project are estimated; that is, milestones are established. These milestones are chosen so that, if the estimates are met, the quality of the final software is likely to meet the desired level. During project execution, the actual values of the parameters are measured and compared to the estimated levels to determine whether the project is traveling the desired path or whether some actions need to be taken to ensure that the final software has the desired quality.
The effectiveness of this approach depends on how well you can predict the defect levels at various stages of the project. It is known that the defect rate follows the same pattern as the effort rate, with both following the Rayleigh curve.5,7,8 In other words, the number of defects found at the start of the project is small but keeps increasing until it reaches a peak (around unit testing time) before it begins to decline again. Because a process has defined points for defect detection, you can also specify this curve in terms of percentages of total defects detected at the various detection stages. And from the estimate of the defect injection rate and size, you can estimate the total number of defects. This approach for defect level prediction is similar to both the base defect model and the STEER approach of IBM's Federal Systems Division.5
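A minimal sketch of this prediction scheme follows, assuming a Rayleigh distribution over normalized project time; the phase boundaries, the scale parameter sigma (the Rayleigh curve peaks at t = sigma, placed here around unit testing), and the injection estimate of 500 defects are all illustrative assumptions, not values from the text.

```python
import math

def rayleigh_cdf(t: float, sigma: float) -> float:
    """Cumulative fraction of total defects detected by time t (Rayleigh model)."""
    return 1.0 - math.exp(-t * t / (2.0 * sigma * sigma))

# Hypothetical phase boundaries in normalized project time [0, 1]:
phases = [
    ("requirements review", 0.0, 0.2),
    ("design review",       0.2, 0.4),
    ("code review",         0.4, 0.6),
    ("unit testing",        0.6, 0.8),
    ("system testing",      0.8, 1.0),
]
sigma = 0.65                 # puts the peak of the curve in the unit-testing window
total_injected = 500         # assumed estimate: injection rate x size

for name, start, end in phases:
    frac = rayleigh_cdf(end, sigma) - rayleigh_cdf(start, sigma)
    print(f"{name:20s} ~{frac * total_injected:4.0f} defects expected")

# Defects the model predicts will escape detection before delivery:
escaped = total_injected * (1.0 - rayleigh_cdf(1.0, sigma))
print(f"predicted delivered defects: ~{escaped:.0f}")
```

During execution, the actual defect count for each phase would be compared against these expected levels; a large shortfall or excess triggers analysis, as described above.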
Yet another approach is to use statistical process control (SPC) for managing quality (Chapter 7 includes a brief discussion of SPC). In this approach, you set performance expectations of the various QC processes, such as testing and reviews, in terms of control limits. If the actual performance of the QC task is not within the limits, you analyze the situation and take suitable action. The control limits resemble prediction of defect levels based on past performance but can also be used for monitoring quality activities at a finer level, such as review or unit testing of a module.
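A minimal sketch of such control limits, assuming a simple mean plus or minus three standard deviations computed from past performance; the metric (defects found per hour of code review) and all numbers are hypothetical.

```python
import statistics

def control_limits(samples: list[float], k: float = 3.0) -> tuple[float, float, float]:
    """Lower control limit, center line, and upper control limit
    from past QC performance data."""
    mean = statistics.mean(samples)
    sd = statistics.pstdev(samples)
    return mean - k * sd, mean, mean + k * sd

# Hypothetical defects/hour observed in past code reviews:
history = [1.8, 2.2, 2.0, 2.5, 1.9, 2.1, 2.3, 1.7]
lcl, cl, ucl = control_limits(history)

observed = 0.6   # a new module's review finds far fewer defects per hour
if not (lcl <= observed <= ucl):
    print(f"outside control limits [{lcl:.2f}, {ucl:.2f}]: analyze the situation "
          "(e.g., a perfunctory review, or an unusually clean module)")
```

A point outside the limits does not by itself say which explanation holds; as with defect prediction, the project manager analyzes the situation and then decides on the action, if any.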
When you use a performance prediction approach and the actual number of defects found is lower than the target, there are too many uncertainties for you to say with certainty that the removal process was not executed properly. As a result, you must look at other indicators to determine the cause.5 In other words, if the actual data are out of range, the project manager looks at other indicators to decide what the actual situation is and what action, if any, is needed.