Regardless of which process is used, the degree to which it is implemented varies from organization to organization and even from project to project. Indeed, given the framework of a certain process model, the development team usually defines its specifics such as implementation procedures, methods and tools, metrics and measurements, and so forth. Whereas certain process models are better for certain types of projects under certain environments, the success of a project depends heavily on the implementation maturity, regardless of the process model. In addition to the process model, questions related to the overall quality management system of the company are important to the outcome of the software projects.
This section discusses frameworks to assess the process maturity of an organization or a project. They include the SEI and the Software Productivity Research (SPR) process maturity assessment methods, the Malcolm Baldrige discipline and assessment processes, and the ISO 9000 registration process. Although the SEI and SPR methods are specific to software processes, the latter two frameworks are quality process and quality management standards that apply to all industries.
2.8.1 The SEI Process Capability Maturity Model
The Software Engineering Institute at Carnegie Mellon University developed the Process Capability Maturity Model (CMM), a framework for software development (Humphrey, 1989). The CMM includes five levels of process maturity (Humphrey, 1989, p. 56):
Level 1: Initial
Characteristics: Chaotic; unpredictable cost, schedule, and quality performance.
Level 2: Repeatable
Characteristics: Intuitive; cost and quality highly variable, reasonable control of schedules, informal and ad hoc methods and procedures. The key elements, or key process areas (KPA), to achieve level 2 maturity follow:
Level 3: Defined
Characteristics: Qualitative; reliable costs and schedules, improving but unpredictable quality performance. The key elements to achieve this level of maturity follow:
Level 4: Managed
Characteristics: Quantitative; reasonable statistical control over product quality. The key elements to achieve this level of maturity follow:
Level 5: Optimizing
Characteristics: Quantitative basis for continued capital investment in process automation and improvement. The key elements to achieve this highest level of maturity follow:
The SEI maturity assessment framework has been used by government agencies and software companies. It is meant to be used with an assessment methodology and a management system. The assessment methodology relies on a questionnaire (85 items in version 1 and 124 items in version 1.1), with yes or no answers. For each question, the SEI maturity level that the question is associated with is indicated. Special questions are designated as key to each maturity level. To be qualified for a certain level, 90% of the key questions and 80% of all questions for that level must be answered yes. The maturity levels are hierarchical. Level 2 must be attained before the calculation for level 3 or higher is accepted. Levels 2 and 3 must be attained before level 4 calculation is accepted, and so forth. If an organization has more than one project, its ranking is determined by answering the questionnaire with a composite viewpoint; specifically, the answer to each question should be substantially true across the organization.
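The qualification rules above (90% of key questions and 80% of all questions answered yes, with levels attained hierarchically) can be sketched in code. This is only an illustrative sketch, not an official SEI tool; the questionnaire data structure and the example question counts are hypothetical.

```python
# Illustrative sketch of the SEI maturity-level qualification rules.
# Only the 90%/80% thresholds and the hierarchical-level rule come from
# the text; the data layout below is a hypothetical simplification.

def level_qualified(answers):
    """answers: list of (maturity_level, is_key_question, answered_yes)."""
    def passes(level):
        key = [yes for lvl, is_key, yes in answers if lvl == level and is_key]
        all_q = [yes for lvl, _, yes in answers if lvl == level]
        if not key or not all_q:
            return False
        return (sum(key) / len(key) >= 0.90 and
                sum(all_q) / len(all_q) >= 0.80)

    # Levels are hierarchical: level n counts only if levels 2..n-1 pass.
    achieved = 1
    for level in (2, 3, 4, 5):
        if passes(level):
            achieved = level
        else:
            break
    return achieved

# Example: level 2 fully satisfied; level 3 key questions reach only
# 80% yes (below the 90% threshold), so the organization stays at level 2.
answers = ([(2, True, True)] * 10 + [(2, False, True)] * 20 +
           [(3, True, True)] * 8 + [(3, True, False)] * 2 +
           [(3, False, True)] * 10)
print(level_qualified(answers))  # 2
```

The early `break` enforces the hierarchy described in the text: a project that satisfies the level 4 criteria but not level 3 is still rated below level 3.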
It is interesting to note that pervasive use of software metrics and models is a key characteristic of level 4 maturity, and for level 5 the element of defect prevention is key. Following is a list of metrics-related topics addressed by the questionnaire.
Several questions on defect prevention address the following topics:
The SEI maturity assessment has been conducted on many projects, carried out by SEI or by the organizations themselves in the form of self-assessment. As of April 1996, based on assessments of 477 organizations by SEI, 68.8% were at level 1, 18% were at level 2, 11.3% were at level 3, 1.5% were at level 4, and only 0.4% were at level 5 (Humphrey, 2000). As of March 2000, based on more recent assessments of 870 organizations since 1995, the percentage distribution by level is: level 1, 39.3%; level 2, 36.3%; level 3, 17.7%; level 4, 4.8%; level 5, 1.8% (Humphrey, 2000). The data indicate that the maturity profile of software organizations is improving.
The SEI maturity assessment framework applies at the organizational or project level. At the individual and team levels, Humphrey developed the Personal Software Process (PSP) and the Team Software Process (TSP) (Humphrey, 1995, 1997, 2000a, 2000b). The PSP shows software engineers how to plan and track their work and how to follow good, consistent practices that lead to high-quality software. Time management, good software engineering practices, data tracking, and analysis at the individual level are among the focus areas of the PSP. The TSP builds on the PSP and addresses how to apply similar engineering discipline to the full range of a team's software tasks. The PSP and TSP can be viewed as the individual and team versions of the capability maturity model (CMM), respectively. Per Humphrey's guidelines, PSP introduction should follow organizational process improvement and should generally be deferred until the organization is working toward at least CMM level 2 (Humphrey, 1995).
Since the early 1990s, a number of capability maturity models have been developed for different disciplines. The Capability Maturity Model Integration (CMMI) was developed by integrating practices from four CMMs: for software engineering, for systems engineering, for integrated product and process development (IPPD), and for acquisition. It was released in late 2001 (Software Engineering Institute, 2001a, 2001b). Organizations that want to pursue process improvement across disciplines can now rely on a single consistent model. The CMMI has two representations, staged and continuous. The staged representation of the CMMI provides five levels of process maturity.
Maturity Level 1: Initial
Processes are ad hoc and chaotic.
Maturity Level 2: Managed
Focuses on basic project management. The process areas (PAs) are:
Maturity Level 3: Defined
Focuses on process standardization. The process areas are:
Maturity Level 4: Quantitatively Managed
Focuses on quantitative management. The process areas are:
Maturity Level 5: Optimizing
Focuses on continuous process improvement. The process areas are:
The two representations of the CMMI take different approaches to process improvement. One focuses on the organization as a whole and provides a road map for the organization to understand and improve its processes through successive stages. The other approach focuses on individual processes and allows the organization to focus on processes that require personnel with more or different capability. The rules for moving from one representation into the other have been defined. Therefore, a choice of one representation does not preclude the use of another at a later time.
2.8.2 The SPR Assessment
Software Productivity Research, Inc. (SPR), developed the SPR assessment method at about the same time the SEI process maturity model was developed (Jones, 1986). There is a large degree of similarity, as well as some substantial differences, between the SEI and SPR methods (Jones, 1992). Some leading U.S. software developers use both methods concurrently. Whereas the SEI's questions focus on software organization structure and software process, the SPR's questions cover both strategic corporate issues and tactical project issues that affect quality, productivity, and user satisfaction. The SPR questionnaire contains about 400 questions. Furthermore, the SPR questions are linked multiple-choice questions answered on a five-point Likert scale, whereas the SEI method uses a binary (yes/no) scale. The overall process assessment outcome from the SPR method is expressed on the same five-point scale:
Unlike the SEI's five maturity levels, which have defined criteria, the SPR questions are structured so that a rating of "3" is the approximate average for the topic being explored. SPR has also developed an automated software tool (CHECKPOINT) for assessment and for resource planning and quality projection. In addition, the SPR method collects quantitative productivity and quality data from each project assessed. This is one of the differences between the SPR and SEI assessment methods.
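As a rough illustration of how a five-point Likert scale with "3 = approximate average" can be rolled up into an overall rating, consider the sketch below. This is emphatically not the CHECKPOINT algorithm; the averaging and rounding scheme is a hypothetical simplification, assuming only that lower ratings are better on the SPR outcome scale.

```python
# Hypothetical roll-up of SPR-style linked multiple-choice responses.
# Assumption (not from the text): lower ratings are better, and a simple
# mean rounded to the nearest point maps onto the five-point outcome scale.

SCALE = {1: "excellent", 2: "above average", 3: "average",
         4: "below average", 5: "poor"}

def overall_rating(responses):
    """responses: list of 1-5 Likert ratings, one per question."""
    mean = sum(responses) / len(responses)
    return SCALE[round(mean)]

print(overall_rating([3, 3, 2, 4, 3, 3]))  # "average"
```

The calibration of the real questionnaire (where "3" is pegged to the industry average per topic) is what makes the resulting distribution cluster around "average," as the assessment statistics later in this section show.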
With regard to software quality and metrics, topics such as the following are addressed by the SPR questions:
Findings of the SPR assessments are often divided into five major themes (Jones, 2000):
According to Jones (2000), as of 2000, SPR had performed assessments and benchmarks for nearly 350 corporations and 50 government organizations, with more than 600 sites assessed. The percentage distribution of these assessments across the five-point scale is: excellent, 3.0%; above average, 18.0%; average, 54.0%; below average, 22.0%; poor, 3.0%.
2.8.3 The Malcolm Baldrige Assessment
The Malcolm Baldrige National Quality Award (MBNQA) is the most prestigious quality award in the United States. Established in 1988 by the U.S. Department of Commerce (and named after Secretary Malcolm Baldrige), the award is given annually to recognize U.S. companies that excel in quality management and quality achievement. The examination criteria are divided into seven categories that contain twenty-eight examination items:
The system for scoring the examination items is based on three evaluation dimensions: approach, deployment, and results. Each item requires information relating to at least one of these dimensions. Approach refers to the methods the company uses to achieve the purposes addressed in the examination item. Deployment refers to the extent to which the approach is applied. Results refers to the outcomes and effects achieved.
The purpose of the Malcolm Baldrige assessment approach (the examination items and their assessment) is fivefold:
There are 1,000 points available in the award criteria. Each examination item is given a percentage score (ranging from 0% to 100%). A candidate for the Baldrige award typically scores above 70%. This would generally translate as follows:
While the score is important, the most valuable output from an assessment is the feedback, which consists of the observed strengths and, most significantly, the areas for improvement. It is not unusual for even the higher-scoring enterprises to receive hundreds of improvement suggestions. By focusing on and eliminating the high-priority weaknesses, the company can be assured of continuous improvement.
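The point-weighted scoring described above can be sketched as follows. The category point values below are hypothetical placeholders, not the official Baldrige weights; only the 1,000-point total and the per-item percentage scoring come from the text.

```python
# Illustrative Baldrige-style scoring: each examination category carries a
# maximum point value (hypothetical values below, summing to the 1,000
# points available) and receives a percentage score on approach,
# deployment, and results.

items = {
    "Leadership":                (120, 0.75),  # (max points, percent score)
    "Strategic Planning":        (85,  0.60),
    "Customer and Market Focus": (85,  0.70),
    "Information and Analysis":  (90,  0.65),
    "Human Resource Focus":      (85,  0.70),
    "Process Management":        (85,  0.80),
    "Business Results":          (450, 0.70),
}

total_possible = sum(pts for pts, _ in items.values())
total_score = sum(pts * pct for pts, pct in items.values())

assert total_possible == 1000
print(f"{total_score:.1f} / {total_possible}"
      f" = {total_score / total_possible:.0%}")
```

Because the heavily weighted categories dominate the total, an enterprise scoring around 70% on its largest categories lands near the 70% threshold the text mentions for award candidates.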
To be the MBNQA winner, the four basic elements of the award criteria must be evident:
Many U.S. companies have adopted the Malcolm Baldrige assessment and its discipline as the basis for their in-company quality programs. In 1992, the European Foundation for Quality Management published the European Quality Award, which is awarded to the most successful proponents of total quality management in Western Europe. Its criteria are similar to those of the Baldrige award (i.e., a 1,000-point maximum, with approach, deployment, and results as scoring dimensions). Although there are nine categories (versus Baldrige's seven), they cover similar examination areas. In 1998, the seven MBNQA categories were reorganized as: Leadership, Strategic Planning, Customer and Market Focus, Information and Analysis, Human Resource Focus, Process Management, and Business Results. Many U.S. states have established quality award programs modeled on the Malcolm Baldrige National Quality Award.
Unlike the SEI and SPR assessments, which focus on software organizations, projects, and processes, the MBNQA and the European Quality Award encompass a much broader scope. They are quality standards for overall quality management, regardless of industry. Indeed, the MBNQA covers three broad categories: manufacturing, service, and small business.
2.8.4 ISO 9000
ISO 9000, a set of standards and guidelines for a quality assurance management system, represents another body of quality standards. It was established by the International Organization for Standardization and has been adopted by the European Community. Many European Community companies are ISO 9000 registered. To position their products to compete better in the European market, many U.S. companies are working to have their development and manufacturing processes registered. To obtain ISO registration, a formal audit of twenty elements is involved and the outcome has to be positive. Guidelines for the application of the twenty elements to the development, supply, and maintenance of software are specified in ISO 9000-3. The twenty elements are as follows:
Many companies pursue ISO 9000 registration, and many fail the first audit, with the initial failure rate ranging from 60% to 70%. This statistic is probably explained by the complexity of the standards, their bureaucratic nature, the opportunity for omissions, and a lack of familiarity with the requirements.
From the software standpoint, corrective actions and document control are the areas of most nonconformance. As discussed earlier, the defect prevention process is a good vehicle to address the element of corrective action. It is important, however, to make sure that the process is fully implemented throughout the entire organization. If an organization does not implement the DPP, a process for corrective action must be established to meet the ISO requirements.
With regard to document control, ISO 9000 has very strong requirements, as the following examples demonstrate:
From our perspective, the more interesting requirements address software metrics, which are listed under the element of statistical techniques. The requirements address both product metrics and process metrics.
At a minimum, some metrics should be used that represent
Selected metrics should be described such that results are comparable.
The MBNQA criteria and the ISO 9000 quality assurance system can complement each other as an enterprise pursues quality. However, note that Baldrige is a nonprescriptive assessment tool that illuminates improvement items; ISO 9000 registration requires passing an audit. Furthermore, while the Malcolm Baldrige assessment focuses on both process and results, the ISO 9000 audit focuses on a quality management system and process control. Simply put, ISO 9000 can be described as "say what you do, do what you say, and prove it." But ISO 9000 does not examine the quality results and customer satisfaction, to which the MBNQA is heavily tilted. The two sets of standards are thus complementary. Development organizations that adopt them will have more rigorous processes. Figure 2.8 shows a comparison of ISO 9000 and the Baldrige scoring system. For the Baldrige system, the length of the arrow for each category is in proportion to the maximum score for that category. For ISO 9000, the lengths of the arrows are based on the perceived strength of focus from the IBM Rochester ISO 9000 audit experience, initial registration audit in 1992 and subsequent yearly surveillance audits. As can be seen, if the strengths of ISO 9000 (process quality and process implementation) are combined with the strengths of the Baldrige discipline (quality results, customer focus and satisfaction, and broader issues such as leadership and human resource development), the resulting quality system will have both broad-based coverage and deep penetration.
Figure 2.8. Malcolm Baldrige Assessment and ISO 9000: A Comparison Based on the Baldrige Scoring
The Baldrige/ISO synergism comes from the following:
In recent years, the ISO technical committee responsible for the ISO 9000 family of quality standards has undertaken a major project to update the standards and to make them more user-friendly. ISO 9001:2000 contains the first major changes to the standards since their initial issue. Some of the major changes include the following (British Standards Institution, 2001; Cianfrani, Tsiakals, and West, 2001):
From these changes, it appears ISO 9000 is moving closer to the MBNQA criteria while maintaining a strong process improvement focus.