Process Maturity Framework and Quality Standards

Regardless of which process is used, the degree to which it is implemented varies from organization to organization and even from project to project. Indeed, given the framework of a certain process model, the development team usually defines its specifics such as implementation procedures, methods and tools, metrics and measurements, and so forth. Whereas certain process models are better for certain types of projects under certain environments, the success of a project depends heavily on the implementation maturity, regardless of the process model. In addition to the process model, questions related to the overall quality management system of the company are important to the outcome of the software projects.

This section discusses frameworks to assess the process maturity of an organization or a project. They include the SEI and the Software Productivity Research (SPR) process maturity assessment methods, the Malcolm Baldrige discipline and assessment processes, and the ISO 9000 registration process. Although the SEI and SPR methods are specific to software processes, the latter two frameworks are quality process and quality management standards that apply to all industries.

2.8.1 The SEI Process Capability Maturity Model

The Software Engineering Institute (SEI) at Carnegie Mellon University developed the Process Capability Maturity Model (CMM), a framework for assessing and improving software development processes (Humphrey, 1989). The CMM includes five levels of process maturity (Humphrey, 1989, p. 56):

Level 1: Initial

Characteristics: Chaotic; unpredictable cost, schedule, and quality performance.

Level 2: Repeatable

Characteristics: Intuitive; cost and quality highly variable, reasonable control of schedules, informal and ad hoc methods and procedures. The key elements, or key process areas (KPAs), to achieve level 2 maturity follow:

  • Requirements management
  • Software project planning and oversight
  • Software subcontract management
  • Software quality assurance
  • Software configuration management

Level 3: Defined

Characteristics: Qualitative; reliable costs and schedules, improving but unpredictable quality performance. The key elements to achieve this level of maturity follow:

  • Organizational process improvement
  • Organizational process definition
  • Training program
  • Integrated software management
  • Software product engineering
  • Intergroup coordination
  • Peer reviews

Level 4: Managed

Characteristics: Quantitative; reasonable statistical control over product quality. The key elements to achieve this level of maturity follow:

  • Process measurement and analysis
  • Quality management

Level 5: Optimizing

Characteristics: Quantitative basis for continued capital investment in process automation and improvement. The key elements to achieve this highest level of maturity follow:

  • Defect prevention
  • Technology innovation
  • Process change management

The SEI maturity assessment framework has been used by government agencies and software companies. It is meant to be used with an assessment methodology and a management system. The assessment methodology relies on a questionnaire (85 items in version 1 and 124 items in version 1.1) with yes-or-no answers. Each question is tagged with the SEI maturity level it is associated with, and certain questions are designated as key to each maturity level. To qualify for a certain level, 90% of the key questions and 80% of all questions for that level must be answered yes. The maturity levels are hierarchical: level 2 must be attained before the calculation for level 3 or higher is accepted, levels 2 and 3 must be attained before the level 4 calculation is accepted, and so forth. If an organization has more than one project, its ranking is determined by answering the questionnaire from a composite viewpoint; specifically, the answer to each question should be substantially true across the organization.
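The qualification rule is mechanical enough to sketch in a few lines. In this hypothetical sketch, the data layout is illustrative; the actual questionnaire defines which items are key to which level. Each answer records its level, key status, and yes/no reply:

```python
# Sketch of the SEI level-qualification rule described above.
# The record layout is hypothetical, not part of the SEI instrument.

def qualifies_for_level(questions, level):
    """True if >= 90% of key questions and >= 80% of all questions
    associated with `level` are answered yes."""
    at_level = [q for q in questions if q["level"] == level]
    key = [q for q in at_level if q["key"]]
    if not at_level or not key:
        return False
    key_yes = sum(q["yes"] for q in key) / len(key)
    all_yes = sum(q["yes"] for q in at_level) / len(at_level)
    return key_yes >= 0.90 and all_yes >= 0.80

def maturity_level(questions):
    """Levels are hierarchical: level n counts only if levels 2..n-1
    are also attained, so stop at the first failed level."""
    level = 1
    for candidate in range(2, 6):
        if not qualifies_for_level(questions, candidate):
            break
        level = candidate
    return level
```

The hierarchical rule means one weak level caps the overall rating, no matter how strong the answers for higher levels are.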

It is interesting to note that pervasive use of software metrics and models is a key characteristic of level 4 maturity, and for level 5 the element of defect prevention is key. Following is a list of metrics-related topics addressed by the questionnaire.

  • Profiles of software size maintained for each software configuration item over time
  • Statistics on software design errors
  • Statistics on software code and test errors
  • Projection of design errors and comparison between projected and actual numbers
  • Projection of test errors and comparison between projected and actual numbers
  • Measurement of design review coverage
  • Measurement of test coverage
  • Tracking of design review actions to closure
  • Tracking of testing defects to closure
  • Database for process metrics data across all projects
  • Analysis of review data gathered during design reviews
  • Analysis of data already gathered to determine the likely distribution and characteristics of the errors in the remainder of the project
  • Analysis of errors to determine their process-related causes
  • Analysis of review efficiency for each project
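Several of these topics amount to comparing projected against actual error counts. A minimal sketch of that comparison, with hypothetical phase names and numbers:

```python
# Illustrative sketch of one metrics practice above: projecting error
# counts and comparing projected with actual numbers per phase.

def defect_variance(projected, actual):
    """Per-phase comparison of projected vs. actual error counts."""
    report = {}
    for phase, proj in projected.items():
        act = actual.get(phase, 0)
        deviation = (act - proj) / proj * 100 if proj else float("inf")
        report[phase] = {"projected": proj, "actual": act,
                         "deviation_pct": round(deviation, 1)}
    return report

# Example: design errors ran 25% over projection, test errors 10% under.
report = defect_variance({"design": 40, "test": 100},
                         {"design": 50, "test": 90})
```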

Several questions on defect prevention address the following topics:

  • Mechanism for error cause analysis
  • Analysis of error causes to determine the process changes required for error prevention
  • Mechanism for initiating error-prevention actions

The SEI maturity assessment has been conducted on many projects, carried out by SEI or by the organizations themselves in the form of self-assessment. As of April 1996, based on assessments of 477 organizations by SEI, 68.8% were at level 1, 18% were at level 2, 11.3% were at level 3, 1.5% were at level 4, and only 0.4% were at level 5 (Humphrey, 2000). As of March 2000, based on more recent assessments of 870 organizations since 1995, the percentage distribution by level is: level 1, 39.3%; level 2, 36.3%; level 3, 17.7%; level 4, 4.8%; level 5, 1.8% (Humphrey, 2000). The data indicate that the maturity profile of software organizations is improving.

The SEI maturity assessment framework applies to the organizational or project level. At the individual and team levels, Humphrey developed the Personal Software Process (PSP) and Team Software Process (TSP) (Humphrey, 1995, 1997, 2000a, 2000b). The PSP shows software engineers how to plan and track their work, and how to use good, consistent practices that lead to high-quality software. Time management, good software engineering practices, data tracking, and analysis at the individual level are among the focus areas of PSP. The TSP is built on the PSP and addresses how to apply similar engineering discipline to the full range of a team's software tasks. The PSP and TSP can be viewed as the individual and team versions of the capability maturity model (CMM), respectively. Per Humphrey's guidelines, PSP introduction should follow organizational process improvement and should generally be deferred until the organization is working on achieving at least CMM level 2 (Humphrey, 1995).

Since the early 1990s, a number of capability maturity models have been developed for different disciplines. The Capability Maturity Model Integration (CMMI) was developed by integrating practices from four CMMs: for software engineering, for systems engineering, for integrated product and process development (IPPD), and for acquisition. It was released in late 2001 (Software Engineering Institute, 2001a, 2001b). Organizations that want to pursue process improvement across disciplines can now rely on a consistent model. The CMMI has two representations, the staged representation and the continuous representation. The staged representation of the CMMI provides five levels of process maturity.

Maturity Level 1: Initial

Processes are ad hoc and chaotic.

Maturity Level 2: Managed

Focuses on basic project management. The process areas (PAs) are:

  • Requirements management
  • Project planning
  • Project monitoring and control
  • Supplier agreement management
  • Measurement and analysis
  • Process and product quality assurance
  • Configuration management

Maturity Level 3: Defined

Focuses on process standardization. The process areas are:

  • Requirements development
  • Technical solution
  • Product integration
  • Verification
  • Validation
  • Organizational process focus
  • Organizational process definition
  • Integrated project management
  • Risk management
  • Decision analysis and resolution
  • Organizational environment for integration (IPPD)
  • Integrated teaming (IPPD)

Maturity Level 4: Quantitatively Managed

Focuses on quantitative management. The process areas are:

  • Organizational process performance
  • Quantitative project management

Maturity Level 5: Optimizing

Focuses on continuous process improvement. The process areas are:

  • Organizational innovation and deployment
  • Causal analysis and resolution
The continuous representation of the CMMI is used to describe the capability level of individual process areas. The capability levels are as follows:

  • Capability Level 0: Incomplete
  • Capability Level 1: Performed
  • Capability Level 2: Managed
  • Capability Level 3: Defined
  • Capability Level 4: Quantitatively Managed
  • Capability Level 5: Optimizing

The two representations of the CMMI take different approaches to process improvement. One focuses on the organization as a whole and provides a road map for the organization to understand and improve its processes through successive stages. The other focuses on individual process areas, allowing the organization to concentrate on the processes where it most needs greater capability. The rules for moving from one representation to the other have been defined; therefore, choosing one representation does not preclude the use of the other at a later time.

2.8.2 The SPR Assessment

Software Productivity Research, Inc. (SPR), developed the SPR assessment method at about the same time the SEI process maturity model was developed (Jones, 1986). The SEI and SPR methods share a large degree of similarity but also have some substantial differences (Jones, 1992), and some leading U.S. software developers use both methods concurrently. While SEI's questions focus on software organization structure and software process, SPR's questions cover both strategic corporate issues and tactical project issues that affect quality, productivity, and user satisfaction. The SPR questionnaire contains about 400 questions. Furthermore, the SPR questions are linked multiple-choice questions answered on a five-point Likert scale, whereas the SEI method uses a binary (yes/no) scale. The overall process assessment outcome of the SPR method is expressed on the same five-point scale:

  1. Excellent
  2. Good
  3. Average
  4. Below average
  5. Poor

Unlike SEI's five maturity levels, which have defined criteria, the SPR questions are structured so that a rating of "3" is the approximate average for the topic being explored. SPR has also developed an automated software tool (CHECKPOINT) for assessment and for resource planning and quality projection. In addition, the SPR method collects quantitative productivity and quality data from each project assessed, another difference from the SEI assessment method.
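Because the questions are calibrated so that 3 is the approximate average, a crude overall rating can be formed by averaging the responses. This is only an illustration; SPR's actual aggregation in CHECKPOINT is more sophisticated and proprietary:

```python
# Rough sketch: map the mean of five-point Likert responses back to
# the SPR assessment scale. Calibration details are illustrative.

SCALE = {1: "Excellent", 2: "Good", 3: "Average",
         4: "Below average", 5: "Poor"}

def overall_rating(responses):
    """Average the 1-5 Likert responses and map to the nearest label."""
    mean = sum(responses) / len(responses)
    return mean, SCALE[min(5, max(1, round(mean)))]
```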

With regard to software quality and metrics, topics such as the following are addressed by the SPR questions:

  • Quality and productivity measurements
  • Pretest defect removal experience among programmers
  • Testing defect removal experience among programmers
  • Project quality and reliability targets
  • Pretest defect removal at the project level
  • Project testing defect removal
  • Postrelease defect removal

Findings of the SPR assessments are often divided into five major themes (Jones, 2000):

  • Findings about the projects or software products assessed
  • Findings about the software technologies used
  • Findings about the software processes used
  • Findings about the ergonomics and work environments for staff
  • Findings about personnel and training for management and staff

According to Jones (2000), as of 2000, SPR had performed assessments and benchmarks for nearly 350 corporations and 50 government organizations, with the number of sites assessed in excess of 600. The percentage distribution of these assessments across the five assessment scales is: excellent, 3.0%; above average, 18.0%; average, 54.0%; below average, 22.0%; poor, 3.0%.

2.8.3 The Malcolm Baldrige Assessment

The Malcolm Baldrige National Quality Award (MBNQA) is the most prestigious quality award in the United States. Established in 1988 by the U.S. Department of Commerce (and named after Secretary Malcolm Baldrige), the award is given annually to recognize U.S. companies that excel in quality management and quality achievement. The examination criteria are divided into seven categories that contain twenty-eight examination items:

  • Leadership
  • Information and analysis
  • Strategic quality planning
  • Human resource utilization
  • Quality assurance of products and services
  • Quality results
  • Customer satisfaction

The system for scoring the examination items is based on three evaluation dimensions: approach, deployment, and results. Each item requires information relating to at least one of these dimensions. Approach refers to the methods the company uses to achieve the purposes addressed in the examination item. Deployment refers to the extent to which the approach is applied. Results refers to the outcomes and effects of applying the approach.

The purpose of the Malcolm Baldrige assessment approach (the examination items and their assessment) is fivefold:

  1. Elevate quality standards and expectations in the United States.
  2. Facilitate communication and sharing among and within organizations of all types based on a common understanding of key quality requirements.
  3. Serve as a working tool for planning, training, assessment, and other uses.
  4. Provide the basis for making the award.
  5. Provide feedback to the applicants.

There are 1,000 points available in the award criteria. Each examination item is given a percentage score ranging from 0% to 100%. A candidate for the Baldrige award should score above 70%, which generally translates as follows:

  • For an approach examination item, continuous refinement of approaches is in place, and a majority of the approaches are linked to each other.
  • For a deployment examination item, deployment has reached all of the company's major business areas as well as many support areas.
  • For a results examination item, the company's results in many of its major areas are among the highest in the industry. There should be evidence that the results are caused by the approach.
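The 1,000-point arithmetic can be sketched as a weighted sum of item scores. The item weights below are placeholders, not the actual criteria point values:

```python
# Hypothetical sketch of the Baldrige scoring arithmetic.

def baldrige_score(items):
    """items: list of (max_points, fraction_score in 0.0-1.0).
    Returns total earned points; max points should sum to 1,000."""
    return sum(points * fraction for points, fraction in items)

def is_award_candidate(items):
    """A candidate should score above 70% of the available points."""
    total_max = sum(points for points, _ in items)
    return baldrige_score(items) / total_max > 0.70
```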

While the score is important, the most valuable output from an assessment is the feedback, which consists of the observed strengths and, most significant, the areas for improvement. It is not unusual for even the higher-scoring enterprises to receive hundreds of improvement suggestions. By focusing on and eliminating the high-priority weaknesses, the company can be assured of continuous improvement.

To be the MBNQA winner, the four basic elements of the award criteria must be evident:

  1. Driver: The leadership of the senior executive management team.
  2. System: The set of well-defined and well-designed processes for meeting the company's quality and performance requirements.
  3. Measure of progress: The results of the company's in-process quality measurements (aimed at improving customer value and company performance).
  4. Goal: The basic aim of the quality process is the delivery of continuously improving value to customers.

Many U.S. companies have adopted the Malcolm Baldrige assessment and its discipline as the basis for their in-company quality programs. In 1992, the European Foundation for Quality Management established the European Quality Award, which is given to the most successful proponents of total quality management in Western Europe. Its criteria are similar to those of the Baldrige award (1,000 maximum points, with approach, deployment, and results as scoring dimensions). Although there are nine categories (versus Baldrige's seven), they cover similar examination areas. In 1998, the seven MBNQA categories were reorganized as: Leadership, Strategic Planning, Customer and Market Focus, Information and Analysis, Human Resource Focus, Process Management, and Business Results. Many U.S. states have established quality award programs modeled on the Malcolm Baldrige National Quality Award.

Unlike the SEI and SPR assessments, which focus on software organizations, projects, and processes, the MBNQA and the European Quality Award encompass a much broader scope. They are quality standards for overall quality management, regardless of industry. Indeed, the MBNQA covers three broad categories: manufacturing, service, and small business.

2.8.4 ISO 9000

ISO 9000, a set of standards and guidelines for a quality assurance management system, represents another body of quality standards. It was established by the International Organization for Standardization and has been adopted by the European Community. Many European Community companies are ISO 9000 registered. To position their products to compete better in the European market, many U.S. companies are working to have their development and manufacturing processes registered, which requires passing a formal audit covering twenty elements. Guidelines for applying the twenty elements to the development, supply, and maintenance of software are specified in ISO 9000-3. The twenty elements are as follows:

  1. Management responsibility
  2. Quality system
  3. Contract review
  4. Design control
  5. Document control
  6. Purchasing
  7. Purchaser-supplied product
  8. Product identification and traceability
  9. Process control
  10. Inspection and testing
  11. Inspection, measuring, and test equipment
  12. Inspection and test status
  13. Control of nonconforming product
  14. Corrective action
  15. Handling, storage, packaging, and delivery
  16. Quality records
  17. Internal quality audits
  18. Training
  19. Servicing
  20. Statistical techniques
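An audit result can be modeled as findings recorded against these elements, with a simplified pass rule. This is illustrative only; actual registrar criteria treat major and minor nonconformances in more nuanced ways:

```python
# Simplified model of an ISO 9000 audit outcome: findings are recorded
# per element, and registration requires no major nonconformances.

def audit_outcome(findings):
    """findings: dict mapping element name -> list of "major"/"minor"
    nonconformances. Returns (passed, majors-per-element)."""
    majors = {elem: ncs.count("major")
              for elem, ncs in findings.items() if "major" in ncs}
    return len(majors) == 0, majors
```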

Many companies pursue ISO 9000 registration, and many fail the first audit: initial failure rates range from 60% to 70%. This statistic is probably explained by the complexity of the standards, their bureaucratic nature, the opportunity for omissions, and a lack of familiarity with the requirements.

From the software standpoint, corrective action and document control are the areas of most nonconformance. As discussed earlier, the defect prevention process (DPP) is a good vehicle to address the element of corrective action. It is important, however, to make sure that the process is fully implemented throughout the entire organization. If an organization does not implement the DPP, a process for corrective action must be established to meet the ISO requirements.

With regard to document control, ISO 9000 has very strong requirements, as the following examples demonstrate:

  • Must be adequate for purpose: The document must allow a properly trained person to adequately perform the described duties.
  • Owner must be identified: The owner may be a person or department. The owner is not necessarily the author.
  • Properly approved before issued: Qualified approvers must be identified by the organization's title and the approver's name before the document is distributed.
  • Distribution must be controlled: Control may consist of:

    • Keeping a master hard copy with distribution on demand
    • Maintaining a distribution record
    • Having documents reside online available to all authorized users, with the following control statement, "Master document is the online version."
  • Version identified: The version must be identified clearly by a version level or a date.
  • Pages numbered: All pages must be numbered to ensure sections are not missing.
  • Total pages indicated: The total number of pages must be indicated, at least on the title page.
  • Promptly destroyed when obsolete: When a controlled document is revised or replaced, all copies of it must be recalled or destroyed. Individuals who receive controlled documents are responsible for prompt disposition of superseded documents.

From our perspective, the more interesting requirements address software metrics, which are listed under the element of statistical techniques. The requirements address both product metrics and process metrics.

  1. Product metrics: Measurements should be used for the following purposes:

    • To collect data and report metric values on a regular basis
    • To identify the current level of performance on each metric
    • To take remedial action if metric levels grow worse or exceed established target levels
    • To establish specific improvement goals in terms of the metrics

      At a minimum, some metrics should be used that represent

    • Reported field failures
    • Defects from customer viewpoint

      Selected metrics should be described such that results are comparable.

  2. Process metrics

    • Ask if in-process quality objectives are being met.
    • Address how well development process is being carried out with checkpoints.
    • Address how effective the development process is at reducing the probability that faults are introduced or go undetected.
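The remedial-action requirement under product metrics (report regularly, compare against targets, and act when a metric worsens or exceeds its target level) can be sketched as:

```python
# Sketch of the remedial-action rule, with hypothetical data.
# `history` is a chronological series of one metric (e.g., reported
# field failures per month); lower values are assumed better.

def check_metric(history, target):
    """Flag action if the latest value exceeds the target level or is
    worse (higher) than the previous value."""
    latest = history[-1]
    worsening = len(history) > 1 and latest > history[-2]
    return {"latest": latest,
            "exceeds_target": latest > target,
            "action_needed": latest > target or worsening}
```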

The MBNQA criteria and the ISO 9000 quality assurance system can complement each other as an enterprise pursues quality. Note, however, that Baldrige is a nonprescriptive assessment tool that illuminates improvement items, whereas ISO 9000 registration requires passing an audit. Furthermore, while the Malcolm Baldrige assessment focuses on both process and results, the ISO 9000 audit focuses on a quality management system and process control. Simply put, ISO 9000 can be described as "say what you do, do what you say, and prove it." But ISO 9000 does not examine the quality results and customer satisfaction to which the MBNQA is heavily tilted. The two sets of standards are thus complementary, and development organizations that adopt both will have more rigorous processes.

Figure 2.8 shows a comparison of ISO 9000 and the Baldrige scoring system. For the Baldrige system, the length of the arrow for each category is proportional to the maximum score for that category. For ISO 9000, the lengths of the arrows are based on the perceived strength of focus from the IBM Rochester ISO 9000 audit experience (the initial registration audit in 1992 and subsequent yearly surveillance audits). As the figure shows, if the strengths of ISO 9000 (process quality and process implementation) are combined with the strengths of the Baldrige discipline (quality results, customer focus and satisfaction, and broader issues such as leadership and human resource development), the resulting quality system will have both broad-based coverage and deep penetration.

Figure 2.8. Malcolm Baldrige Assessment and ISO 9000: A Comparison Based on the Baldrige Scoring

graphics/02fig08.gif

The Baldrige/ISO synergism comes from the following:

  • The formal ISO documentation requirements (e.g., quality record) facilitate addressing the Baldrige examination items.
  • The formal ISO validation requirements (i.e., internal assessments, external audits, and periodic surveillance) assist completeness and thoroughness.
  • The heavy ISO emphasis on corrective action contributes to the company's continuous improvement program.
  • The audit process itself results in additional focus on many of the Baldrige examination areas.

In recent years, the ISO technical committee responsible for the ISO 9000 family of quality standards has undertaken a major project to update the standards and make them more user-friendly. ISO 9001:2000 contains the first major changes to the standards since their initial issue. Some of the major changes include the following (British Standards Institution, 2001; Cianfrani, Tsiakals, and West, 2001):

  • Use of a process approach and new structure for standards built around a process model that considers all work in terms of inputs and outputs
  • Shift of emphasis from preparing documented procedures to describe the system to developing and managing a family of effective processes
  • Greater emphasis on role of top management
  • Increased emphasis on the customer, including understanding needs, meeting requirements, and measuring customer satisfaction
  • Emphasis on setting measurable objectives and on measuring product and process performance
  • Introduction of requirements for analysis and the use of data to define opportunity for improvement
  • Formalization of the concept of continual improvement of the quality management system
  • Use of wording that is easily understood in all product sectors, not just hardware
  • Provisions via the application clause to adapt ISO 9001:2000 to all sizes and kinds of organizations and to all sectors of the marketplace

From these changes, it appears ISO 9000 is moving closer to the MBNQA criteria while maintaining a strong process improvement focus.

Metrics and Models in Software Quality Engineering (2nd Edition)
ISBN: 0201729156
Year: 2001