Summary


The Process Areas for Maturity Level 4: Quantitatively Managed

Organizational Process Performance

The purpose of Organizational Process Performance is to establish and maintain a quantitative understanding of the performance of the organization's set of standard processes in support of quality and process-performance objectives, and to provide the process performance data, baselines, and models to quantitatively manage the organization's projects. Specific Goals and Practices for this process area include:

  • SG1: Establish performance baselines and models

    • SP1.1: Select processes

    • SP1.2: Establish process performance measures

    • SP1.3: Establish quality and process performance objectives

    • SP1.4: Establish process performance baselines

    • SP1.5: Establish process performance models

This process area includes measurements for both process and product. It combines these measures to determine, in quantitative terms, the quality of both the process and the product.

Process performance baselines and process performance models are now included in goals for this process area, and not just as suggested best practices. A process performance baseline (PPB) documents the historical results achieved by following a process. A PPB is used as a benchmark for comparing actual process performance against expected process performance. A process performance model (PPM) describes the relationships among attributes (e.g., defects) of a process and its work products. A PPM is used to estimate or predict a critical value that cannot be measured until later in the project's life; for example, predicting the number of delivered defects throughout the life cycle. More information on PPBs and PPMs can be found in Chapter 19, A High Maturity Perspective.
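
To make the PPB/PPM distinction concrete, here is a minimal Python sketch of computing a baseline from past results and fitting a one-variable model to predict a later outcome. The data and variable names are hypothetical; real PPMs typically involve more factors and formal statistical validation.

    # Hypothetical per-project history: code-review defect density and
    # delivered defect density (defects per KLOC). Values are invented.
    import statistics  # linear_regression requires Python 3.10+

    review_density = [12.1, 10.4, 14.8, 11.9, 13.2, 9.7, 12.6]
    delivered_density = [0.9, 1.4, 0.6, 1.1, 0.8, 1.6, 0.9]

    # PPB: the central tendency and spread of the process as historically run.
    mean = statistics.fmean(review_density)
    sigma = statistics.stdev(review_density)
    print(f"PPB: mean={mean:.1f} defects/KLOC, "
          f"expected range {mean - 3 * sigma:.1f} to {mean + 3 * sigma:.1f}")

    # PPM: a deliberately simple regression relating an early measure
    # (review defect density) to a late outcome (delivered defect density),
    # so the outcome can be predicted before it is observable.
    slope, intercept = statistics.linear_regression(review_density,
                                                    delivered_density)
    predicted = slope * 13.5 + intercept  # prediction for a new project
    print(f"PPM: predicted delivered density = {predicted:.2f} defects/KLOC")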

Remember: do not wait until Level 4 to focus on measurement and to start collecting measures; that is far too late. The Measurement and Analysis process area resides at Level 2, so if you are attempting to achieve a Maturity Level 2 rating, this is probably not a process area to tailor out. And if you are using the continuous representation, which supposedly allows you to select which process areas to use, Measurement and Analysis should also be selected.

At Level 2, measures are collected and stored in a per-project database; at Level 3, they bubble up to an organizational database and are reviewed for consistency and accuracy; and at Level 4, statistically based controls are applied to them. What to put under statistical control depends on where the problems are in your organization, and on which processes and measures will add value to your management techniques. This implies that not all processes must be put under statistical control. However, we do suggest that, for Level 4 and for this process area in particular, the organization's set of standard processes (OSSP) must be understood from a statistical point of view.
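
As one common illustration of "statistically based controls," the following sketch applies an individuals (XmR) control chart to hypothetical inspection preparation rates. The 2.66 factor is the standard XmR constant that converts the average moving range into approximate 3-sigma limits; the data are invented for illustration.

    # Hypothetical inspection preparation rates (pages per hour) from ten
    # successive inspections of the same kind.
    rates = [4.2, 3.8, 4.5, 4.1, 3.9, 4.4, 4.0, 8.5, 4.3, 4.1]

    moving_ranges = [abs(b - a) for a, b in zip(rates, rates[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    center = sum(rates) / len(rates)
    ucl = center + 2.66 * mr_bar  # upper control limit
    lcl = center - 2.66 * mr_bar  # lower control limit

    for i, x in enumerate(rates, start=1):
        out = not (lcl <= x <= ucl)
        flag = "  <-- investigate: possible special cause" if out else ""
        print(f"inspection {i:2}: {x:.1f}  (limits {lcl:.1f} to {ucl:.1f}){flag}")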

The most common measurements we see in use for this process area are size, effort, cost, schedule, and product defect density. These measurements are usually displayed as ranges rather than as absolute points. Subsets of measures can be generated and applied based on domain, new development versus maintenance, and type of customer.

Performance-related measurements can include schedule variance (lateness), effort variance, and unplanned tasks. Quality-related measurements can include rework and defects. These defects can be collected during all life-cycle phases, including requirements inspections, design inspections, code inspections, unit testing, integration testing, and system testing. Process-related measures that we commonly see involve productivity at the different life-cycle phases. For example, in Testing, how many hours were spent deriving test cases versus how many tests were actually completed?
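
For clarity, here is a short sketch of how these simple variance and productivity calculations look, with hypothetical planned-versus-actual numbers:

    # Hypothetical planned vs. actual values for one reporting period.
    planned_days, actual_days = 40, 46
    planned_hours, actual_hours = 320, 390
    test_case_hours, tests_completed = 80, 64

    schedule_variance = (actual_days - planned_days) / planned_days
    effort_variance = (actual_hours - planned_hours) / planned_hours
    test_productivity = tests_completed / test_case_hours  # tests per hour spent

    print(f"schedule variance: {schedule_variance:+.0%}")   # +15%
    print(f"effort variance:   {effort_variance:+.0%}")     # +22%
    print(f"test productivity: {test_productivity:.2f} tests/hour")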

To be even more confusing, this process area refers to process performance as including both process measures and product measures. It later refers to "quality and process-performance objectives" to emphasize the importance of product quality. The confusion arises because product measures are primarily used in organizations to demonstrate quality. This process area cites effort, cycle time, and defect removal effectiveness as process measures, and reliability and defect density as product measures. However, the same source data (e.g., defects) can be used for both product and process measures. A process measure would be defect removal effectiveness: the percentage of existing defects removed by a process, such as the inspection process or the testing process. A product measure would be defect density: the number of defects per unit of product size, such as the number of defects per thousand lines of code, which reflects the quality of the product. Basically, it may help to read "quality measure" as "product measure" in this process area.
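
The following sketch shows how the same defect data can feed both a process measure (defect removal effectiveness) and a product measure (defect density); the counts are hypothetical.

    # Hypothetical counts: defects removed by inspection vs. defects that
    # escaped to later phases, for a 12 KLOC product.
    found_in_inspections = 45
    found_after_inspections = 15
    kloc = 12.0

    # Process measure: what fraction of the defects present did inspection remove?
    removal_effectiveness = found_in_inspections / (found_in_inspections
                                                    + found_after_inspections)
    # Product measure: how defective is the product, normalized by its size?
    delivered_density = found_after_inspections / kloc

    print(f"defect removal effectiveness: {removal_effectiveness:.0%}")  # 75%
    print(f"delivered defect density: {delivered_density:.2f} defects/KLOC")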

Training is critical in this process area, in both modeling techniques and quantitative methods.

There are no new generic goals for Levels 4 and 5 in the staged representation because the process areas themselves include the basic tenets. The continuous representation does have generic goals because it allows the selection of various process areas, so you may decide not to select the process areas at Maturity Level 4. If you do that, the generic goals of the continuous representation ensure that the basic concepts of statistical control and its application will still be met.

This process area covers both project-level and organization-level activities. Selecting processes to measure, and selecting the appropriate measures themselves, can be iterative to meet changing business needs. Establishing quality and process-performance objectives can be iterative as well, based on fixing special causes of variation.

An example of the importance of not mixing "apples and oranges" in this process area follows. Suppose you are collecting peer review data: the number of defects found and the type of each defect (code, requirement, design, etc.). Be sure to analyze those data appropriately. For example, one review of code produces 17 defects, which may not sound like much, while a review of another program results in 25 defects, obviously more than the first. However, by reviewing the number of lines of code in each product, you discover that the review yielding 17 defects covered a program of only 11 lines of code, while the review yielding 25 defects covered a program of 1,500 lines of code. Moreover, the 17 defects were so severe that the program needed a total rewrite, while the 25 defects were mostly cosmetic, with only one or two potential problem areas. So you must study the data in terms of the number of defects, their type and severity, the number of pages or lines of code reviewed, complexity, domain, and the type of technology used.
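
Here is a small sketch of the normalization that makes this comparison meaningful, using the counts from the example above; the severe-defect counts follow the narrative (a total rewrite versus one or two potential problem areas) and are otherwise illustrative.

    # The two reviews from the example above, normalized by size and severity.
    reviews = [
        {"name": "program A", "defects": 17, "loc": 11, "severe": 17},
        {"name": "program B", "defects": 25, "loc": 1500, "severe": 2},
    ]

    for r in reviews:
        density = r["defects"] / (r["loc"] / 1000)  # defects per KLOC
        severe_share = r["severe"] / r["defects"]
        print(f"{r['name']}: {density:,.0f} defects/KLOC, "
              f"{severe_share:.0%} severe")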

Measures can usually be traced back to life-cycle activities and products. For example, the percentage of changes to the Requirements Document, examined while reviewing the document itself, can reveal problems with the process used for collecting requirements and physically writing the document. These numbers can then be used to justify more rigorous training in the area of weakness. You might also compare the number of defects coming out of the Requirements phase with the number of defects coming out of the Test phase. One study determined that 85 percent of the defects found in the Test phase were introduced in the Requirements phase.
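
A minimal sketch of such a phase-containment tally follows, assuming (hypothetically) that each defect record carries both the phase where it was found and the phase where it was introduced:

    from collections import Counter

    # Hypothetical defect log: (phase found, phase introduced) per defect.
    defects = [("Test", "Requirements"), ("Test", "Requirements"),
               ("Test", "Code"), ("Design", "Requirements"),
               ("Test", "Requirements"), ("Code", "Design")]

    found_in_test = [origin for found, origin in defects if found == "Test"]
    origins = Counter(found_in_test)
    share = origins["Requirements"] / len(found_in_test)
    print(f"{share:.0%} of Test-phase defects originated in Requirements")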

We admit that measurement programs can become onerous. The CMMI response to this criticism is that measurements should be tied to the business objectives of the organization. So, if you are highly driven by time-to-market, you would focus on product defects and the schedule. Decisions to release the product with an "appropriate" limit of defects would be made by senior management in order to make the schedule date. That "appropriate" limit should be determined from historical data (and analysis of those data in your measurement repository): the number and types of defects that can be released into the marketplace and still satisfy the customer and make the product work.

Organizational Process Performance includes deciding which processes to include as part of statistical performance analyses; defining metrics to use as part of the process performance analyses; defining quantitative objectives for quality and process performance (quality and process "by the numbers"); and generating process performance baselines and models.

Quantitative Project Management

The purpose of Quantitative Project Management is to quantitatively manage the project's defined process to achieve the project's established quality and process performance objectives. Specific Goals and Practices for this process area include:

  • SG1: Quantitatively manage the project

    • SP1.1: Establish the project's objectives

    • SP1.2: Compose the defined process

    • SP1.3: Select the subprocesses that will be statistically managed

    • SP1.4: Manage project performance

  • SG2: Statistically manage subprocess performance

    • SP2.1: Select measures and analytic techniques

    • SP2.2: Apply statistical methods to understand variation

    • SP2.3: Monitor performance of the selected subprocesses

    • SP2.4: Record statistical management data

In this process area, usage of the organizational-level measurement repository is refined. This process area describes what projects need to do to manage quantitatively. Generally speaking, the division of labor we have seen is that experienced managers and measurement personnel identify measures, senior-level project personnel collect them, and projects use them. Training for each role needs to be addressed.

Project managers should review the project measures, and how they are being used, at least weekly. This information is usually communicated to senior management. A measurement group is usually needed to support measurement activities. Collection of data is easier if automated tools are used; manual collection can be burdensome and can lead to abandonment of the effort. Automated tools are very helpful, but remember: do not go out and buy a tool willy-nilly. Most tools cannot support the very project-specific and organization-specific measures that need to be taken. And remember the Level 3 process area, Requirements Development? Before you buy a tool, you are supposed to define the requirements for that tool, not buy a tool and then define the requirements that it happens to meet. We have found that the best tools for collecting and storing metrics have been developed by the organization itself. So, you have programmers: use them. Get them to develop a tool or tools. This approach also gets buy-in from them for some of the process improvement activities. What is the best tool? Your brain. God gave you a brain; now use it. Remember: not only do you need to collect the data, you also need to analyze them. Your brain will certainly come in handy for that part.

There can be several organizational measurement repositories, or layers within one overall repository, so as not to mix data that may lead to misleading numbers and bad decisions. Repositories require years of historical data using the same normalized measures, along with reviews and analyses of those data. Training and practice in this effort need to occur. Running projects quantitatively is not an overnight transition.

A bad example of collecting data and using them follows. Most organizations simply ask, "How many years must we collect data to prove that we have met the criteria for historically accurate data?" Wrong question. One organization collected data about its projects for 15 years. For the first 14 years, the data collected were simply when each project started and when it ended. Each project took about seven years to complete. We find it difficult to imagine any real value added to these projects by simply collecting start and end dates. The 15th year of data collection included the start and end of each phase of software development: Requirements start and end, Design start and end, Code start and end, Test start and end, and Installation start and end. While we can find much more value in these types of data, we believe that having only one year of them was not enough, especially since each project ran almost seven years and most of the projects were still in the Requirements phase. So comparisons for bottlenecks and other trends were almost impossible, and would have been inaccurate. However, the organization tried to advise us that these data met the criteria for stable, consistent data because they had data from as far back as 15 years. Sorry, no cigar. By the way, this example occurred during an external evaluation of an organization seeking a Maturity Level 4 rating.

Quantitative Project Management includes quantitatively defining project objectives; using stable and consistent historical data to construct the project's defined process; selecting subprocesses of the project's defined process that will be statistically managed; monitoring the project against the quantitative measures and objectives; using analytical techniques to understand variation; and monitoring performance and recording measurement data in the organization's measurement repository.
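
To illustrate the project-side use of these activities, here is a minimal sketch of checking new subprocess observations against control limits drawn from the organization's baseline; the limits and observations are hypothetical.

    # Hypothetical control limits, as would be drawn from the organization's
    # process performance baseline for this subprocess.
    BASELINE = {"center": 4.6, "lcl": 1.9, "ucl": 7.3}

    def classify(observation: float, baseline: dict) -> str:
        """Classify one new observation against the baseline control limits."""
        if baseline["lcl"] <= observation <= baseline["ucl"]:
            return "within limits: common-cause variation, no action"
        return "outside limits: investigate for a special cause"

    for obs in [4.1, 5.0, 8.2]:  # new project observations (hypothetical)
        print(f"{obs}: {classify(obs, BASELINE)}")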



