The Moving Range Control Chart


Representing and Reporting Data

Charts are based on data. To produce accurate charts, the data must be at the level of detail needed for process control. Those of you operating in low-maturity organizations (Maturity Levels 1 or 2) will not be able to produce accurate charts. Why not? Because, at your current level of functioning, your processes are not yet stable enough to produce consistently accurate data at the level of detail needed for process control. That is why SPC is not broached in the CMMI until Maturity Level 4. So do not jump right into producing statistical charts and graphs when initiating a process improvement effort; you are not ready for it yet.

Data must be consistent. What does that mean? It means that the data were collected at the same point in the process and in the same manner. I once assessed an organization that had proclaimed itself to be a Level 4 organization, meaning that they were able to quantitatively predict their quality, productivity, and schedule performance in statistically accurate terms, and manage with the data. The data I found were in no way, shape, or form statistically accurate. Yes, the organization had been collecting data for 15 years. But what did the data say? What did they look like? For the first 14 years, the organization had collected data on all of their projects for "When the Project Started" and "When the Project Ended." That was it. Each project lasted from five to eight years. These data tell you just about nothing. Because the customer had requested an outside, external assessment, the organization had, for the last year only, collected data according to their system development life cycle. So they collected data relating to when each project started the Requirements phase and when each project ended that phase. Then they collected start and end dates for the Design phase, the Construction phase, the Testing phase, and the Implementation/Delivery phase. They then mixed all of these data in with the data from the previous 14 years. This is not an example of consistent data.

Another example of inconsistent data that we run across quite often is that collected from peer reviews. It is a good idea to collect data from peer reviews. If these data are collected during peer reviews for, say, code, these data can be used to show trends in programming errors found that might be solved via increased training in coding techniques or in improving the processes used for eliciting and documenting requirements, and then designing systems. But once again, the data must be consistent. For example, suppose a peer review is done on a program comprising 8000 lines of code, and eight errors are found. Another peer review is done on a program comprising eight lines of code, and six errors are found. The second peer review found fewer errors. Does that mean that the person who coded the second program is a better coder than the person who coded the first program? Of course not. You also need to consider the complexity of the programs, the length of the programs, the type of language, etc. Just collecting data for the sake of collecting data and populating your brand-new database with numbers is not a good enough reason to jumble all of these numbers together.
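The comparison above can be made concrete by normalizing raw error counts to defect density (errors per thousand lines of code), so that reviews of very different-sized programs can be compared at all. This is a minimal sketch using only the figures from the example; the function name is mine, not a standard API.

```python
# Normalize peer-review findings to defects per KLOC so that reviews
# of different-sized programs become comparable.
def defects_per_kloc(errors, lines_of_code):
    """Defect density in errors per thousand lines of code."""
    return errors / (lines_of_code / 1000)

# Figures from the example above.
first = defects_per_kloc(8, 8000)   # 8 errors in an 8000-line program
second = defects_per_kloc(6, 8)     # 6 errors in an 8-line program

print(f"first review:  {first:.1f} defects/KLOC")   # 1.0
print(f"second review: {second:.1f} defects/KLOC")  # 750.0
```

The second review found fewer errors in absolute terms, but its defect density is 750 times higher, which is why raw counts alone mislead.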

What questions should you ask yourselves when reviewing the data for your charts? The following seven questions are a start:

  1. Who collected these data? (Hopefully the same people who are trained in proper data collection techniques.)

  2. How were the data collected? (Hopefully by automated means and at the same part of the process.)

  3. When were the data collected? (Hopefully all at the same time on the same day or at the same time in the process; very important for accounting data dealing with month-end or year-end closings.)

  4. What do the values presented mean? (Have you changed the process recently? Do these values really tell me what I want or need to know?)

  5. How were these values computed from raw inputs? (Have you computed the data to arrive at the results you want, or to accurately depict the true voice of the process?)

  6. What formulas were used? (Are they measuring what we need to measure? Are they working? Are they still relevant?)

and the most important question of all:

  7. Are we collecting the right data, and are we collecting the data right? (The data collected should be consistent, and the way data are collected should also be consistent. Do the data contain the correct information for analysis? In our peer review example, this information would be size, complexity, and programming language.)
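One way to put question 7 into practice is to have the collection step refuse any record that lacks the context needed for later analysis. The field names below (size_loc, complexity, language, errors_found) are assumptions chosen for illustration, not a prescribed schema.

```python
# Reject peer-review records that lack the context needed for analysis:
# size, complexity, and programming language, plus the finding itself.
REQUIRED_FIELDS = {"size_loc", "complexity", "language", "errors_found"}

def missing_fields(record):
    """Return a sorted list of missing fields; empty means usable."""
    return sorted(REQUIRED_FIELDS - record.keys())

record = {"size_loc": 8000, "errors_found": 8, "language": "C"}
print(missing_fields(record))  # ['complexity']
```

A record that fails this check should be sent back for completion rather than "jumbled together" with the rest of the database.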

Much of the data reported to senior management is contained in some sort of report, usually produced and presented monthly. These data are usually aggregated; that is, they have been collected from various processes and various parts of those processes, then summarized and combined into percentages of something or other. Do not trust these data. They are just that: data (numbers), not meaningful information. While on the surface they may seem logical and accurate, they really are not. For example, most financial/budget/cost data are collected at "month-end closing." However, month-end closing often follows inconsistent processes across departments. Each department follows procedures that vary at least somewhat throughout the organization, and each collects different data that measure its own work and the cost of that work. The "month-end" process therefore varies: in the procedures used to collect the data, in the data collected, in the persons collecting them, and in when the books are actually closed and the accounts reconciled. The data from all of the departments are then aggregated and summed, making them look accurate. The point is that these data cannot be used as the basis of any sort of reasonable prediction because they are a mixture of apples and oranges. A control chart based on such aggregated data would be of no value. If you really wanted to improve your month-end closing process, you would need to chart the process for each individual department and then determine where improvements are needed.
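Once a single department's data are consistent, the individuals-and-moving-range (XmR) chart of this section's title can be computed from them. The sketch below uses invented closing times and the conventional XmR scaling constants (2.66 for the individuals limits, 3.268 for the moving-range upper limit); it is an illustration, not a charting library.

```python
# Compute individuals (X) and moving-range (mR) control limits for one
# department's month-end closing times, in days. The data are invented;
# 2.66 and 3.268 are the standard XmR chart constants.
def xmr_limits(values):
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = sum(values) / len(values)            # center line for X chart
    mr_bar = sum(moving_ranges) / len(moving_ranges)  # average moving range
    return {
        "X center": x_bar,
        "X UCL": x_bar + 2.66 * mr_bar,
        "X LCL": x_bar - 2.66 * mr_bar,
        "mR center": mr_bar,
        "mR UCL": 3.268 * mr_bar,
    }

closing_days = [5, 6, 4, 7, 5, 6, 5, 8, 6, 5]  # one department only
limits = xmr_limits(closing_days)
for name, value in limits.items():
    print(f"{name}: {value:.2f}")
```

Computing one such chart per department, rather than one chart over the aggregated numbers, is exactly the per-department charting the paragraph above calls for.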

Know your data. Make sure they represent actual activities that occur, and not just counts of how many times something or other happened.




Interpreting the CMMI (R): A Process Improvement Approach, Second Edition
ISBN: 142006052X
Year: 2005
Pages: 205
