25. MSA - Validity

Overview

Process variation affects how the resulting products and services appear to Customers. However, what you (and ultimately the Customer) see includes not only the variability in the entity itself, but also variation introduced by the way the entity is measured. A simple example of this is to pick up a familiar object, such as a pen. If you were to measure the diameter of the pen and perhaps judge whether the lettering on the pen was "crisp" enough, and then handed the same pen to three other people, there would very likely be differences in the answers. It is also highly likely that if someone handed you the same pen later (without you knowing it was the same pen) and asked you to measure it again, you would come to a different answer or conclusion. The pen itself has not changed; the difference in answers is due purely to the Measurement System and, specifically, errors within it. The higher the Measurement Error, the harder it is to understand the true process capability and behavior.
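As a rough illustration of why this matters, the following sketch (in Python, with invented numbers) simulates the usual variance decomposition for independent errors, where the variance you observe is the sum of the true process variance and the Measurement System variance:

    import random
    import statistics

    random.seed(42)

    # True pen diameters (mm): the process itself varies with std dev 0.05.
    true_diameters = [random.gauss(10.0, 0.05) for _ in range(1000)]

    # What the gage reports: the true value plus independent measurement
    # error, also with std dev 0.05 (both figures are made up).
    measured = [d + random.gauss(0.0, 0.05) for d in true_diameters]

    print(f"True process std dev: {statistics.stdev(true_diameters):.4f}")
    print(f"Observed std dev:     {statistics.stdev(measured):.4f}")
    # The observed spread is roughly 1.4x the true spread (sqrt(2) for
    # equal variances), so capability judged from raw measurements looks
    # worse than the process really is.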

Therefore, it is crucial to analyze Measurement Systems before embarking on any Process Improvement activities.

It is always worth a little introspection here: for all the experiments and analyses done in the past, was the conclusion reached really what happened, or was it driven purely by a noisy Measurement System?

The sole purpose of a Measurement System in Lean Sigma is to collect the right data to answer the questions being asked. To do this, the Team must be confident in the integrity of the data being collected. To confirm Data Integrity, the Team must know

  • The type of data

  • If the available data is usable

  • If the data is suitable for the project

  • If it is not suitable, whether it can be made usable

  • How the data can be audited

  • If the data is trustworthy

To answer these questions, Data Integrity is broken down into two elements:

  • Validity. Is the "right" aspect of the process being measured? The data might be from a reliable method or source, but still not match the operational definitions established for the project.

And after Validity is confirmed (some mending of the Measurement System might be required first):

  • Reliability. Is the valid measurement system producing good data? This considers the accuracy and consistency of the data.

Validity is covered in this section; Reliability is dependent on the data type and is covered in "MSA - Attribute" and "MSA - Continuous" in this chapter.

To confirm Validity of data, the most common approach is to use a Data Integrity Audit, which has the simple aim of determining whether the data is correct and valid. Through the Audit, the Team seeks assurance that the data being used is a clear and accurate record of the actual characteristics or events of interest.

Logistics

Performing a Data Integrity Audit requires the whole Team to participate, together with any other personnel who are part of the subsequent data collection. Participation can be in the planning and structuring of the Audit, in the actual data collection itself, or both.

The Audit itself requires a short data collection to verify that the systems and processes used to capture and record data are robust. An Audit is typically not done for a single metric at a time, but applied to a complete data capture of multiple metrics. For example, an Audit is applied to the whole data capture for a Multi-Vari Study or Multi-Cycle Analysis, rather than just for a single X, otherwise the validation process would take too long.

Planning for the Audit usually takes a Team about 60 minutes, and the Audit itself typically runs for no more than 5-10 data points captured over a period of about a day, or until a major flaw is found in the data capture mechanism. If no problems are found during the Audit, the Team should continue with the data capture as originally planned.

Roadmap

The roadmap to confirming Validity of a planned data capture approach is as follows:

Step 1.

The Data Integrity Audit is applied to a data capture for another tool, such as a Multi-Vari Study; therefore, it is necessary to have the details of the other tool, specifically:

  • Goal

  • Target metrics

  • Sampling Plan

For these, it is useful to refer to "KPOVs and Data" in this chapter.

The Data Integrity Audit is a short pilot run of the Sampling Plan with the aim of confirming validity of the data.
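Purely as an illustration of the details this step gathers, a Team could record them in a structure like the following Python sketch; every field name and value here is an invented example, not a prescribed format:

    from dataclasses import dataclass

    @dataclass
    class MetricPlan:
        name: str
        operational_definition: str
        audit_method: str          # the independent parallel capture (Step 2)
        acceptance_criterion: str  # how good the metric must be to be valid

    sampling_plan = {
        "goal": "Characterize cycle-time variation across shifts",
        "pilot_size": 10,  # the short 5-10 point Audit run
        "metrics": [
            MetricPlan(
                name="cycle_time_s",
                operational_definition="Seconds from order release to pack-out",
                audit_method="Manual stopwatch capture alongside the system log",
                acceptance_criterion="Within 2% of the audit reading",
            ),
        ],
    }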

Step 2.

For each metric in the Sampling Plan, determine how the metric can be validated by a parallel method of capture. Audits must be independent of the data collection, processing, and reporting systems being assessed. This can be accomplished by any method that makes sense, but the key is that there must be a second, independent source of data to compare against the "normal" data system.

For most organizations, data is kept in computer databases, so it is useful to have a representative from the IT organization involved at this point in the project. There are times when portions (if not all) of the data processes are already being checked automatically for data integrity.

If there are no automated systems involved, then a second manual, parallel data capture is required.

For each metric in the Sampling Plan, the Team must agree on acceptance requirements for the metric (i.e., how good the metric must be to be deemed valid). A complete list of criteria should be agreed upon before the Audit is conducted so that expectations are clear.
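Once both captures exist, the comparison against the agreed acceptance requirements might look something like this sketch; the metric names, readings, and the 2% agreement tolerance are all illustrative assumptions:

    # Primary capture (the "normal" data system) vs. the independent
    # parallel Audit capture for the same five pilot points.
    primary = {"cycle_time_s": [42.1, 39.8, 41.5, 40.2, 43.0],
               "defect_count": [2, 0, 1, 1, 3]}
    audit   = {"cycle_time_s": [42.0, 40.1, 41.4, 40.6, 42.8],
               "defect_count": [2, 0, 1, 2, 3]}

    TOLERANCE = 0.02  # assumed criterion: readings agree within 2%

    def agrees(a: float, b: float) -> bool:
        """True if two readings match exactly or within the tolerance."""
        if a == b:
            return True
        return abs(a - b) / max(abs(a), abs(b)) <= TOLERANCE

    for metric in primary:
        matches = sum(agrees(p, q) for p, q in zip(primary[metric], audit[metric]))
        total = len(primary[metric])
        status = "valid" if matches == total else "investigate"
        print(f"{metric}: {matches}/{total} points agree -> {status}")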

Step 3.

Begin the Data Collection as per the Sampling Plan for 5-10 data points, with the Audit data being collected in parallel as per the Audit Plan in Step 2. For the Data Collection, consider the validity of the data as follows:

  • Is the recorded data what the Team meant to record? It is useful to refer back to the Operational Definition of the metric at this point (see "KPOVs and Data" in this chapter).

  • Does it contain the intended information?

  • Does the measure discriminate between different items?

  • Does it reliably predict future performance?

  • Does it agree with other measures designed to find the same thing?

  • Is the measure stable over time?

If the points captured are clearly invalid, stop the data collection.
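One hypothetical way to apply that stop rule during the pilot capture, assuming the Operational Definition supplies a plausible range for the metric, is sketched below:

    # Validate each incoming pilot point against the Operational Definition
    # (here, just a plausible range); halt the capture at the first clearly
    # invalid point. The field and limits are assumptions for the example.
    PLAUSIBLE_RANGE = (30.0, 60.0)  # seconds

    incoming = [41.2, 39.9, 42.5, -7.0, 40.8]  # -7.0 s is impossible

    collected = []
    for point in incoming:
        if not (PLAUSIBLE_RANGE[0] <= point <= PLAUSIBLE_RANGE[1]):
            print(f"Invalid point {point}: stop and mend the capture mechanism")
            break
        collected.append(point)
    else:
        print(f"All {len(collected)} pilot points passed the validity check")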

The temptation during such an Audit is to give it a go, see what happens, and then regroup, make tweaks, and redo the Audit. It is always best to try to do the Audit right the first time.

Performing a thorough data method validation can be a tedious process, but the quality of data generated from the Sampling Plan is directly linked to the success of the project.

Step 4.

Based on the Audit results in Step 3, take any actions required to mend the Sampling Plan or data capture mechanism. When presented with poor Audit results (an invalid data collection system), Teams are sometimes tempted to make excuses and simply continue without remedying the situation. Again, the Team must remember that the quality of the data generated from the Sampling Plan is directly linked to the success of the project. The consequences of an invalid data validation always vastly exceed the effort that would have been expended initially had the validation studies been performed properly.

Step 5.

Rerun what is effectively a confirmatory Audit.

Interpreting the Output

After the Audit is complete and the Team is satisfied with the validity of the metrics in question in the Sampling Plan, the Reliability of each metric must be determined using an MSA, such as a Gage Repeatability and Reproducibility Study (see "MSA - Continuous" and "MSA - Attribute" in this chapter).



