25. MSA – Validity

Overview

Process variation affects how the resulting products and services appear to Customers. However, what you (and ultimately the Customer) see as the appearance includes not only the variability in the entity itself, but also some variation from the way the entity is measured. A simple example of this is to pick up a familiar object, such as a pen. If you were to measure the diameter of the pen and perhaps judge whether the lettering on the pen was "crisp" enough, and then you handed the same pen to three other people, it is likely that there would be differences in answers among everyone. It is also highly likely that if someone handed you the same pen later (without you knowing it is the same pen) and asked you to measure it again, you would come to a different answer or conclusion. The pen itself has not changed; the difference in answers is purely due to the Measurement System and specifically the errors within it. The higher the Measurement Error, the harder it is to understand the true process capability and behavior. Therefore, it is crucial to analyze Measurement Systems before embarking on any Process Improvement activities. It is always worth a little introspection here. You should ask whether, for all the experiments and analyses done in the past, the conclusion reached was really what happened, or whether it was driven purely by a noisy Measurement System. The sole purpose of a Measurement System in Lean Sigma is to collect the right data to answer the questions being asked. To do this, the Team must be confident in the integrity of the data being collected. To confirm Data Integrity, the Team must know:

- Is the right data being collected to answer the questions being asked?
- Can the data that is collected be trusted?
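The point that Measurement Error masks true process behavior follows from the standard measurement-systems model, in which observed variance is the sum of the true process variance and the Measurement System variance. A minimal simulation of the pen example (all numbers here are illustrative assumptions, not values from the text):

```python
import random
import statistics

random.seed(42)

# True (but unknowable in practice) pen diameters, in mm
true_values = [random.gauss(10.0, 0.05) for _ in range(5000)]

# What the gauge reports: the true value plus independent measurement error
measured_values = [v + random.gauss(0.0, 0.05) for v in true_values]

true_sd = statistics.stdev(true_values)
observed_sd = statistics.stdev(measured_values)

# Variances add, so the observed spread is inflated:
# observed_sd ~= sqrt(0.05**2 + 0.05**2) ~= 0.071, well above the true 0.05
print(f"true sd: {true_sd:.3f}, observed sd: {observed_sd:.3f}")
```

Here the Measurement System alone inflates the apparent process spread by roughly 40%, which is why capability conclusions drawn before an MSA can be misleading.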
To answer these questions, Data Integrity is broken down into two elements:

- Validity: Is the data a clear and accurate record of the actual characteristic or event of interest? In short, is it the right data?
And after Validity is confirmed (some mending of the Measurement System might be required first):

- Reliability: Can the Measurement System be trusted to return consistent results, for example, when the same entity is measured repeatedly or by different people?
Validity is covered in this section; Reliability is dependent on the data type and is covered in "MSA – Attribute" and "MSA – Continuous" in this chapter. To confirm the Validity of data, the most common approach is to make use of a Data Integrity Audit, which has the simple aim of determining whether the data is correct and valid. Through the Audit, the Team seeks to assure themselves that the data being used is a clear and accurate record of the actual characteristics or events of interest.

Logistics

Performing a Data Integrity Audit requires the whole Team to participate, together with all other personnel who are part of the subsequent data collection. Participation is in the planning and structuring of the Audit, in the actual data collection itself, or both. The Audit itself requires a short data collection to verify that the systems and processes used to capture and record data are robust. An Audit is typically not done for a single metric at a time, but is applied to a complete data capture of multiple metrics. For example, an Audit is applied to the whole data capture for a Multi-Vari Study or Multi-Cycle Analysis, rather than to just a single X; otherwise the validation process would take too long. Planning for the Audit usually takes a Team about 60 minutes, and the Audit itself typically runs for no more than 5–10 data points captured over a period of about a day, or until a major flaw is found in the data capture mechanism. If no problems are found in the Audit, the Team should continue with the data capture as originally planned.

Roadmap

The roadmap to confirming Validity of a planned data capture approach is as follows:
Interpreting the Output

After the Audit is complete and the Team is satisfied with the Validity of the metrics in question in the Sampling Plan, the Reliability of each metric must be determined using an MSA such as a Gage Repeatability and Reproducibility (R&R) Study (see "MSA – Continuous" and "MSA – Attribute" in this chapter).
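As a pointer to what the subsequent Gage R&R delivers, the study resolves observed variation into part-to-part, repeatability, and reproducibility components, and the %GRR figure compares the Measurement System spread against the total spread. A minimal sketch using made-up variance components (the numbers are hypothetical, and the 10%/30% thresholds are the common AIAG-style rule of thumb, not values from this section):

```python
# Hypothetical variance components, as a Gage R&R study might report them
var_repeatability = 0.04    # same operator, same part, repeated measurements
var_reproducibility = 0.02  # differences between operators
var_part_to_part = 1.00     # real variation among the parts measured

var_grr = var_repeatability + var_reproducibility
var_total = var_grr + var_part_to_part

# %GRR on a standard-deviation basis
pct_grr = 100 * (var_grr / var_total) ** 0.5

# Common rule of thumb: < 10% acceptable, 10-30% marginal, > 30% unacceptable
verdict = ("acceptable" if pct_grr < 10
           else "marginal" if pct_grr <= 30
           else "unacceptable")
print(f"%GRR = {pct_grr:.1f}% -> {verdict}")
```

In this sketch the Measurement System consumes roughly a quarter of the observed spread, so it would be judged marginal and worth improving before the data capture proceeds.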