F. Process Has Enough Capacity, But Fails Intermittently

Overview

Some processes appear at first sight to have enough capacity to meet downstream demand, but at times they fail to deliver at all. This is different from the accuracy of delivery performance discussed in Section A of this chapter (and subsequently in Section C), where perhaps a defective or incomplete entity is delivered. Here the issue is one of process reliability rather than process performance: a seemingly good process suddenly misses an entity entirely.

Examples

  • Industrial. Orders lost in the system, lost shipments

  • Healthcare. No medication arrives, lost records or charts, missed charge

  • Service/Transactional. Failed delivery, missed claim

Measuring Performance

This is in essence a process reliability problem, but it can be measured using a typical quality approach if the failure rate is reasonably high (say, greater than 5%):

  • Rolled Throughput Yield (RTY). Measured as the percentage of entities that make it through every step of the process without failure.
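
To make the calculation concrete, here is a minimal Python sketch (not from the original text; the step names and yield values are purely hypothetical) showing RTY as the product of the first-time yields of each process step:

  # Minimal sketch: Rolled Throughput Yield as the product of per-step yields.
  # Step names and yield values are hypothetical, for illustration only.
  step_yields = {
      "order entry": 0.98,   # fraction of entities passing this step first time
      "picking": 0.95,
      "packing": 0.99,
      "shipping": 0.97,
  }

  rty = 1.0
  for step, first_time_yield in step_yields.items():
      rty *= first_time_yield

  print(f"Rolled Throughput Yield: {rty:.1%}")   # about 89.4% survive every step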

However, if the number of instances of failure is low, a different approach is needed; otherwise a vast amount of data will be required.

One approach is to use reliability metrics as follows:

  • Mean Time Between Failures (MTBF). Calculated as the average number of days (or hours or minutes) between instances of failure. Clearly the drive will be to increase this number.

  • Mean (Normalized) Time Between Failures. Sometimes the number of entities processed varies over time and hence a normalized version of MTBF is used. For example, in the hospital environment when we consider patients falling (and subsequently injuring themselves), we look at the number of Patient Days Between Falls (how many patients did we have for how many days). If the entity volume is variable, this normalized version is preferable.
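
The following Python sketch (hypothetical data, not from the original text) shows both metrics: MTBF as the average gap between failure dates, and a normalized version where the exposure (for example, patient-days) is divided by the number of failures:

  # Minimal sketch: MTBF from failure dates, plus a normalized version
  # (e.g., Patient Days Between Falls). All data here are hypothetical.
  from datetime import date

  failure_dates = [date(2006, 1, 4), date(2006, 2, 17),
                   date(2006, 3, 2), date(2006, 4, 20)]

  # MTBF: average number of days between successive failures.
  gaps = [(later - earlier).days
          for earlier, later in zip(failure_dates, failure_dates[1:])]
  mtbf_days = sum(gaps) / len(gaps)
  print(f"MTBF: {mtbf_days:.1f} days between failures")

  # Normalized version: exposure divided by the number of failures.
  total_patient_days = 12400   # hypothetical sum of (patients x days) in the period
  number_of_falls = 8
  print(f"Patient Days Between Falls: {total_patient_days / number_of_falls:.0f}")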

Tool Approach

First we must get an understanding of the current performance with respect to failures:

Focus should be just on measuring validity (a sound operational definition and a consistent measure of failure) rather than on a detailed investigation of Gage R&R. For more details see "MSA Validity" in Chapter 7, "Tools." Commonly, these are metrics that have never been tracked before, so initially data collection might have to be done manually until systems can be updated to include them.

Usually failures are few and far between and hence a longer time interval is required for data collection. The good news is that failures usually hit the business hard and there will likely be reasonably good historical data of the instances of failure. It might take some manual manipulation to get it into the right form, but at least the project shouldn't have to stall waiting on data. Analyze data for a period of typically one month to one year (depending on process drumbeat) to get a reasonable estimate of Baseline Capability. For more details see "Capability Attribute" and "Capability Continuous" in Chapter 7.
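
If the historical data is attribute in nature (each entity either failed or it didn't), a rough baseline can be sketched as below. This is not from the original text; the counts are hypothetical and the 1.5-sigma shift is simply the common Six Sigma convention:

  # Minimal sketch: baseline attribute capability from historical failure counts.
  # Counts are hypothetical; the 1.5-sigma shift is the usual convention.
  from statistics import NormalDist

  entities_processed = 18500   # e.g., orders handled over the baseline period
  failures = 37                # entities that failed to be delivered at all

  p_fail = failures / entities_processed
  dpmo = p_fail * 1_000_000
  z_long_term = NormalDist().inv_cdf(1 - p_fail)
  sigma_level = z_long_term + 1.5

  print(f"Failure rate: {p_fail:.3%}  DPMO: {dpmo:,.0f}  Sigma level: {sigma_level:.2f}")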

Some causes of failure are so infrequent or unlikely that they shouldn't be the focus of the project. The typical aim of this type of project is to get at least a 50% reduction in failures (or equivalently a doubling of the MTBF). This usually can be achieved by focusing on just a few of the most common or biggest impact failure types. The Pareto will highlight these if they aren't already known.
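
A simple tally is often enough to build the Pareto. The sketch below (with a hypothetical failure log, not from the original text) counts failure types and reports the cumulative percentage so the vital few stand out:

  # Minimal sketch: Pareto of failure types to find the vital few to work on.
  from collections import Counter

  failure_log = ["lost shipment", "order lost in system", "lost shipment",
                 "missed claim", "lost shipment", "order lost in system",
                 "lost shipment", "failed delivery"]

  counts = Counter(failure_log)
  total = sum(counts.values())

  cumulative = 0
  for failure_type, n in counts.most_common():
      cumulative += n
      print(f"{failure_type:22s} {n:3d}  {cumulative / total:6.1%} cumulative")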


At this point, there might be some obvious solutions that spring to mind, but there really isn't much basis upon which to make a change. Further digging will be required. For novice Belts, the tendency here might be to jump into the Y=f(X1, X2,..., Xn) roadmap (as outlined in Section C in this chapter). In fact, a couple of simple mapping tools applied here might help us considerably before taking that route. The best tenet to keep in mind here is "There is only one process that I can consistently do perfectly, every time, for all time and that is no process at all." Seeking to drive out complexity will increase reliability; simple works best!

Use the Swimlane Map at a high level to identify failures occurring at process handoffs (e.g., missed expectations, poor accountability).

Take the opportunity at this point to remove any obvious non-value-added (NVA) activity too.

Some Teams prefer to generate the Value Stream Map first and then morph that into the Swimlane Map; use whichever method the Team prefers to get to the end result.

If there are many handoffs, the Handoff Map will sometimes show a bias in resource involvement and thus show where effort should be focused. At the very least, this map is a great visual tool to explain the issues to the process stakeholders.
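
If the Swimlane Map has been captured as an ordered list of steps together with the lane (resource) responsible for each, the handoff counts fall out directly. The sketch below uses hypothetical steps and lanes and is not from the original text:

  # Minimal sketch: count handoffs between swimlanes from an ordered step list,
  # to see whether a few resource pairs carry most of the handoffs.
  from collections import Counter

  steps = [("receive order", "Sales"), ("enter order", "Sales"),
           ("credit check", "Finance"), ("schedule", "Operations"),
           ("confirm terms", "Finance"), ("pick and pack", "Operations"),
           ("ship", "Logistics")]

  handoffs = Counter()
  for (_, lane_a), (_, lane_b) in zip(steps, steps[1:]):
      if lane_a != lane_b:                 # a handoff occurs when the lane changes
          handoffs[(lane_a, lane_b)] += 1

  for (source, destination), n in handoffs.most_common():
      print(f"{source} -> {destination}: {n} handoff(s)")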


Based on the Swimlane Map and Handoff Map, the Team might have made significant changes to the process. If so, work to ensure those changes are well executed with the correct controls in place.

If there was an absolute, dramatic change made to the process and the Team knows beyond doubt that the root cause of failures was reduced, then start to capture new failure Capability data and move to the Control tools in Chapter 5.

However, if the Capability data seems to show no change in failure performance, or if the Team feels that no significant changes were made from the previous tools, then in effect the key Xs driving failure are not yet known. In this case, proceed to Section C in this chapter to narrow down the Xs that cause process failure, considering the Y to be process reliability.
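
One simple way to judge whether the failure rate really changed (a sketch with hypothetical counts, not a prescription from the original text) is a two-proportion z-test comparing the baseline and post-change periods:

  # Minimal sketch: compare baseline and post-change failure rates with a
  # two-proportion z-test (standard library only). All counts are hypothetical.
  from math import sqrt
  from statistics import NormalDist

  fail_before, n_before = 37, 18500   # baseline failures / entities processed
  fail_after, n_after = 14, 16200     # post-change failures / entities processed

  p1, p2 = fail_before / n_before, fail_after / n_after
  p_pool = (fail_before + fail_after) / (n_before + n_after)
  se = sqrt(p_pool * (1 - p_pool) * (1 / n_before + 1 / n_after))
  z = (p1 - p2) / se
  p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided

  print(f"Before: {p1:.3%}  After: {p2:.3%}  z = {z:.2f}  p = {p_value:.4f}")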




