In Reply to Parent Post by cyberspider
While performing capability analysis, we hit a stone wall.
Problem is, as per the rule of thumb in SPC, the measurement system should have [(USL-LSL)/10] as its least count, or resolution.
We are operating processes with very narrow tolerances, e.g. in some cases [USL-LSL] = 0.1, so 0.1/10 = 0.01. In most cases we found that our instruments cannot achieve the required resolution (0.01 in this case).
Now what to do ?
Any opinion or innovative ideas ?
What I would do before any capability analysis, and before any SPC, is a Gage R&R, calculating the following metrics:
ndc: number of distinct categories, which tells you into how many different categories your current measurement system (MS) can divide your process output. This value should exceed 5.
PTR: precision-to-tolerance ratio = 6*sigma(MS) / Tolerance. There is consensus that this value should not exceed 30% (some authors use 5.15 in the numerator of the formula).
ndc relates the MS to the Voice of the Process.
PTR relates the MS to the Voice of the Customer (specs)
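As a minimal sketch of those two metrics, assuming you already have the variance components from the Gage R&R study (the names `sigma_ms` and `sigma_part` below are my own labels for the measurement-system and part-to-part standard deviations, not from any particular software):

```python
def ndc(sigma_part, sigma_ms):
    """Number of distinct categories (AIAG formula): 1.41 * sigma_part / sigma_ms.
    Guideline: should be at least 5."""
    return 1.41 * sigma_part / sigma_ms

def ptr(sigma_ms, usl, lsl, k=6.0):
    """Precision-to-tolerance ratio: k * sigma_ms / (USL - LSL).
    k = 6 by convention; some authors use k = 5.15.
    Guideline: should not exceed 0.30 (30%)."""
    return k * sigma_ms / (usl - lsl)

# Illustrative numbers only: sigma_part = 0.03, sigma_ms = 0.01,
# tolerance = 0.1 (the narrow-tolerance case from the question).
print(round(ndc(0.03, 0.01), 2))        # 4.23 -> below 5, MS marginal
print(round(ptr(0.01, 0.1, 0.0), 2))    # 0.6  -> 60%, far above 30%
```

With these example numbers both metrics fail their guidelines, which is exactly the situation described above: the MS is not capable for this tolerance.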
I would not use the resolution as the only means to decide on the adequacy of the MS, because there could be other sources of uncertainty that make the measurement system even less capable.
If you already have control charts on the process, the inadequacy of the instrument is easy to spot on the Range chart: the points are "discretized" into just a handful of values. This situation invalidates the control chart.
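A quick way to screen for that "chunky" Range chart is simply to count the distinct range values. A common rule of thumb (Wheeler's inadequate-discrimination test, stated here from memory, so verify against your reference) is that four or fewer possible values inside the range chart limits signals inadequate measurement resolution:

```python
def distinct_range_values(ranges):
    """Count how many distinct values appear on the Range chart.
    A handful of distinct values (roughly 4 or fewer) suggests the
    instrument resolution is too coarse for the process variation."""
    return len(set(ranges))

# Illustrative subgroup ranges from a 0.01-resolution instrument:
ranges = [0.00, 0.01, 0.01, 0.02, 0.00, 0.01, 0.02, 0.01]
print(distinct_range_values(ranges))  # 3 -> chart is discretized, not usable
```

If the count is that low, the chart tells you more about the gage than about the process, so fix the measurement system before drawing any SPC conclusions.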