CMM Max/Min data and Capability

Greetings all,

I'm not sure if it stems from an interpretation of ASME Y14.5, but our CMMs (all our automated gages) will report both a MAX and a MIN reading for most features of size. This creates an uncertain situation when calculating Cpk. I know the arguments against Cpk as a guidepost, and that much of the algorithm rests on assumptions, but our processes are required to demonstrate 1.33 capability or we cannot sample.

How would I reconcile MAX/MIN data for a Cpk study so that it gives useful results? Right now the lab is taking just the max column, or just the min column, and running Cpk independently on each one, which even I can tell is horribly wrong and does not represent the process spread. I have two options I've thought of:

Most deviant from mean - take whichever of each part's Max and Min readings deviates most from the mean. This has downfalls: it throws normality off and produces an inverted distribution on the chart. Also, if a part deviates equally on either side of the mean, the Excel formula will always choose whichever value comes first (creating artificial instability). When I try this I see better normality than with the Max and Min columns taken independently, but the bell curve is still very strange.
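For what it's worth, the "most deviant from mean" selection described above can be sketched in a few lines. The readings here are made-up illustration data, not from the original post; note how the tie-break (`>=`) always favors the max column, which is exactly the artificial-instability problem mentioned:

```python
# Sketch: for each part, keep whichever of its max/min readings lies
# farther from the grand mean of all readings. Illustrative data only.
maxs = [10.013, 10.008, 10.015, 10.006, 10.011]
mins = [9.991, 9.996, 9.989, 9.994, 9.993]

grand_mean = sum(maxs + mins) / (len(maxs) + len(mins))

most_deviant = [
    # On an exact tie, ">=" always picks the max reading first,
    # reproducing the arbitrary tie-break the post complains about.
    mx if abs(mx - grand_mean) >= abs(mn - grand_mean) else mn
    for mx, mn in zip(maxs, mins)
]
```

The result is one value per part, but each value is still an extreme reading, so the distribution stays pushed away from the center.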

Subgroups - this looks better on paper, but treating the maximum and minimum of a feature on a given part as a subgroup does not seem right to me either, and it gives the same inverted bell curve as above. The two readings that make up the subgroup are still the extremes.
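To make the subgroup idea concrete, here is a minimal sketch of a standard subgroup-based Cpk, using the average range Rbar/d2 as the within-subgroup sigma estimate (d2 = 1.128 for subgroups of size 2). The data and spec limits are invented for illustration:

```python
# Sketch: each part's (min, max) pair treated as a subgroup of n=2,
# with short-term sigma estimated as Rbar/d2. Illustrative numbers only.
maxs = [10.013, 10.008, 10.015, 10.006, 10.011]
mins = [9.991, 9.996, 9.989, 9.994, 9.993]
LSL, USL = 9.980, 10.020  # hypothetical spec limits

ranges = [mx - mn for mx, mn in zip(maxs, mins)]
rbar = sum(ranges) / len(ranges)
sigma_within = rbar / 1.128  # d2 constant for subgroup size 2

xbar = sum(maxs + mins) / (len(maxs) + len(mins))
cpk = min(USL - xbar, xbar - LSL) / (3 * sigma_within)
```

The catch, as noted above, is that min and max are not two random samples from the part: they are the extremes, so Rbar (and therefore sigma) is inflated and the Cpk comes out pessimistic.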

How would you rectify this to produce a number that satisfies at least some of the logic in the capability calculation?

Some notes: this is being calculated after the parts are manufactured, or on parts received from a supplier, so we have no ability to affect the production process at this stage.
Some parts have unrealistically tight tolerances, down to .0007" total.

Thank you for your time


Quite Involved in Discussions
You need to get someone to look at the software your CMMs are using.
You need the measurements reported as "absolute" values with an uncertainty attached to them, not as a max and min for each measurement.
Then you use the reported values for your calculations.

Bev D

Heretical Statistician
Super Moderator
Just one more reason why the whole capability index thing is an abomination on the face of the earth.

In the past, when I have been forced to do this, I have used the attribute approach. The problem with the continuous-data approach is the theoretical idea that the tails of a distribution are infinite. This is true in theory because the statistician has no way of knowing how long the tail is without measuring 100% of the parts. In reality, of course, the tails of any stable distribution are not infinite. So really, if your max (or min) values are all within spec, there is no reason to think some mysterious part will measure beyond that. The attribute approach takes that reality into account.
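One common attribute-style calculation (my reading of the suggestion above, not necessarily the exact method the poster uses) is a one-sided binomial bound: if all n inspected parts are within spec, solve (1 - p)^n = 1 - C for the worst-case nonconforming fraction p at confidence C:

```python
# Sketch of an attribute-data bound: all n inspected parts passed, so
# bound the fraction nonconforming p at confidence C via (1-p)**n = 1-C.
def max_defect_rate(n_conforming, confidence=0.95):
    """Upper confidence bound on the fraction nonconforming when
    n_conforming parts were inspected and all were in spec."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_conforming)

# e.g. 59 parts, all in spec -> roughly a 5% upper bound at 95% confidence
```

This sidesteps any distributional assumption: no normality, no infinite tails, just a count of conforming parts.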