Cpk for one-sided dimension - GD&T true position

Ron Rompen

Trusted Information Resource
In the past, I have used Johnson curves to model a one-sided distribution with a 'hard' boundary (such as flatness or true position). Unfortunately I can't provide the equations for them, and when I did look at them (years ago) I wasn't able to understand the math.

Minitab will allow you to calculate capability with boundary limits - I haven't used it lately so I can't comment as to how effective and accurate it is.
 

Bev D

Heretical Statistician
Leader
Super Moderator
It's not your math formulas per se that are at issue.

There are really 3 issues here:
  1. Cpk as a measure of how off-center the process is
  2. Cpk as a measure of defect rate
  3. How Cpk was originally intended to 'work'

If one looks at Cpk as a measure of how off-center the process is, then yes, you end up where Bob gets upset: a one-sided tolerance where the goal is to be as far away from the single tolerance as possible. However, many people don't 'use' it this way. (And I would have to ask Bob to elaborate on his issue with 'half Cpks'. All traditionally reported Cpk results are 'halves', in that we report the worse of the upper and the lower Cpk... Bob?)

If one looks to the Cpk value to determine the defect rate, then the non-Normality of the distribution comes into play. Most distributions with a one-sided tolerance and a natural boundary are skewed, with the tail toward the tolerance. In this case the standard Cpk calculation will return an erroneous defect rate, because the defect rate calculation is done using the Normal Z table...
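
A minimal sketch of that Z-table mismatch (the folded-Normal distribution and all of the numbers are illustrative assumptions, not from the post):

```python
# Compare the defect rate the Normal Z table implies from Cpk against the
# defect rate actually produced by a skewed, boundary-limited characteristic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
usl = 0.10                                   # one-sided upper tolerance
x = np.abs(rng.normal(0.0, 0.03, 100_000))   # folded Normal: hard boundary at 0

cpk = (usl - x.mean()) / (3 * x.std(ddof=1))
implied_ppm = stats.norm.sf(3 * cpk) * 1e6   # what the Z table predicts
actual_ppm = (x > usl).mean() * 1e6          # what the data actually does

print(f"Cpk = {cpk:.2f}")
print(f"PPM implied by Normal Z table: {implied_ppm:.0f}")   # ~ tens of PPM
print(f"PPM actually out of spec:      {actual_ppm:.0f}")    # ~ hundreds of PPM
```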

HOWEVER - and this is the profound however - Cpk was NEVER intended to do either of these things*. Cpk was intended to provide a relative measure of the spread of the process vs. the tolerance(s). It never required Normality because it wasn't about defect rates. It wasn't about 'centering'. It was about reducing variation. Period. Think about it for a while.

In the case of a fit, interaction, or wear condition, where several parts come together to create a function, you don't need to have parts out of spec (with poorly set tolerances based on engineering only to the target); you just need a stack-up to occur. The more parts you have near the tolerances, the more stack-up conditions will result in physical failure such as poor fit, malfunction, and wear...

In the case of a characteristic where deviation from the target (which doesn't have to be the center) causes increasing 'dissatisfaction' - think Taguchi loss function - all you need is more parts near the tolerances to be worse off. So a 'uniform' or inspection-truncated distribution with heavy tails near the tolerances - but no parts out of spec - will perform worse than a process that has very thin tails at the tolerances, even if some parts are out of spec. In this case the SD of the fat-tailed distribution will be greater than the SD of the 'wider' thin-tailed distribution. This is also true for a narrow distribution that is close to a tolerance vs. a wide distribution that has the majority of its results at the target, even if one or both tails go beyond the tolerances. The target doesn't have to be the center of a bilateral tolerance.
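
A quick numeric sketch of that SD claim (both distributions and all numbers are illustrative assumptions, not from the post):

```python
# A 'fat tailed' uniform distribution held entirely inside a +/-1 tolerance
# has a larger SD than a thin-tailed Normal whose tails slightly exceed it.
import numpy as np

rng = np.random.default_rng(1)
tol = 1.0
fat = rng.uniform(-0.99, 0.99, 100_000)   # all in spec, heavy near the tolerances
thin = rng.normal(0.0, 0.40, 100_000)     # thin tails, ~1.2% out of spec

for name, x in [("uniform, all in spec", fat), ("Normal, some out of spec", thin)]:
    oos = (np.abs(x) > tol).mean() * 100
    print(f"{name}: SD = {x.std():.3f}, out of spec = {oos:.2f}%")
```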

In these cases a straightforward Cpk index has value - Normality doesn't matter, and defect rates are not important; it is a measure of variation from the target. It isn't a great measure, and I still prefer the time series graph over the index, as there is a lot more information in the total picture, but it can serve a purpose.

The real difficulty is that after Sullivan introduced the Cpk index, the statistical purists and hacks got into the game and tried to make the index some type of Oracle of Delphi (pun only half intended), and they messed it all up into the abomination of misinterpretation and blather that it is today... and that includes AIAG.

*see "Reducing Variability: A New Approach to Quality" by L. P. Sullivan Quality Progress 1984. it is well worth the $5 if you are an ASQ member or $10 if you are not.

:topic:I just realized that I am leaving the cove where I entered it. I first came here asking about the 'famous' Ford-Mazda transmission case study and the Sony San Diego-Japan TV case. I found this article (ironically in my archives of old Quality Progress Magazines) as a result of that first question. Life really is one big circle...:bigwave:
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
Ah, but all capability indices are mathematical voodoo masquerading as statistics.

As a customer requirement they are "non-value added" - but as a quick process guide they can be handy if all of the homework is done ahead of time. If you don't have a CNX chart or a total-variance-equation understanding of your process, a capability index is like sticking your finger into a beehive to see if there is honey (and half of the time it's a hornet's nest).
 

bcoolnow

I tried the spreadsheet which gokats78 posted and also read some other postings which referred to the one-sided calculations. Some entries which had a Cpk using Q1 macros did not even calculate using the QA000075 spreadsheet. Two of them were low using Q1 (0.84 and 0.34), which understandably may not always generate using other software, but one of them was 8.46 and still got a 0 for Cpk using the QA spreadsheet.
Also, one entry stated to use CpU or CpL, but these are the same as Cpk. Am I overlooking something? Maybe you need more info to help me. Maybe I do. :confused:
 

ncwalker

The other problem is that true position, unlike flatness or roundness, isn't really unilateral. The number we look at is, but the underlying phenomenon definitely isn't.

What is true position? It is the diameter of the circle, centered on the point where the feature is supposed to be, upon which the feature actually falls. It is an extremely useful representation for stack-up calculations, like the fixed- and floating-fastener formulas. But that's it.
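
A minimal sketch of that definition (the function name and numbers are illustrative, not from the post):

```python
# True position is the DIAMETER of the circle, centered on nominal,
# on which the measured feature falls.
import math

def true_position(x, y, x_nom=0.0, y_nom=0.0):
    """Diametral true position of a feature measured at (x, y)."""
    return 2.0 * math.hypot(x - x_nom, y - y_nom)

# A hole 0.03 off in X and 0.04 off in Y is 0.05 from nominal radially,
# so its true position is 0.10.
print(true_position(0.03, 0.04))  # 0.1
```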

Were I to tell you a plane had a flatness of 0.15 against a tolerance of 0.10, you would know you had to make the plane flatter, and also by how much.

Were I to tell you a hole had a position of 0.15 against a tolerance of 0.10, you could say you needed to move the hole 0.025 at a minimum (a 0.15 position is 0.075 off radially, and the 0.10 tolerance allows 0.05); about 0.07 would be better. But you have no idea which way to move it, because all you know is the diameter of the circle, centered on the point where the hole should be, that the hole falls on.

We can look at Cpk. We know a Cpk of 0 means the mean of the distribution falls right on the tolerance, and a Cpk of 1 means that a point 3 standard deviations from the mean falls on the tolerance. And because the upper limit of Cpk is Cp, knowing both Cpk and Cp gives us a really good picture of how centered the distribution is. EXCEPT for position.

I could have a very tight cluster of points in the +x/+y quadrant with a true position of 0.1. This is a tight grouping with very little variability.

I could also have a huge, cloudy scatter of points in all directions with an AVERAGE distance of 0.05 from nominal, and my TP on this population would still be 0.1.

In the former example, I have the easy problem: just shift my shot group. I am already tight and controllable. In the latter, I may very well be centered, but I am very noisy. The reaction to this is totally different (and it is also harder to fix).

But the point is, my true position would be the same. Clearly the first example is more "capable"; it just needs centering.
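
A quick simulation of those two cases (all numbers are illustrative assumptions) shows the same mean true position coming from two very different processes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Case A: tight cluster offset ~0.05 into the +x/+y quadrant -> TP ~ 0.10
ax, ay = rng.normal(0.035, 0.002, n), rng.normal(0.035, 0.002, n)

# Case B: centered on nominal, but noisy enough that the mean radial
# error is ~0.05 -> TP ~ 0.10 as well
bx, by = rng.normal(0.0, 0.040, n), rng.normal(0.0, 0.040, n)

for name, x, y in [("A: tight, offset", ax, ay), ("B: centered, noisy", bx, by)]:
    tp = 2 * np.hypot(x, y)   # true position is a diameter
    print(f"{name}: mean TP = {tp.mean():.3f}, SD of TP = {tp.std():.3f}")
```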

There is only one way I have seen that calculates Cpk with any semblance of how the result will behave for other features. I call it the "principal variation method" because it works very much like a principal stress calculation.

You calculate the variance of the "X" coordinates. You calculate the variance of the "Y" coordinates. The MAX variance is the square root of the sum of the squares of these two variances - your principal variance.

Take the square root of this to get the principal standard deviation. Then you can use the true position tolerance in your Cp calculation as:

Cp = Position Tolerance / (6 * sigma'), where sigma' is the principal standard deviation,

and CpU = (Position Tolerance - mean of TP) / (6 * sigma').

Yes, both over 6 sigma. Remember that TP is a DIAMETER, so the distance from where you want the feature to the limit is HALF the true position tolerance, and so is the distance to the mean of the true position results. Taking that factor of one-half out of both numerator terms is equivalent to multiplying the expected 3 sigma in the denominator by 2.

With this method, the "rules" of Cpk/Cp work. By this I mean: if you are right on the limit, Cpk = 0, etc.
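
A direct transcription of those steps into code (the function name and sample data are mine; the method is as described above):

```python
import numpy as np

def position_capability(x_dev, y_dev, tp_tol):
    """Cp and CpU for true position via the principal variation method."""
    var_x = np.var(x_dev, ddof=1)           # variance of the X deviations
    var_y = np.var(y_dev, ddof=1)           # variance of the Y deviations
    var_p = np.sqrt(var_x**2 + var_y**2)    # principal variance (RSS of the two)
    sigma_p = np.sqrt(var_p)                # principal standard deviation
    tp = 2 * np.hypot(x_dev, y_dev)         # true position results (diameters)
    cp = tp_tol / (6 * sigma_p)
    cpu = (tp_tol - tp.mean()) / (6 * sigma_p)
    return cp, cpu

rng = np.random.default_rng(3)
x = rng.normal(0.01, 0.01, 200)             # assumed X deviations from nominal
y = rng.normal(0.00, 0.01, 200)             # assumed Y deviations from nominal
print(position_capability(x, y, tp_tol=0.10))
```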
 

louie22

Registered
TP is a theoretically exact dimension - a "perfect" location. You can develop a distribution of distances between parts, but you cannot calculate a capability index without a specification.
 