It's not your math formulas per se that are at issue.
There are really 3 issues here:
- Cpk as a measure of how off-center the process is
- Cpk as a measure of defect rate
- How Cpk was originally intended to 'work'
If one looks at Cpk as a measure of how off-center the process is, then yes, you end up where Bob is upset: a one-sided tolerance where the goal is to be as far away from that one tolerance as possible. However, many people don't 'use' it this way. (And I would have to ask Bob to elaborate on his issue with 'half Cpks'. All traditionally reported Cpk results are 'halves' in that we report the worse of the upper and the lower Cpk...Bob?)
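For reference, here is a minimal sketch of the conventional calculation (the function name and defaults are mine, not anything from Sullivan's paper): the reported index is always the worse of the two one-sided halves, and with a one-sided tolerance it is only ever one half.

```python
import statistics

def cpk(data, lsl=None, usl=None):
    """Conventional Cpk: the worse of the two one-sided ('half') indices.

    Either limit may be None for a one-sided tolerance; the reported
    Cpk is then simply the single available half.
    """
    mean = statistics.fmean(data)
    sd = statistics.stdev(data)
    halves = []
    if usl is not None:
        halves.append((usl - mean) / (3 * sd))  # upper half (CPU)
    if lsl is not None:
        halves.append((mean - lsl) / (3 * sd))  # lower half (CPL)
    return min(halves)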
If one looks to the Cpk value to determine the defect rate, then the non-Normality of the distribution comes into play. Most processes with a one-sided tolerance and a natural boundary produce a skewed distribution with a tail toward the tolerance. In this case the standard Cpk calculation will return an erroneous defect rate, because the defect rate calculation is done using the Normal Z table...
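A quick sketch of that error (the lognormal parameters and tolerance here are invented for illustration): fit a Normal Z to a skewed process and the Z table understates the tail that is actually hanging out toward the tolerance.

```python
import math
import random

random.seed(1)

# Skewed process with a natural boundary at zero and an upper tolerance.
USL = 8.0
data = [random.lognormvariate(1.2, 0.5) for _ in range(100_000)]

mean = sum(data) / len(data)
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))

# Defect rate the Normal Z table predicts from the mean and SD...
z = (USL - mean) / sd
predicted = 0.5 * (1 - math.erf(z / math.sqrt(2)))

# ...vs. what the skewed tail actually delivers.
actual = sum(x > USL for x in data) / len(data)
print(f"Z-table estimate: {predicted:.4%}   actual: {actual:.4%}")
```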
HOWEVER - and this is the profound however -
Cpk was NEVER intended to do either of these things*. Cpk was intended to provide a relative measure of the spread of the process vs. the tolerance(s). It never required Normality because it wasn't about defect rates. It wasn't about 'centering'.
It was about reducing variation. Period. Think about it for a while.
In the case of a fit, interaction, or wear condition, where several parts come together to create a function, you don't need parts out of spec (with poorly set tolerances based on engineering only to the target); you just need a stack-up to occur. The more parts you have near the tolerances, the more stack-up conditions will result in physical failures such as poor fit, malfunction, and wear...
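To make the stack-up point concrete, here is a toy Monte Carlo sketch (the dimensions, gap, and distributions are invented for illustration). Every individual part is in spec in both cases, yet the inspection-truncated distribution produces far more assemblies that won't fit.

```python
import random

random.seed(2)

# Hypothetical assembly: four parts stack up and must fit in a 40.25 mm gap.
# Each part is toleranced 10.0 +/- 0.1 mm; every part below is in spec.
N_PARTS, GAP = 4, 40.25

def uniform_part():    # inspection-truncated: heavy near the tolerances
    return random.uniform(9.9, 10.1)

def centered_part():   # thin tails near the tolerances, clipped to spec
    return min(10.1, max(9.9, random.gauss(10.0, 0.03)))

def failure_rate(part, trials=100_000):
    """Fraction of assemblies whose stack-up exceeds the gap."""
    return sum(sum(part() for _ in range(N_PARTS)) > GAP
               for _ in range(trials)) / trials

print("uniform parts :", failure_rate(uniform_part))    # roughly 1% fail
print("centered parts:", failure_rate(centered_part))   # nearly zero fail
```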
In the case of a characteristic where deviation from the target (which doesn't have to be the center) causes increasing 'dis-satisfaction' - think Taguchi loss function - all you need is more parts near the tolerances to be worse off. So a 'uniform' or inspection-truncated distribution with heavy tails near the tolerances - but none out of spec - will perform worse than a process that has very thin tails at the tolerances, even if some parts are out of spec. In this case the SD of the fat-tailed distribution will be greater than the SD of the 'wider' thin-tailed distribution. This is also true for a narrow distribution that sits close to a tolerance vs. a wide distribution that has the majority of its results at the target, even if one or both tails go beyond the tolerances. The target doesn't have to be the center of a bilateral tolerance.
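A quick loss-function sketch shows the same thing in Taguchi terms (the target, loss constant, and distributions are mine): the all-in-spec but tolerance-heavy distribution carries roughly twice the average loss of a centered process that actually ships a few out-of-spec parts.

```python
import random

random.seed(3)

TARGET, K = 10.0, 1.0   # Taguchi loss: L(x) = K * (x - TARGET)**2

def mean_loss(sample):
    return sum(K * (x - TARGET) ** 2 for x in sample) / len(sample)

# All in spec (10.0 +/- 0.1), but piled toward the tolerances.
truncated = [random.uniform(9.9, 10.1) for _ in range(100_000)]

# Centered with thin tails; a small fraction falls out of spec.
centered = [random.gauss(10.0, 0.04) for _ in range(100_000)]
oos = sum(abs(x - TARGET) > 0.1 for x in centered) / len(centered)

print("truncated (0% out of spec)    :", mean_loss(truncated))
print(f"centered ({oos:.2%} out of spec):", mean_loss(centered))
```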
In these cases a straightforward Cpk index has value - Normality doesn't matter, defect rates are not important; it is a measure of variation from the target. It isn't a great measure, and I still prefer the time series graph over the index, as there is a lot more information in the total picture, but it can serve a purpose.
The real difficulty is that after Sullivan introduced the Cpk index, the statistical purists and hacks got into the game and tried to make the index some type of Oracle of Delphi (pun only half intended), and they messed it all up into the abomination of misinterpretation and blather that it is today...and that includes AIAG.
*see "
Reducing Variability: A New Approach to Quality" by L. P. Sullivan Quality Progress 1984. it is well worth the $5 if you are an ASQ member or $10 if you are not.
I just realized that I am leaving the Cove right where I entered it. I first came here asking about the 'famous' Ford-Mazda transmission case study and the Sony San Diego-Japan TV case. I found this article (ironically, in my archives of old Quality Progress magazines) as a result of that first question. Life really is one big circle...