Looking back at various posts about capability indices, I note several shortcomings of Cpk.
- There is no provision for asymmetric tolerance limits. For example, if the specs are 100 +5/-1, the best Cpk is obtained by centering the process on 102, the midpoint of the spec range, rather than on the target of 100. That is clearly not what the customer wants.
- Cpk = 0 whenever the mean sits exactly on a spec limit, no matter how small the spread in the data. Less spread would presumably be better in reality, but Cpk doesn't change.
- The whole approach is predicated on a "go/no-go" mentality rather than a Taguchi-type "closer to target is better" mentality. A set of data tightly clustered just inside a spec limit will generate a good Cpk, even though every part is on the verge of being out of spec. The same data shifted to just outside the limit would perform about as well in practice, yet would have a terrible (negative) Cpk.
- For non-normal data, some advocate proceeding with the calculations with no concern for the distribution; others advocate transforming the data into something closer to normal before computing the index. Which to choose?
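
To make the first two points concrete, here's a quick Python sketch, assuming the usual one-sided definition Cpk = min(USL − mean, mean − LSL) / (3σ). The numbers below just illustrate the asymmetric-spec example (100 +5/-1) from the list; the sigma values are arbitrary.

```python
# Sketch of two Cpk shortcomings, using the standard definition
# Cpk = min(USL - mean, mean - LSL) / (3 * sigma).
def cpk(mean, sigma, lsl, usl):
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Asymmetric tolerance: specs 100 +5/-1, i.e. LSL = 99, USL = 105.
# Centering on the target (100) scores worse than centering on 102.
print(cpk(100, 0.5, 99, 105))  # ~0.667 -- on target
print(cpk(102, 0.5, 99, 105))  # 2.0    -- off target, yet a "better" Cpk

# Mean sitting on the spec limit: Cpk = 0 regardless of spread.
print(cpk(99, 0.5, 99, 105))   # 0.0
print(cpk(99, 0.05, 99, 105))  # 0.0 -- ten times tighter spread, same Cpk
```

Note that the index rewards moving the mean to the geometric center of the specs, not to the customer's target, and is completely insensitive to spread once the mean reaches a limit.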
Does anyone else care about these difficulties? Is there anything else I left out?
And then, is there any value in looking at a different capability index to address some of these issues, or is Cpk so firmly ingrained that a system offering some small (or even moderate) improvement is a waste of effort?
Tim F


