Bill,

In response to the same question at the Niles Stats forum, B. Duffy replies:

Snip:

In reply to: QS-9000 and SPC Requirements posted by Marcy White on January 12, 1998 at 11:58:08:

I flipped through Chapter II Section 1 of the QS 9000 SPC manual (1995 version) and did not see anything pertaining to spec limits on the X-bar chart. Do you mean control limits? (Spec limits should not be drawn on a control chart--but they come into play in capability analysis.)

If you mean control limits, then the answer is no: the limits cannot be compared directly to the specs on the drawing. As you state, spec limits apply to individual measurements. If you are plotting an X-bar chart, you can "convert" the standard deviation of the chart to an individuals basis by taking the distance between the X-bar limits, dividing by six, and then multiplying by the square root of the sample size. I think you can also take the value of R-bar (the central line of the range chart) and divide it by the value of d2 that corresponds to the sample size.
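The two conversions just described can be sketched in a few lines of Python. All the numbers below are hypothetical, chosen so both methods recover the same individuals-basis sigma:

```python
import math

# Hypothetical chart values for illustration (not from the post)
n = 5                 # subgroup size
ucl_xbar = 11.342     # upper control limit on the X-bar chart
lcl_xbar = 8.658      # lower control limit

# Method 1: the spread of the X-bar limits is 6 * sigma / sqrt(n),
# so divide by six and multiply back by sqrt(n).
sigma_1 = (ucl_xbar - lcl_xbar) / 6 * math.sqrt(n)

# Method 2: R-bar (central line of the range chart) divided by d2.
d2 = 2.326            # d2 for subgroups of n = 5 (standard SPC table)
r_bar = 2.326         # hypothetical average range
sigma_2 = r_bar / d2

print(round(sigma_1, 2), round(sigma_2, 2))  # both about 1.0
```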

I believe this estimate of the process standard deviation could be used in the Cpk calculation; you need not generate an individuals chart. If you do create an X-mR chart, I'd suggest a moving range size of n = 2, no bigger.
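With an individuals-basis sigma in hand, the Cpk arithmetic is short. A minimal sketch, with hypothetical spec limits and process values:

```python
# Hypothetical values for illustration
sigma = 1.0              # individuals-basis sigma (e.g. R-bar / d2)
x_bar_bar = 10.0         # process average (center line of the X-bar chart)
usl, lsl = 13.5, 7.0     # spec limits for individual measurements

# Cpk is the distance from the process average to the nearer
# spec limit, in units of three sigma.
cpk = min(usl - x_bar_bar, x_bar_bar - lsl) / (3 * sigma)
print(cpk)  # 1.0
```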

If the process does not display statistical control, the two methods may not yield similar results, so your Cpk will vary. (You may want to compare methods using data known to be random, say from a random number table, to get a feel for the difference.)
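The suggested experiment on known-random data amounts to a quick simulation. Here is one way to do it, with an arbitrary subgroup size and count; for stable data the two sigma estimates come out close:

```python
import random
import statistics

random.seed(42)  # arbitrary seed for reproducibility
n, k = 5, 200    # subgroup size and number of subgroups (arbitrary)
subgroups = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(k)]

# Within-subgroup estimate: R-bar / d2
d2 = 2.326  # d2 for n = 5
r_bar = sum(max(s) - min(s) for s in subgroups) / k
sigma_rbar = r_bar / d2

# Overall sample standard deviation of all the individuals
individuals = [x for s in subgroups for x in s]
sigma_overall = statistics.stdev(individuals)

# For stable, random data both estimates should be near the true
# sigma of 1.0; for an out-of-control process they can differ widely.
print(round(sigma_rbar, 2), round(sigma_overall, 2))
```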

Finally, the functional form of the process distribution is not important--unless you're determined to estimate percentages of nonconforming product as tail areas of the distribution beyond the spec limits (not recommended). What matters is whether the process is stable. A stable process is reproducible within limits. Thus, the Cpk can be used as a prediction (very important to customers). In this case, the control chart, if it shows stability, fulfills requirements laid down in the theory of knowledge.

mbd

End Snip

I think B. Duffy makes some points well.

Steven,

When you average a sample you do what is called stacking. That is, you shrink your variation by a factor of the SQRT of the sample size. To get a representative spec you will have to make the same adjustment. Thus, if your spec is for individual units and you want to adjust it for comparison against an average of five, divide the spec by the SQRT of five.

Your response and following example are correct, although I have not heard it referred to as stacking. I would, however, recommend adjusting the standard deviation by the SQRT of the sample size, not the spec. Perhaps I misunderstood.
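For what it's worth, the two adjustments amount to the same comparison: shrinking the spec by the SQRT of the sample size, or scaling the sigma of the averages back to an individuals basis, gives the same margin-to-spec ratio. A sketch with hypothetical numbers:

```python
import math

# Hypothetical numbers for illustration
n = 5                 # sample size being averaged
spec = 3.0            # one-sided spec distance for individual units
sigma_ind = 1.0       # individuals-basis standard deviation
sigma_avg = sigma_ind / math.sqrt(n)  # sigma of an average of n

# Adjusting the spec down by sqrt(n)...
ratio_spec_adjusted = (spec / math.sqrt(n)) / sigma_avg

# ...or adjusting the sigma of the averages back up by sqrt(n):
ratio_sigma_adjusted = spec / (sigma_avg * math.sqrt(n))

print(round(ratio_spec_adjusted, 6), round(ratio_sigma_adjusted, 6))
```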

I would add that when doing a process capability study, careful consideration must be paid to sigma. There are corrections available for individual-measurement sigma when the sample size is less than 100. See Grant and Leavenworth, **Statistical Quality Control**, 6th edition, pp. 107-108 and Table C in Appendix 3.
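One such correction is the bias factor usually tabulated as c4: on average, the plain sample standard deviation s underestimates sigma for small n, and dividing by c4 removes the bias. The factor can be computed directly from its standard definition (the sample values below are hypothetical):

```python
import math

def c4(n):
    """Bias-correction factor for the sample standard deviation,
    as tabulated in control-chart factor tables."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

s = 0.95      # hypothetical sample standard deviation
n = 10        # hypothetical sample size
sigma_hat = s / c4(n)   # unbiased estimate of sigma

print(round(c4(n), 4))  # about 0.9727
```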

As far as normality testing goes, I also found this at Niles:

Snip:

Jack Tomsky Replies:

Probably the best normality test, in the sense of detecting non-normality, is the Wilk-Shapiro test. Theoretically, this test involves the ratio of two estimates of the variance of a normal distribution. The first estimate is the square of the minimum variance linear unbiased estimate of the standard deviation of a normal distribution, based on a linear combination of order statistics. The second estimate is the usual sum-of-squares estimate. Since these involve coefficients that need to be obtained from tables, the easiest way to perform this test is through a statistics software package. "Distance" tests such as Kolmogorov-Smirnov and chi-square have been shown to be generally inferior to Wilk-Shapiro.
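As Tomsky says, the easiest route is a software package. SciPy, for example, implements this test (under the name Shapiro-Wilk) as `scipy.stats.shapiro`; the data below are simulated purely for illustration:

```python
import random
from scipy import stats

random.seed(0)  # arbitrary seed
data = [random.gauss(10.0, 1.0) for _ in range(50)]

# shapiro returns the W statistic and a p-value; a small p-value
# (e.g. below 0.05) signals departure from normality.
w, p = stats.shapiro(data)
print(round(w, 3), round(p, 3))
```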

End Snip

Regards,

Don