
MSA 2019-11-11

Bill Levinson

Industrial Statistician and Trainer
How can you do the process capability study with a measurement system that hasn't been qualified? Also, in theory at least, we're supposed to select parts for the gage study that represent the operating range of the process. According to AIAG "logic" we determine the operating range of the process with an unqualified measurement system. None of it makes any sense, on any level.

It should be safe to assume that the operating range of the process is inside the specification limits, which will (hopefully) not be a wide enough range to bring in issues like linearity. We also hope the specification limits are at least four process standard deviations to either side of the nominal (or else the process is not capable). This could in fact be used as a starting assumption for short-run or startup SPC (when we know nothing about the process standard deviation and have to make assumptions).
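For what it's worth, here is a minimal sketch of the arithmetic behind "at least four standard deviations" (the values of k are just illustrations, not data from any real process):

```python
# Minimal sketch: for a centered process whose specification limits sit
# k process standard deviations to either side of the nominal,
# Cpk = k / 3.  Four sigma to each side gives the familiar 1.33.
def cpk_centered(k_sigma: float) -> float:
    """Cpk for a centered process; spec half-width expressed in sigmas."""
    return k_sigma / 3.0

for k in (3, 4, 5, 6):
    print(f"spec limits at +/- {k} sigma -> Cpk = {cpk_centered(k):.2f}")
```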

Also, regardless of the distribution of the process data (normal or non-normal), it is usually reasonable to believe the measurements will follow the normal distribution. That is, even if the part dimension comes from a non-normal distribution, the measurements we get from that part should be random normal numbers whose mean is the actual part dimension and whose standard deviation is the equipment variation. A possible exception might involve a measurement with a hard lower limit of zero (e.g. impurities or trace elements) and where the gage has a lower detection limit, but that is not the traditional application.* If one is suspicious, a gage study ANOVA will return residuals that can be assessed to test the normality assumption.
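To make the last point concrete, here is a rough sketch (entirely simulated numbers, not data from any gage study I have run) of pulling the residuals out of the usual crossed gage study ANOVA and testing them for normality:

```python
# Simulated crossed gage study: 10 parts x 3 operators x 3 trials.
# The ANOVA residuals estimate pure equipment variation, so they are
# the natural place to check the normality assumption.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
parts, operators, trials = 10, 3, 3
true_dims = rng.normal(100.0, 1.0, parts)      # hypothetical part dimensions
op_bias = rng.normal(0.0, 0.05, operators)     # hypothetical appraiser effects

rows = []
for p in range(parts):
    for o in range(operators):
        for _ in range(trials):
            y = true_dims[p] + op_bias[o] + rng.normal(0.0, 0.10)  # EV = 0.10
            rows.append({"part": p, "operator": o, "y": y})
df = pd.DataFrame(rows)

model = smf.ols("y ~ C(part) * C(operator)", data=df).fit()
w, pval = stats.shapiro(model.resid)
print(f"Shapiro-Wilk on the ANOVA residuals: W = {w:.3f}, p = {pval:.3f}")
```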

If the process can work to different nominals (as might be applicable in short-run SPC, or the kind of SPC that requires the z chart for different part dimensions from the same tool), a separate study might indeed be desirable for each nominal to determine whether there is a linearity issue and also whether the equipment variation depends on the size of the part. This is not something I have ever tried but you do raise an important point.


* I have looked into this issue, however, for process capability studies for processes with lower detection limits, where "zero" means simply less than the LDL; left-censoring techniques borrowed from reliability engineering can be used.
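As a sketch of what I mean by borrowing left-censoring techniques (simulated impurity data and a made-up detection limit, purely for illustration):

```python
# Left-censored fit: readings below the lower detection limit (LDL) are
# known only as "< LDL", so their likelihood contribution is the CDF at
# the LDL rather than the density at an observed value.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
ldl = 0.5
raw = rng.lognormal(mean=0.0, sigma=0.8, size=200)   # hypothetical impurity data
observed = raw[raw >= ldl]
n_censored = int(np.sum(raw < ldl))

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll_obs = stats.lognorm.logpdf(observed, s=sigma, scale=np.exp(mu)).sum()
    ll_cen = n_censored * stats.lognorm.logcdf(ldl, s=sigma, scale=np.exp(mu))
    return -(ll_obs + ll_cen)

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], float(np.exp(res.x[1]))
print(f"mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}  ({n_censored} readings below the LDL)")
```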
 

Bill Levinson

Industrial Statistician and Trainer
Dr. Wheeler covers this myth very well in the following article, The Normality Myth.

I have seen this, and I notice that the false alarm risk for some of the non-normal distributions exceeds that for a normal distribution by a factor of ten. Dr. Wheeler is 100% correct when he says "Statistical procedures with a fixed coverage P in excess of 0.975 are said to be conservative," and we usually accept a Type I risk of 5% (or 2.5% in each tail for a 2-sided test) when doing a designed experiment. A designed experiment is however a one-time exercise, while SPC exposes us to a false alarm every time we draw a sample. People will then chase assignable causes where none exist.
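A quick back-of-the-envelope sketch of how that per-sample risk accumulates over a control chart's life (the tenfold inflation is just the illustrative factor mentioned above):

```python
# Chance of at least one false alarm after n plotted points, for the
# nominal 3-sigma risk and for a tenfold-inflated risk.
p_normal = 0.0027           # ~two-sided 3-sigma tail area under normality
p_inflated = 10 * p_normal  # illustrative inflation for a non-normal process

for n in (25, 100, 400):
    fa_norm = 1 - (1 - p_normal) ** n
    fa_infl = 1 - (1 - p_inflated) ** n
    print(f"{n:4d} points: P(at least one false alarm) = "
          f"{fa_norm:.1%} (normal) vs {fa_infl:.1%} (inflated)")
```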

Even were this not an issue, however, consider Wheeler's example with the exponential distribution. Suppose the upper specification limit is another sigma to the right (i.e. mean + 4 sigma), which is barely capable if we require a process performance index of 4/3. We expect about 32 DPMO (cumulative standard normal distribution for z = -4), but what we actually get is exp(-5) = 0.0067, or 6,700 DPMO, which is about 210 times as many nonconformances. Gamma distributions, Weibull distributions, and so on can do the same thing, which requires us to use the underlying distribution to calculate the process performance index. The AIAG SPC manual, in fact, mentions this approach. I have attached an example I use for teaching purposes as to what happens when we assume the distribution is normal while doing a process performance study.
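Here is the same arithmetic worked out explicitly (nothing new, just the numbers from the paragraph above):

```python
# Exponential process with USL at mean + 4 sigma; since sigma equals the
# mean for an exponential, the USL sits at 5 times the mean.
import math
from scipy import stats

dpmo_if_normal = stats.norm.sf(4) * 1e6   # ~32 DPMO assuming normality
dpmo_exponential = math.exp(-5) * 1e6     # ~6,700 DPMO in reality
print(f"assumed normal:     {dpmo_if_normal:.0f} DPMO")
print(f"actual exponential: {dpmo_exponential:.0f} DPMO "
      f"({dpmo_exponential / dpmo_if_normal:.0f} times as many)")
```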

Then we may as well use the underlying distribution for SPC, which StatGraphics can in fact do. It sets the center line at the median (not the mean), so there is a 50:50 chance of a point being on each side, and sets the control limit(s) to have tail areas of 0.00135. If we try to use a 3-sigma chart for a particularly non-normal distribution, by the way, it will look obviously wrong (especially if negative measurements are impossible, in which case the LCL becomes meaningless), the Western Electric Zone Tests will not work properly if used (see for example Detection Rule 4 in the Wheeler article), and the false alarm risk will be much higher.
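This is not StatGraphics itself, just a sketch of the idea it implements (with a simulated gamma process standing in for real data): fit the underlying distribution and read the center line and limits off its quantiles.

```python
# Center line at the median, limits at the 0.135% and 99.865% quantiles
# of the fitted distribution, i.e. the same tail areas as 3-sigma limits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.gamma(shape=2.0, scale=1.5, size=250)   # hypothetical skewed process data

shape, loc, scale = stats.gamma.fit(data, floc=0)  # fit a gamma, location fixed at 0
lcl, cl, ucl = stats.gamma.ppf([0.00135, 0.5, 0.99865], shape, loc=loc, scale=scale)
print(f"LCL = {lcl:.3f}, CL (median) = {cl:.3f}, UCL = {ucl:.3f}")
```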
 

Attachments

  • nonnormal.jpg (53.9 KB)
  • nonnormal_correct_distribution.jpg (46.7 KB)

Jim Wynne

Leader
Admin
It should be safe to assume that the operating range of the process is inside the specification limits, which will (hopefully) not be a wide enough range to bring in issues like linearity. We also hope the specification limits are at least four process standard deviations to either side of the nominal (or else the process is not capable). This could in fact be used as a starting assumption for short-run or startup SPC (when we know nothing about the process standard deviation and have to make assumptions).
We're trying to avoid assumptions, not create them. I'll say it again--it makes no sense on any level that the gage study is supposed to include parts that cover the operating range of the process, but that the operating range of the process can't be ascertained without measurement (or multiple assumptions). It's circular logic at its finest.
 

Bill Levinson

Industrial Statistician and Trainer
We're trying to avoid assumptions, not create them. I'll say it again--it makes no sense on any level that the gage study is supposed to include parts that cover the operating range of the process, but that the operating range of the process can't be ascertained without measurement (or multiple assumptions). It's circular logic at its finest.


I am thinking, for example, of a part with a nominal of 100 and specification limits of 95 and 105 respectively. If we draw parts at random from this process, almost all the parts will be between 97 and 103 (99.7% if the distribution is normal and the process standard deviation is about 1). I think we are pretty safe in using any 10 parts for an MSA because the differences in the part dimensions will not raise the issue of linearity.

If, on the other hand, the process can make parts with nominals ranging from 20 to 200, we then have to ask whether any bias and/or equipment variation depends on the part's dimension.
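A rough sketch of that check (made-up reference sizes and readings, and a plain regression of bias on size rather than the full AIAG linearity study):

```python
# If the slope of bias vs. reference size is credibly different from
# zero, bias depends on where in the 20-to-200 range the part falls.
import numpy as np
from scipy import stats

reference = np.array([20.0, 50.0, 100.0, 150.0, 200.0])      # hypothetical master sizes
measured = np.array([20.05, 50.08, 100.15, 150.26, 200.33])  # average readings at each size
bias = measured - reference

fit = stats.linregress(reference, bias)
print(f"bias = {fit.intercept:.4f} + {fit.slope:.5f} * size  (p = {fit.pvalue:.3f})")
# The scatter of repeated readings at each size would likewise show
# whether the equipment variation changes with part dimension.
```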
 

Welshwizard

Involved In Discussions
As you are talking essentially about the quality of the assessment of a measurement process to track variation in a production process, the computation of part variation from 10 parts, however they have been selected, will always be a relatively poor estimate.
Standard AIAG methods of computing the consumption of part variation in the form of a percentage don't help, because they guide us down a path of good or bad. If we must compute these types of process percentages, we need better estimates of part variation.

If we are running process behaviour charts for the characteristic of interest, we can obtain a much better estimate of part variation from there. If we are really interested in answering the question of how useful a measurement process is for tracking variation in a production process, we should use the intraclass correlation coefficient; the estimates required here are the measurement error from a consistent measurement process and the product variation.

Detectable bias throughout the range can of course be assessed if need be. If this exists it can be compensated or allowed for, as long as the measurement process is consistent.
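For concreteness, a minimal sketch of that intraclass correlation (the variance figures are placeholders, not real data):

```python
# Intraclass correlation: the fraction of the total observed variance
# that comes from the product rather than the measurement process.
sigma2_product = 1.00 ** 2       # e.g. from a process behaviour chart
sigma2_measurement = 0.25 ** 2   # e.g. repeatability of a consistent gage

icc = sigma2_product / (sigma2_product + sigma2_measurement)
print(f"intraclass correlation = {icc:.3f}")   # about 0.94 with these numbers
```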
 

Bill Levinson

Industrial Statistician and Trainer
As you are talking essentially about the quality of the assessment of a measurement process to track variation in a production process, the computation of part variation from 10 parts, however they have been selected, will always be a relatively poor estimate.
Standard AIAG methods of computing the consumption of part variation in the form of a percentage don't help, because they guide us down a path of good or bad. If we must compute these types of process percentages, we need better estimates of part variation.

If we are running process behaviour charts for the characteristic of interest, we can obtain a much better estimate of part variation from there. If we are really interested in answering the question of how useful a measurement process is for tracking variation in a production process, we should use the intraclass correlation coefficient; the estimates required here are the measurement error from a consistent measurement process and the product variation.

Detectable bias throughout the range can of course be assessed if need be. If this exists it can be compensated or allowed for, as long as the measurement process is consistent.

My position is that the MSA's sole deliverables are assessments of the equipment variation (repeatability) and appraiser variation (reproducibility), for the exact reason you cited: 10 parts are not enough with which to do a good job on the process variation. I consider the part variation estimate more of an academic exercise than a practical one; I generated some data for a teaching example and found that the estimate of the part variation was roughly similar to the one I used to generate the part dimensions. Were I to use actual process data, it might be fun to compare the gage study's estimate to that from the process behavior chart (or process capability study), but I would trust the latter long before I would trust the former.
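As a sketch of why I would trust the chart over the gage study (my own quick simulation, not the teaching example mentioned above):

```python
# How well do 10 parts pin down the part-to-part standard deviation?
# Draw 10 parts from a process with sigma = 1, many times over, and look
# at the spread of the resulting estimates.
import numpy as np

rng = np.random.default_rng(11)
estimates = rng.normal(0.0, 1.0, size=(10_000, 10)).std(axis=1, ddof=1)
lo, hi = np.percentile(estimates, [5, 95])
print(f"90% of 10-part sigma estimates fall between {lo:.2f} and {hi:.2f} "
      f"(true sigma = 1.00)")
```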
 

Bill Levinson

Industrial Statistician and Trainer
On another note, I recall that the average and range method uses the range of the part averages across the 10 parts to estimate the part variation. Compare this to ANSI/ASQ Z1.9 (sampling for variables) where, if the sample size is 10 and the range method is used, one takes the ranges of two 5-part samples and uses the average range. This is another reason not to pay much attention to the MSA estimate of the part variation.
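A small sketch of the two range-based estimates, using one made-up set of 10 part values and the ordinary d2 control chart constants (3.078 for a sample of 10, 2.326 for a sample of 5) rather than the AIAG d2* values:

```python
import numpy as np

parts = np.array([99.2, 100.8, 101.5, 98.7, 100.1,
                  99.9, 100.4, 98.9, 101.1, 100.6])

sigma_one_range = np.ptp(parts) / 3.078                  # single range of all 10
r_bar = (np.ptp(parts[:5]) + np.ptp(parts[5:])) / 2.0    # Z1.9 style: two samples of 5
sigma_avg_range = r_bar / 2.326
print(f"sigma from one range of 10:       {sigma_one_range:.3f}")
print(f"sigma from average of two ranges: {sigma_avg_range:.3f}")
```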
 