
MSA 2019-11-11

Bev D

Heretical Statistician
Staff member
Super Moderator
#31
AIAG hasn’t invested any time in developing MSA or FMEA or anything else. They are parroting work done by others and are only responding to persistent criticism by thought leaders in industrial statistics.

True knowledge comes from independent thought, research and critical thinking that challenges your assumptions, not from seeking to validate your assumptions...

FWIW, the science and mathematics of categorical data MSAs is well established. The kappa statistic, however, is one of those ‘easy’ yet misleading statistics. Better to use an iterative process, the McNemar statistic and real thought about the process. I will add a paper I wrote on this process to the resources section today....
 

Miner

Forum Moderator
Staff member
Admin
#32
I think AIAG wants its methods to work, though, and has put a lot of effort into developing methodology for MSA and FMEA.
The reason that the AIAG methods for MSA "work" despite being mathematically incorrect (as Bev D correctly states, you cannot add standard deviations) is that they are heavily biased toward calling an otherwise acceptable gage bad. It will never call a bad gage good. This bias will always protect the automotive customer (which is AIAG's raison d'être) and only hurts the supplier. Dr. Wheeler's Honest Gauge Study balances the risk to both customer and supplier.
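The underlying math point is easy to see numerically: for independent components, the correct combined standard deviation is the root sum of squares, and summing standard deviations instead always gives a larger, pessimistic figure. A minimal sketch with made-up EV/AV values (not from any actual study):

```python
import math

# Illustrative (made-up) standard deviations for two independent
# sources of measurement variation:
ev = 3.0  # equipment variation (repeatability)
av = 4.0  # appraiser variation (reproducibility)

combined = math.sqrt(ev**2 + av**2)  # add variances, then take the root
naive = ev + av                      # (wrongly) adding standard deviations

print(combined)  # 5.0
print(naive)     # 7.0 -- always >= the correct figure, so the gage looks worse
```

Since sqrt(a² + b²) ≤ a + b for non-negative a and b, any calculation that sums standard deviations can only inflate the reported gage variation, which is exactly the conservative, supplier-hurting bias described above.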
 

Mike S.

An Early 'Cover'
Trusted
#33
One thing they (AIAG) may not realize is that unfairly "hurting the supplier" is hurting the automotive customer, too, eventually. To think otherwise is short-sighted, but short-sightedness is a very common malady, in business and elsewhere.

One other point I'd like to make, FWIW: Every subject needs its "thought leaders", experts, inventors, pioneers, people who advance and teach the state of the art, whatever you call them. But obviously only a small percentage of folks fall into these categories, leaving most people to simply be practical users of the knowledge at various levels. In the end, what most people know and use is a "dumbed-down" version or part of the whole body of knowledge. That ain't bad; it is, to a degree, necessary.
 

Miner

Forum Moderator
Staff member
Admin
#34
My night school statistics professor told us that the sample standard deviation chart contains more information than the range chart, and I realized that the reason for the R chart was that it is much easier to subtract the smallest measurement from the largest than to calculate the standard deviation with a slide rule (and hand calculations for sums and differences also are required). People still use the R chart by habit today. I suspect that the median rather than x-bar chart was used for the same reason.
That is exactly the reason why these charts were developed, and why the Xbar/Range method is used in MSA. It didn't require difficult calculations and could be done by shop floor workers.

The AIAG manual includes a correction factor for a specific sample size, and I don't know how it was derived. I was, however, able to get the formulas for deriving the d*2 factors, so I am more comfortable with those.
Many of the factors used to calculate control limits were empirically derived, just as Shewhart settled on 3 standard deviations through trial and error to balance the risk of false alarms against the risk of missing a change. Even Deming said that people who thought control limits were based on probability were missing the point.
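Those constants can in fact be reproduced empirically: d2(n) is just the expected range of n standard-normal observations, so a quick Monte Carlo (sample sizes and trial count below are my own illustrative choices) recovers the tabled values:

```python
import random

def simulate_d2(n, trials=100_000, seed=1):
    """Monte Carlo estimate of d2(n): the expected range (max - min)
    of n standard-normal observations, in units of sigma."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        total += max(xs) - min(xs)
    return total / trials

print(f"{simulate_d2(2):.3f}")  # close to the tabled d2 = 1.128
print(f"{simulate_d2(5):.3f}")  # close to the tabled d2 = 2.326
```

The same idea, with the number of subgroups held finite, is how the d*2 (bias-corrected) factors arise.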
 
#36
That is exactly the reason why these charts were developed, and why the Xbar/Range method is used in MSA. It didn't require difficult calculations and could be done by shop floor workers.

Many of the factors used to calculate control limits were empirically derived, just as Shewhart settled on 3 standard deviations through trial and error to balance the risk of false alarms against the risk of missing a change. Even Deming said that people who thought control limits were based on probability were missing the point.
Many people (whom I respect) argue that control limits based on probability miss the point; 3 sigma is considered "good enough" to detect out of control conditions without too many false alarms.

My own position, however, is that probability-based limits should indeed be used, because many systems follow non-normal distributions for which the false alarm rate increases markedly. In addition, while 3-sigma may be "good enough"--especially if supported by samples and the Central Limit Theorem (the average acts normal even if the data are not)--we MUST use the underlying distribution to determine process performance indices. If we assume normality, the estimated nonconforming fraction may be off by orders of magnitude. Since we must identify and use the underlying distribution regardless, we may as well use it for the control limits.
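To put a number on "orders of magnitude", here is a sketch comparing the fraction beyond the upper 3-sigma limit for a normal distribution against an exponential distribution with the same mean and sigma (a deliberately skewed example of my own choosing):

```python
import math

# Fraction of output beyond the upper 3-sigma limit.

# Normal: P(Z > 3)
p_normal = 0.5 * math.erfc(3 / math.sqrt(2))

# Exponential with the same mean and sigma (both equal 1/lambda, so the
# upper limit mean + 3*sigma sits at 4/lambda): P(X > 4/lambda) = e^-4
p_expon = math.exp(-4)

print(f"normal:      {p_normal:.5f}")  # 0.00135 (about 1,350 ppm)
print(f"exponential: {p_expon:.5f}")   # 0.01832 (about 18,300 ppm)
```

The skewed process exceeds the "normal" limit more than ten times as often, so a normality assumption here misstates the nonconforming fraction by roughly an order of magnitude.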

I think the limits for the R chart come from order statistics and the distribution of the extreme values (maximum and minimum in the sample). The control limits can be calculated to give 0.00135 false alarm risks if desired, but this is not where the D factors in the control chart tables come from.
 
#37
The reason that the AIAG methods for MSA "work" despite being mathematically incorrect (as Bev D correctly states, you cannot add standard deviations) is that they are heavily biased toward calling an otherwise acceptable gage bad. It will never call a bad gage good. This bias will always protect the automotive customer (which is AIAG's raison d'être) and only hurts the supplier. Dr. Wheeler's Honest Gauge Study balances the risk to both customer and supplier.
I recall that the AIAG GRR method (average and range; ANOVA is also supported) makes the total gage standard deviation equal to the square root of the sum of the equipment and appraiser variances; they are adding variances, not standard deviations, which is the right way to do it. That is, GRR = SQRT(EV^2 + AV^2). They also do stuff with the part variation which I pretty much ignore because we know there will be part variation and the estimate from 10 parts is inferior to the one we get from a process capability study with 30, 50, or even more parts.

That is, if we do a 2-way ANOVA with appraisers and parts as factors, it is almost a foregone conclusion that we will reject the null hypothesis that all the parts have the same measurement. What we hope won't happen is that we also reject the null hypothesis that the appraisers have no effect on the measurements. We also hope not to see an appraiser-part interaction, which should be relatively rare.
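That expectation can be sketched with synthetic data: a crossed parts × appraisers × trials layout where the parts genuinely differ and the appraisers contribute nothing, run through a hand-rolled two-way ANOVA. All the numbers (10 parts, 3 appraisers, 3 trials, the sigmas) are made up for illustration, and the main effects are tested against the interaction mean square, as in a random-effects GRR model:

```python
import random
import statistics

# Synthetic crossed GRR layout: p parts x a appraisers x r trials.
# Parts differ (sd_part); appraisers add no bias; gage noise is sd_gage.
random.seed(42)
p, a, r = 10, 3, 3
sd_part, sd_gage = 2.0, 0.5
part_true = [random.gauss(0, sd_part) for _ in range(p)]
y = [[[part_true[i] + random.gauss(0, sd_gage) for _ in range(r)]
      for _ in range(a)] for i in range(p)]

grand = statistics.mean(v for i in range(p) for j in range(a) for v in y[i][j])
part_m = [statistics.mean(v for j in range(a) for v in y[i][j]) for i in range(p)]
appr_m = [statistics.mean(v for i in range(p) for v in y[i][j]) for j in range(a)]
cell_m = [[statistics.mean(y[i][j]) for j in range(a)] for i in range(p)]

# Sums of squares for the two main effects, interaction, and error
ss_part = a * r * sum((m - grand) ** 2 for m in part_m)
ss_appr = p * r * sum((m - grand) ** 2 for m in appr_m)
ss_int = r * sum((cell_m[i][j] - part_m[i] - appr_m[j] + grand) ** 2
                 for i in range(p) for j in range(a))
ss_err = sum((v - cell_m[i][j]) ** 2
             for i in range(p) for j in range(a) for v in y[i][j])

ms_part = ss_part / (p - 1)
ms_appr = ss_appr / (a - 1)
ms_int = ss_int / ((p - 1) * (a - 1))
ms_err = ss_err / (p * a * (r - 1))

# Random-effects model: main effects are tested against the interaction MS.
print("F(parts)      =", round(ms_part / ms_int, 1))  # expect large: parts differ
print("F(appraisers) =", round(ms_appr / ms_int, 1))  # expect near 1: no appraiser effect
```

With data built this way, the part F-ratio is huge (the foregone conclusion) while the appraiser F-ratio hovers near 1, which is exactly the outcome we hope for in a real study.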
 

Miner

Forum Moderator
Staff member
Admin
#38
My own position, however, is that probability-based limits should indeed be used, because many systems follow non-normal distributions for which the false alarm rate increases markedly. In addition, while 3-sigma may be "good enough"--especially if supported by samples and the Central Limit Theorem (the average acts normal even if the data are not)--we MUST use the underlying distribution to determine process performance indices. If we assume normality, the estimated nonconforming fraction may be off by orders of magnitude. Since we must identify and use the underlying distribution regardless, we may as well use it for the control limits.
Dr. Wheeler covers this myth very well in the article The Normality Myth.
 

Miner

Forum Moderator
Staff member
Admin
#39
I recall that the AIAG GRR method (average and range; ANOVA is also supported) makes the total gage standard deviation equal to the square root of the sum of the equipment and appraiser variances; they are adding variances, not standard deviations, which is the right way to do it. That is, GRR = SQRT(EV^2 + AV^2). They also do stuff with the part variation which I pretty much ignore because we know there will be part variation and the estimate from 10 parts is inferior to the one we get from a process capability study with 30, 50, or even more parts.
See Dr. Wheeler's article Gauge R&R Methods Compared, section The AIAG "Proportions".
 

Jim Wynne

Staff member
Admin
#40
They also do stuff with the part variation which I pretty much ignore because we know there will be part variation and the estimate from 10 parts is inferior to the one we get from a process capability study with 30, 50, or even more parts.
How can you do the process capability study with a measurement system that hasn't been qualified? Also, in theory at least, we're supposed to select parts for the gage study that represent the operating range of the process. According to AIAG "logic" we determine the operating range of the process with an unqualified measurement system. None of it makes any sense, on any level.
 