p-Value(s) for Anova (Crossed) GRR (Gage R&R) Studies

straetfeild

Good day all,

I've been debating with colleagues here at work whether an acceptable p-value is required before a given set of data can be used for a GRR study (10 parts x 3 operators x 3 trials). I can't find anywhere in the MSA manual, or any other documented requirement, that says I must do so. One of my peers says his customer (a Big-3 automotive manufacturer) required this in person, so he feels we should adopt it as our standard across the board.

Mathematically speaking, does an unacceptable p-value (p > 0.05) automatically render the data unacceptable for an ANOVA crossed GRR, and if so, is that because it is an ANOVA-based study?

I appreciate any advice and/or thoughts you may have, and I apologize in advance for any breach of established posting protocol.

(Edit: I tried to copy-paste the data, but was unsuccessful.)

Thanks!
 

Miner

Forum Moderator
Leader
Admin
I love this question. This is the first NEW MSA question that I have seen in a long time.

To start, the current MSA methodology, as defined by AIAG, endorsed by the automotive industry, and codified by the statistical software companies, totally ignores the p-values with one exception.

That exception is for the Operator x Part interaction. In Minitab, if the Op x Part interaction p-value is greater than 0.25 (the default value), the interaction is pooled with the ANOVA error term, thus becoming part of Repeatability. If it is less than 0.25, it is shown separately as a sub-item under Reproducibility. Why 0.25? Minitab says it is to stay consistent with AIAG. AIAG wanted to be extremely conservative about pooling the interaction into error.

Technically, if the Reproducibility p-value is greater than alpha (usually 0.05), it also should be pooled into the error term becoming part of Repeatability. It is only when it is less than alpha that Reproducibility can be distinguished from Repeatability. However, AIAG is silent on this matter and the canned MSA routines in software do not allow it. To do it yourself, you would have to run the standard ANOVA routines and manually generate the graphs.
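To make the pooling arithmetic concrete, here is a minimal Python sketch for a 10 x 3 x 3 crossed study. All mean-square values below are hypothetical, chosen only for illustration; the degrees-of-freedom and expected-mean-square relationships are the standard ones for a crossed GRR ANOVA.

```python
# Sketch (hypothetical numbers): GRR variance components from ANOVA mean
# squares, with and without pooling the Part x Operator interaction into
# the error (Repeatability) term.

def variance_components(ms_p, ms_o, ms_po, ms_e, p=10, o=3, r=3, pool=False):
    """Estimate crossed-GRR variance components from ANOVA mean squares.

    p = parts, o = operators, r = trials per part/operator cell.
    """
    df_po = (p - 1) * (o - 1)      # interaction degrees of freedom (18 here)
    df_e = p * o * (r - 1)         # pure-error degrees of freedom (60 here)
    if pool:
        # Pool interaction SS with error SS -> new Repeatability estimate
        ms_e = (ms_po * df_po + ms_e * df_e) / (df_po + df_e)
        ms_po = ms_e               # interaction no longer shown separately
        v_po = 0.0
    else:
        v_po = max((ms_po - ms_e) / r, 0.0)
    return {
        "repeatability": ms_e,
        "operator": max((ms_o - ms_po) / (p * r), 0.0),
        "part_x_operator": v_po,
        "part": max((ms_p - ms_po) / (o * r), 0.0),
    }

# Hypothetical mean squares: MS_Part, MS_Operator, MS_PxO, MS_Error
unpooled = variance_components(1.2, 0.5, 0.05, 0.04, pool=False)
pooled = variance_components(1.2, 0.5, 0.05, 0.04, pool=True)
print(unpooled)
print(pooled)
```

Note that pooling slightly raises the Repeatability estimate here (the interaction sum of squares is folded in), which is exactly why a gage study's results can change depending on where the pooling threshold is set.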

The p-value for Parts also provides information. If the p-value for Parts is greater than alpha, the part-to-part variation is indistinguishable from Repeatability variation. If it is less than alpha, the gage can distinguish at least one part as different from the rest. A significant result (p < alpha) is not definitive proof that the gage is good, but a non-significant result (p > alpha) is conclusive evidence that it is not.
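Whether the gage can actually distinguish parts is usually quantified downstream of these p-value checks with %GRR and the number of distinct categories (ndc). A small sketch of that arithmetic, using made-up variance components (e.g., as they might look after the interaction has been pooled into Repeatability):

```python
import math

# Hypothetical variance components, not from any real study
var_repeat = 0.0423    # Repeatability (EV squared)
var_repro = 0.0006     # Reproducibility (AV squared)
var_part = 0.4953      # Part-to-part variation (PV squared)

var_grr = var_repeat + var_repro
var_total = var_grr + var_part

# %GRR of total variation, computed on standard deviations
pct_grr = 100.0 * math.sqrt(var_grr / var_total)
# Number of distinct categories; AIAG truncates to an integer
ndc = int(1.41 * math.sqrt(var_part / var_grr))

print(f"%GRR = {pct_grr:.1f}%, ndc = {ndc}")
```

A significant Parts p-value tells you the gage sees *some* part-to-part differences; ndc and %GRR tell you how many and how cleanly.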

Thank you for this question.
 
Piotr Stoklosa

Why 0.25? Minitab says to stay consistent with AIAG. AIAG wanted to be extremely conservative about pooling the interaction in error.

Does that mean 0.25 is just another rule of thumb, or does it have some statistical basis? I sometimes suspect it has something to do with the alpha level: since the interaction is part * oper, it's like squaring, 0.05 * 0.05, but that gives only 0.0025, not 0.25.
 

Miner

Forum Moderator
Leader
Admin
It does have something to do with alpha. AIAG arbitrarily established alpha = 0.25. This means that there is a 25% chance that, if the null hypothesis (there is no interaction) is correct, you will mistakenly reject it in favor of the alternate hypothesis (there is an interaction).

I strongly disagree with this approach. The alpha risk for the interaction should be the same as for the operators.
 
Piotr Stoklosa

Good news for everyone. In Minitab 17 they lowered the default "alpha to remove interaction" to 0.05, which can now be explained in conventional statistical terms :).

To Miner: In your previous message you say that AIAG requires 0.25 (I can't find the source of this information; would you be so kind as to give me the reference?). Does it mean that AIAG changed its mind and accepted the recognized statistical approach?

Thank you for comments.
 

Miner

Forum Moderator
Leader
Admin
I took a quick look through the MSA manual versions 3 & 4 and could not find anything either. I cannot remember whether this was in an earlier version and was dropped, but it appears to be a non-issue now.
 
G.Pito

Miner said: (quoting the full reply above)
Sorry for being a little late with my reply (only 15 years) :)

I have a question: why did you write "AIAG wanted to be extremely conservative about pooling the interaction in error"?

In the AIAG MSA 4th edition, page 198 (the example case study), it looks like they first kept the interaction as Appraiser-by-Part, but then, seeing that the F ratio was far lower than the critical F (for alpha = 0.05, df1 = 18, df2 = 60), they pooled it into error.

Am I wrong?
I apologize for this "stupid" question (I'm a newbie), but it's just to understand...

Thank you in advance
(waiting for your reply in 15 years, ha ha)
 

Miner

Forum Moderator
Leader
Admin
I have a question: why did you write "AIAG wanted to be extremely conservative about pooling the interaction in error"?

In the AIAG MSA 4th edition, page 198 (the example case study), it looks like they first kept the interaction as Appraiser-by-Part, but then, seeing that the F ratio was far lower than the critical F (for alpha = 0.05, df1 = 18, df2 = 60), they pooled it into error.

In normal practice (using ANOVA), you would not show an interaction unless it were statistically significant (p < 0.05), so by default the interaction is not shown. AIAG drastically loosened the requirement for statistical evidence by showing the interaction unless p > 0.25.

I originally wrote this post when the 3rd edition was current. The example you cite was revised for the 4th edition adding the pooling portion (Maybe someone at AIAG read my post?).
 