Gage R&R failed, or did it really?

alek333

Registered
Hi guys, I need some help understanding my crossed ANOVA gage R&R results and what to tell the customer.

So we are currently doing a PPAP for a new customer on a part that we’ve been making for a decade.

The production process has a detection machine that 100% tests each part and interlocks and rejects it if it fails. The test takes 10 minutes per part and produces variable values as results; the machine compares those values against a spec and interlocks if the part falls outside it. Even though the detection machine provides variable data, we functionally use it as an attribute gage, with pass/fail results only. No SPC is done using this detection machine.

Our new customer requires us to do a Gage R&R on the detection machine in order to make sure it is an effective measurement system. We have never done this before.

Even though it’s functionally an attribute gage, I decided to do a crossed ANOVA Gage R&R, since the machine provides variable data and I didn’t want to do 450 measurements for an attribute gage R&R.

According to the results of the gage R&R, it failed. The %Study Variation R&R result is around 90% and the number of distinct categories is 1.

So it failed. However, the %Tolerance gage R&R hovers around 20%.

From my understanding, %Study Variation Gage R&R is driven more by SPC requirements, while %Tolerance Gage R&R is more about the effectiveness of the measurement system at detecting pass vs. fail.

Since this machine is more of a functional attribute gage in our process, could I argue that it passes gage R&R?

Am I understanding all of this correctly? Is there anything in the AIAG MSA manual to support my argument?

I really don’t want to do an attribute gage R&R study. That would be 50 parts × 3 operators × 3 trials, for a total of 450 measurements and at least 4,500 minutes of testing.

Thoughts?
 

Golfman25

Trusted Information Resource
Sounds like your gage is sensitive enough for the variable study. Because it's a go/no-go gage, do the attribute study. Cut down the number of samples due to the test time. My experience is it will be easier to submit a passed study than to “justify” one that didn’t pass.
 

Bev D

Heretical Statistician
Leader
Super Moderator
I wouldn't cut back on the parts; I would cut the number of operators to 1, if you are reasonably certain that there is no operator effect. You can use the variables data to justify that. Also, 2 passes of each part is statistically sufficient.
 

Miner

Forum Moderator
Leader
Admin
Do a variable R&R study. The machine is measuring continuous data to make a pass/fail decision. This is no different than displaying the continuous result and an operator making a pass/fail decision other than the fact that you have automated it and taken some potential for error out of the equation.

And you are correct. You are using this measurement system as an inspection device, not for SPC. Therefore, you use % Tolerance as the deciding metric. % Study Variation and ndc are used for SPC and are not relevant in this case.
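As a rough sketch of how the two metrics can diverge (made-up illustrative numbers, not your actual variance components), %Study Variation compares the gage error to total observed variation, while %Tolerance compares it to the spec width:

```python
# Illustrative only: hypothetical variance components showing how a gage
# can fail %Study Variation (parts in the study barely differ) while still
# passing %Tolerance (gage error is small relative to the spec width).
import math

var_repeat = 0.40   # repeatability (equipment) variance -- assumed value
var_reprod = 0.05   # reproducibility (operator) variance -- assumed value
var_part   = 0.10   # part-to-part variance (low: sample parts are very similar)

sigma_grr   = math.sqrt(var_repeat + var_reprod)             # gage R&R std dev
sigma_total = math.sqrt(var_repeat + var_reprod + var_part)  # total std dev

pct_study_var = 100 * sigma_grr / sigma_total     # high when parts barely vary

usl, lsl = 20.0, 0.0                              # hypothetical spec limits
pct_tolerance = 100 * (6 * sigma_grr) / (usl - lsl)  # AIAG uses a 6-sigma spread

print(f"%Study Variation: {pct_study_var:.1f}%")  # ~90%: fails the SPC criterion
print(f"%Tolerance:       {pct_tolerance:.1f}%")  # ~20%: acceptable for inspection
```

The same gage error gives very different percentages because the denominators answer different questions: "can it see part-to-part differences?" vs. "can it tell good from bad against the spec?"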
 

alek333

Registered
Thank you for the response!

Is there somewhere in the AIAG MSA manual that explicitly says this? I’m trying to point my boss to a specific section.
 

Miner

Forum Moderator
Leader
Admin
The MSA manual does explicitly state the meaning of each metric (i.e., that %Tolerance shows whether the measurement device is suitable for inspection, etc.). It does not explicitly state to only use a specific metric based on the intended use of the measurement device. That is implicitly understood. Unfortunately, there are a lot of people who do not make that implicit connection.
 

alek333

Registered
I found a section in the manual saying to use the process R&R value for evaluating a gage’s ability to inspect. But I can’t find anything on number of distinct categories and how it relates to that. Anything explicit would be great!
 

Miner

Forum Moderator
Leader
Admin
Number of discrete categories is a redundant metric for % Study Variation. Both evaluate the suitability of the measurement device for use in SPC.
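As a sketch of why they are redundant (my own arithmetic, not a passage from the MSA manual): ndc is computed as 1.41 × PV/GRR, and since TV² = PV² + GRR², ndc is a deterministic function of %Study Variation. Plugging in a %SV around 90 reproduces an ndc of 1:

```python
# Illustration: ndc follows directly from %Study Variation, so the two
# metrics carry the same information. Using ~90% SV, as in this thread.
import math

pct_sv = 90.0                           # %Study Variation = 100 * GRR / TV
ratio = 100.0 / pct_sv                  # TV / GRR
pv_over_grr = math.sqrt(ratio**2 - 1)   # PV / GRR, since TV^2 = PV^2 + GRR^2
ndc = max(1, int(1.41 * pv_over_grr))   # AIAG truncates; reported as at least 1

print(ndc)  # 1 -- consistent with a ~90% study variation result
```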
 

JanKees

Registered
The NDC should always be 5 or more. Are your measurement results practically the same, i.e., low in variance?
If you could get more variation in the results, out to the max and min dimensions, I think you should be fine.

You could test this by playing with the variance in the results, then decide whether you should make parts with more variance.
 

alek333

Registered
Miner said:
“Number of discrete categories is a redundant metric for % Study Variation. Both evaluate the suitability of the measurement device for use in SPC.”
All I could find for NDC is in reference to process control and process analysis. I think that’s good enough to convince my boss.

Thanks!
 