Gage R&R Criteria on Page 78 of the MSA Reference Manual


Black arrow

Hello,

On page 78 of the MSA Reference Manual it is written that if Gage R&R is between 10% and 30%, the measuring instrument may be acceptable. The decision should be based on the importance of the application measurement, the cost of the measurement device, and the cost of rework or repair.

What does this really mean?

I would like more explanation of this. Can anyone share real cases where an instrument was rejected based on the decision criteria mentioned above? Or where one was accepted because of them?
 

Black arrow

Hello again. I would really appreciate it if anyone could explain what this really means. Does it mean that it can be acceptable to keep the old measurement system even though the R&R result is 25%, when it is too expensive to invest in a new measurement device or repair the old one?

Thanks!
 

Marc

Fully vaccinated are you?
Leader
Hopefully more folks will be around tonight and tomorrow who can help with this one.
 

Black arrow

Yes. It would be very good to have some clarification on that. I have three results from different measuring equipment (from a CMM down to simple measuring tools), and the results are all between 10% and 30%.
 

Black arrow

I would really appreciate it if someone could answer my question above! Thanks.
 

dgriffith

Quite Involved in Discussions
I will take a stab at it--you've been waiting patiently. Unfortunately, I'm not an MSA guy and do not have access to the manual.
The 10 and 30 percent criteria look like they are the bounds of marginally acceptable performance for a process or instrument or what-have-you. If the instrument(s) are measuring critical processes, then I would want a lower number (%). If it is a gauge measuring temperature in a pipe and the process needs ±20 °C, then 30% might be okay. If I need ±0.5 °C then I'd rather have the 10%; I think that represents a capability, and I'd definitely want a more capable gauge for the latter example.
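
To put rough numbers on that, here is a minimal sketch with made-up figures; I'm assuming the usual definition of %GRR as the 6-sigma measurement spread expressed against the tolerance width:

```python
# Minimal sketch: what a given %GRR means in absolute terms for two
# hypothetical tolerances. All numbers are illustrative.

def grr_spread(pct_grr: float, tol_lo: float, tol_hi: float) -> float:
    """Measurement-system spread (6 * sigma_GRR) implied by a %GRR
    expressed against the tolerance width."""
    tolerance = tol_hi - tol_lo
    return (pct_grr / 100.0) * tolerance

# Loose process: +/-20 degC tolerance, 30% GRR
print(grr_spread(30, -20, 20))    # 12.0 degC consumed by measurement spread

# Tight process: +/-0.5 degC tolerance, 10% GRR
print(grr_spread(10, -0.5, 0.5))  # 0.1 degC consumed by measurement spread
```

The same percentage translates into very different absolute measurement error, which is why the criticality of the application matters.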

There is always the risk of making a bad decision based on a bad measurement, and you have to weigh the cost of that decision against the rework or a new replacement gauge.

Hope I didn't muddy the waters any worse. My two cents.
 

Bev D

Heretical Statistician
Leader
Super Moderator
Does it mean that it can be acceptable to keep the old measurement system even though the R&R result is 25%, when it is too expensive to invest in a new measurement device or repair the old one?

The manual's reference to "the cost of rework or repair" means the cost of rework or repair of the product, not the gage itself. A measurement system with poor capability should be used only when the probability of a non-conforming event is small and/or the effect of such an event is also small.

Typically a poor measurement system is mitigated by guard bands on the specification limits, by trending on high-capability processes to detect shifts or drifts toward the spec limits, and/or by the use of multiple measurements per part.
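
A minimal sketch of the guard-band idea (my own illustrative numbers; the guard-band factor k is an assumption you would set from the risk you can tolerate):

```python
# Minimal sketch of guard banding: pull the acceptance limits inside the
# specification limits by a multiple of the measurement-system sigma, so a
# part must measure well inside spec before it is passed.
# All values below (spec limits, sigma, k) are hypothetical.

def guard_banded_limits(lsl: float, usl: float, sigma_ms: float, k: float = 2.0):
    """Tighten acceptance limits by k * measurement-system sigma on each side."""
    return lsl + k * sigma_ms, usl - k * sigma_ms

lsl, usl = 9.0, 11.0          # hypothetical specification limits
sigma_ms = 0.05               # hypothetical measurement-system standard deviation
accept_lo, accept_hi = guard_banded_limits(lsl, usl, sigma_ms)
print(accept_lo, accept_hi)   # 9.1 10.9 -- accept parts only inside these
```

The trade-off is that guard banding rejects some conforming product near the spec limits in exchange for protecting the customer from measurement error.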
 

John SB

Does it mean that it can be acceptable to keep the old measurement system even though the R&R result is 25%, when it is too expensive to invest in a new measurement device or repair the old one?
As others have already mentioned, there can be various reasons for accepting GR&R results between 10% and 30% (i.e., importance of the application, cost of the measurement device, cost of repair, etc.). First make sure that you truly cannot improve the results. Looking at the %EV and %AV, for instance, could show you that the operator variation is too large and needs to be reduced through better instructions, fixturing, etc. However, I have accepted numerous studies between 10% and 30%. You must complete a GR&R disposition statement that explains the results, the observations, and your disposition (i.e., redesign fixturing to reduce operator variation, retrain the operator, guard-band the manufacturing equipment, use as is, etc.). Be sure to do as you say and say as you do! Do not state that you will rework the fixture and redo the GR&R if you do not intend to do that.
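
For illustration, here is a minimal sketch of the AIAG average-and-range decomposition into EV and AV. All study numbers are made up, and the K1/K2 constants are the AIAG table values for 3 trials and 3 appraisers as I recall them; verify them against your copy of the manual:

```python
# Minimal sketch of the AIAG average-and-range GR&R decomposition.
# Study numbers are hypothetical; K1/K2 quoted from the AIAG tables
# (3 trials, 3 appraisers) -- verify against the manual before use.
import math

n_parts, n_trials = 10, 3
K1 = 0.5908        # constant for 3 trials
K2 = 0.5231        # constant for 3 appraisers

R_bar_bar = 0.04   # average range across appraisers (hypothetical)
X_diff = 0.025     # range of the appraiser averages (hypothetical)
tolerance = 1.0    # hypothetical total tolerance width

EV = R_bar_bar * K1                                         # repeatability (equipment)
AV = math.sqrt(max((X_diff * K2) ** 2
                   - EV ** 2 / (n_parts * n_trials), 0.0))  # reproducibility (appraiser)
GRR = math.sqrt(EV ** 2 + AV ** 2)

# Percent-of-tolerance figures: 6-sigma spread vs. tolerance width
for name, val in (("%EV", EV), ("%AV", AV), ("%GRR", GRR)):
    print(name, round(100 * 6 * val / tolerance, 1))  # 14.2, 7.4, 16.0
```

In this made-up case the repeatability (%EV) dominates, so better fixturing or instructions aimed at the operators would not be the first place to look; the equipment itself would be.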
 

Blue_Adams

The GR&R% should drive you to purchase a new measurement system only if the %EV points toward too much variation (gage repeatability) while the reproducibility is low. Please state how the numbers break down within the 25%.

As far as an "acceptable" GR&R goes, this number has to be decided by you based on the criticality of what you are measuring, the process capability, the end use, and the alpha risk. Think of the GR&R% as a percentage of the tolerance of the specification, and that should put things in perspective. Can you afford to lose 25% of the tolerance to measurement variation?
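
A minimal sketch of that framing, with made-up numbers:

```python
# Minimal sketch of the "tolerance consumed" framing (illustrative numbers):
# treat %GRR on a tolerance basis as the share of the tolerance band
# occupied by the 6-sigma measurement spread.
tolerance = 1.0   # hypothetical total tolerance width
pct_grr = 0.25    # 25% GR&R against the tolerance

consumed = pct_grr * tolerance
remaining = tolerance - consumed
print(f"Tolerance consumed by measurement spread: {consumed}")   # 0.25
print(f"Tolerance left for the process itself:    {remaining}")  # 0.75
```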
 

Bev D

Heretical Statistician
Leader
Super Moderator
Think of the GR&R% as a percentage of the tolerance of the specification, and that should put things in perspective. Can you afford to lose 25% of the tolerance to measurement variation?

I agree that we must understand the effect of the measurement error relative to the tolerance. However, the AIAG method doesn't do this in a mathematically correct way. It divides 6 × the measurement-error SD into the tolerance. Since the tolerance is on the same vector as the total observed variation, this is the equivalent of dividing one SD into another SD. The correct method is to divide the variances.

The AIAG method grossly overstates the 'percentage of the tolerance that is consumed by the measurement error'.

A very recent article by Dr. Donald Wheeler explains some of this: Comparing Gauge R&R Methods. I strongly recommend reading his work to truly understand measurement systems analysis...
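
A minimal numeric sketch of the difference (illustrative numbers only): if the GR&R standard deviation is 30% of the total observed standard deviation, the SD ratio reports 30%, while the variance ratio shows the measurement system actually accounts for only 9% of the observed variation.

```python
# Minimal sketch: SD ratio vs. variance ratio (illustrative numbers).
sigma_grr = 0.3    # hypothetical measurement-system standard deviation
sigma_total = 1.0  # hypothetical total observed standard deviation

sd_ratio = sigma_grr / sigma_total           # SD-based ratio: 0.30 -> "30%"
var_ratio = sigma_grr**2 / sigma_total**2    # variance-based:  0.09 -> "9%"

print(f"SD ratio:       {sd_ratio:.0%}")     # 30%
print(f"Variance ratio: {var_ratio:.0%}")    # 9%
```

Because variances, not standard deviations, add, the variance ratio is the figure that honestly describes how much of the observed variation the measurement system contributes.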
 