Gage R&R - Consistent Readings, Inconsistent Results?


john allen

While conducting a gage R&R study, I ran into this problem.

The conditions were:

1. The USL and LSL are 0.940" and 0.936" respectively.

2. The least count (LC) of the vernier caliper used to measure this dimension was 0.0001".

When I conducted the study with two operators and two trials each, almost all the readings came out at 0.9380", and a couple of readings were either 0.9385" or 0.9390". Under these conditions, the R&R came to 91.91%. I am surprised to see this, given that the readings were so consistent.

I would be grateful to receive your views.

John Allen


John –

Page 39 of the MSA manual discusses the general preparations for an MSA study. Point four on that page states: "The sample parts must be selected from the process and represent its entire operating range." This is a very critical point.

The R&R calculation compares the operator variation and the equipment variation to the total variation. In the data you present, you are only reflecting 0.001" of total variation within a 0.004" specification. This sounds a bit fishy to me; you might not be selecting parts that represent the entire operating range.
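To make that concrete, here is a minimal sketch of the %GRR-to-total-variation ratio. The helper name and the sigma values are hypothetical, chosen only to show the effect: the same gage spread looks near-total when the sampled parts barely vary, and modest when the parts span the process range.

```python
import math

def pct_grr_of_total(ev, av, pv):
    """%GRR against total variation (hypothetical helper).

    ev: equipment variation (repeatability) sigma
    av: appraiser variation (reproducibility) sigma
    pv: part-to-part variation sigma
    """
    grr = math.sqrt(ev**2 + av**2)          # combined gage variation
    tv = math.sqrt(grr**2 + pv**2)          # total observed variation
    return 100.0 * grr / tv

# Illustrative sigmas only (not John's actual study data):
# a gage spread of ~0.0005" dominates when parts span only ~0.0002" ...
print(pct_grr_of_total(ev=0.0005, av=0.0001, pv=0.0002))   # > 90%
# ... but looks reasonable when parts span most of the 0.004" tolerance.
print(pct_grr_of_total(ev=0.0005, av=0.0001, pv=0.0015))   # < 40%
```

The formula is the same in both calls; only the part-to-part term changes, which is exactly why sampling across the full operating range matters so much.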

I would recommend two things.

1) Find product that represents the entire operating range of your process (specification). This might take several days. If your process only varies by 0.001", consider point #2.

2) Consider the alternative calculation listed on page 60 of the MSA manual. The alternative analysis is based on the tolerance instead of the process variation.

Should you choose the alternative calculation, review and familiarize yourself with all the facts. Be prepared to defend your calculation method and to communicate it clearly and concisely to your customer. The alternative method is the exception rather than the norm, but keep in mind: it is an acceptable method under the MSA guidelines.
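For reference, the tolerance-based alternative can be sketched as follows. The helper name is hypothetical; the spread multiplier is an assumption to flag (recent MSA editions use 6.0, older editions used 5.15), and the gage sigma below is illustrative rather than taken from the study.

```python
import math

def pct_grr_of_tolerance(ev, av, usl, lsl, k=6.0):
    """%GRR against tolerance, the P/T-style alternative (hypothetical helper).

    k spreads the gage sigma into a full width: 6.0 in recent MSA
    editions; older editions used 5.15.
    """
    grr = math.sqrt(ev**2 + av**2)
    return 100.0 * k * grr / (usl - lsl)

# Illustrative gage sigma of 0.0002" against the 0.936"-0.940" spec:
print(pct_grr_of_tolerance(ev=0.0002, av=0.0, usl=0.940, lsl=0.936))  # ~30%
```

Note the part-to-part variation does not appear at all, which is precisely why this metric survives a study whose sampled parts barely vary.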

Take a look at the options and let us know what you think. Any other opinions out there?

KenK - 2009

That's the funny thing about GR&Rs. A really bad gage can produce just a few distinct data values, because it can't resolve the differences between the parts measured. This makes it look like the variation is near zero.

Yet another reason to simply plot your data and "look" at them. No matter what the data are, I always keep an eye out for just a few distinct data values - a common sign of an inadequate gage.
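A quick way to run that check is simply to count the distinct values. The tally below assumes the counts from the original post (20 readings, almost all at 0.9380", one each at 0.9385" and 0.9390"); the exact split is illustrative.

```python
from collections import Counter

# Readings as described in the original post (exact counts assumed):
readings = [0.9380] * 18 + [0.9385, 0.9390]

counts = Counter(readings)
print(sorted(counts.items()))   # only 3 distinct values in 20 readings
print(len(counts))

# Rule of thumb: 5 or more distinct categories is desirable;
# 3 or fewer is a common sign of inadequate gage resolution.
```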



As others have rightly pointed out, the proximity of your readings is probably due to insufficient resolution, or to the parts being too similar.

But may I add a point about calculating %R&R w.r.t. tolerance. This method can only prove that your measurement system is good for "inspection". It cannot qualify your system for process analysis (i.e., SPC, Cpk calculation, etc.). For process analysis, R&R should be compared to the total observed variation.




In the automotive and QS world, the tolerance method is an acceptable basis for GR&R. Granted, this method is more the exception than the norm, and I cautioned its use. However, I have actually seen circumstances in which this method was more practical than the traditional "total observed" method.

Keep an open mind. A lot of great minds and time went into the MSA manual. In the world of quality, there are no cut-and-dried circumstances. You have to choose what is best for you, your product, and your business.

Just my opinion, but I am sticking to it.

KenK - 2009

Yes, you are all correct, for the most part. MSAFAI points out an important difference between the two GR&R metrics (which we call P/P and P/T: precision-to-process and precision-to-tolerance).

Clearly the gage has poor enough resolution that for the most part it cannot distinguish the part-to-part variation.

Yes, the AIAG MSA Reference Manual allows use of the P/T ratio, but it clearly recommends use of the P/P ratio over the P/T ratio (in Section 2, Measurement Discrimination).

Yes, the gage appears adequate to judge whether the process is getting close to the edge of the specification. The actual method that might be used for judging that is not clear to me.

The real problem is that this gage will not allow estimation of the process variation, which is used both to create limits for control charts and to calculate process capability metrics. These are two key tools for maintaining process integrity and control.
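To illustrate why that matters, here is a sketch of the standard Cpk formula with illustrative (not measured) sigma values. A resolution-starved gage drives the estimated sigma toward zero, so the computed Cpk can look excellent while saying nothing about the real process.

```python
def cpk(mean, sigma, usl, lsl):
    """Standard Cpk: distance to the nearer spec limit over 3 sigma.
    Meaningless if sigma is an artifact of the gage's resolution."""
    return min(usl - mean, mean - lsl) / (3.0 * sigma)

# Illustrative sigmas against the 0.936"-0.940" spec, centered at 0.938":
# a sigma flattened by poor resolution makes capability look great ...
print(cpk(0.938, 0.0002, usl=0.940, lsl=0.936))   # ~3.3
# ... while the true process spread might tell a different story.
print(cpk(0.938, 0.0008, usl=0.940, lsl=0.936))   # ~0.8
```

The formula itself is uncontroversial; the point is that both control limits and Cpk inherit whatever error the gage injects into the sigma estimate.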

This also completely prevents any kind of improvement based upon continuous data (which requires substantially smaller sample sizes than attribute, good-vs-bad, data). This could limit or prevent continuous improvement efforts.

It is hard for me to imagine that any process engineer would find such a gage acceptable. If this is a very non-critical part, I suppose its quality could be deemed unimportant.


Dear Ken,

Thank you for the explanation.

How do you say that in English ... " You took the words out of my mouth "

Thanks again.


Dear all,
Purely for academic interest, I want to make two comments.
Firstly, achieving a 0.0001" least count with a vernier is not possible in practice. For a 0.0001" least count you have to use at least a micrometer. Looking at the tolerance, a micrometer is the appropriate instrument for this application.

Secondly, for controlling the dimension within the 0.004" tolerance you may employ turning, grinding, or any other suitable process. But when you obtain the GRR, the ground parts' GRR will be higher than the turned parts' GRR, because the process variation in grinding is much lower than in turning.
The lesson here is to calculate GRR with reference to the drawing tolerance; then there will not be much difference between the GRR results.