What's wrong with this Shaft OD Gage R&R



Please advise what's wrong with this gage R&R.
Our customer asked us to provide a gage R&R study for the pins/shafts we produce, so we carried out the whole procedure according to the MSA Reference Manual.
The O.D. is an important characteristic for them, and we measure it with a Transameter.
We selected 10 samples and asked 3 operators to measure each part 3 times. All results are within the drawing tolerance and close to each other. The tolerance is 0.039 and the Transameter resolution is 0.001. The R&R we calculated is 69%. We tried with other samples and the result is also very high. Can anybody help? I have attached the results in a file.


  • OD R&R A1-1719.pdf (93.7 KB)
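For context, the Xbar/R (Average and Range) %GRR figure mentioned above can be computed as follows. This is a minimal sketch using the standard AIAG constants for 3 trials, 3 operators, and 10 parts; the measurement data below is made up for illustration and is not the attached study data.

```python
import statistics

# AIAG MSA Average-and-Range constants (standard table values)
K1 = 0.5908   # 3 trials
K2 = 0.5231   # 3 operators
K3 = 0.3146   # 10 parts

def grr_xbar_r(data, tolerance):
    """data[operator][part] = list of trial measurements for that cell."""
    n_parts = len(data[0])
    n_trials = len(data[0][0])

    # Repeatability (EV): average of the per-cell ranges
    ranges = [max(cell) - min(cell) for op in data for cell in op]
    r_bar = statistics.mean(ranges)
    ev = r_bar * K1

    # Reproducibility (AV): spread of the operator averages, EV removed
    op_means = [statistics.mean(m for cell in op for m in cell) for op in data]
    x_diff = max(op_means) - min(op_means)
    av_sq = (x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = max(av_sq, 0.0) ** 0.5          # clamp if EV dominates

    grr = (ev ** 2 + av ** 2) ** 0.5

    # Part variation (PV): range of the part averages
    part_means = [statistics.mean(m for op in data for m in op[p])
                  for p in range(n_parts)]
    pv = (max(part_means) - min(part_means)) * K3

    tv = (grr ** 2 + pv ** 2) ** 0.5
    return {"%GRR_study": 100 * grr / tv,          # vs. study variation
            "%GRR_tol": 100 * 6 * grr / tolerance}  # vs. tolerance
```

With identical measurements in every cell the function returns 0% for both metrics; with real data a large %GRR_study can come from either genuine gage error or (as discussed below) part-form variation entering the ranges.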


Trusted Information Resource
I don't believe your equipment is precise enough to ever get a good R&R number. You'll probably need something else. Good luck.


Forum Moderator
See attached file. First, don't be concerned by the slight differences in results. Your spreadsheet uses the Xbar/R method, while I used the ANOVA method, so differences are expected.

Your gage resolution is fine. There are 8 measurement increments within the UCL_R versus a minimum requirement of 5, and you only have one zero range vs. a maximum of 25%.
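The resolution check described above can be sketched as follows. D4 = 2.574 is the standard control chart constant for subgroups of 3; the range values in the test data are illustrative, not the actual study values.

```python
D4_N3 = 2.574  # control chart constant for subgroup size 3 (3 trials)

def resolution_ok(ranges, resolution):
    """ranges: the per-operator/part ranges plotted on the R chart.
    Checks the two rules of thumb: at least 5 measurement increments
    within UCL_R, and no more than 25% zero ranges."""
    r_bar = sum(ranges) / len(ranges)
    ucl_r = D4_N3 * r_bar
    increments = ucl_r / resolution          # possible distinct values below UCL_R
    zero_fraction = ranges.count(0) / len(ranges)
    return increments >= 5 and zero_fraction <= 0.25
```

For example, ten ranges averaging 0.0027 with a 0.001 resolution give UCL_R ≈ 0.0069, i.e. about 7 increments, which passes; a range list that is mostly zeros fails on both counts.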

I see two issues that would allow you to make some improvements.

The first issue is the difference in repeatability by operator. Look at the R Chart by Operator (outlined in red). The first operator has consistently high repeatability ranges, while the last operator has consistently lower ones. I recommend studying the differences between these operators and standardizing on the measurement method used by the last operator.

The second issue is the Operator by Part interaction. Notice that certain parts (outlined in green) are measured consistently the same by all operators while others (outlined in amber) are measured differently by each operator.

This is often caused by variation in the form of the part, or by a defect such as a burr, that is detected by one measurement method and missed by another. An example of this is an oval shaft. One operator may take a single measurement at a random orientation, so their measurement variation includes the variation caused by the ovality. Another operator may search for the max and min readings and average them, which removes this variation from the results. The green parts would be expected to have less variation in form (or fewer defects) than the amber parts.
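The ovality effect described above can be illustrated with a small simulation. All numbers here are hypothetical (a 10 mm shaft with 4 µm of out-of-roundness); the point is only that the "random orientation" method inherits the ovality as apparent gage error while the "average of max and min" method cancels it.

```python
import math
import random
import statistics

random.seed(42)
NOMINAL, OVALITY = 10.000, 0.004   # hypothetical shaft: 4 µm out-of-round

def diameter_at(angle):
    # Oval cross-section: OD varies sinusoidally with orientation (period 180°)
    return NOMINAL + (OVALITY / 2) * math.cos(2 * angle)

# Method A: one reading at a random orientation per part
random_method = [diameter_at(random.uniform(0, math.pi)) for _ in range(200)]

# Method B: average of the max and min readings (cancels ovality exactly)
avg_method = [(diameter_at(0) + diameter_at(math.pi / 2)) / 2 for _ in range(200)]

print(statistics.stdev(random_method))  # ovality shows up as apparent gage error
print(statistics.stdev(avg_method))     # 0.0: ovality removed
```

Both methods measure the same parts, yet only method A contributes the form variation to the R&R ranges, which matches the operator-by-part pattern in the charts.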


  • Elsmar RandR.pdf (88.5 KB)


Stop X-bar/R Madness!!
Trusted Information Resource
Did you mark the parts and measure at the same location? If not, you may pick up roundness or taper, which is not the fault of the measuring device and should not be considered intrinsic error of the gage. Also, does the process variation presented to the gage R&R (in your samples) represent the true variation over time? I find that using historical process variation (AIAG MSA 4th edition, bottom of page 121) gives a much more realistic evaluation.
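The historical-variation approach mentioned above can be sketched as follows: instead of estimating total variation from the 10 study parts, the %GRR is computed against an independent, historical estimate of the process standard deviation (the alternative referenced from AIAG MSA 4th ed., p. 121). The function name and inputs here are illustrative.

```python
def pct_grr_vs_historical(sigma_grr, sigma_historical):
    """%GRR with total variation taken from a historical estimate of
    process variation instead of the sampled study parts. sigma_grr is
    the combined repeatability-and-reproducibility standard deviation;
    sigma_historical is the long-run process standard deviation."""
    return 100.0 * sigma_grr / sigma_historical
```

For instance, a GRR sigma of 0.001 against a historical process sigma of 0.005 gives 20%, regardless of how much (or how little) part-to-part spread happened to be in the 10 sampled parts.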


Thank you all for your help and for analyzing my data. I agree. The difference in measurement results might be caused by variation in form (ovality, roundness). These parts are produced on a CNC machine (very repeatable and precise), and the outside diameter is the parameter important to the customer, while the drawing says nothing about form. Although we do not check ovality during the process, for this case we did check it and measured several parts. The ovality is not significant, in the range of a few microns, but considering the gage discrimination it is measurable. Compared to the tolerance it is almost nothing. So I understand that the R&R results are mainly caused by variation in shape. We "trust" the appraisers. In this case, how can I convince the customer that this measuring system is OK? Shall we mark the parts and have the appraisers measure at the same point?


Trusted Information Resource
Whatever changes (improvements) you implement during the analysis, you also have to implement under serial production conditions. I would drop this system, but maybe I am too pessimistic.