Gage R&R Study
I recently embarked upon a journey to write a course (a PowerPoint presentation) to teach fledgling companies how to perform Gage R&R studies.
In my endeavor to do so, I consulted the chat rooms of the net and pulled off a discussion that said:
"A typical Gage R&R study involves three operators taking two measurements of ten "identical" parts. The unlabeled parts should be passed to the operator in random order so that there is no bias when performing the measurements. It is important that the operator is not aware of which part they are measuring so that no attempt can be made to try to replicate the previous measurement.
After all of the measurements have been made, a series of calculations is performed to determine the % R&R. This value reflects how effective the measurement system is. A % R&R of less than 30% is considered to be acceptable by most industry standards."
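To make the quoted "series of calculations" concrete, here is a rough Python sketch of the average-and-range method as I understand it from the AIAG MSA manual. The data layout and the K1/K2/K3 constants are my own assumptions for the typical study described above (3 operators, 2 trials, 10 parts), not something taken from the quoted discussion; a real study should pull the constants from the MSA tables for its own counts:

    import math

    # data[op][part] is the list of one operator's trials on one part,
    # e.g. data = [[[0.65, 0.60], ...], ...]  (assumed layout)
    K1 = 0.8862   # 1/d2* for 2 trials (AIAG MSA table value)
    K2 = 0.5231   # 1/d2* for 3 operators
    K3 = 0.3146   # 1/d2* for 10 parts

    def gage_rr(data):
        n_ops = len(data)
        n_parts = len(data[0])
        n_trials = len(data[0][0])

        # Repeatability (EV): average range of repeated trials.
        r_bars = [sum(max(t) - min(t) for t in op) / n_parts for op in data]
        ev = (sum(r_bars) / n_ops) * K1

        # Reproducibility (AV): spread of the operator averages, less the
        # repeatability already contained in that spread.
        op_means = [sum(sum(t) for t in op) / (n_parts * n_trials)
                    for op in data]
        x_diff = max(op_means) - min(op_means)
        av = math.sqrt(max((x_diff * K2) ** 2
                           - ev ** 2 / (n_parts * n_trials), 0.0))

        grr = math.sqrt(ev ** 2 + av ** 2)

        # Part variation (PV): range of the part averages.
        part_means = [sum(sum(data[op][p]) for op in range(n_ops))
                      / (n_ops * n_trials) for p in range(n_parts)]
        pv = (max(part_means) - min(part_means)) * K3

        tv = math.sqrt(grr ** 2 + pv ** 2)   # total variation in the study
        return 100.0 * grr / tv              # %R&R

Note that the sketch figures %R&R against total study variation; the percentage can also be figured against the tolerance, which gives a different number.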
When I learned to do R&R studies, I believe in accordance with AIAG, I understood that the ten parts selected from a manufacturing process were to be labeled 1-10, with a spot marked on each part indicating where the measurement was to be taken, in order to isolate gage system error from part process error!
It seems that if the parts were measured at random, in random order from operator to operator, this would introduce a sizable amount of variation into the equation, making the grand averages quite high and causing the resulting percentages to escalate! Also, 10% is the acceptable limit, not 30%. 11-30% is acceptable ONLY based upon the application; in other words, it has to be monitored closely. 31% and above would be unacceptable under any conditions.
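Just to spell out the acceptance bands as I learned them (not the 30% figure from the quote), they would map to something like this:

    def classify_percent_rr(pct):
        # AIAG guidance as I learned it: under 10% is acceptable; 10-30% is
        # acceptable ONLY based on the application and must be monitored
        # closely; over 30% is unacceptable under any conditions.
        if pct < 10.0:
            return "acceptable"
        if pct <= 30.0:
            return "marginal - acceptable only based on the application"
        return "unacceptable"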
The above-mentioned excerpt came from Ingersoll-Rand's Athens, PA plant, which produces power tools.
If you have any comments on, or a resolution to, either my understanding or theirs of how the R&R is to be conducted, please comment!
Dave S.