I work in a testing lab; my daily tasks include receiving inspection of our incoming raw material. We were asked to perform a gage R&R study on some of our tests. Since we are not sampling randomly from a process and are just doing incoming inspection, the variation between samples is not large. But I believe I can still get useful information, such as the P/T ratio, out of the gage R&R.
However, I was told by my supervisor to report only a single result: "gage R&R standard deviation / overall average". His assumption is that the test variation comes from repeatability and reproducibility, which I believe is fine. He wants to use the ratio of the gage R&R standard deviation to the overall average to predict the uncertainty of a test (in the form of a 95% confidence interval, obtained by multiplying that number by a factor), which I am not comfortable with. In my experience with ASTM and ISO, there are dedicated procedures for establishing the precision or repeatability of a test method, rather than using gage R&R. Also, gage R&R is, in my opinion, not meant for uncertainty. Usually, a single test method covers different grades or levels of product (e.g., a diameter measurement applies to both 1 mm grade wire and 5 mm grade wire). My concern is that, although the number he wants to report is simply a ratio, the gage R&R standard deviation ratio is only applicable to the tested grade, not across all grades.
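To make the two quantities concrete, here is a minimal sketch of how each would be computed; all numbers (the variance components, mean, and tolerance) are made up for illustration, not from any real study:

```python
import math

# Hypothetical variance components from a gage R&R study (assumed values)
sigma_repeat = 0.010   # repeatability (equipment variation) std dev
sigma_repro = 0.005    # reproducibility (appraiser variation) std dev
sigma_grr = math.sqrt(sigma_repeat**2 + sigma_repro**2)

overall_mean = 1.00    # overall average of the measured grade (assumed)
tolerance = 0.12       # tolerance width USL - LSL (assumed)

# P/T ratio: fraction of the tolerance consumed by measurement variation
pt_ratio = 6 * sigma_grr / tolerance

# My supervisor's metric: a %CV-style ratio of GRR std dev to the mean,
# which he then wants to scale "by a factor" into a 95% interval
cv_grr = sigma_grr / overall_mean
half_width_95 = 1.96 * sigma_grr  # half-width, assuming normality

print(f"sigma_GRR = {sigma_grr:.4f}")
print(f"P/T ratio = {pt_ratio:.3f}")
print(f"GRR/mean ratio = {cv_grr:.4f}")
print(f"95% half-width = {half_width_95:.4f}")
```

Note that both ratios depend on the scale of the parts actually measured (the mean and tolerance of that grade), which is exactly why I doubt the number carries over to other grades.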
Could anyone comment on using gage r&r to predict precision or uncertainty of a test? Is that valid? Any suggestion is greatly appreciated.