Hello!
Statistics is not my strong suit, so I would appreciate all the knowledge bombs.
I did a GRR on a drop indicator that measures the counter-bore depth of a tube.
The total %StudyVar was 29.89%. I was previously getting almost 38%, but I identified the set-up of the inspection/part apparatus as a significant source of variation. After fixing it I expected the GRR to come in below 10%; however, it is still higher than ideal.
The data is attached.
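In case it helps anyone check the math, here is a minimal Python sketch of how the ANOVA method arrives at %StudyVar. The readings below are made up (a hypothetical 3 operators × 10 parts × 3 trials crossed study); substitute the attached data to reproduce the actual figure.

```python
import numpy as np

# Hypothetical crossed GRR layout: data[operator, part, trial].
# Made-up readings only; replace with the attached indicator data.
rng = np.random.default_rng(0)
part_means = rng.normal(5.0, 0.05, size=10)                 # part-to-part variation
data = part_means[None, :, None] + rng.normal(0, 0.01, size=(3, 10, 3))

o, p, t = data.shape
grand = data.mean()

# Two-way ANOVA mean squares: parts, operators, interaction, repeatability
ms_part = o * t * ((data.mean(axis=(0, 2)) - grand) ** 2).sum() / (p - 1)
ms_oper = p * t * ((data.mean(axis=(1, 2)) - grand) ** 2).sum() / (o - 1)
cell = data.mean(axis=2)                                    # operator-by-part cell means
ms_int = t * ((cell - data.mean(axis=(1, 2))[:, None]
               - data.mean(axis=(0, 2))[None, :] + grand) ** 2).sum() / ((o - 1) * (p - 1))
ms_rep = ((data - cell[:, :, None]) ** 2).sum() / (o * p * (t - 1))

# Variance components (negative estimates truncated to zero)
var_rep = ms_rep
var_int = max((ms_int - ms_rep) / t, 0.0)
var_oper = max((ms_oper - ms_int) / (p * t), 0.0)
var_part = max((ms_part - ms_int) / (o * t), 0.0)

var_grr = var_rep + var_oper + var_int
var_total = var_grr + var_part
pct_studyvar = 100 * np.sqrt(var_grr / var_total)           # ratio of study variations (6*sigma cancels)
print(f"%StudyVar (Total Gage R&R) = {pct_studyvar:.2f}%")
```

One thing worth noting when interpreting it: %StudyVar is a ratio of standard deviations, not variances, so a 29.89% %StudyVar means the gage accounts for only about 0.2989² ≈ 8.9% of the total variance.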
Our acceptance criteria allow GRRs of 11-30% to be accepted with a justification. I will be using the gage purely for inspection (i.e., to ensure parts are within their tolerances).
Can someone help me interpret these results? I don't know what else I can do to reduce the variation further, and I don't believe we have a better method available.
Thank you!