Hello to the group.
Does anyone have experience with expected GRR results for standard measuring equipment? What would be a reasonable GRR result for a 0-1" micrometer on a feature with a total tolerance of .002"?
We performed a GRR on a rather small part and got 17%, which our customer is not satisfied with. They want less than 10%.
Is that an unreasonable requirement?
Thanks,
As others have indicated, there could be many reasons for the results you got. It would be best if you could post your data and let us try to see what's going on.
< 10% would indicate that the variation is in the parts, not the operator or equipment. Anything over 10% would indicate errors in the system that may or may not need to be corrected, depending on importance (since the customer is not accepting 17%, it should be deemed important here).
17% would indicate some operator or equipment error, and you may need to evaluate your results to determine which. It may also indicate that the operators were not measuring in the same location or not using the same technique.
There could be many things going on with the OP's study and data, but a result <10% isn't necessarily an indication that only part variation is at work, nor is the 17% result necessarily an indication that there is operator or equipment error.
How many operators were used?
How many parts were measured?
How many measurement trials were done on each part?
Did all operators measure in the same location on the same part?
All good questions.
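To make those questions concrete: in the average-and-range GR&R method (as described in the AIAG MSA manual), the operator, part, and trial counts feed directly into the %GRR arithmetic. Here is a minimal Python sketch with made-up micrometer readings and only a small subset of the K-constant tables; it is an illustration, not the OP's data or a full implementation:

```python
import math

# 1/d2* constants from the AIAG MSA tables (a small subset; the full
# tables cover more trial, operator, and part counts)
K1 = {2: 0.8862, 3: 0.5908}              # keyed by number of trials
K2 = {2: 0.7071, 3: 0.5231}              # keyed by number of operators
K3 = {3: 0.5231, 5: 0.4030, 10: 0.3146}  # keyed by number of parts

def grr_average_range(data, tolerance):
    """Average-and-range GR&R.

    data[o][p][t] = reading by operator o on part p, trial t.
    Returns sigma-level EV, AV, GRR, PV, TV plus the two common ratios.
    """
    n_ops, n_parts, n_trials = len(data), len(data[0]), len(data[0][0])

    # Repeatability (equipment variation): average within-cell range
    ranges = [max(cell) - min(cell) for op in data for cell in op]
    ev = (sum(ranges) / len(ranges)) * K1[n_trials]

    # Reproducibility (appraiser variation): spread of operator means,
    # corrected for the repeatability already contained in those means
    op_means = [sum(m for cell in op for m in cell) / (n_parts * n_trials)
                for op in data]
    x_diff = max(op_means) - min(op_means)
    av = math.sqrt(max((x_diff * K2[n_ops]) ** 2
                       - ev ** 2 / (n_parts * n_trials), 0.0))

    grr = math.sqrt(ev ** 2 + av ** 2)

    # Part variation: range of part averages over all operators and trials
    part_means = [sum(m for op in data for m in op[p]) / (n_ops * n_trials)
                  for p in range(n_parts)]
    pv = (max(part_means) - min(part_means)) * K3[n_parts]
    tv = math.sqrt(grr ** 2 + pv ** 2)

    return {
        "EV": ev, "AV": av, "GRR": grr, "PV": pv, "TV": tv,
        "pct_study": 100 * grr / tv,                 # %GRR of study variation
        "pct_tolerance": 100 * 6 * grr / tolerance,  # percent of tolerance
    }

# Illustrative study: 2 operators x 3 parts x 3 trials, .002" tolerance.
# With these made-up readings, repeatability dominates the gage error.
data = [
    [[0.5001, 0.5002, 0.5001], [0.5008, 0.5009, 0.5008], [0.4995, 0.4994, 0.4995]],
    [[0.5002, 0.5001, 0.5002], [0.5009, 0.5008, 0.5009], [0.4994, 0.4995, 0.4994]],
]
result = grr_average_range(data, tolerance=0.002)
```

Note that the function reports two different percentages; which one a customer means by "less than 10%" is worth pinning down before reacting to a 17% result.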
I would look at using a mic stand to better control the process and to ensure that repeatable pressure is applied to the thimble. If the gage does not have a ratchet stop, use one that does. Do you really need to measure your customer's parts? Since the GRR evaluates the measurement process, I have told my customers that I do not need to measure their parts for the GRR. Some accepted that explanation .... some did not. Good Luck!
It would be good to use a stand only if that's the way the parts are normally measured. GR&R, and MSA in general, is a method for determining whether a given measurement system is appropriate for a specific measurement task, which means that it should always be performed on the parts and the feature that are to be measured in production.
I have some additional questions about the 10 parts that you used for your gage study.
Were these 10 parts all "in tolerance"? Put another way: if you took the difference between the average of the largest part and the average of the smallest part, how does it compare to your .002" tolerance?
In my own experience, I have attempted a Gage R&R with parts all inside the tolerance and come up with similarly unsatisfactory results.
For the purposes of a Gage R&R you really need parts that cover the full range of your tolerance and then some. The purpose of the study is to determine whether your measurement process can tell you if a part is good or bad. If all you measure in the study are "good parts," you have no way of knowing whether your gage can tell the difference between a good part and a bad one.
So, did you include some "bad parts" in your Gage R&R?
Unless one's doing an attributes study, there is no need to include bad parts. The purpose of GR&R is NOT to determine whether a measurement process can tell you if a part is good or bad. Again, that's only for attributes studies. In fact, depending on the purpose of the measurement system, it might not even be necessary for the measured parts to represent the full range of process variation.
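One reason the two views above can both sound right is that the two common GRR ratios use different denominators: %study variation compares the gage to the total observed variation, so the part spread in the study drives it, while percent of tolerance compares the gage only to the spec width. A quick sketch with hypothetical sigma values (none of these numbers come from the OP's study):

```python
import math

def grr_ratios(sigma_grr, sigma_parts, tolerance):
    """Judge the same gage two ways (all inputs are sigma estimates)."""
    total = math.sqrt(sigma_grr ** 2 + sigma_parts ** 2)
    pct_study = 100 * sigma_grr / total        # %GRR of total study variation
    pct_tol = 100 * 6 * sigma_grr / tolerance  # percent of tolerance (P/T)
    return pct_study, pct_tol

# Same hypothetical gage (sigma = .00005") against a .002" tolerance,
# studied once with a narrow part spread and once with a wide one
narrow = grr_ratios(0.00005, sigma_parts=0.0001, tolerance=0.002)
wide = grr_ratios(0.00005, sigma_parts=0.0005, tolerance=0.002)
# percent of tolerance is identical in both runs; %study variation is not
```

The same gage can look unacceptable or acceptable on %study variation purely because of which parts were sampled, which is why it matters what the denominator of a "17%" or "10%" figure actually is.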
I commend Miner's excellent blog series on MSA to all of you.