Gage R&R is not a tool - it is a toolbox. It tries to do many things all at one time. Many have claimed - with some degree of accuracy - that it doesn't do any specific thing very well. It can suffer further from people who plug and chug but never really analyze the results. Bev has done a good job of looking beyond the "results" page - which I highly recommend. However, in spite of its critics, it does offer some very good guidance when used with some thought.
NDC is a valuable tool because it is the tool to fight those people who believe that if the resolution of the gage is in the right range, it is good enough. I have dealt with many operators - and tool makers - who say that if the spec is +/- .001 and the tool reads to .001, it is good enough. That doesn't even meet the 10:1 rule. So it is far from the truth - but how do you show it?
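To make the 10:1 point concrete - this is just my arithmetic, using the +/- .001 spec from the example:

```python
spec_band = 0.002        # total tolerance band for a +/- .001 spec
resolution = 0.001       # smallest increment the tool can read

# How many resolution-sized steps fit across the tolerance band.
increments = spec_band / resolution

# The 10:1 rule of thumb: the resolution should be no coarser than
# one tenth of the tolerance band.
max_resolution = spec_band / 10

print(increments)        # only ~2 steps across the whole band
print(max_resolution)    # the tool would need to read to .0002
```

Two steps across the band means a part can swing from one edge of the tolerance to the other and the gage barely notices.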
MSA tries to show it by dividing the part variation by the GRR. However, it assumes that you have presented to the test the full range of variation from your process. You may have heard people say that you must do this to make the GRR valid. True, it helps - but quite frankly you cannot always do it. When you are doing a GRR for a PPAP, you only have one lot; a world of variation is out there waiting for you in the future - or you live a blessed life. Even if you have a wide presentation of variation, do 10 parts represent a process capability study? Let's keep this tool in perspective!
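For reference, the MSA manual computes ndc as 1.41 × (PV / GRR), where PV and GRR are the part-variation and gage R&R standard deviations from the study. A minimal sketch - the numbers here are made up for illustration:

```python
def ndc(pv: float, grr: float) -> int:
    """AIAG MSA number of distinct categories: 1.41 * (PV / GRR),
    truncated to an integer."""
    return int(1.41 * pv / grr)

# Illustrative values only - note that PV comes from the 10 parts you
# happened to present, which is exactly the weakness discussed above.
print(ndc(pv=0.0035, grr=0.001))  # 4
```

Garbage in, garbage out: if those 10 parts don't span the process, the PV term - and therefore the ndc - is understated.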
So, what can you do?
The GRR tool does provide clues as to what the operators will measure when they get a group of parts. It will capture that level of error - and that error is the error of interest. Rather than analyzing that error against the part variation you happened to present, you may want to analyze it against the expected part variation.
Careful now...we are leaving the MSA ndc calculation...the purists may want to close their eyes here:
Assumption 1: the process distribution is normal
Assumption 2: the 10-piece sample represents the distribution
At this point, both assumptions are weak. It would be handy to have a capability study. Now, here is one of the many chicken-and-egg dilemmas: you do not want to run a capability study without knowing the gage is acceptable! But if you did a capability study and took your 10 samples from its range, you would have a better-developed GRR study. Ever think of that?
Anyway, with those assumptions in place, you would have control limits (from an X-mR individuals chart, assuming the parts were measured in order) of 2.4607 and 2.4629. That span is .0022. Divide it by the GRR of .001 and you have an NDC of about 2. You need an NDC of 10 for SPC (10 distinct categories - 5 above the mean, 5 below the mean) to be of any value. Otherwise, you will not pick up runs, etc., within the expected performance band of the control limits.
Not doing SPC? Let's look at the specification. You have a .020 tolerance; divided by the .001 GRR, that gives you an NDC of 20. Not bad there. So the gage will detect issues within the tolerance, but it is not acceptable for control purposes.
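Putting the two checks side by side - same numbers as above, the variable names are just my shorthand:

```python
grr = 0.001
cl_span = 2.4629 - 2.4607    # width between the control limits above
tolerance = 0.020            # total tolerance from the spec

ndc_spc = cl_span / grr      # ~2: too coarse to control the process
ndc_tol = tolerance / grr    # 20: plenty to judge parts against the spec

print(f"for SPC: {ndc_spc:.1f}, against tolerance: {ndc_tol:.0f}")
```

Same gage, same GRR - the verdict changes entirely depending on which span of interest you compare it to.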
That would be my call.