Say you have a part with a tolerance of .010", measured with a micrometer (.0001" resolution). The gage is only used for final inspection. You perform an Average-and-Range MSA study and see that the range between trials is either 0 or .0001", so UCLr comes out very small (under .0001", i.e., below the gage's resolution), and anything other than a zero range shows up as out of control.
Folks have mentioned that using a larger sample size would result in a better %R&R, but I don't understand how. If the range across an individual's trials is consistently 0 or .0001", is the idea just that adding more samples with a possible range of zero drives down Rbar? If Rbar goes down for each appraiser, then Rbarbar goes down, lowering UCLr further. If the gage is strictly used for final inspection, is it OK to ignore the calculated UCLr?
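To make the arithmetic concrete, here is a minimal sketch of the effect being described, using hypothetical range data (mostly zeros, a couple at the gage's .0001" resolution) and the standard range-chart formula UCLr = D4 × Rbar with D4 = 3.267 for 2 trials per part:

```python
# Hypothetical per-part ranges across 2 trials: mostly 0, a few at the
# gage's .0001" resolution. These values are illustrative, not real data.
ranges = [0.0, 0.0001, 0.0, 0.0, 0.0001, 0.0, 0.0, 0.0, 0.0, 0.0]

D4 = 3.267  # control-chart constant for subgroup size n = 2

r_bar = sum(ranges) / len(ranges)  # average range = 0.00002"
ucl_r = D4 * r_bar                 # range-chart upper control limit

print(f"Rbar = {r_bar:.6f} in")    # 0.000020
print(f"UCLr = {ucl_r:.6f} in")    # ~0.000065 — below the .0001" resolution
```

Since any nonzero range this gage can report is at least .0001", and UCLr here is about .000065", every nonzero range plots out of control. Adding more parts whose ranges are zero only pulls Rbar (and therefore UCLr) down further, which is the effect described above.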
The other question I'm wondering about is whether to use reference standards as the MSA parts instead of actual parts. I realize a standard doesn't represent the "part" element of the measurement system, but it would seem to be a more accurate way of determining gage effectiveness without part variation muddying up the results (think tapered ODs, out-of-round conditions, etc.). Using actual parts would then be the next step, to see how the GR&R on real parts compares to the GR&R on standards.
I've probably read 40 or so threads here that have been very helpful already, and am absorbing the MSA 3rd Edition and formulas. Thanks for the insight!
