Gage R&R with NDC=1

Breadandwater

Registered
Hello,

First time poster but I've been reading useful information on here for a while. So here goes. Thanks for the help.

I ran a gage R&R study using 3 operators × 30 parts × 3 trials per operator. The NDC is 1. I believe the issue lies in the low part-to-part variation, but this is something I cannot control, as we receive these samples from our supplier in bulk. The spec for the part is 7.3–8.1 mm, but most of the sample variation is between 7.7 and 7.8 mm. Below are the results from a Minitab gage R&R; I've also attached an Excel file with the raw data.

Has anyone been in a similar situation before? What solution or justification could you recommend? Thanks again.

Gage R&R

Variance Components

Source             VarComp    %Contribution (of VarComp)
Total Gage R&R     0.0011293     47.28
  Repeatability    0.0009111     38.14
  Reproducibility  0.0002182      9.14
    Operator       0.0002182      9.14
Part-To-Part       0.0012591     52.72
Total Variation    0.0023884    100.00

Process tolerance = 0.8
Gage Evaluation

Source             StdDev (SD)  Study Var (6 × SD)  %Study Var (%SV)  %Tolerance (SV/Toler)
Total Gage R&R      0.0336046       0.201628             68.76              25.20
  Repeatability     0.0301837       0.181102             61.76              22.64
  Reproducibility   0.0147721       0.088633             30.23              11.08
    Operator        0.0147721       0.088633             30.23              11.08
Part-To-Part        0.0354843       0.212906             72.61              26.61
Total Variation     0.0488713       0.293228            100.00              36.65
Number of Distinct Categories = 1
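For anyone checking the arithmetic, the headline figures above can be reproduced directly from the variance components. A minimal sketch, assuming the standard AIAG formulas (6 × SD for study variation, and ndc = 1.41 × part SD / GR&R SD, truncated to an integer):

```python
import math

# Variance components from the Minitab output above
var_repeat = 0.0009111   # repeatability
var_repro  = 0.0002182   # reproducibility (operator)
var_part   = 0.0012591   # part-to-part
tolerance  = 0.8         # spec width: 8.1 - 7.3 mm

var_grr = var_repeat + var_repro     # total gage R&R variance
var_total = var_grr + var_part

sd_grr = math.sqrt(var_grr)
sd_part = math.sqrt(var_part)
sd_total = math.sqrt(var_total)

pct_study_var = 100 * sd_grr / sd_total       # %SV: about 68.8
pct_tolerance = 100 * 6 * sd_grr / tolerance  # %Tolerance: about 25.2
ndc = int(1.41 * sd_part / sd_grr)            # truncates to 1
```

Note that the ndc only just misses 2: 1.41 × 0.0355 / 0.0336 ≈ 1.49 before truncation.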



Attachments

  • Gage r&r example.xlsx
    80.1 KB

Ninja

Looking for Reality
Trusted Information Resource
Wait for Miner or BevD...but...

NDC = 1 typically means that your gage (or the way your gage is used) is coarse enough that it cannot adequately discern the variation within the spec range.
Do you have a more accurate (discerning) gage to make these measurements? One with ~10x the resolution of the one used above?
 

Welshwizard

Involved In Discussions
Hello Breadandwater,

If you follow the AIAG rules, they will effectively force you to look for a measurement device with a better ability to detect part variation. IF these parts have been chosen in a way which reflects process variation (not sorted to spec), then by my reckoning around 55% of the variation in the study comes from the parts, with 45% coming from the measurement process.
If you were tracking the quality of these parts on a production line using a process behaviour chart, the signals you would detect would most likely come from the production process, based on what I can see.

Your measurement process can detect down to 0.020 mm half of the time (the probable error), and your tolerance width is 0.8 mm, so you have ample leeway to detect against spec. The third decimal place is pure noise based on this study: your measurement process can usefully detect increments between 0.004 mm and 0.04 mm, and your recorded increments are 0.001 mm, so you could drop a decimal place.
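The probable-error arithmetic above can be sketched as follows (assuming Wheeler's EMP constants: PE = 0.675 × repeatability SD, with effective increments between 0.2 × PE and 2 × PE; the SD is taken from the Minitab output):

```python
# Repeatability SD from the study
sd_repeat = 0.0301837

pe = 0.675 * sd_repeat     # probable error, about 0.020 mm
lower = 0.2 * pe           # smallest useful increment, about 0.004 mm
upper = 2.0 * pe           # largest useful increment, about 0.041 mm

recorded_increment = 0.001  # data recorded to 0.001 mm: finer than needed
```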

Despite the warnings of a lack of consistency (breaching the control limits for operator 2), there is no sign of any detectable systematic difference in repeatability between the operators. There is a detectable difference in operator averages (reproducibility), and although it is small, it wouldn't be difficult to understand why operator 3's averages are lower than the others' (measurement technique?) and claim a small benefit against the AIAG method.

To sum up: if you follow the AIAG rules and hang your hat on the mystical ndc, you should be questioning the contribution due to parts; alternatively, you would be encouraged to look for a different measurement process/method. It would seem in this instance that the police will be off your back with regard to the % tolerance, if the 30% maximum is being dictated. It really does depend on whether you are using the measurement process to sentence parts and/or to discern part variation.

Cheers
 

John Predmore

Trusted Information Resource
The purpose for a GR&R study is to assess a measurement system's ability to discern variation in the parts. If there is minimal variation in a sample of parts, you really can't say you have shown how well the gauge can discern differences. You may need to sample from different batches of material or different environmental conditions, before you see the full range of expected part variation. If it is true there really is minimal variation in the parts, then maybe the GR&R question for part inspection is moot. If the purpose of the gauge is to catch that one in a hundred batch of mixed parts or that one in a thousand outlier parts, you might have to seed in a few outlier parts before the GR&R study can demonstrate the ability to catch them.
 

Miner

Forum Moderator
Leader
Admin
@Breadandwater My first question is this: How are you using this gage?
  • If you are using it for inspection of the supplier's product, ndc is irrelevant. You are interested in the % Tolerance, not ndc or % Study Variation. The % Tolerance is 25.2%, which may be acceptable depending on your needs.
  • If you are using it for SPC (or other statistical purposes), then you are interested in ndc and/or % Study Variation. Based on the R chart above, your gage's resolution is fine. However, the biggest issue is the repeatability (within operator variation). Operator 3 has a lower level of variation than the other two. I recommend studying their technique and training the others in that method.
What is the characteristic being measured? Could within-part variation be contributing to the issue?
 

Miner

Forum Moderator
Leader
Admin
The purpose for a GR&R study is to assess a measurement system's ability to discern variation in the parts. If there is minimal variation in a sample of parts, you really can't say you have shown how well the gauge can discern differences. You may need to sample from different batches of material or different environmental conditions, before you see the full range of expected part variation. If it is true there really is minimal variation in the parts, then maybe the GR&R question for part inspection is moot. If the purpose of the gauge is to catch that one in a hundred batch of mixed parts or that one in a thousand outlier parts, you might have to seed in a few outlier parts before the GR&R study can demonstrate the ability to catch them.
Be careful with this. If the gage is used for inspection, I don't have a problem with your recommendation. However, if a gage is used for SPC instead, I have a big problem as this will artificially inflate the study variation and lead to incorrect results.
 

Welshwizard

Involved In Discussions
Miner and John,

Just so I'm clear, and assuming that these parts have been chosen in a rational way.

From the original study:

Part variance = 0.00125 mm²
Measurement variance = 0.0009 mm²
Estimated measurement standard deviation = 0.030 mm

Computing the Intraclass Correlation (ICC) yields 0.00125 / (0.00125 + 0.0009) = 0.58

An ICC of 0.58 classifies the measurement process as a Third Class Monitor.

These monitors have a better than 91% chance of detecting a three-standard-error shift using all four detection rules together.

They have a limited ability to quantify process improvements; however, the following maximum capability is possible using this measurement process with regular checks for consistency:

Cp 20 = (tolerance band + one increment) / (6 × 1.118 × estimated standard deviation)
      = (0.8 + 0.001) / (6 × 1.118 × 0.030)
      = 3.98

This means that this measurement process, used carefully and taking the current parts as a reasonable estimate of part variation, can monitor and track improvements in this process up to a Cp of about 3.98. Beyond this point the measurement process would have to be replaced, as it would have degraded to an ICC of 0.2, which in turn would mean the data is dominated by measurement noise.
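Under the same assumptions, the ICC and capability ceiling work out as follows (a sketch using the rounded variances quoted above; 1.118 is the constant from the EMP method as described):

```python
var_part = 0.00125   # part variance, mm^2
var_meas = 0.0009    # measurement variance, mm^2
sd_meas  = 0.030     # measurement standard deviation, mm
increment = 0.001    # recorded measurement increment, mm

icc = var_part / (var_part + var_meas)              # about 0.58: Third Class Monitor
cp_max = (0.8 + increment) / (6 * 1.118 * sd_meas)  # about 3.98
```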

I'm sure you are both aware of this, but I attach a paper for reference which explains this approach. Dr. Wheeler's book "EMP III" (ISBN 0-945320-67-1) explains it robustly and thoroughly.

This approach was written to enable measurement systems to be used where there are relatively large contributions from both measurement and part variation. It was also written to counter and supersede the use of the ndc, which was never meant to be used in the context it has been for over thirty years. There is quite a lot of authority behind this: Don Wheeler and David Chambers actually introduced what they called the "Classification Ratio" to try to explain and classify what could be seen plotted on the average chart in a typical study like this. Don later developed the use of the ICC to provide a much better estimate of the usefulness of a measurement process and to stop any confusion arising.
 

Attachments

  • DJW222.pdf
    86 KB

Breadandwater

Registered
Thanks for the replies.

To clarify, this is for inspection and release of parts only, not for SPC. The parts we tested are all from one lot sent to us and were randomly chosen from that lot. I expect the variation within the lot itself to be very small. These parts are also already inspected, with defects removed, before we receive them.

In the case of inspection only, it seems ndc and % study variation should not carry as much weight as % tolerance.

But looking forward, would it be safe to assume that designing a gage study with parts drawn from different lots, to increase part-to-part variation, would improve the gage R&R results?
 

Miner

Forum Moderator
Leader
Admin
To clarify, this is for inspection and release of parts only, not for SPC. The parts we tested are all from one lot sent to us and were randomly chosen from that lot. I expect the variation within the lot itself to be very small. These parts are also already inspected, with defects removed, before we receive them.

In the case of inspection only, it seems ndc and % study variation should not carry as much weight as % tolerance.

For inspection use only, ndc and % Study variation should carry zero weight. Only % Tolerance is relevant.

But looking forward, would it be safe to assume that designing a gage study with parts drawn from different lots, to increase part-to-part variation, would improve the gage R&R results?

For inspection use only, the selection of samples does not impact the results. % Tolerance uses the repeatability and reproducibility divided by the tolerance; part variation is not part of the calculation. Including multiple lots will not hurt, but it doesn't really add anything either. Samples spanning a wide range of the tolerance would be better used in a linearity study.
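To illustrate why sample selection drops out: % Tolerance depends only on the GR&R standard deviation and the tolerance, while % Study Variation depends on the parts chosen. A small sketch, using the GR&R SD from this study and a hypothetical tenfold increase in part variance:

```python
import math

sd_grr = 0.0336046   # gage R&R SD from the study
tolerance = 0.8

results = []
for var_part in (0.00126, 0.0126):   # study value vs. hypothetical 10x
    sd_total = math.sqrt(sd_grr**2 + var_part)
    pct_tolerance = 100 * 6 * sd_grr / tolerance  # independent of the parts
    pct_study_var = 100 * sd_grr / sd_total       # shrinks as parts vary more
    results.append((pct_tolerance, pct_study_var))
```

The % Tolerance comes out identical in both cases; only % Study Variation changes.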

What did you think about my question about within-part variation?
 

Miner

Forum Moderator
Leader
Admin
Miner and John,

Just so I'm clear, and assuming that these parts have been chosen in a rational way.


Computing the Intraclass Correlation (ICC) yields 0.00125 / (0.00125 + 0.0009) = 0.58

An ICC of 0.58 classifies the measurement process as a Third Class Monitor.

These monitors have a better than 91% chance of detecting a three-standard-error shift using all four detection rules together.

They have a limited ability to quantify process improvements; however, the following maximum capability is possible using this measurement process with regular checks for consistency:

Cp 20 = (tolerance band + one increment) / (6 × 1.118 × estimated standard deviation)
      = (0.8 + 0.001) / (6 × 1.118 × 0.030)
      = 3.98

This means that this measurement process, used carefully and taking the current parts as a reasonable estimate of part variation, can monitor and track improvements in this process up to a Cp of about 3.98. Beyond this point the measurement process would have to be replaced, as it would have degraded to an ICC of 0.2, which in turn would mean the data is dominated by measurement noise.

@Welshwizard You are correct, and I am a fan of Dr. Wheeler's EMP approach as well as BevD's Youden plots. I tend to stay focused on the specific questions asked by the OP for two reasons: one, most of the time they are required by their customers to use the AIAG approach; and two, I am afraid of further confusing them by adding another method unless I get an indication that they are open to a different approach.
 