Gage R&R with ATE (Automatic Test Equipment) - Newbie here...


polymechman

I’m new to the forum and a novice at MSA. I’ve been tasked with assessing gage R&R on an inventory of automated testers (ATE) that we use for assessing a variety of parameters within the devices we build. I’ve searched and want to start a fresh thread to address tying specific things together I haven’t seen related in one post…

I have run the GRR study and would like to throw some ideas out for discussion.
Setup:
-7 testers, 24 different parameters captured in one test run (Voltage, Timing, and Freq.)
-10 units, selected at random – no way of forcing a spread across the production (especially specification) tolerances
-3 runs each, in randomized order, on all 7 testers.
-Operator independent – at least for the measured values. If they hook it up right and we don’t get a COM error, the designers tell me the results will be operator independent.

When I run the Minitab Crossed GRR study, some of the parameters have issues – particularly the voltage measurements. The results file from the DAQ card is rounded to two decimal places (.XX), and the range is only about 12x that rounded resolution (think .45-.55). When we plug it in, NDC = 1 and %SV/TOL is ~30% with sigma = 6. We’re confused and wanted to run this by some experts…

If I were to increase the NDC, would I get better results? Why or why not?
If I were to avoid rounding, would I get better results? Why or why not?
We’re confused because the range of 6 of the testers is .51-.52, while the 7th reads .53-.54. When we remove the 7th, rogue tester, we still get ~25% with a seemingly stable range. Any thoughts?
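As an aside, the effect the DAQ rounding has on a study like this can be sketched in a few lines. The voltages below are made-up placeholders, not values from the actual study:

```python
import statistics

# Hypothetical readings: the tester may resolve finer structure internally,
# but the datalog rounds to two decimal places (.XX).
true_vals = [0.5123, 0.5147, 0.5168, 0.5189, 0.5212, 0.5234]
rounded = [round(v, 2) for v in true_vals]

print(sorted(set(true_vals)))  # six distinct readings survive
print(sorted(set(rounded)))    # only [0.51, 0.52] remain after rounding

# The spread of the rounded data is dominated by the 0.01 quantization step,
# not by the underlying part-to-part or tester-to-tester variation.
print(statistics.stdev(true_vals), statistics.stdev(rounded))
```

With only two attainable values, a GRR study cannot distinguish more than one or two categories of parts, regardless of how good the tester itself is.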

Any light that the community could shed on the overall theory as we’re trying to use it here would be greatly appreciated. What am I missing? If I’ve not given enough information, please let me know!

 

mclayton

Re: Gage R&R with ATE - newbie here...

Quickie answer, more later.
Your datalogging system should use higher resolution if the precision of the measurement system is to be PROVEN offline. The tester likely has high resolution, but that is masked by the data output system. Send me details on your datalog system's capability – it may be limited by the tester OS version, or dumbed down to avoid impacting tester capacity.

One way to get around the tester capacity issue is SAMPLE datalogging – for example, 1 of every 50 parts for high-volume batches.

Send details to the blind email in my profile, with a data example, for analysis with other MSA software (using the ANOVA or REML method rather than the Range method), and we can compare results with different levels of precision in the datalog. If you can tell me the ATE system OS version and vendor, that may help.
 

Miner

Forum Moderator
Please attach an Excel file with one of your problem data sets. Include the specifications and (if available) an independent measure of the process variation such as from a capability study.

Since you are a novice at MSA, I recommend reviewing my blog on MSA.
 

polymechman

Here is an Excel file to load into Minitab for the test in question. Unfortunately, I do not have a process capability sheet for the same parameter. Can you shed light on how I might use that in conjunction? :confused:

I will be checking your blog now. Thanks for the responses!
 

Attachments

  • 1ma EXCELxls.xls
    24 KB · Views: 468

polymechman

Please attach an Excel file with one of your problem data sets. Include the specifications and (if available) an independent measure of the process variation such as from a capability study.

Since you are a novice at MSA, I recommend reviewing my blog on MSA.


Miner - :agree: Nice to meet you....
I've reviewed and have some clarification that might get us to where we're heading...

1) We're using this gage to compare parts to spec. Does NDC still matter? Do I need a spread? In Entry 5a you mention that if the gage is used for part inspection alone, the selection of parts doesn't contribute to the analysis. How does Minitab exclude this? Or how do I look at the results to exclude it?
2) I often wonder what to enter for the standard deviations in Minitab. AIAG says 5.15 or 6, depending on the edition. You also mention, in Entry 5a, that you should use your actual standard deviation whenever you can. Are you referring to the entry in the options of the Crossed Gage R&R in Minitab – replacing 6 or 5.15 with an actual SD? Is this the SD of a run of the same parts on the same equipment, or the SD of the process? The confusion comes from mixing process monitoring and part inspection in the same post. Please help!
3) In your Entry 5b you mention alpha risk. Where is this found, or how do I derive it, in the GRR study?

Really helpful so far! I am so pleased to have found you and the rest of this forum as a resource!! :D
 

Miner

Forum Moderator
Here is an Excel file to load into Minitab for the test in question. Unfortunately, I do not have a process capability sheet for the same parameter. Can you shed light on how I might use that in conjunction?
I will analyze the data soon. See answer to 2) below regarding process capability.

1) We're using this gage to compare parts to spec. Does NDC still matter? Do I need a spread? In Entry 5a you mention that if the gage is used for part inspection alone, the selection of parts doesn't contribute to the analysis. How does Minitab exclude this? Or how do I look at the results to exclude it?
If the gauge is only used to compare parts to spec, focus on % Tolerance; ndc and % Study Variation are not relevant. The selection of parts does not matter either. Minitab automatically calculates all of the metrics; the only one you can select to show/hide is % Contribution. You must determine which metrics are relevant and ignore those that are not.
2) I often wonder what to enter for the standard deviations in Minitab. AIAG says 5.15 or 6, depending on the edition. You also mention, in Entry 5a, that you should use your actual standard deviation whenever you can. Are you referring to the entry in the options of the Crossed Gage R&R in Minitab – replacing 6 or 5.15 with an actual SD? Is this the SD of a run of the same parts on the same equipment, or the SD of the process? The confusion comes from mixing process monitoring and part inspection in the same post. Please help!
I recommend using the 6 standard deviations. This is the current standard for MSA. This is different from the actual process standard deviation. When you enter the historical process standard deviation, Minitab will calculate one additional metric, called % Process. If the gauge is used for process control, the % Process metric would be a more reliable metric than % Study Variation in assessing the gauge suitability.
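For reference, the arithmetic behind these metrics can be sketched in a few lines of Python, per the usual AIAG conventions. All the numeric values below are invented placeholders loosely echoing the thread (a 0.45-0.55 V tolerance), not values from the attached file:

```python
def pct_tolerance(sigma_grr, usl, lsl, k=6.0):
    # AIAG % Tolerance: k * sigma_GRR as a share of the tolerance width
    return 100.0 * k * sigma_grr / (usl - lsl)

def pct_process(sigma_grr, sigma_process):
    # % Process: gauge sigma relative to the historical process sigma
    # (the k multipliers on each sigma cancel out)
    return 100.0 * sigma_grr / sigma_process

def ndc(sigma_parts, sigma_grr):
    # Number of distinct categories: 1.41 * sigma_parts / sigma_GRR, truncated
    return int(1.41 * sigma_parts / sigma_grr)

print(pct_tolerance(0.005, usl=0.55, lsl=0.45))  # 30.0 -> ~30% of tolerance
print(ndc(0.004, 0.005))                         # 1 -> ndc of 1, as in the thread
print(pct_process(0.005, 0.020))                 # 25.0 -> ~25% of process spread
```

Note how ndc depends on the part sigma while % Tolerance does not, which is why part selection matters for the former but not the latter.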

3) In your Entry 5b you mention alpha risk. Where is this found, or how do I derive it, in the GRR study?
Alpha risk is defined by you. It is the risk you are willing to accept of mistakenly calling a result significant when in reality it is not. Alpha is typically set at 0.05, but can range from 0.01 to 0.1: 0.01 is used when the consequence of a mistake is high, 0.1 when it is low. The p-value is compared to alpha: if the p-value is <= alpha, the effect is significant; if it is > alpha, the effect is not.

Note that Minitab uses an alpha of 0.25 to assess the operator*part interaction unless you override it.
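The decision rule above is mechanical enough to state in code. The p-values here are placeholders for illustration, not values from the study:

```python
def is_significant(p_value, alpha=0.05):
    # Compare the test's p-value to the chosen alpha risk
    return p_value <= alpha

print(is_significant(0.03))              # True: significant at the usual alpha = 0.05
print(is_significant(0.06))              # False: not significant at alpha = 0.05
print(is_significant(0.18, alpha=0.25))  # True: Minitab's default 0.25 for the
                                         # operator*part interaction would retain it
```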
 

Miner

Forum Moderator
Your primary issue is the resolution of the reported data. Using Wheeler's techniques, the measurement increment should fall between 0.0003 and 0.0028. Therefore, reporting to 0.001 is recommended.

This should be addressed before reading too much more into your analysis. The current analysis shows differences between testers, but this may change when you change the number of decimal places.
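A sketch of the Wheeler-style arithmetic behind bounds like those, as I understand it: the probable error of a single measurement is PE = 0.675 * sigma_error, and the rule of thumb is that a useful measurement increment lies between roughly 0.22*PE and 2.2*PE. The sigma_error value below is an assumed placeholder, not derived from the attached data:

```python
def increment_bounds(sigma_error):
    # Probable error of a single measurement (Wheeler): PE = 0.675 * sigma_error
    pe = 0.675 * sigma_error
    # Rule of thumb: useful increments fall between ~0.22*PE and ~2.2*PE
    return 0.22 * pe, 2.2 * pe

lo, hi = increment_bounds(0.002)   # sigma_error assumed for illustration only
print(round(lo, 4), round(hi, 4))  # roughly 0.0003 to 0.003
```

An increment coarser than the upper bound quantizes away real variation (as the 0.01 rounding does here); one much finer than the lower bound adds digits without adding information.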
 