Hi,
So, I know there are a bunch of threads on this topic, but many of them contain disagreements or differing methods and opinions, so perhaps this new one can reach a consensus or at least further the discussion.
Situation and thoughts....
Situation: Let's say you're tasked with doing a GRR, or otherwise characterizing the acceptability of an automated measurement system used in discrete manufacturing (D-MFG), i.e. understanding how "good" the measurement systems are. By D-MFG I mean final end-of-line testing that ultimately results in PASS/FAIL, where the verdict is based on continuous data: say three different specifications are measured in UOM1, UOM2, and UOM3 (UOM = unit of measure). There is no human interaction.
Would you do this using a series of Type 1 gage studies on the three continuous data streams from the automated end-of-line tester? One part measured 50 times against each of the three specs, using 10% of tolerance for Cg/Cgk (the more conservative basis) with a standard-deviation multiplier of 6 for criticality. You would repeat this for UOM1, UOM2, and UOM3, and based on Cg, Cgk, and bias you could characterize how good the automated test station is?
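For anyone who wants to sanity-check the arithmetic, here is a minimal sketch of the Type 1 calculation as I understand it, assuming the common textbook formulas with K = 10% of tolerance and a 6-sigma study spread. The function name, the parameter names, and the example readings are all hypothetical, not from any particular tool:

```python
import statistics

def type1_gage(measurements, lsl, usl, reference, k_percent=10, sd_mult=6):
    """Type 1 gage study: Cg/Cgk from repeated measurements of one reference part.

    k_percent: share of the tolerance allotted to the gage (10% is the
               conservative basis; 20% is also commonly used).
    sd_mult:   study-variation multiplier (6 covers ~99.73% of a normal spread).
    """
    tol = usl - lsl
    xbar = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    bias = xbar - reference                      # gage average vs. reference value
    cg = (k_percent / 100 * tol) / (sd_mult * s)              # potential capability
    cgk = (k_percent / 100 * tol / 2 - abs(bias)) / (sd_mult / 2 * s)  # incl. bias
    return cg, cgk, bias

# Hypothetical example: 50 readings on one part, spec 9.0-11.0, reference 10.0
readings = [10.0 + 0.01 * ((i * 7) % 5 - 2) for i in range(50)]
cg, cgk, bias = type1_gage(readings, lsl=9.0, usl=11.0, reference=10.0)
```

With the usual acceptance rule (Cg and Cgk ≥ 1.33), Cgk can only be lower than Cg, since it subtracts the bias term, so a large gap between the two points at bias rather than repeatability.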
Now let's say there are three test stations. If Type 1 is the way to go, would you run it on each station, giving three sets of UOM data per station? That would be 3 stations × 3 UOMs = 9 Type 1 gage studies using the same parts?
OR
would you do an attribute GRR with 20 parts (10 good, 10 bad) and two replicates on each automated station? The results would include the alpha and beta error percentages and the screen vs. effectiveness percentages. This method basically looks at the discrete output of the continuous measurements: if all three tests pass, the station reports PASS; if any one of the three tests fails, it reports FAIL, and the attribute study records a fail.
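To make the alpha/beta terminology concrete, here is a small sketch of how those error rates fall out of the pass/fail tallies. Everything here (function name, data layout, the example trial results) is hypothetical and just illustrates the counting, not any specific software's method:

```python
def attribute_error_rates(results):
    """results: list of (true_status, observed_status) pairs, 'pass' or 'fail'.

    Returns (alpha, beta):
      alpha = P(fail | part is good)  -- false alarm / producer's risk
      beta  = P(pass | part is bad)   -- miss rate / consumer's risk
    """
    good = [(t, o) for t, o in results if t == 'pass']
    bad = [(t, o) for t, o in results if t == 'fail']
    alpha = sum(o == 'fail' for _, o in good) / len(good) if good else 0.0
    beta = sum(o == 'pass' for _, o in bad) / len(bad) if bad else 0.0
    return alpha, beta

# Hypothetical example: 10 good + 10 bad parts, 2 replicates on one station.
# One good part is flagged fail once; one bad part slips through once.
trials = ([('pass', 'pass')] * 19 + [('pass', 'fail')]
          + [('fail', 'fail')] * 19 + [('fail', 'pass')])
alpha, beta = attribute_error_rates(trials)
# alpha = 1/20 = 0.05, beta = 1/20 = 0.05
```

With multiple stations you would pool the trials per station (or overall) the same way; beta is usually the one to watch, since passing a bad part is the riskier escape.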
OR
would you do a GRR with 10 samples, 3 trials, and 3 stations, treating the automated test stations as the appraisers, so reproducibility reflects the automation equipment rather than human operators? So basically, run a normal Gage R&R, but with the stations standing in for the operators. I'm not sure about the validity of this, since the other component (EV, repeatability) already accounts for equipment variation...
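For what it's worth, the split that option implies can be sketched like this. This is a simplified variance-components illustration, not the full AIAG ANOVA or average-and-range method, and the function name and synthetic data are my own invention: repeatability (EV) comes from repeats within each part/station cell, and station-to-station reproducibility (AV) comes from the spread of the station averages.

```python
import statistics
from itertools import product

def simple_grr(data):
    """data[part][station] = list of repeat readings.

    Simplified variance-components sketch (NOT the full AIAG ANOVA):
      EV (repeatability)    -> pooled within-cell standard deviation
      AV (reproducibility)  -> spread of each station's overall average
    """
    parts = sorted(data)
    stations = sorted(data[parts[0]])
    # Repeatability: pool the variance of repeats within each part/station cell
    cell_vars = [statistics.variance(data[p][s]) for p, s in product(parts, stations)]
    ev = (sum(cell_vars) / len(cell_vars)) ** 0.5
    # Reproducibility: spread of the per-station grand averages
    station_means = [statistics.mean([x for p in parts for x in data[p][s]])
                     for s in stations]
    av = statistics.pstdev(station_means)
    grr = (ev ** 2 + av ** 2) ** 0.5
    return ev, av, grr

# Hypothetical example: 2 parts x 3 stations x 3 repeats of synthetic readings,
# with a deliberate station-to-station offset of 0.1 per station
data = {p: {s: [10 + p + 0.1 * s + 0.01 * t for t in range(3)]
            for s in range(3)}
        for p in range(2)}
ev, av, grr = simple_grr(data)
```

If AV comes out near zero, the stations agree and the question collapses back to per-station repeatability, which is essentially what a set of Type 1 studies would have told you anyway.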
I think the idea is to qualify, using statistics, that the equipment is valid beyond just calibration and IQ/OQ/PQ, and then track yields at each step.
I have seen suppliers simply run a Type 1 gage study, or a series of them, call it a day, and declare the equipment statistically sound...