[please excuse any redundant questions. I composed this prior to finding this thread]
Hi All,
First off, this forum is a fantastic resource for information... I’ve only been perusing the threads for a day or so but I’ve learned a lot of new things already. By way of background, I’m new to my company and have essentially made a career change from PC-based instrument design to test platform development and analysis. That said, I’m sailing uncharted waters and am in need of some advice regarding a starting point.
One of the first buzzwords I heard when I arrived at my new job was "Gage R&R." Right now there’s no identifiable control or understood capability of the ATE that is used to test our semiconductor products. Being told to go "do a GR&R" is fine, but what does that really mean and what will it buy me? Is it really the analysis I need?
The problem as I see it:
I’m concerned that there may be a misunderstanding of what GR&R will provide. The product specifications are starting to push the specified accuracy of the ATE in some areas, in particular voltage and frequency measurements. On paper, the specified accuracy is generally at least 10x better than the parameters of interest, so I believe we’re following the 10% rule.
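To make that 10% rule concrete, here is a back-of-the-envelope check of the test accuracy ratio (TAR). The spec window and tester accuracy figures below are purely hypothetical illustration values, not from any real datasheet:

```python
# Hypothetical numbers: product spec window vs. tester's specified accuracy.
spec_tolerance_mV = 100.0    # e.g. a +/-50 mV window on a measured voltage
tester_accuracy_mV = 10.0    # e.g. +/-5 mV specified measurement accuracy

# Test accuracy ratio: tolerance divided by instrument accuracy.
# A TAR of 10:1 or better is the classic "10% rule" of thumb.
tar = spec_tolerance_mV / tester_accuracy_mV
print(f"Test accuracy ratio: {tar:.1f}:1")
```

When the product spec tightens so that the window shrinks toward the tester’s own accuracy spec, this ratio falls below 10:1, which is exactly the situation I’m worried about.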
I’m concerned that people here want a GR&R to see how far we can push the ATE beyond specification. I believe this is incorrect and not the point of a GR&R. GR&R will only tell us how our measurement system performs from device to device and operator to operator. It tells us nothing about absolute accuracy. I believe that’s better suited to a full MSA.
Am I correct in saying that a well-constructed MSA will include a GR&R, but a GR&R by itself will not tell you anything about pushing a tester beyond its specification? Even if there is capability beyond the specs, verifying and guaranteeing that performance becomes an issue.
Some general questions:
1. Since a piece of ATE is composed of a number of measuring instruments, I would think that GR&R/MSA would be performed on each instrument, or at least those considered critical. (That may be obvious, but I had to ask.) You don’t qualify a tester with GR&R; you qualify the components of that overall tester that are critical to the parameters being measured.
2. It seems to me, although more involved, that the correct approach may be to perform an MSA on the instruments in question. That way we’ll be forced to understand stability, bias, and linearity before attempting the GR&R.
3. From all I’ve read, the reproducibility part of the GR&R involves operators. Since the tests are performed with automatic part handlers under program control, I’m wondering where the reproducibility part fits in. Yes, a human sets everything up but then basically watches the automatic test of 10K devices. I suppose the handler itself could have some influence on the measurement, but I’m having a tough time conceptualizing where this fits in the overall R&R.
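For what it’s worth, here is a minimal sketch of how the repeatability/reproducibility split falls out of a crossed study if the "appraiser" role is taken by something like two handler/setup configurations instead of human operators. All the readings below are made-up illustration data, and the variance-component estimates are the simple method-of-moments kind, not the full AIAG ANOVA treatment:

```python
import statistics

# Hypothetical crossed study: 2 handler/setup configurations ("A", "B")
# standing in for appraisers, 3 parts, 2 trials per cell.
# readings[(setup, part)] = repeated trials of the same part on the same setup.
readings = {
    ("A", 1): [10.0, 10.2], ("A", 2): [10.4, 10.6], ("A", 3): [10.8, 11.0],
    ("B", 1): [10.1, 10.3], ("B", 2): [10.5, 10.7], ("B", 3): [10.9, 11.1],
}
setups = sorted({s for s, _ in readings})
parts = sorted({p for _, p in readings})
n_trials = len(next(iter(readings.values())))

# Repeatability (EV): average within-cell variance -- same setup, same part.
ev_var = statistics.mean(statistics.variance(v) for v in readings.values())

# Reproducibility (AV): variance between setup means, less the share of
# repeatability noise carried into each setup mean (clamped at zero).
setup_means = [
    statistics.mean([x for (s, _), v in readings.items() if s == su for x in v])
    for su in setups
]
av_var = max(statistics.variance(setup_means) - ev_var / (len(parts) * n_trials), 0.0)

grr_var = ev_var + av_var
print(f"repeatability variance  = {ev_var:.5f}")
print(f"reproducibility variance = {av_var:.5f}")
print(f"GRR sigma = {grr_var ** 0.5:.4f}")
```

If the two setups really are interchangeable, the reproducibility term should collapse toward zero and the GR&R reduces to pure repeatability, which would answer my question about where reproducibility "lives" in a fully automated test cell.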
4. Is anyone out there in the semiconductor test business who has performed this sort of exercise? I’d be most interested in understanding your experience and findings with regard to this application. I searched the web far and wide for GR&R/MSA specifically as it applies to ATE, and I’m surprised at the lack of available information.
Thank you for being patient with my long-winded introduction to this forum. And yes, I have the Fluke “Calibration: Philosophy in Practice” and the AIAG “MSA” books on order. Both are due this week.
Best regards to all,
Jim