Comparison of two pieces of test equipment

tohbin

I have a comparison problem with two pieces of test equipment. These two testers give different results for the same chips. For example, Tester 1 might classify a chip as Bin 2, but Tester 2 classifies the same chip as Bin 3. I wonder how I can set up a comparison test to show that these two testers give different results. Hope somebody can help out here. Thanks.

:confused:
 
D.Scott

The easiest way I know is to do proficiency testing. Send your samples to an independent third tester. Many labs use an outside lab to check the proficiency of their tests. If you only want to check this one time, use another tester in house or send the samples out.

Dave
 
tohbin

Hi Dave,
I thought of using a third, independent tester. But what if I want to show my results in a statistical way? Is there any method that can be used? (My data is attribute data.)
 
D.Scott

Tohbin

Without knowing anything about the test you are doing, I can't offer any suggestions for statistical analysis.

I am not sure what measurement would move a chip from Bin 2 to Bin 3, but if there is a numerical value established, that could provide statistical data.

Sorry I am not able to answer your question. Maybe some of the others who recognize the test might be able to help.

Dave
 
Dave Strouse

What you need ..

is an attribute gage R&R.
I'm assuming your test is not destructive. If it is, you can still do an R&R, but it gets messier.
Take a look at Juran's Handbook, 5th edition, section 23.51, "Measure of Inspector and Test Accuracy". This section outlines a plan for comparing the results of two inspectors (human in the example, but they can just as easily be your machines).

You will need a quantity of parts spanning the range of defects.

You will also need a "check" inspector, most likely a human or the outside third party that D. Scott mentioned. This check inspector will examine each part on test and decide its "good" versus "bad" status.

MINITAB software also has an attribute R&R routine as of Release 13. It's about $700 off the shelf, I think, but you can probably negotiate. There may be other software packages that do this.

If you really get hung up on the analysis, I could do it for you. I would want the results in some electronic format: an Excel spreadsheet or even a text file with good delimiters. My days of manual transcription from checksheets and charts are over, I hope!
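For readers who want to try the agreement calculations without MINITAB, here is a minimal sketch in Python. All the bin calls below are made-up illustration data, not results from the thread; it computes percent agreement of each tester against a "check inspector" standard, plus Cohen's kappa (agreement between the two testers corrected for chance), which is the core statistic behind attribute agreement analysis.

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of parts on which two appraisers give the same call."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_obs = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each rater's marginal category frequencies
    p_chance = sum(ca[c] * cb[c] for c in set(a) | set(b)) / (n * n)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical bin calls for 10 chips (illustration only)
tester1  = ["Bin2", "Bin2", "Bin3", "Bin2", "Bin1", "Bin3", "Bin2", "Bin2", "Bin3", "Bin1"]
tester2  = ["Bin3", "Bin2", "Bin3", "Bin3", "Bin1", "Bin3", "Bin2", "Bin3", "Bin3", "Bin1"]
standard = ["Bin2", "Bin2", "Bin3", "Bin2", "Bin1", "Bin3", "Bin2", "Bin2", "Bin3", "Bin1"]

print("Tester1 vs standard:", percent_agreement(tester1, standard))  # 1.0
print("Tester2 vs standard:", percent_agreement(tester2, standard))  # 0.7
print("Tester1 vs Tester2 :", percent_agreement(tester1, tester2))   # 0.7
print("Kappa (T1 vs T2)   :", round(cohens_kappa(tester1, tester2), 3))  # 0.559
```

A kappa near 1 means the testers agree far better than chance; a kappa near 0 means their agreement is no better than random binning. A real attribute gage R&R would also repeat each chip multiple times per tester to get a repeatability figure.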

:rolleyes:
 
Michel Saad

Tohbin,

A gage R&R is done to assess the capability of a measurement system to be repeatable, including all the variation due to parts, equipment, and appraisers. It is not the right tool to compare two pieces of equipment. A hypothesis test is what would be required if the data were variable. Since the data is attribute data, the answer is simple: the results have to be exactly the same. A part that is rejected into Bin 2 on machine 1 has to be rejected into Bin 2 on machine 2.
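The exact-match criterion described above is easy to check chip by chip. A minimal sketch (chip names and bin calls are hypothetical):

```python
# Hypothetical per-chip bin calls from the two machines
tester1 = {"chip01": "Bin2", "chip02": "Bin2", "chip03": "Bin3"}
tester2 = {"chip01": "Bin3", "chip02": "Bin2", "chip03": "Bin3"}

# Every chip whose bin call differs between the two machines
mismatches = {c: (tester1[c], tester2[c])
              for c in tester1 if tester1[c] != tester2[c]}

print(mismatches)  # {'chip01': ('Bin2', 'Bin3')}
```

Any non-empty mismatch dictionary means the two machines fail the exact-match criterion for those chips.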
 
Atul Khandekar

Tohbin,

As D.Scott said, a little more detail would help. What kind of test is this? What is the parameter being checked? How is pass/fail decision made? Could there be a calibration problem?
-Atul
 
Dave Strouse

Mind your R's

Michel -
You may remember the 3 R's from school: reading, 'riting and 'rithmetic. ALL must be learned.

Similarly, a Gage R&R has Repeatability and Reproducibility. Reproducibility is getting the same results using different operators (testers in this case), and is EXACTLY what Tohbin wants to know.

The attribute Gage R&R makes an assessment of the confidence intervals around the agreement of appraiser to appraiser (reproducibility), within appraiser (repeatability), and appraiser to standard (bias).

No appraiser, whether human or machine, will give exactly the same results every time. If only the world were that simple!
 
Atul Khandekar

That the testers are not able to reproduce results is quite clear. Are we asking which tester is right and which is wrong? Looking at it from a calibration angle, it is a good idea to obtain 'master' chips with 'good' and 'bad' status determined by a much better piece of test equipment, then have the two regular testers test them and see which one is giving wrong results. (Some kind of bias test?)
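The master-chip idea above can be sketched as a simple scoring exercise. Everything here is hypothetical illustration data: each tester's calls are compared against the known status of the master chips, giving a per-tester misclassification (bias) rate.

```python
# Known status of master chips, established by a better reference tester
master  = {"m1": "Good", "m2": "Bad", "m3": "Good", "m4": "Bad"}

# Hypothetical calls from the two regular testers on the same masters
tester1 = {"m1": "Good", "m2": "Bad",  "m3": "Good", "m4": "Bad"}
tester2 = {"m1": "Good", "m2": "Good", "m3": "Good", "m4": "Bad"}

def error_rate(calls, truth):
    """Fraction of master chips a tester misclassifies."""
    return sum(calls[k] != truth[k] for k in truth) / len(truth)

print("Tester1 error rate:", error_rate(tester1, master))  # 0.0
print("Tester2 error rate:", error_rate(tester2, master))  # 0.25
```

The tester with the higher error rate against the masters is the one more likely in need of calibration; a real study would use many more master chips spanning the bin boundaries.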

-Atul
 