Attribute Gage R&R for Repair Station Operators - AOI (Automated Optical Inspection)


hemix81

Hello!

Do you have any suggestions on how to do an attribute gage R&R for the repair station that follows AOI (automated optical inspection)? AOI visually inspects PCBs for defects and saves the information to a database.

Many operators inspect the defects AOI has flagged, and they have to fix and confirm them. We would like to know whether there is a large difference in how operators detect defects.

There is a quality specialist who is very talented; maybe she should check the status (accept/reject) of the samples first. Sources differ on what a suitable number of samples is. Let's assume that 4 operators would participate in the attribute gage R&R, and each of them would evaluate the samples twice. What would be a good number of samples? Some articles say it has to be 20-30, but some examples use only 10-15 samples.

Appraisers shall not know that they are performing a test, because we want results that don't differ much from normal inspection operation. They also shouldn't fix and confirm the defects, since the others will inspect the same samples. The problem is how to perform the test without telling them about it. :confused:

Do you think that go/no-go data is the most appropriate, or can we use an ordinal attribute gage? It is possible to classify defects as missing component, short circuit, etc. I would say it is quite difficult to define the severity of a defect, e.g. on a scale from 1 to 5. What do you think? If AOI says that a solder joint is missing, you either agree or disagree with it. :rolleyes:

Thanks in advance. :)
Heidi
 
Heidi,

sorry for the late response :bigwave:

Checking the samples first is a good approach for tracking difficulties, and maybe you'll see differences between operators at this point. It won't be possible to draw probabilistic conclusions about the size of the differences (e.g. that operator decisions differ significantly at the 5% level), but you could see whether there are differences at all between the inspectors or the parts.

I see hardly any chance of running a GRR without letting the appraisers know that they are in one. The usual procedure is for a single appraiser to detect a defect and fix it in one pass, so you will have to explain why they must change their standard operating procedure. You can stress that you are not interested in catching a particular operator's faulty decisions, but rather in (possible) differences in decisions across all operators and pieces.

So to perform a GRR you have to set up a plan representing the whole system: operators (e.g. both highly experienced ones and newbies), pieces, and methods. For different methods (and perhaps also for different kinds of parts) you have to run separate GRRs to track the variability contained in each method (and kind of part).

The more trials you run, the more information you'll get about your measurement system. On the other hand, the cost increases with every trial, and maybe your inspection procedure does not contain much variability at all. How many trials make a good GRR depends on your specific situation and the characteristics of the whole system (number of operators, parts, defects, and so on). The guidelines of 20-30 samples or 10-15 samples can be an appropriate starting point, but only if they fit your particular inspection situation (and only for go/no-go measurements).
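To make the trial idea concrete, here is a minimal sketch of the kind of summary such a study produces. The appraiser names, decisions, and reference calls are entirely made up for illustration; it just shows within-appraiser agreement (repeatability across the two trials) and agreement with a reference standard for go/no-go calls.

```python
# Hypothetical attribute GRR summary: 4 appraisers, 2 trials each,
# go/no-go decisions (1 = reject) on the same 5 samples.
# All data below is invented for illustration only.

# decisions[appraiser] = [trial 1 calls, trial 2 calls], one call per sample
decisions = {
    "A": [[1, 0, 1, 0, 1], [1, 0, 1, 1, 1]],
    "B": [[1, 0, 0, 0, 1], [1, 0, 0, 0, 1]],
    "C": [[1, 1, 1, 0, 1], [1, 0, 1, 0, 1]],
    "D": [[1, 0, 1, 0, 0], [1, 0, 1, 0, 1]],
}
reference = [1, 0, 1, 0, 1]  # e.g. the quality specialist's calls

def agreement(a, b):
    """Fraction of samples on which two series of calls agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

for name, (trial1, trial2) in decisions.items():
    within = agreement(trial1, trial2)     # repeatability across trials
    vs_ref = agreement(trial1, reference)  # agreement with the standard
    print(f"{name}: within-appraiser {within:.0%}, vs. reference {vs_ref:.0%}")
```

With more samples the same roll-up applies unchanged; only the lists get longer.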

Maybe for some parts it is useful to track the magnitude of a defect as you described above, but IMHO there is no standard GRR method for planning that kind of study. Usually there are attribute or variable GRRs, but no GRRs dealing with ordinal variables (and please, Covers, correct me if I'm wrong!)

Hope this helps :)

Barbara
 
This is a good question which I'm also wondering about and would like more advice on. Any update from other folks at the Cove? :applause:
 
There are two options for evaluating attribute measurement data. If your data is good/bad, the appropriate technique is kappa analysis. If your data is ordinal (i.e., ranked on a scale from one extreme to the other), use intraclass correlation. These techniques typically use between 20 and 50 samples.

Both techniques are discussed in Quality Progress, May 1995.
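For the good/bad case, kappa can be computed by hand from the two raters' calls. This is a minimal sketch with invented data (not from any real study): observed agreement is corrected by the agreement expected by chance, estimated from each rater's marginal category frequencies.

```python
# Minimal Cohen's kappa for two appraisers' good/bad calls.
# The example data is hypothetical, for illustration only.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["bad", "good", "bad", "good", "good", "bad", "good", "good"]
b = ["bad", "good", "bad", "bad",  "good", "bad", "good", "good"]
print(f"kappa = {cohens_kappa(a, b):.3f}")  # raters disagree on one call
```

Kappa near 1 means strong agreement beyond chance; values near 0 mean the raters agree no better than guessing.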
 

Hi All,
I'm new to MSA and GR&R; I'm currently doing an attribute GR&R for operators at OQA for PCBA inspection.

My study runs with 30 samples and involves 20 operators. The samples were selected from different models, and each sample has 5 to 10 defects. During the study I asked the operators to write down the defects and locations they found on each sample. I have run the first trial and collected the data from the operators.

What I'm confused about now is how to do the analysis. Should my analysis be based on the individual defects on the samples, or only on the sample (PCA board) as a whole?

If it is based on defects and locations, I can compare each operator's findings with the standard and classify them as pass or fail. But at the board level, how do I classify a board as pass or fail if the operator found only some of the defects and locations on that sample?
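One way to frame the defect-vs-board question is to treat each known defect location as its own go/no-go opportunity and then roll the results up to the board. This is only a sketch with hypothetical board and location names, not a standard method:

```python
# Per-defect scoring rolled up to a board-level verdict (hypothetical data).

# master list of true defects per board, keyed by reference designator
standard = {
    "board1": {"R12", "U3", "C7"},
    "board2": {"U1", "J4"},
}
# what one operator reported finding on each board
found = {
    "board1": {"R12", "C7"},   # missed U3
    "board2": {"U1", "J4"},    # found everything
}

for board, true_defects in standard.items():
    hits = true_defects & found[board]
    extras = found[board] - true_defects            # false alarms
    per_defect = len(hits) / len(true_defects)      # defect-level hit rate
    board_pass = hits == true_defects and not extras  # all-or-nothing roll-up
    print(f"{board}: {per_defect:.0%} of defects found, "
          f"{len(extras)} false alarms, board-level pass={board_pass}")
```

The all-or-nothing roll-up is one possible choice; you could instead require, say, 90% of defects found, but whatever threshold you pick should be agreed before the study.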

Can anyone advise? Thanks in advance.

mahuan
 