Designing an attribute MSA for a new piece of metrology equipment

NotoriousAPP

I'm trying to design an attribute MSA for a new piece of metrology equipment used to measure defects in a gold ball deposition process for semiconductor wafer processing. Each deposited gold ball is analyzed against certain criteria; based on the results of the analysis, the ball either passes or fails. The output of the inspection is only bins (i.e. 1, 2, 3, 4, 5, 6, 7), where each bin corresponds to a certain failure mode; however, to reduce complexity I will simply assign any gold ball that's binned a "fail" and any that are not a "pass".

I will have three wafers at my disposal. Each wafer carries 86 die, each die carries 250 gold balls, and every ball is inspected during the inspection step. The equipment uses automated inspection: all loading, alignment, inspection, and binning is done by the equipment automatically.

Will someone please tell me if my proposal for designing and running the attribute MSA is correct:

1) Run automated inspection on one wafer to generate a list of die that the equipment believes to be passing and failing.
2) Select die which have representative defective (failing) gold ball deposition as well as passing gold ball deposition. A 1:1 ratio of good to bad parts is recommended, though I believe this will be highly unlikely; a 1:20 ratio of bad to good is more realistic, and even that is an extreme case. I propose selecting 10 die spread across the entire wafer surface.
3) Have an expert appraiser categorize each gold ball on these 10 die to confirm that passing balls are indeed passing and failing balls should indeed have failed.
4) Run the wafer through the inspection 10 times. The sequence would be: load, align, measure, unload, repeat.
5) Generate a list for each cycle recording whether each unique gold ball passes or fails on each measurement run.
6) Use the methods described at the link below to analyze the data (a rough sketch of those calculations follows the link). In my case I'm assuming there is only one operator (the equipment). Note: this is a first-of-a-kind tool for a process with no process of record; there is no historical data, or data from another tool, to use for comparison. We're using this MSA data to accept or reject the inspection equipment from the supplier.

www.isixsigma.com/tools-templates/measurement-systems-analysis-msa-gage-rr/making-sense-attribute-gage-rr-calculations/
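
For concreteness, here is a minimal sketch of how the per-appraiser numbers from the linked article (individual repeatability, individual effectiveness, miss rate, false alarm rate) could be computed for an automated tool. The data layout, function name, and ball IDs are all hypothetical, and the miss-rate/false-alarm definitions follow the common convention of counting every run on every reference-bad (or reference-good) ball as one opportunity; treat it as an illustration, not the article's exact template.

```python
# Hypothetical layout: per-ball pass/fail calls across repeated runs,
# scored against an expert reference classification.
from typing import Dict, List

def attribute_msa_summary(results: Dict[str, List[str]],
                          reference: Dict[str, str]) -> Dict[str, float]:
    """results: ball_id -> list of 'pass'/'fail' calls, one per run.
    reference: ball_id -> the expert appraiser's 'pass'/'fail' call."""
    n_parts = len(results)
    repeatable = effective = 0
    misses = miss_opps = 0          # expert said 'fail', equipment said 'pass'
    false_alarms = alarm_opps = 0   # expert said 'pass', equipment said 'fail'

    for ball, calls in results.items():
        ref = reference[ball]
        if len(set(calls)) == 1:     # all runs agree with each other
            repeatable += 1
            if calls[0] == ref:      # ...and with the expert reference
                effective += 1
        if ref == 'fail':
            miss_opps += len(calls)
            misses += calls.count('pass')
        else:
            alarm_opps += len(calls)
            false_alarms += calls.count('fail')

    return {
        'repeatability_%': 100 * repeatable / n_parts,
        'effectiveness_%': 100 * effective / n_parts,
        'miss_rate_%': 100 * misses / miss_opps if miss_opps else 0.0,
        'false_alarm_rate_%': 100 * false_alarms / alarm_opps if alarm_opps else 0.0,
    }

# Tiny worked example: two balls, three runs each.
runs = {'ball_01': ['pass', 'pass', 'pass'],
        'ball_02': ['fail', 'pass', 'fail']}
expert = {'ball_01': 'pass', 'ball_02': 'fail'}
print(attribute_msa_summary(runs, expert))
```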


Do my methods seem sound? Any recommendations on how this could be improved?

Individual repeatability (step 5 at the link) and individual effectiveness (step 6 at the link) seem straightforward to calculate since there is only one operator; however, this only provides repeatability. How do I calculate reproducibility with only one piece of equipment (equipment = operator)? Can I use the load/measure/unload sequence for reproducibility, and simply measure the wafer 3 times without the load/unload sequence to gauge repeatability?
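
To make that proposal concrete, here is one way the single-operator data could be split, assuming each measurement record carries a load-cycle ID: agreement among the static repeats within one load would be the repeatability, and agreement of the per-load consensus across load/unload cycles would be the reproducibility. The layout and names are hypothetical, just a sketch of the bookkeeping.

```python
# Hypothetical split of one-operator data: repeatability from static
# repeats within a load, reproducibility across load/unload cycles.
# Layout assumed: calls[load_cycle][ball_id] = list of 'pass'/'fail'
# calls from the static repeats performed during that load.
calls = {
    0: {'ball_01': ['pass', 'pass', 'pass']},
    1: {'ball_01': ['pass', 'fail', 'pass']},
    2: {'ball_01': ['pass', 'pass', 'pass']},
}
ball = 'ball_01'

# Repeatability: do the static repeats within each load agree?
within_load_agree = [len(set(calls[c][ball])) == 1 for c in calls]

# Reproducibility: does each load's majority call agree across loads?
majority = [max(set(calls[c][ball]), key=calls[c][ball].count) for c in calls]
across_loads_agree = len(set(majority)) == 1

print(within_load_agree)   # [True, False, True]
print(across_loads_agree)  # True: every load's majority call is 'pass'
```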

Thanks,
Alex
 
NotoriousAPP

I've summarized my proposed model tree; would someone please have a look at it and let me know if it's correct based on the intent of my MSA as described above.
 

Attachments

  • MSA Model for isixsigma.docx
    19.7 KB
sc00by

A few questions:
- How does your measurement system work: does a single measurement cover only one ball, one die, the whole wafer, or something else? In essence, if the measurement sensor measures only one ball or one die, then you will have enough samples in just one wafer to check repeatability; but if the sensor measures the whole wafer in one go, then 3 measurements will not be sufficient for repeatability.
- I am assuming that even a single bad ball can fail the whole die; can you confirm this or provide some more info?

Your repeatability and reproducibility concept seems correct, but the details will have to be tailored depending on the answer to the first question above.

Your "good to bad" ratio of the samples may affect your analysis but it depends on the answer to the second question.
 
NotoriousAPP

Thanks Sc00by for helping out!

Great question. I'm still not certain; the equipment is new to us, so I'm a little unfamiliar with the details of the inspection, specifically whether it scans one ball at a time, one die at a time, or the entire wafer at a time. I'm at the vendor site tomorrow morning and this will be the first question I ask. Based on what I do know about the scan, I would guess that it scans each die individually. I do know that measurement of an entire wafer takes only a minute, so time is not an issue. Based on your comments regarding repeatability, I will scan each wafer 5 times. Does that seem more reasonable? Do you think it needs to be more than this?

In short, a single bad ball may not cause a die failure. Each ball's failure mode is assigned a bin number indicating that the individual ball has failed. At this time the output of the tool only identifies individual failing balls; it does not identify failing die. We have 3rd party software which we will use to fail die based on the number and type of ball failures. The failure criteria are the following (a sketch of this die-fail logic follows the list):
1) More than one missing ball per side of the die (die has four sides).
2) Ball too large
3) Ball too small
4) Misaligned ball
5) Defect on area where ball is supposed to be deposited
6) Ball shape out of spec
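
To make the ball-versus-die distinction concrete, below is a minimal sketch of what that 3rd-party die-fail step might look like. The bin codes, side labels, and the threshold for the other failure modes are assumptions for illustration; only the "more than one missing ball per side" rule comes from the list above, and the real criteria live in the 3rd-party software.

```python
# Hypothetical die-fail logic combining the per-ball bins listed above.
from collections import Counter

BIN_PASS = 0             # assumed code for an un-binned (passing) ball
BIN_MISSING = 1          # assumed code for a missing ball
OTHER_FAILURE_LIMIT = 2  # assumed: tolerate up to 2 other binned balls

def die_fails(balls):
    """balls: list of (side, bin_code) tuples, side in {'N', 'E', 'S', 'W'}."""
    missing_per_side = Counter(side for side, b in balls if b == BIN_MISSING)
    if any(count > 1 for count in missing_per_side.values()):
        return True  # more than one missing ball on a single side of the die

    # Assumed rule for the other failure modes (size, alignment, defect,
    # shape): fail the die only past a count threshold.
    other = sum(1 for _, b in balls if b not in (BIN_PASS, BIN_MISSING))
    return other > OTHER_FAILURE_LIMIT

# A single missing ball does not fail the die on its own:
print(die_fails([('N', BIN_MISSING), ('E', BIN_PASS)]))     # False
print(die_fails([('N', BIN_MISSING), ('N', BIN_MISSING)]))  # True
```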

I have modified my MSA data collection plan a bit from my previous post. I will select only 3 die on one wafer (since it's difficult to force each specific failure, and manually classifying/validating each failure on a ball is very time intensive). With this model, each die becomes its own MSA.
 
sc00by

If this measurement equipment takes the scan of a whole wafer in one go, then measuring it 5 times would be insufficient; however, since measurement of the wafer takes only one minute, measuring it 50 times (unloading it each time) and using the standard attribute study calculation will be more than enough and will not take ages to perform.
If (as you suspect) this machine measures each die separately, then I would measure one wafer 9 times and, in a similar way, use the standard attribute study; in this case each die is measured as a new sample, so there is no point in measuring the other wafers (as long as failures on the wafers are fairly homogeneous).

In both cases you will still have to make sure that each die fails on the very same balls.

There are a number of approaches you may take. For example, each failure mode on the balls could be a separate MSA analysis; this way you can make sure the measurement system recognises each failure mode in a similar way. For a quick periodic check of the system, you may treat all failure modes as one failure category so that you have just one analysis. From experience I would say that any measurement equipment performs quite well when measuring standard parts, but may not perform that well close to the borderline or outside spec, so in your case selecting a large number of borderline failures would be one of the most difficult studies you could perform.

One more thing to mention: when you run the studies, make sure you have roughly a 1:1 ratio of good to bad parts, or rotate the bad parts more often in order to achieve a 1:1 ratio; otherwise the result will be meaningless. (Try the study calculation and see the difference it makes when you use, e.g., a 1:20 ratio: your miss rate and false alarm rate will not indicate a problem when one occurs, so statistically such a study is flawed.)
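
As a rough illustration of that point, the sketch below runs the same hypothetical detector (20% miss rate, 2% false alarm rate) through a balanced and an unbalanced study. All counts are made up; the takeaway is that the overall agreement score looks near-perfect in the 1:20 case while the miss-rate estimate rests on only a handful of bad parts.

```python
# Illustrative only: a 1:20 bad:good mix can hide a detector that
# misses 20% of bad parts. All counts here are hypothetical.
def study(n_good, n_bad, miss_rate=0.20, false_alarm_rate=0.02):
    missed = round(n_bad * miss_rate)                # bad parts called good
    false_alarms = round(n_good * false_alarm_rate)  # good parts called bad
    agreed = (n_good - false_alarms) + (n_bad - missed)
    total = n_good + n_bad
    print(f"{n_bad:>3} bad / {n_good:>3} good: "
          f"overall agreement {100 * agreed / total:.1f}%, "
          f"miss rate estimated from only {n_bad} bad parts")

study(100, 100)  # balanced 1:1 study: 89.0% agreement, problem visible
study(200, 10)   # unbalanced 1:20 study: 97.1% agreement, problem hidden
```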
 
NotoriousAPP

We created the MSA die this morning. My intention was to force each type of failure mode so that we have both strong failures and failures which are close to the borderline pass/fail: essentially the entire spectrum of failure modes and the severity of each failure mode. I also tried to make the ratio of good to bad solder bumps 1:1; I think in the worst case it will be 2:1 (good:bad). We were unable to make each of the three MSA die exactly the same. Because of this, I will need to treat each die as its own MSA. Do you see any issues with this?

You really think running the repeat runs 5 times will not be enough? Can I get away with 10 runs?

I planned on treating any failure mode as a "fail" and any non-failing die as a "pass". I will have the bin classification for each solder bump, so I can analyze the MSA both ways.

We're working through the manual classification of each defect this morning to generate the reference (expert assessment) of each solder bump for the precision part of the MSA.

I'll keep this updated as we work through the MSA.
 
sc00by

To be honest, you don't have to have any die identical; there is no need for that.
There is nothing wrong with having an MSA for each die, but you have to take into account that completely different results may come out of each of them. Have you considered that case?
In such a situation I would take the worst-case-scenario approach.

A 1:1 ratio is the ideal one. Here is how we have worked around a different ratio: having 25 good parts and 15 bad ones, we measured 10 of the bad ones twice to make up the 1:1 ratio. Logically this makes perfect sense as long as you keep a log of the parts order.

If the measurement is done over the whole wafer in one go, then 5 or 10 times will be insufficient. The reason is that your measurement equipment may perform differently in each area of the scanning field (e.g. differently close to the edge than at the centre; this happens especially with optical methods, where lens imperfections/effects differ between the centre and the edge of the field).

If the measurement equipment measures each die separately, then you will be fine with just 9 measurements (so you can use the form for 3 appraisers, 3 repetitions, 50 samples).
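
If it helps to see the bookkeeping, here is one hypothetical way the 9 wafer runs could be folded into that standard 3-appraiser, 3-trial layout, with each block of three consecutive runs standing in for one pseudo-appraiser. Names and data are illustrative only.

```python
# Fold 9 automated runs into the standard 3 "appraisers" x 3 trials
# attribute-study layout; each block of three runs is one pseudo-appraiser.
# runs[r][ball_id] = 'pass' or 'fail' for run r (9 runs total, dummy data).
runs = [{'ball_01': 'pass', 'ball_02': 'fail'} for _ in range(9)]

study = {}  # ball_id -> {'A': [t1, t2, t3], 'B': [...], 'C': [...]}
for r, result in enumerate(runs):
    appraiser = 'ABC'[r // 3]  # runs 0-2 -> A, 3-5 -> B, 6-8 -> C
    for ball, call in result.items():
        study.setdefault(ball, {}).setdefault(appraiser, []).append(call)

print(study['ball_01'])  # {'A': ['pass', 'pass', 'pass'], 'B': ..., 'C': ...}
```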

As you are working hard to get manually checked data out of these wafers, I would use them as your reference set and keep it for the lifetime of this measurement machine.
 
NotoriousAPP

I have some additional questions regarding interpreting the MSA and the results:

Do you agree with my conceptual assessment of each of the attribute MSA's outputs:
1) Reproducibility: a comparison of the individual loads and runs between the three days of the MSA.
2) Repeatability: a comparison of the repeated runs within each load (static repeats).
3) Precision: a comparison of the measurement system's defect classification results with the expert appraiser's results.

...am I missing an output metric for the MSA?

For Reproducibility, Repeatability and Precision, what would you consider passing or good for automated inspection equipment?
 
sc00by

1) Reproducibility: a comparison of the individual loads and runs between the three days of the MSA.
- AGREE

2) Repeatability: a comparison of the repeated runs within each load (static repeats).
- AGREE

3) Precision: a comparison of the measurement system's defect classification results with the expert appraiser's results.
- AGREE


...am I missing an output metric for the MSA?
- You should assess at least two things: Miss Rate and False Alarm Rate. Obviously the Miss Rate is far more critical, as you don't want to pass parts that are bad, but you also don't want too many parts rejected that are in fact good (I suppose your rework/re-measure loop is a manual inspection, so you'd like to minimise such cases).

For Reproducibility, Repeatability and Precision, what would you consider passing or good for automated inspection equipment?
- The MSA Manual does not fix you to a specific limit; it does state, however, that the limits should suit the product specifics.
For example, at our site we have two areas. One is an intermediate inspection on all components, with limits of 2% on Miss Rate and 5% on False Alarm Rate; the other area has the stricter rules of 0% Miss Rate and 2% False Alarm Rate. The first gate allows one failed part to pass and three good parts to be wrongly failed; the second gate does not allow any failed parts through and only one part to be wrongly failed.
The reason is that the first inspection gate picks up the majority of failures but is not critical (there are further operations and checks), whereas the final inspection drives a customer requirement of only a few failed ppm per year.
If you compare these limits to the MSA Manual example, you'll see that we use tougher limits; by the same logic, a product with much lower requirements would get relaxed limits.
In a similar way, you should apply limits to your inspection so that you meet the product quality requirements but don't spend time/money on excessive inspection.
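
As a small sketch of how such site-specific limits could be encoded and checked (the gate names and limit values below simply mirror sc00by's two examples; your own limits should come from your product's quality requirements):

```python
# Encode per-gate acceptance limits and check a study result against them.
# Gate names and limit values mirror the examples above; adjust to your product.
GATES = {
    'intermediate': {'miss_rate': 2.0, 'false_alarm_rate': 5.0},
    'final':        {'miss_rate': 0.0, 'false_alarm_rate': 2.0},
}

def accept(gate, miss_rate, false_alarm_rate):
    limits = GATES[gate]
    return (miss_rate <= limits['miss_rate']
            and false_alarm_rate <= limits['false_alarm_rate'])

print(accept('intermediate', 1.5, 4.0))  # True: within both limits
print(accept('final', 1.5, 4.0))         # False: the final gate allows no misses
```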

Hope this helps.

Regards,
sc00by
 