Determining MSA Bias in Automatic Test Equipment for Electronics Modules


Stuart Woodward


I have read through these forums and note that most of the situations discussed for MSA relate to mechanical applications.

Does anyone have any experience with automated test equipment used to measure electronic characteristics (i.e. voltage, current, frequency)?

My specific concerns relate to bias.

To determine bias, ideally you require a reference sample that can be measured both independently and on the test equipment. The difference between the two measurements would be the bias.
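The arithmetic is simple enough to sketch. A minimal Python example of the bias calculation described above, with hypothetical readings of a reference sample whose value is known from an independent measurement:

```python
# Hypothetical figures: reference_value comes from the independent measurement,
# ate_readings are repeated measurements of the same sample on the tester.
reference_value = 5.000                              # volts (independent)
ate_readings = [5.012, 5.009, 5.011, 5.010, 5.008]   # volts (ATE)

# Bias is the difference between the average observed value and the reference.
mean_reading = sum(ate_readings) / len(ate_readings)
bias = mean_reading - reference_value
print(f"bias = {bias:+.4f} V")   # averaging separates bias from repeatability
```

Taking several readings and averaging them, as above, is what separates the systematic offset (bias) from random repeatability scatter.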


The test equipment is made up of a number of items:

-Measuring instruments
-Wiring looms/Spring pins/Switching units/Loads
(Used to interface the Device Under Test (DUT) via its connector to the measuring instrument)
-Test control/sequencing system
(Used to sequence the test routine, control the measuring instruments, adjust timing, and provide the comms interface to the DUT to control its functions)


The DUT has multiple (perhaps up to 100) analog measurements that must be assessed.


If we use a reference sample we are not assessing the system as it will be used in production, because:
-We need to provide an artificial means to interface the unit to the tester, which can introduce errors.
-We need to modify the software to allow the characteristic to be measured.
-We only test one path/function of many.

Our bias could end up small, but is this measurement meaningful?

An alternative is to use a production unit.

The difficulty here is measuring the characteristics independently. If you are not using exactly the same equipment (generally the case), then of course you will have differences in measurements. Which one is correct? If you assume the independent measurement is correct and now calculate the bias, how do you determine what is acceptable? The QS9000 manual is very vague on numerical acceptance criteria for bias.
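On acceptance criteria: one common approach from MSA bias studies is to test whether the observed average bias is statistically distinguishable from zero, i.e. whether the confidence interval for the bias contains zero. A sketch in Python with hypothetical readings (the critical t value is the standard two-sided 95% value for 9 degrees of freedom):

```python
import statistics

reference_value = 10.000   # hypothetical independently measured reference, volts
readings = [10.02, 9.98, 10.01, 10.03, 9.99, 10.00, 10.02, 10.01, 9.97, 10.00]

n = len(readings)
bias = statistics.mean(readings) - reference_value
std_err = statistics.stdev(readings) / n ** 0.5   # standard error of the mean
t_stat = bias / std_err

T_CRIT_95_DF9 = 2.262   # two-sided Student-t critical value, df = n - 1 = 9
significant = abs(t_stat) > T_CRIT_95_DF9
print(f"bias={bias:+.4f} V, t={t_stat:.2f}, significant={significant}")
```

If the bias is not statistically significant, the study cannot distinguish it from zero at that confidence level; whether a statistically significant bias is *practically* acceptable still has to be judged against the tolerance of the characteristic.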

Must every step be evaluated in the same way?

I do believe that bias is an important characteristic to measure. You can have a highly repeatable measurement, but if you are not measuring the correct value, then with the wrong tolerance stack-up you will end up sending faulty parts to the customer.

If anyone has any comments / suggestions that are relevant to my situation it would be greatly appreciated.


Jerry Eldred

Forum Moderator
Super Moderator
That is a fairly open-ended question. It is typical to use a "Golden Unit" in the semiconductor industry. We have a unit that was independently verified to have certain specific characteristics, stable enough to be valid, and independently measured on a non-automated system with adequate measurement uncertainty to validly assess the performance of the automated measurement system.

The amount of bias in the Golden Unit is a function of the uncertainty of the independent measurement.

One of the other variables in this is where to make this assessment in the production process. I have seen very intelligent debates as to what point in the process to make the validation. If the process involves steps A, B, C, D, and E to manufacture an item, what matters most is that after step E the unit meets all of its specified parameters to some acceptable confidence level. All measurements made after steps A through D are confidence builders that units will successfully pass E.

Going back to the original question: if you measure the DUT externally with adequate uncertainty and then remeasure it on the production equipment, then as long as you have appropriate confidence in the data from the external measurement, and you use a production unit that can be run through the automated test systems, the measurement is not artificial even though it is a Golden Unit.

It becomes artificial when you disconnect fixturing, and inject a 'synthesized' signal to substitute for the DUT. As long as it is a real DUT (albeit a Golden Unit) that is measured on the same fixturing, and with the same methods and modes, etc., as used in production, it seems it is a valid measurement.

I think one of the other issues is in the Control Plan. This comes back to my prior comment about intelligent debates. There must be intelligent thought as to what DUT characteristics are or are not included in the Control Plan. I won't render any opinions as to what should or should not be in the Control Plan. However, I have seen a lot of headache over items in the control plan that couldn't be appropriately measured or bias determined, when in reality, it may have been at the wrong point in the process where the critical measurement was made. I am not at all trying to convince anyone to remove anything from any control plan, but to use good, sound judgment as to what is a critical measurement.

One other thing I have seen is the difference between measurement and control, which is an interestingly vague area. In process step A, I control my inputs with good calibrations (and perhaps some SPC), then I measure the output. The output measurement typically (in my industry) involves calibration, MSA, control charts, etc. The input side consists of good maintenance and calibration practices. There is a definite line between those two.

I have seen some who do their MSA's on the inputs rather than the outputs (sometimes that is necessary).

Coming back from my digressions again... I have fought the same battle regarding bias with engineers. My view on bias is that no matter how well controlled a process appears to be in terms of Cpk and the various stability, repeatability, and reproducibility factors, without an adequately accurate understanding of the amount of bias, those numbers are somewhat bogus. NIST was formed as the bias police (in a manner of speaking).

Enough rambling. Don't know how useful my input was. Hopefully of some value.

Ryan Wilde

I agree with Jerry that the best route is a "Golden Unit". The key is in the level of characterization that you can perform on the Golden Unit. I have seen and written software specifically for this purpose, which takes into account the actual bias of the golden unit (which was manually calibrated with very high accuracy, fully characterized equipment - not using the test station apparatus). The actual values of the golden unit become calibration constants, and your test station is checked versus those constants rather than nominal values. That will show you the bias of your SYSTEM.
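The calibration-constants idea can be sketched as follows; the parameter names and values here are hypothetical, but the point is that the system bias is computed against the golden unit's characterized values, not its nominal design values:

```python
# Hypothetical characterized parameters of a golden unit.
nominal       = {"vout_5v": 5.000,  "iload_ma": 100.0}   # design values
golden_actual = {"vout_5v": 5.0035, "iload_ma": 99.82}   # manual high-accuracy cal
ate_readings  = {"vout_5v": 5.0051, "iload_ma": 99.90}   # measured on the station

for name in nominal:
    system_bias = ate_readings[name] - golden_actual[name]  # true system bias
    naive_bias = ate_readings[name] - nominal[name]         # misleading if the
    print(f"{name}: system {system_bias:+.4f}, naive {naive_bias:+.4f}")
    # ...golden unit itself sits off-nominal
```

Checking against `golden_actual` removes the golden unit's own offset from the result, so what remains is the bias of the test station itself.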

One other note: if the unit you are producing is adjustable (has variable resistors, capacitors, inductors, etc.), adjust the Golden Unit to what you want it to be (probably close to nominal) and measure the values of the variable components. REPLACE those components with thermally stable, high-precision fixed components of the same values. The stability of your Golden Unit will improve dramatically, and the fixed components also allow you to chart its drift. Adjustment of reference standards is an evil thing in most cases. If I know that my standard drifts -2 PPM per month, I know within a very tight band what its value is today, tomorrow, and next week.
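The payoff of a predictable drift rate can be shown with a small sketch, assuming the -2 PPM/month figure from the post and a hypothetical 10 V standard:

```python
# Hypothetical standard: 10.000000 V at calibration, drifting -2 ppm/month.
nominal_v = 10.000000
drift_ppm_per_month = -2.0

def predicted_value(months_since_cal: float) -> float:
    """Predicted value of the standard at a given time since calibration."""
    return nominal_v * (1 + drift_ppm_per_month * 1e-6 * months_since_cal)

for m in (0, 1, 6, 12):
    print(f"month {m:2d}: {predicted_value(m):.6f} V")
```

Because the drift is linear and charted rather than adjusted away, the standard's value at any date is known to a tight band without touching it.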

Hope I am not just confusing the issue more.


JStain - 2008

Golden std w/ATE

Not to hijack the thread, but:

I have 18 ATEs, and let's say I can bless a "Golden" end item. What would my next step be to ascertain my variation from one ATE to another?
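One way to start is to measure the same golden end item on every tester and compare the per-tester biases. A minimal sketch with hypothetical values (only six of the 18 testers shown, one averaged reading each; in practice you would take several readings per tester):

```python
# Hypothetical golden-unit value and per-ATE averaged readings, volts.
golden_value = 3.300
ate_readings = {
    f"ATE{n:02d}": v
    for n, v in enumerate([3.302, 3.299, 3.301, 3.303, 3.300, 3.298], start=1)
}

# Per-tester bias against the golden value, and the tester-to-tester spread.
biases = {name: r - golden_value for name, r in ate_readings.items()}
spread = max(biases.values()) - min(biases.values())
for name, b in biases.items():
    print(f"{name}: bias {b * 1000:+.1f} mV")
print(f"worst-case tester-to-tester spread: {spread * 1000:.1f} mV")
```

Charting each tester's bias over time on the same golden unit also doubles as a stability check on the whole fleet.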



I know this thread is pretty old, but I have run into a similar situation to the thread starter's right now, and I hope someone has feedback on my thinking below.

I come from a Test Engineering background, so please correct me if I have a misconception here related to MSA.

The MSA solution for ATE that I have in mind is to produce a simulated source of the parameter from the ATE itself and, at the same time, measure this source with a NIST-traceable external measuring instrument such as a DMM (Digital Multi-Meter). After this step we re-route the source back to the ATE's measuring instrument and collect the data. There is a technique in electronic measurement (the Kelvin, or force-sense, connection) to remove error due to the connection/routing.

With this approach we can avoid using any unit or apparatus, and with a simple test program we can automate the data collection.

The simulated source is fully programmable, so we can generate any value within the operating range, simulate the typical reading of the DUT, etc., which makes it convenient for linearity and bias studies.
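A linearity study with such a programmable source amounts to regressing the ATE readings against the reference (DMM) readings across the range. A sketch with hypothetical data, using an ordinary least-squares fit:

```python
# Hypothetical paired readings over the operating range, volts.
reference = [1.0, 2.0, 3.0, 4.0, 5.0]              # DMM readings of the source
ate       = [1.001, 2.003, 3.004, 4.006, 5.007]    # ATE readings of the same source

# Ordinary least-squares fit: ate ≈ slope * reference + intercept.
n = len(reference)
mean_x = sum(reference) / n
mean_y = sum(ate) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(reference, ate))
         / sum((x - mean_x) ** 2 for x in reference))
intercept = mean_y - slope * mean_x

# slope near 1 means good gain linearity; intercept is the offset (bias) term.
print(f"slope = {slope:.5f}, intercept = {intercept:+.5f}")
```

A slope away from 1 indicates a gain error that grows across the range, which a single-point bias check would miss.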

Please let me know if this approach is still valid as far as MSA requirements are concerned. Thanks!

Bev D

Heretical Statistician
Super Moderator
What you describe is closer to calibration than MSA.

There is a reason that Measurement System Analysis involves testing of the actual units and not standards. MSA includes the assessment of how the tester interacts with the unit, both physically and electronically. This includes how the unit is positioned in the fixture and how the probes/cables or other interconnections connect to the unit.

This raises the question: why wouldn't you perform an MSA with the units?


Hi Bev,
This is only for practical reasons. As described in the posts above, we often have difficulty determining a reference for bias and linearity studies. We also try to avoid maintaining "Golden Units" if possible.