# Calibration Uncertainty Philosophy - Reality vs. Basic Theory (& Guardbanding)

#### Marc

##### Fully vaccinated are you?
Date: Sat, 6 Nov 1999 11:55:08 -0500
From: "James D. Jenkins"
To: Greg Gogates
Subject: RE: Calibration Uncertainty Philosophy

Ron,

To begin with, let's establish some common understanding. Measurement Uncertainty: the uncertainty of the measurement data relative to "true value". In calibration and testing this means the combined uncertainty of the traceable reference equipment, including the uncertainty attributed to the applied process, environment, etc. It also includes the contributory traits of the subject device, such as its random error contribution and display precision (as applicable), but does not include the subject unit's specifications.

Device Uncertainty: typically used to describe the uncertainty of a measurement device, expressed either as a containment value such as a specification, or as a certified value with an applicable uncertainty. To be realistic, both of these types must consider uncertainty growth over time since measurement and have some appropriate containment value. Typically, when using a certified value, we perform a regression on the changes in value over time and predict the value for a given time since measurement based on a calculated drift slope. It should be obvious that this process also adds uncertainty derived from the prediction (such as the uncertainty of the data and of its fit to the best-fit straight line).
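The drift-slope prediction described above can be sketched as an ordinary least-squares fit plus the standard prediction-uncertainty term. This is a minimal illustration, not any particular lab's procedure, and all the numbers (a 10 V reference measured yearly) are invented:

```python
import math

def drift_fit(times, values):
    """Ordinary least-squares fit value = a + b*t; returns (a, b)."""
    n = len(times)
    t_bar = sum(times) / n
    v_bar = sum(values) / n
    sxx = sum((t - t_bar) ** 2 for t in times)
    sxy = sum((t - t_bar) * (v - v_bar) for t, v in zip(times, values))
    b = sxy / sxx          # drift slope (units per day)
    a = v_bar - b * t_bar  # intercept
    return a, b

def predict(times, values, t_new):
    """Predicted value at t_new, plus the standard uncertainty that the
    prediction itself introduces (scatter about the fit line, inflated
    as we extrapolate away from the data)."""
    n = len(times)
    a, b = drift_fit(times, values)
    resid = [v - (a + b * t) for t, v in zip(times, values)]
    s = math.sqrt(sum(r ** 2 for r in resid) / (n - 2))  # residual std dev
    t_bar = sum(times) / n
    sxx = sum((t - t_bar) ** 2 for t in times)
    u_pred = s * math.sqrt(1.0 / n + (t_new - t_bar) ** 2 / sxx)
    return a + b * t_new, u_pred

# Invented example: a certified 10 V reference measured yearly (days, volts)
times = [0, 365, 730, 1095]
values = [10.000002, 10.000005, 10.000009, 10.000012]
v_hat, u = predict(times, values, 1460)  # predict one year past the last point
```

Note that `u_pred` grows as `t_new` moves past the measurement history, which is exactly the "uncertainty growth over time since measurement" the post refers to.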

Adjustment of the "subject unit" (the device being calibrated) has absolutely NO effect on the measurement uncertainty of the data taken during the calibration (before and after adjustment). It does, however, have a definite effect on improving the "in-tolerance probability" of the subject unit's performance relative to its "error containment" (specifications) over time (the calibration interval).

Type "A" assessments of measurement standards are based upon certified values over time, and I agree: you should request that these devices NOT be adjusted, as that might disturb the stability of the drift slope. It is rare, though, for the typical user of calibrated equipment to use equipment in this certified-value manner; this is usually reserved for passive or otherwise extremely stable reference standards.

Most customers of calibration and users of measurement equipment desire that their equipment perform nominally within specifications. They do not reference the measured value; rather, they assume error containment by the specifications relative to displayed values, and they do not correct the displayed value by some known bias. For example, when a garage mechanic measures a battery voltage as 13.6 VDC, it is assumed that the voltage is 13.6 +/- the device's accuracy specification. For this application to work reliably, a high in-tolerance probability is desired; hence the in-tolerance adjustment is also desirable, as it can increase this in-tolerance probability. When computing measurement uncertainty using devices for which we quote the specifications as error containment, we express this in-tolerance probability as the "Confidence Level" that the specifications adequately contain the error (Type B).
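As a quick illustration of the confidence-level idea above: if a device's error is assumed normally distributed and centered, the probability that spec limits of +/-T contain the error is the normal probability mass between -T and +T. The distribution assumption and the numbers here are invented for illustration only:

```python
import math

def in_tolerance_probability(T, sigma):
    """P(-T < error < +T) for error ~ N(0, sigma^2), via the error function."""
    return math.erf(T / (sigma * math.sqrt(2.0)))

# Invented example: spec of +/-0.1 V with an assumed sigma of 0.05 V
# (i.e. the spec sits at 2 sigma) -> roughly 95% confidence of containment.
p = in_tolerance_probability(0.1, 0.05)
```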

And you are correct in recognizing that a laboratory's Best Measurement Uncertainties, as defined in its "Scope of Accreditation", do not apply directly to all data taken in subsequent measurements of varying subject units. This "Best Measurement Uncertainty" is useful in assessing a laboratory's ability to make certain measurements and does NOT mean that all measurements made by that laboratory will have this uncertainty. What you desire is a "Specific Measurement Uncertainty", which is based on the interaction of your specific subject unit with the laboratory's equipment and the applied measurement process. To be in compliance with ISO Guide 25, a laboratory must be able to provide this specific measurement uncertainty upon request. But be prepared for applicable charges, as this process can be time consuming: it includes a characterization of the specific subject unit.

(NOTE: the term "Specific Measurement Uncertainty" is used only to distinguish between various applications of measurement uncertainty; it is not found as such in ISO Guide 25 or the GUM. We have found three different types of analyses being reported by laboratories: Best, Specific, and Generic (or Typical). While we realize that the "Specific" is the most accurate and costly, the others can be beneficial and have value when taken in the context of their origin and definition. Since all three are currently being reported by labs, it is our opinion that when something other than a "Specific" is reported, this should be made clear to the recipient.)

In our classes we cover the various types of analyses, the applications of measurement uncertainty, and the concepts of the uncertainty growth principle. We teach how to develop a laboratory's "Scope" and how to compute a "Specific Measurement Uncertainty" for its customers, when required. We also cover the application of an economical "Generic or Typical" measurement uncertainty, and when it is advisable to report a "Best Measurement Uncertainty" to identify the lab's limits of measurement accuracy.

Once one realizes the intended application of the measurement uncertainty, i.e. risk assessment, the objective of the analysis, and hence the proper type required, becomes quite clear.

Sincerely,
James D Jenkins

#### Jerry Eldred

##### Forum Moderator
I fully agree with everything in the post about measurement uncertainty. However, there are numerous applications where adjusting as close to nominal as possible (to minimize error/bias) is NOT desired or appropriate.
1. Quartz oscillators: by adjusting to nominal, you cut in half (in some circumstances) the probability that the instrument will remain in tolerance for a calibration interval. Oscillators all have aging rates and thus will normally show a predictable, uni-directional drift. If you set to nominal, the oscillator will drift from nominal toward the upper or lower tolerance limit. It is more desirable to adjust to a calculated point offset from nominal, in the direction opposite the drift, by the calculated amount that best assures the oscillator will hit nominal at the halfway point of the calibration interval and remain in tolerance throughout the interval.

2. Measurement tools in an SPC manufacturing environment used to test quantitative, critical product parameters. The exception in this case MIGHT be an instance where the accuracy/stability of the measurement tool is so greatly in excess of the control limit spread that adjustment would not significantly impact the data history of the product. Even in that instance, I would not recommend it.

In other instances, I believe it depends on the measurement philosophy of the particular calibration program and the amount of drift from nominal. Many calibration programs have an adjust/don't-adjust threshold (for example, 70% of tolerance). In cases of small drift, it may be more desirable to gather longer-term plots/data of the drift. If the drift is bi-directional instability, adjusting may not accomplish anything (and in some cases may further destabilize the unit). And where the drift is a relatively small percentage of tolerance, there may be value in not adjusting, for purposes of establishing historical data.

So I believe the decision to adjust is not always clear cut.
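The oscillator strategy in point 1 can be sketched in a few lines: for a roughly linear aging rate, set the initial offset opposite the drift so the unit crosses nominal at mid-interval. This is a simplified illustration under a linear-drift assumption; the function name and numbers are invented:

```python
def oscillator_set_point(aging_rate_per_day, interval_days):
    """Initial fractional frequency offset that puts a linearly-aging
    oscillator at nominal at the halfway point of its calibration
    interval. The sign is opposite the drift direction."""
    return -aging_rate_per_day * interval_days / 2.0

# Invented example: oscillator aging +5e-10/day (fractional), 365-day interval.
# Set it low by ~9.1e-8 so it drifts up through nominal at ~6 months.
offset = oscillator_set_point(5e-10, 365)
```

With this set point the worst-case excursion from nominal over the interval is halved relative to setting the oscillator dead-on at calibration time.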

#### Marc

##### Fully vaccinated are you?
Any other examples of where the 'normal' calibration and uncertainty concepts don't strictly apply?

#### Govind

##### Super Moderator
James,
Thanks for the detailed information on measurement uncertainty. Can you explain the "guardbanding procedure"? Since measurement uncertainty is one of the components of guardbanding, I find this posting relevant here.

There is some mention of guardbanding in the Agilent Metrology Forum. But in telecom devices, critical characteristics have to be guardbanded against measurement error, product aging, variations due to environmental changes, etc.

Combining these values by the RSS method and applying k = 2 takes away a huge chunk of the tolerance, resulting in low yield. I am worried that we are being too conservative and increasing the alpha risk.
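The RSS-with-k=2 computation described here can be written out directly: combine the component standard uncertainties in quadrature, expand by k, and pull the test limits in by that amount. A minimal sketch with invented component values, to make the "huge chunk of the tolerance" concrete:

```python
import math

def guardbanded_limits(lower, upper, components, k=2.0):
    """Shrink the tolerance band (lower, upper) by the k-expanded RSS of
    the listed standard-uncertainty components."""
    u_c = math.sqrt(sum(u ** 2 for u in components))  # combined standard unc.
    gb = k * u_c                                      # expanded guardband
    return lower + gb, upper - gb

# Invented example: tolerance of +/-1.0 with measurement, aging and
# environmental components of 0.10, 0.05 and 0.08 (same units).
lo, hi = guardbanded_limits(-1.0, 1.0, [0.10, 0.05, 0.08])
# The acceptance band shrinks to roughly +/-0.725: about 27% of each
# side of the tolerance is consumed, which is the yield cost Govind notes.
```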

Please let me know if any authentic source is available with detailed methodology and formula.
Govind.


#### Ryan Wilde

guardbanding

I've seen no fewer than 4 methods of guardbanding in practice. David Deaver of Fluke wrote the white paper in the link below, which actually lists 6 methods. Each has pros and cons, but basically you are merely trying to control risk. Pick your poison: figure out which method works best for you, document that it is the method your company will employ, and use it.

Ryan


#### Govind

##### Super Moderator
Super Moderator
Ryan Wilde said:
I've seen no fewer than 4 methods of guardbanding in practice. David Deaver of Fluke wrote the white paper in the link below, which actually lists 6 methods. Each has pros and cons, but basically you are merely trying to control risk. Pick your poison: figure out which method works best for you, document that it is the method your company will employ, and use it.

Ryan
In the past I have searched the web a lot on this subject. For some reason, the page you mentioned did not come up in my Google search! I guess my keywords were different.

Thanks, Ryan, for providing the information. I found the following site in my previous search, in case it is of any use to your work:

https://www.isgmax.com/acr_screens/ar_guardband_worksheet.htm
https://www.isgmax.com/products_l.htm

These pages have some freeware and good white papers on measurement uncertainty, guardbanding, etc. I was reviewing this material for a potential project.
Now I will extend my review to the one you gave also.

Regards,
Govind.


#### Ryan Wilde

Marc said:
Any other examples of where the 'normal' calibration and uncertainty concepts don't strictly apply?

Try uncertainty on a communications analyzer. Its output has to meet a mask, and it has no true "calibrated value", only that it fits the mask of what the waveform should look like. Then you loop the known-good signal back into the receiver input of the analyzer, and it must read correctly. It is pass/fail, with no quantitative value. The uncertainty, I guess, would be 2% of "maybe".

In other words, there are no uncertainties assignable to non-quantitative values, such as pass/fail (as in present/not present). Do not confuse this with go/no-go attribute gauges, in which both ends of a quantitative value are checked.

Another example of non-normal calibration is DC voltage references. If you have them and opt to set the 10 VDC output back to nominal, you have probably just thrown out years of drift and stability data. If I were to readjust my references to nominal, my DC voltage uncertainty would leap from the present 0.5 ppm to 7 ppm with the turn of a trimmer. That, and I'd lose my job.

There are lots of exceptions like this, but basically common sense dictates.

Ryan


#### Graeme

Intrinsic standards

Marc said:
Any other examples of where the 'normal' calibration and uncertainty concepts don't strictly apply?
Some intrinsic standards ... such as ones used in thermometry.
• Ice point standard: when made and used according to the proper procedure (ASTM E563-02, for example) this has a defined value (273.15 K) and uncertainty.
• Fixed-point standards used in thermometry: the triple points of oxygen, water and several other substances; the freezing points of silver, tin and several other elements; and the melting point of gallium. ITS-90 defines the values for these points, and the characteristics of the standard platinum resistance thermometer used for interpolating between them. Measurement uncertainty at a skilled standards lab can be 0.01 K or less. Additional uncertainty in calibration measurements is due to the thermometer probe and measuring system being calibrated.
Note that the thermometry fixed points and other intrinsic standards have "assigned" values. They are not calibrated per se, but are checked and maintained by a system of interlaboratory comparisons. ("Other" intrinsic standards include things like caesium beam frequency standard, Josephson junction array, quantum Hall effect apparatus, stabilized He-Ne laser and so on.)