Thoughts on Traceability and Measurement Uncertainty (MU)

Marc

Date: Wed, 17 Oct 2001 11:13:26 -0700
From: "Dr. Howard Castrup"
To: Greg Gogates
Subject: Traceability/Uncertainty

Mike, Steve, et al.,

In Traceability/Uncertainty RE08, Steve Ellison presents what I believe to be a cogent argument for a definition of traceability that is not tied to uncertainty propagation through a traceability chain. The following definition, quoted by Mehul Joshi in this thread, contains at least the elements of what should be preserved in discussing traceability:

Traceability: "The property of a result of measurement whereby it can be related to appropriate standards, generally international or national standards, through an unbroken chain of comparisons."

Note that this definition does not include any mention of an accompanying uncertainty propagation analysis. Admittedly, requiring that parameters be calibrated by references that have themselves been calibrated by higher-level references in an unbroken chain traceable to a national or international standard or set of standards does not, in itself, control measurement uncertainty. However, such a requirement still has value. For one thing, the analysis of uncertainty propagation through a test and calibration support hierarchy requires that the chain exist in the first place. Moreover, by leaving uncertainty propagation analysis out of the definition, we avoid several practical difficulties. These difficulties can be addressed by a separate uncertainty propagation analysis requirement.

In addition to traceability, other elements to be considered in uncertainty propagation analysis include the factors mentioned by Mike Ouellette in Traceability/Uncertainty RE03. These include location, environment, and time since calibration. The last factor is especially relevant to this discussion. As Mike and others have suggested or implied, the uncertainty of a calibrated parameter grows with time elapsed since calibration. With this in mind, it is important to note that the uncertainty statement that accompanies a calibrated parameter is nearly always just an estimate of the uncertainty of the calibration process -- not an estimate of the uncertainty in the reported parameter value at the time of use and under usage conditions.

I mention this as a lead-in to a comment that was made during an uncertainty analysis panel session at the 2001 International Dimensional Workshop. Responding to a question from the audience, one of the panelists, presumably in an effort to be consistent with the VIM, stated that, if the uncertainty during use is not the same as the stated uncertainty on the report of calibration, then the traceability of the calibrated parameter is lost.

Experience in the field of calibration interval analysis, replete with instances of observed changes in parameter value over time and/or reductions in in-tolerance probability as a function of time, argues that the uncertainty of a calibrated parameter begins to grow immediately following calibration and continues to grow during use. Given this, one might suspect that the reason ISO/IEC 17025 discourages projections of uncertainty over time on cal certs is that acknowledging the existence of uncertainty growth throws the VIM definition of traceability into controversy.
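The growth Dr. Castrup describes can be sketched numerically. The following is a minimal illustration, not from the thread, assuming a simple model in which drift combines in quadrature with the uncertainty reported on the cal cert; the drift rate and numbers are invented:

```python
import math

def projected_uncertainty(u_cal, drift_rate, t):
    """Standard uncertainty projected t time units after calibration,
    assuming (illustrative model only) that accumulated drift combines
    in quadrature with the uncertainty reported on the cal cert."""
    return math.sqrt(u_cal ** 2 + (drift_rate * t) ** 2)

# u_cal = 0.5 units on the cert, assumed drift rate = 0.1 units/month:
# at 12 months the usable uncertainty is 1.3 units, not the 0.5 reported.
u_12 = projected_uncertainty(0.5, 0.1, 12)
```

The point of the sketch is only that the reported calibration uncertainty is the floor, valid at t = 0, and the working uncertainty grows from there.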

To see where this supposition derives from, I return to a comment made by Mike Ouellette: "... if the reported measurement is not accompanied by a (meaningful) uncertainty in that measurement, then THERE IS NO TRACEABILITY as defined by the VIM." Since a report of the measurement process uncertainty of a calibrating entity is hardly a meaningful uncertainty estimate for a parameter under conditions of use and time since calibration, by recognizing uncertainty growth, we either invalidate the VIM definition or conclude that we never have traceability.

So, Steve's objection to letting definitions dictate debate is needed, as we are apparently dealing with an example of the tail wagging the dog. The easiest solution, it seems to me, is to keep the concept of an unbroken chain of calibrations leading to a national or international standard separate from, but necessary to, the concept of uncertainty propagation through the chain. The former provides "error" control by fostering agreement between standards and equipment parameters, while the latter provides a means of assessing the quality of this control and also the quality of measurements made under conditions of use.

Dr. Howard Castrup
President, Integrated Sciences Group
 

Jerry Eldred

Forum Moderator
Fascinating, well put, and still fundamental. I couldn't possibly add enough to that to spit at.

I am involved in evaluation of some 'stuff' produced by a metrology person at a site across the globe. I reviewed a treatise the person wrote replete with calculus formulas that gave me a headache, but which nevertheless missed a very basic point in calibration.

Calibration exists for the sole (fundamental) purpose of getting a handle on measurement uncertainty. In some ways it is nothing more, and nothing less. But in this person's lengthy treatise (which I'm not free to share here), not a word was mentioned about the measurement uncertainty of their standards. They went on at length to discuss how, according to readings taken with their "Standard", they observed short-term and long-term drift. And borrowing from methodology used in MSAs, they defined a continuous monitoring method of adjusting calibration intervals. Nowhere was traceability or uncertainty mentioned.

Regarding the article above, I fully agree. Calibration, and uncertainty management are a big club whose members take their little standard thingies, and see how well all their measuring stuff lines up with them. Then, every once in a while, we send our standard thingies to a senior club, where they compare our standards with the nicer ones they have. Their standards are a lot nicer than ours. They in turn (and quite invisible to us), send theirs on up the line to the Shriners Club (no offense to any Shriners here - only an illustration of a highest echelon). They send 'em back, and we repeat the process. Meanwhile, these little rascals called Entropy sneak around with these bright colored T-shirts on called Drift and Uncertainty, and try to mess up all those things we knew were working really well last time we calibrated them.

And there are, it seems, an infinite number of those little rascals in T-shirts sneaking around messing up our stuff. So, we figure out with a little math how long it takes them to totally mess up our stuff. They begin to mess the stuff up as soon as we get it back. How long it takes for them to mess everything up so badly we can't stand to look at it any more depends on how badly behaved the little rascals at our company are.
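That "little math" has a standard form. One common calibration-interval model (a sketch only, with invented numbers; the exponential model is one of several in use) treats the in-tolerance probability as decaying exponentially and solves for the interval that hits a target end-of-period reliability:

```python
import math

def interval_for_target(out_of_tolerance_rate, target_reliability):
    """Exponential reliability model R(t) = exp(-lambda * t): solve
    R(T) = target for the calibration interval T (illustrative only)."""
    return -math.log(target_reliability) / out_of_tolerance_rate

# Suppose 2% of items drift out of tolerance per month, and we want
# 85% still in tolerance at the end of the interval:
interval_months = interval_for_target(0.02, 0.85)  # about 8.1 months
```

Real interval-analysis programs fit the rate from observed in-tolerance data rather than assuming it, but the shape of the calculation is the same.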

Everything in the world that measures drifts all the time. Usually in predictable ways, often in ways we can't predict. It begins an infinitesimally small moment after we finish calibrating, and stops when we re-calibrate.

The best we can do is try to keep those little rascals locked up in their cages, and keep adjusting everything back at appropriate intervals.

Stepping out of my it's-been-a-long-week mode, I think it would be much more fair to say we can never know exactly what an instrument will measure. The purpose of understanding uncertainty is simply to try to quantify how well we understand the range around which an instrument will measure over a predictable period of time. Uncertainty analysis is nothing more than SPC applied to the calibration world with a few of the names changed.
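Jerry's SPC analogy can be made concrete: monitoring a check standard between calibrations is essentially an individuals control chart. A minimal sketch (the readings are invented):

```python
import statistics

def control_limits(readings, k=3):
    """Centre line and +/- k-sigma limits from check-standard readings,
    computed exactly as SPC would for any process variable."""
    centre = statistics.mean(readings)
    sigma = statistics.stdev(readings)
    return centre - k * sigma, centre, centre + k * sigma

readings = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01]  # invented check-standard data
lcl, centre, ucl = control_limits(readings)
# A reading outside (lcl, ucl) signals that drift has become significant
# and the instrument should be recalibrated early.
```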

Sorry folks, long day. Hope this is useful intelligent reading of benefit to some of us out there who get confused by uncertainty.
 
Unregistered

Originally posted by Jerry Eldred
Everything in the world that measures drifts all the time. Usually in predictable ways, often in ways we can't predict. It begins an infinitesimally small moment after we finish calibrating, and stops when we re-calibrate.

Actually, the best voltage reference standard available to the masses drifts, but it does so in such a predictable fashion that on any given day I can tell you its output with an uncertainty of about 0.3 PPM, which isn't too shabby. The trick is that you NEVER adjust it, you only chart it. It seems the best standards are not necessarily those that stay closest to nominal, rather those that you can predict with little uncertainty.
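Ryan's "never adjust it, only chart it" approach amounts to fitting the drift and extrapolating. A minimal sketch with invented chart data (an ordinary least-squares line; a real program would also carry the uncertainty of the prediction):

```python
def fit_drift(times, values):
    """Ordinary least-squares slope and intercept for charted readings
    of a reference standard (times in days, values in volts)."""
    n = len(times)
    t_bar = sum(times) / n
    v_bar = sum(values) / n
    slope = (sum((t - t_bar) * (v - v_bar) for t, v in zip(times, values))
             / sum((t - t_bar) ** 2 for t in times))
    return slope, v_bar - slope * t_bar

# Invented chart: a 10 V reference drifting upward about 0.2 uV/day.
times = [0, 30, 60, 90]
volts = [10.000000, 10.000006, 10.000012, 10.000018]
slope, intercept = fit_drift(times, volts)
predicted_day_120 = intercept + slope * 120  # value on the next chart date
```

The predictability, not the closeness to nominal, is what makes the standard good: the tighter the scatter about the fitted line, the smaller the uncertainty of the prediction.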

Let go, Luke, use the drift...

Ryan
 
DICKIE

I agree wholeheartedly with Dr. Castrup. I take everything he says about uncertainty to heart and believe him to be the preeminent expert in the field.

Greg
 

Marc

> Meanwhile, these little rascals called Entropy sneak
> around with these bright colored T-shirts on called Drift
> and Uncertainty, and try to mess up all those things we
> knew were working really well last time we calibrated
> them.

Jerry, I really liked your description and the rest about the "...rascals..." A good way to present it. I pasted this thread from the ListServe because of the confusion about uncertainty and how it plays a part in the whole chain. I hadn't thought of it as Entropy.

It's interesting how all the factors add up in some critical situations. I remember years ago at Cincinnati Electronics the cal lab had a 'special' section with certain standards like resistors that were kept immersed in liquid at a certain temperature, etc. I didn't work in the area, but was in there a few times during some projects. I really didn't pull everything together in my mind until the last 4 or 5 years. QS-9000's Measurement Systems Analysis brought together the 'last mile' for me, so to speak.

Anyway, one last snippet:

************************
Date: Thu, 18 Oct 2001 10:26:29 EDT
From: CoachIngal
Subject: Re: Traceability/Uncertainty

VIM definition of traceability:

"property of the result of a measurement or the value of standards whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties."

- Paul Ingallinera
L-A-B Assessor

*********************************************

Date: Thu, 18 Oct 2001 08:45:14 -0400
From: Philip Stein
To: Greg Gogates
Subject: Re: Traceability/Uncertainty

Howard Castrup wrote, below, that

>"it is important to note that the uncertainty statement that
>accompanies a calibrated parameter is nearly always just an estimate of the
>uncertainty of the calibration process -- not an estimate of the uncertainty
>in the reported parameter value at the time of use and under usage
>conditions."

But in fact many uncertainty budgets I have assessed DO estimate uncertainty growth with time. The best of this breed use appropriate statistical analyses of historical data to do so. Fluke, for example, conducts a program whereby voltage reference standards are calibrated and the report produced includes the slope of the drift. It is therefore possible for the user of the standard to estimate the actual voltage at any time in the future during the calibration interval. This is still an estimate, of course; the actual voltage is not known, but it is being predicted with a narrower confidence interval than would be possible without this program. This drift is included as part of an overall budget that encompasses other major influence quantities. For the most picky work, the drift is treated as a 'bias' and corrected for as of the time of use of the standard; the uncertainty in this 'bias' goes into the budget. Most practitioners, however, just include the maximum predicted drift for the entire cal interval as a term in their budget. Either approach is acceptable, although many thinkers, including the GUM, frown on moving uncorrected 'bias' into the uncertainty.
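Budgets like the ones Phil describes combine such terms in root-sum-square fashion, per the GUM's law of propagation for uncorrelated inputs. A minimal sketch with invented component values:

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-square of standard uncertainty components (GUM law of
    propagation, uncorrelated inputs, unit sensitivity coefficients)."""
    return math.sqrt(sum(u ** 2 for u in components))

# Invented budget for a voltage standard at the end of its interval (volts):
budget = {
    "calibration report":      1.0e-6,
    "max predicted drift":     0.8e-6,  # the 'whole-interval' approach
    "temperature coefficient": 0.3e-6,
}
u_c = combined_standard_uncertainty(budget.values())
```

Including the maximum predicted drift as a component, as most practitioners do, inflates the combined uncertainty slightly but avoids the frowned-upon step of leaving a known bias uncorrected.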

In my experience, most budget writers include specific influence quantities and ignore or are unaware of others. Change of the standard between external calibrations is commonly (but not universally) included.

So what about the traceability question that is being discussed here? The definition

> "Traceability: "The property of a result of measurement whereby it can be
> related to appropriate standards, generally international or national
> standards, through an unbroken chain of comparisons."

is one we lived with, more or less, for years. All we are doing today is highlighting the fact that the word 'related' in the definition must be understood as requiring a certain level of quantitation. Any relaxation of the current definition of traceability must still require 'related' to be a quantitative statement if traceability is to mean anything at all. Alternative methods of quantifying 'related' may be acceptable to some, but since we already have an accepted world-wide standard for accomplishing such a task, namely the GUM, there should be considerable justification for doing it a different way.

Phil

Philip Stein
Fellow, ASQ and member of its board of directors
A2LA Lead Assessor
Past Chair, ASQ Measurement Quality Division
 
Ruebenn

Re: Thoughts on Traceability and Uncertainty

Dear Sir,

Thanks for the help, truly appreciate it.
Advanced Christmas greetings.
I suppose others have personally asked you for similar advice before, and I would truly be obliged if you could let me know, in brief point form, what is required of an approved signatory of an RF/Microwave lab under the 17025 scheme.
I have come to understand that the technical requirements are more important than the quality aspects.
I know we have to know the standards (both operation and service aspects) by heart, which is what I am working so hard on now.
And I also know that we have to know something about measurement uncertainty (BMC) and all?
Do you have a sample of such a calculation, for example using a measuring receiver (HP 8902S) to calibrate a signal generator (HP 8648C)?
I am still unclear on the details -- accuracy, resolution and so on.
Appreciate the reply.

Rgds
Ruben
 