Date: Thu, 28 Sep 2000 11:34:04 -0700
From: "Dr. Howard Castrup"
To: Greg Gogates
Subject: RE: Uncertainty query
Roberto,
Clearly, there is a bias in laboratory XXX's measurements relative to NIST.
Lab XXX is evidently not taking into account this bias when publishing
average values or in estimating uncertainties. If the bias is known, the
lab should correct for it. If it is unknown, they should endeavor to
estimate its uncertainty. One way to go about doing the latter would be to
compute a standard deviation based on a sample of comparisons with other
laboratories. Designing this experiment would take some care. Another way
would be to compute a standard deviation of differences between prior and
current calibrated values for their relevant standard(s), based on NIST
calibrations of same.
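To illustrate the second approach, here is a minimal sketch, assuming a hypothetical history of NIST-calibrated values for the lab's standard (the numbers are illustrative only):

```python
import statistics

# Hypothetical calibrated values of the lab's standard from
# successive NIST calibrations (illustrative numbers only).
nist_calibrations = [16.2, 16.0, 15.7, 16.1, 15.8]

# Differences between each calibrated value and the prior one.
diffs = [b - a for a, b in zip(nist_calibrations, nist_calibrations[1:])]

# The standard deviation of these calibration-to-calibration
# differences serves as an estimate of the bias uncertainty.
bias_uncertainty = statistics.stdev(diffs)
print(round(bias_uncertainty, 3))
```

With a real calibration history in place of the made-up values, this gives a defensible Type A estimate of the standard's instability between calibrations.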
Also, what you may be looking at is an example of unaccounted-for
uncertainty growth. It's possible that the uncertainty quoted by the lab
includes a contribution from the uncertainty of their standard as reported
by a higher-level calibrating lab (e.g., NIST). Due to random stresses of
usage, handling and storage, this uncertainty may grow with time elapsed
since calibration. Not all labs take this growth into account.
Moreover, there may be systematic drift components that the lab is unaware
of. They need to get with whoever calibrates their equipment and review
as-found and as-left values vs. time between calibrations to see if such
components are present.
Incidentally, if as-found and as-left comparisons are plotted over time,
with each measured difference weighted by the uncertainty of the measuring
process, both the systematic drift and the uncertainty due to random effects
can be estimated. For this to be effective, each supporting lab in the
traceability chain would need to do the same all the way up to NIST.
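A rough sketch of that analysis, assuming hypothetical (time, as-found minus as-left difference, measuring-process uncertainty) data: a weighted least-squares line fitted to the differences gives the systematic drift rate as its slope, and the residual scatter estimates the uncertainty due to random effects.

```python
import math

# Hypothetical (time since calibration, as-found minus as-left
# difference, measuring-process uncertainty) -- illustrative only.
data = [(1.0, 0.10, 0.05), (2.0, 0.22, 0.05),
        (3.0, 0.28, 0.10), (4.0, 0.41, 0.10)]

# Weighted least-squares fit d = a + b*t, each point weighted by
# the inverse square of its measuring-process uncertainty.
w = [1.0 / u**2 for _, _, u in data]
sw = sum(w)
swt = sum(wi * t for wi, (t, _, _) in zip(w, data))
swd = sum(wi * d for wi, (_, d, _) in zip(w, data))
swtt = sum(wi * t * t for wi, (t, _, _) in zip(w, data))
swtd = sum(wi * t * d for wi, (t, d, _) in zip(w, data))

b = (sw * swtd - swt * swd) / (sw * swtt - swt**2)  # systematic drift rate
a = (swd - b * swt) / sw                            # intercept

# Scatter of residuals about the drift line estimates the
# uncertainty contribution from random effects.
residuals = [d - (a + b * t) for t, d, _ in data]
random_u = math.sqrt(sum(r**2 for r in residuals) / (len(data) - 2))
```

The same fit, run at each lab in the chain, separates what should be corrected for (the drift) from what must be carried as uncertainty (the scatter).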
Finally, what I would like to see from lab XXX is a breakdown of the sources
of error that were accounted for and the method of uncertainty estimation.
Also, the +/- 1.0 number needs some clarification, since this is lower than
the NIST estimated uncertainty of +/- 1.6. I know NIST multiplies standard
uncertainties by 2. Does lab XXX do the same?
Incidentally, this is a good example of why indiscriminately multiplying a
standard uncertainty by 2 is not a good practice. It may be that, for lab
XXX (and NIST too, for that matter), the "uncertainty in the uncertainty" is
large. If so, then the effective degrees of freedom is small and k=2 could
produce +/- limits that are dramatically far from 95%. Dr. Doiron of NIST
provided other good examples of this in recent messages in the Proficiency
Testing thread of this discussion list.
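To put numbers on this, here is a small sketch using standard two-sided 95% Student's-t coverage factors: with few effective degrees of freedom, the correct factor is well above 2, so blanket k = 2 limits fall well short of 95% confidence.

```python
# Standard two-sided 95% Student's-t coverage factors by
# degrees of freedom (textbook table values).
t95 = {2: 4.303, 5: 2.571, 10: 2.228, 30: 2.042}

for nu in sorted(t95):
    # How far the blanket k = 2 factor falls short of the
    # factor actually needed for 95% coverage.
    print(f"nu = {nu:2d}: need k = {t95[nu]:.3f}, k = 2 covers "
          f"only {2.0 / t95[nu]:.0%} of the required interval width")
```

At nu = 2, for example, the required factor is about 4.3, so k = 2 gives limits less than half as wide as a true 95% interval.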
Hope this helps.
Howard Castrup
Integrated Sciences Group
-----Original Message-----
From: owner-iso25@quality.org [mailto:owner-iso25@quality.org] On Behalf
Of Greg Gogates
Sent: Thursday, September 28, 2000 10:14 AM
To: iso25@quality.org
Subject: Uncertainty query
Date: Thu, 28 Sep 2000 10:00:50 -0700
From: Roberto Barroetavena
To: 'Greg Gogates'
Subject: Uncertainty
I need a clarification on the concept of uncertainty of a measurement.
As an example: I have a Standard Reference Material from NIST that contains
selenium. The certified value and uncertainty are 16.0 +/- 1.6.
However, after 25 determinations by laboratory "XXX", the average value was
21.5 and the standard deviation 1.0. How is the uncertainty (of
laboratory "XXX") estimated? How does the bias, between the obtained average
value (21.5) and the certified value, affect the uncertainty?
Roberto Barroetavena
rbarroet@gvrd.bc.ca