Subject: Re: Re10: Uncertainty in Cal lab and Testing labs? (fwd)
Date: Wed, 10 Feb 1999 11:48:37 -0800
From: Jim Kerwin
To: Greg Gogates

I want to add a word of caution about one statement made in the correspondence shown below. Near the end of the document, after referring to the ISO-Guide approach to evaluating uncertainties, viz., taking partial derivatives of the relevant formulas, it says "an easier way of doing that is to set up the calculation, change each input parameter by the amount of its declared uncertainty, and write down the change in the calculated result. That last is the uncertainty in the result arising from the uncertainty in...input effect[s]."

Taking this advice without a qualification such as "Be sure that the signs of the input uncertainties are chosen so as to give the maximum uncertainty in the calculated result" can produce erroneous results. A simple example shows the need for such a warning.

Consider Ohm's law when used to determine the current (I) through a resistor (R) with a voltage (V) applied across it:

    I = V/R

Assume the measured voltage is 20 volts, +/-5%, and that R = 10 ohms, +/-1%. Taking just the positive (or negative) signs for both of the given uncertainties gives an absolute error of .08 amp for I, or a percent error of 4%. Choosing opposite signs for the two uncertainties gives the larger, and hence better, uncertainty estimate for I of .12 amp, or a percent error of 6%. Thus attention must be paid to the signs of input uncertainties in order to avoid underestimating the uncertainties of derived results.

This sign problem is not a consideration when the root-sum-square formula is used to determine the variance, as is done in the GUM, but it is a concern, in general, when a straight sum is used to get the derived quantity's uncertainty. In the given example, the correct expression to use for the latter is

    mag[dI/I] = mag[dV/V] + mag[dR/R],

where "mag" stands for magnitude.
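The sign effect described above can be checked numerically. A minimal Python sketch, using only the values from the Ohm's-law example (the exact perturbed quotients come out slightly off the first-order figures of .08 and .12 amp, because I = V/R is nonlinear in R):

```python
# Illustration of the sign caution: when propagating uncertainties by a
# straight sum, the input perturbations must be signed so as to maximize
# the change in the derived result.

V, dV = 20.0, 20.0 * 0.05   # measured voltage: 20 V +/- 5%  -> +/- 1.0 V
R, dR = 10.0, 10.0 * 0.01   # resistance: 10 ohms +/- 1%     -> +/- 0.1 ohm

I = V / R                   # nominal current: 2.0 A

# Perturb both inputs with the SAME sign (both positive):
same_sign = abs((V + dV) / (R + dR) - I)

# Perturb with OPPOSITE signs (V up, R down) -- the worst case:
worst_case = abs((V + dV) / (R - dR) - I)

# The correct straight-sum expression uses magnitudes:
#   mag[dI/I] = mag[dV/V] + mag[dR/R]
mag_sum = I * (dV / V + dR / R)

print(f"same signs:     dI = {same_sign:.3f} A ({100 * same_sign / I:.1f}%)")
print(f"opposite signs: dI = {worst_case:.3f} A ({100 * worst_case / I:.1f}%)")
print(f"magnitude sum:  dI = {mag_sum:.3f} A ({100 * mag_sum / I:.1f}%)")
```

The same-sign choice underestimates the uncertainty by a factor of about 1.5 here, which is the whole point of the warning.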
Jim Kerwin, retired metrologist.

At 10:17 AM 2/5/99 -0500, you wrote:
>Date: Thu, 04 Feb 1999 11:11:17 +0000
>From: slre@LGC.CO.UK
>To: iso25@fasor.com
>Subject: Re1: Uncertainty in Cal lab and Testing labs? (fwd)
>
>I differ slightly from Lynne's interpretation.
>ISO Guide 25 and 17025 require uncertainty of measurement to be quoted
>with TEST results "where relevant". Typical interpretations of 'relevant' are:
>- where it is requested by the client
>- where it is part of the specification of the test
>- where it is relevant to the interpretation or validity of the result (an
>  example is where the result is close to a limit)
>
>In practice, this may make it much less frequently necessary to report it
>than appears at first sight. In chemistry, essentially no tests specify
>uncertainty, and very few clients request it. The view has (historically) been
>that one agrees on a test specification taking the known performance of
>the test (uncertainty by another name) into account, and then interprets by
>direct comparison of result with the limit. So even where the result is close
>to the limit, prior agreement on the use of the test constitutes an agreement
>that further uncertainty information is not required. (Of course, there are all
>sorts of caveats, including local verification of performance etc.)
>
>We do have exceptions; in my own lab, we are "The" UK Referee analyst
>under the Food Act and other regs, and are called in when there is a
>dispute involving a result. We would then need to convey uncertainty
>information in our report, as it will materially affect the criminal proceedings
>that arise from the Food Act.
>
>The sting in the tail is that even if you don't have to report it, you DO have
>to KNOW it or be in a position to evaluate it - otherwise you're not in a
>position to decide whether it matters or not! So you must have the
>information needed to assess uncertainties: what the effects are, and how big
>they might be.
>
>As to evaluation: the current approach is the ISO Guide to the Expression
>of Uncertainty in Measurement (ISO GUM). For testing labs, this often looks
>like pretty fearsome stuff. If it doesn't look fearsome to you, you're OK (and
>almost certainly not a chemist!). If it does look fearsome, panic not - you
>may be able to justify the sort of approach I think will become common
>practice in chemistry (at least in the medium term). Elucidation follows.
>
>In chemical testing, you may need to know less about detailed effects,
>because you're usually using a well-established method. The characteristic
>of these methods is that most of the well-understood effects have been
>checked for significance compared to reproducibility precision and, if
>significant, prescriptively controlled to make sure that they are NOT
>significant. The practical upshot of that is that a testing lab need only
>demonstrate that the majority of "systematic" input effects are indeed
>negligible compared to reproducibility precision, verify that local precision
>performance is consistent with published expectation, and show that
>overall bias and its uncertainty are small (reference standard check). If
>you've done that lot, the only significant 'contribution' is the observed
>reproducibility. A one-component uncertainty estimate!
>
>We normally go a bit further; there are usually one or two effects outside
>the scope of a reproducibility assessment (change of sample type, for
>one). We evaluate that separately, either by varying it and looking at the
>effect, or (if you can) from theory. The latter actually applies the ISO Guide
>approach, involving partial differentiation; an easier way of doing that is to
>set up the calculation, change each input parameter by the amount of its
>declared uncertainty, and write down the change in the calculated result.
>That last is the uncertainty in the result arising from the uncertainty in said
>input effect.
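The "easier way" Steve describes, perturbing each input one at a time by its declared uncertainty and recording the change in the result, can be sketched in a few lines of Python. The function `sensitivity` and all names in it are illustrative only, not from any standard or library:

```python
# One-at-a-time perturbation: shift each input by its declared
# uncertainty and record how much the calculated result moves.

def sensitivity(calc, inputs, uncertainties):
    """Return, per input, the change in calc(**inputs) caused by
    shifting that input alone by its declared uncertainty."""
    baseline = calc(**inputs)
    contributions = {}
    for name, u in uncertainties.items():
        shifted = dict(inputs)
        shifted[name] = inputs[name] + u
        contributions[name] = calc(**shifted) - baseline
    return contributions

# Example calculation: Ohm's law, I = V / R.
def current(V, R):
    return V / R

contrib = sensitivity(current,
                      {"V": 20.0, "R": 10.0},   # nominal inputs
                      {"V": 1.0, "R": 0.1})     # declared uncertainties

for name, dI in contrib.items():
    print(f"uncertainty in {name} contributes {dI:+.4f} A to I")
```

Note that each contribution comes out signed (here the R term is negative), which is exactly why Jim Kerwin's caution about sign choice matters when the contributions are later summed directly rather than root-sum-squared.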
>When you have more than one significant contribution, get them all by
>hook or by crook (i.e. following the ISO GUM conversion rules if necessary)
>into the form of a standard deviation of the final measurement result (i.e.
>the same units) and combine by the square root of the sum of the squares
>(independence assumed, as at the testing level they normally are
>independent).
>
>PS: If anybody has trouble with whether the above approach is consistent
>with the ISO GUM, I'm prepared to argue the point; it does need to be
>thrashed out.
>------------------------------------
>Steve Ellison
>Laboratory of the Government Chemist
>Queens Road, Teddington, Middlesex
>ENGLAND TW11 0LY
>
>
>>Date: Fri, 04 Jan 1980 22:39:45 +0100
>>From: "Mr. F.E.Farrugia"
>>Reply-To: fefarr@eng.um.edu.mt
>>Organization: University of Malta
>>X-Mailer: Mozilla 4.06 [en] (Win95; I)
>>MIME-Version: 1.0
>>To: "iso25@quality.org"
>>Subject: Measurement Uncertainty
>>Content-Type: text/plain; charset=us-ascii
>>Content-Transfer-Encoding: 7bit
>>
>>An explanation is required. Is measurement uncertainty in ISO 17025
>>required for testing laboratories, as is the case for calibration
>>laboratories? If the answer is in the affirmative, any suggestions as to
>>how it can be worked out?
>>
>>F.E. Farrugia
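The combination step Steve describes, once every significant contribution has been expressed as a standard deviation in the units of the final result, is just the root of the sum of squares. A minimal sketch; the function name and the numeric contributions are illustrative, not from any standard:

```python
# Combine independent standard uncertainties, all already expressed
# as standard deviations in the units of the final result, by the
# square root of the sum of the squares.
import math

def combine_rss(std_uncertainties):
    return math.sqrt(sum(u * u for u in std_uncertainties))

# e.g. observed reproducibility s = 0.05 plus a separately evaluated
# sample-type effect u = 0.02, in the same units as the result:
u_combined = combine_rss([0.05, 0.02])
print(f"combined standard uncertainty: {u_combined:.4f}")
```

Because the terms are squared, the sign question raised at the top of this thread disappears here; it only bites when contributions are added as a straight sum.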