Subject: Uncertainty RE13
Date: Fri, 12 Jan 2001 13:40:36 +0000
From: Steve Ellison
To: iso25@fasor.com
Subject: Re: Uncertainty RE11

This was my post, originally. Kurt, you may have misunderstood my intent here - I was being rather brief. I'm not suggesting that one calculate the averaged error. I'm saying that we have a result, y1 (with uncertainty), which appears to be outside a permitted range. We do a retest mainly to check for gross error in operation, and get a second result, y2. Now if there's a big difference between the two, we have to go away and do some more work - find out why, and if necessary repeat the entire testing procedure. But if there isn't much difference, we would probably conclude that we have two essentially valid results.

The question then arises of how to interpret the information. One seriously silly way to proceed is to go on the basis of the best result (i.e. the one closest to compliance). This is sometimes called 'bouncing them in'. It is extremely bad practice, and it's what I was trying to avoid by referring to averaging.

My view of best practice is to use all the information properly to get the best estimate of the value. In this case, it's usually the average of the valid results (though you could argue for weighted averages or other combinations if they were appropriate). I'll assume it's the average. The uncertainty for the average is smaller than that for the original result y1, since we've averaged out part of the random error, but as Alan Rowley correctly pointed out, that's the only part that gets smaller on averaging, so the uncertainty does not shrink by the full factor of sqrt(2). We then have an average of y1 and y2, and a slightly smaller uncertainty. We assess compliance using this new, averaged result and its uncertainty. If we're lucky, we now have a clear-cut compliance or noncompliance decision; often, though, we'll just be in a slightly smaller grey area.
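Steve's point about averaging can be put in a quick numerical sketch. Assuming (purely for illustration - the split below is hypothetical, not from the thread) that the single-result uncertainty separates into a systematic component u_sys and a random component u_rand, only the random part divides by n when n results are averaged:

```python
import math

def uncertainty_of_mean(u_sys: float, u_rand: float, n: int = 2) -> float:
    """Combined standard uncertainty of the mean of n repeat results.

    Only the random (repeatability) component shrinks with averaging;
    the systematic component is common to every repeat and does not.
    """
    return math.sqrt(u_sys**2 + u_rand**2 / n)

# Illustrative numbers (assumed, not from the thread):
u_sys, u_rand = 0.03, 0.04
u_single = math.sqrt(u_sys**2 + u_rand**2)       # uncertainty of one result: 0.05
u_avg = uncertainty_of_mean(u_sys, u_rand, n=2)  # uncertainty of the mean of two
```

Here u_avg comes out at about 0.041 - smaller than the single-result 0.05, but larger than 0.05/sqrt(2) ≈ 0.035, exactly as described above.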
My view is that this is the time to bite the bullet and make your decision - ideally using your decision rules. More retests begin to run into diminishing returns and increasing costs. Incidentally, there may be other considerations; time since first sampling may call the second result into question more than the first, for example. That ought to be built in to any use of the data - I've assumed that no such problem applies.

You could, of course, do your check measurement (y2) and then go on the LEAST compliant of the two (which is one way of reading your principle of taking the bigger error of the two). Sure, if you like. It's conservative, and you'll reject a few more. Whether that's sensible depends a bit on the consequences - but if it's that serious to get it wrong, the limits probably ought to have been tighter in the first place.

However, if you were talking about getting an estimate of error in a reading, with only two results to go by almost any estimate of error is going to be pretty wild unless the precision is good. The average error is actually the best estimate, but I doubt that many people would object to a more conservative view under those circumstances. In any case, think about the 95% confidence interval of your error estimate - over six times the difference between results! Are you sure you only want to quote the worst observation? That's pretty conservative by comparison to the 95% CI! (And we haven't even mentioned the whole uncertainty.)

Steve E.

>>> Greg Gogates 11/01/2001 21:49:35 >>>
Date: Thu, 11 Jan 2001 11:54:46 -0500
From: Kurt Finnie
To: Greg Gogates
Subject: Re: Uncertainty RE04

I'd like to comment about the following paragraph from a previous post.

> But you are, as other people say, in the grey area between clear compliance and clear noncompliance. That situation either needs an experienced eye or a set of predefined rules for action.
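The "over six times the difference between results" remark follows from Student's t: with two results the standard deviation estimate has only one degree of freedom, so the 95% factor is t(0.975, 1) ≈ 12.706. A minimal sketch (the pair of values is illustrative, not from the thread):

```python
import math

# With only two results, the standard deviation estimate has 1 degree of
# freedom, so the 95% two-sided t-factor is large: t(0.975, df=1) ~ 12.706.
T_95_DF1 = 12.706

def ci95_half_width_two_results(y1: float, y2: float) -> float:
    """95% CI half-width for the mean of two results (random error only)."""
    d = abs(y1 - y2)
    s = d / math.sqrt(2)   # sample standard deviation of the pair
    se = s / math.sqrt(2)  # standard error of the mean = d / 2
    return T_95_DF1 * se   # = 12.706 * d / 2, roughly 6.35 * d

# Illustrative pair (assumed values):
half_width = ci95_half_width_two_results(10.0, 10.1)
ratio = half_width / abs(10.0 - 10.1)  # ~ 6.35 times the observed difference
```

The half-width is 12.706/2 ≈ 6.35 times the difference between the two results, whatever that difference is - hence "over six times".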
> In this case, the experienced eye would probably say "it's a fair way out, and a retest would probably just confirm that - call it out". Your predefined rules ought to say:
>
> - what the basic interpretation is ("out" in this case);
> - when a retest is appropriate (e.g. all "out of spec", or only those within 1 standard uncertainty of spec...);
> - how the combination of original result and retest result is to be handled (bearing in mind that a retest is normally there simply to make sure there wasn't a gross error; if there wasn't, the original result remains valid and at worst you should probably take an average of the two results);
> - any limit to the number of retests (my personal preference is one retest and then stop).

> I have taken the position that when I measure and verify, I report the worst error, rather than the average error, of the two measurements. In my mind, averaging hides real probable error.

Kurt Finnie, kfinnie@columbus.rr.com

----- Original Message -----
From: Greg Gogates
To:
Sent: Monday, January 08, 2001 12:31 PM
Subject: Uncertainty RE04

> Date: Mon, 08 Jan 2001 13:03:29 +0000
> From: Steve Ellison
> To: iso25@fasor.com
> Subject: Re: Uncertainty
>
> Charles:
> Leaving aside whether you have the correct uncertainty, and in the absence of specific instructions on guardbanding/interpretation of uncertainty:
>
> > How would you report this?
> 2.88975 +- 0.00006
>
> > Would you state it is in, or out?
> "On the basis of the result found, the device does not meet the specification"
>
> You may find that different rules are needed for different end uses, of course. But the simplest solution is to arrange things - specification and uncertainty of measurement - in advance so that a technician can report on the basis of the result found, on the understanding that the uncertainty is small enough to represent an acceptable risk of incorrect compliance statements (e.g. a 3:1 or 4:1 TUR).
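The kind of predefined decision rule sketched above can be written down as a small function. This is a hypothetical rule for illustration, not one prescribed anywhere in the thread: results within k standard uncertainties of a spec limit (on either side) fall in the grey area and trigger a retest, everything else is a clear pass or fail:

```python
def classify(y: float, u: float, lo: float, hi: float, k: float = 1.0) -> str:
    """Classify a result y against spec limits [lo, hi].

    Hypothetical decision rule: results more than k standard uncertainties
    inside the limits pass; results more than k*u outside fail; anything
    within k*u of a limit is grey area and triggers a retest.
    """
    if lo + k * u <= y <= hi - k * u:
        return "pass"    # clearly inside the spec
    if y < lo - k * u or y > hi + k * u:
        return "fail"    # clearly outside the spec
    return "retest"      # grey area: within k*u of a limit

# Charles's numbers from the question below:
verdict = classify(2.88975, 0.00006, 2.8894, 2.8897)
```

With the numbers from Charles's question, `classify` returns "retest": the result is above the upper limit but by less than one standard uncertainty, which is precisely the grey area being discussed.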
> After that, one retest to confirm that the original procedure wasn't flawed should be enough.
>
> Steve E.
>
> >>> Greg Gogates 05/01/2001 20:30:22 >>>
> Date: Fri, 5 Jan 2001 15:29:23 -0500
> From: "gortakowski, charles"
> To: 'Greg Gogates'
> Subject: Uncertainty
>
> I have a question on uncertainty. For example, if you took a dimensional reading with a piece of equipment, the reading is, let's say, 2.88975. The requirement is 2.8894 - 2.8897. The equipment has an uncertainty of +- 0.000060. If you take 2.88975 and subtract 0.000060, you get 2.88969. How would you report this? Would you state it is in, or out? Would you say it is out, and also report the uncertainty? I just would like some input. I need clarification in my mind. Thanks
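Charles's arithmetic can be checked directly. The result itself lies above the upper limit, but the interval implied by the ± 0.000060 uncertainty still overlaps the specification - which is exactly the grey-area situation the rest of the thread is about:

```python
# Numbers from Charles's question:
y, U = 2.88975, 0.00006
lo, hi = 2.8894, 2.8897

# Interval implied by the result and its uncertainty:
y_lo, y_hi = y - U, y + U  # roughly (2.88969, 2.88981)

inside = lo <= y <= hi                     # False: the result itself is out of spec
overlaps_spec = y_lo <= hi and y_hi >= lo  # True: the uncertainty interval still
                                           # overlaps the spec - a grey-area case
```

Because `inside` is false but `overlaps_spec` is true, neither "in" nor "out" can be stated with full confidence from the result alone - hence Steve's wording "on the basis of the result found, the device does not meet the specification".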