Date: Mon, 8 Feb 1999 23:22:45 +1100 (EST)
From: jfwilson@enternet.com.au
To: Greg Gogates
Subject: Re: Re1: Uncertainty in Cal lab and Testing labs? (fwd)

> Uncertainty is required for testing in 17025. Working it out may be a
> problem. Some of the same methods used for calibration uncertainty do
> apply to testing. The trick is to know where the elements of uncertainty
> come from and prepare the budget accordingly.
>
> Lynne

This is going to be a real can of worms. Uncertainty can be calculated easily for some testing activities, but is quite difficult to work out for others, and even quite meaningless, I suggest. Let's look at a few extremes encountered in laboratories I have been involved with over the last few years.

Lab A performs bend tests on steel samples. The metal is bent around a mandrel and then examined. The acceptance criterion: "No cracking shall be visible". The laboratory has gone beyond the testing criteria and established controlled conditions: specified lighting conditions and "calibrated" eyeballs. OK, we can establish an average minimum visible crack, using fine lines or notches. But is an uncertainty meaningful here? This is a practical yes/no test.

Lab B performs tests on window assemblies, using a box with the window fitted into it. Leakage is determined with the window and frame first covered by plastic and then uncovered. The difference is considered to be the window leakage, a critical element in air-conditioned buildings. Air volumes are determined from the pressure drop across orifice plates. Here, an understanding of uncertainties allows the lab to pick the optimal conditions - where the box leaks about as much as the window - since the uncertainties associated with low flow and high flow rapidly exceed the small leakage we are interested in. But I can understand why this professional unit is very reluctant to report its uncertainties, given that the result is obtained by difference.
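Lab B's problem can be sketched numerically. Assuming, purely for illustration, two flow measurements each good to about 2 m3/h, combining them in quadrature (the usual GUM treatment of uncorrelated inputs) shows how modest instrument uncertainties swamp a leakage figure obtained by difference; the flow values here are hypothetical, not from the lab:

```python
import math

def leakage_by_difference(q_uncovered, u_uncovered, q_covered, u_covered):
    """Window leakage as the difference of two measured flows.

    The combined standard uncertainty of a difference adds the input
    uncertainties in quadrature (inputs assumed uncorrelated)."""
    q_leak = q_uncovered - q_covered
    u_leak = math.sqrt(u_uncovered ** 2 + u_covered ** 2)
    return q_leak, u_leak

# Hypothetical flows in m3/h: box+window leaks 100, box alone (window
# covered) leaks 90, each flow measured to +/-2 m3/h.
q_leak, u_leak = leakage_by_difference(100.0, 2.0, 90.0, 2.0)
relative = u_leak / q_leak  # roughly 0.28, i.e. ~28% on the leakage
```

Two measurements each good to 2% of reading end up as nearly 30% of the small difference we actually care about, which is exactly why the lab wants the box to leak about as much as the window and no more.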
Lab C performs tests on motor vehicles in accordance with a (nationally legislated) variant of the full-frontal barrier impact test, based on instrumented dummies precisely located within the vehicle. The standard specifies an accuracy of ±5% for each of the accelerometers in the triaxial head cluster. Given that results are biased to one plane, the combined uncertainty can be reduced to about ±8% - for the accelerometers alone. But this doesn't take into account any other factor: dummy set-up, placement, vehicle speed, barrier material performance, etc. Worse, the effect of these variables on the final result is largely unknown - people appear reluctant to allow sufficient replicate testing to be done - so factoring these into the uncertainty budget is guesswork. What then does the lab report? A hokum figure, an "informed" guess? (And then, try explaining uncertainties to the government clerk or lawyer who wants to know why the lab won't definitively rule on a just-pass or just-fail.)

In 2005, can/will accreditation be granted when uncertainties cannot be quantified? If so, this will exclude a lot of small-to-medium industrial-grade labs, which is really where a mark of technical competence is needed. How does your local council pick which lab tests your road base - and do you expect them to report their uncertainties?

Instead of fluffing around with a lot of fancy statistics, how about the accreditation bodies focusing on the areas that really piss off labs' clients: long turn-around times, delays in generating certificates, poorly presented and hard-to-interpret certificates, dud invoicing systems. These quickly ruin the appearance of competence, and the name of the accreditation body, but the ISO Guide appears strangely silent on these non-technical aspects of laboratory competence.
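For what it's worth, the ±8% figure for the head cluster is consistent with the standard root-sum-square combination of the three ±5% accelerometer accuracies; how the lab actually credits the one-plane bias is not spelled out above, so this is only a sketch of the combination rule, not the lab's budget:

```python
import math

def rss(components):
    """Root-sum-square combination of uncorrelated uncertainty
    components, e.g. percentage accuracies of independent channels."""
    return math.sqrt(sum(c * c for c in components))

# Three accelerometers in the triaxial head cluster, each +/-5 %.
u_cluster = rss([5.0, 5.0, 5.0])  # sqrt(75) = about 8.7 %
```

And, as the paragraph above says, that covers the accelerometers only; dummy set-up, vehicle speed and barrier performance would each need their own line in the budget, if anyone could quantify them.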