Date: Fri, 29 Sep 2000 14:18:10 -0400
From: Philip Stein
To: Greg Gogates
Subject: Re: Uncertainty basics

Measurement Uncertainty as defined by International Standards is a complex and well-defined set of computations. There are many thorough documents, most of them local versions of the international standard. Rather than attempting to answer your questions directly, I will direct you to a website with some very good answers and more references.

http://physics.nist.gov/cuu/index.html

Happy surfing

>Date: Wed, 27 Sep 2000 07:34:42 -0400
>From: "Loebach, David"
>To: 'Greg Gogates'
>Subject: RE: Uncertainty Reporting per Accreditation Requirements RE13
>
>I have a couple of very basic questions regarding uncertainty. Perhaps this very accomplished group can help me answer them.
>
> 1. What is measurement uncertainty (in quantifiable terms, please)?
> 2. How does measurement uncertainty relate to resolution? (i.e., the instrument needle is between the 5.4 line and the 5.5 line, so I estimate I can read this parameter to the nearest .05.)
> 3. How does measurement uncertainty relate to standard deviation?
> 4. When an instrument manufacturer says his instrument is accurate to 1% of full scale, is this a statement of resolution, uncertainty, or standard deviation?
> 5. How many degrees of freedom are associated with measurement uncertainty?
> 6. What good is standard deviation without knowing the degrees of freedom associated with it?
> 7. Statistical confidence intervals give a clearly defined estimate of a parameter. Does measurement uncertainty also give a clearly defined estimate of a parameter (such as measurement error)?
>
>Dave Loebach
>Senior Program Coordinator and Lead Assessor
>OSHA NRTL Program
>
> ----------
> From: Greg Gogates[SMTP:iso25@fasor.com]
> Sent: Monday, September 25, 2000 1:46 PM
> To: iso25@quality.org
> Subject: Uncertainty Reporting per Accreditation Requirements RE13
>
> Date: Fri, 22 Sep 2000 10:27:17 -0700
> From: "Nielsen, Larry E"
> To: 'Greg Gogates'
> Subject: RE: Uncertainty Reporting per Accreditation Requirements RE12
>
> Lynne,
> Assuming we're discussing a hierarchical system of measurements (such as from NIST to a calibration laboratory to an end user like a testing laboratory), at each step in the process the uncertainties are linked but not duplicated. In each case, the uncertainty reported by the higher echelon laboratory should be accepted as the Type B uncertainty at the next lower level.
>
> For example, let's say a testing laboratory is using a calibrated force gage to determine the force required to open a simulated automotive door latch. The uncertainty in the door latch opening force is determined by the root sum square of: the standard deviation of the force measurements made with the gage; the standard uncertainty reported by the calibration lab for the force gage; and any other uncertainty introduced in the testing lab, such as by not having good environmental controls; all multiplied by a coverage factor assigned by the testing lab.
>
> If the testing lab has properly selected its force gage and its calibration service provider, any uncertainty introduced by calibration errors in the gage should be negligible with respect to the force required to open the door latch.
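A minimal sketch in Python of the root-sum-square combination Larry describes above. The readings, component values, and coverage factor k = 2 are made-up assumptions, purely for illustration; it also touches Dave's questions 3, 5, and 6, since the standard deviation of repeated readings becomes the Type A standard uncertainty, with n - 1 degrees of freedom.

# GUM-style combination for the door latch force example (assumed numbers)
import math
import statistics

# Type A: repeated force readings made with the gage, in newtons (hypothetical)
readings = [52.1, 51.8, 52.4, 52.0, 51.9]
s = statistics.stdev(readings)             # sample standard deviation
u_type_a = s / math.sqrt(len(readings))    # standard uncertainty of the mean
dof = len(readings) - 1                    # degrees of freedom = n - 1

# Type B: standard uncertainty from the gage's calibration certificate,
# plus an allowance for imperfect environmental control (both assumed)
u_cal = 0.05   # N
u_env = 0.08   # N

# Combined standard uncertainty: root sum square of the components
u_c = math.sqrt(u_type_a**2 + u_cal**2 + u_env**2)

# Expanded uncertainty: combined standard uncertainty times the coverage factor
k = 2          # chosen by the testing lab, roughly 95 percent coverage
U = k * u_c

print(f"u_A = {u_type_a:.3f} N (dof = {dof}), u_c = {u_c:.3f} N, U = {U:.3f} N")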
> If a calibration lab has some reason to believe that the calibration uncertainty it assigns to the gage will not be representative of what the user will experience, it may implore the user to apply additional corrections or to account for additional sources of error. However, as you correctly point out, it is up to the user to determine what effect this additional error may have on the uncertainty of his/her measurements.
>
> What users tend to get all worked up over is that the calibration lab will report, for example, that the force gage has an expanded uncertainty of 0.1 percent, and the user finds out that when he uses the gage in an 87 °F shop, clamped down with a pair of vise grips next door to an iron foundry, he can only rely on it to give answers to around 0.2 percent. Instead of thinking about what a lousy job the calibration lab did, he probably need only worry about what effect the additional uncertainty will have on his plus or minus 5 percent door latch measurements.
>
> Sincerely,
> ****************************************************
> Larry E. Nielsen
> So. Cal. Edison - Metrology
> 7300 Fenwick Lane
> Westminster, CA 92683
> (714) 895-0489; fax (714) 895-0686
> e-mail: nielsele@sce.com
> ****************************************************
>
> > ----------
> > From: Greg Gogates[SMTP:iso25@fasor.com]
> > Sent: Friday, September 22, 2000 6:55 AM
> > To: iso25@quality.org
> > Subject: Uncertainty Reporting per Accreditation Requirements RE12
> >
> > Date: Thu, 21 Sep 2000 17:42:46 -0400
> > From: QA_neumann
> > To: Greg Gogates
> > Subject: Re: Uncertainty Reporting per Accreditation Requirements RE3
> >
> > A cal lab (unless it is part of the user's organization) cannot possibly know all the sources of error for the end user. It is up to the user of the equipment to determine what their uncertainty is; an independent cal lab cannot do this. All they can provide is the uncertainty that their measurement system introduces into the budget, which would be a Type B error for the budget that the lab would produce. If the cal lab has taken the time to do the Type A and knows what the Type B factors would be, then they can give the uncertainty.
> >
> > Just reporting that the client may expect different results is an understatement. They will definitely get different results depending on use factors that the cal lab would not know. The user needs to determine the budgets and factors, and calculate their own uncertainty. If the calibration lab is going to give the laboratory an uncertainty budget that would include Type B errors that would normally be considered by the user, one would hope that the calibration labs are encouraged to present the lab with the uncertainty budget that they use. This would prevent the laboratory from creating a budget that duplicates some of the same sources of error, and from falsely creating a larger budget than necessary. This really needs to be standardized, or calibration laboratories are going to be double dipping on budgeted items. This will not benefit anyone. Good luck on the RP.
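To put rough numbers on Larry's 0.1 percent versus 0.2 percent example, here is a sketch of how the user might fold the calibration lab's reported expanded uncertainty into his own budget as a Type B component and combine it once with his own use conditions, with no double dipping. Only the 0.1 percent and the plus or minus 5 percent figures come from the example; the use-condition value and the coverage factors are assumptions for illustration.

import math

# Calibration lab reports an expanded uncertainty of 0.1 percent at k = 2
U_cal = 0.001
k_cal = 2
u_cal = U_cal / k_cal          # back to a standard uncertainty, used as Type B

# Assumed additional standard uncertainty from the user's shop conditions
# (temperature, mounting, vibration from the foundry next door, ...)
u_use = 0.00085

# Combine each source once, then expand with the user's own coverage factor
u_c = math.sqrt(u_cal**2 + u_use**2)
U_user = 2 * u_c

tolerance = 0.05               # plus or minus 5 percent door latch requirement
print(f"user's expanded uncertainty ~ {U_user:.2%}, tolerance {tolerance:.0%}")
print(f"uncertainty-to-tolerance ratio ~ {U_user / tolerance:.2f}")

Either way, the result is a small fraction of the plus or minus 5 percent requirement, which is Larry's point.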
> > I'm of the opinion that calibration laboratories cannot give a laboratory anything but their own uncertainty. If the cal lab starts including things in the budgets that should normally be up to the user of the item to identify, we are going to confuse the matter more than it already is. In Ted's example I can clearly see why the item is included, but as a general rule you can see that they are not including this. The other remarks are more difficult to understand.
> > Lynne
> >
> > ----- Original Message -----
> > From: "Greg Gogates"
> > To:
> > Sent: Tuesday, September 19, 2000 6:20 PM
> > Subject: Uncertainty Reporting per Accreditation Requirements RE3
> >
> > > Date: Tue, 19 Sep 2000 08:05:30 -0700
> > > From: "Nielsen, Larry E"
> > > To: 'Greg Gogates'
> > > Subject: RE: Uncertainty Reporting per Accreditation Requirements
> > >
> > > Chuck,
> > > The way you have posed the question, it sounds like you are pretty much saying the same thing either way. The confusion generally arises over whether laboratories should simply report their ability to measure or the uncertainty of their measurements with respect to the item or device under test. My NCSL committee is currently taking up this issue in a project to revise RP-9 on calibration laboratory capability documentation guidelines.
> > >
> > > The short answer to your question is that when reporting results of calibrations or tests to clients, measurement uncertainty includes all significant sources of error, including contributions from standards, any errors introduced as a result of performing the measurement, and any contribution from the device under test.
> > >
> > > The committee is currently leaning towards recommending that laboratories declare their capabilities both ways. In other words, define their ability to measure on a parametric basis, based on the accuracy or uncertainty of their standards. (They need to do this anyway in order to evaluate their contribution to measurement uncertainty.) Also, declare their capability on the basis of measurement uncertainty with respect to calibration of specific devices (i.e., gage blocks, DMMs, etc.).
> > >
> > > So what you may be running into in your travels is a laboratory's analysis of its ability to measure. But what assessors and accreditation bodies are looking for with respect to ISO 17025 is the measurement uncertainty of the device or item under test that will ultimately be reported to clients.
> > >
> > > Sincerely,
> > > ****************************************************
> > > Larry E. Nielsen
> > > So. Cal. Edison - Metrology
> > > 7300 Fenwick Lane
> > > Westminster, CA 92683
> > > (714) 895-0489; fax (714) 895-0686
> > > e-mail: nielsele@sce.com
> > > ****************************************************
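A sketch of the "declare it both ways" recommendation above: the lab's ability to measure, based only on its standards and measurement process, next to the measurement uncertainty it would report to a client, which also includes a contribution from the device under test. The numbers (in volts) and the rectangular treatment of the DMM's resolution are assumptions for illustration.

import math

def rss(*components):
    """Root-sum-square combination of standard uncertainty components."""
    return math.sqrt(sum(c**2 for c in components))

# Lab's own contributions (standard uncertainties, in volts; assumed values)
u_standard = 2.0e-6    # reference standard, from its calibration
u_process  = 1.5e-6    # repeatability, operator, environment, etc.

# Ability to measure, declared on a parametric basis (k = 2)
capability = 2 * rss(u_standard, u_process)

# Device under test: a DMM with 10 microvolt resolution; half the resolution
# treated as a rectangular distribution gives resolution / (2 * sqrt(3))
resolution = 10e-6
u_dut = resolution / (2 * math.sqrt(3))

# Measurement uncertainty with respect to the device under test (k = 2),
# which is what would be reported to the client
reported = 2 * rss(u_standard, u_process, u_dut)

print(f"ability to measure: {capability:.2e} V")
print(f"reported to client: {reported:.2e} V")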
> > > >
> > > > ----------
> > > > From: Greg Gogates[SMTP:iso25@fasor.com]
> > > > Sent: Monday, September 18, 2000 2:58 PM
> > > > To: iso25@quality.org
> > > > Subject: Uncertainty Reporting per Accreditation Requirements
> > > >
> > > > Hello All,
> > > >
> > > > First of all, please read the whole posting before jumping to conclusions. The reason for this posting is to ask this esteemed group a question about uncertainty reporting (with respect to accreditation requirements) and then to obtain comments on the question.
> > > >
> > > > Over the last six months I have had the opportunity to review quite a few uncertainty budgets (many of which were written by members of this group), and I think we as a community have a problem. I will do my best in sharing with you what I have found (shoot the poster if you must), but I'm hoping this posting triggers some enlightened discussions in addressing the problem and leads to a conclusion that we all can share with the test & measurement community.
> > > >
> > > > The question is (drum roll, please):
> > > >
> > > > Does the standard (17025) ask for the uncertainty of the measurement being provided to the UUT, or for the uncertainty of the measurement being made???
> > > > This is an important question, because I have seen both methods being done by the test community, and the results, as we all know, are definitely different.
> > > >
> > > > Over the last six months I have had several discussions (sometimes heated) with many of you as to what is being asked for by the standard. And I have been told by some scholars/leaders in the community (in no uncertain words) that the standard is very clear in asking for the uncertainty of the measurement being provided (Method A); while other scholars of the test & measurement community have told me (in no uncertain words) that the standard is very clear and is asking for the uncertainty of the measurements being made (Method B). So here I am, a simple admin. person, being told that both are correct. How can this be?
> > > > Now, to add fuel to the fire, I sure hope that assessors are not telling some labs seeking accreditation that "Method A" is correct, while other assessors are telling other labs that "Method B" is correct. This wouldn't be happening; WOULD IT?!!!
> > > >
> > > > I have read the standard(s) and have an opinion, but I reserve the right to invoke the 5th, at least for now. I look forward to all responses. Please feel free to email me directly if you prefer or, preferably, post your response to this list server. I truly would like to hear all opinions.
> > > >
> > > > Chuck
> > > >
> > > > Charles J. Ellis
> > > > Managing Director
> > > > National Association for Proficiency Testing
> > > > 8014 Olson Memorial Highway, # 167
> > > > Minneapolis, MN 55427
> > > >
> > > > Updated Area Codes & Fax Number
> > > > (763) 542-8872
> > > > fax (763) 847-6772
> > > >
> > > > Numeric Page 1-888-879-3434
> > > > 6129985184# then callback number #
> > > >
> > > > email text to pager (100 characters)
> > > > 6129985184@uswestdatamail.com

Philip Stein O-
Fellow, ASQ and past member of its board of directors
A2LA Lead Assessor
Past Chair, ASQ Measurement Quality Division
Check out http://www.measurement.com