
Measurement Uncertainty


Measurement Uncertainty - A parameter associated with the result of a measurement (e.g., a calibration or test) that defines the range of values that could reasonably be attributed to the measured quantity. When uncertainty is evaluated and reported in a specified way, it indicates the level of confidence that the value actually lies within the range defined by the uncertainty interval.
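A common "specified way" to report this, following the GUM (Guide to the Expression of Uncertainty in Measurement), is to quote the result together with an expanded uncertainty U, obtained by multiplying the combined standard uncertainty by a coverage factor k; for a roughly normal distribution, k = 2 corresponds to a level of confidence of approximately 95%:

<math>Y = y \pm U, \qquad U = k\,u_c(y)</math>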

Any measurement is subject to imperfections; some of these are due to random effects, such as short-term fluctuations in temperature, humidity, and air pressure, or variability in the performance of the person making the measurement. Repeated measurements will show variation because of these random effects. Other imperfections are due to the practical limits to which correction can be made for systematic effects, such as offset of a measuring instrument, drift in its characteristics between calibrations, personal bias in reading an analogue scale, or the uncertainty of the value of a reference standard.
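However the individual components are evaluated (statistically from repeated readings, or by other means such as calibration certificates and manufacturer specifications), the GUM combines them in quadrature. For uncorrelated contributions, where each u_i(y) is the i-th component's standard uncertainty already expressed in the units of the result, the combined standard uncertainty is:

<math>u_c(y) = \sqrt{\sum_{i=1}^{N} u_i^2(y)}</math>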

The term "measurement uncertainty" is defined as the tolerance for measurement inaccuracy. The two components that create measurement inaccuracy are error and variation. It is important to note that there is no generic quantification of uncertainty that applies across a broad spectrum of measurement situations. Some auditors mistakenly believe that the errors found during calibration determine the total measurement uncertainty. The focus, however, should be on identifying and then reducing the uncertainty of the measurement in a real-world application.

To properly evaluate measurement and test equipment, an auditor must move past the philosophy that all error and variation are caused by the measurement gage. Every measurement system comprises six elements:

   * Instrument
   * Operator
   * Part (or part characteristic)
   * Method (of measurement used)
   * Environment
   * Tolerance (size)

A change in any one of these elements creates a new measurement system. Each element introduces its own sources of error and variation into the measurement result or, in the case of Tolerance, affects the magnitude of the other elements.

Consider Method: "When I hold the part and measure it I get one reading, but she rests the part on the granite plate, measures it, and gets a different reading." That is because of phenomena such as Abbe error; different methods usually produce different results. Consider Part: "If I measure this outside diameter in several places (using a 0.00005-inch micrometer), I get all different readings." At today's real-world resolutions of microns and millionths, nothing is truly flat, round, or parallel.

Examples like these occur every day on the production floor and have very little to do with calibration and everything to do with measurement uncertainty. Calibration involves only one of the elements, the Instrument. The other five involve variables found in the production floor Environment; they are removed from the equation in the controlled environment of calibration.
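To make the Part example concrete, here is a minimal Python sketch (the readings are hypothetical) that treats repeated diameter measurements taken at different positions as a Type A evaluation, summarizing the spread with the sample standard deviation and the standard uncertainty of the mean:

 import math
 import statistics
 
 # Hypothetical outside-diameter readings (inches) taken at several
 # positions around one part with a 0.00005-inch micrometer; the spread
 # reflects out-of-roundness, not calibration error.
 readings = [0.50005, 0.49995, 0.50010, 0.49990, 0.50000, 0.50005]
 
 mean = statistics.mean(readings)
 s = statistics.stdev(readings)        # sample standard deviation
 u = s / math.sqrt(len(readings))      # Type A standard uncertainty of the mean
 
 print(f"Mean diameter:       {mean:.5f} in")
 print(f"Std deviation:       {s:.5f} in")
 print(f"Std uncertainty (u): {u:.5f} in")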

Measurement uncertainty is an important element of the ISO/IEC 17025 standard. Section 5.4.6, Estimation of Uncertainty of Measurement, describes the requirements that calibration and testing laboratories must meet with respect to measurement uncertainty.

Each instrument has an inherent amount of uncertainty in its measurement. Even the most precise measuring device cannot give the actual value, because to do so would require an infinitely precise instrument. A measure of the precision of an instrument is given by its uncertainty. As a good rule of thumb, the uncertainty of a measuring device is 20% of the least count. Recall that the least count is the smallest subdivision given on the measuring device. The uncertainty should be given with the actual measurement, for example, 41.64 ± 0.02 cm.

Here are some typical uncertainties of various laboratory instruments:

   * Meter stick: ± 0.02 cm
   * Vernier caliper: ± 0.01 cm
   * Triple-beam balance: ± 0.02 g
   * Graduated cylinder: 20% of the least count 

Here's an example. The uncertainty of all measurements made with a meter stick whose smallest division (or least count) is one millimeter is 20% of 1 mm, or 0.02 cm. Say you use that meter stick to measure a metal rod and find that the rod is between 10.2 cm and 10.3 cm. You may judge that the rod is closer to 10.2 cm than to 10.3 cm, so your best estimate is that the rod is 10.23 cm long. Since the uncertainty in the measurement is 0.02 cm, you would report the length of the metal rod as 10.23 ± 0.02 cm (0.1023 ± 0.0002 m).
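As a minimal sketch of this rule of thumb (the report function below is hypothetical, not from any standard library), the following Python snippet computes the instrument uncertainty as 20% of the least count and formats a reading for reporting; applied to the meter-stick example, it reproduces 10.23 ± 0.02 cm:

 def report(reading: float, least_count: float, unit: str) -> str:
     """Format a measurement as 'value ± uncertainty unit', taking the
     instrument uncertainty to be 20% of the least count (rule of thumb)."""
     uncertainty = 0.2 * least_count
     return f"{reading} ± {uncertainty:g} {unit}"
 
 # Meter stick: least count is 1 mm = 0.1 cm, so the uncertainty is 0.02 cm
 print(report(10.23, 0.1, "cm"))   # -> 10.23 ± 0.02 cm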

When a quantity is graphed, it is common for the uncertainty of that quantity to be represented by error bars.
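For instance, with matplotlib (assuming it is available), error bars can be drawn with plt.errorbar; the readings below are hypothetical, with the ± 0.02 cm meter-stick uncertainty shown on each point:

 import matplotlib.pyplot as plt
 
 # Hypothetical repeated length measurements (cm); the meter-stick
 # uncertainty of ± 0.02 cm is drawn as vertical error bars.
 trial = [1, 2, 3, 4, 5]
 length_cm = [10.23, 10.25, 10.22, 10.24, 10.23]
 
 plt.errorbar(trial, length_cm, yerr=0.02, fmt="o", capsize=3)
 plt.xlabel("Trial")
 plt.ylabel("Length (cm)")
 plt.show()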


Elsmar Cove Forum Measurement Uncertainty discussion threads.


External Resources:

   * Essentials of Expressing Measurement Uncertainty of Measurement Results
   * Measurement Uncertainty Calculator from eCalibration.com
   * Measurement Uncertainty Equations