TAR (test accuracy ratio) vs. TUR (test uncertainty ratio) - The difference is..?

Charles Wathen

TAR or TUR?

I've been reading a bit on TUR (test uncertainty ratio) and came across another one called TAR (test accuracy ratio). Can someone explain the difference between the two?

I'm in the process of modifying my calibration procedure to include an accuracy ratio. For example: if we are calibrating a Mitutoyo Digimatic Thickness Gage with an accuracy of ±.0001", and I'm using grade 2 Jo blocks, the accuracy ratio is 25:1 at 0.000004" and 10:1 at 0.000010". The 4 and 10 millionths would be the maximum error of the gage block. Would this "accuracy ratio" also be called a TAR or a TUR?
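To show the arithmetic, here is a minimal sketch in Python, assuming the ratio is simply the gage tolerance divided by the block's maximum error:

```python
# Sketch of the accuracy-ratio arithmetic above: the gage's accuracy
# tolerance divided by the maximum permitted error of the gage block.
gage_tolerance = 0.0001           # thickness gage accuracy, +/- inches
block_error_small = 0.000004      # grade 2 block max error, inches
block_error_large = 0.000010      # grade 2 block max error, inches

print(round(gage_tolerance / block_error_small))  # 25  -> 25:1
print(round(gage_tolerance / block_error_large))  # 10  -> 10:1
```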
 

Jerry Eldred

Forum Moderator
Depending on your perspective, there is not a lot of difference. Accuracy is the less descriptive term: by definition (getting into some semantics), an accuracy ratio is simply the ratio between the rated accuracies of the two instruments involved (simplified).

Uncertainty is a more cumulative and specific description. If I make a measurement, my uncertainty is the cumulative "accuracies" of the variables involved in the measurement. Uncertainty is a calculated statistical probability, to a given sigma confidence, that the measured value (measurand) lies within given boundaries.

You could try to account for every variable, but the job of the metrologist is to account for those necessary to provide adequate confidence in the measurement. For example, if you are using a standard resistor to test a high accuracy multimeter, the uncertainty of your resistance value would include the stated uncertainty of its measured value. If you are testing in air (as opposed to in an oil bath), there would potentially be some variability due to air temperature and air currents.

I believe the term "accuracy" in the case of T.A.R. was discarded a number of years ago because of the ambiguities it presented, in favor of "uncertainty" used in T.U.R., because there are more definable (less ambiguous) parameters in defining uncertainty.

This is a level of semantics that is important to metrologists, but not to typical users.
 

Graeme

Charles Wathen said:
I've been reading a bit on TUR (test uncertainty ratio) and came across another one called TAR (test accuracy ratio). Can someone explain the difference between the two? …
Charles,

As Jerry has alluded to, there is some ambiguity in the terms. That's fine in everyday language, but in calibration (or any other scientific, engineering or technical pursuit) we must use words that have technical definitions in their correct manner. The metrological definitions of "accuracy" can apply to either a measuring instrument or a measurement result, but "uncertainty" always applies only to a measurement result. Let's look at the actual definitions first:

Accuracy of a measuring instrument: is a qualitative indication of the ability of a measuring instrument to give responses close to the true value of the measurand (parameter being measured.) [VIM, 5.18] This accuracy is a design specification and it is what is verified during calibration.

Accuracy of a measurement: is a qualitative indication of how closely the result of a measurement agrees with the true value of the measurand. [VIM, 3.5] Because the true value is always unknown, accuracy of a measurement is always an estimate. An accuracy statement by itself has no meaning other than as an indicator of quality. It has quantitative value only when accompanied by information about the uncertainty of the measuring system.

Uncertainty: is a property of a measurement result that defines the range of probable values of the measurand. Total uncertainty may consist of components that are evaluated by the statistical probability distribution of experimental data (Type A methods), or from assumed probability distributions based on other data (Type B methods). Uncertainty is an estimate of dispersion; effects that contribute to the dispersion may be random or systematic. [GUM, 2.2.3] Uncertainty is an estimate of the range of values that the true value of the measurement is within, with a specified level of confidence. After an item which has a specified tolerance has been calibrated using an instrument with a known accuracy, the result is a value with a calculated uncertainty.
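To make the Type A / Type B combination concrete, here is a minimal sketch of the root-sum-square method from the GUM; every number in it is invented for illustration:

```python
import math
import statistics

# Type A: standard uncertainty evaluated statistically from repeated
# readings = experimental standard deviation of the mean.
readings = [1.0002, 1.0001, 1.0003, 1.0002, 1.0001]  # hypothetical
u_a = statistics.stdev(readings) / math.sqrt(len(readings))

# Type B: a +/-a specification with an assumed rectangular
# distribution contributes u = a / sqrt(3).
a = 0.0002                      # hypothetical tolerance half-width
u_b = a / math.sqrt(3)

# Combined standard uncertainty: root sum of squares.
u_c = math.sqrt(u_a**2 + u_b**2)

# Expanded uncertainty at a coverage factor k = 2 (~95 % confidence).
U = 2 * u_c
print(f"u_c = {u_c:.6f}, U(k=2) = {U:.6f}")
```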

Test Accuracy Ratio (TAR): in a calibration procedure, the TAR is the ratio of the accuracy tolerance of the unit under calibration to the accuracy tolerance of the calibration standard used. [NCSL, page 2] The TAR must be calculated using identical parameters and units for the UUC and the calibration standard. If the accuracy tolerances are expressed as decibels, percentage or another ratio, they must be converted to absolute values of the basic measurement units.
In the normal use of IM&TE items (that is, outside the calibration lab), the TAR is the ratio of the tolerance of the parameter being measured to the accuracy tolerance of the IM&TE.
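As a sketch of that conversion point, with a hypothetical percent-of-reading specification (the instrument and all figures are made up):

```python
# Converting percent-of-reading specifications to absolute units
# before forming the ratio (hypothetical multimeter example).
reading = 10.0                 # volts, the point being tested
uuc_spec = 0.05                # UUC tolerance, % of reading
std_spec = 0.005               # standard's tolerance, % of reading

uuc_tol = reading * uuc_spec / 100   # 0.005 V
std_tol = reading * std_spec / 100   # 0.0005 V

print(f"TAR = {uuc_tol / std_tol:.0f}:1")   # TAR = 10:1
```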

Test Uncertainty Ratio (TUR): is the ratio of the accuracy tolerance of the unit under calibration to the uncertainty of the calibration standard used. [NCSL, page 2] The TUR must be calculated using identical parameters and units for the UUC and the calibration standard. If the accuracy tolerances are expressed as decibels, percentage or another ratio, they must be converted to absolute values of the basic measurement units.
Note: the uncertainty of a measurement standard is not necessarily the same as its accuracy specification.
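To illustrate that note, here is a sketch with hypothetical figures showing how the same standard can yield different TAR and TUR values:

```python
# Hypothetical figures: a standard whose accuracy specification is
# tighter than the uncertainty reported on its calibration certificate.
uuc_tolerance   = 0.0001     # UUC accuracy tolerance, inches
std_accuracy    = 0.000010   # standard's accuracy spec, inches
std_uncertainty = 0.000025   # standard's cal-cert uncertainty, inches

print(f"TAR = {uuc_tolerance / std_accuracy:.0f}:1")     # 10:1
print(f"TUR = {uuc_tolerance / std_uncertainty:.0f}:1")  # 4:1
```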

The references used in these definitions are ones that (IMHO) should be available in every calibration facility:
GUM = ANSI/NCSL Z540-2-1997, U.S. Guide to the Expression of Uncertainty in Measurement. Boulder, CO: NCSL International, 1997.
NCSL = NCSL Glossary of Metrology-related Terms, Second edition. Boulder, CO: NCSL International, 1999.
VIM = International Vocabulary of Basic and General Terms in Metrology; BIPM, IEC, IFCC, ISO, IUPAC, IUPAP, and OIML. Geneva: ISO, 1993.


From the definition of accuracy of a measuring instrument, it should be clear that the TAR is only an indicator of the potential quality of a measurement. However, it is in very common use, probably because it has the advantage of using only readily available information: the tolerance of the work to be done and the specifications of the measuring instrument.

“Accuracy of a measurement” is also only an indicator of quality, unless you also know the uncertainty of the measurement system. But if you know the uncertainty then you also have the information needed to calculate the TUR anyway.

I would say that the information you have appears to be from accuracy specifications, so the ratio would be called the TAR.

To get the uncertainty of your calibration process, you need to analyze the system and develop an uncertainty budget for the system and method used to calibrate the thickness gage. You would have to include the uncertainty reported by the lab that calibrates your gage blocks, the results of a randomized study done over time (like a gage R&R study) to evaluate variation due to temperature, time of day, technicians or other factors, and any other influences that may have a significant effect on the measurements. You may find that the uncertainty is less than the combined specifications (great!) or more than the specifications (opportunities for improvement!).
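As a minimal sketch of what such a budget might look like (every component value below is invented; a real budget must come from your gage block certificates, your own R&R-type study, and so on):

```python
import math

# Illustrative uncertainty budget for the thickness-gage calibration.
# All component values are hypothetical placeholders.
components = {
    "gage block cal cert (U at k=2, so /2)":     0.000004 / 2,
    "repeatability (std dev of mean, from R&R)": 0.000015,
    "temperature (rectangular, so /sqrt(3))":    0.000006 / math.sqrt(3),
    "resolution (half digit, rectangular)":      0.00005 / math.sqrt(3),
}

u_c = math.sqrt(sum(u**2 for u in components.values()))  # RSS combine
U = 2 * u_c                                              # k = 2

for name, u in components.items():
    print(f"{name:45s} {u:.7f} in")
print(f"combined u_c = {u_c:.7f} in, expanded U(k=2) = {U:.7f} in")
```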

 