4:1 Test Accuracy Ratio (TAR) or Stated Measurement Uncertainty (MU)


tmklaves

These questions pertain to comparing the standards ANSI/NCSL Z540-1-1994 and ISO 17025. First, I am confused about whether ANSI/NCSL Z540-1-1994 specifies a "Test Uncertainty Ratio" or a "Test Accuracy Ratio" of 4:1 instead of a stated measurement uncertainty. Also, does ISO 17025 make the same distinction?
 

BradM

Leader
Admin
Re: 4:1 Test Accuracy Ratio or Stated Measurement Uncertainty

Hello there! :bigwave:

You pose a good, and very common, question. If you would, take a look at this thread:

http://elsmar.com/Forums/showthread.php?t=24451

There is a link to a Transcat document in Benjamin's post that does a pretty good job clarifying the different ratios.

In short, if you are calculating uncertainties, then you would use a test uncertainty ratio. If you are using accuracies, then use an accuracy ratio.

To use a Test Uncertainty Ratio, you will first need to calculate the measurement uncertainties that go into the ratio.
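
To make that concrete, here is a rough sketch in Python. None of this comes from the standard itself; the function names and numbers are hypothetical, just for illustration:

```python
def tar(uut_tolerance, standard_accuracy):
    """Test Accuracy Ratio: UUT tolerance vs. the standard's accuracy spec."""
    return uut_tolerance / standard_accuracy

def tur(uut_tolerance, expanded_uncertainty):
    """Test Uncertainty Ratio: UUT tolerance vs. the expanded uncertainty
    (k=2) of the whole calibration process (the standard's spec plus
    resolution, repeatability, environment, and so on)."""
    return uut_tolerance / expanded_uncertainty

# Hypothetical 10 V test point: UUT tolerance +/-0.10 V, standard's
# accuracy spec +/-0.02 V, calculated expanded uncertainty +/-0.03 V.
print(tar(0.10, 0.02))  # 5.0   -> TAR looks better than 4:1
print(tur(0.10, 0.03))  # ~3.33 -> TUR falls short of 4:1
```

Same test point, two different ratios: the TAR can look comfortable while the TUR falls short, because the TUR folds in everything beyond the standard's spec.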

Not sure if any of that made sense. :)
 

tmklaves

Re: 4:1 Test Accuracy Ratio or Stated Measurement Uncertainty

Thanks for the link. The TransCat document does a good job explaining the difference between TUR and TAR and how to properly use the TUR. However, I still question whether ANSI/NCSL Z540-1-1994 is referring to TAR or TUR. The following quote is from paragraph 10.2.b of ANSI/NCSL Z540-1-1994 regarding calibration requirements:

"The laboratory shall ensure that teh calibration uncertainties are sifficiently small so that the adequecy of the measurement is not affect. Well defined and documented measurement assurance techniques or uncertainty analysis may be used to verify the adequacy of the measurement process. If such techniques or analyses are not used, then the collective uncertainty of the measurement standard shall not exceed 25% of the accepted tolerance"

The first option calls for a workup of the uncertainty budget for the reference standard; the second option says that if this is not available, the collective uncertainty should be no more than 25% of the tolerance. However, to know the collective uncertainty, you must also calculate an uncertainty budget. I don't see the difference.

Tom K
 

howste

Thaumaturge
Trusted Information Resource
Re: 4:1 Test Accuracy Ratio or Stated Measurement Uncertainty

Thanks for the link. The TransCat document does a good job explaining the difference between TUR and TAR and how to properly use the TUR. However, I still question whether ANSI/NCSL Z540-1-1994 is referring to TAR or TUR. The following quote is from paragraph 10.2.b of ANSI/NCSL Z540-1-1994 regarding calibration requirements:

"The laboratory shall ensure that the calibration uncertainties are sufficiently small so that the adequacy of the measurement is not affected. Well defined and documented measurement assurance techniques or uncertainty analysis may be used to verify the adequacy of the measurement process. If such techniques or analyses are not used, then the collective uncertainty of the measurement standard shall not exceed 25% of the accepted tolerance."

The first option calls for a workup of the uncertainty budget for the reference standard; the second option says that if this is not available, the collective uncertainty should be no more than 25% of the tolerance. However, to know the collective uncertainty, you must also calculate an uncertainty budget. I don't see the difference.

Tom K

Does anyone have an answer for Tom's question? I find myself coming to the same conclusion he did...
 

BradM

Leader
Admin
Re: 4:1 Test Accuracy Ratio or Stated Measurement Uncertainty

Calculating uncertainty is the way to go. If you calculate the collective uncertainty of your measurement system, it should be no more than 25% of the Unit Under Test accuracy.

If you have not calculated the collective uncertainty, then the accuracy specification of your standard has to be no more than 25% of the Unit Under Test accuracy.
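
Or, as a quick sketch (the 25% limit is from the standard; the function name and numbers here are mine, purely hypothetical):

```python
def meets_4_to_1(standard_spec, uut_tolerance):
    """True if the standard's spec is within 25% of the UUT tolerance (4:1)."""
    return standard_spec <= 0.25 * uut_tolerance

# Hypothetical UUT tolerance of +/-1.0 unit.
print(meets_4_to_1(0.25, 1.0))  # True  -> exactly 4:1
print(meets_4_to_1(0.30, 1.0))  # False -> only about 3.3:1
```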

Did that answer the question?:)
 

howste

Thaumaturge
Trusted Information Resource
Re: 4:1 Test Accuracy Ratio or Stated Measurement Uncertainty

Let's assume that the collective uncertainty of the measurement system hasn't been calculated. If a gage block or mass standard (weight) is used for the calibration, what would be the measurement accuracy of the standard? There are no divisions or increments to be read on the standard, so would the "measurement accuracy" be the accuracy of the instrument used to calibrate the standard?
 

George Weiss

Re: 4:1 Test Accuracy Ratio or Stated Measurement Uncertainty

I still remember calibrating to Z540.1:1994 back in 1995-2000.
I had a 1% spec on a handheld meter and a calibrator with a spec of better than 0.25%, which was a TAR of better than 4:1, so I just went forward with the calibration. I was audited by a Navy auditor once, and that is all he wanted to see: a 4:1 TAR between the standard and the DUT. Now we have uncertainties, and divisors, and multipliers. OMG... guard bands...
Even the NEW Z540.3:2006 allows a 4:1 TAR/TUR, letting you forget false-accept risk analysis for that particular test. :bigwave: :agree1:
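
For anyone curious what the guard-band fuss boils down to, one common method (pulling the acceptance limits in by the expanded uncertainty, along the lines of ILAC-G8; the numbers below are hypothetical) looks like this:

```python
def guard_banded_limits(lower, upper, expanded_u):
    """Acceptance limits pulled in by the expanded uncertainty U (k=2)."""
    return lower + expanded_u, upper - expanded_u

lo, hi = guard_banded_limits(-1.0, 1.0, 0.2)  # tolerance +/-1.0, U = 0.2
print(lo, hi)  # -0.8 0.8 -> pass only readings inside the guarded band
```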
 

BradM

Leader
Admin
Re: 4:1 Test Accuracy Ratio or Stated Measurement Uncertainty

Let's assume that the collective uncertainty of the measurement system hasn't been calculated. If a gage block or mass standard (weight) is used for the calibration, what would be the measurement accuracy of the standard? There are no divisions or increments to be read on the standard, so would the "measurement accuracy" be the accuracy of the instrument used to calibrate the standard?

Good question. :agree1: The two standards that you mentioned (gage blocks and mass standards) are calibrated to industry-standard specifications: classes and grades.

Mass standard classifications can be found at Rice Lake and Troemner. Gauge block grades are covered by ASME B89.1.9-2002, for one.

I know you know some of this... but just for explanation....

The classifications of each go along with their construction. You don't buy a set of inexpensive Class 3 weights and then calibrate them to an Ultra Class tolerance. :tg: Same with gauge blocks. So you send them to a competent vendor and have both calibrated, with actual data provided.

Depending on your application, the blocks and weights should be four times more accurate than what you are verifying.

***

Think of it like this... you have a standard thermometer (NIST traceable, of course) and a well-constructed ice bath, and you're calibrating some thermometers. The ice bath is 0.01 C. It does not have to indicate anything, whereas the standard thermometer will. Same with blocks and weights. The gauge block (if properly certified) is 1 inch. Period. So the device being calibrated should read 1.000 inch, or whatever the nominal is.
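
If you want that in numbers (the grade tolerance below is hypothetical, just for illustration):

```python
# An artifact standard (gage block, mass, ice bath) contributes its
# certified tolerance, not a reading. Check it at 4:1 against the
# device you're calibrating. Hypothetical numbers.
gage_block_tol = 0.000004  # 1 in block, hypothetical grade tolerance (inches)
caliper_tol = 0.001        # UUT: caliper accuracy at 1 in (inches)

ratio = caliper_tol / gage_block_tol
print(f"{ratio:.0f}:1")  # 250:1 -> the block easily supports a 4:1 check
```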

A lot of babbling there. :D Not sure if that helped or not.
 