Traceability and uncertainty

Hershal

Metrologist-Auditor
Trusted Information Resource
Quote from previous thread: I am puzzled by your reply. An external calibration lab, certified to ISO 17025, would be required to calibrate that precisely.

But many internal calibration departments use a 4 decimal place mic and calibrate to 3 decimal places. They typically use pins to measure diameters with rather generous tolerances. Is that not adequate and appropriate for their needs?

If not, there are thousands of certified companies who would have nonconforming calibrations.


Hjilling,

This is a professional forum, not a place for contests, and I am not suggesting that is your intent, only that I don't want this to go that route.....

With respect to your reply to my comment in the other thread.....

I do not argue that there are quite a few organizations that do internal cal well.....but there is a reason that Metrology is known as a science.....

Checking calipers with a gage block to see if they are close takes little training, but calibration involves much more. For example, traceability.

Traceability has two components, and both must be present or there is no traceability: (1) an unbroken chain of comparisons to national or international standards, and (2) uncertainties at each step.

The unbroken chain is the first part. The so-called "NIST numbers" are not valid for traceability according to NIST. Therefore, the specific certificate number of the specific calibration of the item used as a measurement standard is the traceability path. If that number is not available, the link is broken and there is no traceability.

Uncertainties must be calculated for each calibration unless the 4:1 rule can be proven. Even so, the lab must calculate uncertainty at some point to be able to prove the 4:1. Without uncertainties, even for internal cal, there is no traceability.

For uncertainties, take the example you give of a caliper used to calibrate pins and look at the uncertainty contributions. First are the Type A contributions: the readings themselves. Depending on the item involved, the number of readings may vary from 3 to 30, usually averaging around 10. If there are 10 pins, that is about 100 readings. Each pin will have its own Type A calculation based on the standard deviation of its readings.
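As a rough sketch of the Type A part (the readings below are hypothetical, and treating the standard uncertainty as the standard deviation of the mean is one common convention, not necessarily the only one a lab might use):

```python
import math
import statistics

# Ten hypothetical repeat readings (inches) of a single gage pin
readings = [0.2501, 0.2502, 0.2500, 0.2501, 0.2503,
            0.2501, 0.2502, 0.2501, 0.2500, 0.2502]

mean = statistics.fmean(readings)
s = statistics.stdev(readings)            # sample standard deviation
u_type_a = s / math.sqrt(len(readings))   # standard uncertainty of the mean

print(f"mean = {mean:.5f} in, s = {s:.6f} in, u_A = {u_type_a:.6f} in")
```

Each pin in the set would get its own calculation like this, which is where the "100 readings" for 10 pins comes from.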

Type B contributions include, but are not limited to: the uncertainty from the calipers' last calibration; the difference in temperature between the calipers' calibration and the pins' calibration (applied to the caliper); parallax error for dial calipers, or resolution error for digital calipers; thermal expansion of the pin, taking the temperature at calibration minus 20 °C; any residue that may be on the pin, or an assigned amount for cleaning to account for wear; and the uncertainty of the calculations themselves. Resolution error for a digital caliper is one half the value of the least significant digit. Previous calibrations follow a normal distribution and are divided by 2 before dropping into the formula. Other contributions are (typically) rectangular distributions and are divided by the square root of 3.
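A minimal sketch of those divisors, with all values hypothetical (the assumption that the previous calibration's expanded uncertainty was reported at k = 2, which is why it is divided by 2, is mine):

```python
import math

# Hypothetical Type B contributions for a digital caliper used on pins (inches)
U_prev_cal = 0.0002     # expanded uncertainty from the caliper's last cal (k = 2)
lsd = 0.0005            # least significant digit of the digital caliper
temp_effect = 0.00005   # estimated thermal effect, limit of a rectangular dist.

u_prev = U_prev_cal / 2               # normal distribution: divide by 2
u_res = (lsd / 2) / math.sqrt(3)      # half the LSD, treated as rectangular
u_temp = temp_effect / math.sqrt(3)   # rectangular distribution: divide by sqrt(3)

print(f"u_prev = {u_prev:.6f}, u_res = {u_res:.6f}, u_temp = {u_temp:.6f}")
```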

Type A and Type B are joined in a root-sum-square approach and reported at 95% confidence. This is done for each pin; if a total number is sought, all of the Type A uncertainties are included in the calculation.
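Putting the pieces together, the root-sum-square combination and a 95% expansion can be sketched as follows (component values hypothetical, and the coverage factor k = 2 for roughly 95% confidence is my assumption):

```python
import math

# Hypothetical standard uncertainties (inches): one Type A term plus Type B terms
u_components = [3.0e-5,    # Type A: std. uncertainty of the mean of the readings
                1.0e-4,    # Type B: caliper's previous calibration
                1.44e-4,   # Type B: digital caliper resolution
                2.9e-5]    # Type B: thermal expansion

u_combined = math.sqrt(sum(u**2 for u in u_components))  # root-sum-square
U_expanded = 2 * u_combined                              # k = 2 for ~95% confidence

print(f"u_c = {u_combined:.6f} in, U (95%) = {U_expanded:.6f} in")
```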

For using the 4:1, there is a trap.....if the 4:1 Test Accuracy Ratio (TAR) is used per ANSI/NCSL Z540-1, then the automatic uncertainty contribution from the caliper's last cal becomes 25% of the rated accuracy of the caliper at any given reading, in addition to the other Type B contributions.
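In code form, that automatic contribution is just a quarter of the rated accuracy (the rated accuracy value below is hypothetical):

```python
# Hypothetical: caliper rated accuracy of +/-0.001" at the reading in question
rated_accuracy = 0.001

# Under a 4:1 TAR per ANSI/NCSL Z540-1, the automatic contribution from the
# caliper's last cal is taken as 25% of its rated accuracy, on top of the
# other Type B contributions.
u_tar_contribution = 0.25 * rated_accuracy

print(f"TAR-based contribution = {u_tar_contribution:.6f} in")
```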

This is barely scratching the surface for calibration. The hard stuff comes later. This just gets one to traceable calibration.....nothing more.

Hope this long-winded explanation helps you understand my comment in the TS thread (now moved to TS).

Regards,

Hershal
 
Hershal,

Rest assured, my intent is not to get into contests. I will leave that for others on this forum.

However, my questions are sincerely based on the question of practicality. Metrology is certainly a science, but the ISO and even TS standards do not appear to require Metrology. In fact, the 2000 revision changed the term calibration to calibration/verification. I assume this was to end the endless arguing as to whether verification constituted calibration.

While I don't dispute the accuracy of your explanation, I simply don't see it required by the standard, unless perhaps the measuring accuracy requirements are very high. It appears the standard merely wants verification that the measurement system is capable of the necessary accuracy.

So, if pins are used to assess a tolerance of, say, +/-.002, would not a 4 decimal mic or caliper far exceed the 4:1 rule and approach 10:1? Why would it need all the extra calculations if the need is so basic? (Please remember, this is limited to an internal lab, not externals, which supposedly follow the much more stringent ISO 17025 requirements.)

Thank you,
 
hjilling said:
So, if pins are used to assess a tolerance of, say, +/-.002, would not a 4 decimal mic or caliper far exceed the 4:1 rule and approach 10:1? Why would it need all the extra calculations if the need is so basic? (Please remember, this is limited to an internal lab, not externals, which supposedly follow the much more stringent ISO 17025 requirements.)

Thank you,

One thing to take into account is that the gage used to evaluate a characteristic of a part needs to be accurate enough to properly discriminate. I frequently use a 10:1 rule of thumb which says that if your total tolerance is .004", you need a means of measurement that can accurately represent .0004" incremental steps. This is separate from any calibration requirement for the gages themselves. Standard class ZZ gage pins run in increments of .0005" and have an accuracy of +.0002/-0" for plus pins and +0/-.0002" for minus pins. (As a preference, I would measure a +/- .002" tolerance characteristic with a higher class of gage pin with finer increments and accuracy - Deltronic pins in steps of .0002" at worst)
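That 10:1 discrimination rule of thumb can be sketched as a tiny helper (the function name and default ratio are my own, for illustration):

```python
def required_increment(total_tolerance: float, ratio: int = 10) -> float:
    """Smallest increment a gage must accurately resolve, per an N:1
    discrimination rule of thumb (10:1 by default)."""
    return total_tolerance / ratio

# Example from the post: a total tolerance of .004" needs .0004" steps
print(required_increment(0.004))
```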

To maintain the manufacturer's accuracy of the pins, you need a gage that is accurately able to resolve a 4:1 ratio or better against the pins themselves, not against the part characteristic being measured by them. At a minimum, for calibration of a .0002" tolerance gage, you'd want a device capable of accurately measuring increments of .00005" (a 4:1 ratio).
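The same 4:1 check between a gage's tolerance and its calibration master can be written as a small predicate (names are my own, for illustration):

```python
def meets_ratio(gage_tolerance: float, master_accuracy: float,
                ratio: float = 4.0) -> bool:
    """True if the master's accuracy is fine enough to hold an N:1
    ratio (4:1 by default) against the gage's tolerance."""
    return gage_tolerance / master_accuracy >= ratio

# Example from the post: a .0002" tolerance pin needs a master good to .00005"
print(meets_ratio(0.0002, 0.00005))   # True
```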

Now, you could downgrade the accuracy of the pins and record that they are only accurate to +x/-x, and therefore use a calibration master that is not as capable, but then that downgraded accuracy would affect your ability to accurately measure any part characteristic tested by them. E.g., a Deltronic pin with a mfg accuracy of +.00004/-0", downgraded to an accuracy of +/-.0001", could no longer effectively measure features with a total tolerance down to .0004"; the best it should be used for would be features with a total tolerance of .001".
 
Everything's relative

I’ve worked in businesses where the general tolerances were relatively liberal--sheet metal products where nothing got much tighter than ±1/16"--and in businesses where they were very small--electroplating of precious metals on electronic contacts, where deposits are measured in millionths. The degree to which uncertainty plays a part in decision-making about product measurement is very different in those two areas. In the former it’s negligible insofar as measurements on the production floor are concerned; in the latter, when there are significant disputes about the sixth decimal place, it’s critical.



My point is that the divergent opinions in this thread are not the result of someone being wrong and someone else being right. Those types of disagreements are usually easy to settle, at least to observers. The confusion happens when both parties are right—at least partially—but their perspectives are different. Einstein famously pointed out that time and space are not separate things, but inextricably intertwined, and dependent upon the perspective of the observer. Thus it could be possible for two different people to observe the same phenomenon and report contradictory facts about it (thus proving that the phrase “contradictory facts” isn’t an oxymoron). Something similar is happening here. Hershal, a learned expert in metrology, stresses the need for the traceability/uncertainty relationship in assessing the calibration status of a device, and hjilling, also an expert in his own field, opines that there are situations where things might be taken too far, and perhaps we shouldn’t worry about facts that aren’t relevant in the context at hand.



The fact is that many, if not most of us don’t have to worry too much about uncertainty budgets because if we’re conscientious about calibration and maintenance of gages, the uncertainty will be too small to hurt us. At the same time though, someone has to be looking out for it as chains of calibrations progress, or most of us wouldn’t have the luxury of not having to worry about it.
 