Quote from previous thread: I am puzzled by your reply. An external calibration lab, certified to ISO 17025, would be required to calibrate that precisely.
But many internal calibration departments use a 4 decimal place mic. and calibrate to 3 decimal places. They typically use pins to measure diameters with rather generous tolerances. Is that not adequate and appropriate for their needs?
If not, there are thousands of certified companies who would have nonconforming calibrations.
Hjilling,
This is a professional forum, not a place for contests and I am not suggesting that is your intent, only that I don't want this to go that route.....
With respect to your reply to my comment in the other thread.....
I do not argue that there are quite a few organizations that do internal cal well.....but there is a reason that Metrology is known as a science.....
Checking calipers with a gage block to see if they are close takes little training, but calibration involves much more. For example, traceability.
Traceability has two components, and both must be present or there is no traceability. The two components are (1) an unbroken chain of comparisons to national or international standards, and (2) uncertainties at each step.

The unbroken chain is the first part. The so-called "NIST numbers" are not valid for traceability according to NIST. Therefore, the specific certificate number of the specific calibration of the item used as a measurement standard is the traceability path. If that number is not available, the chain is broken and there is no traceability.

Uncertainties must be calculated for each calibration unless the 4:1 rule can be proven. Even so, the lab must calculate uncertainty at some point to be able to prove the 4:1. Without uncertainties, even for internal cal, there is no traceability.
For uncertainties, take your example of the caliper used to calibrate pins and look at the uncertainty contributions. First are the Type A contributions: the readings themselves. Depending on the item involved, the number of readings may vary from 3 to 30, usually averaging around 10. With 10 pins, that is about 100 readings. Each pin gets its own Type A calculation based on the standard deviation of its readings.
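To make the Type A step concrete, here is a minimal sketch in Python with hypothetical readings (the pin values and the reading count are made up for illustration, not from the post):

```python
import statistics

def type_a_uncertainty(readings):
    """Type A standard uncertainty: sample standard deviation of the
    repeated readings, divided by sqrt(n) to give the uncertainty of
    the mean."""
    n = len(readings)
    s = statistics.stdev(readings)   # sample standard deviation
    return s / n ** 0.5

# Hypothetical set of 10 caliper readings of one pin, in mm
readings = [5.001, 5.002, 5.000, 5.001, 5.003,
            5.001, 5.002, 5.000, 5.001, 5.002]
u_a = type_a_uncertainty(readings)   # Type A contribution for this pin
```

Each pin would get its own `readings` list and its own `u_a`, which is why 10 pins at roughly 10 readings each means about 100 readings in total.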
Type B contributions include, but are not limited to:
- the uncertainty from the calipers' last calibration,
- the temperature difference between the calipers' calibration and the pins' calibration (applied to the caliper),
- parallax error for dial calipers, or resolution error for digital calipers (one half the value of the least significant digit),
- thermal expansion of the pin, based on the difference between the temperature at calibration and 20 C,
- any residue that may be on the pin, or an assigned amount for cleaning to account for wear, and
- the uncertainty of the calculations themselves.

Values from previous calibrations are treated as normally distributed and are divided by 2 before dropping into the formula. Most other contributions are treated as rectangular distributions (typically) and are divided by the square root of 3.
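The divisors at the end of that list can be sketched as follows; the numeric values are hypothetical examples, not figures from the post:

```python
import math

def u_from_normal(expanded_u, k=2.0):
    """Standard uncertainty from a value reported on a cert at a
    coverage factor (normal distribution): divide by k, here 2."""
    return expanded_u / k

def u_from_rectangular(half_width):
    """Standard uncertainty from a rectangular (uniform) distribution:
    divide the half-width by sqrt(3)."""
    return half_width / math.sqrt(3)

# Hypothetical contributions, in mm
u_prev_cal   = u_from_normal(0.010)          # calipers' last cal, reported at k=2
u_resolution = u_from_rectangular(0.01 / 2)  # digital: half the last digit
u_thermal    = u_from_rectangular(0.002)     # temperature-difference estimate
```

Each Type B source gets reduced to a standard uncertainty this way before anything is combined.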
Type A and Type B are joined in a root-sum-square approach and reported at 95% confidence. This is for each pin, or if a total number is sought, all the Type A uncertainties are included in the calculations.
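The root-sum-square combination and 95% reporting can be sketched like this, assuming the common coverage factor k = 2 for roughly 95% confidence (the input values are hypothetical):

```python
import math

def combined_expanded_uncertainty(standard_uncertainties, k=2.0):
    """Root-sum-square the standard uncertainties (Type A and Type B
    together), then expand by k for ~95% confidence."""
    u_c = math.sqrt(sum(u ** 2 for u in standard_uncertainties))
    return k * u_c

# Hypothetical standard uncertainties for one pin, in mm:
# Type A, previous cal, resolution, thermal
U = combined_expanded_uncertainty([0.0003, 0.005, 0.0029, 0.0012])
```

For a per-pin result this is done with that pin's Type A value; for a total, all the Type A contributions go into the same sum.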
For using the 4:1, there is a trap.....if the 4:1 Test Accuracy Ratio (TAR) is used per ANSI/NCSL Z540-1, then the automatic uncertainty contribution from the calipers' last cal becomes 25% of the rated accuracy of the calipers at any given reading, in addition to the other Type B contributions.
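That 25% contribution is simple arithmetic, sketched here with a hypothetical rated accuracy:

```python
# Under a 4:1 TAR per ANSI/NCSL Z540-1, the contribution from the
# calipers' last cal is taken as 25% of the rated accuracy at the
# reading. The 0.02 mm rated accuracy below is a made-up example.
rated_accuracy = 0.02               # mm, calipers' rated accuracy at this reading
u_tar = 0.25 * rated_accuracy       # contribution before the other Type B terms
```

This lands in the budget on top of, not instead of, the other Type B contributions, which is exactly the trap.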
This is barely scratching the surface for calibration. The hard stuff comes later. This just gets one to traceable calibration.....nothing more.
Hope this long-winded explanation helps you understand my comment in the TS thread (now moved to TS).
Regards,
Hershal