Methods of Reviewing Calibration Intervals


rahayu

methods of reviewing confirmation intervals

Hi!
We've been using a OD gauge (75mm) in our lab for our daily OD meter daily verification. We've sent the gauge for calibration once a year.Below are the list of correction factor and measurement uncertainty (MU)written in the calibration certificate

Year    Correction factor (mm)    MU
1998     0                        ±4 µm
1999     0.002                    ±6 µm
2000    -0.003                    ±0.007 mm
2001    -0.003                    ±0.002 mm

My question is: I would like to analyse whether the interval needs to be extended or reduced. What methods could be used?

In ISO 10012-1:1994, clause A.3, Methods of reviewing confirmation intervals, method 1 (A.3.1) is the automatic or "staircase" adjustment method. Under this method, "the subsequent intervals is extended if it is found to be within tolerance...". What does "tolerance" mean here?

Thank you.
 

Graeme

Be aware of the differences ...

Hello,

Since it has been a while and nobody else has posted a reply, I will have a go at it. I must say at the beginning, though, that dimensional measurement is not my main area of expertise. (I am an electronics person. :) )

Calibration interval analysis is a difficult area, especially when you are looking at a very small quantity of items - one, in your case. You can get a lot more information from NCSL Recommended Practice RP-1, Establishment and Adjustment of Calibration Intervals. You can purchase a copy from NCSL International, www.ncsli.org.

In your example, the tolerance that is referred to is the performance specification of the item being calibrated - the 75mm OD gage.
  • This is usually the manufacturer's published specification, if any. (I am not sure how the performance of these gages is specified by their manufacturers.)
  • It could also be a usability requirement that you set based on your own measurement needs. For example, you might decide the gage is no longer usable if the correction is more than some specified value.
In the method you describe, the calibration interval would be increased if the reported values are within those specifications.
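
If it helps to see the rule written out, here is a minimal sketch in Python. The ±0.010 mm usability tolerance and the 25% adjustment factors are numbers I invented for illustration; they do not come from ISO 10012 or from your certificates, and your corrections are assumed to be in mm.

```python
# Staircase ("automatic") interval adjustment - illustrative sketch only.
# The tolerance and adjustment factors are assumed values, not from ISO 10012.

TOLERANCE_MM = 0.010      # assumed usability limit on the correction
EXTEND_FACTOR = 1.25      # assumed: lengthen the interval by 25% if in tolerance
REDUCE_FACTOR = 0.75      # assumed: shorten the interval by 25% if out of tolerance

def next_interval(current_interval_months, correction_mm):
    """Return the next calibration interval under the staircase rule."""
    if abs(correction_mm) <= TOLERANCE_MM:
        return current_interval_months * EXTEND_FACTOR   # in tolerance -> extend
    return current_interval_months * REDUCE_FACTOR       # out of tolerance -> reduce

# Applied to the corrections from your table (assumed to be in mm):
interval = 12.0  # months
for year, correction in [(1998, 0.0), (1999, 0.002), (2000, -0.003), (2001, -0.003)]:
    interval = next_interval(interval, correction)
    print(f"{year}: correction {correction:+.3f} mm -> next interval {interval:.1f} months")
```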

This is about the simplest method to use, but it is in many ways the least useful. RP-1 notes several problems with this method.
  • This method makes adjustments based on what are essentially random results. Deming's funnel experiment is often used to teach the futility of adjusting a process in response to random variation.
  • This method cannot account for a target reliability for the equipment. For instance, there is no way to set and achieve a minimum reliability goal of (for example) 95% probability of being in-tolerance at the end of the period.
  • This method does not settle to a stable value for the calibration interval. If it accidentally arrives at a "correct" interval, the result of the next calibration will inevitably change it. Even if you attempt to compute a mean from the interval changes, the time required to reach a stable value often exceeds the useful life of the gage.
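
You can see that last point for yourself by simulating the rule with random in-tolerance/out-of-tolerance results. The 90% in-tolerance probability and the adjustment factors below are assumptions made purely for illustration:

```python
import random

# Simulate the staircase rule driven by random calibration outcomes.
# The in-tolerance probability and adjustment factors are assumed for illustration.
random.seed(1)

P_IN_TOLERANCE = 0.90    # assumed chance that any one calibration is in tolerance
interval = 12.0          # months
history = []

for _ in range(40):      # 40 simulated calibration cycles
    in_tolerance = random.random() < P_IN_TOLERANCE
    interval = interval * 1.25 if in_tolerance else interval * 0.75
    history.append(round(interval, 1))

# The interval changes after almost every cycle and never settles on one value.
print(history)
```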

Since your gage has only a single measured parameter, I would suggest using Method 2, the next section from the one you cited. Plot the points on a run chart, or on a process description ("control") chart for individual variables. You will be able to see any long-term trends, and the overall scatter of the points. Once you have "enough" points, a regression analysis will help you predict the future behaviour. If you also plot the calibration uncertainty as error bars, you will see how that relates to the reported value. Note that in all but the last calibration, the uncertainty of the measurement has been larger than the reported correction value, assuming I am interpreting your table correctly.
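
As a rough illustration of that approach, something like the following would plot your four corrections with the MU as error bars and fit a simple trend line. The corrections are assumed to be in mm, numpy and matplotlib are assumed to be available, and with only four points the regression is indicative, not conclusive.

```python
import numpy as np
import matplotlib.pyplot as plt

# Calibration history from the certificates (corrections assumed to be in mm).
years = np.array([1998, 1999, 2000, 2001])
correction_mm = np.array([0.0, 0.002, -0.003, -0.003])
mu_mm = np.array([0.004, 0.006, 0.007, 0.002])   # uncertainties, converted to mm

# Run chart of the corrections, with the calibration uncertainty as error bars.
plt.errorbar(years, correction_mm, yerr=mu_mm, fmt='o', capsize=4,
             label='correction ± MU')

# Simple least-squares trend line to indicate any long-term drift.
slope, intercept = np.polyfit(years, correction_mm, 1)
trend_years = np.array([1998, 2003])
plt.plot(trend_years, slope * trend_years + intercept, '--', label='linear trend')

plt.axhline(0, color='grey', linewidth=0.5)
plt.xlabel('Calibration year')
plt.ylabel('Correction (mm)')
plt.legend()
plt.show()
```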

There is nothing "wrong" with keeping the same calibration interval for a gage like this over its lifetime, especially if you have only a small number of them. Yes, you can save money by calibrating less frequently. However, you also have to evaluate the increased risk of the tool going out of tolerance before the next calibration. Other methods of calculating calibration intervals can account for this risk, but require a large population of identical tools, or a long time period with fewer tools, to accumulate the data for a statistically valid analysis.
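
Just to show the kind of calculation those other methods rest on, here is a hedged sketch assuming an exponential in-tolerance reliability model and an invented pool of identical gages. Neither the numbers nor the model come from your data; RP-1 describes the real procedures in detail.

```python
import math

# Reliability-target interval sketch under an assumed exponential model:
#   R(t) = exp(-lambda * t), where R(t) is the probability of still being
#   in tolerance after time t. The pooled data below are invented.

observed_interval_months = 12.0
units_calibrated = 50        # assumed pool of identical gages
units_in_tolerance = 46      # assumed number found in tolerance at recalibration

observed_reliability = units_in_tolerance / units_calibrated          # 0.92
failure_rate = -math.log(observed_reliability) / observed_interval_months

target_reliability = 0.95    # e.g. 95% probability of being in tolerance at period end
recommended_interval = -math.log(target_reliability) / failure_rate

print(f"Observed end-of-period reliability: {observed_reliability:.2f}")
print(f"Interval meeting a {target_reliability:.0%} target: {recommended_interval:.1f} months")
```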
 

Al Dyer

This will sound negative, but it reads like you either have a bad calibration house or someone is sending different gages. There should not be that much variation in a gage over 4 years where it gets better, worse, better, worse.

As a line of thought, I would think that any gage would get worse with time and usage.

I assume this is an ID/OD attribute gage?
 