Date: Mon, 11 Oct 1999 11:11:30 -0700
From: "Dr. Howard Castrup"
To: iso25
Subject: Definition of Calibration
Karl Haynes added a message to this thread in which he mentioned that interval analysis programs may be sensitive to whether parameter adjustments have been made.
This is true. It's also interesting that support for the renew-if-failed (adjust only if out-of-tolerance) paradigm originally came from a study motivated by just the sort of thing Karl described. In the '70s, a report was written arguing that parameters should not be adjusted during calibration unless they were found out-of-tolerance. That conclusion was foregone: it was the one the authors needed to reach, because the organization sponsoring the report employed an interval analysis methodology that didn't work unless the renew-if-failed policy was in effect.
Of course, if you issue a recommendation for doing nothing (not adjusting), management is going to love it. So, from the late '70s through the '80s, support for the renew-if-failed policy was considerable.
Meanwhile, also in the late '70s, an interval analysis methodology was developed that worked best if the policy was renew-always (adjust at every calibration). This methodology is far more powerful and versatile than the other methodology and can be used to fine-tune intervals to meet reliability targets. It can also be tailored to accommodate a renew-as-needed policy (adjust if found outside "adjustment limits").
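To give a feel for how a reliability-target approach works, here is a minimal sketch in Python. It assumes a simple exponential reliability model R(t) = exp(-lambda*t) and invented pass/fail history; the function names and data are mine, and this is not the RP-1 methodology itself, only the general shape of fitting uncertainty growth and solving for the interval that meets a target.

```python
import math

# Invented calibration history: (interval in days, found in-tolerance?)
history = [(90, True), (90, True), (180, True), (180, True),
           (180, False), (365, True), (365, False), (365, False)]

def fit_failure_rate(history):
    """Fit lambda in the exponential reliability model R(t) = exp(-lambda*t),
    treating each calibration as an independent pass/fail trial at its
    interval. A crude grid search over the log-likelihood; a real
    implementation would use a proper optimizer."""
    def log_lik(lam):
        total = 0.0
        for t, passed in history:
            r = math.exp(-lam * t)
            total += math.log(r if passed else 1.0 - r)
        return total
    return max((x * 1e-5 for x in range(1, 1000)), key=log_lik)

def interval_for_target(lam, target):
    """Longest interval whose predicted end-of-period in-tolerance
    probability still meets the reliability target: exp(-lam*t) = target."""
    return -math.log(target) / lam

lam = fit_failure_rate(history)
print(f"fitted failure rate: {lam:.5f} per day")
print(f"interval for 95% target: {interval_for_target(lam, 0.95):.0f} days")
```

The point is only the shape of the computation: estimate how fast the in-tolerance probability decays, then back out the longest interval that keeps end-of-period reliability at the target.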
Both methodologies are documented in NCSL RP-1. The first methodology is called Method S1, and the second is called Method S2. Another methodology, referred to as Method S3, can also be found in RP-1. With Method S3, it doesn't matter what the policy is. In addition, Method S3 has all the power and versatility of Method S2.
The Calibration Interval Subcommittee of the NCSL Metrology Practices Committee is in the process of refining Method S3. All the math has been done; what remains is to implement the method in a working software algorithm. That effort is in progress.
In the meantime, users of the program 'SPCView' have discovered that the decision of whether or not to adjust during calibration has a profound impact on the calibration interval. We're also finding out that adjusting up or down to compensate for a "known" drift does not always yield the expected result. This is because, while the drift for a given parameter may be downward, for instance, the slope for the uncertainty in the drift may be positive. This sometimes means that adjusting the value of a negative drift parameter upward actually causes the interval to be shortened rather than lengthened.
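To see how that can happen, here is a toy model (all numbers invented) in which the parameter value drifts linearly downward while its uncertainty grows linearly. The interval is taken as the first day the k-sigma containment band escapes the tolerance band; this is not SPCView's actual algorithm, just an illustration of the geometry.

```python
def interval(v0, drift, u0, u_slope, k=2.0, tol=10.0, horizon=1000):
    """First day the k-sigma band around the predicted value
    v(t) = v0 + drift*t, with uncertainty u(t) = u0 + u_slope*t,
    escapes the tolerance band [-tol, +tol]."""
    for t in range(horizon):
        v = v0 + drift * t
        u = k * (u0 + u_slope * t)
        if v + u > tol or v - u < -tol:
            return t
    return horizon

# Downward drift, but uncertainty grows with time (positive slope).
drift, u0, u_slope = -0.02, 1.0, 0.03

print("left at nominal:   ", interval(0.0, drift, u0, u_slope), "days")  # ~100
print("adjusted up by +5: ", interval(5.0, drift, u0, u_slope), "days")  # ~75
```

Adjusting upward buys headroom against the lower limit, but because the uncertainty band widens faster than the value drifts down, the upper limit is reached sooner, and the interval shrinks.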
Incidentally, someone mentioned that, from the standpoint of interval analysis, simply calibrating a parameter modifies its uncertainty, and this constitutes a kind of 'renewal' -- regardless of whether a physical adjustment has taken place. This is true. Consequently, vendors who feel compelled to adjust in-tolerance parameters need not be concerned about upsetting interval analyses that assume a renew-always policy.
But, back to the issue of this thread. Since I deal in uncertainty analysis, risk analysis, and uncertainty growth control, I cannot conceive of paying for a calibration without getting information in return. To me, a commercial cal activity does not provide a viable product if all it offers is my equipment returned with an updated cal sticker. In that case, I have NO useful information for adjusting calibration intervals, for evaluating the quality of measurements made with the calibrated items, or for determining whether the commercial lab actually did anything.
To be truly useful, the result of a calibration should include (1) a re-calibrated item, (2) the as-found and as-left values for each of the parameters calibrated, (3) the dates of service, and (4) estimates of the measurement uncertainties for each calibration.
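For concreteness, a minimal record layout capturing items (2) through (4) might look like the following Python sketch (the field names are my own invention, not from any standard):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ParameterResult:
    name: str           # e.g., "10 V DC output"
    as_found: float     # value measured before any adjustment
    as_left: float      # value after adjustment (equals as_found if none)
    uncertainty: float  # expanded uncertainty of the calibration measurement
    units: str = ""

@dataclass
class CalibrationRecord:
    item_id: str
    service_date: date
    parameters: list[ParameterResult] = field(default_factory=list)
```

With as-found values, as-left values, dates, and uncertainties captured per parameter, such a record supports interval analysis, evaluation of measurement quality, and an audit of what the lab actually did.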
Howard Castrup
President, Integrated Sciences Group
Chairman, NCSL Metrology Practices Committee