Standards for determining calibration frequencies for gages


MK

Calibration Frequency

Are there set standards for determining calibration frequencies for gages?
 

CarolX

Trusted Information Resource
MK,

Nope. It is up to you to determine how often.

We calibrate standard measuring equipment (micrometers, calipers, etc.) monthly. Everything else is yearly.

Regards,
CarolX
 

Jerry Eldred

Forum Moderator
Super Moderator
There are a few options for setting calibration intervals.

1. Use the manufacturer's recommended intervals (drawbacks: not every manufacturer gives a recommended interval; you don't know for certain that it is adequate; and it doesn't allow for equipment aging, variety of use conditions, etc.).

2. Set fixed intervals. Using some information from (1) and good "engineering intuition", set conservative fixed intervals (normally short enough to give high confidence that the unit will remain in-tolerance throughout the interval). This is convenient because you can create a somewhat predictable workload and adjust for an even workflow from month to month. It has the same drawbacks as (1): it doesn't account for variety of conditions, usage, equipment aging, and user needs.

3. Set intervals statistically, based on a percent probability (typically about 95%) that the instruments will remain in-tolerance for the length of the interval. This is done by evaluating the prior calibration history of the calibrated units at a given interval: measure the percentage of them that were found in-tolerance at that interval. If the percent in-tolerance meets your defined confidence level, the interval stays the same. If more units were in-tolerance than your confidence level requires, you are calibrating too often and can lengthen the interval. If too few were in-tolerance, the interval needs to be shortened.

The drawback is that this method is more cumbersome to maintain. Our in-house database has a module programmed into it which automatically adjusts intervals so that all instruments in the program (about 12,000 units at our site) stay in-tolerance at the defined confidence level. The advantage is that you optimize the calibration interval length, which keeps our operating cost at a minimum and our instruments at an acceptable confidence that they will remain in-tolerance.
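If it helps to see that decision rule written out, here is a rough sketch in Python. Treat it purely as an illustration under assumptions I'm inventing here (the ±2% band around the target, the 25% step, the function and variable names); none of it comes from a standard or from our in-house database, and you would set the numbers from your own reliability targets.

```python
# Rough sketch of the percent-in-tolerance rule described in option 3 above.
# The target, band, and step size below are placeholders, not values from
# NCSL RP-1 or any particular calibration database.

def adjust_interval(current_interval_days, in_tolerance_results,
                    target_reliability=0.95, band=0.02, step=0.25):
    """Suggest a new calibration interval in days.

    in_tolerance_results: one boolean per calibration performed at the
    current interval (True = found in-tolerance, False = found out).
    """
    if not in_tolerance_results:
        return current_interval_days  # no history yet, keep the interval

    observed = sum(in_tolerance_results) / len(in_tolerance_results)

    if observed > target_reliability + band:
        # More units stayed in-tolerance than required: calibrating too
        # often, so lengthen the interval.
        return round(current_interval_days * (1 + step))
    if observed < target_reliability - band:
        # Too many out-of-tolerance results: shorten the interval.
        return round(current_interval_days * (1 - step))
    return current_interval_days  # within the band, leave it alone


# Example: 40 micrometers calibrated at a 180-day interval, 39 found in-tolerance.
history = [True] * 39 + [False]
print(adjust_interval(180, history))  # 0.975 observed > 0.97, so prints 225
```

The same loop works per instrument or per model family; the caveat below about small populations applies either way.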

Problems with the statistical method are that it is more difficult to manage in a small lab (sometimes there isn't enough population to make statistically significant evaluations). I used to run a small lab and set up an interval analysis system on paper; I ran a report and did the interval analysis and adjustment once each year.

NCSL (National Conference of Standards Laboratories) has a recommended practice for the establishment and adjustment of calibration intervals. It is probably one of the best references on that topic. I believe their web address is ncsl-hq.org.

I also won't recommend a specific confidence level. That needs to be set locally to meet your particular company's needs.
 

Marc

Fully vaccinated are you?
Leader
Originally posted by JerryStem:

When we were going thru ISO Guide 25 accreditation by A2LA, I asked the program manager there about due dates.

Their opinion was to not even list one when certifying a customer's equipment/standards.
True, true. You only calibrate it. You cannot determine its use cycle unless you are part of the company.
 

JerryStem

When we were going thru ISO Guide 25 accreditation by A2LA, I asked the program manager there about due dates.

Their opinion was to not even list one when certifying a customer's equipment/standards. Since we weren't "guaranteeing" the calibration for 6 mo/1 yr/... they said why list one? (If I certified a foil for 6 months and the next day the customer rips it in half, obviously I can't say it's still the right thickness.)

I know this is slightly off topic, but I just wanted to give my 2 cents. They also went into cycle dates, saying they had no guidance for those either. You set your own, but it had to be "reasonable". 10 years was a little long...

Jerry
 

Dan Larsen

Expanding on Jerry's point (3)...

The statistical approach is good if you have the time and resources. YOU set the calibration frequency, and the driving force, I think, is the requirement that you justify all prior results if you find a gage out of calibration. If you set the calibration frequency too tight, you'll never have a situation to react to, but you may be spending too much on calibration. If you set the calibration frequency out too far, you'll likely have an out-of-calibration situation that will be difficult and time consuming to justify (and you may have a recall situation as well). Look at the history of the gage, not just whether it's "OK" or "NOT OK", but the actual as-found values. Set the calibration frequency based on the expected drift of the gage, so that you calibrate before it drifts to the point where you have to react.

I suggest (for small companies with limited resources) that they start out with a high frequency, then evaluate the situation after three cal cycles. If the gage doesn't drift, extend the cal cycle. Repeat. When you see a change, recalibrate and hold the cycle. Monitor for three more cycles and see if it works for you. This may not be statistically accurate, but it works from a logical standpoint. And in my mind, statistics (properly applied) is nothing but logic!
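To put that in concrete terms, here is one way the evaluate-after-three-cycles rule could be coded. It is only a sketch: the three-cycle window, the 1.5x extension, and the 0.02 mm action limit in the example are numbers I picked for illustration, not recommendations; the real limits have to come from the gage's tolerance and its history.

```python
# Sketch of the "three clean cycles, then extend" approach described above.
# The window size, extension factor, and example action limit are illustrative.

def next_interval(as_found_errors, current_interval_days,
                  drift_limit, extend_factor=1.5):
    """Suggest the next calibration interval from recent as-found errors
    (found value minus nominal), oldest first, at the current interval."""
    recent = as_found_errors[-3:]  # the last three calibration cycles

    if len(recent) < 3:
        return current_interval_days  # not enough history yet: hold the interval

    if all(abs(error) <= drift_limit for error in recent):
        # No meaningful drift over three cycles: extend the interval.
        return round(current_interval_days * extend_factor)

    # Drift showed up: recalibrate and hold the current cycle.
    return current_interval_days


# Example: a caliper on a 30-day cycle with a 0.02 mm action limit.
print(next_interval([0.003, -0.005, 0.004], 30, drift_limit=0.02))  # extends to 45
print(next_interval([0.003, 0.018, 0.025], 30, drift_limit=0.02))   # holds at 30
```

However you code it, the point is the same: watch the as-found values, not just the pass/fail result.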
 

Marc

Fully vaccinated are you?
Leader
Calibration Intervals

You might look at RP-1. If you have access to NASA Reference Publication 1342, you'll find this to be a useful source also.

RP-1 is currently under revision by the Calibration Interval Subcommittee of NCSLI's Metrology Practices Committee, but the current edition is pretty up to date. The software that comes with RP-1 applies a statistical decision algorithm that is useful for items with sparse service history data. The software has been updated and is available at www.isgmax.com.

Also see http://Elsmar.com/Forums/showthread.php?t=927
 