Adjustment of Calibration Intervals (Shortening or Lengthening)

Av7255

Registered
Thank you all in advance for the help.

Basically, I am trying to change the culture around the calibration standards and methods at my work - we have a lot of local procedures in place covering the processes and responsibilities for each engineer.

Currently there is an instruction in place stating that if an out-of-tolerance condition is found during calibration (this applies to all equipment, from oscilloscopes to calipers to flow modules, etc.), the asset's interval is cut in half. To lengthen the interval back to its original value, the asset must then complete three calibration cycles without any out-of-tolerance finding.
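For reference, here is a minimal sketch of how that rule plays out (Python, purely illustrative; the class and names are mine, not from our local procedure): halve the interval on any out-of-tolerance finding, and restore the original interval only after three consecutive clean calibrations.

```python
# Illustrative sketch of the local rule: halve the interval on any
# out-of-tolerance (OOT) finding; restore the original interval only after
# three consecutive in-tolerance calibrations. Names are placeholders.

class AssetInterval:
    def __init__(self, original_months: float):
        self.original_months = original_months
        self.current_months = original_months
        self.clean_cycles = 0

    def record_calibration(self, out_of_tolerance: bool) -> float:
        if out_of_tolerance:
            self.current_months /= 2      # cut the interval in half
            self.clean_cycles = 0         # restart the clean-cycle count
        else:
            self.clean_cycles += 1
            if self.clean_cycles >= 3:    # three clean cycles in a row
                self.current_months = self.original_months
                self.clean_cycles = 0
        return self.current_months

asset = AssetInterval(36)                 # 3-year starting interval
for oot in [True, True, False, False, False]:
    print(asset.record_calibration(oot))  # 18, 9, 9, 9, 36
```

You can see how quickly a couple of findings drive the interval down and how slowly it recovers.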

So now we have assets with a wide range of intervals, nothing is consistent, and we are wasting time, money, downtime, and everything else under the sun.

We had an asset on a 3-year cycle. An out-of-tolerance was found, but it was left in tolerance after adjustment, so it went to an 18-month cycle; then another out-of-tolerance was found and it went to 9 months. That asset is currently on a 1.5-month cycle due to so many out-of-tolerance findings.

I am working on a proposal to follow RP-1, Establishment and Adjustment of Calibration Intervals, so that we can save money, have more time with the equipment, etc. As it stands, each time an asset has an out-of-tolerance finding, we also provide an impact report covering the period from the last in-tolerance calibration up to the out-of-tolerance finding.
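For context, the sketch below is not lifted from RP-1 itself (I don't want to misrepresent the document); it is just one common reliability-target style of adjustment I am thinking of proposing, with every threshold and factor a placeholder: lengthen the interval when the observed in-tolerance rate beats the target, and shorten it more gently than halving when it does not.

```python
# Illustrative only - NOT a transcription of an RP-1 method. A simple
# reliability-target rule: compare observed end-of-period reliability to a
# target and scale the interval accordingly. All numbers are placeholders.

def adjust_interval(current_months: float,
                    in_tolerance: int,
                    total_cals: int,
                    target_reliability: float = 0.95,
                    grow: float = 1.25,
                    shrink: float = 0.75,
                    floor_months: float = 6.0) -> float:
    """Return a proposed interval based on observed in-tolerance reliability."""
    if total_cals == 0:
        return current_months
    observed = in_tolerance / total_cals
    if observed >= target_reliability:
        return current_months * grow
    return max(current_months * shrink, floor_months)

# Example: 8 of 10 calibrations found in tolerance on a 12-month interval.
print(adjust_interval(12, 8, 10))   # 9.0 -> shortened, but not cut in half
```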

This all seems odd to me: as long as the asset was readjusted back into tolerance, we should not keep reducing intervals. I want to apply an A3 model to create a better system. The assets we reduce intervals on always come back "as left in tolerance."

I feel like the frequent calibrations put extra stress on the equipment, because each extra cycle exposes it to whatever handling and shipping happens once it leaves our facility.

Can anyone help me strengthen my argument that this practice is not value-added, or offer more specific information to assist with my current issue? Thank you!
 

GunLake

Involved In Discussions
I can't help you prove this does not add value, because I think it does. In a way I do the same thing, except I rarely ever go below 6 months; at that point I will replace the gauge or find out why it's going out of tolerance. The reason is this: you have a gauge that is out of tolerance at 3 years, so you go to 1.5 years and it's out of tolerance again, so 1 year, then 6 months. Then you switch it back to 3 years because you're sick of checking it so often, and now you know for sure that 2 years of that gauge's use is out of tolerance. Normally if I find an out-of-tolerance and I decide to change the interval from, say, 1 year to 6 months, then in 6 months I will change it back to 1 year if I find it within tolerance.

It's a bit easier for me, since we calibrate almost everything in-house. About the only thing I would change, IMO, is instead of cutting the interval in half, step it in 0.5-year increments: 6 months, 1 year, 1.5 years, 2 years, etc. To me that is easier and more consistent. And instead of requiring 3 calibration cycles to lengthen the interval again, I would do it after 1 cycle.
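Roughly, my approach in code form (the ladder and the cap are just example numbers, adjust them to your own assets):

```python
# Sketch of the variant described above, with placeholder numbers: move along
# a fixed 0.5-year ladder instead of halving, never go below a 6-month floor
# without investigating, and step back up after a single clean cycle.

LADDER = [6, 12, 18, 24, 30, 36]   # months, in 0.5-year steps

def next_interval(current: int, out_of_tolerance: bool, cap: int = 36) -> int:
    i = LADDER.index(current)
    if out_of_tolerance:
        if i == 0:
            # Already at the 6-month floor: replace the gauge or find the
            # root cause instead of shortening further.
            return current
        return LADDER[i - 1]
    # One in-tolerance cycle steps the interval back up, never past the cap.
    return min(LADDER[min(i + 1, len(LADDER) - 1)], cap)

print(next_interval(36, True))    # 30
print(next_interval(6, True))     # 6 (floor reached - investigate or replace)
print(next_interval(12, False))   # 18
```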

I'm sure other people have different opinions but this is how I've done it for the last 7 years.
 

John Predmore

Trusted Information Resource
I can think of three failure modes when a device goes out of tolerance: wear (or similar phenomena that cause drift in one direction over time), wear-out or other deterioration (an increase in uncertainty, which may be labeled noise and appears as random bidirectional fluctuation), or a bump (or similar incident that causes a sudden change). Setting a proper calibration interval attempts to monitor and correct for wear-like phenomena. Periodic re-calibration can detect wear-out as it progresses (if somebody looks for it), and trend charts may predict when deterioration will exceed allowable limits; at some point the device must be refurbished or replaced. A bump is the hardest failure mode to predict. A bump could happen one year after a device is calibrated or one hour after. Reducing the calibration interval is no assurance against a bump. Shortening the interval will mostly lead to devices being found in tolerance again and again, which is what you reported.
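As an illustration of the trend-chart idea (not a recipe; the history and the limit below are made up), you can fit a line through the as-found errors over time and estimate when the drift would cross the tolerance limit:

```python
# Hedged sketch: fit a least-squares line to as-found error vs. time and
# estimate when the drift trend would cross the tolerance limit. A real
# analysis would also account for measurement uncertainty.

from datetime import date

def months_between(d0: date, d1: date) -> float:
    return (d1 - d0).days / 30.44

def predict_limit_crossing(dates, errors, limit):
    """Estimate months (from the first calibration) until the fitted drift line reaches the limit."""
    t = [months_between(dates[0], d) for d in dates]
    n = len(t)
    t_bar = sum(t) / n
    e_bar = sum(errors) / n
    slope = sum((ti - t_bar) * (ei - e_bar) for ti, ei in zip(t, errors)) / \
            sum((ti - t_bar) ** 2 for ti in t)
    intercept = e_bar - slope * t_bar
    if slope == 0:
        return None                      # no drift trend detected
    target = limit if slope > 0 else -limit
    return (target - intercept) / slope  # months after the first calibration

history_dates = [date(2021, 1, 1), date(2022, 1, 1), date(2023, 1, 1)]
as_found_errors = [0.02, 0.05, 0.08]     # same units as the tolerance limit
print(predict_limit_crossing(history_dates, as_found_errors, limit=0.15))
```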

Based on my theory, I have an idea what may be causing your problem, but no specifics. With this theory, you might investigate and find evidence, you might identify what is occurring to a particular device, you might devise a solution. Or, I might be wrong. What you do with this theory is up to you. Good luck.
 

Mike S.

Happy to be Alive
Trusted Information Resource
Something to think about....

When possible and practical, I like to have a "check standard" for pre-use verification of important measuring equipment. As John said, a device can be calibrated today and go out of calibration tomorrow, and you may use it for who knows how long before you find that out. Now you have a potential mess -- determining the impact of the OOT condition.
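As a simple illustration (the reading, the check standard value, and the action limit are all placeholders): read the check standard before using the instrument, and pull the instrument from service if the reading falls outside the action limit.

```python
# Sketch of a pre-use check against a check standard. All values are
# placeholders; set the action limit from your own tolerance budget.

def pre_use_check(reading: float,
                  check_standard_value: float,
                  action_limit: float) -> bool:
    """Return True if the instrument may be used, False if it needs attention."""
    return abs(reading - check_standard_value) <= action_limit

# Example: a 10.000 mm check standard with a +/- 0.005 mm action limit.
if not pre_use_check(reading=10.007, check_standard_value=10.000, action_limit=0.005):
    print("Fail: tag the instrument and send it for calibration before use.")
```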
 

Quality Julie

Julie Brown
The process does add value, because recalling product is far more costly. Reducing the interval when an out-of-tolerance condition is found is the proper method. If you are down to a 3-month interval from a 3-year interval, you should be investigating what is causing the out-of-tolerance condition.
 

Tagin

Trusted Information Resource
The process does add value, because recalling product is far more costly. Reducing the interval when an out-of-tolerance condition is found is the proper method. If you are down to a 3-month interval from a 3-year interval, you should be investigating what is causing the out-of-tolerance condition.

^^ This.

The existing policy is at best a passive risk mitigation of symptoms. In a proper risk mitigation, you would do some kind of investigation for root cause (e.g., your A3) and thus mitigate the cause.
 

dwperron

Trusted Information Resource
"Can anyone help me add to my point that this practice is not value added, perhaps more specific information to assist with my current issue?"

Not really, because you are wrong.

Calibration interval adjustment is designed to reduce the risk of making an out-of-tolerance measurement.
Out-of-tolerance tools create a risk of product recalls; that is where you really lose money and time.
I think your current algorithm is too aggressive at reducing intervals and too slow at raising them, but that is based on my experience, not on knowledge of your situation.

If you are adjusting instruments and their intervals keep shortening, that is an indication of a problem with the equipment, not with the calibration program. Just because your instruments were adjusted back into tolerance does not guarantee that they will now hold that accuracy, which is why your idea of not reducing intervals on them makes no sense.
If you have an instrument that is on a short interval "due to so many out of Tolerance findings", then you should not be using that instrument! It is not providing reliable results and puts your process at risk.

Not changing the intervals will probably mean that you continue to have out-of-tolerance findings. Each one leads to another "impact report to cover the time between the calibration where everything was in tolerance, until the out of Tolerance finding." How much time is spent on those exercises? Keeping your instruments in tolerance will reduce the impact reports.

My experience in the calibration world shows that most calibrated instruments hold their accuracy; I normally see a calibration failure rate of about 5%. If you have so many out-of-tolerance events that they have become burdensome to your organization, you probably have an equipment / use issue, not a calibration issue.
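If you want a quick sanity check against that rough 5% figure (the fleet size and counts below are placeholders, not your data), a binomial tail probability tells you whether the out-of-tolerance rate you are seeing is plausibly just bad luck or points to an equipment / use problem:

```python
# Back-of-envelope check with placeholder numbers: how likely is it to see
# k or more out-of-tolerance findings in n calibrations if the underlying
# failure rate really is about 5%?

from math import comb

def prob_at_least(k: int, n: int, p: float = 0.05) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Example: 200 calibrations in a year, 25 of them found out of tolerance.
print(prob_at_least(25, 200))   # tiny -> a 5% baseline does not explain it
```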
 