Calibration Cycle: Calibrate When Issued OR Calibrate by Due Date?


Russ

We have two gage labs here at my combined facility that have yet to be combined. One calibrates gages as they are issued, when needed; the other keeps all of its gages calibrated by due date whether the gage is being used or not. I see the strong points of each approach and would like to know how everyone else approaches this issue.
 

Jim Biz

IMHO

Wow - I would really think calibrating upon issue would be the cleanest route to go, if the resources are available to get it accomplished. Every time one sets an extended "timespan" type due date, there can be holes/errors that go undetected, and then you need to deal with the "how do you go back and recheck product" issue.
 

Jerry Eldred

Super Moderator
The basic need is to know that the gage meets specs when it is used. If you have a single site under a single quality system, my take is that there needs to be one recall system. Otherwise, there can be potential confusion over which calibration program a gage falls under. If they are all adequately labelled, and each gage can be unambiguously traced back to a particular gage lab, I would think that isn't an issue.

As to cal when issued versus cal on a cycle, the dollar cost is one of the factors. If you have a lot of gages that spend most of a cal cycle on the gage lab shelf without being issued (you'd have to do some math), then the cal-when-issued approach would be better from that standpoint.

As to the quality aspect, cal when issued gives you the freshest link between the gage and traceable accuracy.

How the two labs assign the calibration interval length is also a factor. When one lab issues a given type of gage, is it assigned the same calibration interval as at the lab that calibrates on a fixed cycle regardless of issue date? If the intervals are the same on equivalent gages, cal when issued may be less costly in terms of total man-hours (you have to do the math for your context). And it may also be a better solution in terms of quality, since you haven't used up part of the cal interval before the customer gets the gage.
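
A rough back-of-the-envelope sketch of the math Jerry mentions (the interval, cost per calibration, and issue rates below are illustrative assumptions, not figures from this thread):

```python
# Back-of-the-envelope comparison of expected calibration cost per gage per
# year under the two policies. All numbers are illustrative assumptions.

def fixed_cycle_cals_per_year(interval_months):
    """Fixed due-date cycle: calibrated every interval, used or not."""
    return 12.0 / interval_months

def cal_on_issue_cals_per_year(issues_per_year, interval_months):
    """Calibrate on issue: the gage is only calibrated when it goes out and
    its previous calibration has lapsed, so a shelf queen costs nothing."""
    return min(issues_per_year, 12.0 / interval_months)

interval_months = 12.0   # assumed interval for this gage type
cost_per_cal = 40.0      # assumed labor/standards cost per calibration

for issues_per_year in (0.5, 1, 4, 12):
    fixed = fixed_cycle_cals_per_year(interval_months) * cost_per_cal
    issued = cal_on_issue_cals_per_year(issues_per_year, interval_months) * cost_per_cal
    print(f"{issues_per_year:>4} issues/yr: fixed cycle ${fixed:.0f}, cal on issue ${issued:.0f}")
```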

Looking at the other side of this issue, has any data been collected on how much drift occurs over time for gages left on the shelf? That points to a third possibility. I strongly underscore that to consider this method, you need data to prove how effective it is.

We calibrate thermocouple sets for our customers. We calibrate units when received, then put them in storage cabinets until the customer needs them. These thermocouples are high-temperature types (Type R, Platinum vs. Platinum/Rhodium). They don't drift in a cabinet. So when the customer comes to pick them up, we assign a start date for the calibration interval. Some sit in the cabinet for 6 months or more. The nature of this application is that these units are highly stable for many years in an ambient environment; they are used at 1000 °C or greater, and virtually all of their aging and drift occurs in the high-temperature environment. We've saved a lot of expense with this method.

I repeat that this method is only acceptable if you have objective data that the gages don't drift or age (etc.) while on your shelves.

There are many acceptable methods. What you have to prove is that whatever method you use assures that the gage meets the needed accuracy tolerance for the length of the assigned calibration interval (to some degree of statistical confidence).
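
As a minimal illustration of what that "objective data" and "statistical confidence" might look like, here is a sketch using invented as-found numbers and an assumed tolerance; a real study would use your own calibration history:

```python
# Minimal sketch of a shelf-drift study: compare as-found error against the
# accuracy tolerance for gages that sat unused between calibrations. The
# data, tolerance, and 3-sigma criterion are all assumptions for illustration.

from statistics import mean, stdev

# (months on shelf, as-found error in micrometres) for stored, unused gages
shelf_history = [(6, 0.1), (9, -0.2), (12, 0.3), (12, 0.0), (18, 0.4), (24, -0.1)]
tolerance_um = 1.0   # assumed accuracy tolerance for this gage type

errors = [abs(err) for _, err in shelf_history]
limit = mean(errors) + 3 * stdev(errors)   # crude upper bound on shelf drift

print(f"mean |as-found error| = {mean(errors):.2f} um, worst = {max(errors):.2f} um")
if limit < tolerance_um:
    print("shelf storage looks acceptable for this gage type")
else:
    print("too much shelf drift -- keep these gages on the normal cycle")
```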

I should note that we also do many other types of equipment here, and the remainder of equipment are in normal calibration intervals.
 

Ken K

I would also think that "calibration at issue" would be the way to go, but...

What do you do with a gage that is only issued for a couple of hours? Do you re-calibrate after it is returned, or do you wait until re-issue? My point is that if you have gages that are used for short periods of time, but used frequently, you could end up calibrating this gage hundreds of times a year.

And what if a gage is issued, used to measure finished goods, dropped or damaged without the user's knowledge, then brought back and left on the shelf for nine months before being calibrated again and found out of tolerance? How can you retrace those out-of-tolerance finished goods back to that gage?

The practice seems feasible until you consider the scenarios that might evolve from it. I would hesitate to use the system unless you are highly confident that none of the above or other such instances could happen.

Something to consider when the labs do combine.
 

Russ

calibration cycle

("you could end up calibrating
this gage hundreds of time a year")
No I would still only calibrate once a year unless I had a reason to check it more often.

("How can you
retrace those out of tolerance finished goods back to that gage?")
We can trace all parts back to the gages used through our gage software.
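
Not knowing Russ's actual gage software, here is a toy sketch (hypothetical names and dates throughout) of the kind of usage log that makes that trace-back possible:

```python
# Toy model of the trace-back: every issue of a gage is logged against the
# lots it measured, so an out-of-tolerance finding can be turned into a list
# of suspect product. Names and dates here are invented for the example.

from collections import defaultdict

usage_log = defaultdict(list)   # gage_id -> list of (iso_date, lot_id)

def record_use(gage_id, iso_date, lot_id):
    usage_log[gage_id].append((iso_date, lot_id))

def lots_at_risk(gage_id, last_good_cal_date):
    """Lots measured with this gage since its last known-good calibration."""
    return [lot for date, lot in usage_log[gage_id] if date > last_good_cal_date]

record_use("PLUG-12MM-001", "2002-03-01", "LOT-4411")
record_use("PLUG-12MM-001", "2002-06-15", "LOT-4502")

# Gage comes back out of tolerance; last good calibration was in January.
print(lots_at_risk("PLUG-12MM-001", "2002-01-10"))   # ['LOT-4411', 'LOT-4502']
```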

Thanks for everyone's input so far!
 

Ryan Wilde

Are the "calibrated when issued" gages fully calibrated, or is it a check standard system? I don't know that full calibrations before issue would be feasible, but a check standard system is quick and probably the best method to reduce errors due to gage conditions.

A hybrid of the two systems would probably be a great system if it is set up well.

Oh, and Russ, up until last April, I lived just East of you (Elwood). Now I live just East of New York City. It's pretty much the same, except it doesn't take 45 minutes to drive through Tipton, and NYC seems to be without a pork festival. ;)

Ryan
 

Russ

calibration

Ryan-
Not sure what you mean by "fully calibrated" versus a "check standard system"?
 

Ryan Wilde

Russ,

Example:

Full calibration of an attribute gage (let's say a Class X plug gage at 12mm diameter, single member, Go): This would require a full temperature soak in a stable 20°C room for at least 4 hours (although 24 hours is more accepted) in order to adequately measure the plug (tolerance +1 µm/-0 µm). The generally accepted method is six measurements: front, middle, back; spin 90°; then front, middle, back again. Document all six measurements. The size of the gage can be reported either as all 6 measurements or as the largest measurement found (as this will be the smallest diameter that will fit in any given hole). Correlate the 90° points to ensure that the gage is basically round.

In other words, fully documented, with traceability (which means accompanying uncertainty according to the VIM), etc.
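
For illustration only, a small sketch of the reporting logic Ryan describes for the six-measurement method (the readings, and expressing the 0°/90° correlation as a worst-case difference, are assumptions made for this example):

```python
# Sketch of the reporting logic for the six-measurement method on a 12 mm
# Class X Go plug (+1 um / -0 um). The readings and the way the 0/90 degree
# correlation is expressed are assumptions made for this example.

meas_0deg  = [12.0004, 12.0006, 12.0005]   # mm: front, middle, back
meas_90deg = [12.0005, 12.0007, 12.0006]   # mm: same positions after 90 deg spin

nominal = 12.0
upper, lower = nominal + 0.001, nominal    # +1 um / -0 um

# Reported size: the largest diameter found, since that is the smallest
# hole the plug will actually enter.
all_meas = meas_0deg + meas_90deg
reported = max(all_meas)
in_tolerance = min(all_meas) >= lower and reported <= upper

# Roundness check: compare 0 and 90 degree readings position by position.
out_of_round_um = max(abs(a - b) for a, b in zip(meas_0deg, meas_90deg)) * 1000

print(f"reported size {reported:.4f} mm, in tolerance: {in_tolerance}")
print(f"worst 0/90 degree difference: {out_of_round_um:.1f} um")
```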

Using an in-service Check Standard to verify gage condition (Let's use the same gage):

A corresponding set of rings (calibrated, of course) designated for checking plug condition is used. If the plug fits in the go ring, but not the no-go ring, the plug is verified to be within the dimensions of the two rings. BUT... what is the actual size of the plug? Can you do a Gage R&R without knowledge of the actual gage attributes? Is the gage oval? If you are planning on meeting ISO 17025, how will you compute uncertainty?

In other words, nothing has gone seriously wrong since the last calibration.

Unless you are using very creative calibration procedures that do not check all attributes, I don't see it as feasible to fully calibrate a gage each time it is issued. But, with a mixture of systems, it is a very good assurance of quality. Calibrate on a cycle, in-service check upon issue, and if it works out well, extend the scheduled calibration cycle based on the in-service check data combined with calibration history, therefore saving money - which is the bottom line.

Ryan
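
A hypothetical sketch of how the interval-extension part of that hybrid policy might be automated (the thresholds and the 25% step are assumptions, not anything Ryan specified):

```python
# Toy interval-adjustment rule for the hybrid scheme: full calibration on a
# cycle, a quick in-service check at each issue, and the cycle is stretched
# only when both histories are clean. Thresholds and steps are assumptions.

def next_interval_months(current, checks_passed, checks_failed, cal_oot_count,
                         max_months=24):
    if checks_failed or cal_oot_count:
        return max(current // 2, 3)                   # tighten after any failure
    if checks_passed >= 10:
        return min(int(current * 1.25), max_months)   # extend on clean history
    return current                                    # not enough data yet

print(next_interval_months(12, checks_passed=15, checks_failed=0, cal_oot_count=0))  # 15
print(next_interval_months(12, checks_passed=3,  checks_failed=1, cal_oot_count=0))  # 6
```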
 