Factors for Determining Gage Calibration Intervals - Extended Intervals


Dave Hiser

Gage calibration intervals

The calibration intervals where I work were recently changed to allow some gages to be increased from 360 days to 720 and 1080 days. This was done in secrecy by a few of the boss's cronies. Of the 1,200+ gages in our control, about 30% were given increased intervals. Management was able to do this because of a lack of documentation: most of the breakdowns, adjustments, and servicing of the equipment are not documented. The procedures in our lab were old, not the revised electronically controlled ones, and some of the staff were never told of the updates to the procedures, either. :confused: My question is: can we base gage calibration intervals only on degree of usage, leaving out environment, measurement uncertainty, stability, etc.? Oh, we also never perform GR&Rs. Is there something in the 17025 guide that I can use to substantiate my claim that this was not done correctly and needs to be rectified?

Jimmy Olson

Hi Dave. Welcome to the Cove.:bigwave:

As far as I know there is nothing in 17025 that specifies anything about the intervals or how they should be adjusted, or anything related (sorry). For the most part, it's up to you to decide how to set your intervals, and you can base that on whatever you like. Technically speaking, what your management did is acceptable. That doesn't mean it's a good idea, though.

I've known several companies that have based their intervals on frequency of use, but they still take other factors into consideration. You may want to look up the manufacturer's recommendation and show that to the managers. Another angle that might change their minds is how the gages are stored. If they are stored on a production floor where they are subject to abuse (even when they aren't in use), then stretching the interval out that far would definitely be a bad idea.

If you subcontract your calibration, that is probably one factor the managers thought of. They probably figured the less something is calibrated, the less it costs. You could do a comparison of the cost of calibration to the cost of repair or replacement.

I guess I'll stop rambling on. I hope this helps somewhat. Someone else might be able to find something in writing that you can use. If you have access to any of the NCSL publications, try looking through NCSL RP-1 "Establishment & Adjustment of Calibration Intervals" for something that you can use. Good luck.



To carry on from what Richard said ... at the very least, top management needs to improve (start?) communication so that there is knowledge and understanding of policy and procedures at all appropriate levels of the staff.

:rolleyes: Of course, as an ex-government type, I have seen organizations where it was believed to be not appropriate for workers to know what they were supposed to be doing ... put your lunchbox and brain in the locker by the door, and enter the mushroom caves. :rolleyes:

I am not sure from your initial message where your work fits into the organization, but I will assume that you are in an in-house calibration lab. In that situation, ISO/IEC 17025 requires an effective calibration program for the laboratory calibration standards. The workload items - the calibrated things used in the rest of the organization - would be controlled under the organization's overall quality management system. In ISO 9001:2000 that is under clause 7.6, 'Control of monitoring and measuring devices'. This is one of those sections where specific actions and records are identified as required, but the need for a documented procedure is only implied.

What is the purpose of analyzing and adjusting calibration intervals?
Even the most casual reading of NCSL RP-1 will reveal that the intended purpose of analyzing calibration intervals - and adjusting them if needed - is to provide some level of assurance that the instruments are still performing as intended (in tolerance) at the end of the time period. The level of assurance must be determined by your company. What does top management believe is an acceptable and economic risk for using an instrument that is out of tolerance?

Costs of using out of tolerance instruments are varied and depend somewhat on your industry. They can include product recalls, excess warranty claims, or problems that affect the public health or (in my business) safety. All of these might involve lawyers and corporate liability. Those costs can be managed by calibrating more frequently to achieve a higher end-of-period in-tolerance rate, but that costs money as well. There are also other risks associated with calibrating too frequently, such as higher inventory to cover out of service instruments, or more qualified technicians needed to do the work. Some also believe you are getting close to the process tampering that Deming warns against.

But, balancing cost and risk is one of the things that top management is supposed to be doing, isn't it?

Jerry Eldred

Forum Moderator
Super Moderator
About all I can do is repeat what has already been said....

First of all, as one who has been in the MIL-STD-45662A, Z540, 17025, ISO 9000, QS-9000 and other worlds, I would be generally disturbed by any arbitrary changing of calibration intervals.

I agree that NCSL RP-1 could be a good attack point. I believe it is possible that management doesn't understand what a calibration interval really is. Forget all of the standards (above). Even forget about RP-1 for a moment. The calibration interval is the period of time over which, to within some confidence level, an instrument may be expected to remain within its specified tolerances. Therefore, you set an interval based on the degree of confidence you desire. If the interval of a unit is 360 days, that is the period of time that meets that criterion. After gathering historical data, you may determine that the interval is too short or too long. But regardless, the definition of a calibration interval has not changed.
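That definition can be made concrete with a simple reliability model. As one hedged sketch (one of several approaches RP-1 discusses, not the only one, and with made-up numbers): assume out-of-tolerance events follow an exponential model, fit the failure rate from the observed end-of-period in-tolerance rate, and solve for the interval that meets a target in-tolerance confidence.

```python
import math

def adjusted_interval(current_interval_days, observed_reliability, target_reliability):
    """Solve for the interval that yields target_reliability under an
    exponential model R(t) = exp(-lam * t), where lam is fitted from the
    observed end-of-period in-tolerance rate at the current interval."""
    if not (0 < observed_reliability < 1 and 0 < target_reliability < 1):
        raise ValueError("reliabilities must be strictly between 0 and 1")
    lam = -math.log(observed_reliability) / current_interval_days
    return -math.log(target_reliability) / lam

# Illustrative numbers only: 95% of gages found in tolerance at the end
# of a 360-day interval, and a 90% end-of-period confidence target.
new_interval = adjusted_interval(360, 0.95, 0.90)  # about 739 days
```

Under these assumed numbers the model supports roughly a 740-day interval at 90% confidence, while stretching the same gage to 1080 days would drop end-of-period confidence to about 86%. The point is that any interval change implies a confidence claim that can be computed and checked against history, rather than decreed.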

So when that interval is arbitrarily changed by anyone, their action by definition (still forgetting the standards documents) says that they implicitly believe they can maintain an acceptable confidence that the measurements will remain in tolerance for the entire new calibration interval. The question I would pose if I were there (and this does get my dander up a bit, having been in the business more than 25 years) would be in the area of confidence in the measurements: based on the increased interval, do they believe they will maintain an acceptable confidence that the measurements will remain in tolerance?

Moving on to the other aspects of the questions....

To adjust intervals based on degree of usage means you need to assign intervals to individual units. That means each unit must have its in-tolerance history reviewed and its interval adjusted individually. Degree of usage still requires some statistical basis for establishing or adjusting the interval. You need to define what "degree of usage" means, establish a one-to-one relationship between an amount of usage and the probability that the unit will remain in tolerance over that amount of usage, then adjust intervals on that basis.
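One way to picture that per-unit bookkeeping (a sketch with invented names and numbers, not an RP-1 procedure): denominate the interval in usage-hours and convert it to a calendar recall period from each unit's logged usage rate, with a cap so that a rarely used gage still gets recalled, since storage and environment age it even when it sits idle.

```python
def calendar_interval_days(in_tolerance_usage_hours, logged_hours_per_day, cap_days=360):
    """Convert a usage-denominated interval (hours of use over which the
    unit is believed to stay in tolerance, e.g. from history or the
    manufacturer) into a per-unit calendar recall period. The cap keeps
    a lightly used gage from drifting off the recall list entirely."""
    if logged_hours_per_day <= 0:
        return cap_days  # no recorded use: fall back to the default interval
    return min(in_tolerance_usage_hours / logged_hours_per_day, cap_days)

# Two units with the same assumed 720-hour usage rating but different duty:
heavy = calendar_interval_days(720, 6.0)   # ~6 h/day of use -> 120 days
light = calendar_interval_days(720, 0.5)   # ~0.5 h/day -> capped at 360 days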

I know many companies are looking for ways to squeeze costs. In my company there are initiatives to take items off cal cycles or give items longer cal cycles as a cost-cutting method, and in these days I understand that needs to be done. My advice to our people has simply been to make sure there is some sense to what they want to do. For example, with handheld DMMs used by maintenance techs we can get away with spreading out intervals more than we can with metrology tools used to measure product-critical parameters. I believe the same criteria could apply in your circumstance. The involved people should know which measurements they can get away with loosening, and which ones may induce product reliability issues.

Enough rambling. Hope this is of some help.

Ken K

Interesting discussion. Would like to add my thoughts.

I cringed when I first read the original post. It still bothers me.
To double or triple a calibration interval without historical data to back it up is in my opinion looking for trouble, unless...

...you perform verifications between calibration intervals.

We discussed this with our ISO 17025 auditor before our pre-assessment. We didn't want to become accredited for calibration, just testing. So, to start with, we put all the equipment we use for the testing listed in our scope on a yearly interval. An outside lab will calibrate these for us. During the year, we will perform verifications on them. If everything stays stable, as it has in the past, we will move some to longer calibration intervals.
This way, our verifications will identify any problems we might encounter.
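The intermediate check Ken describes reduces to a very small routine. A hedged sketch (the artifact, nominal value, and tolerance band below are invented for illustration): measure a stable reference artifact between calibrations and flag the gage if any reading falls outside the band.

```python
def verify(readings, nominal, tolerance):
    """Intermediate check: compare readings of a stable reference
    artifact against its nominal value. Returns True only if every
    reading falls within the tolerance band; a False result should
    trigger an early recall rather than waiting out the interval."""
    return all(abs(r - nominal) <= tolerance for r in readings)

# Checking a hypothetical 25.000 mm reference block, +/-0.005 mm band:
ok = verify([25.002, 24.998, 25.001], nominal=25.000, tolerance=0.005)   # True
bad = verify([25.002, 25.007], nominal=25.000, tolerance=0.005)          # False
```

Logging each check result also builds exactly the kind of per-unit history that was missing in the original post, so later interval changes have data behind them.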

Now, I don't know if you calibrate in-house or have an outside vendor do it, but verifications might help you somewhat.

Dave Hiser

Gage calibration intervals

:) I want to thank those who replied with excellent responses. I am a UAW member who actually works, and am trying hard to do the RIGHT THING.
Here is a little history behind my original post. We are an in-house lab that repairs most of our own gaging. Many items don't last more than a week or a month in the harsh production environment. There was a transition last year from Gage Trak to IQS, which allowed us to be intranetted with other plants. During the transfer of data, many of the records were transferred in error, which caused many rejects to show up as passed; and because adjustments, repairs, and servicing had not been documented, the records were incomplete.
This in turn allowed not upper management but lower management to take advantage of the data transfer, which showed us in better shape than we really were.
I took a risk and changed most of the intervals back to their original recall periods. Of course, this has not gone over well, and I am currently being put on notice for doing so. That is one of the reasons for asking the experts here at the Cove for some sound fundamentals that I can use to persuade those in charge that this was not done in the best interest of the company or the customer.
Any more helpful tips and literature would be extremely helpful.

To steal a quote "I'm smiling because I don't know what's going on!"