Adjustment of Calibration Intervals (Calibration Frequency)

Jerry Eldred

Forum Moderator
Super Moderator
I am in the middle of a thorough evaluation of the intervals for 86,000 calibrations for my company. I'm curious to benchmark some of the criteria used by everyone out there. There doesn't seem to be a consensus or best practice on some of the base statistical parameters. Please post your inputs or thoughts on the parameters below:

1. Reliability Target (percent in-tolerance): 80% to 95% seems to be the norm. What does everyone think?

2. Confidence Factor (K factor for making cal interval adjustments): I've seen anything from 20% to 99.9999%.

3. Maximum Interval (electronic test equipment, mostly): 3 - 5 years seems to be the norm.

4. Minimum Interval (not so critical): Some special cases probably once a day; but for most instruments, I've seen 1 - 3 months as the norm.

5. Renewal (optimize or adjust to nominal): The most common answers are only when out of tolerance, when beyond 70% of tolerance, when beyond 95% of tolerance, or always.

6. Interval Evaluation/Adjustment Method: Common answers are NCSLI RP1 Method A3 based on % in-tolerance, statistical evaluation of data, other NCSLI RP1 methods.

OTHER OPTIONS:
1. Fixed interval: Pick a convenient interval and leave it there.
2. The Old "1-2-3" Method: I don't like it, but a common method (or variant) is to calculate the % in-tolerance for each instrument at every 3rd calibration (or some other low number). Based on the % in-tolerance, the interval is either bumped up by 20%, left the same, or bumped down by 20% or 50% (a rough sketch of this kind of rule is shown below). This (in my opinion) is an over-reactive method that over-adjusts, often leaving the interval either too short or too long.
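
For illustration only, here is a minimal sketch of what such a reactive rule might look like; the thresholds and step sizes are hypothetical, not taken from any standard or from any particular lab.

Code:
def reactive_adjust(interval_months, pct_in_tolerance):
    """Reactive "1-2-3"-style rule: change the interval based only on the
    most recent percent-in-tolerance figure (illustrative thresholds)."""
    if pct_in_tolerance >= 95:
        return interval_months * 1.20   # lengthen by 20%
    elif pct_in_tolerance >= 85:
        return interval_months          # leave alone
    elif pct_in_tolerance >= 70:
        return interval_months * 0.80   # shorten by 20%
    else:
        return interval_months * 0.50   # shorten by 50%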

Looking forward to your inputs. -Jerry
 

Hershal

Metrologist-Auditor
Trusted Information Resource
Hi Jerry,

Reliability target is ok.

Confidence is typically k=2 to approximate 95% (k=2 is actually about 95.45%, and k=3 is about 99.7%). k=2 is easier, and more appropriate for a non-NMI calibration.
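
As a quick check on those coverage figures, assuming normally distributed errors (this is just the standard normal two-sided coverage probability, nothing calibration-specific):

Code:
from scipy.stats import norm

# Two-sided coverage probability of a normal distribution for coverage factor k
for k in (1, 2, 2.576, 3):
    print(f"k = {k}: {2 * norm.cdf(k) - 1:.4%}")
# k=1 -> ~68.27%, k=2 -> ~95.45%, k=2.576 -> ~99.00%, k=3 -> ~99.73%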

For intervals I suggest using RP-1 from NCSLI. It has been my experience that electronic equipment is better suited to a max of 12 months rather than a max of 36 months; remember, electronic instruments are more likely to drift than a set of calipers or a gage block. I could see gage blocks going to 36 or even 60 months IF they are kept in a tightly controlled environment and rarely used.

For min, 1-3 months is typical.

If adjustment is required, always shoot for nominal; that gives the greatest chance of staying in tolerance longer.

RP-1 is a good method. If you have access to the Navy or AF interval information, that is better.

For some things, a set interval, based on storage or use, is ok. Gage blocks are an example. Oscilloscopes (older ones) may have MIPs (Multiple Interval Parameters).

Just my thoughts. Hope this helps.

Hershal
 

Randy Stewart

Back in the day when I was a cal tech, we used the 10% figure for an increase in interval. As long as you called out the cal frequency units and kept them the same, we never had an issue.
If you call out a 12 month interval for a thread plug, then the 10% increase would be 1.2 months. Just keep the measuring units the same.
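
A tiny worked example of that arithmetic, using the thread-plug numbers from the post (everything else is just illustration):

Code:
old_interval = 12.0                      # months
increase = 0.10 * old_interval           # 10% of 12 months = 1.2 months
new_interval = old_interval + increase   # 13.2 months, same units throughout
print(new_interval)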
 

Graeme

My thoughts ...

Jerry,

Like you, virtually all of my workload is electronic test & measuring equipment. Here's what I do. It's based on experience in the Government (Department of the Navy), a regulated industry where safety is paramount (aviation) and other experience acquired from a number of sources.

1. I use 95% as the reliability target - the probability of a single item being in-tolerance at the end of the calibration recall period. In another industry I might go to 90%, but I'm personally not comfortable with anything lower than that. Anything higher than 95% is really not cost-effective because it takes way too long to collect "enough" data.

2. I use 95% as the confidence factor - the probability of making a correct in/out tolerance decision. Some would like to see 100% here, but the only way to get that is if it never leaves the cal lab, in which case it is perfectly reliable but totally useless to the end user.

3. I use a maximum interval of 60 months (5 years). The only product I have ever seen a longer interval on is waveguide directional couplers (72 months), but about the only way they change is if they are physically damaged.

4. The minimum interval used right now is 1 month, but that is a special case. The "normal" minimum is 3 months. Some infrequently used equipment may be labeled "Calibrate before each use."

5. Adjustment (renewal) is performed if the measurement is more than +/- 75% of the calibration limits, or at a different value if specified in the calibration procedure. If the TAR is less than 4:1 I may specify a different adjustment limit in the procedure.

6. The lab supervisor recently purchased a commercial interval analysis program and that is being used now. It does some fancy statistical analysis of the data. Before that, I used a spreadsheet I developed that is based on method A3 but uses the binomial distribution to estimate an interval. (I have attached a copy of the spreadsheet; a rough sketch of that kind of binomial check appears below, after the other options.)

Other - fixed interval: I've never used that ... it seems somewhat foolish to me because you never have the opportunity to improve reliability by reducing an interval, or save money while maintaining the same reliability by increasing the interval.

Other - 1-2-3: I have never used this type of method but I have seen it used in many places. I will observe that RP-1 specifically states that this and other reactive methods (A1, A2 and A3 if used without statistical methods) never settle to a stable value. As you said, the interval is always either too long or too short.
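
To show the idea behind the binomial check mentioned in item 6 above: this is not the attached spreadsheet or the RP-1 A3 algorithm itself, just a sketch assuming a fixed reliability target, one-sided Clopper-Pearson bounds, and a made-up 338-of-350 result for a meter group.

Code:
from scipy.stats import beta

def reliability_bounds(in_tol, total, confidence=0.95):
    """One-sided Clopper-Pearson bounds on the true in-tolerance probability,
    given in_tol passes out of total calibrations."""
    alpha = 1.0 - confidence
    lower = beta.ppf(alpha, in_tol, total - in_tol + 1) if in_tol > 0 else 0.0
    upper = beta.ppf(1.0 - alpha, in_tol + 1, total - in_tol) if in_tol < total else 1.0
    return lower, upper

def suggest_action(in_tol, total, target=0.95, confidence=0.95):
    """Only recommend a change when the data are statistically
    inconsistent with the reliability target."""
    lower, upper = reliability_bounds(in_tol, total, confidence)
    if lower > target:
        return "consider lengthening the interval"
    if upper < target:
        return "shorten the interval"
    return "leave the interval alone - data are consistent with the target"

print(suggest_action(338, 350))   # hypothetical one-cycle result for the group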

Additional thoughts:

I never try to do interval analysis on a single item or even on a small number of similar items. It takes so long to get a statistically significant sample size that it often exceeds the useful lifetime of the equipment. (And that's almost exactly what RP-1 says as well.)

I try to group things by model where possible and have all of them on the same interval. For example, my client has about 350 Fluke 87 digital multimeters which is a large sample. However, there are also a lot of 3-1/2 digit multimeters from various manufacturers, and while there are over 100 total there are fewer than 10 from any single manufacturer. Since they are similar in type, function and accuracy all of those are treated together as a group for interval analysis.

I am very conservative in extending calibration intervals - I don't take the numbers at face value. I suggest doing no more than doubling the current interval even if the numbers say it could be more. One analysis showed that a particular model of instrument, that had been on a 12 month interval, could be extended to something like 9-1/2 years -- it was actually changed to 24 months, and will never be more than 60 months.
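
The conservative extension policy described above could be written roughly as follows; it is only a sketch, with the 60-month ceiling and the 12-month/9.5-year example taken from the post.

Code:
def capped_extension(current_months, analysis_months, ceiling_months=60):
    """Never more than double the current interval, and never exceed the ceiling."""
    return min(analysis_months, 2 * current_months, ceiling_months)

print(capped_extension(12, 114))   # analysis said ~9.5 years; policy caps it at 24 months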

After changing an interval, it is best to wait for at least the time (old interval + new interval) before analyzing that group again. An axiom in RP-1 is that all items in the group must have been on the same interval, so that amount of time allows something that was calibrated the day before the change to complete the old interval and one cycle of the new interval.

For ease of calculation, all time intervals should be in the same units. That is why I always use months. It is a convenient size; years are not appropriate for a lot of things, and weeks or days rapidly balloon into numbers that are hard to grasp. (How long is 1,461 days? How long is 208 weeks? Both of those equate to 48 months.) If a calculation results in a fractional part, the result is always rounded down to an integer.

This lab's definition of "calibration" specifically excludes anything that the equipment manufacturer may call calibration but is really setup, normalization, standardization or error-correction storage that is performed by the operator before use. The definition also recognizes that a large number of "calibration" procedures in equipment manuals are really factory adjustment procedures. According to NCSL RP-3, a fundamental assumption for writing a calibration procedure is that the item to be calibrated is in good working order and ready for use in every respect -- and the purpose of calibration is to verify that status by using traceable standards. On the other hand, many manufacturer procedures are adjustment or alignment procedures intended for use after the item has been manufactured or repaired, and is therefore in an unknown state.

Increasing and decreasing calibration intervals can be justified with different amounts of data. Increasing an interval requires a large sample size in order to get statistically significant results. Decreasing an interval can be justified with much less data. In the case of the product that is on a 1 month interval - it had been at three months, but between the three units there had been five failures in the previous 12 calibrations. Even without doing the math it is easy to see that this is way less than 95% reliability, and they would have to pass all of the next 100 calibrations to have a chance of meeting it. There was not a chance of that happening, so it was an easy decision.
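
A rough point-estimate check of that example (5 failures in the last 12 calibrations, pooled across the three units; a formal confidence-bound test would demand even more future passes):

Code:
in_tol, total = 7, 12
print(f"observed reliability: {in_tol / total:.0%}")   # about 58%, far below 95%

# Consecutive future passes needed just to pull the pooled point estimate
# back up to the 95% target, assuming no further failures:
n = 0
while (in_tol + n) / (total + n) < 0.95:
    n += 1
print(n)   # 88 straight passes would be required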
 

Attachments

  • A3 testing.xls (20 KB)

Charles Wathen

Involved - Posts
As usual, Graeme is right on. I went through the same dilemma 4 years ago, until I stumbled upon a software program called Interval Analysis by Dr. Howard Castrup. You can get the free software here: http://www.isgmax.com/Freeware/A3Test.exe
main page:
http://www.isgmax.com/home_l.htm

It's basically the same as Graeme's spreadsheet program.

I perform this analysis once a year, grouping most of the MTE into families. We have ~13,000 instruments in our database. Once I have the analysis completed, I send the results (via email) to our entire Engineering staff for review and comments. Most of the Engineers recommend that I do this more often, but I have tried to explain to them that it is based on history, and that history takes time to accumulate.

Since we started doing interval analysis, auditors have been impressed that it is done at all. As Graeme mentioned, items whose intervals would fall below one month are removed from our calibration system and tagged with a do-not-use label. The MTE that falls into this category will need to be replaced, redesigned, or have its tolerances opened so it does not fail in future analyses.
 

normzone

Trusted Information Resource
---" I am very conservative in extending calibration intervals - I don't take the numbers at face value. I suggest doing no more than doubling the current interval even if the numbers say it could be more. One analysis showed that a particular model of instrument, that had been on a 12 month interval, could be extended to something like 9-1/2 years -- it was actually changed to 24 months, and will never be more than 60 months.

After changing an interval, it is best to wait for at least the time (old interval + new interval) before analyzing that group again. An axiom in RP-1 is that all items in the group must have been on the same interval, so that amount of time allows something that was calibrated the day before the change to complete the old interval and one cycle of the new interval. "---

What kind of language would you use in a calibration procedure (AS9100) to reference a caveat allowing the calibration interval to be extended for tools demonstrating consistent calibration status or little use?

Thanks!
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
If adjustment is required, always shoot for nominal; that gives the greatest chance of staying in tolerance longer.

If your most significant variation is drift, and, for example, the drift is upward, wouldn't it be better to adjust down to below the nominal to allow for the most drift? That would give the greatest chance of staying in tolerance longer for that case.
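
A toy linear-drift illustration of that point (all numbers hypothetical, and it ignores the lower tolerance limit and any non-linear behavior):

Code:
def months_until_limit(start_value, drift_per_month, upper_limit):
    """How long an upward-drifting parameter takes to reach its upper limit."""
    return (upper_limit - start_value) / drift_per_month

# Tolerance of +/- 1.0 unit, upward drift of 0.05 unit per month:
print(months_until_limit(0.0, 0.05, 1.0))    # adjusted to nominal: 20 months
print(months_until_limit(-0.5, 0.05, 1.0))   # adjusted below nominal: 30 months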

Something to ponder.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
I might also ponder the validity of stability data collected only every 3 years or so. It may be good, I don't know, but I ponder it...
 

dmadance

I would be careful about adjusting to nominal at each calibration. Depending on the cycle, it might actually decrease the percentage of in-tolerance conditions. One concern is illustrated by the "funnel experiment" that is often used in quality control training: if you drop balls through a funnel while trying to hit a target, and you adjust the position of the funnel after each drop, the net effect is to increase the variability of the landing positions. If you instead drop 100 balls and get some idea of the mean position and the variance, you can then make an adjustment that is meaningful.
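
A quick simulation of the funnel experiment's Rule 1 (leave the funnel alone) versus Rule 2 (compensate for the last error) shows the effect; the numbers are generic, not calibration data:

Code:
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
noise = rng.normal(0.0, 1.0, n)

# Rule 1: never move the funnel - landing positions are just the noise
rule1 = noise

# Rule 2: after each drop, shift the funnel opposite to the observed error
rule2 = np.empty(n)
funnel = 0.0
for i in range(n):
    rule2[i] = funnel + noise[i]
    funnel -= rule2[i]            # compensate for the last error

print(f"Rule 1 std dev: {rule1.std():.2f}")   # about 1.0
print(f"Rule 2 std dev: {rule2.std():.2f}")   # about 1.4 - variance doubles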

With some instruments, such as certain frequency standards, adjustments add additional short-term drift; when an adjustment is made, one may have to wait for the induced drift to stabilize to be sure the adjustment will leave the unit in tolerance at the end of the calibration interval.

I would say adjustment rules can't be general; they should depend on the instrument type and the length of the calibration cycle.

With regard to other parts of your post, I personally have heartburn with setting an upper quality level for adjustment of calibration intervals. Say your target is 95%. If you are successful, that means 5% of all instruments are expected to be out of tolerance. Someone implementing something like Six Sigma would be horrified by an accepted 5% defect rate. Much better, I would think, is to have an upper limit on the calibration cycle rather than an upper limit on quality. That being said, this is what I would call a management decision; it is up to the powers that be to determine what level of product quality they are willing to provide.
 