Determination of Calibration Intervals and Frequency Analysis

Marc

Date: Mon, 11 Oct 1999 11:11:30 -0700
From: "Dr. Howard Castrup"
To: iso25
Subject: Definition of Calibration

Karl Haynes added a message to this thread in which he mentions that interval analysis programs may be sensitive to whether parameter adjustments have been made.

This is true. It's also interesting that support for the renew-if-failed (adjust only if out-of-tolerance) paradigm was initially derived from a study that was motivated by just the sort of thing Karl described. In the '70s, a report was written arguing that parameters should not be adjusted during calibration unless they were found out-of-tolerance. This conclusion was foregone; it was the one the authors of the report needed to reach, because the organization sponsoring the report employed an interval analysis methodology that didn't work unless the renew-if-failed policy was in effect.

Of course, if you issue a recommendation for doing nothing (not adjusting), management is going to love it. So, from the late '70s through the '80s, support for the renew-if-failed policy was considerable.

Meanwhile, also in the late '70s, an interval analysis methodology was developed that worked best if the policy was renew-always (adjust at every calibration). This methodology is far more powerful and versatile than the other methodology and can be used to fine-tune intervals to meet reliability targets. It can also be tailored to accommodate a renew-as-needed policy (adjust if found outside "adjustment limits").

Both methodologies are documented in NCSL RP-1. The first methodology is called Method S1, and the second is called Method S2. Another methodology, referred to as Method S3, can also be found in RP-1. With Method S3, it doesn't matter what the policy is. In addition, Method S3 has all the power and versatility of Method S2.

The Calibration Interval Subcommittee of the NCSL Metrology Practices Committee is in the process of refining Method S3. All the math has been done. It now remains to implement the method in a working software algorithm. This effort is in progress.

In the meantime, users of the program 'SPCView' have discovered that the decision of whether or not to adjust during calibration has a profound impact on the calibration interval. We're also finding out that adjusting up or down to compensate for a "known" drift does not always yield the expected result. This is because, while the drift for a given parameter may be downward, for instance, the slope for the uncertainty in the drift may be positive. This sometimes means that adjusting the value of a negative drift parameter upward actually causes the interval to be shortened rather than lengthened.
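
To make that last point concrete, here is a minimal sketch of the effect. This is not SPCView's actual algorithm; the tolerance, drift, uncertainty-growth, and reliability-target numbers are invented purely for illustration of how a downward-drifting parameter whose drift uncertainty grows with time can end up with a *shorter* interval when it is adjusted upward at calibration:

```python
# Hypothetical illustration (not SPCView's algorithm): why adjusting a
# negatively drifting parameter upward can SHORTEN the interval when the
# uncertainty of the drift grows with time. All numbers are invented.
import math

TOL = 1.0          # symmetric tolerance limits, +/- TOL
U0 = 0.30          # standard uncertainty immediately after calibration
DRIFT = -0.05      # "known" drift per unit time (downward)
U_SLOPE = 0.25     # growth of the drift uncertainty per unit time (positive)
TARGET = 0.90      # end-of-period in-tolerance reliability target

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reliability(t, as_left_bias):
    """Probability the parameter is still within +/-TOL at time t."""
    bias = as_left_bias + DRIFT * t
    u = math.sqrt(U0**2 + (U_SLOPE * t)**2)   # uncertainty grows with time
    return phi((TOL - bias) / u) - phi((-TOL - bias) / u)

def interval(as_left_bias, dt=0.01, t_max=20.0):
    """Longest time for which reliability stays at or above TARGET."""
    t = 0.0
    while t < t_max and reliability(t + dt, as_left_bias) >= TARGET:
        t += dt
    return t

print("Adjust to nominal (as-left bias 0.0):  ", round(interval(0.0), 2))
print("Adjust upward to offset drift (+0.3):  ", round(interval(0.3), 2))
# With these numbers the upward-adjusted case crosses the reliability
# target sooner, i.e. it yields a slightly SHORTER interval.
```

Moving the as-left value off-center trades margin at one tolerance limit for margin at the other; when the drift uncertainty grows quickly, the reliability target is crossed before the compensation has a chance to pay off.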

Incidentally, someone mentioned that, from the standpoint of interval analysis, by simply calibrating a parameter, its uncertainty is modified and this constitutes a kind of 'renewal' -- regardless of whether a physical adjustment has taken place. This is true. Consequently, vendors that feel compelled to adjust in-tolerance parameters need not be concerned about upsetting interval analyses that assume a renew-always policy.

But, back to the issue of this thread. Since I deal in uncertainty analysis, risk analysis and uncertainty growth control, I cannot conceive of paying for a calibration without getting information in return. To me, the commercial cal activity does not provide a viable product if all it offers is my equipment returned with an updated cal sticker. In this case, I have NO useful information for adjusting calibration intervals, for evaluating the quality of measurements made with the calibrated items, or for determining whether the commercial lab actually did anything.

To be truly useful, the result of a calibration should include (1) a re-calibrated item, (2) the as-found and as-left values for each of the parameters calibrated, (3) the dates of service, and (4) estimates of the measurement uncertainties for each calibration.
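
For illustration only, a record carrying those four items might look like the following sketch; the field names are mine, not from any standard or commercial calibration system:

```python
# A minimal sketch of a calibration record carrying the four items above;
# field names are illustrative assumptions, not from any particular system.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ParameterResult:
    name: str            # e.g. "10 V DC output"
    as_found: float      # value measured before any adjustment
    as_left: float       # value after adjustment (equals as_found if none made)
    uncertainty: float   # measurement uncertainty of this calibration
    in_tolerance: bool   # as-found condition against the tolerance limits

@dataclass
class CalibrationRecord:
    instrument_id: str
    service_date: date
    parameters: List[ParameterResult] = field(default_factory=list)

record = CalibrationRecord(
    instrument_id="DMM-0042",
    service_date=date(1999, 10, 11),
    parameters=[ParameterResult("10 V DC", as_found=10.0004,
                                as_left=10.0001, uncertainty=0.0002,
                                in_tolerance=True)],
)
```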

Howard Castrup
President, Integrated Sciences Group
Chairman, NCSL Metrology Practices Committee
 

Marc

Another tidbit:

From: "Frans J.C. Martins"
Subject: Re: Uncertainty and Proficiency Testing
Date: Tue, 12 Oct 1999 21:15:52 +0200

Dear Jan and group,

> Date: Fri, 8 Oct 1999 14:44:45 -0700
> From: Jan Johansen
> Subject: Uncertainty and Proficiency Testing
>
> Dear ISO-25er's,
<SNIP>
> 1. The measuring instrument is a 5½-digit DMM. The best resolution on the 20 VDC
> range is .0001. How can the nominal value be reported to .00001? It is my
> contention that a value should never be reported past the maximum resolution
> of the readout.

One cannot report a value to better than one can resolve. The uncertainty of measurement could be calculated to show more resolution, but it is pointless to quote even that part of the uncertainty which is not covered by the resolution of the UUT. Rather, round the UOM off to a more meaningful value.

> 2. The definition of uncertainty says '... the result of a measurement,
> that characterizes the dispersion of the values that could reasonably be
> attributed to the measurand.'
>
> 3. The device providing the measurand is the 5½-digit DMM. The one year
> specification for the DMM is +/- .0015% Rdg + 3 counts, or +/- 0.0018 VDC.
> 4. It doesn't matter whose calibrator you use (Fluke, Wavetek), the majority
> of the uncertainty is in the DMM since it is the unit providing the
> measurand. Since a specification is a valid Type B uncertainty, it appears
> that most of the labs did not include it and only reported the uncertainty
> of their standards, NOT the measurement uncertainty.

Specification is not normally included in an uncertainty budget. The UOM refers to the measurement process only and, for DC voltage, should include at least the following (a worked combination is sketched after the list):
Resolution of the UUT
Resolution of the standard (could be ignored; it is included in the certified UOM reported for the standard)
Calibration uncertainty of the standard, from its certificate
Repeatability of measurement: the standard deviation calculated from a number of measurements
Short-term drift of the UUT and standard
Effects of temperature, if any
Effects of leads and cables, if any
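
As a rough illustration of how such a budget is combined (the values below are invented, and the rectangular-distribution treatment of resolution is a common convention rather than something stated in the post):

```python
# A minimal sketch of combining the contributors listed above into a
# combined and expanded uncertainty (GUM-style root-sum-square).
# The example values are assumptions, not taken from the thread.
import math

# (standard uncertainty in V, description) -- each term already converted
# to a standard uncertainty (e.g. rectangular limits a: u = a / sqrt(3))
contributors = [
    (0.0001 / math.sqrt(12), "resolution of UUT (half a digit, rectangular)"),
    (0.00005,                "calibration uncertainty of standard (certificate, k=1)"),
    (0.00008,                "repeatability (std. dev. of repeated readings)"),
    (0.00003,                "short-term drift of UUT and standard"),
    (0.00002,                "temperature effects"),
]

u_combined = math.sqrt(sum(u**2 for u, _ in contributors))
U_expanded = 2.0 * u_combined   # coverage factor k = 2 (approx. 95 %)

for u, what in contributors:
    print(f"{u:.6f} V  {what}")
print(f"combined standard uncertainty: {u_combined:.6f} V")
print(f"expanded uncertainty (k=2):    {U_expanded:.6f} V")
```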

Regarding the specification, the user of the UUT could use the spec to estimate the accuracy between calibrations. However, with repeated calibrations one builds up a history of the UUT's drift rate, and one can reach the point where the specification is no longer relevant, since the history of the UUT over a period of time has proved otherwise.

> 5. We could have connected the DMM to our 10 V in-house reference, but it
> would not have improved our uncertainty by much, because of what is
> providing the measurement.

The UUT resolution and repeatability are the main contributing factors. If you used a calibrator such as a 5500/5520 or a 9000/9100, or anywhere a TUR of 1:1 is approached, the plot thickens and the factors relating to the standard have to be more critically observed and included in the budget.
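
A quick sketch of the TUR arithmetic being referred to; the 0.0018 V tolerance is the figure quoted from the DMM spec above, while the calibrator uncertainty is an invented value for illustration:

```python
# Illustrative test uncertainty ratio (TUR) calculation.
# uut_tolerance comes from the +/- 0.0018 VDC spec quoted earlier;
# the standard's uncertainty is an assumed example value.
uut_tolerance = 0.0018          # +/- tolerance of the DMM reading (V)
standard_uncertainty = 0.00012  # expanded uncertainty of the calibrator (V)

tur = uut_tolerance / standard_uncertainty
print(f"TUR = {tur:.1f}:1")
# A common rule of thumb treats roughly 4:1 or better as comfortable;
# as TUR approaches 1:1 the standard's contribution must be analysed
# and included in the budget rather than ignored.
```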

Hope this helps!

Regards in Quality
Frans J.C. Martins Reg. Eng. Tech, Rgt. Prod. M
NHD: TQM (TWR), ND: Prod. M. (TWR)
Cert. in QA I & II (C&G of London)
Sen. M: ASQ, SASQ M: PMI(SA), SAIM, SAIMC
_______________________________________

W.W.J.D.? F.R.O.G.
 

Marc

An OLD tidbit:

From: Doug Pfrang
Subject: FYI: Calibration/Pfrang
Date: Tue, 2 Jun 1998 23:47:56 -0600
ISO Standards Discussion

1. As I said before, you do not necessarily need to CALIBRATE a tool unless you first determine that there are no other cheaper alternatives for VALIDATING that tool in your process. Therefore, to all of the ISO consultants out there who have tried to advise people on whether or not to calibrate a tool: you should never suggest that any tool be calibrated until you first determine whether or not that particular company has any other cheaper alternatives for validating that tool (e.g., brute force trial and error). Virtually none of the ISO consultants who have posted to this list appear to do this; instead, they appear to use some other surrogate (and incorrect) criteria for determining whether or not to calibrate a tool. Often, the suggestion is to calibrate the tool simply because the person making the suggestion can't think of any other way to validate the tool (i.e., because they haven't explored this option with the client), and they know that calibration will be easy to sell to the ISO auditor. While this is certainly the easiest advice to give from their point of view (since it is virtually guaranteed to work, and thus make them look good), it is not necessarily the most cost-effective solution for the client.

2. Keep in mind that validating a tool by calibration does not mean the tool is any more reliable than if it is validated by brute force. In either case, the tool can drift out of accuracy between the time you validate it and the time you use it. Therefore, the time interval that you choose between revalidations is NOT a function of your METHOD of validation, and it is NOT a function of the calibration cycle recommended by the manufacturer of the tool; it is SOLELY a function of how stable that particular tool is in YOUR particular process. In other words, you do not automatically have to revalidate a tool annually just because that is what the manufacturer recommends. You may freely adjust the revalidation cycle longer or shorter based on YOUR OWN experience with that tool. If the manufacturer recommends that you recalibrate a tool annually, but you find it perfectly adequate -- in YOUR process -- to revalidate that tool every decade, then you are perfectly justified in revalidating that tool every decade. Similarly, if the manufacturer recommends that you recalibrate a tool annually, but you find it unacceptable -- in YOUR process -- to wait more than six months, then you must shorten the revalidation cycle to six months. YOUR experience with the tool, not the manufacturer's recommendations, governs the length of the revalidation cycle. Thus, if you have a tool (for example, a tape measure) and your experience shows that -- in YOUR process -- you can go ten years before revalidating that tool, then that duration is perfectly acceptable as long as you have some reasonable basis for setting the revalidation cycle at that length. How do you show you have a reasonable basis? Just keep a record declaring that you have used this type of tool for ten years and -- in YOUR experience -- these tools haven't gone out of spec in that time frame, so YOU are going to set the cycle at ten years for YOUR process.

3. Many people have posted statements to the effect that "the reason we don't need to calibrate this tool is because we don't need to make accurate measurements with it." This analysis is also wrong. The ACCURACY you need for your measurement is IRRELEVANT to the question of whether or not you should CALIBRATE a tool. The ACCURACY you need for your measurement is only relevant to your SELECTION of which tool to use: the tool you select must be capable of the accuracy you require. Once you SELECT a tool that is capable of the accuracy you require, whether or not you choose to CALIBRATE that tool does NOT depend on the accuracy you require; the decision to calibrate it depends SOLELY upon your assessment of whether or not calibration is the best way to VALIDATE that tool for your particular process. Therefore, to say that "I don't need to CALIBRATE this tool because I don't need to make accurate measurements with it" is wrong. The fact that you don't need to make accurate measurements guides your SELECTION of which tool to use, but your decision not to CALIBRATE that tool is based on the fact that you can VALIDATE that tool easily by brute force trial and error. You try the tool in your process and it works for you; therefore, you have VALIDATED it without having to CALIBRATE it, and THAT is the reason you do not have to calibrate it. But then many people make the mistake that leads to most ISO nonconformities related to calibration: they neglect to document their validation, which is discussed below.

4. The root cause of almost every ISO nonconformity related to calibration is NOT that the company has failed to CALIBRATE a particular tool; the root cause of the nonconformity is that the company has failed to provide an appropriate QUALITY RECORD showing that a tool has been VALIDATED for the given process. ISO auditors do not look for calibration records per se; they look for QUALITY RECORDS showing that tools have been VALIDATED for the processes in which they are used. The reason that companies often (wrongly) interpret such nonconformances as requiring the tool to be calibrated is that calibration provides a handy QUALITY RECORD for the company's files, which resolves the nonconformity because it provides adequate evidence of VALIDATION of the tool. However, in virtually every instance, the company could also resolve the nonconformity without calibrating the tool, if it would simply VALIDATE the tool using some other appropriate method and produce a QUALITY RECORD showing that the tool has been VALIDATED and the method by which validation was done.

-- Doug Pfrang
 

Roger Eastin

You know, Marc, you have posted several selections from Mr. Pfrang and he obviously knows his stuff when it comes to IM&TE. I need some help here, though. He uses the word "validate" in his discourse, but I don't see that word used in the standard except when it refers to an instrument's calibration. I need to get over this hurdle, because based on validation (for instance, his development of the brute force method of validation), he states that "calibration" is not necessarily the cheapest option. How can validation be used instead of calibration? Is that word in the standard but I am not seeing it?
 

Jerry Eldred

REPLY TO "AN OLD TIDBIT"

The author of that post is well versed in systems, but I honestly have some difficulty with some of the 'roads' he chose to travel. I found his views quite confusing (and I have more than 20 years in metrology).

I think he missed some key points.

1. The choice whether or not to calibrate is a very simple one. Do you require a specified accuracy in your process? If your process requires that your product meets any given specified parameter (such as a DC voltage level, a torque strength, a length, a weight), then you MUST calibrate. It's that plain and simple.

The areas the author describes are issues outside of point one. If you do not have to calibrate, then there may be tools that still require some sort of validation (I don't like this term; its use really seems to be restricted to the pharmaceuticals industry). But validation is something over and above calibration. After you have calibrated all of your measurement tools properly, there is still validation, which as I understand it is kind of like a process qualification. But since I am not a pharmaceuticals guru, I won't try to venture views on that.

Once you have determined that you need to calibrate (per (1) above), intervals must be set accordingly. I might add that if your system is QS9000 compliant, then there is the note after 4.11.2 (I believe that is the correct paragraph number) saying that you must comply with ISO 10012-1. Alternatively, there is compliance to ANSI/NCSL Z-540.

Interval determination is based on how well your calibrated instruments meet their tolerance specs. A typical method is to annually review all calibrations done on a certain type of instrument (usually a manufacturer model or manufacturer/model family). Review what percentage of those calibrated over the course of the year were received in-tolerance and adjust the calibration interval accordingly. By the way, I have yet to see an industry standard for what that percentage figure should be; 95% seems to be a common figure. Below 95%, shorten the interval; above 95%, lengthen it. This should be an incremental function, as sketched below.
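
A minimal sketch of that incremental rule follows. The 95% target is the common figure mentioned above, but the step sizes (shorten by 25%, lengthen by 10%) are arbitrary illustrative choices, not an industry standard:

```python
# Illustrative incremental interval adjustment from one year of history.
# The 95 % target mirrors the figure mentioned above; the 25 %/10 % step
# sizes are assumptions chosen only for this sketch.
def adjust_interval(current_months: int, n_calibrated: int, n_in_tol: int,
                    target: float = 0.95) -> int:
    """Lengthen or shorten a calibration interval based on observed in-tolerance rate."""
    observed = n_in_tol / n_calibrated
    if observed < target:
        # Too many received out of tolerance: shorten the interval.
        new = current_months * 0.75
    elif observed > target:
        # Comfortably above target: lengthen incrementally.
        new = current_months * 1.10
    else:
        new = current_months
    return max(1, round(new))

# e.g. a model family on a 12-month interval, 40 calibrated, 35 found in tolerance
print(adjust_interval(12, 40, 35))   # 87.5 % in tolerance -> shortened to 9 months
```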

Back to the original post. I have some difficulty with 'brute force trial and error'. This sounds dangerously close to what I have seen and been troubled by in some semiconductor processes. The thought is to get to some fictitious "nominal" point where your process works, and once you have found it, keep it there through internal intercomparisons. That doesn't work. Try as you might, that process will drift over time, and without calibration you will never know where it has drifted to. This is precisely (in the non-metrological sense) what NIST traceability is all about. There are a lot of very educated people in the industry who do not understand NIST traceability or its importance. Calibration and NIST traceability are one and the same; you can't have one without the other (or some other international standards, of course).

That's my two cents.

------------------
 

Marc

I guess I should start out by saying I do not totally agree with Doug P. I posted this as 'food for thought' and I pointed out that this was an OLD posting -- well, two years old.

Jerry's point is well taken and I concur. Roger, as you point out, it is somewhat confusing, and I myself am somewhat confused by the 'validate' bit, especially the 'brute force' bit.

Thinking about it a bit, as I remember, verification, not validation, is a part of calibration. I took Doug's rant as 'don't calibrate everything in the shop', i.e. not the things unrelated to critical (inspection and test) measurements. For example, should a toolmaker keep all of his/her instruments calibrated when the 'important' dimensions are checked on the output item?

All in all, I believe Jerry's response was much clearer and to the point. Maybe Doug had a beer or a smoke before he wrote his rant... As always, Jerry, I thank you for a clear, insightful response!
 

Jerry Eldred

Marc - You're Welcome. Just hope I have been able to help.

As an aside, I started in my new position today, back in the world of metrology. So, after having tied up some loose ends in my auditor position, I will be 100% metrology. I really enjoyed the subcontractor world (learned a lot, high pressure, high profile, and even made regular presentations to vice presidents and so forth), but metrology is my life (day job life, anyway). So I look forward to getting my library set back up by next week and trying to have a real impact.

------------------
 

Nicole Desouza

Hi Jerry,
I have been searching posts about calibration of equipment to find the best practice for handling a nonconformance related to out-of-calibration equipment. I want to establish which steps to take when measurements were taken with test equipment that was out of calibration, or overdue for calibration (calipers, for example), and we have already shipped the parts to our customer. We haven't actually done this; I am just planning for the worst-case scenario.
1. How do we notify the customer?
2. What do we say in the notification (are we recalling the parts)?

I think that would fall under control of nonconforming material.

Would you happen to have any suggestions?
Thanks,
Nicole
 

Jim Wynne

You've replied to a thread that is nearly 20 years old, and to Jerry Eldred, who hasn't logged in here for well over two years. It's always best to start a new thread in these cases.

You seem to be asking two different questions: how to deal with nonconforming material that's been shipped due to measurement with an out-of-tolerance device, and what to do when you know that product was measured with an out-of-calibration device but the parts aren't necessarily out of tolerance. First, if you know that the material shipped is likely to be nonconforming, the customer should be notified immediately and containment should be done in general: identify any in-house material, or material shipped to other customers, that might have been measured with the same device. On the other hand, if you've found that product has been measured with an out-of-calibration (but not necessarily out-of-tolerance) device, you should investigate to determine what actions to take. For example, if a device is past its calibration date but still in tolerance, there isn't anything you need to do except fix the calibration system. If a device is out of its calibration tolerance, but has been used to measure dimensions that won't be affected by the issue, there is also not much to do but work on the system.

The key to the whole thing is containment and notification when necessary, and understanding how the out-of-calibration condition might have affected decisions regarding product conformance.
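
Paraphrasing that disposition logic as a rough sketch (the categories, wording, and function name are mine, not a procedure from any standard):

```python
# A rough, hypothetical paraphrase of the decision flow described above.
def out_of_cal_disposition(device_out_of_tolerance: bool,
                           affected_dimensions_at_risk: bool,
                           product_likely_nonconforming: bool) -> str:
    if product_likely_nonconforming:
        return ("Notify the customer immediately and contain: identify in-house "
                "material and material shipped elsewhere that was measured with "
                "the same device.")
    if device_out_of_tolerance and affected_dimensions_at_risk:
        return "Investigate the affected measurements before deciding on notification."
    if device_out_of_tolerance:
        return "Measured dimensions unaffected: correct the calibration system."
    return "Past due but still in tolerance: fix the calibration system only."

print(out_of_cal_disposition(device_out_of_tolerance=True,
                             affected_dimensions_at_risk=False,
                             product_likely_nonconforming=False))
```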
 

Nicole Desouza


Thanks Jim,
I didn't realize how old the post was. Thanks for your explanation as well.

Nicole
 