ISO 9001:2008 Calibration of Measurement Equipment Requirements

Helmut Jilling

Auditor / Consultant
The discussion is (I thought) about what's required by the standard and not about what might be prudent. We should all do what makes sense for the application, but not allow the tail to wag the dog. Nonetheless, we all have to choose our battles carefully. I've been in situations where there was extensive use of linear scales (measuring for cutting fabric and sheet vinyl) where the tolerance was so liberal (and nonconformities never happened) that it simply made no sense to calibrate--it served no useful purpose.

Hi Jim,

The standard allows the QMS to define the type, method, frequency, records, etc., pretty liberally, to suit an organization's needs. It is easier to scale that part of the methodology to reduce non-value-added calibration than it is to mount a convincing argument to an auditor that some gages don't need any calibration or verification.

In the example of the steel rules, I would expect that they were verified when put into use, and periodically verified at least as to their condition. That can be done at any frequency the organization can demonstrate is effective. It would be much harder to persuade me that no calibration at all would be acceptable.
 

Jim Wynne

Leader
Admin
Hi Jim,

The standard allows the QMS to define the type, method, frequency, records, etc., pretty liberally, to suit an organization's needs. It is easier to scale that part of the methodology to reduce non-value-added calibration than it is to mount a convincing argument to an auditor that some gages don't need any calibration or verification.

In the example of the steel rules, I would expect that they were verified when put into use, and periodically verified at least as to their condition. That can be done at any frequency the organization can demonstrate is effective. It would be much harder to persuade me that no calibration at all would be acceptable.

If I had objective evidence to support a decision not to calibrate, you would have to persuade me that calibration is necessary to ensure valid results. I don't know why that simple phrase is so hard for so many people to grasp.
 

Helmut Jilling

Auditor / Consultant
If I had objective evidence to support a decision not to calibrate, you would have to persuade me that calibration is necessary to ensure valid results. I don't know why that simple phrase is so hard for so many people to grasp.

I am afraid you subtly changed the wording of the requirement, which changed the meaning significantly.

The standard does not say,
where "calibration is necessary to ensure valid results..."

It actually says "where necessary to ensure valid results, measuring equipment shall be calibrated or verified..."

At a glance it may seem a subtle difference. But, "where necessary" applies to the measuring equipment, not the calibration exercise. In other words, it does not say calibration is only necessary when you deem it to be needed. It says when you use a gage that has to show valid results, it shall be calibrated. You have the option to exclude gages which do not need to provide valid results. You don't have the option to exclude needed gages from calibration. A VERY significant difference in emphasis.

I am still waiting for a client to show me gages which would be suitable to use even if they show INvalid results.

However, as I indicated in my post, we may agree the standard gives a significant amount of room to define how much and how comprehensive the calibration or verification must be. In some cases, it could be a very brief and basic exercise.
 

Jim Wynne

Leader
Admin
I am afraid you subtly changed the wording of the requirement, which changed the meaning significantly.

The standard does not say,
where "calibration is necessary to ensure valid results..."

It actually says "where necessary to ensure valid results, measuring equipment shall be calibrated or verified..."

At a glance it may seem a subtle difference. But, "where necessary" applies to the measuring equipment, not the calibration exercise. In other words, it does not say calibration is only necessary when you deem it to be needed. It says when you use a gage that has to show valid results, it shall be calibrated. You have the option to exclude gages which do not need to provide valid results. You don't have the option to exclude needed gages from calibration. A VERY significant difference in emphasis.

I am still waiting for a client to show me gages which would be suitable to use even if they show INvalid results.

However, as I indicated in my post, we may agree the standard gives a significant amount of room to define how much and how comprehensive the calibration or verification must be. In some cases, it could be a very brief and basic exercise.

I don't want to do a grammar lesson here, but the adjective "necessary" modifies "calibration," not "measuring equipment." That's the only way to parse it that makes sense. Look at it this way: What's necessary (in certain instances), calibration or measuring equipment?
 

BradM

Leader
Admin
This is a really good discussion, all. :agree1:

If I'm measuring something with a linear scale, and the results produced are valid (i.e., there's no evidence that lack of calibration has ever resulted in anything bad happening), it's wasteful to do calibration and it's not required by the standard.

How would one know the results are valid? Against what standard would you say they're valid? When I use anything to measure something else, I am making some kind of assumption that the item is more correct than the item I'm measuring. If your standard has always been 10 psi off, and all the devices have been 10 psi off, the error may never be observed. By the time you find out you're having problems, how many products have you already shipped?

That said, it's not wasteful to perform calibrations, and they can be considered a prevention cost, IMHO.

I do agree there is a lot of wasteful calibrations. I think people calibrate a lot of instruments/equipment that are not used; very wasteful IMO.

How can you call the measurement valid if a calibrated device is out of tolerance and no one knows it? The act of calibration, in and of itself, doesn't ensure valid results.

I simply don't follow that at all. Properly calibrating a device... well... let me back up. If prior to any adjustment I verify the performance of the instrument, that is a qualification. I am qualifying its performance to some specification. Then come the adjustment and the final readings.

You're correct, using an out-of-tolerance instrument will not produce a valid result. Hence, having an effective calibration program will increase the validity of the measurement process.

When something measured is found to be perilously close to a specification limit, no one should assume that the measurement is accurate, regardless of the calibration status of the device, and I mean even if the thing was calibrated an hour ago. The verification process should include verifying the device (or using another device of known accuracy), or the method, or the operator, or all three.

Totally agree on this one. :agree1: Well stated. The process needs to be validated so that it produces valid results. Having properly qualified instruments will be necessary.

Jim, I have learned from you on this subject to apply a little more thought to a calibration process. :yes: I think that so many programs just start having everything under the sun calibrated, usually at way too frequent intervals. That is, calibration activities come out of knee-jerk reactions to the standards, and not a critical review of the processes and what is needed.

I just would be concerned with creating a scenario where you calibrate nothing and then justify each calibration, instead of the opposite: calibrate your instruments, then back off from there. Sometimes I find the hunt is not worth the kill. :agree1: I can have an instrument calibrated annually and satisfy most all requirements for that instrument.
 

BradM

Leader
Admin
However, as I indicated in my post, we may agree the standard gives a significant amount of room to define how much and how comprehensive the calbration or verification must be. In some cases, it could be a very brief and basic exercise.

:agree1: Exactly. And something I would venture most calibration programs miss. They don't manage their process. They see an instrument and have it calibrated, and then many times on too frequent a cycle. This translates to time and money.

It's like cleaning out a closet or a garage. Make one pass: if you haven't touched an item in a year (or whatever), you don't need it. Take those instruments/equipment and get rid of them (or quarantine them). Then, for what is left, determine what is important: risk factors, the ability to perform cross-verifications, etc.

There are all kinds of methods to perform cross-verifications of instruments, catching instruments that are drifting before an out-of-tolerance condition affects the process. But this takes time, and it will require initial data to build the case.
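The cross-verification idea above can be sketched in code. This is a hypothetical illustration, not something from the thread: the instruments, the readings, and the 0.003 mm limit are all invented for the example.

```python
# Hypothetical sketch of a cross-verification check: two gages periodically
# measure the same reference artifact, and any check where they disagree by
# more than an allowed limit is flagged for investigation.

def cross_verify(readings_a, readings_b, limit):
    """Compare paired readings from two instruments measuring the same
    artifact; return the checks whose disagreement exceeds the limit."""
    suspect = []
    for i, (a, b) in enumerate(zip(readings_a, readings_b)):
        diff = abs(a - b)
        if diff > limit:
            suspect.append((i, a, b, diff))
    return suspect

# Two micrometers reading the same 25 mm gage block over six periodic checks.
mic_1 = [25.001, 25.000, 25.002, 25.001, 25.004, 25.006]
mic_2 = [25.000, 25.001, 25.001, 25.000, 25.000, 25.001]

# Flag any check where the instruments disagree by more than 0.003 mm;
# a growing disagreement suggests one of them is drifting.
drift = cross_verify(mic_1, mic_2, limit=0.003)
print(drift)  # the last two checks are flagged as the disagreement grows
```

Note that this only tells you the two instruments disagree, not which one is wrong; that is where the initial data (and an occasional check against a higher-level standard) comes in.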
 

Helmut Jilling

Auditor / Consultant
I don't want to do a grammar lesson here, but the adjective "necessary" modifies "calibration," not "measuring equipment." That's the only way to parse it that makes sense. Look at it this way: What's necessary (in certain instances), calibration or measuring equipment?

Jim, I will respectfully disagree. You are moving the subject and object in the sentence, which changes the meaning in the way I indicated. You may hold to your view if you wish, but I don't agree. The standard does not give you the choice of whether to calibrate or not. I have never heard it interpreted that way.
 

Jim Wynne

Leader
Admin
Jim, I will respectfully disagree. You are moving the subject and object in the sentence, which changes the meaning in the way I indicated. You may hold to your view if you wish, but I don't agree. The standard does not give you the choice of whether to calibrate or not. I have never heard it interpreted that way.
The subject of the whole clause is to establish requirements for what must be calibrated (measuring equipment) and how to determine when calibration is required: where necessary to ensure valid results. If the idea were that all measuring equipment, without exception or qualification, must be calibrated, then the phrase "where necessary to ensure valid results" wouldn't be necessary. It's there for a reason. If you're saying that 7.6 requires calibration of all measuring equipment, you have to account for the presence of "where necessary..."
 

Chance

If I'm measuring something with a linear scale, and the results produced are valid (i.e., there's no evidence that lack of calibration has ever resulted in anything bad happening), it's wasteful to do calibration and it's not required by the standard.

How can you call the measurement valid if a calibrated device is out of tolerance and no one knows it? The act of calibration, in and of itself, doesn't ensure valid results.

See above. When something measured is found to be perilously close to a specification limit, no one should assume that the measurement is accurate, regardless of the calibration status of the device, and I mean even if the thing was calibrated an hour ago. The verification process should include verifying the device, (or use of another device of known accuracy) or the method or the operator, or all three.
Jim,

I am doing in-house calibration/validation of our scales to avoid wasting money. I have some confusion: I bought a 500 g test weight with a cert traceable to NIST, and when I look at the accuracy of the test weight it says 50 mg to 50 g. What does this mean?
Per your post:
"How can you call the measurement valid if a calibrated device is out of tolerance and no one knows it? The act of calibration, in and of itself, doesn't ensure valid results." - How do I avoid this situation?

Thanks,
Chance
 

The Specialist

Jim,

I am doing in-house calibration/validation of our scales to avoid wasting money. I have some confusion: I bought a 500 g test weight with a cert traceable to NIST, and when I look at the accuracy of the test weight it says 50 mg to 50 g. What does this mean?
Per your post:
"How can you call the measurement valid if a calibrated device is out of tolerance and no one knows it? The act of calibration, in and of itself, doesn't ensure valid results." - How do I avoid this situation?

Thanks,
Chance

When you buy test weights, you buy a certain 'class' (accuracy), with the more accurate classes being more expensive.

Also, be sure that you read the 'actual' weight and not the 'nominal' weight.
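To see why the 'actual' value matters, here is a small sketch with made-up numbers for a "500 g" weight: computing balance error against the nominal value instead of the certificate's actual (conventional mass) value misstates the error.

```python
# Hypothetical numbers: a "500 g" test weight whose traceable certificate
# reports an actual (conventional mass) value slightly off nominal.
nominal = 500.0000         # g, the value stamped on the weight
actual = 499.9987          # g, the value reported on the certificate
balance_reading = 500.002  # g, what the balance under test displays

# The certificate's actual value is what the balance should have read,
# so the error should be computed against it, not against the nominal.
error_vs_nominal = balance_reading - nominal
error_vs_actual = balance_reading - actual

print(f"error vs nominal: {error_vs_nominal:+.4f} g")
print(f"error vs actual:  {error_vs_actual:+.4f} g")
```

With these invented numbers, using the nominal value would understate the balance error by the 1.3 mg correction on the weight itself.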

See this table for help:

  • Statement of Accuracy: This document states that the mass (weight) has been compared to a known standard. The standards used have traceability to NIST, and the certificate will list the nominal value, description of weight(s) or weight set(s), material, class and tolerance. This is neither a traceable nor an accredited document.
  • Statement of Accuracy with Serial Number: Same as the Statement of Accuracy above, plus a serial number is included on the weight(s) or weight set(s) and the corresponding certificate.
  • Traceable Certificate: Includes all the information on the Statement of Accuracy with Serial Number, plus actual weight values as well as uncertainties and tolerances. Traceable to NIST. The uncertainty-to-tolerance ratios are not guaranteed. This is not an accredited document.
  • NVLAP Certificate: This document conforms to ANSI/NCSL Z540-1 and includes all the necessary information required by the superseded Military Standard Spec 45662A. The following information is provided: nominal and correction values, tolerance for the specific class, assumed density, and the environmental conditions present at the time the tests were performed. Weights that include this certificate are traceable to NIST. This is an accredited document.


ANSI/ASTM E617:
Standard Specification for Laboratory Weights and Precision Mass Standards
Mostly used in the United States, this specification covers various classes of weights and mass standards used in laboratories ranging from Class 0 to Class 7. Tolerances and design restrictions for each class are described in order that both individual weights and weight sets can be chosen for the appropriate applications. This specification also recognizes OIML R 111 that describes classes E1, E2, F1, F2, M1, M2 and M3.
OIML R 111:
Weights of Classes E1, E2, F1, F2, M1, M2, M3
This international document describes the physical characteristics and metrological requirements of weights that are used for the verification of weighing instruments, for the verification of weights of a lower class of accuracy and with weighing instruments. This document includes a recommendation for seven classes of weights in tiers of uncertainty.
NIST:
Specifications and Tolerances for Field Standard Weights
These specifications and tolerances are specific for reference and field standard weights (NIST Class F). Reference and field standard weights are used to test weighing devices where the weight of the item is required to determine the item's price. This document sets minimum requirements for standards used primarily to test commercial or legal for trade weighing devices for compliance with NIST Handbook 44. These devices include but are not limited to delicatessen scales, jewelry scales, postal and parcel post scales and dairy product scales. This specification permits the use of a weight at its nominal value in normal testing operation, where the tolerance on the item under test is at least three times as great as the tolerance of the weight. This Handbook also specifies the design, marking, adjusting cavities, and density of these weights. Any variation in design from Handbook 105-1 must be submitted to NIST for approval. More information on these weight specifications is available in Troemner's (broken link removed).

Weight Applications

In order to select the appropriate weight for your laboratory, you must first determine exactly how you intend to use the weight. Your unique application will help determine exactly which Troemner weight will suit your needs. The following guidelines explain the applications of the different classes of weights:
  • ANSI/ASTM Class 0 – Used as primary reference standards for calibrating other reference standards and weights.
  • Troemner UltraClass – Available exclusively from Troemner, these weights were developed to meet the most demanding calibration needs with the ability to be adjusted. Troemner UltraClass weights are the most precise two-piece weights available, with weight tolerances 50% tighter than ANSI/ASTM E617 Class 1 tolerances. Troemner UltraClass weights combine high precision with the advantage of two-piece construction (1 g and larger), avoiding costly replacement issues associated with one-piece weights. Troemner UltraClass weights and weight sets are available in a full range of weight denominations.
  • ANSI/ASTM Class 1 – Can be used as a reference standard in calibrating other weights and is appropriate for calibrating high precision analytical balances with a readability as low as 0.1 mg to 0.01 mg.
  • ANSI/ASTM Class 2 – Appropriate for calibrating high-precision top loading balances with a readability as low as 0.01 g to 0.001 g.
  • ANSI/ASTM Class 3 – Appropriate for calibrating balances with moderate precision, with a readability as low as 0.1 g to 0.01 g.
  • ANSI/ASTM Class 4 – For calibration of semi-analytical balances and for student use.
  • ANSI/ASTM Class 5 – For student laboratory use.
  • ANSI/ASTM Class 6 – Student brass weights are typically calibrated to this class. This class also meets the specifications of OIML R 111 Class M2.
  • ANSI/ASTM Class 7 – For rough weighing operations in physical and chemical laboratories, such as force measuring apparatus.
  • NIST Class F – Primarily used to test commercial weighing devices by state and local weights and measures officials, device installers and service technicians. Class F weights may be used to test most accuracy Class III scales, all scales of Class IIIL or IIII, and scales not marked with a class designation.
  • OIML Class E1 – Used as primary reference standards for calibrating other reference standards and weights.
  • OIML Class E2 – Can be used as a reference standard in calibrating other weights and is appropriate for calibrating high precision analytical balances with a readability as low as 0.1 mg to 0.01 mg.
  • OIML Class F1 – Appropriate for calibrating high-precision top loading balances with a readability as low as 0.01 g to 0.001 g.
  • OIML Class F2 – For calibration of semi-analytical balances and for student use.
  • OIML Class M1, M2, M3 – Economical weights for general laboratory, industrial, commercial, technical and educational use. Typically fabricated from cast iron or brass. Class M2 is commonly used for student brass weights.
Also... attached file
 

Attachments

  • NIST.pdf
    298.8 KB