Determining which equipment to calibrate

Jerry Eldred

Forum Moderator
Super Moderator
There are a few differing schools of thought on that. First, any test equipment used to measure product specs must be calibrated. Second, equipment used to perform maintenance on production tools should be calibrated. Third, equipment used for safety compliance should be calibrated (for other reasons: to assure the safety of personnel).

Where it gets questionable depends on some variables. If you have two of the same type of test equipment, where one is used in a "calibration required" context and the other in a "no calibration required" context, there is potential product risk, and both "should" be calibrated.

If there are types of equipment that are never used in any "calibration required" context, it becomes a judgment call. But depending on the proportion of your test equipment that is calibration required versus no calibration required, it may or may not be simpler just to calibrate all of it.

This is my personal opinion, based on 25-plus years in the calibration world -- but I do struggle with the philosophy of calibrating only those items used on product. There is an implicit viewpoint there that calibration exists to satisfy a written requirement rather than to provide a reliable instrument that adds value to a manufacturing process. Suppose I have, for example, a Fluke 77 DMM (no positive or negative attitude about that meter implied; I'm only using it because it is a common model). If I use that Fluke 77 to measure something that impacts product quality directly, it is truly a no-brainer whether I should calibrate it. However, if I only use it to troubleshoot things not directly associated with product (let's say my HVAC tech uses it on air conditioning equipment in the offices), then there is not that pressing need to have it calibrated.

However, as I have spent many unhappy hours over the years preaching from this same soapbox to upper management, I am adding value to the company by calibrating this meter. When my HVAC tech uses it, if the AC function is not working properly, he (or she) could electrocute himself (or herself). The purpose of calibrating is not to satisfy a written requirement, but to be sure that when you measure something, the reading is correct (within tolerances). If I did not need to know something quantitative, I would not need a measuring instrument. I have long held the philosophy that the decision whether or not to calibrate any quantitative measuring instrument is based on the answer to one question:

"Do I care if my reading is wrong?" Take that Fluke 77: if I am using it to measure line voltage (go/no-go), and I want to know whether the voltage is 0 VAC versus 120 VAC, is it okay if my meter reads 60 VAC when I expect to see 120 VAC? The point I am making is that many of the measurements we call non-quantitative are actually quantitative. The measurement may not be important enough that we care about +/-1%. But it may be important enough that we care about +/-5% or +/-10%.
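To make the point concrete, here is a minimal sketch of that "do I care if my reading is wrong?" check. The function name, the symmetric percentage band, and the specific voltages are illustrative assumptions on my part, not anything from a standard:

```python
def within_tolerance(reading, nominal, tol_pct):
    """Return True if `reading` is within +/- tol_pct percent of `nominal`.

    Hypothetical helper for illustration only: real acceptance criteria
    depend on the instrument's spec and the measurement's purpose.
    """
    band = abs(nominal) * tol_pct / 100.0
    return abs(reading - nominal) <= band

# Even a "go/no-go" line-voltage check carries an implicit tolerance:
print(within_tolerance(118.0, 120.0, 5))    # True: 118 VAC is fine at +/-5%
print(within_tolerance(60.0, 120.0, 10))    # False: a meter showing 60 VAC
                                            # on a live 120 VAC line is useless
```

The second case is exactly the scenario above: the measurement was "only" go/no-go, yet a meter that has drifted that far gives a wrong answer to a quantitative question.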

The answer is, I believe, that if the particular test equipment is quantitative by design, I lean toward calibrating it, as it was originally purchased because there was a need for a quantitative reading.

The variable in those circumstances may well be longer cal intervals, as there is a reduction in the required confidence level. But the need for some degree of confidence in the readings is nevertheless required.

Sorry for the soapbox. This is just one of my passionate areas. There is a common belief that calibration exists for paperwork purposes. But I know differently. It is just that when you have an effective calibration program, test equipment works well enough that management does not see problems. This sometimes leads to the assumption that, since all the test equipment is working so well, why do we need a calibration lab?

Anyway.... I hope this is of some help.

------------------
 

chap

Determining which equipment to cal

I am having a problem determining which pieces and types of equipment I need to calibrate in order to comply with ISO 9000. We are an integrator of control systems and have some general pieces of test equipment (multimeters, scopes, analyzers), the majority of which is used as troubleshooting tools (go/no-go, is a signal/voltage present). My thought was to calibrate only the equipment we plan to use during a formal product compliance test (the equipment to be used would be spelled out in the test procedure) and leave the equipment used in troubleshooting as "no cal required."
 

rock

Hi Chap,
Two points.
1) Convince me that formal product compliance testing could never be done using uncalibrated instruments.

2) I've seen a lot of time wasted troubleshooting using faulty equipment.

Element 7.6 of the standard allows you the latitude you are proposing. Is there any risk to the customer, and is it worth the risk?

Mike
 

Jerry Eldred

Forum Moderator
Super Moderator
1) Convince me that formal product compliance testing could never be done using uncalibrated instruments.

>> If there is a quantitative parameter being tested, then without calibration to verify that the test equipment has remained within its specified accuracy, and/or to quantify the amount of drift in its generated or measured accuracy, there is absolutely no way to quantify how well the product complies with its specifications. Suppose the manufactured/tested product has a specified operational parameter -- a cord for a telephone, for example. You use a crimper to put connectors on the ends of the phone cord, and you use a pull tester to verify crimp strength on the cord. You specify, let's say, 10 lbs of force to pull the connector off the phone cord. In this circumstance, you need calibrated crimpers to ensure repeatability and reliability of the crimp, and you need calibration on the pull tester to ensure you test the cords the same way every time. If you never calibrate either of those two tools, the crimp fixturing will wear over time, and pull strength on the connectors will either diminish or conceivably become too tight (potentially causing intermittent opens or shorts, or a decreased life span of the cord in the field). If the pull tester is never calibrated, its accuracy will drift one way or the other over time. As it drifts, your verifications of the crimp strength will become more and more invalid.
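The pull-tester example can be sketched numerically. One common way to account for tester uncertainty is a guard band: only accept readings that would still meet the spec even if the tester were reading high by its full uncertainty. The function name, the simple additive model, and the uncertainty figures are my own illustrative assumptions, not values from the thread:

```python
def guarded_limit(spec_limit_lbf, tester_uncertainty_lbf):
    """Guard-banded acceptance limit for a minimum-strength spec.

    If the tester could read high by `tester_uncertainty_lbf`, a reading
    must exceed spec + uncertainty before we can be confident the true
    pull strength meets the spec. Hypothetical, simplified model.
    """
    return spec_limit_lbf + tester_uncertainty_lbf

spec = 10.0          # required pull strength from the example, in lbf
calibrated_u = 0.2   # assumed uncertainty of a recently calibrated tester
drifted_u = 2.0      # assumed possible drift of a never-calibrated tester

print(guarded_limit(spec, calibrated_u))   # 10.2 -- tight band, little
                                           # good product falsely rejected
print(guarded_limit(spec, drifted_u))      # 12.0 -- wide band; and if the
                                           # drift is truly unknown, no
                                           # defensible limit exists at all
```

The last comment is the real point: calibration is what bounds the uncertainty term. Without it, you cannot even compute a defensible acceptance limit.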

You manufacture a monitor for a computer. It is designed to operate under certain conditions: 115 VAC/60 Hz input, a certain raster scan rate, certain intensity, certain color parameters, certain high voltage parameters, etc. Many of the parameters above are quantitative, and traceable to internationally recognized standard measurement units. When you plug that computer monitor into a computer, its operational quantitative parameters need to match up, to some known level of confidence, with the operational parameters of the computer. Or when you connect that phone cord (above) into a telephone, its plug dimensions must match up to the jack dimensions on the phone. One of the purposes of calibration is to compare a measured parameter with accepted and agreed-upon values. Your DC volt needs to agree with someone else's DC volt, or ohm, or inch, etc. The other part of calibration is to provide confidence that any quantitative measurement is real. So if the compliance testing is to verify any quantifiable parameter, it must be compared somehow to a standardized measurement unit/accuracy.

There is the accepted "alternative" of using a "golden unit". That unit became a golden unit because it was determined somehow to fall within some specified limits. That set of specified limits (if they are quantitative, and relate to any recognized measurement unit) was verified somehow to agree with nationally or internationally recognized units of measure. If it was not, then it never was a golden unit, because it was never determined to be made to within a set of specified tolerance limits. In that case, if that golden unit was lost, the process would also be lost.

Consider anyone who has ever bought some low-priced item, such as one of those electrical boxes used to install a ceiling light fixture. If you have ever gotten one where the screws of the light fixture wouldn't line up with the screw holes on the box, then one or the other manufacturer, or both, may not have had a calibrated golden unit to verify compliance to dimensional specs. Or a 60 watt light bulb from an off-brand that seemed brighter than it should be, and mysteriously blew out after 15 minutes of use -- it probably wasn't tested using calibrated test equipment (it may not have been tested at all, or they may have had an inadequate sampling plan).

As someone who has always struggled with the philosophy that "calibration is only there to meet a requirement", I believe quite strongly that calibration is a key factor in manufacturing anything with quantifiable parameters.

Compliance testing (depending upon the specifics of the individual situation), is a little like calibration. You verify a product meets some set of specified parameters (Bellcore, ANSI, or whatever - I'm not a compliance specialist). If the values are quantitative, calibration is an integral part of the compliance test to ensure the measurements made are valid.

I would be happy to discuss more specific examples.

2)I've seen a lot of time wasted trouble shooting using faulty equipment.

>> Agreed. When you are troubleshooting using uncalibrated test equipment, you may end up chasing your tail (so to speak). If you don't have some level of confidence in your readings, you may troubleshoot a problem that doesn't exist. I've done that many times back in my bench tech days.

Element 7.6 of the standard allows you the latitude you are proposing. Is there any risk to the customer, and is it worth the risk?

>> I'll use a generic example: if you provide a product which, if it fails in the field, could pose a safety risk, you are implicitly warranting that your product will function a certain way. When the customer uses it, it is designed into their application/context based on the assumption that it will provide them service to the degree and extent you specify in the product specs. A manufacturer has the implicit duty to assure, to some definable level of confidence, that the product will meet those specs. Testing, by definition in most circumstances, refers to comparison of the tested item to some limits. Those limits for the most part correlate to nationally or internationally recognized measurement accuracies. Calibration is the means by which we assure that correlation is made. Without calibration, there is no correlation.

I just got called into a meeting. Hope I have given some useful input.

------------------
 

Q rex

To comply with the requirements of the standard, the concern is equipment that measures the properties of the product to assure that requirements are met.

In my firm's QMS, we have mapped the requirement of the 1994 standard to calibration of a light meter used by the Electrical dept. It has nothing to do with measuring the quality of our product, which is design. I think we missed what the standard says, and someone threw in an irrelevant procedure to provide the appearance of meeting the standard.

Without a clear understanding and statement of what your product is, any position on calibration of measuring equipment will be difficult to defend.

If your product is a service, what equipment do you have that measures something to confirm that the service meets the customer's requirements? Software is included in the scope of this requirement.

If you use a computer to run a simulation of the control system on to test its performance, it needs to do an adequate job. If test software is used as well, it needs to be appropriate.

Missing "calibration" of these, while keeping a calibration log of your voltmeters, to have something to show your registrar, is not what the standard is about.

Keeping all your tools sharp, while good business practice, is not required for compliance with this provision of the standard.

Treating those items for which the standard requires calibration and those for which it doesn't the same, for simplicity's sake in record keeping, is a call you have to make. If you treat them the same, I wouldn't expect your registrar to differentiate for you if you fail to calibrate something the standard doesn't require but you have in a common log with the items it does require.

Just my opinions,

Rex
 