There are a few differing schools of thought on that. First, any test equipment used to measure product specs must be calibrated. Second, equipment used to perform maintenance on production tools should be calibrated. Third, equipment used for safety compliance should be calibrated (for other reasons: to assure the safety of personnel).
Where it gets questionable depends on some variables. If you have two of the same type of test equipment, where one is used in a "calibration required" context and the other in a "no calibration required" context, there is potential product risk, and both "should" be calibrated.
If there are types of equipment that are never used in any "calibration required" context, it becomes a judgment call. But depending on what proportion of your test equipment is calibration required versus no calibration required, it may or may not be simpler just to calibrate all of it.
This is my personal opinion, based on 25-plus years in the calibration world, but I do struggle with a philosophy of calibrating only those items used on product. There is an implicit viewpoint that calibration is there to satisfy a written requirement rather than to keep a reliable instrument adding value to a manufacturing process. Suppose I have, for example, a Fluke 77 DMM (no positive or negative attitude about that meter implied; I'm only using it because it is a common model). If I use that Fluke 77 to measure something that impacts product quality directly, it is truly a no-brainer as to whether I should calibrate it. However, if I only use it to troubleshoot things not directly associated with product (let's say my HVAC tech uses it on air conditioning equipment in the offices), then there is not that pressing need to have it calibrated.
However, and I have spent many unhappy hours over the years preaching this same soapbox to upper management, I am still adding value to the company by calibrating this meter. When my HVAC tech uses it, if the AC function is not working properly, he or she could be electrocuted. The purpose of calibrating is not to satisfy a written requirement, but to be sure that when you measure something, the reading is correct (within tolerances). If I did not need to know something quantitative, I would not need a measuring instrument. I have long held the philosophy that the decision whether or not to calibrate any quantitative measuring instrument comes down to the answer to one question:
"Do I care if my reading is wrong?" Take that Fluke 77: if I am using it to measure line voltage (go/no-go), I want to know whether the voltage is 0 VAC or 120 VAC. If my meter reads 60 VAC when I expect to see 120 VAC, is that okay? The point I am making is that many of the measurements we call non-quantitative are actually quantitative. The measurement may not be important enough that we care about +/-1%, but it may be important enough that we care about +/-5% or +/-10%.
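That go/no-go judgment can be sketched as a simple tolerance test. This is just a minimal illustration of the idea; the function name and the tolerance values are my own, not from any calibration standard:

```python
def within_tolerance(reading, nominal, tolerance_pct):
    """Return True if a reading falls within +/- tolerance_pct of nominal."""
    limit = nominal * tolerance_pct / 100.0
    return abs(reading - nominal) <= limit

# A 60 VAC reading on a nominal 120 VAC line fails even a loose 10% check,
# while 115 VAC passes at 5%.
print(within_tolerance(60.0, 120.0, 10.0))   # False
print(within_tolerance(115.0, 120.0, 5.0))   # True
```

The point of the sketch is that even a "rough" go/no-go check still has an implied tolerance; the only question is how wide it is.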
The answer is, I believe, that if the particular test equipment is quantitative by design, I lean toward calibrating it, as it was originally purchased because there was a need for a quantitative reading.
The variable in those circumstances may well be longer cal intervals, as there is a reduction in the required confidence level. But the need for some degree of confidence in the readings is nevertheless required.
Sorry for the soapbox. This is just one of my passionate areas. There is a common belief that calibration exists for paperwork purposes, but I know differently. It is just that when you have an effective calibration program, test equipment works well enough that management does not see problems. This sometimes leads to the question: since all the test equipment is working so well, why do we need a calibration lab?
Anyway.... I hope this is of some help.
------------------