Determining local tolerances for a gauge despite OEM tolerance

DavidJ.BlackII

Starting to get Involved
Good day all,

I have been trying to find more information on local tolerancing for measurement gauges that already have OEM tolerances. Specifically, we have a weight scale used for a non-critical component-weight check at one station. The part tolerance is 4-8 g, and the gauge has a discrimination of 0.01 g with an accuracy of 0.01 g as well. The scale showed a 0.02 g deviation at the high end of its range during gage calibration, and I would like to open the calibration tolerance to 0.05 g. That accuracy would still divide the part tolerance into at least 10 divisions. Is this the correct thought process for determining local tolerances for a gage?
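
To sanity-check that arithmetic, here is a quick Python sketch using the numbers above (the function name is just illustrative):

```python
# Minimal sketch of the "divide the part tolerance into at least 10 divisions"
# check described above (the classic 10:1 rule of thumb). The numbers are the
# ones from this post; the rule itself is only one input to the risk decision.

def tolerance_divisions(part_lsl: float, part_usl: float, gauge_tolerance: float) -> float:
    """Return how many times the gauge tolerance fits into the part tolerance span."""
    return (part_usl - part_lsl) / gauge_tolerance

part_lsl, part_usl = 4.0, 8.0    # component weight tolerance, grams
proposed_gauge_tol = 0.05        # proposed local calibration tolerance, grams

divisions = tolerance_divisions(part_lsl, part_usl, proposed_gauge_tol)
print(f"Divisions: {divisions:.0f}")                     # 80
print("Meets the 10:1 rule of thumb:", divisions >= 10)  # True
```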
 

John Predmore

Trusted Information Resource
When a measuring device is restricted to a range or tolerance different from the manufacturer's specifications, there is a name for that: a limited calibration. I hope I have understood your question correctly.

The risk I consider is this: if the diminished capability of an older device is due to damage or deterioration of a load cell or a circuit, what is the risk of further deterioration after the annual calibration event, while the calibration sticker still promises users a fixed level of performance based on the calibration date? Maybe you have another way to address the risk of further deterioration, such as a daily verification master.
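
As a rough illustration of what such a daily verification could look like, here is a Python sketch; the check-weight value and acceptance limit are placeholders, not recommendations:

```python
# Rough sketch of a daily verification against a known master (check) weight.
# The nominal value and acceptance limit below are placeholders; they would
# come from your own risk assessment, not from this post.

def verify_scale(reading_g: float, master_nominal_g: float, limit_g: float) -> bool:
    """Return True if today's reading of the master weight is within the limit."""
    error = reading_g - master_nominal_g
    print(f"Master {master_nominal_g} g read as {reading_g} g (error {error:+.3f} g)")
    return abs(error) <= limit_g

# Example: a 5 g check weight verified each shift against a 0.05 g limit.
if not verify_scale(reading_g=5.02, master_nominal_g=5.00, limit_g=0.05):
    print("Scale failed daily verification - quarantine and investigate.")
```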
 

Miner

Forum Moderator
Leader
Admin
I agree with John regarding basing it on risk. In some cases, it is perfectly safe and logical, but probably not in every case.
An example where it is perfectly safe: I worked in a newly expanded facility where people went overboard on buying some of the equipment. They purchased a class AA (calibration lab grade) surface plate. To reduce ongoing expenses, we had it calibrated as a class A (inspection grade) surface plate, which was more than adequate for our needs.
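
For context, the flatness tolerances for those grades are commonly quoted from Federal Specification GGG-P-463c as a function of the plate diagonal; a rough Python sketch, assuming that formulation applies to the plate in question, looks like this:

```python
# Hedged sketch: surface plate flatness tolerances as commonly quoted from
# Federal Specification GGG-P-463c. Verify against the actual specification or
# calibration report before relying on these figures.

def flatness_tolerance_uin(diagonal_in: float, grade: str) -> float:
    """Overall flatness tolerance in microinches for a given plate diagonal."""
    grade_aa = 40 + diagonal_in ** 2 / 25          # laboratory grade AA
    multiplier = {"AA": 1, "A": 2, "B": 4}[grade]  # A and B are 2x and 4x AA
    return grade_aa * multiplier

diagonal = 50.0  # roughly the diagonal of a 36" x 36" plate, used for illustration
for grade in ("AA", "A", "B"):
    print(f"Grade {grade}: {flatness_tolerance_uin(diagonal, grade):.0f} microinches")
```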
 

DavidJ.BlackII

Starting to get Involved
Thank you for the replies. The device has a recalibration procedure using check weights, so the bias is zeroed with each calibration. Since the tolerance of the device is 0.01 g and the process tolerance is 6 +/- 2 g, I believe we have two options: we could shorten the recalibration interval, or we could open the acceptable tolerance for the device so that it doesn't flag as out of tolerance (OoT). Is there a reason against the latter, as long as the device is zeroed at each recalibration, even if it is only outside the OEM tolerance by 0.01 g? Thank you.
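
To put numbers on it, here is a quick sketch of how much of the process tolerance each device tolerance would consume; this is only the arithmetic side, not the documented risk justification:

```python
# Quick sketch comparing the OEM and proposed local device tolerances to the
# process tolerance, using the figures from this post.

process_half_tolerance = 2.0     # grams (6 +/- 2 g)
oem_tolerance = 0.01             # grams
proposed_local_tolerance = 0.05  # grams

for label, tol in [("OEM", oem_tolerance), ("proposed local", proposed_local_tolerance)]:
    share = tol / process_half_tolerance * 100
    print(f"{label} tolerance of {tol} g consumes {share:.1f}% of the +/-2 g process tolerance")
```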
 

Miner

Forum Moderator
Leader
Admin
Maybe I wasn't clear enough. You can establish your own calibration tolerances regardless of what the OEM uses. However, you should assess the risk, and you should have a justification for doing so in case it is questioned.

In my example, the OEM specified class AA tolerances, which were much tighter than we needed, so we opened them up to class A tolerances, which met our needs.
 

DavidJ.BlackII

Starting to get Involved
Miner said:
Maybe I wasn't clear enough. You can establish your own calibration tolerances regardless of what the OEM uses. However, you should assess the risk, and you should have a justification for doing so in case it is questioned.

In my example, the OEM specified class AA tolerances, which were much tighter than we needed, so we opened them up to class A tolerances, which met our needs.
I see, thank you. What I'm struggling to understand is what the process itself looks like. How do you study and determine what in-house limits can safely be applied? And could this, strange as it may sound, also be used to justify extending calibration intervals? Sorry for the compound question.
 

Miner

Forum Moderator
Leader
Admin
This is a complex question. Years ago, people used the 10:1 or 4:1 Test Accuracy Ratio (TAR). Today that has largely been replaced by the Test Uncertainty Ratio (TUR). See this article from ASQ for more details.
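
As a rough illustration, and keeping in mind that the exact definitions vary between documents, one common formulation of the two ratios looks like this in Python (the reference-standard and uncertainty figures below are made up purely for the example):

```python
# Illustrative sketch of TAR and TUR under one common set of definitions
# (treat these as assumptions, not a standards reference): TAR compares the
# unit-under-test tolerance to the reference standard's accuracy, while TUR
# compares it to twice the 95% expanded uncertainty of the calibration process.

def tar(uut_tolerance_span: float, reference_accuracy_span: float) -> float:
    """Test Accuracy Ratio (one common definition)."""
    return uut_tolerance_span / reference_accuracy_span

def tur(uut_tolerance_span: float, expanded_uncertainty_u95: float) -> float:
    """Test Uncertainty Ratio: tolerance span over twice the 95% expanded uncertainty."""
    return uut_tolerance_span / (2 * expanded_uncertainty_u95)

# Hypothetical numbers for the scale discussed above: a +/-0.05 g local
# tolerance gives a 0.10 g span; the reference figures are invented.
print(f"TAR = {tar(0.10, 0.02):.1f} : 1")
print(f"TUR = {tur(0.10, 0.012):.1f} : 1")
```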
 

Semoi

Involved In Discussions
I suspect I was in a similar situation to yours, David. We had technical drawings, but it was unclear how to demonstrate that we fulfilled those requirements. When I talked to my colleagues from the quality department, they provided (after some discussion) an acceptable value for the customer risk. Next came the discussions about the confidence level of this accepted customer risk. It was painful, but we managed (with the help of an external authority) to come to a common conclusion. To check whether I am correct, could you tell us which department you work in, Engineering or Quality? It would also help to name your industry, e.g. medical, automotive, etc.
 