Defined External Calibration Acceptance Criteria

IGORTS

Hi

During an internal audit we realized that we have not defined acceptance criteria (rates) for external calibrations.

I need the standard acceptance criteria for:

- Surface durometers
- CMM
- Spectrometers

I know that this depends on the accuracy you need for your tests.

In any case, please give me your comments or figures.

Regards
 

Jerry Eldred

Forum Moderator
Super Moderator
I'm not certain that I understand exactly the question. It appears that you are asking what percentage of units calibrated by an external vendor need to be within tolerance as received by that vendor.

If this is a correct interpretation of your question, 95% is a common in-tolerance rate. It corresponds approximately to a two-standard-deviation confidence level that a reading is within tolerance.
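As a minimal sketch of where the "two standard deviations" figure comes from, assuming readings are normally distributed about the nominal value, the coverage within +/- k standard deviations can be computed like this:

```python
import math

def normal_coverage(k: float) -> float:
    """Two-sided coverage probability within +/- k standard deviations."""
    return math.erf(k / math.sqrt(2.0))

for k in (1.0, 2.0, 3.0):
    print(f"+/-{k:.0f} sigma covers {normal_coverage(k):.1%} of readings")
# +/-2 sigma covers about 95.4%, which is the basis for the common 95% figure.
```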

Please let me know if this is not a correct interpretation.
 
IGORTS

Acceptance Criteria

Thanks for your reply.

But what I mean is that when you determine the uncertainty U of your measuring devices, you need to decide whether that uncertainty is appropriate or not for the measurements those devices make.

An example: a gauge is used to measure 10.00 +/- 0.10 mm. If in the last calibration the uncertainty observed was 0.15 mm, you cannot use this gauge.
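In rough code form, the check being described might look like the sketch below; the 4:1 minimum ratio used here is only an illustrative assumption, not a value taken from any particular standard:

```python
def uncertainty_acceptable(tolerance: float, uncertainty: float,
                           min_ratio: float = 4.0) -> bool:
    """True if the tolerance-to-uncertainty ratio meets the required minimum."""
    return tolerance / uncertainty >= min_ratio

# The example above: tolerance +/-0.10 mm, calibration uncertainty U = 0.15 mm.
print(uncertainty_acceptable(0.10, 0.15))  # False -> the gauge cannot be used
# A gauge calibrated with U = 0.02 mm would meet a 4:1 requirement (ratio 5.0).
print(uncertainty_acceptable(0.10, 0.02))  # True
```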

What I'm asking for is the standard maximum uncertainty (U) that is acceptable (when the tolerances of the parts you are checking are not tight) for the following:

- Superficial hardness tester (durometer)
- Three-dimensional coordinate measuring machine (CMM)
- Spectrometer (raw material analyzer)

Thanks in any case for your kind reply
 
Ryan Wilde

Igorts,

The calibration report should state "Manufacturer's Specifications" or something to that effect. If so, you would have to look in the owner's manual or on the manufacturer's website for the individual product's specifications. The uncertainty for your use would then be that number. If the uncertainty from your calibration supplier cannot attain the manufacturer's specification, then the calibration supplier is obligated to annotate the limitation on the calibration certificate.


Ryan
 

Jerry Eldred

Forum Moderator
Super Moderator
Although I am not an expert in this, it sounds to me like you need to apply an MSA (Measurement Systems Analysis) to your circumstance. I believe there is a 10X requirement for the repeatability/reproducibility of your control standard.

You need a known nominal value, a known error quantity for the value of your control standard, and a known stability for it. You can use measurement standards in a lab of at least four times greater accuracy to measure your control standard.

Then you need data on that control standard which shows you that it remains stable and is capable of being measured reliably, repeatably, and reproducibly with small enough variation that you can use it as a reference for the parts you are measuring.

I don't know exactly what the numbers must be. So I will give you a few example numbers below, then perhaps some others who specialize in MSA and Gauge R&R studies can give some more thorough response.

As an example, you measure a part with a nominal value of 10 mm +/- 0.100 mm. You have a control standard that you use to periodically verify your measurement system. For the example, let us say the instrument is a height gauge. The height gauge would then need to be 10X better than the part, so its accuracy would need to be at least +/-0.010 mm. You use a control standard to run checks every day on the height gauge. The control standard is 10 mm, and it needs to be +/-0.0025 mm or better.
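As a rough sketch of that chain of ratios, using the numbers from this example (the 10:1 and 4:1 ratios are rules of thumb, not requirements of a particular standard):

```python
def required_tolerance(parent_tolerance: float, ratio: float) -> float:
    """Tolerance the next link down the chain must meet for a given ratio."""
    return parent_tolerance / ratio

part_tol = 0.100                                   # part: 10 mm +/- 0.100 mm
gauge_tol = required_tolerance(part_tol, 10.0)     # height gauge at 10:1
standard_tol = required_tolerance(gauge_tol, 4.0)  # control standard at 4:1

print(f"height gauge must be +/-{gauge_tol:.3f} mm or better")        # 0.010
print(f"control standard must be +/-{standard_tol:.4f} mm or better")  # 0.0025
```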

I believe QS9000 has some detailed requirements for this. And perhaps some of the quality systems experts can comment further on this.

You may even wish to read through some of the older postings on this website and find some information that will help.
 
Atul Khandekar

I agree with what Ryan has said. However, the original question is still not clear to me. By acceptance criteria I understand that you accept the instrument after calibration if the readings obtained using a measurement standard are within the range defined by the criteria. This could differ depending on where the instrument is to be used. For example, the acceptance criteria for a vernier used in a machine shop (narrow tolerances) would be much tighter than for one used in a forging shop (where tolerances are wide).

Could someone please clarify!

Thanx,
- Atul.
 

Jerry Eldred

Forum Moderator
Super Moderator
The normal practice I have always used is to use manufacturer's tolerances. If I have a specific type of caliper, regardless of where it is used, I calibrate it to manufacturer's tolerances.

If I have a much tighter application where measurement accuracy must be much better, I would select a more accurate instrument. I may use a lower cost dial caliper for a lower accuracy application. I may use a very good digital caliper for the higher accuracy requirements.

In both cases, I have two different specifications because I have two different instruments, which are designed for two different levels of accuracy.

I recommend not using multiple different tolerances on the same kind of instrument. It can be done, but you need a well-documented program to assure you calibrate each one to its own specification.

But it has a high likelihood of problems. There is always the possibility that an incorrect instrument is used for a measurement, which creates product risk. There is also an increased likelihood of error when you perform the calibration. If you receive ten calipers of the same type into the lab for calibration, with five to be calibrated to one specification and the other five to another, there is a possibility of erroneously calibrating a unit to the wrong specification.

My bottom line recommendation is to specify a caliper with adequate accuracy for each task, and calibrate each caliper to manufacturer's specifications. This is an unambiguous method. When a caliper is received in the calibration lab, you note the manufacturer and model, pull the specification and procedure for that manufacturer and model, and calibrate to that specification. That specification becomes your acceptance criteria.

If the caliper is not at least 4 to 10 times more accurate than the measurements you are making, it is not adequate for the measurement.
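As a sketch of that selection logic (the instrument names and accuracy figures below are hypothetical, purely for illustration):

```python
# Hypothetical manufacturer's accuracies, in mm.
candidates = {
    "dial caliper": 0.050,
    "digital caliper": 0.020,
}

def adequate(instrument_accuracy: float, part_tolerance: float,
             min_ratio: float = 4.0) -> bool:
    """True if the instrument is at least min_ratio times more accurate."""
    return part_tolerance / instrument_accuracy >= min_ratio

part_tolerance = 0.100  # mm, tolerance of the feature being measured
for name, accuracy in candidates.items():
    verdict = "adequate" if adequate(accuracy, part_tolerance) else "not adequate"
    print(f"{name}: {verdict}")
# Only the digital caliper gives at least a 4:1 ratio here (0.100 / 0.020 = 5).
```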

The only way to improve the specifications of a specific caliper to something better than the manufacturer's specifications is to use some time-consuming statistics to prove that it can actually operate better than its specification. That can be difficult if you don't have people in your company who know how to do it and have the time. It is also difficult to defend in an audit.

I'm not sure if this answers what you need to know. Please let me know if there is more detail you need.
 
k_srinivasan66

Re: Acceptance Criteria in Calibration

A stability study can be done to determine the calibration frequency, but you still have to define the acceptance criteria for the instrument. Generally, 1/4th to 1/10th of the tolerance value of the measurement is taken as the acceptance criterion. After calibration, if the errors in the instrument are within the acceptance criteria, the instrument can be used; otherwise you have to decide to use it in places where the acceptance criteria are relaxed, such as in stores to check the diameter of rod stock, etc.
 