Setting Calibration Tolerance on Gauges - Oven Calibration and Stepped Plug Gage


RGohil

Setting Calibration Tolerance on Gauges

How do you select the tolerance you should use when calibrating your gauges?

Example: Oven calibration – We have an oven with an OMEGA temperature controller. We have had a calibration tolerance of +/- 5 F on it; the temperature controller range is 0-1000 F. For another controller covering 0-2400 F, the calibration tolerance is +/- 25 F.

Another example, for a different type of instrument: a stepped plug gage with steps at 1.090, 1.100, 1.120, 1.135, and 1.150 inches, for which we have a calibration tolerance of +/- .002 inches.

Can you all please tell me how you set calibration tolerances?

Many thanks in advance.
Robin
 

Hershal

Metrologist-Auditor
Trusted Information Resource
Hi Robin,

The tolerances you listed look pretty good. One thing you may consider is determining whether the application has unique requirements, or whether you can use existing standards such as ASTM to cover part of what you need.

Also, given your location, contact your UKAS-accredited calibration provider and get their input.

Hope this helps.

Hershal
 

Jerry Eldred

Forum Moderator
Super Moderator
Tolerances are normally defined for you from any of a number of sources.

An experienced metrology engineer may sometimes define tolerances. However, defining your own tolerances usually should be avoided.

You mentioned a few different examples, all of which may have tolerances assigned from different sources. In the case of plug gages, there may be some industry-standard tolerances (maybe ASTM or equivalent).

In the case of ovens, there are a few ways of assigning tolerances:

1. Calibration of the controller. Use manufacturer's specifications. The difficulty of calibrating an oven based on controller specifications is that it does not account for the chamber.

2. Single Point Chamber Calibration. This is often used, and is done by placing a calibrated sensor next to the chamber sensor. This also normally uses the controller specifications (unless the manufacturer says otherwise).

3. Chamber profiling. This is the best (though not always required). The best method is a "9-Point" profile (center, and all 8 corners of the used chamber area).

When you stated +/-25 degrees, that sounded close to (3) above, as most controllers are much better than that.

So after all of that, there is not a formula or anything like that for calibrating oven chambers. You must determine your application requirements, and use the manufacturer's specifications. Depending on what you must comply to, you may need to do (3), (2) or (1).

Unless requirements dictate (3), (2) is often a satisfactory method.
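To make the profiling idea concrete, here is a minimal sketch (an illustration only, not a prescribed method) of checking a 9-point chamber survey against a tolerance band; the setpoint, readings, and tolerance are invented example values.

```python
# Illustrative only: evaluate a 9-point oven chamber survey against a
# tolerance band. Setpoint, readings, and tolerance are hypothetical.

setpoint_f = 350.0   # oven setpoint, deg F (example)
tolerance_f = 5.0    # allowed deviation, +/- deg F (example)

# One reading per survey location: chamber center plus the 8 corners
# of the working volume.
readings_f = {
    "center": 351.2,
    "front-top-left": 353.9, "front-top-right": 352.4,
    "front-bottom-left": 348.1, "front-bottom-right": 349.0,
    "rear-top-left": 354.6, "rear-top-right": 352.8,
    "rear-bottom-left": 347.5, "rear-bottom-right": 348.3,
}

spread = max(readings_f.values()) - min(readings_f.values())
print(f"Uniformity (max - min): {spread:.1f} F")

for name, temp in readings_f.items():
    deviation = temp - setpoint_f
    status = "PASS" if abs(deviation) <= tolerance_f else "FAIL"
    print(f"{name:>20}: {temp:6.1f} F  ({deviation:+.1f} F)  {status}")
```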

As for your original question, every instrument and calibration discipline is different. You cannot fairly use a single method to determine calibration tolerances. The only single method is to use the manufacturer's specifications.
 

Bill Ryan - 2007

RGohil said:
Another example, for a different type of instrument: a stepped plug gage with steps at 1.090, 1.100, 1.120, 1.135, and 1.150 inches, for which we have a calibration tolerance of +/- .002 inches.
Robin
Let me preface this with the fact that I am not a metrology expert.

+/-.002" for a calibration acceptance sounds like a ton of "wiggle room". Depending on the class of plug gage, our normal calibrating "zone of accetance" for a "Go" member is +.0002/-0". That may vary depending on the hole tolerance(s) and actual function but nowhere do we use as loose a tolerance as +/-.002". In my thinking, if you allow a minus tolerance on a "Go" plug gage, you are running the risk of undersize holes being accepted (the opposite holds true for the "NoGo" member).

Maybe I'm just confused regarding your question.
 

Dmokong - 2009

Hello there,
Although this is a late reply, I would like to share my idea regarding oven calibration. We also calibrate ovens (temperature-controlled enclosures), and we have a tolerance of +/- 1 degree Celsius. This is based on our product specs. Previously, we used two specifications for oven calibration: uniformity and fluctuation. After conducting some evaluation (temperature profiling), we decided not to include the fluctuation in our calibration results, since we have proof that the oven has small fluctuation that will not affect the testing of our units.

Hope this can help you.:)
 

Wesley Richardson

Wes R
Trusted Information Resource
RGohil said:
Example: Oven calibration – We have an oven with an OMEGA temperature controller. We have had a calibration tolerance of +/- 5 F on it; the temperature controller range is 0-1000 F. For another controller covering 0-2400 F, the calibration tolerance is +/- 25 F.
Robin

Hi Robin,

Most ovens have temperature gradients. They can be electric or gas, and depending upon the configuration of the heating source relative to the chamber, the gradients can be even larger than +/- 25 F. Most controllers base their control on just one thermocouple. Errors within the chamber include not only the thermocouple errors and linearity, but also the reference junction (electronic or physical), the type of leads used from the thermocouple junction to the controller, the material composition of any plugs and jacks, and the temperature environment of the controller itself. You should use thermocouple wire and connectors that are specifically matched to the type of thermocouple you are using. Also note that the various thermocouple types have stated temperature ranges. With a calibrated thermocouple, you can also obtain a temperature-versus-emf compensation graph or equation.
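As an illustration of using such a compensation curve, one simple approach is to fit the calibration points from the report and correct indicated readings with the fit; the points and coefficients below are invented, not from any real report.

```python
# Illustrative only: apply a correction curve from a thermocouple
# calibration report to an indicated temperature. Calibration points
# below are made-up examples.

import numpy as np

# (indicated F, reference F) pairs from a hypothetical calibration report
cal_points = [(200.0, 201.5), (400.0, 402.3), (600.0, 603.8),
              (800.0, 805.1), (1000.0, 1006.9)]

indicated = np.array([p[0] for p in cal_points])
reference = np.array([p[1] for p in cal_points])

# Low-order polynomial fit: corrected = f(indicated)
coeffs = np.polyfit(indicated, reference, 2)

def corrected_temp(indicated_f: float) -> float:
    """Return the corrected temperature for an indicated reading."""
    return float(np.polyval(coeffs, indicated_f))

print(f"Indicated 750 F -> corrected {corrected_temp(750.0):.1f} F")
```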

The controller will require both a delta temperature range and a time response factor setting. If you set the delta too small, the controller will constantly be turning the heating source on and off (cycling). For some furnaces, there may be not only an on/off condition but a proportional response, depending upon how far the temperature has exceeded the limits. The time response factor also affects the cycling; it effectively smooths the response to temperature changes. Setting too long a time will cause the measured temperature to drift too far before a response. Setting it too short can also cause excessive cycling.
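As a rough illustration of the deadband behaviour described above, a bang-bang controller only switches the heater when the temperature leaves the band around the setpoint; the thermal model and numbers here are invented and grossly simplified.

```python
# Illustrative only: crude simulation of an on/off oven controller with a
# deadband (delta). Too small a deadband makes the heater cycle constantly.
# All numbers are invented.

setpoint = 1000.0   # deg F
deadband = 5.0      # controller delta: switch only outside +/- this band
temp = 980.0        # starting chamber temperature, deg F
heater_on = True

for minute in range(30):
    # Very rough thermal model: gain heat when on, lose heat when off.
    temp += 4.0 if heater_on else -3.0

    if heater_on and temp > setpoint + deadband:
        heater_on = False   # overshot the band: heater off
    elif not heater_on and temp < setpoint - deadband:
        heater_on = True    # dropped below the band: heater back on

    print(f"t={minute:2d} min  temp={temp:7.1f} F  heater={'ON' if heater_on else 'OFF'}")
```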

Digital controllers can be set to respond to a specific numeric value, e.g. 1136 F, while with the older analog controllers you are doing well to set them to the nearest 25 F, e.g. 1125 F.

Now let's talk about chamber size. Do you have a 1 inch by 1 inch by 1 inch chamber, a 6 inch by 6 inch by 6 inch chamber, or one that is 20 feet by 20 feet by 20 feet? I have seen all of these and several sizes in between.

If the oven is electric, do your parts see the wires directly, or are they shielded? The difference is that heating occurs by both radiation and convection. With radiation, surfaces can develop hot spots if they are near the radiant source. With convection, you may require fans to circulate the internal atmosphere. In the chamber do you have air or a controlled atmosphere, for example an inert gas or even a reducing atmosphere?

If you create a temperature profile of the chamber, you will find that the gradients increase with increasing temperatures. If you have 20 F difference from highest to lowest point at 1,000 F, you may have 50 F difference at 2,000 F.

Now let's talk about the parts themselves. Just because the chamber temperature is at 1,500 F does not mean that the parts are at that temperature. Within the parts there can be significant temperature gradients as well. The reason for "soak time" in heat treating is to allow the center of the parts to reach the required temperature. The larger the part, the longer the soak time required. One part may require 1 hour at 1,500 F, while another part may require 6 hours at 1,500 F, both to get the center to the 1,500 F temperature. The same applies to the cooling after removal from the oven. The exterior cools the fastest. While theoretical calculations give estimates of the temperature-time profile, most are based on trials and history. The rate of heat transfer depends on the thermal conductivity of the parts and the surface boundary. The time to reach a given temperature within the part also depends on the temperature difference and the geometry.
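For a rough feel of why heat-up time scales with part size, a lumped-capacitance estimate can be used when internal gradients are small (low Biot number); the material and geometry values below are invented, so treat this as a sketch rather than a heat-treat calculation.

```python
# Illustrative only: lumped-capacitance estimate of how long a part takes
# to approach oven temperature. Only valid when internal gradients are
# small (low Biot number); all property values are invented examples.

import math

t_oven = 1500.0    # oven temperature, F
t_start = 70.0     # initial part temperature, F
t_target = 1495.0  # "close enough" target for the part, F

h = 15.0     # surface heat-transfer coefficient, W/(m^2*K) (assumed)
area = 0.05  # part surface area, m^2 (assumed)
mass = 2.0   # part mass, kg (assumed)
cp = 500.0   # specific heat, J/(kg*K) (assumed)

tau = mass * cp / (h * area)  # thermal time constant, seconds

# T(t) = T_oven + (T_start - T_oven) * exp(-t / tau), solved for t
t_seconds = -tau * math.log((t_target - t_oven) / (t_start - t_oven))
print(f"Time constant: {tau / 60:.1f} min")
print(f"Estimated time to reach {t_target:.0f} F: {t_seconds / 3600:.1f} h")
```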

So how can you overcome some of these issues? One way is to put multiple calibrated thermocouples on and in your part. By "in your part" I mean actually drilling holes, inserting the thermocouples, then sealing the holes. Running a multi-channel digital recorder you can determine the temperatures of your parts throughout the heat treating process. You will have to deal with the holes later, or perhaps machine them off, if possible.

The digital controller output can be sent to a computer for later analysis and records to demonstrate compliance.

The appropriate degree (get it?) of instrumentation depends greatly on the criticality of the parts, number of parts, and size of parts.

Depending on any changes that occur within the part as a function of temperature, having a +/- 25 F fluctuation around the desired average temperature may not be an issue for some materials. For other materials, exceeding a maximum temperature by 3 F may cause a phase change in the material that cannot be reversed. In metals there are critical temperatures for phase changes, based on composition. If you are below the critical temperature, the desired phase change does not occur, and you cannot achieve the required hardness with subsequent quenching and tempering.

The tolerances you need to set cover more than just the controller; they apply to the process for temperature, time, and ramp rate.

When we sintered tungsten carbide-cobalt powders in a 27 cubic foot vacuum furnace, with about a 3,000 pound combined part weight, the complete cycle was 3 days. The maximum temperature was about 1,500 C. Yes, C. That is well above the melting temperature of steel. The interior of the chamber was graphite. Portions of the cycle required a maximum ramp rate of 1 C per minute. If you tried to go faster, the parts blew apart. We also had multiple hold and soak points at various temperatures, where changes were occurring in the parts. After the hold at the peak temperature, the parts were slowly cooled to avoid cracking. The temperature was digitally controlled and manually monitored optically, with emissivity correction. We knew there were gradients within the chamber, so after sintering we took samples from the corners and edges, as well as the center, and from various part sizes.

Wes R.
 

BradM

Leader
Admin
I feel so inadequate following behind Wesley :cool:

Very nice writeup, Wesley; a very authoritative post for the question.

What industry is this? I know that for aerospace, for example, there are specifications for the type of instruments to be used (type and accuracy) and uniformity requirements (including the number of thermocouples); AMS 2750 comes to mind. More specific processes like aluminum heat treating/annealing have more specific (and tighter) requirements.

The calibration tolerance would also depend on how you are performing the calibration. If you are conducting a millivolt simulation to a fairly current electronic instrument, +/- 1 C (+/- 2 F) should be readily attainable. If you are performing a loop calibration with the probe, the calculation (refer to Wesley's post) becomes more involved. Are you performing a systems accuracy check weekly/monthly (inserting an independent, calibrated sensor), or is your equipment connected to a computer monitoring system?
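To show what a simple as-found check against a +/- 1 C tolerance might look like, here is a short sketch; the applied points and readings are invented, and a real calibration record would also carry the uncertainty of the calibrator.

```python
# Illustrative only: tabulate as-found results of a controller check where a
# calibrator simulates the sensor input. All values are made-up examples.

tolerance_c = 1.0  # acceptance tolerance, +/- deg C

# (applied temperature, instrument reading) pairs in deg C
as_found = [(0.0, 0.3), (100.0, 100.4), (250.0, 250.9), (500.0, 498.7)]

for applied, reading in as_found:
    error = reading - applied
    status = "PASS" if abs(error) <= tolerance_c else "FAIL"
    print(f"applied {applied:6.1f} C  read {reading:6.1f} C  "
          f"error {error:+.1f} C  {status}")
```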

Lots of questions. If you get a chance, some more detail regarding your process might help.
 

jennameneses

Hi All,

Just want to ask: what are the factors to consider in setting a calibration tolerance, especially for machines in a semiconductor company?
And how do you set the tolerance?


Thank you, hope for your response,
Jenna
 

Jerry Eldred

Forum Moderator
Super Moderator
There are a wide variety of variables that have to be considered. As a metrologist with numerous years working in the semiconductor industry (in a calibration lab), I hope I can respond to this somewhat.

Half of the equation is the instrument being used. You cannot specify a tolerance beyond the capability of the instrument making the measurement. If the instrument has published tolerance limits, those should be the default. If it does not, then you need to determine (for this half of the equation) what its capable specifications are, based on resolution, stability, bias, drift, linearity, and numerous other parameters.

The second half of the equation is the process control and/or specification limits. The specifications and capability of the instrument need to be adequate to maintain adequate measurement accuracy/uncertainty for the control parameters that matter. This was a big mistake I saw semiconductor people make in my many years in that industry: they did not use adequate instruments for the process control parameter.

A low-accuracy instrument cannot be arbitrarily assigned tighter tolerance limits just because the process control limits are very narrow. An adequate instrument needs to be used, or the control limits need to be widened; otherwise, the process will lie to you (so to speak).
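One common way to express "the instrument must be adequate for the process limits" is a simple accuracy (or uncertainty) ratio; the sketch below uses invented numbers, and the required ratio varies by standard and application (4:1 is often quoted as a rule of thumb).

```python
# Illustrative only: compare a process tolerance against the accuracy of the
# instrument used to measure it. Example values are invented; required
# ratios vary by standard and application.

process_tolerance = 2.0     # +/- allowed on the process parameter (e.g., deg C)
instrument_accuracy = 0.25  # +/- published accuracy of the instrument

ratio = process_tolerance / instrument_accuracy
print(f"Accuracy ratio: {ratio:.1f}:1")

if ratio >= 4.0:  # commonly quoted rule of thumb, not a universal requirement
    print("Instrument looks adequate for these limits.")
else:
    print("Instrument may not be adequate: use a better instrument, widen "
          "the limits, or account for the uncertainty another way.")
```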

I'm not sure if this is the circumstance in your context; but I did want to share those thoughts that I've had for many years.

If you are using an instrument with specifications too good for the process, a documented method can be created to specify the instrument as needed for the process.

If you are using an instrument with specifications not good enough for the process, but the process control parameters can't be widened, one alternative MAY be to calibrate more frequently. In this instance, you'll need good history to determine how often the instrument requires re-calibration with the tightened parameters and still maintain them for the process.
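As a sketch of using calibration history to justify a shorter interval, one simple approach is to estimate average drift per month from past as-found data and see how long the instrument stays inside the tightened limit; the numbers below are invented, and a real interval analysis would use more data and a guard band.

```python
# Illustrative only: estimate how long an instrument stays within a tightened
# tolerance, given its historical drift rate. Values are invented examples.

tightened_limit = 0.5        # +/- tolerance needed for the process
avg_drift_per_month = 0.08   # average |drift| seen between past calibrations

months_until_limit = tightened_limit / avg_drift_per_month
print(f"Estimated months before the tightened limit is consumed: "
      f"{months_until_limit:.1f}")
print("A calibration interval shorter than this (with margin) may be justified.")
```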

For those who are not in the semiconductor world, historically, many semiconductor companies operate a little differently than many other industries, in terms of how they calibrate, what is important, and what is allowable.
 

jennameneses

Hi Sir Jerry!

Thank you for the response I really appreciate it.
May I ask about the formula behind that equation – how do you work out the accuracy and the tolerance in calibration?




Thanks a lot!

Jenna
 