
Microscope slide warmer temperature control

#1
Dear Calibration Experts,

I work in the life sciences and am by no means an expert in device calibration. I recently purchased a microscope slide warmer with a digital temperature readout. To test its performance, I measured the heated surface with an IR thermometer at five points distributed evenly along the middle of the surface. After following the prescribed calibration steps at a single set temperature (60.0 ˚C), I was disappointed to find that the mean measured temperatures at set values of 45.0, 50.0 and 70.0 ˚C, each averaged over repeated readings taken for 0.5–1 h, were inaccurate: 42.0 ± 0.4, 47.2 ± 0.6, and 71.4 ± 1.1 ˚C, respectively. At the 60.0 ˚C set point the surface measured 60.0 ± 0.9 ˚C before and 59.4 ± 1.1 ˚C after the series of measurements at the other temperatures. A plot of set versus measured temperature fits a line with a slope that does not equal 1.0, indicating a systematic mismatch between the set and measured temperatures.

[Attached image: plot of set versus measured temperature]
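
For reference, a quick least-squares fit of these numbers, done as a minimal Python/NumPy sketch (with the 60.0 ˚C point taken as the pre-series reading), gives a slope of roughly 1.19:

```python
# Minimal sketch of the fit described above, using NumPy.
# Data are the quoted means; the 60.0 C point uses the pre-series reading.
import numpy as np

set_temps = np.array([45.0, 50.0, 60.0, 70.0])   # set points (deg C)
measured  = np.array([42.0, 47.2, 60.0, 71.4])   # mean IR readings (deg C)

slope, intercept = np.polyfit(set_temps, measured, 1)   # least-squares line
print(f"measured ~= {slope:.2f} * set {intercept:+.1f}")   # -> ~1.19 * set -11.9
```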


To me, this suggests that some control feature needs to be adjusted. The manufacturer provides instructions to calibrate at a single temperature and to adjust the "temperature control value", which determines at what temperatures flanking the set point the heater turns on or off. However, the manual does not reveal anything about how to ensure that calibration at a single set temperature will yield accurate temperatures at other set points. So far in direct discussions the manufacturer has not provided any other clues as to how this might be achieved.

Am I right to think that this type of device likely includes a control parameter that can be varied so that, following calibration at a single set temperature, the measured values at other temperatures will be accurate to within a degree C? That is, a parameter, adjustable at the factory or possibly by the user, that changes the slope of the line in a plot of set versus measured temperature?

Thanks for your input.

Cheers,
Mark
 
#2
A manufacturer and model of the slide warmer and the IR gun would help in determining if you have a problem with your equipment or with your expectations.

My experience would lead me to think your ± 1°C expectation is not realistic. Hot plate devices like this typically do not approach that kind of accuracy. Also, your IR meter is probably not accurate enough to make that measurement.

What are you using for an emissivity setting for your measurements, and what is the emissivity of the slide warmer surface?
 
#3
Hi dwperron,

Thanks very much for your responses. Here are some of mine:

The slide warmer model is Premiere XH-2001, distributed by many life science suppliers. The thermometer is a Fluke 68 IR thermometer.

My measurements at a set temperature of 60.0 ˚C were 60.0 ± 0.9 and 59.4 ± 1.1 ˚C, suggesting that, once calibrated, accuracy is maintained at a single temperature. But perhaps as you suggest it is unreasonable to expect that the same accuracy would be maintained at temperatures other than the calibrated temperature.

Regarding the IR thermometer, I had comparable results conducting the same experiment with an alcohol bulb thermometer taped onto the surface with aluminum foil covering the bulb. The slope in that case was 1.25, similar to the IR thermometer slope of 1.19, again suggesting that the device is not matching set with measured temperatures (except at the calibrated temperature).

I didn't set the emissivity on the IR thermometer as I didn't know that was important. The emissivity of the surface was 0.95 according to the output on the thermometer display. But again, the bulb thermometer result suggests the issue is not the measuring device.

I am disappointed at my foolishness in expecting that a device with an LED temperature readout would be more accurate. The overall performance of the instrument was quite good, and it could be calibrated accurately at a single temperature. It still seems that there should be a way to adjust a parameter in the device's system controls that would give good accuracy at temperatures other than the calibration temperature. If this is not possible, I can use the set-versus-measured temperature equation to produce the desired temperature accurately, but I would prefer a device-based solution.
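
As an example of that workaround, here is a minimal sketch using the slope and intercept from my IR-thermometer fit (roughly 1.19 and −11.9 ˚C); the idea is simply to invert the fitted line to find what to dial in:

```python
# Minimal sketch of the workaround: invert the fitted set-vs-measured line
# to find what to dial in for a desired surface temperature.
# Slope/intercept are from my IR-thermometer fit; yours may differ.

SLOPE = 1.19        # measured ~= SLOPE * set + INTERCEPT
INTERCEPT = -11.9   # deg C

def set_point_for(desired_c):
    """Set point to dial in so the surface actually reaches desired_c."""
    return (desired_c - INTERCEPT) / SLOPE

print(set_point_for(45.0))   # ~47.8 -> dial in ~48 to get a true 45 deg C
print(set_point_for(70.0))   # ~68.8
```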

I apologize if there is a duplication in replies; I think my network connection failed during an earlier attempt.

Cheers,
Mark
 
#4
Looking at the manual for this slide warmer, it appears to be a fairly crude temperature controller. It is designed to be "calibrated" at a single set point, with another setting for the on/off temperature cycle point (they call it P). The factory default sets this so that the heater turns on and off 2° from the set point, so there is more temperature deviation built into this warmer than you were expecting to find.
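
To illustrate the effect, here is a toy simulation of a heater that switches on and off 2° either side of a 60° set point. It uses a simple first-order thermal model with made-up constants, so it only shows the shape of the behavior, not the XH-2001's actual firmware:

```python
# Toy bang-bang controller with a +/-2 deg on/off band around the set point,
# driving a crude first-order thermal model. All constants are invented
# purely for illustration.

def simulate(set_point=60.0, band=2.0, ambient=22.0, steps=900, dt=1.0):
    temp, heater = ambient, False
    history = []
    for _ in range(steps):
        if temp <= set_point - band:
            heater = True             # switch on 2 deg below the set point
        elif temp >= set_point + band:
            heater = False            # switch off 2 deg above it
        heat_rate = 0.15 if heater else 0.0      # assumed heating, deg/s
        loss_rate = 0.002 * (temp - ambient)     # assumed loss to ambient
        temp += (heat_rate - loss_rate) * dt
        history.append(temp)
    return history

temps = simulate()
print(min(temps[600:]), max(temps[600:]))   # settles into roughly a 58-62 deg band
```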

The Fluke 68 has good basic accuracy of ± 1°C in your temperature range. It also has a repeatability specification of ± 1°C, meaning that if you make the same measurement again the readings will be within 1° of each other. Normally you add these two terms to get a ± 2°C instrument accuracy.

The emissivity value used in IR temperature measurement is a number that describes how well a surface radiates heat compared to a perfect source (a black body). There is no emissivity spec for the slide warmer, so if it has a flat black finish it is safe enough to assume the emissivity is 0.95. The number on the Fluke meter was not a measurement of the emissivity; it was the emissivity setting. That can be changed to match the emissivity of the device being measured for greater accuracy.
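
As a rough back-of-the-envelope illustration of why the setting matters, the sketch below assumes a simplified total-radiance (Stefan-Boltzmann) model and ignores reflected ambient radiation and the meter's wavelength band, so treat the numbers as order-of-magnitude only; the 0.90 surface emissivity is an arbitrary example:

```python
# Rough estimate of how an emissivity mismatch skews an IR reading, using a
# simplified Stefan-Boltzmann (total radiance) model. Band-limited IR
# thermometers and reflected ambient radiation will change the exact numbers.

def ir_reading_c(true_temp_c, surface_emissivity, meter_emissivity_setting):
    """Temperature the meter would report (deg C) under the simplified model."""
    true_k = true_temp_c + 273.15
    reported_k = true_k * (surface_emissivity / meter_emissivity_setting) ** 0.25
    return reported_k - 273.15

# Surface truly at 60 deg C with emissivity 0.90, meter left at 0.95:
print(ir_reading_c(60.0, 0.90, 0.95))   # ~55.5 deg C, i.e. several degrees low
```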

Since this warmer has only single-point temperature calibration, I would suggest optimizing the temperature at the midpoint of your working range and then characterizing the warmer to determine what set point you need to use to reach each desired temperature. A bit clumsy, but that is what this warmer is capable of.
 
#5
Hi dwperron,

Thanks for your reply. I agree with all of your points. My conclusion after using the device is that it was designed for single-point calibration. However, I'm still hoping that someone might have an idea of additional adjustments that might be possible for this type of heating device, so that the single-point calibration can be combined with a suitable "slope" to improve accuracy at points other than the calibration point.

The XH-2001 manual does not describe all of the features of the mode invoked by holding the "SET" button for 3 seconds. This mode includes the ability to calibrate the set point accurately ("Sc") and to adjust the on/off temperature cycle control ("P"), as you note. It also includes the "E" and "c" presets, which the manual warns not to change because doing so might cause the device to malfunction. Finally, there are two parameters, which look like "AL" and "r", that are not mentioned or described in the manual. I'm wondering 1) whether "E" or "c" is what the factory uses to set the slope of the set-versus-measured plot, and 2) whether "AL" or "r" might be adjusted to improve the slope. Any guesses as to what these four parameters might mean? I'm hoping they are designed to allow adjustment of device performance.

If not, then I can use a calibration plot at the center of the desired range as you suggest.

Cheers,
Mark
 