Dear Calibration Experts,
I work in the life sciences and am by no means an expert in device calibration. I recently purchased a microscope slide warmer with a digital temperature readout. To test the performance of this device, I measured the heated surface with an IR thermometer at five points distributed evenly along the middle of the surface. Following the prescribed calibration steps at a single set temperature (60.0 °C), I was disappointed to find that the mean measured temperatures, each averaged over repeated readings taken for 0.5–1 h at set values of 45.0, 50.0, and 70.0 °C, were inaccurate, yielding measured values of 42.0 ± 0.4, 47.2 ± 0.6, and 71.4 ± 1.1 °C, respectively. The temperature at a set point of 60.0 °C measured 60.0 ± 0.9 and 59.4 ± 1.1 °C before and after the series of measurements at the other temperatures. A plot of set versus measured temperatures fits a line with a slope that does not equal 1.0, indicating a systematic mismatch between the set and measured temperatures.
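To make the slope claim concrete, here is a quick least-squares fit of the readings quoted above (measured versus set temperature). One assumption on my part: for the 60.0 °C set point I average the before and after readings (60.0 and 59.4 °C) into a single value.

```python
# Least-squares fit of measured vs. set temperature, using the readings
# quoted in the letter. The 60.0 °C point is the average of the before/after
# readings (60.0 and 59.4 °C) -- my assumption, not stated in the letter.
set_temps = [45.0, 50.0, 60.0, 70.0]
measured = [42.0, 47.2, (60.0 + 59.4) / 2, 71.4]

n = len(set_temps)
mean_x = sum(set_temps) / n
mean_y = sum(measured) / n
sxx = sum((x - mean_x) ** 2 for x in set_temps)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_temps, measured))

slope = sxy / sxx                  # surface °C per set-point °C
offset = mean_y - slope * mean_x   # °C

print(f"slope = {slope:.3f}, offset = {offset:.1f} °C")
# -> slope = 1.188, offset = -11.8 °C
```

The fitted slope of roughly 1.19 (rather than 1.0) is what I mean by a systematic mismatch: the surface appears to change about 1.19 °C for every 1 °C change in the set point.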
To me, this suggests that some control feature needs to be adjusted. The manufacturer provides instructions to calibrate at a single temperature and to adjust the "temperature control value", which sets the temperatures flanking the set point at which the heater turns on or off. However, the manual says nothing about how to ensure that calibration at a single set temperature will yield accurate temperatures at other set points, and so far in direct discussions the manufacturer has not offered any further clues as to how this might be achieved.
Am I right to think that this type of device likely includes a control parameter that can be varied so that, following calibration at a single set temperature, the measured values at other temperatures will be accurate to within 1 °C? That is, a parameter, adjustable at the factory or perhaps by the user, that sets the slope of the line in a plot of set versus measured temperature?
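In case it helps frame the question: even if no such gain parameter exists, the mismatch could in principle be worked around in software by inverting the fitted line. A minimal sketch, assuming the relationship is linear, with slope and offset taken from my readings above (roughly 1.19 and −11.8 °C, both approximate fitted values, not manufacturer figures):

```python
# Hypothetical workaround: if measured ~= SLOPE * set + OFFSET, then to reach
# a target temperature T, dial in (T - OFFSET) / SLOPE instead of T.
# SLOPE and OFFSET below are my own fitted estimates from the readings in
# the letter, not parameters documented by the manufacturer.
SLOPE = 1.188    # measured °C per set-point °C (approximate fit)
OFFSET = -11.76  # °C (approximate fit)

def corrected_set_point(target_c: float) -> float:
    """Set point to dial in so the surface reaches target_c, assuming linearity."""
    return (target_c - OFFSET) / SLOPE

for target in (45.0, 50.0, 70.0):
    print(f"target {target:.1f} °C -> set {corrected_set_point(target):.1f} °C")
```

Of course, a parameter inside the controller that corrects the slope directly would be much preferable to dialing in corrected set points by hand, which is why I am asking whether one exists.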
Thanks for your input.
Cheers,
Mark