EDIT: Sorry, something changed the message title I originally posted. I'm not looking for the specs themselves; I know where those are. What I'm looking for is discussion on which part of the spec you would use in this situation to determine the tolerances to put in a calibration procedure.
I'm trying to write a calibration procedure for a Fluke 7250 pressure controller, and their jumble of specs has me a bit confused.
According to the sales guys, the precision is 0.005% of reading.
The calibration tolerance in their documentation is 0.009% of reading, where accuracy is defined as the maximum deviation from the value of the pressure, including precision, stability, temperature effects, and the calibration standard.
It seems to me that stability, temperature effects, and the calibration standard certainly affect the uncertainty of the measurement, but they don't belong in the calibration limits column of the cal procedure. In order for the instrument to achieve the 0.009% accuracy spec, it would need to agree with the standard to within the 0.005% of reading precision at calibration time.
Their method appears to compare the 7250 to the standard and call it good if it is within 0.009% of reading. That seems inflated to me, and it would result in even poorer in-use performance once stability and temperature effects differ from the conditions at calibration time.
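For what it's worth, here's a quick sketch of how I'm picturing the roll-up. It's a plain root-sum-square combination with made-up numbers for the stability, temperature, and standard contributions (the real breakdown would have to come from the data sheet), just to show why a 0.005% as-left tolerance is at least consistent with a 0.009% in-use accuracy, while a 0.009% as-left tolerance is not:

```python
# Illustrative roll-up of an in-use accuracy budget (RSS combination).
# The contributor values below are placeholders, NOT Fluke/Ruska figures --
# substitute the numbers from the 7250 data sheet.
import math

def rss(*terms_pct_of_reading):
    """Root-sum-square of uncertainty contributors, all in % of reading."""
    return math.sqrt(sum(t ** 2 for t in terms_pct_of_reading))

stability   = 0.005   # % of reading, assumed drift allowance between cals
temperature = 0.004   # % of reading, assumed temperature-effect allowance
standard    = 0.003   # % of reading, assumed calibration-standard uncertainty

# If the unit is left within the 0.005% precision spec at calibration time:
print(rss(0.005, stability, temperature, standard))  # ~0.0087% -> close to the 0.009% accuracy spec

# If it is only required to agree within 0.009% at calibration time:
print(rss(0.009, stability, temperature, standard))  # ~0.0114% -> exceeds the accuracy spec in use
```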
When you write a procedure, which spec would you use?