camcnown
What is considered an "acceptable" range of values for a calibration study on a measurement device, i.e. how far outside the device's usable range ought one go when performing the calibration? The danger of going too far outside the range of use is that the device may be non-linear there, fail the calibration in that region, and then be flagged as a failed device even though it performs fine over its intended range. Is there a documented "acceptable" range of values, e.g. +/- some percentage beyond the range of use, that is considered acceptable?
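To make the concern concrete, here is roughly how I currently judge linearity once the calibration points extend past the usable range; this is only a sketch with made-up numbers, and the +/- 1 % of full scale tolerance and the straight-line fit are my own assumptions rather than any documented requirement, which is exactly what I am asking about:

```python
# Sketch: fit a straight line to the in-range calibration points, then check how far
# the over-range points deviate from that line against an assumed tolerance.
import numpy as np

full_scale = 100.0                # usable range of the device: 0..100 units (assumed)
tolerance = 0.01 * full_scale     # assumed +/- 1 % of full scale -- the figure in question

# Reference (true) values and device readings; the points beyond full_scale are the
# "outside the range of use" calibration points the question is about.
reference = np.array([10.0, 25.0, 50.0, 75.0, 100.0, 110.0, 120.0])
reading   = np.array([10.1, 25.0, 49.8, 75.2,  99.9, 111.5, 123.0])

# Straight-line fit using only the in-range points.
in_range = reference <= full_scale
slope, intercept = np.polyfit(reference[in_range], reading[in_range], 1)

# Deviation of every point (including over-range ones) from the in-range fit.
deviation = reading - (slope * reference + intercept)

for ref, dev in zip(reference, deviation):
    status = "OK" if abs(dev) <= tolerance else "OUT OF TOLERANCE"
    print(f"ref {ref:6.1f}  deviation {dev:+6.2f}  {status}")
```

With numbers like these, the device passes everywhere inside its usable range but gets flagged at 110 and 120, which is the failure mode I want to avoid by choosing a sensible calibration range in the first place.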