So a co-worker of mine and I are at odds on how to interpret this. To me it seems pretty clear, but maybe I am missing something.
For the sake of this discussion we will look at a differential pressure standard that we use regularly. We have one module with a range of 0-0.5"wc (readout shows 5 digits after the decimal) and another with a range of 0-3"wc (4 digits after the decimal). Both have a rated accuracy of ±0.07% FS. That makes the 0-0.5"wc standard accurate to ±0.00035"wc, while the 0-3"wc standard has a rated accuracy of ±0.0021"wc. Now my colleague is arguing that when we calibrated the 0-3"wc module, we got data showing it met the accuracy spec of the 0-0.5"wc module over that lower range, so "in a pinch" we could justify using it on a device under test that is actually rated to a tighter accuracy than the standard we would be using to calibrate it, as long as we stay within the range that was shown to be accurate enough on the calibration certificate.
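Just to make the %FS arithmetic explicit, here is a quick sketch in Python (only the ranges and the ±0.07% FS spec quoted above are used; nothing else is assumed):

```python
# A full-scale (%FS) accuracy spec is a fixed fraction of the module's range,
# regardless of where in the range you are actually measuring.

def fs_accuracy(full_scale_inwc: float, percent_fs: float = 0.07) -> float:
    """Rated accuracy in inches of water column for a %FS-specified module."""
    return (percent_fs / 100.0) * full_scale_inwc

print(round(fs_accuracy(0.5), 5))  # 0.00035 "wc for the 0-0.5"wc module
print(round(fs_accuracy(3.0), 4))  # 0.0021  "wc for the 0-3"wc module
```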
I take issue with this because it doesn't account for drift of the instrument between calibrations, and because the customer could easily look up the rated tolerance of our instrument, which would put us in the bad spot of having to explain something we could have avoided in the first place. I'll even copy/paste some of their text so that I am not twisting their explanation at all (names redacted, of course). They make an argument based on calculation of uncertainties, etc. I still think it's hogwash, but I really don't want to be hard-headed and cling to a wrong idea for the sake of being right. I do want to learn and improve, so I'm open to feedback here. I'll add my notes in RED, marked [My note: ...] below, so you know whether I agree or not.
Saying “industry standard” is what we used to call a cop-out.
Briefly - and this could be the meeting agenda:
- 17025 requires us to know the accuracy of every measurement we take – and we do, but it is now called Uncertainty. [My note: No complaints here.]
- I have been around since before 4:1 TAR; I understand what the intention was and how it has gotten misused - which is why the standards agencies (e.g. NIST) have gone to Uncertainty.
- Accuracy is not the same as tolerance [My note: I agree, but I don't know why this is important here, since this is basic.] - especially for anything we call a standard. Tolerance is just the 1-year drift criteria. [My note: I don't think it's just the 1-year drift criteria. Sometimes it is the best a manufacturer can rate a piece of equipment to over a given range. It can also account for drift, but as we all know, items will drift out of that tolerance too, so there is no guarantee that it's just the "drift criteria."] And if an instrument was at its limit after a year, we would shorten the interval - effectively cutting the tolerance in half. [My note: I don't think you can arbitrarily cut the tolerance in half just because you decide to shorten the interval.]
- The implementation of 4:1 in the example below assumes that accuracy and tolerance are the same and that the EQ is always off by its tolerance. This “lazy” interpretation can be assumed to be “worst case” – but less than 0.1% of measurements are at the extreme limit of accuracy. [My note: Logically I understand this argument, but it is simultaneously illogical to me.]
- The uncertainty of a 5” module is 0.001” (see attached cert from REDACTED). The actual inaccuracy between 0 and 1” is 0.0005”. The way 4:1 TAR was originally intended was to be used with the actual error of 0.0005. So, for the example below: a UUT with a tolerance of 0.01 and an STD with an actual accuracy of 0.0005 gives a TAR of 20:1. (The TUR is 10:1, btw.) [My note: Has anyone been around long enough to confirm whether this was the original intention of creating a 4:1 TAR? I've sketched the arithmetic below.]
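For what it's worth, here is that last bullet's arithmetic written out, plus the version I would argue for. This is only a sketch: the 0.01 UUT tolerance, the 0.001 reported uncertainty, and the 0.0005 observed error are the numbers from the text above, and the ±0.07% FS figure applied to a 5"wc range is my assumption, not something stated on the cert.

```python
# Tolerance ratios are all "UUT tolerance divided by something"; the whole
# disagreement is about what belongs in the denominator.

uut_tolerance = 0.01           # "wc, UUT tolerance from the example
reported_uncertainty = 0.001   # "wc, reported on the calibration cert
observed_error = 0.0005        # "wc, actual error shown on the cert (0-1" portion)

tur = uut_tolerance / reported_uncertainty      # 10:1, as stated
tar_observed = uut_tolerance / observed_error   # 20:1, the "actual error" TAR

# The conservative reading (mine) divides by the module's *rated* accuracy.
# ASSUMPTION: if the 5"wc module carried the same +/-0.07% FS spec as our
# other modules, its rated accuracy would be 0.0035 "wc.
rated_accuracy_5inwc = (0.07 / 100.0) * 5.0
tar_rated = uut_tolerance / rated_accuracy_5inwc  # roughly 2.9:1

print(f"TUR                   = {tur:.0f}:1")
print(f"TAR (observed error)  = {tar_observed:.0f}:1")
print(f"TAR (rated accuracy)  = {tar_rated:.1f}:1")
```

Same inputs, three very different ratios, which is really the crux of the argument: whether the denominator should be what the cert happened to show on one day, or what the manufacturer guarantees over the whole interval.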
I went to Navy Cal School in Biloxi, MS, and we never got into this discussion there. But that was back in 2006, and things have changed some since.