
Discussion with a co-worker on tolerance and uncertainty and how to apply it. Thoughts? (17025)

So a co-worker of mine and I are at odds on how to interpret this. To me it seems pretty clear, but maybe I am missing something.

For the sake of this discussion we will look at a differential pressure standard that we use regularly. We have one that has a range of 0-0.5"wc (5 digits after the decimal readout) and another that has a range of 0-3"wc (4 digits after the decimal). Both of them have a rated accuracy of ±0.07%FS. That would make the 0-0.5"wc standard accurate to ±0.00035"wc, while the 0-3"wc standard would have a rated accuracy of ±0.0021"wc.

Now my colleague is arguing that when we calibrated the 0-3"wc module, we got data showing that it was meeting the accuracy spec of the 0-0.5"wc module in that range. So "in a pinch" we could justify using it on a device under test that is actually rated as having a higher accuracy than the "standard" we would be using to calibrate it, as long as we stay within the range that was shown to be accurate enough on the calibration certificate.
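The %FS arithmetic above is simple enough to sketch in a few lines (a minimal illustration; the helper name is mine, while the ranges and the ±0.07%FS spec are the figures from the post):

```python
def rated_accuracy(full_scale_inwc: float, pct_fs: float = 0.07) -> float:
    """Rated accuracy in inches of water column for a +/-%FS specification."""
    return full_scale_inwc * pct_fs / 100.0

# 0-0.5"wc module: ~0.00035 "wc; 0-3"wc module: ~0.0021 "wc
low_range_spec = rated_accuracy(0.5)
high_range_spec = rated_accuracy(3.0)
```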

I take issue with this, because it doesn't account for drift of the instrument between calibrations, and because the customer could easily look up the rated tolerance of our instrument, which would put us in the bad spot of having to explain something we could have avoided in the first place. I'll even copy/paste some of their text so that I am not twisting their explanation at all (names redacted of course). They make an argument based on calculation of uncertainties etc. I still think it's hogwash, but I really don't want to be hard headed and hold a wrong view for the sake of being right. I do want to learn and improve, so I'm open to feedback here. I'll add my notes inline so you know whether I agree or not.

Saying “industry standard” is what we used to call a cop out.
Briefly - and this could be the meeting agenda:
  • 17025 requires us to know the accuracy of every measurement we take – and we do, but it is now called Uncertainty. *[My note: No complaints here.]*
  • I have been around since before 4:1 TAR, understand what the intention was and how it has gotten misused - which is why the standards agencies (e.g. NIST) have gone to Uncertainty
  • Accuracy is not the same as tolerance *[My note: I agree, but I don't know why this is important here since it's basic]* - especially for anything we call a standard. Tolerance is just the 1-year drift criteria. *[My note: I don't think it's just the 1-year drift criteria. Sometimes it's the best a manufacturer can rate a piece of equipment to over a given range. It can also account for drift, but as we all know, items will drift out of that tolerance too, so there is no guarantee that it's just the "drift criteria."]* And if an instrument was at its limit after a year, we would shorten the interval - effectively cutting the tolerance in half. *[My note: I don't think you can arbitrarily cut the tolerance in half just because you decide to shorten the interval.]*
  • The implementation of 4:1 in the example below assumes that accuracy and tolerance are the same and that the EQ is always off by its tolerance. This “lazy” interpretation can be assumed to be “worst case” – but less than 0.1% of measurements are at the extreme limit of accuracy. *[My note: Logically I understand this argument, but it is simultaneously illogical to me.]*
  • The uncertainty of a 5” module is 0.001”. (See attached cert from REDACTED.) The actual inaccuracy between 0 and 1” is 0.0005”. The way 4:1 TAR was originally intended was to be used with the actual error of 0.0005”. So, for the example below: a UUT with a tolerance of 0.01 and an STD with an actual accuracy of 0.0005, the TAR is 20:1. (The TUR is 10:1, btw.) *[My note: Has anyone been around long enough to confirm whether this was the original intention of creating a 4:1 TAR?]*
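For concreteness, here is the ratio arithmetic in that last bullet as I read it (a sketch only; the variable names are mine, the figures are my colleague's):

```python
# Figures quoted in the co-worker's example.
uut_tolerance = 0.01       # tolerance of the unit under test
std_actual_error = 0.0005  # the standard's observed (as-found) error between 0 and 1"
std_uncertainty = 0.001    # uncertainty from the standard's calibration cert

# TAR as the co-worker computes it, against the *actual* error -> 20:1 ...
tar = uut_tolerance / std_actual_error
# ... versus TUR, against the cert uncertainty -> 10:1.
tur = uut_tolerance / std_uncertainty
```

The disagreement in the thread is exactly about which denominator you are allowed to use: the as-found error, the cert uncertainty, or the published tolerance.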
Thanks for any and all input on this. I certainly have my biases and feelings on this, but I am honestly open to changing that ideology if I am wrong.

I went to Navy Cal School in Biloxi, MS, and we never got into this discussion there. But that was back in 2006, and things have changed some since.
When you are calculating measurement uncertainty, one of the factors that must be included as a contributor is the accuracy / tolerance / uncertainty of the measuring instrument(s). Yes, there are many times when I know that a particular instrument is "better" than its published tolerance. But can I use my special knowledge of the goodness of my instrument?

Normally the published tolerance of the instrument is used in calculating measurement uncertainty. Why? Because the manufacturer is assumed to have done their due diligence in determining the expected accuracy of the instrument over a recommended time interval. And because it is supplied by the manufacturer it is assumed that you can trust that value (yes, I know the dangers of assuming tolerances to be true...), and thus you are able to enter it into your uncertainty budget calculation as a Type B contributor according to the GUM. This is a huge time and effort savings, because otherwise you will need to develop and justify your own specification as a Type A contributor. An auditor will be very suspicious if you come up with a tolerance that differs from the manufacturer, and for good reason.
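To make the Type B point concrete, here is a minimal sketch (assuming, as the GUM commonly does for a manufacturer tolerance with no stated distribution, a rectangular distribution; the function name is mine):

```python
import math

def type_b_standard_uncertainty(half_width: float) -> float:
    """Standard uncertainty for a +/-half_width rectangular distribution (a / sqrt(3))."""
    return half_width / math.sqrt(3)

# Using the 0-3"wc module's *published* +/-0.0021"wc spec rather than the
# tighter as-found data -- which is the point of the paragraph above.
u_std = type_b_standard_uncertainty(0.0021)  # roughly 0.0012 "wc
```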

So even though you know, with good certainty, that one gauge is better than its specifications would indicate, you must use that specification unless you are willing to do the work to come up with your own. And just showing how close it was during its last calibration is not adequate for that.