Where does least significant digit come into play?

ScottBP

Involved In Discussions
Hi all,

I couldn't find a dead horse to beat today, so time to trot (er, drag) out another one... Where does the least significant digit of an instrument come into play for measurement uncertainty? Here's the deal... I calibrated a 100 lb force gauge at 25 lbf, and the uncertainty components are as follows: one 15 lb hanger whose tolerance is ±0.0037 lb, and two 5 lb weights whose tolerance is ±0.0018 lb each, added linearly to get ±0.0036 lb. The uncertainty of local gravity where I am is approx. ±0.008 mgal, which works out to ±0.0004 lb. I divide all these components by 1.732 (sqrt(3), for a rectangular distribution), then root-sum-square them together to get a standard uncertainty of 0.00299 lbf; expanding to k=2 makes it 0.00598 lbf.

This is all fine and dandy, until I take 10 readings for a Type A contribution and all readings are exactly the same, leading to a standard deviation of zero. So instead, I consider the resolution of the UUT, 0.1 lbf. If I follow the G.U.M. correctly, dividing the resolution by 2*sqrt(3) (which is sqrt(12), or 3.464) and root-sum-squaring that with the other uncertainty contributors, the overall uncertainty suddenly jumps to 0.029 lbf (expanded to 0.058 lbf), completely drowning out all the other uncertainty components (contributing 86% of the overall uncertainty). Is this right, or am I barking up the wrong tree? :confused:
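For illustration, here's roughly how I set the arithmetic up (a sketch of the numbers above, not my actual worksheet):

import math

# Standard uncertainties: tolerances treated as rectangular, half-width / sqrt(3)
u_hanger  = 0.0037 / math.sqrt(3)   # 15 lb hanger
u_weights = 0.0036 / math.sqrt(3)   # two 5 lb weights, tolerances summed linearly
u_gravity = 0.0004 / math.sqrt(3)   # local gravity
u_res     = 0.1 / math.sqrt(12)     # UUT resolution: full width 0.1 lbf, so 0.05/sqrt(3)

u_no_res   = math.sqrt(u_hanger**2 + u_weights**2 + u_gravity**2)
u_with_res = math.sqrt(u_no_res**2 + u_res**2)

print(round(u_no_res, 5),   round(2 * u_no_res, 5))    # 0.00299, 0.00598 lbf
print(round(u_with_res, 5), round(2 * u_with_res, 5))  # 0.02902, 0.05804 lbf
# The resolution term alone supplies nearly all of the combined variance.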

If the tolerance of the gauge were 0.1% of full scale (100 lb) and the resolution were the *only* uncertainty contributor, then I'd never ever be able to get a T.U.R. better than about 1.73, no matter how accurate the weights are! (Which leads to more confusion: is T.U.R. based on the standard uncertainty or the expanded uncertainty?) Is 0.1% too tight a specification for something that reads out only one digit past the decimal point? :frust:
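Continuing the sketch, and assuming TUR is the tolerance span over the expanded (k=2) uncertainty span (which is part of my question):

import math

tol   = 0.001 * 100               # 0.1% of 100 lb full scale = ±0.1 lbf
U_res = 2 * 0.1 / math.sqrt(12)   # expanded uncertainty from resolution alone
print(round(tol / U_res, 2))      # 1.73 -- no weight set can buy this back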

Things like degrees of freedom, when to use a Student's t, and sensitivity coefficients fly right over my head; are those something that needs to be considered here?
 

jfgunn

Your assessment is correct. The largest component of the uncertainty is the resolution, giving you an expanded uncertainty at k=2 of 0.058 lbf.

This is true of many items like calipers, micrometers, dial pressure gages, scales, etc. This is why some people will state that the uncertainty of a calibration is 0.6R, where R = the resolution (if you round 0.058 lbf to 0.06 lbf, it comes out equal to 0.6R).
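A quick check of that 0.6R figure (a sketch, treating the resolution as a rectangular distribution of full width R):

import math

R = 1.0                  # one display count, arbitrary units
u = R / math.sqrt(12)    # standard uncertainty from resolution alone
U = 2 * u                # expanded, k=2
print(round(U / R, 3))   # 0.577 -- roughly 0.6R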


The TUR is based upon the expanded uncertainty. Note that calibration labs will often state that they meet a 4:1 TUR on their calibration certificates. They assume this is true because they only use the uncertainty from their standards. If you ignore the contributing factors from the UUT (like resolution), your TUR can look really good. Of course, when an item has a tolerance equal to one or two divisions, you won't meet a 4:1 TUR. This points to why it is best to just report the uncertainty on calibration certificates whenever possible.

Regarding the 0.1% spec: sometimes these specs are actually 0.1% ±1 digit. This usually makes the tolerance on any digital readout at least two digits.
 

ScottBP

Involved In Discussions
Thanks, I knew I wasn't that crazy. My next question is: what if, instead of a digital force gage that has a resolution, you are comparing a known mass with an unknown one on a balance? I realize the local gravity component will go away, since it affects both masses equally, but what other factors are there to consider? (Please be patient, I'm a DC/LF electrical guy who has been "thrown to the wolves".) :tunnel:
 

jfgunn

Are you referring to calibrating a scale with a known test weight, or calibrating an unknown test weight by comparison to a known test weight using a scale?
 

merrick65

A good rule of thumb when looking at an uncertainty is that the uncertainty can never be less than half the resolution of the UUT. In this case the resolution of the force gage is 0.1 lbf, so the uncertainty should not be less than 0.05 lbf. I think your calculation of 0.058 lbf is very good. Resolution is a big factor in force and scale calibration, and I'm sure it's a major one in dimensional calibration as well.
 

Hershal

Metrologist-Auditor
Trusted Information Resource
Where does the least significant digit of an instrument come into play for measurement uncertainty? [...]

This is a good one. jfgunn has a very good response.

It looks to me like you are right on track, and taking a slightly more conservative approach, which actually is good.

The weight tolerance is given, but if the weights are calibrated, then I would take the uncertainty as a normal distribution and divide by 2 or 3 (it depends on who calibrates them and how it is reported) and use that.
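In numbers (a sketch, supposing the hanger's 0.0037 lb figure were reported as a k=2 calibrated uncertainty instead of a tolerance):

u_rect   = 0.0037 / 3**0.5   # tolerance, rectangular divisor sqrt(3)
u_normal = 0.0037 / 2        # calibrated uncertainty, normal, k=2
print(round(u_rect, 5), round(u_normal, 5))   # 0.00214 vs 0.00185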

The LSD for resolution - TYPICALLY - is simply plugged in at half of the LSD; otherwise the calculations to get to a good number get involved. So the simple path here is good.

When your Type As all come out exactly the same, vary a couple of them to get a good standard deviation, then calculate Type A as usual, which it seems you did.

Putting in the Type A will raise the uncertainty, so that is normal.

Hope this helps.
 

ScottBP

Involved In Discussions
Are you referring to calibrating a scale with a known test weight, or calibrating an unknown test weight by comparison to a known test weight using a scale?

The first example was calibrating a scale (actually a force gauge) with a known test weight, but the second is comparing a known to an unknown.
 

ScottBP

Involved In Discussions
A good rule of thumb when looking at an uncertainty is that the uncertainty can never be less than half the resolution of the UUT. In this case the resolution of the force gage is 0.1 lbf, so the uncertainty should not be less than 0.05 lbf. I think your calculation of 0.058 lbf is very good. Resolution is a big factor in force and scale calibration, and I'm sure it's a major one in dimensional calibration as well.

It's also a dominant player in pressure calibrations (Pressure = Force/Area). So get this... A customer sends us a dial pressure gauge, 0-3000 PSIG, rated at 1% of full scale by the manufacturer (±30 PSIG). The customer wanted it calibrated to 0.5% of FS, which is ±15 PSIG. The gage is marked in 50 PSIG increments, with a needle that is maybe 1/4 as wide as the spacing between the markings, and no parallax mirror behind the needle. Is the customer fooling themselves by wanting the gauge tighter than manufacturer's specs? Is it up to the calibrated eyeball of the technician to guesstimate what the resolution will be?
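To put numbers on the guesstimate (a sketch, assuming the readable resolution is some fraction of a 50 PSIG division):

import math

division = 50.0                     # PSIG per scale marking
for fraction in (1.0, 0.5, 0.25):   # how finely the eye splits a division (assumed)
    res = division * fraction
    U = 2 * res / math.sqrt(12)     # expanded (k=2) resolution-only uncertainty
    print(fraction, round(U, 1))    # 28.9, 14.4, 7.2 PSIG

So whether the requested ±15 PSIG is even achievable hinges almost entirely on what fraction of a division you credit the eyeball with.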
 

CodyK

Hello,

I am a newcomer to the Cove with a similar question. We recently completed accreditation and now have our BMC. We do high voltage calibration, and our BMC for DC HV, 0-175 kV, is basically 0.01% of reading. I am trying to list my uncertainties for each reading on my reports. For example, a 1000:1 ratio report would have a reading at 100 kV input and an output reading of 100 V. My BMC is 0.01%; multiply the uncertainty by the reading and you have an uncertainty of 10 mV. Now you throw in the resolution of the UUT meter, for example a Fluke 87 in the 400 V range, which has a resolution of 100 mV. Divide that in half and do all the math and it's still basically 50 mV. So if I now multiply 50 mV by my reading of 100 V, I suddenly have an uncertainty of 5 volts.

Am I doing this correctly? Should I be multiplying the UUT resolution by my output reading like I do the rest of the uncertainty, or just add the 50 mV to the 10 mV BMC and have an MU of 60 mV?

Sorry this is a little long-winded; it's my first post, after all, and this stuff gives me a headache. Any help would be greatly appreciated!
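If it helps frame the question, here's one way the GUM-style combination would look (a sketch, assuming the 0.01% BMC is an expanded k=2 figure and that the resolution enters once, already in volts, rather than being scaled by the reading):

import math

reading = 100.0                         # V at the UUT (1000:1 ratio, 100 kV in)
u_bmc   = 0.0001 * reading / 2          # 0.01% of reading, assumed k=2 -> standard
u_res   = 0.1 / math.sqrt(12)           # Fluke 87, 100 mV resolution on 400 V range
U = 2 * math.sqrt(u_bmc**2 + u_res**2)  # combine, expand to k=2
print(round(U * 1000, 1))               # ~58.6 mV -- close to the 60 mV figure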
 