Hi all,
I couldn't find a dead horse to beat today, so time to trot (er, drag) out another one... Where does the least significant digit of an instrument come into play for measurement uncertainty? Here's the deal: I calibrated a 100 lb force gauge at 25 lb, and the uncertainty components are as follows: one 15 lb hanger whose tolerance is 0.0037 lb, and two 5 lb weights whose tolerance is 0.0018 lb each, added linearly to get ±0.0036 lb. The uncertainty of local gravity where I am is approximately ±0.008 mGal, which works out to ±0.0004 lb. I divide each of these components by k = 1.732 (i.e., √3, for a rectangular distribution), then root-sum-square them together to get a standard uncertainty of 0.00299 lbf; expanding that with k = 2 gives 0.00598 lbf.
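To make the arithmetic concrete, here's a minimal Python sketch of that budget (the variable names and hard-coded tolerances are just my own shorthand for the numbers above):

```python
import math

# Tolerances treated as rectangular distributions (all in lbf):
hanger_tol = 0.0037          # 15 lb hanger
weights_tol = 2 * 0.0018     # two 5 lb weights, tolerances added linearly
gravity_tol = 0.0004         # +/-0.008 mGal local gravity, converted to force

k_rect = math.sqrt(3)        # divisor for a rectangular distribution (~1.732)

# Convert each tolerance to a standard uncertainty, then root-sum-square.
std_uncs = [tol / k_rect for tol in (hanger_tol, weights_tol, gravity_tol)]
u_combined = math.sqrt(sum(u ** 2 for u in std_uncs))

print(f"combined standard uncertainty: {u_combined:.5f} lbf")      # ~0.00299
print(f"expanded uncertainty (k=2):    {2 * u_combined:.5f} lbf")  # ~0.00598
```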
This is all fine and dandy until I take 10 readings for a Type A contribution and all the readings are exactly the same, leading to a standard deviation of zero. So instead I consider the resolution of the UUT, 0.1 lbf. If I follow the GUM correctly, dividing the resolution by k = 2√3 (which is √12, or 3.464) and root-sum-squaring that with the other uncertainty contributors, the overall uncertainty suddenly jumps to 0.029 lbf (0.058 lbf expanded), completely drowning out all the other uncertainty components (the resolution term contributes 86% of the overall uncertainty). Is this right, or am I barking up the wrong tree?
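Here's the same sketch with the resolution term added the way I read the GUM (again, just my own shorthand; u_others is the 0.00299 lbf combined figure from above):

```python
import math

resolution = 0.1                                # lbf, last digit of the UUT
u_resolution = resolution / (2 * math.sqrt(3))  # half-width / sqrt(3) = resolution / sqrt(12)

u_others = 0.00299                              # combined standard uncertainty from weights + gravity
u_total = math.sqrt(u_others ** 2 + u_resolution ** 2)

print(f"resolution term:        {u_resolution:.5f} lbf")   # ~0.02887
print(f"combined with the rest: {u_total:.5f} lbf")        # ~0.029
print(f"expanded (k=2):         {2 * u_total:.5f} lbf")    # ~0.058
```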
If the tolerance of the gauge were 0.1% of full scale (100 lb) and the resolution were the *only* uncertainty contributor, then I'd never be able to get a T.U.R. better than about 1.73, no matter how accurate the weights are! (Which leads to more confusion: is T.U.R. based on the standard uncertainty or the expanded uncertainty?) Is 0.1% too tight a specification for something that reads out only one digit past the decimal point?
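For the T.U.R. arithmetic, this is what I mean (computed both ways, since I'm not sure which denominator is the right one):

```python
import math

tolerance = 0.001 * 100.0                         # 0.1% of 100 lb full scale = 0.1 lbf
u_resolution = 0.1 / (2 * math.sqrt(3))           # resolution as the only contributor

tur_vs_expanded = tolerance / (2 * u_resolution)  # ~1.73 with k=2
tur_vs_standard = tolerance / u_resolution        # ~3.46

print(f"T.U.R. against expanded uncertainty (k=2): {tur_vs_expanded:.2f}")
print(f"T.U.R. against standard uncertainty:       {tur_vs_standard:.2f}")
```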
Things like degrees of freedom, when to use a Student's t distribution, and sensitivity coefficients fly right over my head; are those things that need to be considered here?