Conversion of fraction to decimal for R&R of English ruler


potato

Experts,

We are in the process of performing a Gage R&R on a ruler that has 1/64" calibrated increments. ASTM E29 does not specifically address the conversion of a fraction to a decimal...however, I see two paths here:

1) Round each measurement when the data is taken. In this case, our system says that since the instrument is a ruler, the error is half the increment, i.e. 1/128" = .0078125". Adding and subtracting that error to the decimal equivalent of 1/64" = .015625" makes the second decimal place the first uncertain digit. So, a reading of 1/64" would be recorded as .02".

2) E29 states that for calculations you can carry decimal places until the end. Mathematically, 1/64" = .015625" is a calculation, and that value could then be used to evaluate the R&R. This would treat all of the digits as significant, since they are the result of a calculation.

Which is the correct way to interpret E29? I know many on this forum do not agree with doing an R&R on a ruler, but we have to do it per our systems. FYI, we have systems to periodically check the ruler and recalibrate (or, some say, reverify) it. Also, in the future we will use a metric ruler, but unfortunately we are stuck with this one for now. Anyway, my question is really a mathematical one...
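For what it's worth, the two paths can be compared with exact-fraction arithmetic (a minimal sketch, not anything E29 prescribes):

```python
from fractions import Fraction

increment = Fraction(1, 64)      # ruler resolution in inches
half_increment = increment / 2   # 1/128", half the smallest graduation

# Path 1: round at recording time. 1/64" +/- 1/128" spans
# .0078125" to .0234375", so the second decimal place is the first
# uncertain digit and the reading is recorded to two places.
recorded = round(float(increment), 2)

# Path 2: carry the full decimal conversion into the R&R math.
carried = float(increment)

print(recorded)  # 0.02
print(carried)   # 0.015625
```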
 

Miner

Forum Moderator
I doubt that the difference between the two approaches will make a substantive difference in the results of the R&R study. Try it both ways and see.

From a purist perspective, you should cut it off at fewer decimal places, because that reflects the actual resolution of the measurement device. From a practical perspective, it probably will make little difference.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
The beauty of the gage R&R is that it will tell you how many decimal places are significant when you review the ndc (number of distinct categories).
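For reference, the usual ndc statistic (as defined in the AIAG MSA manual; I'm assuming that is the ndc meant here) is 1.41 times the ratio of part variation to gage R&R variation, truncated to an integer:

```python
import math

def ndc(part_sd: float, grr_sd: float) -> int:
    """Number of distinct categories (AIAG MSA): 1.41 * PV/GRR, truncated."""
    return math.floor(1.41 * part_sd / grr_sd)

# Illustrative, made-up standard deviations in inches:
print(ndc(0.010, 0.003))  # 4  (1.41 * .010 / .003 = 4.7, truncated)
```

An ndc of 5 or more is the conventional acceptance threshold, which is one way of quantifying "how many decimal places are significant."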
 

potato

Miner, you are correct, there is little difference (we ran it both ways). However, it seems completely arbitrary that a ruler whose fraction happens to convert to a short decimal requires no rounding at all. For example, with a 1/50" ruler (1/50 = .02" exactly), we would not have to round anything; just by changing the denominator to 64, we now have to drop digits.

On the other hand, I see the point about removing digits based on resolution. But that doesn't explain why 1/50 just happens to be lucky...
 

bobdoering

1/64 = .015625 and 1/128 = .0078125. Since you cannot read to 1/128 accuracy on a 1/64 scale, any decimal past .001 is clearly insignificant; leave those off for sure. In the end, you will find that resolution in .01" increments will be reportable, but barely significant.
 

bobdoering

I use 1/100" rulers to explain the difference between gage resolution and significant resolution (ndc). Sure, the gage reads out in .01" increments, but it is not that accurate, because no one can see, read, or count the darn marks. There are also greater errors at that resolution from parallax, etc. It can't be done... you might need a magnifier, but that's cheating. Besides, you will introduce manipulative error and observational error if you try to use a magnifier.
 

potato

OK...so assuming we need to round before we record the data, maybe my organization doesn't understand which digit is uncertain...

Here is our current logic:
1/64" = .015625"
Half of that is the error, so
1/128" = .0078125"
A measurement of 1/64" could be plus or minus that error:
lowest = .0078125"
highest = .0234375"
So the first uncertain digit is the second one. According to our review, you report all of the certain digits plus the first uncertain one. In this case (C = certain, U = uncertain), we report .CU, out to two digits.
Therefore, 1/64" = .015625" = .02"

Is this an incorrect interpretation of uncertain digits?
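The arithmetic above checks out with exact fractions (a small sketch of the calculation only, not a ruling on the E29 interpretation):

```python
from fractions import Fraction

reading = Fraction(1, 64)   # nominal reading, inches
error = Fraction(1, 128)    # half the smallest graduation

lowest = float(reading - error)
highest = float(reading + error)

print(lowest, highest)  # 0.0078125 0.0234375

# The bounds first disagree in the second decimal place (.00... vs .02...),
# so that is the first uncertain digit and the reading is reported to
# two places: .02".
```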
 