Measurement Resolution - How far to carry the decimal place out

Jon O

Hello All,

I am curious to know how many of you determine how far to carry the decimal place out for measurement purposes. Are you basing this on your GR&Rs? If you have a process you have not yet run a GR&R on (yes, it should be done first), what do you use to start out with: 4, 5, or 6 decimal places? And what is the reasoning behind your selection? Please let me know what others are doing.

Regards,

Jon
 
D.Scott

Jon - I'm not sure I am the one to try to answer your question but it looks like all the stats guys/gals are on leave.

Just to get you started, I look first at the requirements of the job I am measuring. If the requirements are to 2 decimals, I look for 3 decimals in the measurement system. There is a "rule of thumb" of 10:1 which is commonly followed for most measurements in our industry.

As far as determining the suitability of the measurement system goes, you would need a gage R&R.

To arbitrarily set 4, 5, or 6 decimals as a start would be overkill if the measurement you were making had spec limits with 1 decimal. The pros can elaborate here much better than I, but there is a concept called "significant figures" which illustrates the logic of dropping digits after a certain point (example: 100/3 = 33.33333333 and so on; sooner or later, who needs all those 3s?). The same theory is what allows rounding up or down.
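
To put a rough number on that logic, here is a little Python sketch (the values are just made up for illustration): a 2-decimal requirement gets one extra decimal in the measurement, and anything beyond that is rounded away.

```
# Illustration only: a 2-decimal spec measured with one extra decimal,
# with the remaining digits rounded away rather than reported.

spec_decimals = 2                        # print requirement, e.g. something like 10.25 +/- 0.05
measure_decimals = spec_decimals + 1     # the 10:1 rule of thumb: one more decimal place

raw_reading = 100 / 3                    # 33.333333... - the repeating 3s from the example above
reported = round(raw_reading, measure_decimals)

print(raw_reading)   # 33.333333333333336
print(reported)      # 33.333 - sooner or later, nobody needs the rest of the 3s
```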


Don't lock yourself into a whole bunch of decimals unless the part/process spec requires it. We found that when we put in our new measurement system and "showed off" with 8 decimals on a layout, the part we used to measure at 3 decimals suddenly became a requirement at 6.

Hope some of the others can shed more light for you.

Dave
 
Jon O

D. Scott,

Thanks for your feedback on the subject; it does help. I am also curious to hear others' techniques and strategies.

Regards,

Jon
 
Al Dyer

I've always used the 10 to 1 rule. If your target is .01, I would at least use a gage that discriminates to .001.
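
As a rough sketch in Python (the numbers are only an example, not from any particular job):

```
# Rough illustration of the 10 to 1 rule: the gage should discriminate
# to at least one tenth of the value it has to judge.

def finest_needed(target, ratio=10):
    """Coarsest acceptable gage discrimination for a given target/tolerance value."""
    return target / ratio

print(finest_needed(0.01))   # 0.001 - a gage reading to .001 covers a .01 target
```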
 

Marc

Al really repeated what D. Scott said, and as far as Mr. Scott's explanation goes, I agree.

Basically it boils down to the fact that (in the auto world, during early APQP) critical characteristics start emerging early in design. At that point you should be determining what precision you need a dimension to be held to. This is where the M&TE people come in, sit down, and say either we have equipment to measure that, or we don't and thus will have to buy it.

> Don't lock yourself into a whole bunch of decimals unless
> the part/process spec requires it. We found that when we
> put in our new measurement system and "showed off" with 8
> decimals on a layout that the part we used to measure at 3
> decimals now became a requirement at 6.

I agree with the first sentence, but will say that the precision of the instrument should never dictate the dimension precision. That should be a design function. If I had a part whose print gave me 'y' mm +/- 0.2 mm and I used a CMM precise to 0.0001, I would only use the first 2 digits after the decimal point from the CMM readings. Mr. Scott amply described these as the 'significant digits', and it is also Al's 10 to 1 rule. If you don't do this, you're heading for trouble: if you are measuring at 0.01 yet are capable of measuring at 0.0001, reporting precision beyond the print callout may get you into trouble simply because your manufacturing equipment is not capable of that precision.

As a last comment, if you change a dimension's precision on a part based upon your measurement precision capability, it's going to be more expensive to make that part. Tightening a tolerance, which is what I think is being implied, is typically going to make the part more expensive.
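
To put some made-up numbers on that (purely an illustration, nothing from a real job):

```
# Illustration only: the CMM resolves to 0.0001, but the +/- 0.2 mm callout
# only justifies two digits after the decimal point, so that is all we report.

cmm_readings = [24.9873, 25.0122, 25.1998]   # invented CMM values at 0.0001 resolution
print_decimals = 2                           # 'y' mm +/- 0.2 mm: two decimals are plenty

reported = [round(r, print_decimals) for r in cmm_readings]
print(reported)   # [24.99, 25.01, 25.2] - the extra digits exist but are not used for disposition
```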

The Batavia experiment was good proof that if you don't have the M&TE people in early, and you have 'company idiot' design engineers, you can end up on the other side of the stick - the precision necessary is more than your instrument (gage, CMM, whatever) can measure to.

There is another 'rule' to consider: something like a maximum of 25% of the tolerance for the combined 'uncertainty' of the measurement system as a whole (the 4 to 1 rule). That should be taken into consideration when setting tolerances during the design phase as well. Make sure the M&TE people are there when decisions like this are reviewed during the design stage.
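
A quick back-of-the-envelope check of that 25% figure (values invented for the example):

```
# Illustration of the 4 to 1 idea: combined measurement 'uncertainty'
# should take up no more than about 25% of the tolerance band.

def uncertainty_ok(total_tolerance, uncertainty, max_fraction=0.25):
    """True if the uncertainty consumes no more than max_fraction of the tolerance."""
    return uncertainty <= max_fraction * total_tolerance

tolerance = 0.4      # e.g. +/- 0.2 mm gives a 0.4 mm band
uncertainty = 0.08   # combined uncertainty of the measurement system, mm
print(uncertainty_ok(tolerance, uncertainty))   # True: 0.08 is 20% of 0.4
```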

I think.... Me not always right. Not my specialty.
 
SSanap

The measuring equipment should have a least count of 1/10 of the tolerance. If the tolerance is ±0.1 unit, the measurement should be to 0.01 unit.

Thanks,

SSanap
 

bobdoering

There has been an attempt to answer three questions here that really need to be addressed separately:

What should the resolution of a gage be in relationship to the tolerance? 10:1 is a backyard rule of thumb that is adequate for just that: backyard work. For important measurement systems, the answer is a minimum ndc >= 5, although I recommend ndc >= 10; for SPC, ndc >= 10 where PV = UCL - LCL.
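
For anyone who wants to compute ndc, here is a minimal sketch using the usual 1.41 × (PV / GRR) formula from an R&R study (the input values are made up):

```
# Minimal ndc calculation using the common formula ndc = 1.41 * PV / GRR
# from a gage R&R study. Input values are made up for illustration.

import math

def ndc(part_variation, grr):
    """Number of distinct categories the measurement system can resolve."""
    return math.floor(1.41 * part_variation / grr)

pv = 0.030    # part variation from the study (or UCL - LCL for SPC use)
grr = 0.006   # total gage R&R, same units
print(ndc(pv, grr))   # 7 - above the ndc >= 5 minimum, short of the ndc >= 10 recommendation
```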

What should the ratio of a master reference gage be in relationship to the gage being calibrated? There are two schools of thought: 10:1 to the gage discrimination, or 4:1 to the gage discrimination.

And, the original question:
How far to carry the decimal place out in relationship to the tolerance? Rule of thumb: if the tolerance is wide enough to contain more than 10 units (for example, 20-40 mm), then 1 mm increments are fine. But if the tolerance spans fewer than 10 increments, then use 10:1 on the unit of measure (for example, for 3-5 mm, report to 0.1 mm). Bottom line: report to at least 10 increments between the specification limits.
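
A small sketch of that bottom line (the function and numbers are just an illustration):

```
# Illustration: pick a power-of-ten reporting increment that gives
# at least 10 increments between the specification limits.

def reporting_increment(lsl, usl, min_increments=10):
    """Power-of-ten increment giving at least min_increments across the tolerance."""
    increment = 1.0
    while (usl - lsl) / increment < min_increments:
        increment /= 10
    return increment

print(reporting_increment(20, 40))   # 1.0 - whole millimetres give 20 increments
print(reporting_increment(3, 5))     # 0.1 - report to 0.1 mm for 20 increments
```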
 
sixsigmais

I agree with D.Scott that:

> If the requirements are to 2 decimals, I look for 3 decimals in the measurement system

Yes, you are looking for more, but going much beyond that is a waste; giving a little more accuracy than the requirement is good enough.
 

Miner

Donald Wheeler has proposed a sound statistical approach to making this decision. Read Intro to MSA of Continuous Data – Part 7: R&R using Wheeler's Honest Gage Study. The file attached to that blog entry contains links to Wheeler's articles and also calculates the smallest and largest Effective Measurement Increment (EMI).

The resolution of the gage should be smaller than the largest EMI, and decimal places smaller than the smallest EMI do not provide additional value and should not be recorded.
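
Roughly, as I read Wheeler (treat the exact constants as something to verify against the articles linked above), the probable error is about 0.675 times the repeatability standard deviation, and the useful measurement increment falls between roughly 0.2 and 2 times that probable error:

```
# Sketch of Wheeler's probable-error bounds as I read them - verify the
# constants against Wheeler's own articles before relying on this.
# PE = 0.675 * sigma_e; useful increments run from about 0.2*PE to 2*PE.

def emi_bounds(sigma_e):
    """(smallest, largest) effective measurement increment from repeatability sigma."""
    pe = 0.675 * sigma_e            # probable error of a single measurement
    return 0.2 * pe, 2.0 * pe

sigma_e = 0.004                     # made-up repeatability standard deviation
smallest, largest = emi_bounds(sigma_e)
print(f"record no finer than about {smallest:.5f}")   # digits below this add no value
print(f"gage resolution should be below about {largest:.4f}")
```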
 
BrQ

Is it acceptable to use a "for reference only" sticker, along with calibration, if your caliper does not have adequate resolution to pass a gage R&R?
I am talking about steel tooling here; the calipers are used "on the bench," but all final decisions are based on CMM/micrometer (higher resolution) measurement results.
I know the obvious answer is to buy higher resolution calipers, but that is currently not an option.
 