The Elsmar Cove Discussion Forums: Measurement, Test and Calibration
Topic: Dimensional Requirement Precision vs Its Measurement

Paul A. Pelletier | Lurker (<10 Posts) | Lexington, MA | posted 18 July 2000 12:20 PM
I recently submitted the following question to ASME. I am curious what the experience of others has been on this topic.

My question is:

What is the proper way to specify and evaluate a dimensional requirement in terms of its designed precision versus the uncertainty and accuracy of its measurement?

Background

ASME Y14.5M section 2.4 (Interpretation of Limits) states: "All limits are absolute. Dimensional limits, regardless of the number of decimal places, are used as if they were continued with zeros (ex. 12.2 means 12.20....0). To determine conformance with limits, the measured value is compared directly with the specified value and any deviation outside the specified limiting value signifies nonconformance with the limits."
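
For illustration, here is a minimal Python sketch of what that "absolute limits" reading amounts to: the raw measured value is compared directly against the limits, with no rounding to the number of decimal places shown on the drawing. The lower limit of 11.8 below is hypothetical, added only to complete the example.

    # Sketch of the "all limits are absolute" interpretation: compare the raw
    # measured value directly against the limits, with no rounding.
    def conforms_absolute(measured: float, lower: float, upper: float) -> bool:
        return lower <= measured <= upper

    # An upper limit of 12.2 is treated as 12.200...0, so 12.201 does not conform.
    print(conforms_absolute(12.201, lower=11.8, upper=12.2))  # False
    print(conforms_absolute(12.199, lower=11.8, upper=12.2))  # True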

The as-specified precision of a dimensional requirement (or any value, for that matter) is generally represented by the number of decimal places to the right of the decimal point. For instance, a value of .125" is a less precise value than .1250". I cannot quote the source of this axiom. I can tell you that I was instructed this way as an engineering undergraduate, and that in the 20+ years since then I have never encountered an engineer, machinist, inspector, draftsman, or designer who disagreed with it. If ASME does not concur with this axiom, I would like to have that confirmed, as well as the rationale or specification that ASME believes governs how the level of precision in a dimensional requirement is conveyed.

If this axiom is true, then section 2.4 of ASME Y14.5M is at the very least confusing (i.e., 12.2 does not mean 12.20.....0).

Related to the correct way to specify a level of precision for a dimensional requirement is the correct way to inspect the part for conformance to that requirement. A second axiom that I have applied throughout my professional experience is that one should inspect a technical requirement with an instrument that is more precise than the requirement as specified. This "incremental precision" is generally accepted to be one order of magnitude, the one caveat being that this would not apply if one order of magnitude exceeded the "state of the art" in measurement capability. This axiom was at one time documented in the military specifications (MIL-STD-45662, I believe). Regardless, it is certainly logical that if one wishes to evaluate a requirement specified to .000", it should be evaluated with an instrument capable of resolving .0000"; otherwise the measured value in the third decimal place is too uncertain to be considered reliable.
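
A small sketch of that "incremental precision" rule, under the assumption that one order of magnitude simply means one more decimal place of resolution than the specified requirement, capped at some state-of-the-art limit (the six-place cap below is an arbitrary placeholder):

    # Hypothetical sketch of the 10:1 (one order of magnitude) resolution rule:
    # resolve one more decimal place than the requirement, unless that would
    # exceed the state of the art.
    def required_resolution_places(spec_places: int, state_of_the_art_places: int = 6) -> int:
        return min(spec_places + 1, state_of_the_art_places)

    # A requirement specified to .000" (3 places) calls for an instrument
    # resolving .0000" (4 places).
    print(required_resolution_places(3))  # 4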

The third element of my question is what one does with the value measured in the "incremental precision position". The generally accepted practice that I have encountered is to round the "incremental precision position" up or down to the level of precision specified by the designer (as represented by the number of decimal places in the dimensional requirement). This third piece of my question is the part that seems to be most at odds with ASME Y14.5M section 2.4. When the standard states "All limits are absolute. ... To determine conformance with limits, the measured value is compared directly with the specified value and any deviation outside the specified limiting value signifies nonconformance with the limits", one interpretation is that the "generally accepted practice" of rounding the "incremental precision position" in either direction is wrong. This phrase in the standard would imply that a dimensional requirement of .125 +/- .005" (specified value of .125", specified limiting value of .130") should be rejected for an as-measured value of .1301", or for that matter .130001". Is that true? Is this .0001" or .000001" what ASME means when it says "outside the specified limiting value"?
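
To make the two interpretations concrete, here is a small Python sketch for the .125 +/- .005" example (only the upper limit of .130" is checked, for brevity); it is meant only to illustrate the question, not to answer it:

    UPPER = 0.130       # specified limiting value
    SPEC_PLACES = 3     # precision implied by the drawing (.130)

    def accept_absolute(measured: float) -> bool:
        # "All limits are absolute": compare the raw reading to the limit.
        return measured <= UPPER

    def accept_rounded(measured: float) -> bool:
        # "Generally accepted practice": round the reading to the specified
        # number of decimal places before comparing.
        return round(measured, SPEC_PLACES) <= UPPER

    print(accept_absolute(0.1301))  # False - rejected under the absolute reading
    print(accept_rounded(0.1301))   # True  - accepted after rounding to .130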

This is the fundamental problem I am requesting assistance in understanding: can one legitimately round the least significant digit(s) of a measurement to the number of decimal places specified by the designer (which, for the sake of this question, is the defined level of precision for the requirement) without violating the specified limiting value?

As a final illustration, consider the following: a hole diameter is specified to be .121 +/- .001". The part is measured with an instrument that is capable of resolving (and is calibrated to) .00001", with the result that the part measures .12210". Should the part be accepted? If the answer is no, then should it be accepted if the measured value is .12209"? Should it be accepted if the measured value is .12201"?
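
Working the three readings through both interpretations (again only a sketch of the question, using the .122" upper limit):

    UPPER = 0.122
    SPEC_PLACES = 3

    for measured in (0.12210, 0.12209, 0.12201):
        absolute = measured <= UPPER
        rounded = round(measured, SPEC_PLACES) <= UPPER
        print(f"{measured:.5f}  absolute: {absolute}  rounded-to-spec: {rounded}")

    # Under the absolute reading all three readings exceed .122" and are rejected;
    # rounded to three places all three become .122" and are accepted.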

If your answer is that an instrument capable of resolving to .00000" is the wrong instrument, then what is the correct interpretation if the part were inspected with an instrument capable of resolving only .0000", and the measured value is .1220" when in reality the actual value is .12204"? Is the instrument not rounding down? Does this not violate the intention of the standard?
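
The lower-resolution case can be modeled the same way: the instrument itself effectively rounds the true value to its last resolvable digit before any limit is applied. The values below are the ones from the example.

    # Model a four-decimal-place instrument as rounding the true value to its
    # last resolvable digit before the limits are ever applied.
    def indicated_value(true_value: float, instrument_places: int) -> float:
        return round(true_value, instrument_places)

    true_value = 0.12204
    reading = indicated_value(true_value, instrument_places=4)
    print(f"{reading:.4f}")        # 0.1220 - the instrument has "rounded down"
    print(reading <= 0.122)        # True  - the reading conforms...
    print(true_value <= 0.122)     # False - ...even though the true value does not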

I would greatly appreciate clarification on this issue. My company's intention is to unambiguously comply with internationally recognized standards, of which we consider ASME Y14.5M a critical element. Until an internal difference of opinion prompted a thorough review of the applicable standards (we are in the process of researching ANSI/IEEE 268 for additional guidance), we thought that we were doing so. After reviewing the standard, it is not clear whether we are or are not. I would submit that section 2.4 of ASME Y14.5M falls just short of defining the interpretation of limits in terms of evaluating parts for conformance to those limits. This is the area in which we need clarification.

Jerry Eldred | Forum Wizard | posted 18 July 2000 12:55 PM
I apologize that I am in the midst of some rather 'brain-frying' activities, so I only gave this posting a quick read.

My gut response is that it is important to differentiate between precision and accuracy. It is totally correct that precision is, in layman's terms, a minimum resolution. I don't have all of my standards handy, but precision is like 'discrimination': the instrument's ability to discriminate small incremental changes in the measurand.

From the text of the posting, it seemed that 'precision' was being blended into 'accuracy.' They are most definitely different and distinct from each other.

I don't have time to go into great detail. But absolutely, an 8-digit display has greater 'precision' (resolution, discrimination) than a 6-digit display. That doesn't say anything about what difference there may be in accuracy, though. You could theoretically have an 8-digit-resolution piece of measuring equipment with lower accuracy than a 6-digit piece of equipment.
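
A toy numerical illustration of that point (the bias figures below are made up): more display digits do not imply a smaller error.

    true_value = 1.0000000

    # High resolution, but a hypothetical systematic bias of +0.002
    reading_8_digit = round(true_value + 0.002, 7)    # 1.0020000
    # Lower resolution, but essentially no bias
    reading_6_digit = round(true_value + 0.00001, 5)  # 1.00001

    print(abs(reading_8_digit - true_value))  # ~0.002   (more digits, less accurate)
    print(abs(reading_6_digit - true_value))  # ~0.00001 (fewer digits, more accurate)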

As for the requirement for a level of precision in the standard, I am not specifically familiar with the standard mentioned. But as a 24-year metrologist, it makes good sense to me. It makes things simpler and less confusing when working with measurands. If a measurand is 1 inch and is stated with a resolution of only 1 (versus, say, 1.000000 inches), then as a user of high-'accuracy' dimensional equipment the added digits give me a better understanding of the actual measurand. If I am using a gage block, maintaining a standard resolution of, say, 0.1 microinch standardizes the reading resolution.

As to the last part of your question, I am going to ask some of my dimensional gurus (I am not a dimensional guru - my claim to fame is in electrical areas). If I can get some worthwhile input, I will post for you.

Initially, my gut reaction would be that you define the level of resolution necessary for a given measurement. If you define 10X from the item under test (e.g., 0.125" +/- 0.005", with the measurement standard resolving to 0.1250"), then read only those digits defined as being part of the measurement. What is happening, it seems, is that you have much better measurement standards than the units being tested or calibrated. In having that much better resolution, you have the added baggage of digits of resolution that are not essential to the measurement process. Documenting how you will make the measurement, and then doing it that way every time, has more value than using the extra, unnecessary digits of resolution.

A further thought is to bear in mind the required degree of precision of the unit being tested. If the unit being tested doesn't need that degree of precision, there is, it seems, no value in testing to an 'ultra-high' level of precision.

Hope I haven't muddied the waters too much on this.

Steven Truchon | Forum Contributor | Fort Lauderdale, FL USA | posted 31 July 2000 06:07 PM
I agree with Jerry for the most part. Having spent the majority of my career in "Silicon Valley" metrology functions, I have the following comments, which are simplistic yet do reflect actual practices.

Limits are limits. A high limit of .13 is no different than .130000. If the measurements are legitimately accurate enough, .129999 is acceptable and .130001 is out of tolerance. This is not arguable in the ideal sense of the numbers. What I have always tried to keep in mind is to keep the measurements in sync with the requirements AND expectations. Most of my travels have revealed usage of a 4X rule over the 10X rule for measurement precision. It's just a starting point, and it all depends on what's being made. Irrigation pumps have much more flexibility outside a tolerance limit than a hard-drive read head does, for instance. So quantities of zeros are not equal across the board in that sense. I know I am crossing into engineering and design intent, but it does become a factor.
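
As a rough sketch of how a 4X or 10X comparison works (the numbers below are hypothetical):

    # Compare the feature tolerance to the instrument's accuracy; the ratio is
    # what the 4X or 10X rule is applied to (often called a TAR/TUR in calibration).
    def accuracy_ratio(feature_tolerance: float, instrument_accuracy: float) -> float:
        return feature_tolerance / instrument_accuracy

    # A +/-0.005" feature tolerance measured with a +/-0.0005" instrument
    ratio = accuracy_ratio(0.005, 0.0005)
    print(ratio)       # 10.0 - satisfies a 10X rule
    print(ratio >= 4)  # True - easily satisfies a 4X rule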

A feature with a tolerance of X should be measured with an instrument that is accurate to 4 to 10 times the single-direction feature tolerance, depending on the functional intent. When one reads beyond the requirement in terms of resolution and accuracy, it becomes a moot point. It also becomes a matter of economics: why measure in millionths when your tolerance is in thousandths? I know that's a stretch, but I used it to make a point.

I remember one company I worked at had a 10% out-of-tolerance rule. On a +/-.005 tolerance one could accept +/-.0055. I dunno. That was their rule. And it worked for them, and it was under a MIL-Q-9858A system.
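
That house rule amounts to widening the acceptance band by 10% of the tolerance on each side. A sketch (the .125" nominal is hypothetical, and this was that company's practice rather than anything in ASME Y14.5M):

    # Widen the acceptance limits by 10% of the tolerance on each side.
    def widened_limits(nominal: float, tol: float, allowance: float = 0.10):
        extra = tol * allowance
        return nominal - tol - extra, nominal + tol + extra

    lo, hi = widened_limits(0.125, 0.005)
    print(f"{lo:.4f} to {hi:.4f}")  # 0.1195 to 0.1305, i.e. +/-0.0055 about nominal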

From the mfg. world I live in, .12 and .120000000 are identical, depending...

Whaddya makin?


------------------
Steven Truchon
Precision Resource - Florida Division
www.precisionresource.com
stevent@pr-fl.com
