CMM Uncertainty Calculation Question - Deviations


JAltmann

I am calculating uncertainty for a CMM and I have input the calibration vendor's measurement uncertainty into my calculations, but should I also be listing their reported deviations as well? It's been a while and I seem to be having an internal debate with myself.

TIA.
 

dwperron

Trusted Information Resource
Typically you would use the manufacturer's specifications, not the reported deviations from calibration results. You would definitely include the calibration vendor uncertainty, and you would also need at least measurement repeatability / reproducibility data to go along with environmental contributors, etc.
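
If it helps, the combination itself is usually just a root-sum-square of the standard uncertainties, expanded at k=2. A minimal sketch in Python (the contributor names and values below are made-up placeholders, not a complete budget):

Code:
import math

# Hypothetical standard uncertainties, all in micrometres (placeholder values)
contributors = {
    "cal_vendor": 1.2 / 2,   # vendor's expanded uncertainty (k=2), reduced to 1 sigma
    "repeatability": 0.8,    # from your own repeatability study (1 sigma)
    "temperature": 0.5,      # environmental contribution, estimated
}

u_c = math.sqrt(sum(u**2 for u in contributors.values()))  # combined standard uncertainty
U = 2 * u_c                                                # expanded uncertainty, k = 2
print(f"u_c = {u_c:.2f} um, U(k=2) = {U:.2f} um")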
 

ncwalker

Ah! One of my favorite topics.

Before I begin, I want to ask two questions:

1) On a CMM, why do we use DIFFERENT probe tips? What is the purpose of or reason for choosing a different tip?

2) On a CMM, when we are unhappy with a diameter, or plane, or whatever, do we do things like a) Add more probe hits and b) change the tip acceleration?



Please stop and think about these two questions before going further.



Your answer to BOTH questions is most likely something to do with measurement accuracy.

Consider Question 1: We all know a short, stiff probe tip with a big ruby works "best." The only reason we deviate from this is that we can't reach some feature with "the stubby." (And hopefully, we don't just do everything with a long, slender tip because it is easier.)

Consider Question 2: We all know more hits is better. The reason we don't take 1,000 hits is basically time. You have to get parts through the CMM.

The point of this is that things like probe selection and programming VERY MUCH affect how well a CMM measures. If my two questions have not convinced you of this, you may as well stop reading now. :)

On to your CMM calibration ... When that dude or dudette shows up, they are NOT concerned with how well your CMM is measuring your parts. They are concerned with whether the CMM is WORKING. That's a different thing. What they do is measure a VERY CONTROLLED object in a VERY CONTROLLED way at different locations in the CMM's active volume. And they want to see that they get the same results in different locations and orientations. And they will tell you the device's repeatability in XYZ and/or its volumetric accuracy.
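
(Side note: that volumetric number is usually quoted in the ISO 10360-2 style, something like E = ±(A + L/K) µm with L in mm, e.g. ±(1.5 + L/333) µm, so the permitted error grows with the length being measured. Check your own cal cert for the exact form.)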

You cannot just take this number and say "This is my uncertainty at measuring a part," because it isn't. In fact, you may do BETTER at measuring a part, because you (most likely) will load the part in the same location and the same orientation in the CMM's active volume.

If you want to know the uncertainty of measuring a particular feature, the only proper way to do it is to run a Gage R&R and get the uncertainty from that. Yeek. Suddenly it sounds like a lot of work. If you aren't convinced this is the correct way, think about this: let's say you write a program, you aren't careful, and you are shank-hitting a diameter that is hard to see. You know this will give you bad results. You CLEARLY cannot take the "uncertainty" from your calibration results and use it... But were you to do a Gage R&R, you would see the uncertainty represented correctly (large and bad).

Do you have to do this on every feature? Is there a strategy to minimize this? I have two answers ....

1) The one you won't like, but it's true. :) Look, to get a Gage R&R on ANY feature, you're going to run 12 parts 3 times at least. The CMM output is IN A COMPUTER at that point. KEEP it in a computer. You can make a big spreadsheet and put the features in rows and the results in columns. And then just copy down the formulas and calculate the uncertainties. The only real hard part is the cycling of the machine to take the measurements. Everything else is just an excuse.
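
If you want to skip the copy-pasting entirely, the rows-by-columns math is only a few lines of script. A bare-bones sketch (feature names and numbers are invented; a real Gage R&R also splits out reproducibility and part variation, this shows only the per-feature repeatability):

Code:
import statistics

# Rows = features, columns = repeated CMM runs (invented example values, mm)
results = {
    "bore_dia_10mm":  [10.002, 10.001, 10.003, 10.002, 10.001],
    "flatness_faceA": [0.012, 0.014, 0.011, 0.013, 0.012],
}

for feature, runs in results.items():
    mean = statistics.mean(runs)
    s = statistics.stdev(runs)   # repeatability, 1 sigma
    U = 2 * s                    # expanded (k=2), repeatability only
    print(f"{feature}: mean = {mean:.4f}  s = {s:.4f}  U(k=2) = {U:.4f}")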

2) If you aren't comfortable with this, or if a big Gage R&R table is beyond the scope of your customer and they want the "pretty" sheet, well, copy-pasting that many dimensions is a pain. So pick some key ones. A flatness. A short distance and a long distance. A diameter. A true position. You want to look at things you know will be problems. So you pick the tightest tolerances. True positions that are based on incomplete circles are also good candidates. Test your worst cases and call it good. Eventually, you will get a "feel" for things. And you will know that tip configuration X with Y probe hits will give you an uncertainty of Z on machined holes.

That's how it is done. The only way.
 

JAltmann


Thank you, dwperron, this is what I had recalled, but as I was putting the budgets together I suddenly couldn't fully remember. I have the other variables included.
 

JAltmann


Thanks, ncwalker. I am well versed in metrology and CMM practices. In the case of best measurement uncertainty you're going a little too deep. For particular measurements some of these would be applied, as needed/as applicable.
 

Coleman Donnelly

Hello,

Recently we have had a very hot topic start popping up around exactly this.

Internally we have a process where we test our CMM programs very thoroughly.

We start with a single-point touch-probe program where we take a grid of points every 0.040" on each surface.
We run this program 30x without moving the part (we call this a stability study).

We do this to establish a baseline for the actual dimensional results.

Then we change to tactile scanning at a relatively slow speed and run 30x, again without moving the piece, in order to capture the bias caused by scanning.

Next we perform a Type 1 gage study with the scanning program in order to identify the impact of the fixture.

Next we increase the scan speed incrementally and monitor the effects on bias and repeatability by performing additional Type 1 gage studies (1 pc run 30x, breaking the setup each run).

We do this so we can validate the impact of program/equipment/fixture on measurement error individually, and so we can isolate areas that need improvement.

Finally, once we are satisfied that the efficiency of scanning vs. the quality of results is acceptable (10-15% of tolerance), we run a full crossed Type 2 Gage R&R study using the 10x3x3 method to introduce variation from samples and operators and validate the system as a whole for production inspection.
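
For what it's worth, the Type 1 numbers can be pulled from the 30 repeated readings with a few lines of script. A minimal sketch (the reference value, tolerance, and readings below are invented placeholders, and the 20%-of-tolerance / 6-sigma convention is Minitab's default, not the only one in use):

Code:
import statistics

ref = 25.000   # certified reference value (placeholder), mm
tol = 0.050    # total tolerance of the characteristic (placeholder), mm

# Your 30 repeated readings go here (shortened placeholder list shown)
readings = [25.001, 24.999, 25.002, 25.000, 25.001,
            24.998, 25.001, 25.000, 25.002, 24.999]

xbar = statistics.mean(readings)
s = statistics.stdev(readings)
bias = xbar - ref

# Type 1 capability indices: 20% of tolerance vs. the 6-sigma spread
cg  = (0.2 * tol) / (6 * s)
cgk = (0.1 * tol - abs(bias)) / (3 * s)
print(f"bias = {bias:+.4f}  Cg = {cg:.2f}  Cgk = {cgk:.2f}")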

My question is... We use a grid of points at 0.040" as an arbitrary number to establish the "true value" of a dimension...
I am being challenged on this.
Why 0.040"?
Why not 0.020"?
Why not 0.000000000001"?

Is there a scientific method that describes an approach for estimating uncertainty of a probing strategy?
Could I reasonably say that if my point spacing is 0.040" then my opportunity for missing critical data is = or > X?
Is there a different way to do this?
 

Coleman Donnelly

Hmm... it's great that the Cove is back, but it does not seem to be the same.
 

ncwalker

I had a discussion once with a guy on systems modeling and response surfaces, with a problem similar to the one you describe: there is a grid of points, and what grid spacing is sufficient to validate the model? Now, this was a response surface and not an actual measurement, but I think the strategy may be the same, or you may be able to modify it.

The technique was: if you had, say, 100 data points along an axis, they built the model off every other data point, generating a response surface. Then they took the OTHER set, the points not used to generate the model, and calculated the residuals. They would then double the increment (every 4th, every 8th) and recalculate the residuals to get a feel for the effect of point spacing. When the residuals got too large, they knew they needed better resolution.
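
If you want to play with the idea, here is a crude sketch of that hold-out-and-refine loop (the "surface" is a made-up 1-D profile standing in for real CMM points):

Code:
import numpy as np

# Made-up 1-D profile: 100 sampled heights along an axis (stand-in for real data)
x = np.linspace(0.0, 4.0, 100)
z = 0.02 * np.sin(3.0 * x) + 0.005 * np.sin(25.0 * x)

for step in (2, 4, 8):   # build the model from every 2nd, 4th, 8th point
    model = np.interp(x, x[::step], z[::step])   # simple linear "response surface"
    held_out = np.setdiff1d(np.arange(len(x)), np.arange(0, len(x), step))
    resid = np.abs(z[held_out] - model[held_out])
    print(f"every {step}th point: max residual = {resid.max():.5f}")

When the max residual jumps, you have passed the resolution the data can support.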
 

Coleman Donnelly

If I read this correctly, I think this would be good for scaling back an inspection plan from a known starting point, like your example of 100 points, but it does not do a good job of determining why 100 points was an appropriate starting point.

I think I need to go in the other direction.
If I collect 100 points, can I predict the maximum margin of error at 100 points compared to taking infinite points?
Is there a way to calculate that 100 is equivalent to infinite +/- X?
Then I could extrapolate a procedure: X/tolerance range must be equal to or less than ... 10%?
 

ncwalker

Your interpretation of what I said was correct. If you measure the even points only and get a flatness, then measure the odd points, shifted by some amount, and get a flatness, you will get two different flatness measurements. The delta between these tells you the adequacy of the number of hits.

Your follow-on is also correct: this is not a prediction of how many points you need, rather a validation of whether what you're doing is sufficient.

I can't help you with predicting it purely from knowledge of the process. But I can get you started. There is a table of standard tolerances in the Machinery's Handbook that is based on the manufacturing process. What I mean is, it is a table of expected accuracy if, say, your process is milling, grinding, forging, etc.

If you consider this as sort of an expected change in z on a CMM or a profilometer, you should be able to trig out a needed change in x based on the geometric constraint you want. (I haven't done this myself, but I would try this route.)
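
Very roughly, the trig might look like this (a sketch only; the 0.0005" height variation is a made-up stand-in for the handbook value, and the 5 degree slope limit is my assumption):

Code:
import math

dz = 0.0005      # expected height variation from the process, inches
                 # (stand-in for the Machinery's Handbook table value)
theta_deg = 5.0  # steepest surface slope you expect to catch (assumed)

# A bump of height dz at slope theta spans about dz/tan(theta) laterally;
# sample at least twice across that span (Nyquist-style) to see it.
extent = dz / math.tan(math.radians(theta_deg))
dx_max = extent / 2.0
print(f'feature extent ~ {extent:.4f} in -> grid spacing <= {dx_max:.4f} in')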

Bear in mind, it would be aggregate. It would not account for, say, a void from a casting or forging process exposed by machining.
 