Gage R&R with large inter part variance - Torque

Mechanica

Hey all,

This is my first post.

I'm trying to set up a calibration system for a relatively accurate torque measurement.
I'm measuring torque tools using a torque measuring system.
The %Study Var I'm getting is below 30%.
The %Tolerance I'm getting is much higher than 30%.

I suspect the reason for the high %Tolerance is the tight tolerance being tested combined with the large inter-part variance.
If the tool being tested changes a bit every time I test it, should I try destructive tests?

What would give me a definitive answer as to the reason for the high GR&R?
Is it the inter-part variance?
Or is it insufficient resolution / noisy readings from the measurement system?
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
If your %Study Var is better than your %Tolerance, that is a dead giveaway that the spread in your parts is wider than your tolerance. Since the spread is supposed to represent the variation of your process, the spread you presented to the study is implicitly claiming that your process is not capable. If it is capable, I HIGHLY recommend using the historical standard deviation approach to describe your process variation rather than your sample part distribution.
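A minimal sketch of the arithmetic behind this point, with illustrative sigma values (none of these numbers come from the OP's study): when the part spread fed into the study is wider than the tolerance, %Study Var can look fine while %Tolerance fails.

```python
# Illustrative Gage R&R arithmetic. sigma_grr, sigma_pv, and tol are
# assumed values chosen to mimic the OP's situation: large inter-part
# spread, tight tolerance.
import math

sigma_grr = 0.05      # measurement system std dev (repeatability & reproducibility)
sigma_pv  = 0.40      # part-to-part std dev implied by the sampled parts
tol       = 0.60      # USL - LSL (a tight torque tolerance)

sigma_tv = math.sqrt(sigma_grr**2 + sigma_pv**2)   # total variation

pct_study_var = 100 * sigma_grr / sigma_tv         # GRR vs. total variation
pct_tolerance = 100 * 6 * sigma_grr / tol          # 6*sigma GRR spread vs. tolerance

print(f"%StudyVar  = {pct_study_var:.1f}%")   # -> 12.4%  ("acceptable")
print(f"%Tolerance = {pct_tolerance:.1f}%")   # -> 50.0%  ("unacceptable")
```

The same gage produces both numbers; only the yardstick changes, which is why the two percentages can disagree so sharply.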
 

Welshwizard

Involved In Discussions
The reason for comparatively low % study var is due to the relationship between measurement error and the spread of the chosen parts. If you are confident that the natural spread of variation which has been calculated for the parts is a true reflection of the manufacturing process then the % Study will indicate a number which can be relied upon.

The %Tolerance number relies on the relationship between the measurement error and the specification width; it has nothing to do with the sampled part spread. A well-planned study would only measure the feature of interest and not contaminate the results with the variation of other features (e.g., diameter with roundness).

The measurement error estimate is only valid if the process is consistent. This should be indicated by no observations appearing above the Upper Control Limit on the Range chart, which your software should plot. If that is not the case, investigate the process and perform the study again. There are other nuisance elements of measurement error that can be seen on the Average chart, but this post would be much longer if we brought those into play too, so ensuring that the measurement process is consistent is the first priority.
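A plot-free sketch of that Range-chart consistency check, assuming three repeat trials per part (the readings below are made up; D4 = 2.574 is the standard control-chart constant for subgroup size 3):

```python
# Consistency check: flag any subgroup whose range exceeds UCL_R = D4 * Rbar.
# trials_per_part holds one operator's 3 repeat readings on each part
# (hypothetical torque values, not the OP's data).
trials_per_part = [
    [10.1, 10.3, 10.2],
    [12.0, 12.1, 12.0],
    [ 9.8,  9.9, 10.4],
]
D4 = 2.574  # control-chart constant for subgroup size n = 3

ranges = [round(max(t) - min(t), 3) for t in trials_per_part]
r_bar  = sum(ranges) / len(ranges)
ucl_r  = D4 * r_bar

inconsistent = [i for i, r in enumerate(ranges) if r > ucl_r]
print("ranges:", ranges, "UCL_R:", round(ucl_r, 4))
print("out-of-control subgroups:", inconsistent)
```

If `inconsistent` is non-empty, the repeatability estimate is not trustworthy and the study should be investigated and rerun, as described above.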

I'm assuming that you are performing a GRR Study, in which case the comments on consistency apply to the other operators too.

If the measurement process is consistent and not chunky, it's performing the best it can under the conditions of the test, in which case the only way to get more favorable numbers would be to change the measurement device or change the spec.

On your point about the tool: you mean the torque wrench? The variation you see in each measurement could be due to many things, but before you decide to change anything, check whether you have consistency. If you don't, make a change and re-evaluate; if you do have consistency, it's performing the best it can.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
The reason for comparatively low % study var is due to the relationship between measurement error and the spread of the chosen parts. If you are confident that the natural spread of variation which has been calculated for the parts is a true reflection of the manufacturing process then the % Study will indicate a number which can be relied upon.

If the process variation is truly greater than the tolerance, then the OP is claiming the process is also drastically not capable - by definition. Is that true?

If the measurement process is consistent and not chunky, it's performing the best it can under the conditions of the test, in which case the only way to get more favorable numbers would be to change the measurement device or change the spec.

If you change the tolerance (which is typically not an option), it will only affect the %Tolerance. You might find a more accurate gage, but whatever gage you change to has to resolve the accuracy problem this gage faces - more resolution alone may not be the fix. And the real issue is your process: is it capable? How do you know? If not, what are you going to do about it? The gage won't fix that! It may be perfectly adequate for a process that is capable.
 

Welshwizard

Involved In Discussions
I'm sorry, I don't follow what you mean by your first question. My point was: however you act on the consumption of a specification by part variation, you need to take care that your estimate of part variation is a true reflection of your manufacturing process, or the number won't mean anything.

On your second paragraph: with the type of study potentially being performed here, accuracy (the difference between a true or reference value and a measured value) is not assessed. Rather, it's the repeatability of a set of observations across a subset of operators.

The questions of interest are whether the part can be sentenced with confidence and/or whether the measurement error (repeatability & reproducibility) can track or pick up variation in the part.

Actually, the two measures of tolerance consumption are poor insurance policies for answering both these questions, both overstate the effect measurement error has when it interacts with part variation, but this is another topic.

I agree, the only thing of real interest is knowing that when you see a signal on a process behaviour chart for a product, it's not due to measurement error, and you can act on it with confidence, return the process to stability, and read off the capability. For people not running process behaviour charts, it's how much one should allow for measurement error (if anything) when sentencing a part.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
I agree, the only thing of real interest is knowing that when you see a signal on a process behaviour chart for a product, it's not due to measurement error, and you can act on it with confidence, return the process to stability, and read off the capability. For people not running process behaviour charts, it's how much one should allow for measurement error (if anything) when sentencing a part.

When that is the issue you are trying to determine, it is critical that you are calculating the PV as a true representation of the process. If you are good enough (and most people are not) to have a 10-piece sample that represents the process variation, then the calculation using the part variation is fine. For most people, using the historical process standard deviation (AIAG MSA 4th ed., pg. 121) is a far more accurate representation of the process variation, and is the preferred method. If it is not known yet, the Pp (where TV = (USL-LSL)/(6*Pp)) should be a good estimate for a starting point (AIAG MSA 4th ed., pg. 122). It will also yield a more accurate ndc calculation, which is critical to understanding whether the gage has the statistical accuracy to detect signals within the process variation.
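A sketch of that historical-standard-deviation approach, assuming an illustrative historical sigma (the numbers are invented; the structure follows the MSA idea of replacing the sample-based TV with one from process history):

```python
# Replace the sample-based total variation with the historical process
# sigma, then recompute %StudyVar and ndc. sigma values are assumptions,
# not the OP's data.
import math

sigma_grr  = 0.05   # measurement system std dev from the R&R study
sigma_hist = 0.10   # historical process standard deviation (assumed known)

# With TV taken from history, part variation is what remains after
# removing measurement error:
sigma_tv = sigma_hist
sigma_pv = math.sqrt(max(sigma_tv**2 - sigma_grr**2, 0.0))

pct_study_var = 100 * sigma_grr / sigma_tv
ndc = math.floor(1.41 * sigma_pv / sigma_grr)   # number of distinct categories

print(f"%StudyVar = {pct_study_var:.1f}%")   # -> 50.0%
print(f"ndc = {ndc}")                        # -> 2
```

Here a gage that looked fine against an inflated sample-part spread shows %StudyVar of 50% and an ndc of 2 against the historical process spread, i.e., it cannot resolve the signals within the real process variation.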

The OP states that %Study Var is better than %Tolerance, which means TV > Tol.
 