Do I need part variation while doing a Destructive Variable Gage R&R MSA study?


ncwalker

The goal is to test the measurement system, not just the gage.

100% agree. I slipped in my original post and called it "gage" when it is "measurement system." For the dear readers - that means not just the device that spits out the number; it also includes the fixturing, the operators, the consistency of the parts, and the ambient conditions. All of these feed into the system.

Stamping, casting, extruding and molding all might make it difficult or impossible (at least in a practical sense) to create the variation necessary to include the entire tolerance range.

I agree with the practicality aspect. If it is practical to create parts throughout the range, it should be done. If it is not practical (and the things you cite are not), my response is - save the setup parts when the process is new. Even in a mold, there are usually first-off tooling samples that are then tweaked into final samples to correct small dimensional errors. Those first-off parts, the setup parts, heck, even the parts made outside the normal process window, are all GOLDEN PARTS to hang on to for the Gage R&R.
 

ncwalker

This is correct, but also includes where the gage is used for SPC and capability studies.

Agreed. (In my head, SPC and capability are a subset of DMAIC, not separate things. But I could have been more clear.)
 

Miner

Forum Moderator
Leader
Admin
Agreed. (In my head, SPC and capability are a subset of DMAIC, not separate things. But I could have been more clear.)
That is true, but I was being cognizant of the fact that many companies do not use DMAIC, but do use SPC and capability studies.
 

ncwalker

It's the right call, Miner. I fall into the trap of expecting others to have the same experiences as I do. That sometimes makes it hard to judge the frame of reference.
 

Jim Wynne

Leader
Admin
I agree with the practicality aspect. If it is practical to create parts throughout the range, it should be done. If it is not practical (and the things you cite are not), my response is - save the setup parts when the process is new. Even in a mold, there are usually first-off tooling samples that are then tweaked into final samples to correct small dimensional errors. Those first-off parts, the setup parts, heck, even the parts made outside the normal process window, are all GOLDEN PARTS to hang on to for the Gage R&R.
In the great majority of cases, this is not necessary and is unlikely to be fruitful. At some point we must allow common sense and experience to prevail and stop allowing the tail to wag the dog. If the process is being tweaked when tooling is new, there is no benefit in saving parts that don't represent the approved state of the process.
 

ncwalker

If the process is being tweaked when tooling is new, there is no benefit in saving parts that don't represent the approved state of the process.

I don't know, Jim. Again, I'm less sure myself what the MSA study has to do with the process. For example, a bad part is outside the process - the result of the process performance being unknown or the process being out of control. If I am evaluating a gage system, I want to see it catch a bad part. This raises my confidence that the thing actually works. When I do a control plan audit and there is some checking operation or a poka-yoke, I want to see it catch the red rabbit part. Having out-of-normal-process parts in the MSA study is (IMHO) the mathematical equivalent of confirming the red rabbit part.

I have seen cases where an MSA study fails and then is redone with setup parts and passes. But I will add this caveat - often we say "fail" and "pass" when we mean "customer unhappy" and "customer happy." Sometimes what makes the customer happy isn't statistically sound. But in this case, I think it IS statistically sound.
 

Jim Wynne

Leader
Admin
I don't know, Jim. Again, I'm less sure myself what the MSA study has to do with the process. For example, a bad part is outside the process - the result of the process performance being unknown or the process being out of control. If I am evaluating a gage system, I want to see it catch a bad part.
How would this bad part be discovered? How can you logically tie the discovery of a bad part to the use of a similar bad part in a gage study?

When I do a control plan audit and there is some checking operation or a poka-yoke, I want to see it catch the red rabbit part. Having out-of-normal-process parts in the MSA study is (IMHO) the mathematical equivalent of confirming the red rabbit part.
I'm not following the logic here. If you have introduced a known bad part into the process stream in order to see if it gets caught, and it doesn't get caught, how would you control for all of the variables that could contribute to it not being caught? In other words, how can you logically assume that because a similar bad part had not been used in the MSA, that fact accounts for it not being caught in production? It makes no sense, frankly.

I have seen cases where an MSA study fails and then is redone with setup parts and passes. Sometimes what makes the customer happy isn't statistically sound. But in this case, I think it IS statistically sound.
If your contention is that using an out-of-tolerance part is "statistically sound" in terms of evaluating the likelihood of being able to detect a similarly out-of-tolerance part in production, please show your work.
 

ncwalker

I do want to say I am very much enjoying the discussion. But I'll get back to this in a day or two.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
If your short-term variation used to prepare the parts is too tight, you will not be able to tell them apart and will fail the GR&R. In such a case, fabricating "worse" parts may make sense. However, they may not represent the actual process variability, so using the historical standard deviation approach to represent process variation, in conjunction with the fabbed worse parts, may be the best option there.
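
To make the historical-standard-deviation idea concrete, here is a minimal sketch (not from the post - the function name and the figures are illustrative assumptions): the GR&R standard deviation is compared against a historical estimate of total process variation rather than against the part variation seen in the study.

```python
# Minimal sketch of the historical standard deviation approach described above.
# The function name and the example figures are illustrative assumptions.

def percent_grr(sigma_grr: float, sigma_total_historical: float) -> float:
    """%GRR using a historical estimate of total process variation
    instead of the part variation observed in the study."""
    return 100.0 * sigma_grr / sigma_total_historical

# Example: gage sigma of 0.02 versus a historical total sigma of 0.10
print(f"{percent_grr(0.02, 0.10):.1f} %GRR")  # 20.0 %GRR
```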
 

Welshwizard

Involved In Discussions
In any typical measurement study, you design it to understand whether or not the measurement process is consistent, and then characterise the results. When the study confirms consistency, you typically characterise the measurement error and then its usefulness, in the form of the ability of the measurement process to pick up the variation in the parts.

If you can't, for whatever reason, select parts which represent a faithful spread of the process variation, then any characterisation involving this aspect will clearly not make sense.

This is difficult with a destructive process, but if it is possible, take measurements of multiple parts as an ongoing process. When you have, say, 20 or more, plot the characteristic on an ImR chart. When the average moving range from this chart is divided by the bias correction factor of 1.128, the outcome is an estimate of the Total Variation (TV) of the process.
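
As a rough illustration of that calculation (the readings below are invented, not from any real study):

```python
# Sketch of the ImR-chart calculation: the average moving range divided by the
# bias correction factor d2 = 1.128 estimates the total standard deviation.
# The readings below are invented purely for illustration.

measurements = [10.02, 10.05, 9.98, 10.01, 10.07, 9.99, 10.03, 10.00,
                10.06, 9.97, 10.04, 10.02, 9.96, 10.05, 10.01, 10.03,
                9.99, 10.04, 10.00, 10.02]  # 20 single readings, in time order

moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
average_mr = sum(moving_ranges) / len(moving_ranges)

d2 = 1.128  # bias correction factor for moving ranges of size 2
sigma_tv = average_mr / d2  # estimate of the Total Variation (TV), as a sigma

print(f"average moving range = {average_mr:.4f}")
print(f"estimated TV (sigma) = {sigma_tv:.4f}")
```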

Once you have this estimate of TV, you can compute more appropriate estimates of usefulness, such as Don Wheeler's Intraclass Correlation Coefficient.
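
A sketch of one common form of that coefficient - the proportion of total variance attributable to the product rather than the measurement system - assuming a repeatability sigma is available from a separate consistency (range) chart; both figures below are invented:

```python
# Sketch of Wheeler's Intraclass Correlation Coefficient in a common form:
# rho = 1 - (sigma_e / sigma_tv)**2, i.e. the proportion of the total variance
# that comes from the product rather than from the measurement system.
# Both inputs are invented figures, not values from this thread.

sigma_e = 0.015   # repeatability (measurement error) sigma, e.g. from a range chart
sigma_tv = 0.035  # total variation estimated from the ImR chart above

rho = 1.0 - (sigma_e / sigma_tv) ** 2
print(f"intraclass correlation = {rho:.3f}")  # closer to 1 => more useful measurement
```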

I hope this helps, good luck!
 