Gage R&R on Potentially Changing Parts - Guidance Needed

Sara013

Hi...! First off, let me warn you that I'm an R&D Engineer and (clearly) not at all an expert in Gage R&R studies. I really don't know all of the proper terminology, so I apologize ahead of time. I used to have a wonderful Quality Engineer guiding me through all of this, but that was years ago and at a different company.... So, if you can help, feel free to use small words so I can understand. :D

Here's my current situation:
5 Test Units
3 Operators
2 tests per unit

Ought to be a straightforward Gage R&R. But....

Each operator, taken alone, demonstrates a very high degree of repeatability, based on a one-way ANOVA.

With all three operators together, the total Gage R&R %Study Variation runs upwards of 60-80%, depending on which test I'm analyzing. And that tells me the test method is not reproducible. (Right?)
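In case it helps anyone check my arithmetic: my understanding is that %Study Variation for a crossed study comes from the ANOVA variance components. Here's a rough Python sketch of that calculation — the function name and the synthetic data are my own, purely for illustration:

```python
import numpy as np

def gage_rr_percent_sv(data):
    """Crossed Gage R&R %Study Variation from ANOVA variance components.
    data: array shaped (parts, operators, replicates)."""
    p, o, r = data.shape
    grand = data.mean()
    part_means = data.mean(axis=(1, 2))
    oper_means = data.mean(axis=(0, 2))
    cell_means = data.mean(axis=2)

    ss_part = o * r * ((part_means - grand) ** 2).sum()
    ss_oper = p * r * ((oper_means - grand) ** 2).sum()
    ss_cell = r * ((cell_means - grand) ** 2).sum()
    ss_po = ss_cell - ss_part - ss_oper                  # part*operator interaction
    ss_rep = ((data - cell_means[:, :, None]) ** 2).sum()  # repeatability (error)

    ms_part = ss_part / (p - 1)
    ms_oper = ss_oper / (o - 1)
    ms_po = ss_po / ((p - 1) * (o - 1))
    ms_rep = ss_rep / (p * o * (r - 1))

    # Variance components (negative estimates clipped to zero)
    var_rep = ms_rep
    var_po = max((ms_po - ms_rep) / r, 0.0)
    var_oper = max((ms_oper - ms_po) / (p * r), 0.0)     # reproducibility
    var_part = max((ms_part - ms_po) / (o * r), 0.0)

    grr = var_rep + var_oper + var_po
    total = grr + var_part
    return 100.0 * np.sqrt(grr / total)

# Made-up data: 5 parts x 3 operators x 2 replicates,
# with a deliberate operator-to-operator offset
rng = np.random.default_rng(1)
part_effect = rng.normal(0.0, 1.0, size=(5, 1, 1))
oper_effect = np.array([0.0, 2.0, -2.0]).reshape(1, 3, 1)
data = 10.0 + part_effect + oper_effect + rng.normal(0.0, 0.1, size=(5, 3, 2))
print(f"Gage R&R = {gage_rr_percent_sv(data):.1f}% of study variation")
```

With a strong operator offset like this, the %SV comes out high even though each operator's own replicates are tight — which is exactly the "great repeatability, poor reproducibility" pattern.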

So I tried a quick experiment with the original operator (who has the greatest amount of experience) performing the testing again, several days after their first round of testing.

Again, compared to that operator's original data, each group of data individually demonstrated high repeatability; together, the data showed horribly poor reproducibility.

I'm currently going through all the processes, trying to find and clarify any possible point of variation, but here's my fear:

It is very possible that the product itself is the source of variation. None of the test units are from the same manufacturing lot, there's no telling how often the test units were used before, there's no telling how or when the tests units were made, and there's no telling how the test units might degrade over time and use.

Not my ideal batch for Gage R&R, but it's what my customer has requested.

My first instinct was to find a representative test unit with everything known and consistent, and just test that (or several such units) to focus on the method and the operator without the added trouble of inconsistent, unknown parts.

So... if you've read this far, thank you, I really do appreciate it.

Any advice? Guidance? Thoughts? Am I looking at the wrong thing, am I misinterpreting the data, etc.?

Any help you could give me would be greatly appreciated.
 

Miner

Forum Moderator
I occasionally run across similar issues. It does take some effort to track the problem down because you have to isolate the different sources of variation.

The most basic assumption of a crossed R&R study is that the part itself does not change, so start by verifying that assumption. I recommend using a variant of a stability study:

Select one part, preferably one you have not tested previously.
Select a single operator (the experienced one).
Have them test that part 3-5x in succession.
Wait some period of time (an hour, shift or day), then repeat the 3-5 successive checks.
Keep repeating this cycle, with the same wait between rounds, at least 6 times, all on the same part using the same operator, over a period of time equivalent to that of your original series of tests or longer.

This will tell you the major source of variation relative to the part (i.e., successive checks, over time, trend from degradation).
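Once you have the data, the breakdown can be summarized in a few lines. A hypothetical Python sketch (the function, and the fake "degrading part" data, are illustrative only):

```python
import numpy as np

def stability_summary(sessions):
    """Summarize a stability study.
    sessions: array (n_sessions, checks_per_session) for one part,
    one operator, measured repeatedly over time."""
    session_means = sessions.mean(axis=1)
    within_sd = sessions.std(axis=1, ddof=1).mean()    # successive-check noise
    between_sd = session_means.std(ddof=1)             # session-to-session spread
    # Linear trend of session means over time ~ degradation
    slope = np.polyfit(np.arange(len(session_means)), session_means, 1)[0]
    return within_sd, between_sd, slope

# Hypothetical part that loses ~0.3 units of force per session
rng = np.random.default_rng(7)
t = np.arange(6).reshape(6, 1)                         # 6 sessions over time
checks = 10.0 - 0.3 * t + rng.normal(0.0, 0.05, size=(6, 5))  # 5 checks each
w, b, s = stability_summary(checks)
print(f"within-session sd={w:.3f}, between-session sd={b:.3f}, trend={s:+.3f}/session")
```

Tight within-session spread with a clear negative trend points at the part degrading; tight within-session spread with large but trendless between-session spread points at something changing between test sessions.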

Once you have this information, post your findings and we can guide you on the next step.
 

Sara013

Thank you for the suggestion, I do appreciate it!

I had already started doing as you suggested, and it's taken me a while to gather the data, as I was also able to make some process improvements in hopes of eliminating all possible sources of variation before really beginning the testing.

(The parts I'm testing aren't simple single material things. Each is a small contained system of different materials, most of which can shift position and orientation due to handling, and there's no way of telling it happened or controlling it. So I have to believe that this is a great contributor to variation....)

I did testing on the same part across three days, collecting one set of data per day. Again, each day's data exhibits great repeatability. But, looking at all three days, the reproducibility is still pretty poor at 30-40% Study Var. (I even did this same three-day testing on a group of parts, and the results were the same: great repeat. / poor repro.)

One test yielded data that showed a steady decay in force, which leads me to believe that, while the part itself is still acceptable and functional, the test is somewhat "destructive".

Another test yielded data that showed no such trend, the data sets were just too dissimilar. In this particular situation, how can you judge whether it's the part that is actually changing, or it's the test itself that is inadequate for measuring the part?
 

Miner

Forum Moderator
I have experienced similar issues with product that has highly complex inner mechanical workings, where variation in friction, lubrication, etc. caused a lot of within-part variation. In order to separate this from the test, I used a consistent substitute. For example, if the test equipment was measuring activation force, I substituted a simple spring to measure force. If the tester measures spring force consistently, but not activation force, the variation lies within the product itself.
 

Sara013

Thank you very much for your input!

I had suggested using a substitute, but my customer was against that for some reason. I may have to suggest it again, with all the data pointing to the same thing, and maybe a nice blinking neon sign. :frust:

I'll keep hammering away at this, but it's nice to know that I'm not alone in seeing this sort of issue. Thanks!!
 