Ron Rompen

Trusted Information Resource
Trusted
#1
Haven't been able to find any previous references to this question, but I am sure that some of the members here will have suggestions as to how to approach this issue.

I have been instructed to perform GR&R on an automated inspection cell. There are several discrete test stations in the cell, and it is possible to identify which station a reject came from.

My question is how best to perform and analyze the study. At first I thought of doing it as an attribute study (the cell basically gives a pass/fail result); however, I also want to capture the fact that a part is consistently rejected at the SAME station, demonstrating that the gauging is repeatable.

The other question is how to deal with reproducibility. Parts are loaded onto a conveyor before entering the cell and are then handled by robots through all subsequent stages. The robots are error-proofed so that an incorrectly loaded part will not be processed.

There is no opportunity for operator influence that I can identify, unless I have overlooked something.

Any suggestions as to how to capture and analyze the data?

Thanks
 

Miner

Forum Moderator
Staff member
Super Moderator
#2
Is the decision to classify the part as Pass/Fail based on a continuous measurement? This measurement may not be displayed, but may potentially be accessible if the data are stored by the tester. If this is the case, you may treat the stations as if they were operators. Reproducibility would then be station to station reproducibility rather than operator to operator.
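To illustrate the "stations as operators" idea, here is a minimal sketch of a crossed Gage R&R on the underlying continuous readings, using the AIAG average-and-range method. All data are simulated and the K1/K2 constants shown are the standard AIAG values for 3 trials and 3 appraisers; none of this comes from the actual cell.

```python
# Sketch: crossed Gage R&R (average-and-range method) with inspection
# stations treated as "operators". All measurements are simulated.
import numpy as np

rng = np.random.default_rng(1)

n_parts, n_stations, n_trials = 10, 3, 3
true_part = rng.normal(10.0, 0.5, n_parts)        # part-to-part variation
station_bias = rng.normal(0.0, 0.05, n_stations)  # station-to-station bias
noise = rng.normal(0.0, 0.03, (n_parts, n_stations, n_trials))
# measurements[part, station, trial]
measurements = true_part[:, None, None] + station_bias[None, :, None] + noise

# Repeatability (EV): mean within-cell range times K1 (AIAG, 3 trials)
K1 = 0.5908
cell_ranges = measurements.max(axis=2) - measurements.min(axis=2)
EV = cell_ranges.mean() * K1

# Reproducibility (AV): range of station averages, adjusted for EV
# K2 is the AIAG constant for 3 appraisers (here, 3 stations)
K2 = 0.5231
station_means = measurements.mean(axis=(0, 2))
x_diff = station_means.max() - station_means.min()
AV = np.sqrt(max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0.0))

GRR = np.hypot(EV, AV)  # combined gage R&R
print(f"EV={EV:.4f}  AV={AV:.4f}  GRR={GRR:.4f}")
```

In a real study the simulated array would be replaced by the stored tester readings, with each part measured the same number of times at each station.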
 

Ron Rompen

Trusted Information Resource
Trusted
#3
Thanks Miner. The decision is made (if I understand your question) based on discrete measurements in some cases, and continuous measurements in others. I have considered using each station in the cell as a separate operator.

Thanks for your input
 

bobdoering

Stop X-bar/R Madness!!
Trusted
#8
I have performed a 50-piece attribute study, with about half bad pieces, including representative rejects for each station. I run them through 9 times and use the first three trials as operator 1, the second three as operator 2, and the third three as operator 3. Then you can see whether there were any stability issues over the run, which would likely be more meaningful than just running stations as operators. It is obvious on the Minitab charts if there is one. An attribute study answers the question of whether or not the gage can detect bad parts, and that is pretty much all you need to know. Rarely will variable data from a gage like this be used to adjust the process, so variable studies are not very productive.
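The 9-trial scheme described above can be summarized with a short script: trials 1-3, 4-6, and 7-9 are grouped as three pseudo-operators, and agreement is checked within groups, across all trials, and against the known reference. The pass/fail data and the 2% miscall rate below are simulated placeholders, not real study results.

```python
# Sketch: analyzing a 50-piece, 9-trial attribute study where trial
# groups (1-3, 4-6, 7-9) act as pseudo-operators. Data are simulated.
import numpy as np

rng = np.random.default_rng(7)

n_parts = 50
reference = np.array([0] * 25 + [1] * 25)    # 1 = known-bad part
# Simulate 9 pass/fail calls per part with a 2% chance of a wrong call
flips = rng.random((n_parts, 9)) < 0.02
results = np.where(flips, 1 - reference[:, None], reference[:, None])

# Within-"operator" repeatability: all 3 calls in each group agree
groups = results.reshape(n_parts, 3, 3)      # (part, operator, trial)
within_agree = groups.min(axis=2) == groups.max(axis=2)
repeatability = within_agree.all(axis=1).mean()

# Full agreement across all 9 trials, and effectiveness vs. reference
all_agree = results.min(axis=1) == results.max(axis=1)
effectiveness = (results == reference[:, None]).mean()

print(f"parts with full within-group agreement: {repeatability:.1%}")
print(f"parts where all 9 trials agree:        {all_agree.mean():.1%}")
print(f"overall agreement with reference:      {effectiveness:.1%}")
```

Keeping the results in trial order also lets you plot calls over the run to spot the stability issues mentioned above.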
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#9
There are reasons for getting the continuous data other than whether or not it will be used to 'adjust' the equipment...

Since many of my products (which are measurement devices) are used for pass/fail calls, I have found that the continuous data can be invaluable in helping to understand how to improve the measurement system when:

- There are many parts at the pass/fail margin (I determine this from a histogram of the continuous data over a longer period of time, so I know how to select MSA samples that are representative of the population distribution)

- The measurement system variation is larger than the marginality of results (in other words, the MSA fails)


Of course if your parts are representative of the distribution of results and you pass the MSA then you don't need to gather the continuous data.
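The marginality check described above can be sketched in a few lines: count how many parts sit close enough to the pass/fail limit that measurement-system variation could flip their classification. The limit, GRR sigma, and readings below are all hypothetical.

```python
# Sketch of the "marginality" check: what fraction of parts lie within
# +/- 3 sigma_GRR of the pass/fail limit? All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(3)

readings = rng.normal(10.0, 0.20, 500)   # simulated continuous tester data
limit = 10.35                            # assumed single upper pass/fail limit
grr_sigma = 0.05                         # assumed measurement-system std dev

# A part is "marginal" if the gage could plausibly call it either way
marginal = np.abs(readings - limit) < 3 * grr_sigma
print(f"{marginal.mean():.1%} of parts are at the pass/fail margin")
```

If that fraction is large relative to the gage's miscall rate, an attribute study alone can look fine while many real parts are still at risk of misclassification.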
 

bobdoering

Stop X-bar/R Madness!!
Trusted
#10
I was going to add to my comment that there may be academic reasons - or deeper analysis needs, especially if the machine fails - to collect variable data, but I left it out to simplify the point that a machine designed to tell good parts from bad parts really needs to prove that it can do just that. Sometimes people overlook the simple answer when it may answer the question they are asking.
 
