# Visual Inspection (Attribute Data) Gage R&R


#### bobby j

Hi,

So I just wanted to know if there is a way to do a Gage R&R on a visual inspection. There is no physical gage - I have 3 operators doing visual inspections on parts, and I would like to know if a Gage R&R study is possible. If so, how? I have Minitab and I am looking at the attribute gage study - is this it? How do I use it?

Thanks a lot.


#### Umang Vidyarthi

Re: Visual Inspection Gage R&R

Hi,

> So I just wanted to know if there was a way to do a Gage R&R on a visual inspection. There is no physical gage and I have 3 operators doing visual inspections on parts and I would like to know if there was a way to do a Gage R&R study. If so, how? I have Minitab and I am looking at the attribute gage study, is this it? How do I use it?
>
> Thanks a lot.

Hello Bobby, welcome to the Cove!

Very good first post. Let us say there are three operators A, B & C. The instruments are the eyes of the operators, say E1, E2 & E3. Now operator A can only use E1, and so on. You can neither have one operator using the three instruments, nor have a common instrument for the three operators. Since appraisers and instruments are not interchangeable, it would be a fruitless exercise.

Umang


#### bobdoering

Trusted Information Resource
Re: Visual Inspection Gage R&R

> Let us say there are three operators A, B & C. The instruments are the eyes of the operators, say E1, E2 & E3. Now operator A can only use E1, and so on. You can neither have one operator using the three instruments, nor have a common instrument for the three operators. Since appraisers and instruments are not interchangeable, it would be a fruitless exercise.

I consider the visual standard the gage, not the eyes - the eyes are a function of operator error. The key question in that study is whether the operators correctly identify known good and known bad parts. You need to understand whether they are capable of agreeing with the standards. If not, the standard needs to be clearer or another approach must be used.

Be sure to randomize the specimens - maybe even have them presented to the operator by another person to avoid bias (memorizing).
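(One way to set up that randomized, blinded presentation is a simple run sheet held by the coordinator, not the operator. This is just an illustrative sketch - the part labels, operator names, and counts are made up, not from any study in this thread:)

```python
import random

# hypothetical study: 20 specimens, 3 operators, 2 trials each
parts = [f"part_{i:02d}" for i in range(1, 21)]
operators = ["A", "B", "C"]
trials = 2

random.seed(7)  # fixed seed only so the printed run sheet is reproducible
for op in operators:
    for trial in range(1, trials + 1):
        # fresh random presentation order for every operator/trial pass
        order = random.sample(parts, k=len(parts))
        print(op, trial, order[:3], "...")
```

The coordinator reads parts off the sheet in order; the operators never see the labels, so they cannot memorize earlier calls.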

#### Bev D

##### Heretical Statistician
Super Moderator
Re: Visual Inspection Gage R&R

Of course you can!
I do it all of the time and I've found it to be quite valuable.
Google the article "When Quality Is a Matter of Taste" by David Futrell.

Then use the search engine here to search for attribute gage (<-- key words) R&R studies...

Tomorrow, I'll try to remember to post the formulas - in the meantime, try searching for "verification and validation of measurement systems". I posted that document in a thread, and it includes a description of this process...

#### Bev D

##### Heretical Statistician
Super Moderator
Re: Visual Inspection Gage R&R

thanks Stijloor!!

I'm such a Luddite


#### TWIBlogger

Re: Visual Inspection Gage R&R

I just completed a Gage R&R study this morning.

Just looking for any opinions on holes that I may have in my approach...please be gentle!

33 samples known to pass or fail. Several visual attributes are inspected per piece, so the first variable that may be a problem for this study is that inspectors may fail the same part for different reasons. Something to chew on...

Anyway, average reproducibility is .79.

Average repeatability is .73.

I determined Cohen's kappa, just for fun, per inspector. The average is k = .37, ranging from .25 to .58.

My understanding is that I want R&R to be .90 or higher. Is this correct, or a good goal? Is it even feasible? What is the next best step to close the gap between .7X and .90? I'm thinking a meeting where the inspectors sit down and calibrate over the differences in pass/fail?

Is kappa meaningful at all in this sort of situation?

Back to the plurality of attributes per piece. Should I conduct gage R&R per attribute? Would this help me better understand our weaknesses? Sort of a pareto approach to try and pinpoint precisely what is lowering our R&R? Thoughts??

Bryan
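(For anyone wanting to reproduce a per-inspector kappa like Bryan's, here is a minimal sketch of Cohen's kappa for binary pass/fail calls - comparing one inspector's two trials, which is a repeatability-style kappa. The trial data below is made up for illustration, not his actual results:)

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of binary pass/fail calls."""
    n = len(rater_a)
    # observed agreement: fraction of parts where the two sets of calls match
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement, from each set's marginal pass rate
    pa = sum(rater_a) / n
    pb = sum(rater_b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)
    return (po - pe) / (1 - pe)

# illustrative data: one inspector's two trials on 10 parts (1 = pass, 0 = fail)
trial1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
trial2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(trial1, trial2), 2))  # → 0.52
```

Kappa near 1 means agreement well beyond chance; near 0 means the agreement is about what guessing with the same pass rates would produce.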


#### TWIBlogger

Re: Visual Inspection Gage R&R

Sorry, I forgot to add that I had eight inspectors, two trials each.

#### bobdoering

Trusted Information Resource
Re: Visual Inspection Gage R&R

> My understanding is that I want R&R to be .90 or higher. Is this correct, or a good goal? Is it even feasible? What is the next best step to close the gap between .7X and .90?

Rules of Thumb:

| Parameter | Acceptable | Marginal | Unacceptable |
|-----------|------------|----------|--------------|
| E | >= .90 | .80 - .90 | < .80 |
| P(FR) | <= .05 | .05 - .10 | > .10 |
| P(FA) | <= .02 | .02 - .05 | > .05 |
| B | .80 - 1.20 | .50 - .80 or 1.20 - 1.50 | < .50 or > 1.50 |

Are they feasible? It depends - particularly on how obvious the fail condition is relative to the pass condition. For surface conditions (as an example), the level of permissible scratches (how many, how deep, how long, etc.) may not achieve .90. Detecting missing components can be 1.00.
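(A minimal sketch of how three of those parameters could be computed from calls against known-reference parts - assuming E is the fraction of correct decisions, P(FR) the fraction of good-part decisions that wrongly reject, and P(FA) the fraction of bad-part decisions that wrongly accept. Those readings, and the signal-detection-style bias measure B that is omitted here, are my assumptions, not definitions given in this thread:)

```python
def attribute_study_stats(decisions):
    """decisions: (reference, call) pairs; 1 = good/accept, 0 = bad/reject."""
    correct = sum(ref == call for ref, call in decisions)
    good_calls = [call for ref, call in decisions if ref == 1]
    bad_calls = [call for ref, call in decisions if ref == 0]
    e = correct / len(decisions)                  # effectiveness
    p_fr = good_calls.count(0) / len(good_calls)  # good parts rejected
    p_fa = bad_calls.count(1) / len(bad_calls)    # bad parts accepted
    return e, p_fr, p_fa

# illustrative: 10 known-good and 10 known-bad decisions, one error of each kind
decisions = [(1, 1)] * 9 + [(1, 0)] + [(0, 0)] * 9 + [(0, 1)]
print(attribute_study_stats(decisions))  # → (0.9, 0.1, 0.1)
```

Under those assumed definitions, this hypothetical study would be marginal on E, marginal on P(FR), and unacceptable on P(FA) against the table above.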

> Back to the plurality of attributes per piece. Should I conduct gage R&R per attribute? Would this help me better understand our weaknesses? Sort of a pareto approach to try and pinpoint precisely what is lowering our R&R? Thoughts??

You could set up the gage R&R specimens with some failing parts carrying a single failing attribute, and others carrying combinations. You are not limited in the number of specimens you present to the operators. This may help sort out whether there are specific attributes, or combinations of attributes, that create discrimination error - especially if they can occur singly in the process output. You could also do a gage R&R per attribute... a little more work, but it may provide further insight.

> I'm thinking a meeting where the inspectors sit down and calibrate over the differences in pass/fail?

That may be one of the last things you do; once you have those conversations, the probability of bias will increase, since the operators will know that is what you are looking at.


#### TWIBlogger

Re: Visual Inspection Gage R&R

> Rules of Thumb:
>
> | Parameter | Acceptable | Marginal | Unacceptable |
> |-----------|------------|----------|--------------|
> | E | >= .90 | .80 - .90 | < .80 |
> | P(FR) | <= .05 | .05 - .10 | > .10 |
> | P(FA) | <= .02 | .02 - .05 | > .05 |
> | B | .80 - 1.20 | .50 - .80 or 1.20 - 1.50 | < .50 or > 1.50 |

Thank you for the detailed feedback...however, can you tell me what each parameter represents?

Are your parameters the same thing as mine? If so, can you match them up to clarify?