Visual Inspection (Attribute Data) Gage R&R

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
Re: Visual Inspection Gage R&R

Thank you for the detailed feedback...however, can you tell me what each parameter represents?

Are your parameters the same thing as mine? If so, can you match them up to clarify?

I am referencing the Help pages from SPC XL:

Effectiveness (E): The ability of an inspector to distinguish between defective and non-defective parts. Effectiveness is the number of parts identified correctly divided by the total number of opportunities to be correct.

Probability of False Rejects (P(FR)): The likelihood of rejecting a good part. The number of times good parts are rejected as bad divided by the total number of opportunities to rate good parts.

Probability of False Acceptance (P(FA)): The likelihood of accepting a bad part. The number of times bad parts are accepted as good divided by the total number of opportunities to rate bad parts.

BIAS (B): A measure of an inspector's tendency to falsely classify a part as good or bad. BIAS is computed as P(FR) divided by P(FA).
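In other words, all four parameters come down to simple counting. A minimal sketch of those formulas, just to make them concrete (the data and function name here are hypothetical, not anything from SPC XL):

```python
# Minimal sketch of the attribute metrics described above.
# Data and helper names are hypothetical; only the formulas follow the Help text.

def attribute_metrics(results):
    """results: list of (truth, call) pairs, each 'good' or 'bad'."""
    total = len(results)
    correct = sum(1 for truth, call in results if truth == call)
    good_opportunities = sum(1 for truth, _ in results if truth == "good")
    bad_opportunities = sum(1 for truth, _ in results if truth == "bad")
    false_rejects = sum(1 for truth, call in results if truth == "good" and call == "bad")
    false_accepts = sum(1 for truth, call in results if truth == "bad" and call == "good")

    e = correct / total                        # Effectiveness
    p_fr = false_rejects / good_opportunities  # Probability of False Rejects
    p_fa = false_accepts / bad_opportunities   # Probability of False Acceptance
    b = p_fr / p_fa if p_fa else float("inf")  # BIAS = P(FR) / P(FA)
    return e, p_fr, p_fa, b

# Example: one inspector's calls on a mixed set of known-good and known-bad parts
calls = [("good", "good"), ("good", "bad"), ("bad", "bad"), ("bad", "good"),
         ("good", "good"), ("bad", "bad"), ("good", "good"), ("bad", "bad")]
print(attribute_metrics(calls))  # -> (0.75, 0.25, 0.25, 1.0)
```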

Not sure how they match to yours, although I might be able to tell if I had your raw data in Excel format.
 

TWIBlogger

Re: Visual Inspection Gage R&R

Not sure how they match to yours, although I might be able to tell if I had your raw data in Excel format.

Summary (n = 8)

        Avg    Min    Max    StDev
E       0.73   0.61   0.82   0.07
P(FR)   0.26   0.16   0.48   0.11
P(FA)   0.31   0.13   0.50   0.13
B       1.18   0.32   3.84   1.16

Summary per inspector

Inspector    E      P(FR)   P(FA)   B
1            0.73   0.28    0.25    1.12
2            0.61   0.48    0.13    3.84
3            0.79   0.16    0.38    0.43
4            0.82   0.20    0.13    1.60
5            0.76   0.16    0.50    0.32
6            0.79   0.16    0.38    0.43
7            0.67   0.32    0.38    0.85
8            0.67   0.32    0.38    0.85
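For reference, the summary line can be recomputed from the eight per-inspector figures; a minimal sketch in Python (working from the rounded values above, so the last digit may differ slightly from the posted summary):

```python
# Reproduce the summary statistics (avg, min, max, sample stdev) from the
# eight per-inspector values posted above (rounded, so results may differ
# in the last digit from the original summary).
from statistics import mean, stdev

per_inspector = {
    "E":     [0.73, 0.61, 0.79, 0.82, 0.76, 0.79, 0.67, 0.67],
    "P(FR)": [0.28, 0.48, 0.16, 0.20, 0.16, 0.16, 0.32, 0.32],
    "P(FA)": [0.25, 0.13, 0.38, 0.13, 0.50, 0.38, 0.38, 0.38],
    "B":     [1.12, 3.84, 0.43, 1.60, 0.32, 0.43, 0.85, 0.85],
}

for name, values in per_inspector.items():
    print(f"{name:6s} avg={mean(values):.2f} min={min(values):.2f} "
          f"max={max(values):.2f} stdev={stdev(values):.2f}")
```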
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
Re: Visual Inspection Gage R&R

You have some genuine chaos there. Some inspectors make more errors detecting failures than they do detecting acceptable parts, and others show the opposite pattern. Some are misidentifying the correct condition quite significantly.

I would go back and look at the specific specimens that were most often incorrectly identified. What conditions are on those parts? Any theory why they would be more likely to be incorrectly identified than the others? This may help zero in on the areas that need to be clarified with further retesting, without re-running each and every condition. It may also help identify some interactions. Again, I would avoid the temptation of asking the operators which were harder just yet, due to the risk of bias.
 

Jim Wynne

Leader
Admin
Re: Visual Inspection Gage R&R

Back to the plurality of attributes per piece. Should I conduct a gage R&R per attribute? Would this help me better understand our weaknesses? Sort of a Pareto approach to try to pinpoint precisely what is lowering our R&R? Thoughts?


You could set up the gage R&R specimens with some failing parts having single failing attributes, and others with combinations. You are not limited in the number of specimens you present to the operators. This may help sort out whether there are any specific attributes or combinations of attributes that create discrimination error - especially if they can exist singly in the process output. You could also do a gage R&R per attribute...a little more work, but it may provide further insight.

I think this is a recipe for confusion. If the only thing you're concerned about is whether an appraiser can tell good from bad, regardless of individual characteristics and criteria, it might be OK.

If you want to know something about appraisers' abilities to make judgments for each characteristic/criterion, however, you need to separate the studies, with each one focused on an individual characteristic.

I have a feeling that at least some of the heartburn the OP is experiencing has to do with Appraiser "A" rejecting a part for characteristic #1 and Appraiser "B" rejecting the same part but for a different reason. You need to be able to make sure that each appraiser understands the criteria for each characteristic, and that the criteria are sufficiently objective to allow for consistent decisions.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
Re: Visual Inspection Gage R&R

If you want to know something about appraisers' abilities to make judgments for each characteristic/criterion, however, you need to separate the studies, with each one focused on an individual characteristic.

That may be fine if there are no characteristic interactions to consider and they have the time to run that many series of studies. Looking at the results of individual samples within a varied set, as I described, would likely provide the same insight - or, if not, at least information to reduce the number of individual studies. It would also limit bias: if you are looking for one thing by itself, you can be much more focused on it and skew the results toward the better.
 

Jim Wynne

Leader
Admin
Re: Visual Inspection Gage R&R

That may be fine if there are no characteristic interactions to consider and they have the time to run that many series of studies. Looking at the results of individual samples within a varied set, as I described, would likely provide the same insight - or, if not, at least information to reduce the number of individual studies. It would also limit bias: if you are looking for one thing by itself, you can be much more focused on it and skew the results toward the better.

It would be helpful to know what types of defects, standards and criteria the OP is dealing with.
 

TWIBlogger

Re: Visual Inspection Gage R&R

It would be helpful to know what types of defects, standards and criteria the OP is dealing with.

Without revealing too much...the defects are cosmetic in nature: scratches, bubbles, pits, specks, runs, etc.

Bubbles, pits, and specks that cause problems are in the neighborhood of 0.4 mm and larger. An added twist is that there are quantity thresholds, in addition to location, spacing, and grouping of these types of defects, which determine pass/fail.

Scratches are not allowed.

Runs are not well defined; the call depends on whether they can be seen easily or not.

And there are critical areas where the criteria tighten and non-critical areas where they loosen.

And of course, we don't want to spend all day looking at them: no more than 5 seconds to pick out the really bad pieces.
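To show how fuzzy that gets once you try to write it down, here is a minimal sketch of the criteria as an explicit rule. Only the 0.4 mm figure and the no-scratch rule come from what I described above; the quantity limits and zone handling are invented placeholders, and runs, spacing, and grouping are not modeled at all:

```python
# Hypothetical encoding of the cosmetic criteria described above.
# Only the 0.4 mm size figure and "scratches are not allowed" come from the post;
# quantity limits and zone names are invented placeholders.

SIZE_THRESHOLD_MM = 0.4
MAX_DEFECTS = {"critical": 1, "non_critical": 3}  # invented quantity limits

def part_passes(defects):
    """defects: list of dicts like {'type': 'bubble', 'size_mm': 0.5, 'zone': 'critical'}."""
    if any(d["type"] == "scratch" for d in defects):
        return False  # scratches are not allowed at all
    counts = {"critical": 0, "non_critical": 0}
    for d in defects:
        if d["type"] in ("bubble", "pit", "speck") and d["size_mm"] >= SIZE_THRESHOLD_MM:
            counts[d["zone"]] += 1
    return all(counts[zone] <= MAX_DEFECTS[zone] for zone in counts)

print(part_passes([{"type": "bubble", "size_mm": 0.5, "zone": "critical"}]))        # True
print(part_passes([{"type": "scratch", "size_mm": 0.1, "zone": "non_critical"}]))   # False
```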
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
Re: Visual Inspection Gage R&R

Oh, yeah...you will have a tough time getting to 0.90. Your only hope is to get the criteria as clear as possible - which is not easy. Every time you have a boundary sample, someone will come up with one just a little different...and confusion will ensue. I understand your dilemma even better now. Some colors or surface finishes will also be worse than others. The best you can do is try to pinpoint the areas of confusion and try to get a better consensus among the inspectors.
 

Kalpol92

Re: Visual Inspection Gage R&R

Sorry to jump on this thread, but I too have a question... I have always worked to the following.

Parameter

% Within-Appraiser Agreement
≥ 90% - acceptable
< 90% - not acceptable


% Appraiser Agreement with Standard
≥ 90% - acceptable
< 90% - not acceptable

Overall Kappa for Between-Appraiser Agreement
≥ 90% - acceptable
≥ 70% and < 90% - conditionally acceptable with rationale
< 70% - not acceptable

Overall Kappa for All Appraiser Agreement with Standard
≥ 90% - acceptable
≥ 70% and < 90% - conditionally acceptable with rationale
< 70% - not acceptable
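For context, the kappa statistic itself can also be computed per appraiser against the standard; here is a minimal sketch of plain Cohen's kappa on hypothetical good/bad calls (not any particular package's implementation), to which thresholds like the above would then be applied:

```python
# Minimal sketch: Cohen's kappa for one appraiser against the reference standard.
# Data are hypothetical; acceptance thresholds would be applied to the result.

def cohens_kappa(standard, appraiser):
    """standard, appraiser: equal-length lists of 'good'/'bad' calls."""
    n = len(standard)
    observed = sum(1 for s, a in zip(standard, appraiser) if s == a) / n
    expected = 0.0
    for category in ("good", "bad"):
        p_std = sum(1 for s in standard if s == category) / n
        p_app = sum(1 for a in appraiser if a == category) / n
        expected += p_std * p_app
    return (observed - expected) / (1 - expected)

standard  = ["good", "good", "bad", "bad", "good", "bad", "good", "bad"]
appraiser = ["good", "bad",  "bad", "bad", "good", "good", "good", "bad"]
print(round(cohens_kappa(standard, appraiser), 2))  # -> 0.5
```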

Is there a standard rule of thumb or does it differ from company to company?

kalpol
 

Darius

Re: Visual Inspection Gage R&R

Looks OK to me.

IMHO, it is a global standard.

But I also have a question...

The calculation of Kappa is done overall and by category, as far as I know. How can the need for improvement of a specific tester be calculated?

Of course, it is nice to know the overall status of agreement, but if one of the testers is doing badly, how do you detect it (non-continuous R&R)?
 