Probability of passing bad product - Air decay inspection


Karen Beth

I'm hoping one of the gurus out there can help me build an argument. We have a problem of passing leaking parts to the customer that should have been caught in-house. I catch a lot of flack for the performance of the air decay machines, but no one seems to want to listen to my complaints about the way the parts are handled. After a part rejects once, the inspectors run it back through the air decay again, then again if it still rejects. Only after rejecting 3 times is the part actually scrapped. If it passes at any one of these opportunities, it's passed on as a good part. Is there a formula that I can plug our numbers (GR&R results? # rejects?) into to demonstrate that by saturating the air decay with marginal product, we are allowing bad product to pass due to the repeatability of the equipment? Appreciate your help!
 

Tim Folkerts

Trusted Information Resource
Karen,

There seem to be several issues here that are tied together. The ultimate concern is that you are getting complaints about quality, which will affect customer satisfaction and eventually profits. So something should change! One question is whether even your best parts that pass the test are good enough for the customer's needs.

Assuming that you make at least some good parts that work for the customer, the direct problem for you is how to test parts to sort the good from the bad. It appears that when you test multiple times, you get different results. This could be due to I) the part actually being borderline defective, so the test has a difficult time judging it, or II) the test itself being defective (a large value of alpha and/or beta). Do you have a feel for whether either or both of these is occurring?

For case I, multiple testing will simply push the borderline parts into the "accept" pile.
How to test depends on just what kind of expectations you have. For example, suppose you have reason to believe that only 1 out of 1000 parts is bad, but that the test will reject a good part 1 out of 5 times and always rejects bad parts (i.e. a Type I error with a large alpha). If you test 1000 parts, you would expect it to reject the one bad part, but also to reject (on average) 200 good parts. If you retest these rejected parts, then you will again reject the one bad part, but now reject just 40 good parts. One more pass and you are down to 9 rejects on average (the one bad one and 8 good ones).
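That arithmetic is easy to check with a few lines of Python. This is a sketch under the stated assumptions (each retest is an independent trial with the same 1-in-5 false-reject rate, and bad parts always fail):

```python
# Case I sketch: alpha = 0.2 (a good part fails a test 1 time in 5),
# bad parts always fail (beta = 0). Assumes retests are independent.
alpha = 0.2
good, bad = 999, 1   # of 1000 parts, 1 is truly bad

# Expected number of good parts still being rejected on each pass
good_rejected = [good * alpha ** k for k in (1, 2, 3)]
for k, g in enumerate(good_rejected, start=1):
    print(f"pass {k}: ~{g + bad:.0f} rejects ({g:.0f} good, {bad} bad)")
# pass 1: ~201 rejects (200 good, 1 bad)
# pass 2: ~41 rejects (40 good, 1 bad)
# pass 3: ~9 rejects (8 good, 1 bad)
```

Each retest cuts the wrongly rejected good parts by a factor of alpha, which is why the reject pile shrinks so quickly here.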

But we can reverse the situation as well. Suppose that 100 out of 1000 are bad (i.e. p = 0.1), but the test accepts 1 out of 5 bad parts and accepts all the good parts (i.e. a Type II error with a large beta). After the first pass, you accept the 900 good parts, but also 20 of the 100 bad parts. If you retest the 80 rejects, you will accept another 16. The third pass will accept another 13 bad parts.
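The same kind of check works for the reversed case. Again a sketch, assuming independent retests with beta = 0.2 and no false rejects:

```python
# Case II sketch: beta = 0.2 (1 bad part in 5 slips through each test),
# good parts always pass (alpha = 0). Assumes retests are independent.
beta = 0.2
bad = 100            # of 1000 parts; the other 900 are good

accepted_bad = []
remaining = bad
for _ in range(3):   # the "retest rejects up to 3 times" policy
    slipped = remaining * beta
    accepted_bad.append(slipped)
    remaining -= slipped

print([round(x, 1) for x in accepted_bad])  # [20.0, 16.0, 12.8]
print(round(sum(accepted_bad), 1))          # 48.8 shipped instead of 20
```

So under these numbers the 3-strike policy more than doubles the bad parts reaching the customer compared to a single test.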

In the first case, each extra pass adds more good parts to the "accept" pile. In the second case, each extra pass adds extra bad parts to the "accept" pile. Changing the numbers will change the outcomes along this continuum. The "best" testing plan will depend on several factors - alpha, beta, p, cost of testing, the cost of scrapping a part, the value of selling a good part, and cost of selling a bad part come to mind.

(I can think of one other variation - the process of going through the testing could improve the product and actually turn it from bad to good. Perhaps some resin has to set and just the extra time from the first test to the second gives it time to cure. Perhaps the technicians wiggle the parts or brush off bits of dust to improve the performance.)

Sorry - I seem to have written quite a bit but I don't have a definite answer. Bottom line - most of the time I would tend to vote against repeated testing. Two cases where it could be appropriate would be
1) the parts are valuable and alpha is large and you retest the rejects (you are scrapping a lot of expensive good parts).
2) the cost of shipping bad parts is high and the beta value is high and you retest the accepted parts (you are selling a lot of bad parts and you want to make sure the parts you accept really are good).
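Under the simplest model (independent tests, a part is accepted the first time it passes, rejects are retested up to n times), both cases reduce to one closed form each. The helper names below are my own for illustration, not from any package:

```python
def accept_prob_good(alpha: float, n: int) -> float:
    """Chance a good part is accepted somewhere in n tries.

    A good part is scrapped only if it fails all n tests,
    which happens with probability alpha ** n."""
    return 1 - alpha ** n

def accept_prob_bad(beta: float, n: int) -> float:
    """Chance a bad part escapes somewhere in n tries.

    Each test wrongly passes it with probability beta, so it
    escapes unless it fails all n times."""
    return 1 - (1 - beta) ** n

# Karen's 3-strike policy with the beta = 0.2 from the example above:
print(round(accept_prob_bad(0.2, 1), 3))  # 0.2   single test ships 20% of bad parts
print(round(accept_prob_bad(0.2, 3), 3))  # 0.488 three chances ships nearly half
```

Plugging a shop's own alpha, beta, and costs into these two expressions would be one way to decide whether any retesting is worth it, and if so how many passes to allow.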


Tim F

P.S. Has anyone seen an analysis like this? It might be valuable to have a formula to decide when multiple testing is worthwhile and how many times to repeat the test. I think I could come up with something of the sort if I got motivated.
 

Bill Ryan - 2007

We 100% test (air decay) a Fuel Rail (casting). Our specification is 1 cc at 72 psi. We also have the process of repeating a failed part - but only once. I haven't been that close to this part in a couple of years and I'm not sure where the retest originated. I do know that our rubber seals get worn, accumulate dust, and otherwise "get damaged" from repeated use. I'm not sure how often we change seals. There are two testing units which put a stamp (hallmark) on each passed part. The Gage R&Rs (variable) are 4.8 and 7.9 (% tol.). We use the units as "Pass/Fail" while in production so I have no capability data other than our internal failure rate is around 5.5% (much too high!). The customer failure rate is around 1/100,000 (they 100% leak test the final assembly).

How does one get to the customer - We're not always sure :bonk: . We have sliced and diced many parts for analysis and other than the few that have obvious leak paths (porosity, nonfills) we are pretty much at a loss. Could the stamping of the hallmark be just enough impact to open a path? Possibly. This part gets grit blasted - could the blasting be "hiding" a path which later on opens up? Possibly. Could the customer's assembly line be impacting the part enough to open a path? Possibly. I will say that this customer has been very supportive in helping us (and them) when a leaker does rear its ugly head at their facility. When we retest a returned part from the customer, they aren't "marginal". The leak normally checks at over 2 cc so we really scratch our heads as to how it got out in the first place.

Now that I've got all this down - I've lost where I was going with it (early in the morning and that age thing :rolleyes: ).
Karen - I guess this isn't much help to you other than to let you know we have issues also.
Tim - I don't know if my lengthy rambling gives you enough to come up with a formula or not.

Anyone else out there have leak testing issues?? We're quoting more and more parts (some assemblies) with leak testing requirements. (Hope this doesn't derail your original post, Karen)
 

Karen Beth

reply to probability of passing bad parts

Thanks Tim. We are passing 5 parts per thousand that leak to the customer. Not all good parts are in a borderline state. I believe the problem is as you stated in case I: multiple testing of borderline parts. The parts were rejected once, then we 'push parts into the accept pile,' as you said, through multiple testing. Your scenario of a Type II error with large beta is correct. I know we have issues with the air decay itself. Due to a customer change, we have cut the tolerance to half of what the equipment was designed for, with zero investment in the equipment. I am convinced, however, that it is the practice of multiple testing that is our main culprit. One thing I did not see mentioned is the R&R of the tester. Does the 20% acceptance of bad parts (1 out of 5 in your example) mean the same thing as the repeatability? Thanks again!

karen
 

Karen Beth

thanks bill.

Thanks Bill. I can relate to everything you are saying, and yes, we have sliced and diced countless parts over the years. We have some grey areas as to what should have been caught in-house versus opened up by customer machining. Also, in our 'cost reduction' efforts, we no longer change gaskets at a set frequency, but only when they are visibly worn (shredded is a better word). All part of the case I am trying to make. Very frustrating.

Thanks!
 