My company performs both attribute and variable data analysis for design verification. For attribute analysis we use the binomial distribution model and base our sample size on the number of allowed failures at a given confidence/reliability level. For example, at 95/90% C/R: n = 29 for 0 failures, n = 46 for 1 failure, n = 61 for 2 failures, etc.

Easy so far; here's where we get into "discussions": any one of those criteria satisfies our requirement, so it seems we could produce, say, 61 samples and pass if we have 0 failures after testing 29, or 1 failure after testing 46, and stop testing as soon as one criterion is met. Is there anything mathematically wrong with "testing until we pass"?

The pro argument is that the experimenter's involvement in the testing can't affect whether the design is good or not, since the test units have already been produced, and once you meet one of the acceptance criteria, you pass. Also, the protocols clearly state the alternate acceptance criteria before testing begins.

The counterpoints revolve around lot acceptance testing (AQLs) and double or multiple sampling plans, but those don't seem to apply, since this is design verification testing and beta error is not considered. There are also many comments about not being sure the process is working well, but that, again, is process, not design, and not part of design verification.
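To put some numbers on the question, here is a sketch (my own illustration, assuming the test-until-pass order described above and a design whose reliability is exactly 90%) that first derives the 95/90 sample sizes from the binomial model, then computes the exact probability that such a design passes when all three acceptance criteria are available:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k failures in n units, each failing with probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """P(at most k failures in n units)."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

def sample_size(c, confidence=0.95, reliability=0.90):
    """Smallest n such that a design with the stated reliability has at most
    (1 - confidence) chance of showing <= c failures."""
    p_fail = 1 - reliability
    n = c + 1
    while binom_cdf(c, n, p_fail) > 1 - confidence:
        n += 1
    return n

print([sample_size(c) for c in range(3)])  # [29, 46, 61]

def pass_prob(p_fail=0.10):
    """Probability the test-until-pass scheme accepts a design with true
    failure rate p_fail: pass on 0 failures in the first 29 units, OR
    <= 1 failure in the first 46, OR <= 2 failures in all 61.
    Split the 61 units into independent segments:
      X = failures in units 1-29, Y = units 30-46, Z = units 47-61."""
    pX = lambda k: binom_pmf(k, 29, p_fail)
    pY = lambda k: binom_pmf(k, 17, p_fail)
    pZ = lambda k: binom_pmf(k, 15, p_fail)
    return (pX(0)                              # X = 0: pass at 29
            + pX(1) * (pY(0)                   # X = 1, Y = 0: pass at 46
                       + pY(1) * pZ(0))        # X = 1, Y = 1, Z = 0: pass at 61
            + pX(2) * pY(0) * pZ(0))           # X = 2, Y = Z = 0: pass at 61

print(round(pass_prob(), 3))
```

Each individual plan holds the chance of passing a 90%-reliable design to about 5%, but because the three acceptance events only partially overlap, their union comes out near 9%: allowing any one of the criteria roughly doubles the consumer's risk relative to committing to a single plan up front.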
So, anyone know if the testing scheme we use is incorrect?