Binomial Process Capability Sample Size - Variables Data


ErikS

Unfortunately we don't have a good way of arriving at variables data (measuring) for a part dimension and thus are forced to create a go/no-go gage. We still want to evaluate capability. How do I determine the sample size to achieve 95/90 confidence of a capable process? (Or 95/95 or 99/95, etc.) From what I read/gather I can use binomial process capability in Minitab to analyze it, but this is new turf for me.

Thanks in advance!!
 

ErikS

Re: Binomial Process Capability Sample Size

Also, can I go a step further and translate a 1.33 Ppk to roughly 64 ppm as a defect rate and work the math from there?
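
As a rough check on that translation, here is a minimal sketch (not from the thread) assuming a normally distributed characteristic and the scipy library:

    from scipy.stats import norm

    ppk = 1.33
    z = 3 * ppk            # Ppk of 1.33 puts the nearer spec limit ~4 sigma away
    p = norm.sf(z)         # one-tail probability of a defect
    print(p * 1e6)         # ~31.7 ppm on one tail
    print(2 * p * 1e6)     # ~63.3 ppm if both tails sit at that distance

So a 1.33 Ppk is about 32 ppm against the nearer spec limit; the ~64 ppm figure corresponds to both tails sitting at that distance.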
 

reynald

Quite Involved in Discussions
If you are using Minitab 16, there is a very nice step by step guide on how to do capability analysis for binomial data.
See
Assistant---> Capability analysis--> Binomial Capability
Make sure to read the guidelines; they are very informative.

For instance, it says (verbatim):
"
Binomial capability analysis determines whether the % defective meets customer requirements.
Guidelines
Collecting the data
Collect data from a stable process.
Process capability determines the capability of the current process and can also be used to predict the future, ongoing capability of the process. When you use data from the current process to predict future performance, the current process must be stable and in control. If it is not, you cannot accurately predict future capability.

In the Diagnostic Report, Minitab displays a P chart that you can use to determine whether your process is stable. Investigate out-of-control points and eliminate any special cause variation in your process before continuing with the capability analysis.

Collect data in subgroups (samples, lots).
A subgroup is a collection of similar items that are representative of the output from the process you want to evaluate. The items in each subgroup should be collected under the same inputs and conditions, such as personnel, equipment, suppliers, or environment.

Subgroups must be large enough.
If some subgroup sizes are too small, you cannot adequately assess process stability. Minitab checks that the subgroup size is large enough based on your data and reports the subgroup size that is needed to produce a reliable control chart.

Subgroup sizes can be unequal.
Subgroups can vary in size. For example, if a call center tracks 100 incoming calls each hour and counts the number of unsatisfactory wait times, all of the subgroup sizes are 100. However, if the call center tracks all of the incoming calls during a randomly selected hour of the day, the number of calls is likely to vary and result in unequal subgroup sizes.

Collect enough subgroups.
To obtain accurate estimates, you must collect enough subgroups. The number of subgroups required depends on the average number of defective items and on the subgroup size. It is generally recommended that you collect at least 25 subgroups over a long enough period of time to capture the different sources of process variation.

Minitab displays the confidence interval of the % of defective items, which indicates the precision of the estimate. If the interval is too wide for your application, you can gather more data to increase the precision of the interval.

Count the number of defective items in each subgroup.
A defective item has one or more defects that make it unacceptable. If you can determine only whether an item is defective, use this analysis. If you can also count the number of defects on each item, you may want to use a Poisson capability analysis to evaluate the defects per unit.

Interpreting the results
% Defective and PPM (DPMO) measure the defect rate of the process.

"
Regards,
Reynald
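
For readers without Minitab, the headline numbers in that output (% defective, PPM, and the confidence interval) can be reproduced by hand. A minimal sketch in Python, with made-up subgroup counts and an exact (Clopper-Pearson) interval via scipy:

    from scipy.stats import beta

    defectives = [2, 0, 1, 3, 1]         # defectives per subgroup (hypothetical data)
    sizes = [100, 100, 100, 100, 100]    # subgroup sizes

    d, n = sum(defectives), sum(sizes)
    p_hat = d / n                        # overall proportion defective
    # Exact (Clopper-Pearson) 95% confidence interval on the proportion
    lower = beta.ppf(0.025, d, n - d + 1) if d > 0 else 0.0
    upper = beta.ppf(0.975, d + 1, n - d)
    print(f"% defective: {100*p_hat:.2f}% (95% CI {100*lower:.2f}% to {100*upper:.2f}%)")
    print(f"PPM: {p_hat * 1e6:.0f}")

The stability check (the P chart in Minitab's Diagnostic Report) still has to be done separately before trusting these numbers.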
 

w_grunfeld

Any suggestions for those of us who don't have Minitab?
I need to demonstrate to my customer that a process is capable to 3.4 ppm (6 sigma)
 

Bev D

Heretical Statistician
Leader
Super Moderator
w_grunfeld said:
"Any suggestions for those of us who don't have Minitab?
I need to demonstrate to my customer that a process is capable to 3.4 ppm (6 sigma)"

There are several basic ways, but the sample sizes are huge if your capability is very good. (By the way, 3.4 ppm is NOT six sigma: 3.4 ppm incorporates the abominable 1.5 sigma shift. A real 'six sigma' process would be about 1 ppb.)

the smallest sample sizes possible would use a formula that requires the number of defects in the sample to be 0.

One way is to simply use the Ppk value of interest: n = 1/(1-NORMSDIST(3*Ppk)), where NORMSDIST is an Excel function and 3*Ppk gives you the Z score for any given Ppk. The requirement here is that the sample be that many sequential parts with no defects found in the sample. For a 3.4 ppm value the Ppk is 1.5 (using the shift) and your sample size = 294,318; if you use a true six sigma level the Ppk is 2 and your sample size increases to 1,013,594,633. Of course these sample sizes have no attached confidence level and so are as small as possible. One could gauge the precision of the estimate by calculating the upper confidence limit on the proportion 0/n using the exact binomial: for n = 294,318 and a one-tailed confidence interval at 95% confidence, the true process average is no worse than about .00001, or roughly 10 ppm. The formula for the upper confidence limit is BETAINV(confidence, d+1, n-d), where d = number of defects, in this case 0.
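
A sketch of that first approach for anyone without Excel (assumes scipy; norm.sf plays the role of 1-NORMSDIST and beta.ppf the role of BETAINV):

    from scipy.stats import norm, beta

    ppk = 1.5
    p = norm.sf(3 * ppk)      # tail probability at Z = 4.5, ~3.4e-6
    n = round(1 / p)          # ~294,319 sequential parts, zero defects required
    print(n)

    # Precision of the 0/n estimate: exact binomial upper limit,
    # one-tailed 95% confidence (BETAINV(0.95, d+1, n-d) with d = 0)
    upper = beta.ppf(0.95, 1, n)
    print(upper * 1e6)        # ~10 ppm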


Another way is to specify confidence and use the exact binomial: n = LN(1-(confidence/100))/LN(1-p). Let's say you want 95% confidence: with p = .0000034 (3.4 ppm), n = 881,097. Again, no defects can be found in the sample.
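
The same exact-binomial calculation as a sketch, needing only the Python standard library:

    import math

    confidence = 0.95
    p = 3.4e-6          # target defect rate, 3.4 ppm
    n = math.ceil(math.log(1 - confidence) / math.log(1 - p))
    print(n)            # 881,097 parts, zero defects allowed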

Of course you could use the basic sample size calculation for a point estimate of a proportion. For roughly 95% confidence: n = 4*p*(1-p)/delta^2, where p = target defect rate and delta is the amount of precision (error) you will accept in the estimate. In the case of 3.4 ppm, say you want no more than 5 ppm of error in the estimate: n = 543,998.
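
And a sketch of that point-estimate formula (plain Python, no imports; the 4 is z squared for z of about 2, i.e. roughly 95% confidence):

    p = 3.4e-6        # target defect rate
    delta = 5e-6      # acceptable error in the estimate (5 ppm)
    n = 4 * p * (1 - p) / delta**2
    print(round(n))   # ~543,998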



There are several other ways, but really, do you want to do this?
 