
This thread is carried over and continued in the Current Elsmar Cove Forums 
Forum: Statistical Techniques and 6 Sigma

Author  Topic: Statistically Valid Sample Sizes For SPC 
J. Otto unregistered 
posted 27 September 2000 05:59 PM
Hello All,

I am looking for some input on sample size and sampling frequency for production lots on which we calculate Ppk values. Currently we pull 4 parts every 2 hours for each process, regardless of cycle time or production lot size. My concern is whether our calculations are statistically valid given this fixed sample size and frequency.

As an example, suppose I run two separate production lots for 24 hours each. The first process has a cycle time of 10 seconds and the second a cycle time of 20 seconds, so after 24 hours the first process will have produced twice as much product as the second. Should we be increasing our sample size and/or sampling frequency for the first process?

If there is any information available, or if anyone has suggestions on what makes for statistically valid sample sizes and sampling frequency, please let me know. Any help will be greatly appreciated.

Regards,
Jon
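For reference, the Ppk calculation Jon describes can be sketched in a few lines. This is a minimal illustration with made-up measurements and spec limits (the 9.5/10.5 limits and the data are hypothetical, not from the thread); Ppk is computed from the overall (long-term) standard deviation of all pooled parts.

```python
# Minimal sketch of a Ppk calculation from pooled subgroup samples.
# Measurements and spec limits below are hypothetical.
import statistics

def ppk(measurements, lsl, usl):
    """Process performance index: min distance from mean to a spec
    limit, in units of 3 overall (sample) standard deviations."""
    mean = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)  # long-term estimate
    return min(usl - mean, mean - lsl) / (3 * sigma)

# e.g. four subgroups of 4 parts pulled every 2 hours
data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9,
        10.2, 10.0, 9.9, 10.1, 10.0, 10.1, 9.8, 10.0]
print(round(ppk(data, lsl=9.5, usl=10.5), 2))
```

Note that the sample size and pull frequency affect only how well `sigma` and `mean` estimate the true process behavior; the formula itself is the same regardless of cycle time.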
Rick Goodson Forum Wizard Posts: 102 
posted 28 September 2000 12:39 PM
Your question is a good one, but unfortunately it is complex and not easily answered in a paragraph or two. Nevertheless, some thoughts and suggestions.

From a protection point of view, the absolute size of a random sample is more important than its size relative to the lot. This is readily apparent if you look at Operating Characteristic (OC) curves for a constant sample size across differing lot sizes.

With regard to sample size in Xbar and R charts, the essential idea is to select subgroups so as to minimize the opportunity for variation within a subgroup, so we want them small. A size of four works well because with samples of four the distribution of Xbar is nearly normal even when the samples are drawn from a non-normal universe.

The frequency of sampling is an economic decision, based on what happens to the material produced between samples if we find an out-of-control condition. In theory, sampling should be frequent when a process is first being analyzed, then reduced once the process is under control and running well.

Take a look at the following references for detail on sample size selection and frequency:

Statistical Quality Control by Grant and Leavenworth, ISBN 0078443547
AT&T Statistical Quality Control Handbook (mine is so old it does not have an ISBN)
Quality Control by Dale Besterfield
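Rick's point about subgroups of four can be demonstrated numerically. The sketch below (illustrative only; the exponential distribution and sample counts are arbitrary choices) draws strongly skewed individual values, forms subgroup means of size 4, and compares skewness: the central limit theorem predicts the means' skewness falls from 2 to 2/sqrt(4) = 1, i.e. the Xbar distribution is much closer to normal than the individuals.

```python
# Illustration: subgroup means of size 4 from a skewed (exponential)
# population are much closer to normal than the individual values.
import random
import statistics

random.seed(1)

def skewness(xs):
    """Sample skewness: third central moment over cube of std dev."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

individuals = [random.expovariate(1.0) for _ in range(4000)]
subgroup_means = [statistics.mean(individuals[i:i + 4])
                  for i in range(0, len(individuals), 4)]

# Exponential individuals: theoretical skewness 2.
# Means of 4: theory predicts 2 / sqrt(4) = 1, i.e. roughly halved.
print(round(skewness(individuals), 2))
print(round(skewness(subgroup_means), 2))
```

This is why Xbar chart control limits based on the normal distribution work reasonably well even for non-normal processes, provided the subgroup size is around four or more.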
All times are Eastern Standard Time (USA)