ncwalker
The age-old argument.... Do I have to do a capability study on each machine and fixture? That grows pretty quickly in measurements.... If I have 6 machines with 2 fixtures each and the customer wants a 100 pc study, that's 1,200 samples to measure. And if each part has 6 KPCs, suddenly we are up to 7,200 data points. Yeek.
Can one just do a study on the AGGREGATE? I mean, if I take pieces off the END of the process fed by all the combinations and THIS output is capable, can I assume by superposition that the underlying individual processes are capable?
I did a BUNCH of modeling in Excel and here is what I found....
If the underlying processes are not matching each other for centeredness, you are OK with an aggregate study. The effect is this: because the individual processes are not centered (say mach A running near the low limit and mach B running near the high limit), a random draw (well, somewhat random, you should ensure representation from all the subprocesses) will have WORSE capability. The variance will appear greater because you are drawing from two groups whose centers are separated. In other words, good on the aggregate study means at LEAST that good for the individuals.
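You can see the off-center effect in a quick simulation like the Excel modeling described above. This is just a sketch with made-up spec limits and machine parameters (spec 9–11, both machines at sigma 0.10, means offset to 9.7 and 10.3), using the standard Cpk formula:

```python
import numpy as np

def cpk(data, lsl, usl):
    """Cpk: distance from the mean to the nearest spec limit, over 3 sigma."""
    mu, sigma = data.mean(), data.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

rng = np.random.default_rng(1)
LSL, USL = 9.0, 11.0  # hypothetical spec limits

# Two machines with the SAME spread but offset means (hypothetical numbers)
mach_a = rng.normal(9.7, 0.10, 5000)   # running near the low limit
mach_b = rng.normal(10.3, 0.10, 5000)  # running near the high limit
aggregate = np.concatenate([mach_a, mach_b])  # pieces off the END of the line

print(f"Cpk machine A:  {cpk(mach_a, LSL, USL):.2f}")
print(f"Cpk machine B:  {cpk(mach_b, LSL, USL):.2f}")
print(f"Cpk aggregate:  {cpk(aggregate, LSL, USL):.2f}")
```

The aggregate Cpk comes out well below either individual machine's, because the pooled standard deviation absorbs the separation between the two centers. So if the aggregate passes, each machine was at least that capable.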
BUT ... and this is the big but ... it does NOT work if the processes' variances are unequal. If mach A is running a tight process and mach B has a loose cutter and its parts are all over the place, the aggregate of the two will mask the problem process. The aggregate capability will be worse than the good process, but BETTER than the bad one. Think of it like a dart game. You have a team made up of Accurate Andy and Missing Mike. The combined score will be somewhere between the two individuals. It can very much be the case that the combined score is enough to "win" while Mike is a dismal failure.
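And here is the masking case as the same kind of sketch, again with hypothetical numbers: both machines centered in a 9–11 spec, but one at sigma 0.08 (Andy) and one at sigma 0.30 (Mike), judged against a typical Cpk ≥ 1.33 requirement:

```python
import numpy as np

def cpk(data, lsl, usl):
    """Cpk: distance from the mean to the nearest spec limit, over 3 sigma."""
    mu, sigma = data.mean(), data.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

rng = np.random.default_rng(2)
LSL, USL = 9.0, 11.0  # hypothetical spec limits

# Both machines centered, but very different spreads (hypothetical numbers)
tight = rng.normal(10.0, 0.08, 5000)   # Accurate Andy
loose = rng.normal(10.0, 0.30, 5000)   # Missing Mike, loose cutter
aggregate = np.concatenate([tight, loose])

print(f"Cpk tight:      {cpk(tight, LSL, USL):.2f}")
print(f"Cpk loose:      {cpk(loose, LSL, USL):.2f}")
print(f"Cpk aggregate:  {cpk(aggregate, LSL, USL):.2f}")
```

With these numbers the loose machine fails a 1.33 requirement on its own, yet the aggregate clears it, because the tight machine's parts pull the pooled sigma down. The aggregate "wins" while Mike is a dismal failure, which is exactly why an aggregate study can't clear individual machines when their variances differ.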