Process Capability Study - Aggregate vs. Individual Processes


ncwalker

The age-old argument .... Do I have to do a capability study on each machine and fixture? That grows pretty quickly in measurements.... If I have 6 machines with 2 fixtures each and the customer wants a 100 pc study, that's 1,200 samples to measure. And if each part has 6 KPCs, suddenly we are up to 7,200 data points. Yeek.

Can one just do a study on the AGGREGATE? I mean, if I take pieces off the END of the process fed by all the combinations and THIS output is capable, can I assume by superposition that the underlying individual processes are capable?

I did a BUNCH of modeling in Excel and here is what I found....

If the underlying processes are not matching each other for centeredness, you are OK with an aggregate study. The effect is this: because the individual processes are not centered on each other (say machine A running near the low limit and machine B running near the high limit), a random draw (well, somewhat random, you should ensure representation from all the subprocesses) will have WORSE capability. The variance will appear greater because you are drawing from two groups that are separated. In other words, good on the aggregate study means at LEAST that good for the individuals.
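
For anyone who wants to poke at this themselves, here is a minimal sketch of that first case in Python (made-up tolerance limits and sigmas, not the original Excel model): two machines with the same tight spread parked at opposite ends of the tolerance. Each looks fine on its own; the pooled sample looks worse.

```python
import numpy as np

rng = np.random.default_rng(1)
LSL, USL = 0.0, 10.0                      # made-up tolerance band

def cpk(x, lsl=LSL, usl=USL):
    """Cpk from the sample mean and standard deviation."""
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Machine A runs near the low limit, machine B near the high limit,
# both with the same tight spread.
mach_a = rng.normal(3.0, 0.5, 100)
mach_b = rng.normal(7.0, 0.5, 100)

print(f"Cpk machine A:  {cpk(mach_a):.2f}")   # roughly 2
print(f"Cpk machine B:  {cpk(mach_b):.2f}")   # roughly 2
print(f"Cpk aggregate:  {cpk(np.concatenate([mach_a, mach_b])):.2f}")
# The aggregate comes out well below either machine: the separated means
# inflate the pooled spread, so "good in aggregate" is the conservative call.
```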

BUT ... and this is the big but ... it does NOT work if the process variances are unequal. If machine A is running a tight process and machine B has a loose cutter and the parts are all over the place, the aggregate of the two will mask the problem process. The aggregate capability will be worse than the good process, but BETTER than the bad one. Think of it like a dart game. You have a team made up of Accurate Andy and Missing Mike. The combined score will be somewhere between the two individuals. It can very much be the case that the combined score is enough to "win" even though Mike is a dismal failure.
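
And a companion sketch for this second case (same made-up 0-10 tolerance): both machines centered, but one much looser. The loose one is nowhere near capable on its own, yet the pooled Cpk still looks comfortable.

```python
import numpy as np

rng = np.random.default_rng(2)
LSL, USL = 0.0, 10.0                      # same made-up tolerance band

def cpk(x, lsl=LSL, usl=USL):
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# "Accurate Andy" is tight and centered; "Missing Mike" is centered but loose.
andy = rng.normal(5.0, 0.5, 160)          # capable on his own
mike = rng.normal(5.0, 2.0, 40)           # nowhere near capable on his own

print(f"Cpk Andy:       {cpk(andy):.2f}")
print(f"Cpk Mike:       {cpk(mike):.2f}")
print(f"Cpk aggregate:  {cpk(np.concatenate([andy, mike])):.2f}")
# The aggregate lands between the two and still clears a typical 1.33 target,
# even though Mike fails badly - the tight process hides the loose one.
```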
 

v9991

Trusted Information Resource
Re: Process Capability Study - Aggregate or Individual Processes?

A quick bump .... any responses and thoughts?

Though the query is elaborate, the OP could try following through with an example ....
 

howste

Thaumaturge
Trusted Information Resource
Re: Process Capability Study - Aggregate or Individual Processes?

It looks to me like the question was asked and answered in the same post. I thought the post was to share information learned from some studies. Ncwalker, is there something else you're looking for?
 

ncwalker

Re: Process Capability Study - Aggregate or Individual Processes?

No. I have no questions; I am just relaying what I found.

If you do a capability study on an aggregate (the output of more than one parallel process) and find the aggregate output is "capable", it:

1) Somewhat implies that the means of the processes making up the aggregate are OK - meaning lined up. Probably.

2) Does not imply at all that the variances (and thus capability) of the individuals are OK. In fact, a couple of tight, low-variance processes can hide an incapable one. Proceed with caution.
 

Proud Liberal

Quite Involved in Discussions
Re: Process Capability Study - Aggregate or Individual Processes?

Since your "evaluation" of standard deviation from an aggregate group is compromised, and the effects of kurtosis add more uncertainty, I would be more than just cautious about that approach. Less is always less.

I've had this approach fail really badly when data from an 8-cavity tool feeding a 32-station secondary operation was "aggregated". The capability estimates were conservative but useless for working on any improvements that were needed, so all of that aggregate data was wasted. Years of my predecessors' frustrations were quickly eliminated by a couple of weeks of data collection that separated the processes. Without really looking deep into the data, it was impossible to know what lurked under the hood.

But to your point, YES, BE CAUTIOUS.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
As little value as capability indices have, if your aggregate is really capable, then I would use it. You may have to sell that approach to the customer; if not, you will need to analyze the individuals. The aggregate should be multimodal - as most processes are, as illustrated by the total variance equation - but in this case for obvious reasons. That is clearly no indication of a lack of stability here.

Once you have satisfied the capability seekers (the curious customer), it's best to work on controlling the process. That is the real issue.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
Remember when you do your contract review that you may be signing up for 7,200-point studies, and prepare yourself. For example, with most automotive OEMs, an 80-cavity injection mold will often be requested to show capability on the KPCs for all 80 cavities. Trying to get out of it afterwards may be a difficult chore.
 

ncwalker

Bob, that was the point of my exercise.

What I found was if I make the statement to the customer:

"The aggregate is capable, therefore all the individual process are capable, therefore I only have to do a capability study on the aggregate and you should approve my PPAP."

This is a false statement.

The aggregate will do a good job of telling me if all my component processes are centered, but it will mask it if one of my component processes has more variance than the others. Example: I have 3 processes drilling a hole. Two of them are set up right; on the third, I flub the heat-shrinking of my drill in its collet and the drill is wobbling. Individually, 1 and 2 are capable. But 3 is centered and too noisy (Cpk = Cp, but Cp is too low). The aggregate study will (may) mask my problem with the THIRD process.
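
Roughly what that looks like in numbers - a sketch with an invented hole tolerance rather than real data. The wobbling drill is centered, so its Cp and Cpk agree, but both are low, and the aggregate hides it:

```python
import numpy as np

rng = np.random.default_rng(3)
LSL, USL = 9.95, 10.05                    # invented hole-diameter tolerance, mm

def cp_cpk(x, lsl=LSL, usl=USL):
    mu, sigma = x.mean(), x.std(ddof=1)
    return (usl - lsl) / (6 * sigma), min(usl - mu, mu - lsl) / (3 * sigma)

drill_1 = rng.normal(10.000, 0.005, 40)   # set up right
drill_2 = rng.normal(10.000, 0.005, 40)   # set up right
drill_3 = rng.normal(10.000, 0.018, 40)   # centered but wobbling: Cp ~= Cpk, both low

for name, data in [("drill 1", drill_1), ("drill 2", drill_2), ("drill 3", drill_3),
                   ("aggregate", np.concatenate([drill_1, drill_2, drill_3]))]:
    cp, cpk = cp_cpk(data)
    print(f"{name:9s}  Cp = {cp:.2f}   Cpk = {cpk:.2f}")
# The aggregate's indices land well above drill 3's, hiding the wobble.
```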

So now - how do we guarantee to the customer that we are capable on all possible combinations WITHOUT having to do a capability study on ALL the combinations, which would be a LOT of work? I don't know many shops large enough to have a CMM dedicated JUST to launches. They need them to monitor current production ....

One could say we approach it this way:
1) We do an aggregate study, ensuring that ALL subprocesses are represented by at least 5 pieces and that the total in the aggregate has a minimum of 40 pieces. So if I had 4 processes, 5 each, well that's 20 total ... not enough. So I have to take 10 from each of the 4 processes to get enough in the total. But if I had 100 processes in parallel, a minimum of 5 each means 500 total.
2) We do an individual study on two of the different processes that make up the total. Then, review all of these capability indices. (Actually, it would be better to do a t-test on the means and an F-test on the variances - see the sketch after this list.) But the point is, we look at the individuals AND the aggregate, and see if it makes sense.
3) Possible enhancement - we keep track of the 5 parts from each process. And in doing so, when we select the individual processes to do a study on, we choose the ones with the tightest and loosest ranges among the 5-part samples from the aggregate. Again, you could very much miss a stray, but this would hedge your bet ....
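
For step 2, something along these lines would do it - a sketch using scipy's ttest_ind for the means and a plain two-sample F ratio for the variances, with made-up measurements standing in for the real ones:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Stand-ins for the pieces pulled from two of the parallel processes.
proc_1 = rng.normal(10.00, 0.01, 10)
proc_2 = rng.normal(10.01, 0.03, 10)

# t-test on the means (Welch's version, no equal-variance assumption).
t_stat, t_p = stats.ttest_ind(proc_1, proc_2, equal_var=False)

# F-test on the variances: larger sample variance over the smaller one,
# compared against the F distribution (equal sample sizes keep the df simple).
v1, v2 = proc_1.var(ddof=1), proc_2.var(ddof=1)
f_stat = max(v1, v2) / min(v1, v2)
f_p = 2 * stats.f.sf(f_stat, len(proc_1) - 1, len(proc_2) - 1)   # two-sided

print(f"means:     t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"variances: F = {f_stat:.2f}, p = {f_p:.3f}")
```

A small p-value on the F line is exactly the "loose cutter" situation the aggregate study would otherwise hide.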

Yes, you could have a "stray" you did not capture in the analysis. But that goes directly to Bob's point - what do they have in place to CONTROL the process.....

Aside: IF we do this, the weakness is that there are no established "golden finish lines ...." I can't give a hard number like: if the aggregate is greater than 1.1 and the individuals are greater than 1.33, you're good. MAYBE that would be good ....

But back to the main point. Taking an aggregate and a subset of all the processes would demonstrate that the designed process is CAPABLE. And that should be good enough. If the setup guy on the 5th process mounts a drill askew and one of the copies is NOT capable, I don't think that's a process design issue as much as a setup verification issue. A car is totally "capable" of going down the highway at 70 mph staying in the painted lanes. Put a drunk behind the wheel ... not so much. Is that a design flaw of the car?

If I put my customer hat on and one of you tried this with me ... aggregate study and discrete study on a couple of contributors ... I think I would be OK with it IF you ALSO convinced me that during operation, each process was monitored like a stand-alone process. Each duplicate machine got its own process control and setup verification steps that MADE SENSE, and you did not control your entire process by just watching the aggregate. Because to use normzone's under-the-hood analogy ... it's just that, you may miss something under the hood.

We actually got burned by this in the real world. A supplier told me he was checking one part every hour on 5 different machining lines, at random, to ensure each line was monitored. Well, in real life, "random" became the line or two closest to the quality lab, and the furthest line wasn't being checked at all. The lesson? Random isn't ALWAYS good. Sometimes planned and controlled is superior. The fix was a check-off sheet to ensure the number of samples taken represented each line equally. But taken at random.
 

howste

Thaumaturge
Trusted Information Resource
ncwalker said:
"Well, in real life, 'random' became the line or two closest to the quality lab, and the furthest line wasn't being checked at all. The lesson? Random isn't ALWAYS good. Sometimes planned and controlled is superior. The fix was a check-off sheet to ensure the number of samples taken represented each line equally. But taken at random."

Maybe you could say that what people think is random isn't really random. What they did wasn't random. If it had really been random, chances are that they would have detected a problem with the furthest line.
 

ncwalker

That's the issue. The control that failed said something along the lines of "parts taken at random from all five lines."

Sounds good. It is what you would want to do. But there was no assurance that the parts REALLY WERE taken at random. Run this control over a month and make a histogram of the line numbers the samples were taken from. You would expect (were it truly random) roughly equal counts for lines 1 through 5.

In practice, it was 80% from line 1, 20% from line 2 and none from 3, 4 OR 5...
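
To put a number on how non-random that is, a quick goodness-of-fit check (the tally below is invented to match that rough 80/20/0/0/0 split):

```python
from scipy import stats

# Invented tally of which line each sample came from (lines 1 through 5).
observed = [80, 20, 0, 0, 0]

# Under a truly random scheme each line should show up about equally often;
# chisquare() defaults to that uniform expectation.
chi2, p = stats.chisquare(observed)

print(f"chi-square = {chi2:.1f}, p = {p:.3g}")
# A vanishingly small p-value says these pulls were nowhere near random across the lines.
```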

Conversation was probably like this:

Quality Tech (in the office): "Boss, which part do I get?"

Boss: "Eh, go out there and grab one 'at random'."

Tech: "OK." Proceeds to get a part off the closest line, because walking is work ....

We tend to think random sampling will just occur, you know, "at random."

The lesson is - saying the words "at random" doesn't imply that any gathering scheme is random at all. You need to sometimes convince yourself that it truly IS random. And, if the monitoring and verification of "random" is a lot of work, a uniform, structured sampling plan may in fact be better.

The point of "random" is exactly that - to ensure uniform representation. At this point in my life, I'm not even sure "true random" actually exists. Even if you generate random numbers in a computer - if you don't change the seed value, you get the same random numbers over and over again. (These days, random functions typically seed with the system clock, but in the old days, you had to put a seed value in.)

I babble. The lesson is: when someone says "we do it at random," it behooves you to at least review what they mean by "random" and whether it truly IS random.

When we went to the check sheet, it was no longer random. But it WAS better periodic monitoring of ALL the processes.
 