# Capability on Parallel Processes such as Individual Mould Cavities

#### bobdoering

Trusted Information Resource
> The difference you allude to is the operational definition of common cause and 'special' cause. A system of so-called common causes results in 'seemingly random results', which is what you call independent. One or more 'special causes' will result in non-random patterns. Your tool wear has a special cause that results in a specific pattern.

That is true - my definition of common cause is one that is truly common - affects every part. That would, then, include tool wear. Special causes are truly special, and do not affect every part - such as tool breakage, start-up/warm-up, etc.

To define whether a cause is special by whether it has a pattern may, in and of itself, create more problems than answers in some cases. It works very, very well for naturally caused variation - such as the variation in heights of loaves of bread in an automated bakery. Even so, special causes as I defined them will show up as discontinuities in the charting that typically follow a subset of the Western Electric rules, but clearly and intentionally not all of them. Many of those rules relate to the mean, which a process following a continuous uniform distribution has little (or no) use for in its control.

The central limit theorem has nothing to do with SPC theory. The central limit theorem applies when a process is randomly sampled and the sample averages are plotted in a histogram, not a time series plot.
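This point can be sketched quickly in Python (an illustrative simulation, not data from the thread): the CLT speaks to sample *averages*, so averaging narrows and normalizes the spread even when the individual stream is plainly non-normal.

```python
import random
import statistics

random.seed(1)

# Hypothetical process stream with a (non-normal) uniform distribution
process = [random.uniform(9.9, 10.1) for _ in range(5000)]

# Spread of the individual measurements
individual_sd = statistics.stdev(process)

# Averages of random subgroups of n=5: this is where the CLT applies
subgroup_means = [statistics.mean(random.sample(process, 5)) for _ in range(1000)]
mean_sd = statistics.stdev(subgroup_means)

# The CLT predicts sd(averages) ~ sd(individuals) / sqrt(n),
# regardless of the shape of the individual distribution
print(individual_sd, mean_sd)
```

The histogram of the averages would look bell-shaped; the time-ordered behavior of the individuals is a separate question entirely.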

Shewhart charts - as originally conceived - apply to homogeneous process streams, which will behave in time series like random samples without apparent pattern.

It is fun to go back into his writings and look at his "process" examples...

#### Bev D

##### Heretical Statistician
Super Moderator
> That is true - my definition of common cause is one that is truly common - affects every part. That would, then, include tool wear. Special causes are truly special, and do not affect every part - such as tool breakage, start-up/warm-up, etc.

Yes - you operate under a different definition of the terms 'special' and 'common' than the original operational definitions. It's not wrong, as it works for you, and they are operational definitions, not physical or statistical laws. However, it can be confusing when two people use the same words without realizing that they have different meanings.


#### ncwalker

OK. This was quick to read, but will take me some time to digest.

Bev - I do not have "data." I have a model. In my model I have 35 cavities with a Cpk of 2.0 and one cavity making bad parts. The collective calculates as capable. In the molding world, this is very possible. A new toolmaker need only install the wrong core pin. I have seen it happen in molds with small cavity counts, and yes, the capability went out then, because it was not so favorably weighted as in my 35-to-1 model. The point is I have convinced myself that a capable collective cannot be used to indicate capable streams.

Homogeneity - well, that's purely a matter of scale. A chocolate chip cookie is not homogeneous, but a chocolate chip cookie 100 yds in diameter is. The whole question of what distribution to expect with assignable causes lurking around comes down to this scale. If during my PPAP run I'm diddling the knobs the whole time, well, I have a lot of assignable causes. Note that BY the central limit theorem, averaging subgroups will still yield a normal distribution... But I'll look at your model, Bob.
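To make the tension here concrete, a minimal sketch (all numbers invented) of a tool-wear stream: averaging consecutive subgroups smooths the noise but keeps the sawtooth, so the time-series pattern survives averaging even though a histogram of the averages may look well behaved.

```python
import random
import statistics

random.seed(2)

# Invented tool-wear stream: linear wear plus noise, reset at each tool change
def sawtooth_stream(n_parts=2000, wear_per_part=0.0005, tool_life=200, noise=0.002):
    return [10.0 + (i % tool_life) * wear_per_part + random.gauss(0, noise)
            for i in range(n_parts)]

stream = sawtooth_stream()

# Averages of consecutive subgroups of 5, as an X-bar chart would plot them
means = [statistics.mean(stream[i:i + 5]) for i in range(0, len(stream), 5)]

# The averages are less noisy, but the wear trend is still fully present
print(min(means), max(means))
```

The spread of the subgroup averages is dominated by the wear trend, not by the part-to-part noise, which is exactly why the time order matters.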

I am not yet convinced that it is invalid to grab a subset away from the discontinuity (tool change), or even right up to it, as long as the discontinuity itself is not included.

Still, the sawtooth model you supply and the resulting distributions read like each point on the sawtooth is exactly that - one point. In control charts they are not; they are averaged. So I am not yet ready to rule out control charts based on that sketch. I have built a "real" sawtooth model and explored it. My model incorporates slight part-to-part variability, trending upward but not guaranteeing that each following part is larger (or smaller, depending on the op). I find that's reality. And yes, some of it is measurement error. I also find that on my diamond reamers, the slope of the sawtooth is so gradual that I never get the sawtooth. Either the machine breaks, or we change over to another part. And when we pick back up after a change, well, I'm in a different spot on my chart.

So in the subset I examine, I'm looking for when to correct. If I average 3 points per measurement and get a sigma (albeit an OK one, not a great one), I AM confident that when I get within 3 sigma of a spec limit, it's time to adjust. If I were marching lock step in a row, larger part to larger part, in a rigid, organized wear model, I would STILL get a sigma, and could still use it to know when to put the brakes on.
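That adjustment rule can be written down directly. A hypothetical sketch (the function name, spec limits, and sigma value are all invented for illustration):

```python
import statistics

# Hypothetical rule: average the last 3 readings and flag an adjustment
# when that average drifts within 3 sigma of either spec limit.
# sigma is assumed to come from short-term part-to-part scatter.
def needs_adjustment(readings, usl, lsl, sigma):
    point = statistics.mean(readings[-3:])
    return point > usl - 3 * sigma or point < lsl + 3 * sigma

sigma = 0.002  # assumed short-term sigma

print(needs_adjustment([10.004, 10.005, 10.006], usl=10.010, lsl=9.990, sigma=sigma))
print(needs_adjustment([10.000, 10.001, 9.999], usl=10.010, lsl=9.990, sigma=sigma))
```

The first case triggers (the running average has drifted inside the 3-sigma guard band of the upper limit); the second does not.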

I hear what you are saying, just not convinced yet. I'll keep reading.

#### bobdoering

Trusted Information Resource
> I hear what you are saying, just not convinced yet. I'll keep reading.

Don't pass up looking at my example of real shop floor data, too.

Your reamer does not provide adjustment, but a new reamer will (or should) also start high and wear low - so over time and several reamers you will still get a sawtooth curve.

You are correct - X hi/lo R chart does not chart the average value - it charts every value of a circular feature - much more powerful.

Let me illustrate one problem with typical SPC on a circular feature (as an example). For each sample you measure one of the resulting part diameters. The emphasis is on 'one'. How many diameters are there in a circle? An infinite number. So, how can you describe or predict an infinite number of diameters with one measurement? You cannot. In fact, it represents 1/infinity - a statistically insignificant sample of the diameter population. Now, take 5 samples and average them. That is the average of 5 statistically insignificant samples. And from that weak data you want to make decisions? The X hi/lo R chart uses every diameter on one part and plots them (represented by the high and low - after all, all the others are 'in between'!). Much more powerful, much more accurate.
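One way to see the point is to model an out-of-round (lobed) part, where the diameter varies with measurement angle: a single measured diameter lands somewhere between the extremes that an X hi/lo chart would plot. A sketch with invented geometry:

```python
import math
import random

random.seed(3)

# Invented 3-lobed part: diameter varies with measurement angle
def diameter(theta, nominal=10.0, lobe=0.004, lobes=3):
    return nominal + lobe * math.cos(lobes * theta)

# Sample diameters over half a turn (a diameter repeats every 180 degrees)
diams = [diameter(math.radians(d)) for d in range(180)]

one_reading = diameter(random.uniform(0, math.pi))  # a single diameter check
hi, lo = max(diams), min(diams)                     # what X hi/lo would plot

# The single reading lands somewhere between extremes it never saw
print(lo, one_reading, hi)
```

With this made-up lobing, the full hi/lo spread is 0.008, and any single diameter reading can fall anywhere inside it.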


#### Bev D

##### Heretical Statistician
Super Moderator
> Point is I have convinced myself that a capable collective cannot be used to indicate capable streams.

Which formula are you using to calculate the capability of the individual streams and of the total?

What sample size are you using?


#### ncwalker

I am using Cp = (UL - LL) / (6 × sigma) and Cpk = min(UL - mu, mu - LL) / (3 × sigma).
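Written out as code, those formulas look like this (a direct transcription, using the sample standard deviation as stated; the data values are invented):

```python
import statistics

# Cp: spec width over six sigma; Cpk: distance from mean to the
# nearer spec limit over three sigma (sigma = sample standard deviation)
def cp(data, usl, lsl):
    return (usl - lsl) / (6 * statistics.stdev(data))

def cpk(data, usl, lsl):
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    return min(usl - mu, mu - lsl) / (3 * sigma)

data = [9.999, 10.001, 10.000, 10.002, 9.998, 10.000]
print(cp(data, 10.010, 9.990), cpk(data, 10.010, 9.990))
```

For a perfectly centered data set like this one, Cp and Cpk coincide; they separate as the mean drifts off nominal.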

In the model I have, I can vary the sample size. If I play the game of guaranteeing one from each of the 36 cavities, you are right: my 35 capable streams far outweigh the contribution of the one I have forced bad. If I start taking more in the model, say 100 measurements, and STILL force (roughly) the same amount from each cavity, it doesn't change anything. It only scales the problem in the histogram. In fact, the capabilities don't change at all.

If I go to a number of samples with the model driven by a random selection of cavities (as if a receiving department were pulling them from a large box with no thought to cavity number) then I find that the one bad one is so seldom selected the problem gets worse. And by worse I mean the capabilities improve, but the result is worse because I am trying to detect the bad sample from the conglomerate.

So I am fully convinced one cannot "just take" samples. Capability of the conglomerate does not guarantee capability of the individual streams.
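That conclusion is easy to reproduce in a small simulation (all parameters invented: 35 cavities centered at nominal, one shifted above the upper limit). The pooled index can still come out looking respectable while the bad cavity's own Cpk is strongly negative:

```python
import random
import statistics

random.seed(4)

USL, LSL = 10.010, 9.990  # invented spec limits

def cpk(data):
    mu, s = statistics.mean(data), statistics.stdev(data)
    return min(USL - mu, mu - LSL) / (3 * s)

# 35 capable cavities at nominal, plus one shifted out of spec
cavities = [[random.gauss(10.000, 0.001) for _ in range(40)] for _ in range(35)]
cavities.append([random.gauss(10.012, 0.001) for _ in range(40)])

pooled = [x for cav in cavities for x in cav]

print(cpk(pooled))        # pooled index: still looks respectable
print(cpk(cavities[-1]))  # the bad cavity on its own: clearly incapable
```

The 35-to-1 dilution keeps the pooled mean near nominal and only modestly inflates the pooled sigma, which is exactly why the conglomerate index hides the bad stream.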

So the next thing I am going to play with is some middle-ground steps. For example - if I take, say, n = 40 from each of my 36 cavities, that's 1,440 measurements... That gets expensive.

So maybe I break the problem up. Take n = 8 from 5 cavities as a group, for 40 samples per group. That's about 280 total. And with the possibility of one bad cavity, well, that cavity would contribute 20% of its group's total. That should be enough to drag down 4 capable cavities and give me a result.

It's about how certain we want to be and how much effort we are willing to go through to get there.

#### Bev D

##### Heretical Statistician
Super Moderator
Let me clarify: which formulas are you using for the standard deviations and when?

You are correct that you 'can't just take samples'. The requirement is that the samples be random, in order to be representative, and that the sample size be sufficient for the accuracy (actually, precision) of the estimate. The sample size needed depends on the standard deviation of the process you are sampling from... yes, that is an iterative process...
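That iteration can be sketched with the usual normal-approximation sample-size formula (the z, margin, and s values here are illustrative assumptions, not from the thread):

```python
import math

# How many random samples to estimate a mean to within +/- margin,
# given an estimated process standard deviation s (normal approximation).
# Because s is itself estimated from a sample, this is iterative in
# practice: sample, re-estimate s, recompute n, and repeat.
def n_for_margin(s, margin, z=1.96):  # z = 1.96 for ~95% confidence
    return math.ceil((z * s / margin) ** 2)

print(n_for_margin(s=0.002, margin=0.001))  # 16 samples under these assumptions
```

The quadratic dependence on s/margin is what makes the iteration matter: underestimate sigma early on and the computed n can be badly optimistic.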

The big issue with process capability indexes is that they have given the false impression that we can use a cookbook approach and turn off our brains. Very sad.


#### ncwalker

Sorry. Sample standard deviation right out of Excel. (Not population, except for Pp and Ppk).

#### Bev D

##### Heretical Statistician
Super Moderator
Ahh, to be specific: are you using the within-subgroup average standard deviation (or range), and/or the total standard deviation with all data points? Which are you using for within-stream (within-cavity), and which for all cavities combined?
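The distinction being drawn here can be shown with two made-up subgroups: the within-subgroup sigma ignores any shift between cavities, while the total sigma absorbs it.

```python
import statistics

# Two invented subgroups (e.g. two cavities), the second shifted upward
subgroups = [
    [10.000, 10.001, 9.999],   # cavity A
    [10.004, 10.005, 10.003],  # cavity B, shifted up
]

# Within: average of the subgroup-internal standard deviations
within = statistics.mean(statistics.stdev(g) for g in subgroups)

# Total: standard deviation of all points pooled together
total = statistics.stdev([x for g in subgroups for x in g])

print(within, total)  # total exceeds within whenever the streams differ
```

A capability index built on the within sigma will therefore flatter a multi-cavity process whose cavities sit at different means, which is why the question matters.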


#### ncwalker

I am using the sample standard deviation function from Excel.

Maybe you should look at my model.

#### Attachments

• MultiCav Capability Model.xlsx (138.5 KB)