Capability on Parallel Processes such as Individual Mould Cavities


ncwalker

Here is the question: If I have "n" cavities in a mold (or "n" fixtures in a machining process, or "n" processes in parallel for that matter....), if a random sampling across n cavities yields a capable process (the collective output) does that mean that each of the individual outputs is capable?

The reverse is obviously not true. Individually capable parallel paths do NOT mean the collective is capable. You just have to think about it: say the output of two cavities has a high Cp, like Cp=8 on an inside diameter. This means I have very little variation with respect to the tolerance. But now assume the Cpks are JUST acceptable. Say 1.4 for each one. Further, with these metrics, it is possible that the output of Cav 1 is bumping the lower spec, but with a nice, tight group. And the output of Cav 2 is bumping the upper spec also with a nice tight group.

Now if I take random samples from both of these distributions, well, my sigma is gonna be pretty high. It will show I am eating all my tolerance up (unless my subgroups are restricted to all being from the same cavity, and even then). So the collective capability will be pretty bad.
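To put rough numbers on it, here is a quick Python sketch of exactly that scenario (the spec limits and sigma are made-up illustrations; only the Cp = 8 / Cpk = 1.4 geometry comes from the example above):

```python
# Two cavities, each with Cp = 8 and Cpk = 1.4, sitting on opposite ends of the
# tolerance. Spec limits and sigma are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
LSL, USL = 9.0, 11.0                 # hypothetical ID spec limits
sigma = (USL - LSL) / (6 * 8.0)      # per-cavity sigma implied by Cp = 8
offset = 1.4 * 3 * sigma             # Cpk = 1.4 -> mean is 4.2 sigma from the nearer spec

cav1 = rng.normal(LSL + offset, sigma, 5000)   # Cav 1 bumping the lower spec
cav2 = rng.normal(USL - offset, sigma, 5000)   # Cav 2 bumping the upper spec
mixed = np.concatenate([cav1, cav2])           # the box the customer samples from

def ppk(x):
    m, s = x.mean(), x.std(ddof=1)
    return min(USL - m, m - LSL) / (3 * s)

print("Cav 1 Ppk:", round(ppk(cav1), 2))   # ~1.4
print("Cav 2 Ppk:", round(ppk(cav2), 2))   # ~1.4
print("Mixed Ppk:", round(ppk(mixed), 2))  # ~0.4 - the collective looks awful
```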

This does NOT mean my individual cavities are out of control. They are in control, albeit they could use some centering. But if I am putting this in-control output into boxes that are mixed and sent to my customer (face it, they don't want to be able to tell the difference between cavities) and THEY do a study on my collective output in their receiving department... They're not gonna be happy.

But what about the CONVERSE?

A 24-cavity mold? That's expensive and time-consuming to do a study on..... Can I say with certainty that if I grab 36 samples from a box of mixed parts and this population IS capable, then each individual cavity is capable?

Note that if I pull 36 samples, that doesn't leave much room for duplicates.... do I want that to be truly random? Or do I pull 24, one from each cavity, then another 12 at random? Or maybe one from each cavity, so that the second draw contains no repeats.....

Or am I better off with a more controlled experiment? Like doing a furnace map? Do I take n from a couple of corner cavities and n from a couple of center cavities? Or maybe base my selection on watching the process - say, with a pyrometer for a molding process - taking some from the cooler, hotter, and middle cavities, temperature-wise?

It's OK to say "do what the customer wants," and I understand this. But they are paying ME for my expertise in the fabrication of the part. So I should get a say in what makes sense ..... (provided it does).
 

Miner

Forum Moderator
Leader
Admin
Re: Capability on Parallel Processes

Overall, you have a good grasp of the situation.

Do you have a high degree of confidence that your process is in control? If you do, my suggestion is the following. In your 24 cavity mold example, pull 4-5 samples from each cavity (total sample size should be approximately 100). Plot a control chart by cavity number, not by time. Note: this is similar to performing ANOM. If all cavities are in control (ignore extended rules as they will not apply in this case), then the cavities are statistically equivalent and may be treated as one. You may then calculate capability using all cavities.
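For illustration, something like this in Python (the A2 constant, the data, and the deliberately nudged cavity are only assumptions for demonstration):

```python
# Xbar chart subgrouped by cavity number (24 cavities x 5 shots each, ~120 parts).
# All constants and data here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
A2 = 0.577                                   # Xbar-R chart constant for subgroups of 5
data = {cav: rng.normal(10.0, 0.02, 5) for cav in range(1, 25)}
data[13] = rng.normal(10.06, 0.02, 5)        # nudge one cavity to show how it gets flagged

xbars = {cav: v.mean() for cav, v in data.items()}
rbar = np.mean([np.ptp(v) for v in data.values()])
grand = np.mean(list(xbars.values()))
UCL, LCL = grand + A2 * rbar, grand - A2 * rbar

for cav, xbar in xbars.items():
    flag = "ok" if LCL <= xbar <= UCL else "OUT - not equivalent"
    print(f"Cavity {cav:2d}: mean = {xbar:.4f}  [{flag}]")
```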

If some cavities are different, your process will likely begin to deviate from the normality assumption (if it is typically normal to begin with). It may still be capable, but may require a different approach.
 

ncwalker

Re: Capability on Parallel Processes

Understand what you are saying, but I don't think one can test a subset of the cavities by plotting a control chart, show that each of those cavities individually is in control, and then conclude that the cavities as a group are in control. Why? Because one of the cavities could be close to the low limit, but in control, and another one could be close to the high limit and ALSO in control. I think if one were to call cavities "equivalent" you would have to take a statistical number of measurements (30 to 40) and do a Student's t-test for differentiation of means. THAT would say there is no difference between Cavity X and Cavity Y. Still, to cover ALL the cavities, are we saying we would have to t-test all of them? That doesn't reduce the number of measurements. It just adds another mathematical model we are going to run the data through: a t-test in addition to a capability study.
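For what it's worth, the comparison would look something like this in Python (illustrative data; a one-way ANOVA across all cavities at once avoids running every pairwise t-test):

```python
# Two-sample t-test between two cavities, plus a one-way ANOVA across all 24
# cavities at once. Measurement values are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cavities = [rng.normal(10.0, 0.02, 35) for _ in range(24)]   # ~30-40 readings per cavity
cavities[7] = rng.normal(10.015, 0.02, 35)                   # one slightly shifted cavity

t, p = stats.ttest_ind(cavities[0], cavities[7], equal_var=False)  # Welch's t-test
print(f"Two-cavity t-test: t = {t:.2f}, p = {p:.4f}")

f, p_all = stats.f_oneway(*cavities)          # one test across all cavities
print(f"ANOVA across all cavities: F = {f:.2f}, p = {p_all:.4f}")
```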

Here is what I think....

I don't think one can CONTROL individual process outputs by looking at the collective. And this is assuming one is not 100% inspecting, but attempting SPC. One has to consider each process individually. Why?

Molds - something can happen to one cavity physically (damage/wear) or thermally.

Machining fixtures - something can happen to a fixture - loose bolt, broken dowel or datum pin.

Parallel lines - what two assembly/process lines are EXACTLY the same, really?

And the list goes on.....

Here is how I would minimize the control burden:

In the case of molds, I would thermally map the different cavities based on an experiment. I would certainly continuously monitor the hottest and coldest. I mean continuously in the sense that I would take 3-5 samples from each as a regular statistical data point. The others? I would spot check them. Even one piece might be sufficient (but checked against CONTROL limits, not specification limits). How many? How frequent? More is better, but more = $$$. One has to weigh:

a) Speed of the process - how many bad parts COULD I make between measurements?
b) Cost of measuring vs. cost of containment - This is actually pretty easy.... If my selected frequency is known and my production rate is known, I just have to ask myself how much it costs to scrap the n parts made between my last good measurement and the current bad one (or manually sort them until the defect is contained) vs. the cost of making the measurement. (A rough sketch of this trade-off follows the list.)
c) Risk of exposure - let's face it, how bad am I going to get beat on if I let a bad part get to my customer?
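Something like this back-of-the-envelope math (all numbers invented for illustration):

```python
# Trade-off in (b): worst-case containment/scrap cost between checks versus the
# cost of checking more often. Every number here is a made-up assumption.
production_rate = 120          # parts per hour
check_interval_hours = 2.0     # candidate measurement frequency
cost_per_check = 15.00         # labor + gage time per check
scrap_cost_per_part = 2.50     # cost to scrap (or sort) one suspect part

parts_at_risk = production_rate * check_interval_hours
containment_cost = parts_at_risk * scrap_cost_per_part      # if the last check was the last good one
measurement_cost_per_shift = (8 / check_interval_hours) * cost_per_check

print(f"Parts at risk between checks: {parts_at_risk:.0f}")
print(f"Worst-case containment cost:  ${containment_cost:.2f}")
print(f"Measurement cost per shift:   ${measurement_cost_per_shift:.2f}")
```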

By doing a really good job of measuring my extreme cases with good frequency, I sort of validate the endpoints for the others. Except for breakage or wear. Now breakage is a gimme putt. Usually when a mold breaks, it really breaks. It's almost visual. But wear? Not so much. Cavities wear slowly, requiring measurement.

Still, my original question. Let's say I do a PPAP run of 100 shots on a 36-cavity mold. I now have 3,600 PPAP parts. My quality techs goof and all these things wind up mixed in one big box. If I take 40 random parts from this box for a capability study (not knowing which cavity they are from) and these 40 pieces show capability, can I then say that each individual cavity is capable?
 

ncwalker

Well, I couldn't let a sleeping dog lie. I went and built an Excel model. It supports up to 36 parallel processes that you can think of as cavities, machining fixtures, whatever.

You set a main mean and sigma for all of them, then you can put in a "nudge value" for the individual processes and it generates 100 data points (configurable) for each process. The data points are random normal numbers.

You get to see the Pp/Ppk (didn't bother with Cp/Cpk because of subgroup difficulty in the layout of the sheet).

Then on the collective model, you tell it what percent of the total data points you want to sample, as if you were pulling the collective output randomly from a box. Histogram output makes analysis pretty quick.

Trying different scenarios and hitting the F9 key a LOT, here is what I saw. And most of this is intuitive:

1) The capabilities of the individuals and the collective can be vastly different.

2) The more individuals there are, the less the effect of each individual on the collective. Example: the 36-cavity mold. If I randomly draw 36 samples with all but one cavity centered and capable, it happens pretty frequently that there is no representation of the "bad" cavity. And even when it is there, being one out of 36, it doesn't drag the mean far enough to affect Ppk unless it is significantly out. If it is just incapable (like Cpk = 0.5), it gets lost in the masses of good points. And it scales. If I take 3 measurements from each cavity by force (108 points), with one incapable cavity, well, everything scales. The histogram gets taller, but the capability of the collective remains unchanged.
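Here is roughly the same experiment redone in Python rather than Excel (my spec limits, sigma, and the bad cavity's mean are arbitrary stand-ins for the "nudge values"):

```python
# 36 cavities: 35 centered and capable, one nudged badly off-center, then a
# random grab of 36 parts from the mixed box. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(42)
LSL, USL = 9.0, 11.0
sigma = 0.1                             # per-cavity sigma -> Cp about 3.3
means = np.full(36, 10.0)
means[35] = 10.85                       # the one incapable cavity (Ppk about 0.5)

shots = np.vstack([rng.normal(m, sigma, 100) for m in means])   # 36 x 100 parts

def ppk(x):
    m, s = x.mean(), x.std(ddof=1)
    return min(USL - m, m - LSL) / (3 * s)

print("Worst single-cavity Ppk:", round(min(ppk(row) for row in shots), 2))
grab = rng.choice(shots.ravel(), size=36, replace=False)        # 36 parts from the mixed box
print("Ppk of the 36 mixed parts:", round(ppk(grab), 2))        # usually still looks "capable"
```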

Been a while since I was in college and the math skills have deteriorated, so I can't mathematically prove it. But the fact that I could model a situation with 35 capable cavities and 1 incapable cavity and have the collective still show capable, with the one incapable cavity included, sorta demonstrates that:

You cannot infer that a capable random sampling of parallel processes means that all of the individual processes are capable.

If I stop and think about the good old Central Limit Theorem, the statement above follows.

So how does one demonstrate capability for a lot of parallel processes in a way that actually demonstrates it, but still optimizes for testing cost? Especially at initial PPAP, when part designs are apt to change, which can orphan a large number of parts.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
Here is the question: If I have "n" cavities in a mold (or "n" fixtures in a machining process, or "n" processes in parallel for that matter....), if a random sampling across n cavities yields a capable process (the collective output) does that mean that each of the individual outputs is capable?

No, the only meaningful capability analysis is by individual cavities for all cavities. In essence, treat each cavity as an individual machine. Once one is comfortable with the reliability of that data, then you can track the worst versus the best. But to throw them all together - after random sampling - is really only good for data stew.

The reverse is obviously not true. Individually capable parallel paths do NOT mean the collective is capable.

That is correct, so the best way to analyze the process is via the individual paths.

This does NOT mean my individual cavities are out of control. They are in control, albeit they could use some centering. But if I am putting this in-control output into boxes that are mixed and sent to my customer (face it, they don't want to be able to tell the difference between cavities) and THEY do a study on my collective output in their receiving department... They're not gonna be happy.

They are not happy because they are ignorant of process capability. You cannot determine process capability with incoming receiving sampling. An example of this is here...but the same reasoning applies to your situation. Incoming receiving cannot represent the process due to severe sampling error.

It's OK to say "do what the customer wants," and I understand this. But they are paying ME for my expertise in the fabrication of the part. So I should get a say in what makes sense ..... (provided it does).

True - and if they demand process capability indices (and sadly, they do), you need to show them the capability of each stream - as that is how the process is controlled. If it is meaningful, and they want to endure the cost and lead time to temporarily dial in every cavity so that they are so similar that the combined capability is adequate....then fine. But what they really need is for you to share your expertise on the product and process, show the capability of the individual process streams, and show how that capability ensures a high probability that all of the parts are in spec.

It's molding...there is no easy answer.
 

Bev D

Heretical Statistician
Leader
Super Moderator
Ncwalker: what you are seeing in your model is the effect of sample size and the large number of process streams.
Two comments:
1. As you've seen, small sample sizes randomly selected from a non-homogeneous process stream (multiple cavities with different means, in your case) have a very high probability of missing non-capable streams. Sample size is driven by the standard deviation of the process, the accuracy (or, in actuality, the precision) of the estimate, AND the non-homogeneity of the process in question. (A quick calculation of this miss probability is sketched after these comments.)
2. Even if you have a couple of incapable process streams, if the sampling was representative and the total capability of all streams combined is good, the overall capability is good. (however you define capability, Cpk > 1.33?). The only exception to this would be if one or more customers receive a single stream or if the product is shipped in lots that contain only a single stream. In that case the customer will experience times when the capability is worse than expected and you will have to answer that question.
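On point 1, the arithmetic for simply never seeing a given cavity is easy to sketch (assuming 36 cavities contributing equal volumes to the box):

```python
# Probability that a random draw from a well-mixed box contains NO parts at all
# from one particular cavity, assuming 36 equal-volume cavities.
n_cavities = 36
for sample_size in (24, 36, 100, 200):
    p_miss = (1 - 1 / n_cavities) ** sample_size
    print(f"n = {sample_size:3d}: P(no parts from a given cavity) = {p_miss:.1%}")
# A draw of 36 still misses any particular cavity roughly 36% of the time.
```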

As has been discussed many times in this forum, capability indexes are all but useless in communicating, controlling or driving improvements in true quality levels...
 

ncwalker

They are not happy because they are ignorant of process capability. You cannot determine process capability with incoming receiving sampling. An example of this is here...but the same reasoning applies to your situation. Incoming receiving cannot represent the process due to severe sampling error.

I looked at the sawtooth plot and understand the images - the start and endpoints of the sampling period will drive the values you take based on where they fall on the sawtooth. But I don't think that example is representative of the receiving inspection problem.

First, the images were of control charts. So the "lines" of the sawtooth were not individual points, rather averages of multiple points. So if you were performing a capability study on subgroups taken anywhere along the sawtooth, each point would be something like 3 to 5 measurements. You would be averaging these and, via the central limit theorem, your result would approach a normal distribution. So you would still be able to get capability.

(I actually have a killer control chart in Excel. X-bar R. You can use sliders to grab a portion of the data along a sawtooth I have generated as a teaching example. Even if you grab the discontinuity of the tooth, you still get a normal histogram provided there are enough data points.) Now this REALLY depends on how you calculate sigma. There are a couple of schools of thought .... calculate it on each subgroup, then average those subgroup sigmas. Or calculate it on the whole kit and caboodle......

The receiving inspection problem is really one of short-term capability, Cp/Cpk. The estimators used for sigma rely on the parts being IN ORDER. But for Pp/Ppk it doesn't matter. You are using the sample standard deviation - transparent to order.

Put some random normal data into Minitab. Run the Capability sixpack and look at Cp/Cpk and Pp/Ppk. Go back and SORT the data and re-run. Check your indices again. (And find a really devious way to suddenly make all your capabilities good.... :)
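If you don't have Minitab handy, the same trick is easy to see in a few lines of Python (the data and the moving-range estimator here are just one common way to get an order-dependent "within" sigma - an assumption on my part, not Minitab's exact internals):

```python
# Within sigma (average moving range / d2, order-dependent) versus the overall
# sample standard deviation (order-blind), before and after sorting the data.
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(10.0, 0.1, 200)              # random normal "process" data

def sigma_within(v):
    mr = np.abs(np.diff(v))                 # moving ranges of consecutive points
    return mr.mean() / 1.128                # d2 = 1.128 for a span of 2

def report(v, label):
    print(f"{label:8s} sigma_within = {sigma_within(v):.4f}   "
          f"sigma_overall = {v.std(ddof=1):.4f}")

report(x, "as run")
report(np.sort(x), "sorted")                # within-sigma collapses; overall sigma is unchanged
```

Sorting shrinks the order-based sigma (so Cp/Cpk explode), while Pp/Ppk, built on the overall standard deviation, don't move.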

I think receiving inspection COULD calculate Pp/Ppk from random samples and it would be pretty representative of the process.

Second, the example shows the sawtooth. And we all know this is generated by some sort of tool wear that is then corrected. I haven't seen it that clean in reality. There is always something that comes along and disturbs it. Machine breaks, changes in incoming stock. The list goes on. What I am saying is, I haven't seen such a regular sawtooth in my experience. The point? WHEN do you say enough samples have gone by?

It's molding...there is no easy answer.

My purchasing folks would disagree. They say RESOURCE. :rolleyes:

I noticed under your user name you have Stop X-bar/R Madness. Man, I'm with you. I don't even like X-bar/S. I started doing what I call "running sigma". I set up a sheet where you take your grouped measurements and plot your average. Then you specify a look-back parameter. This calculates sigma for the current data point based on the last n points specified by the look-back parameter. Then I plot the average and, using this running sigma, the +/- 3 sigma lines. And I watch those against the SPEC limits. Neat. Easy. Clear. Once the sigma bands cross the spec limits, I know I'm making bad parts.
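In Python terms, the sheet is doing something like this (spec limits, look-back length, and the drifting data are all invented for illustration):

```python
# "Running sigma": subgroup averages with sigma computed over a look-back window
# of the last n points, compared against SPEC limits. All values are illustrative.
import numpy as np

rng = np.random.default_rng(11)
LSL, USL = 9.5, 10.5
lookback = 10
# subgroup averages with a slow upward drift so the band eventually hits a limit
means = 10.0 + 0.015 * np.arange(60) + rng.normal(0, 0.05, 60)

for i in range(lookback, len(means)):
    window = means[i - lookback:i + 1]
    m, s = window.mean(), window.std(ddof=1)
    lo, hi = m - 3 * s, m + 3 * s
    if lo < LSL or hi > USL:
        print(f"Subgroup {i}: running +/-3 sigma band ({lo:.3f}, {hi:.3f}) crosses a spec limit")
```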

Completely agree that the only way to track things is individual cavities.

NCWalker
 

ncwalker

2. Even if you have a couple of incapable process streams, if the sampling was representative and the total capability of all streams combined is good, the overall capability is good.

I agree with that in that the indices will be good. But I'm not so sure I agree it satisfies the intent. In my model, if I run it with 35 cavities centered and capable and old number 36 is actually generating out-of-spec parts, it STILL calculates good capability (for the reasons you stated). But I'm making bad parts... in significant quantities.

So I trick my receiving inspection department. I am feeding them parts where the process is not in control. They calculate good capability, and we all scratch our heads as to why the line keeps crashing....
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
I looked at the sawtooth plot and understand the images - the start and endpoints of the sampling period will drive the values you take based on where they fall on the sawtooth. But I don't think that example is representative of the receiving inspection problem.

In the diagram the sawtooth chart shows that the sampling was taken from the exact same process in each case - but the distribution charts below then show that you can have bimodal, tight, skewed or other resulting distributions, depending on the sample presented. An incoming box of material could show any of those distributions, yet be from the same perfectly controlled and capable process. That illustrates my point that incoming inspection cannot duplicate the process capability.
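You can see the same effect with a toy sawtooth in a few lines of Python (the wear band and the window positions are arbitrary assumptions):

```python
# Different sampling windows on one idealized, perfectly regular tool-wear
# sawtooth give very different-looking sample distributions.
import numpy as np

saw = np.tile(np.linspace(9.95, 10.05, 50), 20)    # repeating sawtooth, 1000 values

def describe(window, label):
    print(f"{label:20s} min = {window.min():.3f}  max = {window.max():.3f}  "
          f"mean = {window.mean():.3f}  std = {window.std(ddof=1):.4f}")

describe(saw[0:50],  "one full tooth")       # spread over the whole wear band
describe(saw[0:15],  "early in a tooth")     # tight group near the low end
describe(saw[40:65], "across a tool reset")  # two clumps at opposite ends (bimodal-looking)
```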

First, the images were of control charts. So the "lines" of the sawtooth were not individual points, rather averages of multiple points. So if you were performing a capability study on subgroups taken anywhere along the sawtooth, each point would be something like 3 to 5 measurements. You would be average these and via the central limit theorem, your result would approach a normal distribution. So you would still be able to get capability.

Actually, the central limit theorem only works for independent variables. Tool wear is dependent, and plotting the averages would not approach a normal distribution - only a truncated uniform distribution, illustrated here. You can calculate capability, but you need to use the statistics of the uniform distribution, not the normal distribution. All of that is a long, old story we have already hashed through here on the Cove.
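As a quick numeric illustration of why the distribution choice matters (the wear band below is an assumed number; the sqrt(12) factor is just the relationship between a uniform distribution's width and its standard deviation):

```python
# For a uniform (sawtooth-like) distribution, every value lies within about
# +/-1.73 sigma of the mean, so a normal-theory 6-sigma width overstates the
# real width of variation by roughly 73%.
import numpy as np

wear_low, wear_high = 9.95, 10.05                      # assumed wear/adjustment band
cycles = np.tile(np.linspace(wear_low, wear_high, 200), 10)

sigma = cycles.std(ddof=1)
print("Actual width of variation:     ", round(wear_high - wear_low, 4))
print("Normal-theory 6*sigma width:   ", round(6 * sigma, 4))
print("Uniform width, sqrt(12)*sigma: ", round(np.sqrt(12) * sigma, 4))
```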

The receiving inspection problem is really one of short-term capability, Cp/Cpk. The estimators used for sigma rely on the parts being IN ORDER. But for Pp/Ppk it doesn't matter. You are using the sample standard deviation - transparent to order.

Works great, as long as the original process was a homogeneous process. The original process is not, with its various streams. Look at the attached chart. It shows that one stream can have a good Cpk (or Ppk, if you wish), and another stream can also have a good Cpk, narrow enough to "pass", but on the opposite side of the target (middle of the specification). Either one of those streams individually will be perfectly acceptable to a capability-centric customer. But combining them will cause a multi-modal distribution whose overall width of variation is too wide to generate an acceptable Cpk calculation.

Now, here comes the hard question: if the individual streams were perfectly acceptable, why would the mixed streams not be? It doesn't make any sense. One reason is that the calculations artificially stretch the variation beyond its actual probability - treating it as one big honkin' Gaussian curve. Not a good model to base a decision on, although a very popular one.

My purchasing folks would disagree. They say RESOURCE. :rolleyes:

Apparently, give them a gun and they will shoot at anything.

I noticed under your user name you have Stop X-bar/R Madness. Man, I'm with you. I don't even like X-bar/S. I started doing what I call "running sigma". I set up a sheet where you take your grouped measurements and plot your average. Then you specify a look-back parameter. This calculates sigma for the current data point based on the last n points specified by the look-back parameter. Then I plot the average and, using this running sigma, the +/- 3 sigma lines. And I watch those against the SPEC limits. Neat. Easy. Clear. Once the sigma bands cross the spec limits, I know I'm making bad parts.

My point is I do not approve of rubber stamping X-bar R charts with the false notion that they are universally correct. I have shown that in the particular case of precision machining, they encourage overcontrol and are actually the worst possible chart of all possible charts. That is one specific example - there could very well be others. But....I also do not encourage Cpk or Ppk indices used indiscriminately. First - no single number can represent how a process will perform over time, and second (as seen in the attached diagram) it is not always applicable or correct.

Completely agree that the only way to track things is individual cavities.

Cool! :cool:
 

Attachments

  • cpk comparison.jpg

Bev D

Heretical Statistician
Leader
Super Moderator
The central limit theorem works for sample averages and random sampling. Process capability is for individual values. The central limit theorem does not apply at all to individual values.

Ncwalker: If your sample size is large enough to get a representative sample, then the only way you get a capable Ppk is if the distribution of all streams together is capable. If capability is defined as 1.33, then you would have to have even your worst streams contributing no observed (sampled) values near the spec limits for the +/- 3 sigma spread to be 3/4 of the tolerance... one or two streams near that would have an individual Ppk less than 1.33 but could not be observed to have defects. So your question is theoretically possible, but then you provide an example where you have one or more streams actually producing defects. THAT will not yield a capable overall result unless you have a lot of other very capable streams, such that your incapable streams are very small in contribution compared to the overall. Not very likely... do you have actual data? (Remember, if your sample size is small - and any sample size that is less than your number of streams is way too small - you will most likely just not get any defects, or even inspect any values from your incapable stream.)

Also remember that a process can be in control and incapable. What you describe about comparing your results to spec limits is acceptance sampling for continuous data, not SPC. It's not bad or wrong. In fact I use it a lot, and sometimes without any SPC. It depends. But the two things do different things for us. The validity of one doesn't invalidate the other. The process or situation itself is what invalidates one tool over another.

Bob - we've had this discussion before and I realize you are of a different opinion, perhaps a different operational definition, but for others I want to clarify that the central limit theorem doesn't apply only to 'independent' variables. It applies to ALL output variables. All output variables are dependent on the input factors of their system. The difference you allude to is the operational definition of common cause and 'special' cause. A system of so-called common causes results in 'seemingly random results' - what you call independent. One or more 'special causes' will result in non-random patterns. Your tool wear has a special cause that results in a specific pattern.

The concept of SPC is that the process stream as presented is homogeneous.

The central limit theorem has nothing to do with SPC theory. The central limit theorem applies when a process is randomly sampled and the sample averages are plotted in a histogram, not a time series plot. Shewhart charts - as originally conceived - apply to homogeneous process streams, which will behave in time series like random samples without apparent pattern. When we have non-homogeneous processes like multi-cavity molds, we can still utilize this concept by applying rational subgrouping and occasionally having to use extensions of the classical chart sets. OR we can choose alternative approaches, as you have done with your tool wear.
 