Stable, Predictable, In Control - High Volume Mfg

Grimaskr

I work in a high-volume, low-cost manufacturing plant that is trying to meet the TS16949 automotive standard and to be auditable under the European VDA 6.3 auditing standard.

TS16949 says special characteristics need to be shown as "stable and capable". SPC is not mentioned in the standard.

VDA 6.3 says special characteristics need to be systematically controlled and monitored in relation to control limits. It suggests that SPC is the tool to do that.

So here's the rub: I make millions of tiny, inexpensive parts with a high Cpk (2.5 or higher), and I am constantly changing over our machines for different product lines.
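(For reference, using the standard definition below, a Cpk of 2.5 means the nearer spec limit sits 7.5 standard deviations from the process mean.)

$$ C_{pk} = \min\!\left( \frac{USL - \mu}{3\sigma},\ \frac{\mu - LSL}{3\sigma} \right) $$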

My setup-to-setup variation is a bit all over the place, but it is no concern from a capability standpoint. I'd incur a lot of lost productivity and setup scrap trying to eliminate that setup-related cause of variation... so there's no ROI.

Do I bite the bullet and say all our processes are out of control? Or is it fair to call my setup variation "predictable and stable", treat it as a common-cause source of variation, set my control limits outside all of my system-wide variation, and react only to points outside those limits?

Opinions?
 

Stijloor

Leader
Super Moderator
Are you PPAP responsible? Look at the PPAP requirements for Stability and Capability.
 

Golfman25

Trusted Information Resource
Not sure I completely understand your situation, but if it were me I would just keep the Cpk on each different part.
 

Bev D

Heretical Statistician
Leader
Super Moderator
From a physics and practical-statistics standpoint, setup-to-setup variation is a separate component of variation from piece-to-piece variation. If an I-MR chart of the setup averages shows it to be stable and predictable, then you are good. This is why "rational subgrouping" exists.
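As a minimal sketch of that approach (the setup averages below are invented for illustration, not data from this thread), you can put the per-setup averages on an individuals chart with limits computed from their average moving range:

```python
# I-MR chart of setup-to-setup averages; the values are made up.
setup_means = [10.02, 9.97, 10.05, 9.99, 10.01, 10.04, 9.96, 10.03]

# Moving ranges between consecutive setup averages
mrs = [abs(b - a) for a, b in zip(setup_means, setup_means[1:])]
mr_bar = sum(mrs) / len(mrs)
x_bar = sum(setup_means) / len(setup_means)

# Standard individuals-chart constant: 2.66 = 3/d2, with d2 = 1.128 for n = 2
ucl = x_bar + 2.66 * mr_bar
lcl = x_bar - 2.66 * mr_bar

print(f"center={x_bar:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}")
flagged = [m for m in setup_means if not lcl <= m <= ucl]
print("setup averages outside limits:", flagged or "none")
```

If the setup averages stay inside those limits, the setup-to-setup component is itself behaving as a stable system, which is the "stable and predictable" evidence this approach provides.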
 
Grimaskr

@Golfman25 - My variation/control problem is that I will run a few hundred thousand parts of "Product A" through a process like progressive stamping or plating over 12-48 hours, then move on to different parts.

A month later, when I return to "Product A", my system is pretty radically different. If it's stamping, the die has been cleaned, oiled, PMed, sharpened, and some of the tooling may even have been replaced completely.

We go through a set up procedure, possibly pulling the die once or twice to shim things... then we blast out a few hundred thousand parts again.

The means between these runs are almost necessarily different for each run. To limit setup scrap and maximize production time, as soon as we get the dimensions of the parts in the right neighborhood... we run.

The same kind of thing happens in plating: the baths are all at different concentrations each time we set up and run "Product A" through them, so our mean is always jumping around.

It's high speed, high volume, high Cpk... but the setup-to-setup changes in my mean make it look like I'm out of control based on most control charting methodologies.

@Bev D - Thanks for replying. Our runs are so short, and our sample frequencies so far apart, that I usually have only a handful of data points on an I-MR chart before I'm off to a new mean.

I could take more samples per run and recalculate control limits each time I run the product. I guess I'm just resisting that due to the lack of ROI on more sampling and chart manipulation. But I reckon making auditors/customers happy is its own reward!
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
I'm going to take your above statement literally, and reply that you don't "set" your control limits. Control limits are established only by the data, not by whim. Heck, I could take a drunk's path down the middle of a road, set the limits to plus or minus a mile from the centerline, and claim my drunk is stone-cold sober if that were the way it worked.
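For reference, the individuals-chart limits come straight from the data's own moving ranges (these are the standard I-MR constants, not anything specific to this plant):

$$ UCL,\ LCL = \bar{x} \pm 2.66\,\overline{MR}, \qquad \overline{MR} = \frac{1}{n-1}\sum_{i=2}^{n} \lvert x_i - x_{i-1} \rvert $$

where 2.66 = 3/d2, with d2 = 1.128 for moving ranges of size two. There is nothing to "set": once the data are chosen, the limits follow.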
 

Golfman25

Trusted Information Resource
Well you described my situation as well. We take each batch separately. You'll never get the numbers to work if you include all lots. I have never met someone who has been able to do it either.
 

Bev D

Heretical Statistician
Leader
Super Moderator
Exactly, Golfman.
This is very typical for high-volume plastics, metal fab, coatings, etc. We ensure that the setup is within some allowable limit that will ensure capability given the piece-to-piece variation. (Steve: this can easily be done, because the factors that control the location are different from the factors that control the variation around the location; this is not a random walk.) Control chart only the variation within a setup. However, we do include ALL of the variation in any 'capability' discussion or calculation.

Physics trumps black-box statistics!
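A minimal sketch of that split, with invented measurements and assumed spec limits (nothing below comes from the thread): control limits are computed from within-setup variation only, while the capability estimate pools everything, setup-to-setup shifts included.

```python
import statistics

# Invented data: three setups of the same part, each with its own mean.
runs = {
    "setup_1": [10.01, 10.02, 10.00, 10.03, 10.01],
    "setup_2": [9.95, 9.96, 9.94, 9.97, 9.95],
    "setup_3": [10.06, 10.05, 10.07, 10.04, 10.06],
}
USL, LSL = 10.50, 9.50  # assumed spec limits, for illustration only

# Control: individuals limits per setup, from that setup's moving ranges only.
for name, xs in runs.items():
    mr_bar = statistics.mean(abs(b - a) for a, b in zip(xs, xs[1:]))
    center = statistics.mean(xs)
    print(f"{name}: center={center:.3f}, "
          f"limits={center - 2.66 * mr_bar:.3f}..{center + 2.66 * mr_bar:.3f}")

# Capability: pool ALL the data, so setup-to-setup shifts count against it.
pooled = [x for xs in runs.values() for x in xs]
mu, sigma = statistics.mean(pooled), statistics.stdev(pooled)
ppk = min(USL - mu, mu - LSL) / (3 * sigma)
print(f"overall capability (all variation included) = {ppk:.2f}")
```

Each setup gets tight limits around its own mean for run-time control, while the pooled figure answers the customer's capability question honestly.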
 
Grimaskr

@Steve - Yes, "set control limits" is poorly phrased on my end. It's part of my "liars, damned liars, and statisticians" mindset. You "set" control limits by deciding what sample to calculate them from. :D

If I wanted to ignore the situation, I could try to cheat and use a cross-batch sample to calculate wide control limits. My rationale would be, "Setup-to-setup variation is a special cause that is necessarily inherent in my system and manufacturing model, so to avoid over-reacting to Type 1 false alarms I will calculate control limits that incorporate setup-to-setup variation."

In doing so, I greatly reduce my chance of Type 1 false out-of-control signals, but I obviously lose the ability to apply the other Western Electric rules, I ignore setup-to-setup variation as a special cause, and I camouflage any real special causes lurking within my setup-to-setup variation.
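To put a number on how much signal that cheat gives up, here is a hedged comparison on invented values: limits computed from pooled cross-setup data versus limits from one setup's own moving ranges.

```python
import statistics

# Invented data: two setups at different means, plus one genuinely bad part.
data = [10.00, 10.01, 9.99, 10.02,   # setup A
        9.90, 9.91, 9.89, 9.92,      # setup B (mean shifted by the changeover)
        10.25]                       # a real special cause

# Pooled limits absorb the setup shift, so they come out wide...
mu, s = statistics.mean(data), statistics.stdev(data)
print(f"pooled 3-sigma limits: {mu - 3 * s:.2f} .. {mu + 3 * s:.2f}")
# ...wide enough that the 10.25 part does not signal at all.

# Limits built only from setup A's moving ranges are far tighter:
a = data[:4]
mr_bar = statistics.mean(abs(y - x) for x, y in zip(a, a[1:]))
c = statistics.mean(a)
print(f"within-setup limits:   {c - 2.66 * mr_bar:.2f} .. {c + 2.66 * mr_bar:.2f}")
# Against these, 10.25 is an obvious out-of-control point.
```

With the invented numbers above, the pooled limits run out past 10.3 while the within-setup limits stop near 10.06, so the bad part only signals on the within-setup chart.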

That's not something I would even consider with a low Cpk, but it's a cheat I am tempted to do when I have no scrap, no CARs, no problems whatsoever (other than creating evidence of control for auditors).

However, Bev and Golfman are right. The "best practices" answer is to take more samples per setup and calculate control limits for each setup... it will just be a challenge convincing my engineers/production staff that we need to do that when there is no benefit other than "doing it right".

Thanks all!
 

Bev D

Heretical Statistician
Leader
Super Moderator
There is nothing wrong with setting engineering limits for setup performance.
The whole point of SPC is to detect deviations that are due to assignable causes and to take the appropriate action to bring the process back toward its target. This is EXACTLY what you are doing for each setup.

The use of SPC is intended to prevent 'tampering'; if you know what caused the deviation and you know how to correct it, you are not tampering.

Assignable causes exist setup to setup and within a setup. Because they come from different sources, they are treated separately... keep doing what you are doing, and ignore the 'black-box', 'rubber-stamp' SPC police.
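A minimal sketch of the engineering-limits idea (the window, sample size, and function name are assumptions for illustration, not from the thread): release the run only when the first-article average lands inside a window chosen to preserve capability given the known piece-to-piece spread.

```python
import statistics

# Hypothetical engineering window for the setup mean, chosen (offline) so that
# any accepted setup still leaves ample margin to the spec limits.
SETUP_LO, SETUP_HI = 9.95, 10.05  # assumed values

def setup_accepted(first_articles: list[float]) -> bool:
    """Release the run only if the first-article mean sits in the window."""
    return SETUP_LO <= statistics.mean(first_articles) <= SETUP_HI

print(setup_accepted([10.01, 10.02, 9.99]))   # True: run it
print(setup_accepted([10.08, 10.09, 10.07]))  # False: adjust and re-check
```

Adjusting a setup that falls outside the window is correcting a known assignable cause, not tampering.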
 