Measuring Capability of Process with Multiple Specifications

Morlock

Involved In Discussions
Hey all. I work for a contract manufacturer; we apply coatings to medical devices. We're in the process of moving our facility, and as part of the Validation Master Plan I'd like to include a process capability comparison (Cpk or Ppk). We're going with a "Like-for-Like" validation, and I am under the impression that comparing process capabilities is an efficient and effective way of establishing whether something is indeed like-for-like (if this impression is wrong, or if there is a better way, I'm all ears!).

The issue is that, as a contract manufacturer, we deal with many different customers and products, each with their own specifications. I have capability charts for each customer and product for our current process at our current location, but I don't feel they will really help with the move unless we perform a number of runs of each customer's product at the new location and generate new capability charts for each one. To mitigate this, we are planning to perform validation runs on both the old and new processes using generic parts, then compare the output data from those runs. If a particular client then wants a product-specific validation/process capability comparison, we can do that as an addendum after the generic runs are complete.

Does this seem like a GOOD approach, and are Cpk or Ppk the right metrics to use? Might there be a different approach for comparing an overall process transfer that contains a number of smaller, but similar, subprocesses (same general process, different settings and parameters)? What else might you suggest (and why)? Thanks all!
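For concreteness, the kind of before/after comparison I have in mind would look roughly like the sketch below. The spec limits, sample sizes, and data are made up purely for illustration, not our actual numbers:

Code:
import numpy as np

def ppk(data, lsl, usl):
    """Overall (long-term) capability index against one set of spec limits."""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Illustrative coating-thickness values (microns) from generic-part runs
rng = np.random.default_rng(1)
old_process = rng.normal(loc=10.00, scale=0.30, size=60)  # current location
new_process = rng.normal(loc=10.05, scale=0.32, size=60)  # new location

lsl, usl = 9.0, 11.0  # hypothetical spec limits for the generic part

print(f"Ppk old: {ppk(old_process, lsl, usl):.2f}")
print(f"Ppk new: {ppk(new_process, lsl, usl):.2f}")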
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
When faced with a variety of requirements for the same process (assuming you can categorize down to truly "the same" processes), showing capability to the tightest tolerance is generally a safe approach. Beyond that, you can apply the wider tolerances to the same data and recalculate the capability. Another approach is to show A/B (before/after) results on sample parts from each "same process" family, to let the customers feel comfortable that the process has not degraded from the move. As for whether Ppk is an appropriate measure: it is if your variation is a normal distribution with random, independent variation. It might not be if your process changes as a function of time, such as plating between adds.
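To show the mechanics of the recalculation (made-up numbers; your data and tolerances will differ), here is a rough sketch: the same data set evaluated against the tightest tolerance first, then against the wider ones:

Code:
import numpy as np

def ppk(data, lsl, usl):
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# One data set from a "same process" family (illustrative values only)
rng = np.random.default_rng(7)
data = rng.normal(loc=25.0, scale=0.5, size=50)

# Tightest customer tolerance first, then progressively wider ones
for lsl, usl in [(24.0, 26.0), (23.5, 26.5), (23.0, 27.0)]:
    print(f"LSL={lsl}, USL={usl} -> Ppk = {ppk(data, lsl, usl):.2f}")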
 

Morlock

Involved In Discussions
First off, what's wrong with X-bar/R?! :) A discussion for another time...

bobdoering said:
"Another approach is to show A/B (before/after) results on sample parts from each 'same process' family, to let the customers feel comfortable that the process has not degraded from the move."

That is essentially what we're going after with the "Like for Like", comparing the outputs from the old (currently-validated) process and the new process, using the same inputs.

So far, the data shows a relatively normal distribution with random, independent variation, so maybe I'll look at Ppk instead of Cpk...

Thanks!
 

Miner

Forum Moderator
Leader
Admin
I would approach it in a different manner. While your customers have many different specifications, you have a limited number of processes and characteristics. Rather than try to show no change on the numerous capability indices, show no change in the short/long term process variation.
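As a rough sketch of what I mean (illustrative numbers only, and assuming you collect rational subgroups the same way at both sites): estimate the within-subgroup (short-term) and overall (long-term) standard deviations before and after the move, and compare those directly instead of the indices:

Code:
import numpy as np

def short_and_long_term_sigma(subgroups):
    """subgroups: 2-D array, one row per rational subgroup.
    Short-term sigma from the pooled within-subgroup variance,
    long-term sigma from the overall standard deviation."""
    short_term = np.sqrt(np.mean(np.var(subgroups, axis=1, ddof=1)))
    long_term = np.std(subgroups, ddof=1)
    return short_term, long_term

# Illustrative data: 20 subgroups of 5 from each location (not real measurements)
rng = np.random.default_rng(3)
old_site = rng.normal(50.0, 0.8, size=(20, 5))
new_site = rng.normal(50.1, 0.8, size=(20, 5))

for label, grp in (("old site", old_site), ("new site", new_site)):
    st, lt = short_and_long_term_sigma(grp)
    print(f"{label}: short-term sigma = {st:.3f}, long-term sigma = {lt:.3f}")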
 

Morlock

Involved In Discussions
Miner said:
"I would approach it in a different manner. While your customers have many different specifications, you have a limited number of processes and characteristics. Rather than try to show no change on the numerous capability indices, show no change in the short/long term process variation."

To fill in some gaps: our company applies coatings to medical devices. Different device substrates and different coating performance requirements mean different coating process profiles and different release specifications (each with their own capability indices, where appropriate). Our process is more or less the same across the different units, but there are critical profile parameters (coating speed, coating length, dry temp/time, etc.) that change depending on the unit being coated.

With this information, how might you suggest "showing no change in the short/long term process variation", given that there is much variation, by design, between customers?
 

Miner

Forum Moderator
Leader
Admin
There are several approaches you may consider: 1) evaluate the highest-volume cases along with the most stringent cases; 2) use a fractional factorial or definitive screening design of the profile parameters, then select the existing cases that most closely match the design points and evaluate those. This would give you a good cross-sectional representation of your process.
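As a rough illustration of option 2 (hypothetical factors and levels, yours will differ), a 2^(4-1) half-fraction of four two-level profile parameters gives you 8 representative setting combinations to cover instead of all 16:

Code:
import itertools

# Hypothetical two-level profile parameters (-1 = low, +1 = high)
factors = ["coating_speed", "coating_length", "dry_temp", "dry_time"]

# 2^(4-1) half-fraction: run the full 2^3 design in the first three factors
# and set the fourth from the defining relation D = ABC
runs = []
for a, b, c in itertools.product((-1, 1), repeat=3):
    runs.append((a, b, c, a * b * c))

print("\t".join(factors))
for run in runs:
    print("\t".join(f"{level:+d}" for level in run))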
 

rmundroff

Starting to get Involved
Another approach you may consider:
Take a large enough sample of like parts, divide it in half, and run half (A) through the old process and half (B) through the new process.

Using the worst-case specs and your existing Ppk, figure out the mean shift that would drop you below your customer-required Ppk.
Using your existing sigma (from the Ppk), you can pick the power you want as well as determine the sample size for BOTH the before AND after samples.

Then do an F test for variances to show the variances are not different, and a two-sample test on the means to show the means are not different. With these you can state, at a certain confidence level, that the two processes are not statistically different.
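Something along these lines (example numbers only; plug in your own sigma, mean shift, alpha, and power):

Code:
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Example numbers only -- substitute your own sigma and the mean shift
# (derived from the worst-case spec and required Ppk) you need to detect
sigma = 0.5        # existing process sigma (from the Ppk study)
mean_shift = 0.4   # shift that would drop you below the required Ppk
alpha, power = 0.05, 0.90

# Sample size per group (before and after) to detect that shift
n = int(np.ceil(TTestIndPower().solve_power(effect_size=mean_shift / sigma,
                                            alpha=alpha, power=power)))
print(f"n per group: {n}")

# Illustrative before/after measurements
rng = np.random.default_rng(0)
before = rng.normal(10.0, sigma, n)
after = rng.normal(10.0, sigma, n)

# F test for equal variances (assumes approximately normal data)
F = np.var(before, ddof=1) / np.var(after, ddof=1)
p_var = 2 * min(stats.f.cdf(F, n - 1, n - 1), stats.f.sf(F, n - 1, n - 1))

# Two-sample t test for equal means
t_stat, p_mean = stats.ttest_ind(before, after)

print(f"F test p-value: {p_var:.3f}")
print(f"t test p-value: {p_mean:.3f}")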
 