Continuous monitoring of validated process – sample sizes

Quality Runner

QE Manager by day, Ultra runner by night
I am working on a procedure for the ongoing monitoring of validated processes for a Class III medical device. I found a lot of good advice online for validating these processes, but not much on sample sizes for continuously monitoring them once validated. Background:
  • It is continuous production (no lot size), and monthly output averages around 400 units.
  • I can use an AQL of 2.5 with special inspection level S-4 for high-risk characteristics, then ANSI/ASQ Z1.9 or Z1.4 for sampling and acceptance. That gives a reasonable 5-8 samples per month for each process.
  • However, because of the number of validated processes that must be monitored, this adds up to 40+ pieces per month of destructive testing, which is not feasible (scrapping 10% of your expensive parts).
  • There isn't much historical data to use Cpk/Ppk to justify reduced sampling.
Considering the factors above:
  • How do you calculate and justify the sample size and frequency for destructive testing when monitoring a validated process? (One common approach is sketched after this list.)
  • Any tips on reducing sample sizes, with a sound statistical justification?
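For context, one common way to put numbers on the first question is the zero-failure "success-run" relationship, n = ln(1 − C) / ln(R). A minimal sketch in Python; the confidence/reliability targets below are illustrative assumptions that would normally come from the device risk analysis:

```python
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """Zero-failure (success-run) sample size: smallest n such that
    testing n units with no failures demonstrates `reliability`
    with `confidence`, i.e. reliability**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Illustrative targets only -- the C/R pair should be driven by the
# severity of harm (e.g. from the risk file of a Class III device).
for conf, rel in [(0.95, 0.95), (0.95, 0.90), (0.90, 0.90)]:
    print(f"{conf:.0%}/{rel:.0%} -> n = {success_run_n(conf, rel)}")
# 95%/95% -> n = 59, 95%/90% -> n = 29, 90%/90% -> n = 22
```

Frequency is then usually a separate, risk-based decision, e.g. spreading n across a defined period tied to output volume, rather than something the standard computes for you.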
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
It sounds like the key is how to reduce the need for destructive sampling. An ideal answer would be some form of measurement (variables) data, which requires smaller sample sizes than go/no-go (attribute) sampling.
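To put rough numbers on that point (a sketch with assumed values, not a prescribed plan): a 95%-confidence/95%-reliability claim needs 59 zero-failure attribute samples, while the same claim from measurement data can be made on a much smaller n using a normal one-sided tolerance limit:

```python
import math
from scipy.stats import nct, norm

def tolerance_k(n: int, confidence: float, coverage: float) -> float:
    """One-sided normal tolerance factor k: with `confidence`, at least
    `coverage` of the population lies above xbar - k*s (lower limit)."""
    delta = norm.ppf(coverage) * math.sqrt(n)          # noncentrality
    return nct.ppf(confidence, df=n - 1, nc=delta) / math.sqrt(n)

# 95% confidence / 95% coverage: a go/no-go plan needs 59 pieces with
# zero failures; a variables plan needs only that xbar - k*s clears
# the spec limit, e.g. with n = 10:
k = tolerance_k(n=10, confidence=0.95, coverage=0.95)
print(f"n=10 -> k = {k:.2f}")   # ~2.91: accept if xbar - 2.91*s >= LSL
```

The normality assumption behind the tolerance limit should be verified before leaning on the smaller sample size.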

There are some articles out there that come up when you google "reducing destructive sampling".

How to Eliminate Destructive Testing (sciemetric.com)

Dealing with Trade-Offs in Destructive Sampling Designs for Occupancy Surveys | PLOS ONE

I would suggest a good starting point is to look at any failures that came up in past destructive samples or, if there were none, to review the criteria for what constitutes a failure in a destructive test. Is there anything you can measure or non-destructively test during fabrication, while you still have access to the raw materials? Or a way for the parts supplier to test early in the process?
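If such a surrogate measurement exists, one toy illustration of how you might begin to justify it (the paired data below are entirely made up) is to show a strong, stable correlation between the non-destructive reading and the destructive result on the same parts:

```python
# Hypothetical paired data: a non-destructive in-process measurement
# vs. the destructive test result on the same parts. A strong, stable
# correlation is one piece of evidence for substituting the NDT reading.
from scipy.stats import pearsonr

ndt  = [4.1, 4.5, 3.9, 4.8, 4.2, 4.6, 4.0, 4.7, 4.3, 4.4]            # gauge reading
pull = [41.0, 45.5, 38.8, 48.2, 42.1, 46.0, 40.3, 47.1, 43.0, 44.2]  # destructive

r, p = pearsonr(ndt, pull)
print(f"r = {r:.3f}, p = {p:.3g}")  # r near 1 with a small p supports the surrogate
```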

Short of all this, a cost-benefit analysis may be feasible: the cost of enduring failures versus the cost of destructive testing. This may be tough if a failure means something adverse to a user's health or life. Without knowing the specific details of what constitutes a failure and its risks, it is hard to offer much further advice.
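As a crude illustration of that trade-off (every number below is hypothetical, and for a Class III device the severity of harm usually dominates any simple dollar comparison):

```python
# Hypothetical inputs -- the real figures come from accounting and the
# risk file; patient-safety consequences may make this moot.
part_cost = 250.0          # cost of one destroyed part ($)
failure_cost = 50_000.0    # expected cost of one escaped failure ($)
monthly_output = 400

def expected_monthly_cost(n_destroyed: int, p_escape: float) -> float:
    """Scrap cost of testing plus expected cost of escapes, per month."""
    return n_destroyed * part_cost + p_escape * monthly_output * failure_cost

# Larger samples cost more in scrap but presumably catch drift sooner,
# lowering the per-unit escape probability:
for n, p in [(40, 0.0001), (8, 0.0005), (2, 0.002)]:
    print(f"n={n:>2}, p_escape={p}: ${expected_monthly_cost(n, p):,.0f}")
```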
 

somashekar

Leader
Admin
Considering the factors above:
  • How do you calculate and justify the sample size and frequency for destructive testing when monitoring a validated process?
  • Any tips on reducing sample sizes, with a sound statistical justification?
I may sound a bit different here, but if you have validated the process sufficiently well in the PQ stage, with appropriate statistical techniques and a rationale for the sample size, such that you can state the process is validated to achieve planned results consistently, then why do you need to keep doing destructive testing on the monthly output going forward?
The purpose of validating a process is to assure that the process so operated will be in control and the resulting output will meet the product requirements.
You may address when and how you will revalidate this process, and you could include elapsed time or output-volume thresholds along with other criteria for revalidation.
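A minimal sketch of how such revalidation criteria might be expressed (the thresholds below are placeholders, not recommendations):

```python
# Hypothetical revalidation-trigger check reflecting the criteria
# suggested above: elapsed time, cumulative output, or change events.
from dataclasses import dataclass

@dataclass
class ProcessState:
    months_since_validation: int
    units_since_validation: int
    change_event: bool  # e.g. new tooling, material, or major repair

def revalidation_due(s: ProcessState,
                     max_months: int = 24,
                     max_units: int = 10_000) -> bool:
    """True when any agreed revalidation criterion is met."""
    return (s.months_since_validation >= max_months
            or s.units_since_validation >= max_units
            or s.change_event)

print(revalidation_due(ProcessState(25, 8_000, False)))  # True: time limit hit
```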
 

Bev D

Heretical Statistician
Leader
Super Moderator
Typically, destruct-testing sample sizes are not 'statistically' determined. As Somashekar said, we validate "special processes" because acceptance sampling is very onerous. Depending on the variation you see under stable conditions, you could use 1-3 samples at a frequency that corresponds with natural process change points and plot the data on a control chart. The use of a control chart and its much smaller sample sizes is itself statistically valid.
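A minimal sketch of what that can look like (illustrative data only): an individuals & moving-range (ImR) chart on one destructive result per natural process change point, with limits estimated from the average moving range:

```python
# ImR chart limits from small destructive samples -- illustrative data,
# one measurement per process change point.
data = [12.1, 11.8, 12.4, 12.0, 12.3, 11.9, 12.2, 12.5, 12.0, 12.1]

mr = [abs(b - a) for a, b in zip(data, data[1:])]   # moving ranges (size 2)
xbar = sum(data) / len(data)
mr_bar = sum(mr) / len(mr)

sigma_hat = mr_bar / 1.128        # d2 = 1.128 for subgroups of size 2
ucl, lcl = xbar + 3 * sigma_hat, xbar - 3 * sigma_hat
print(f"center={xbar:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
# Points outside the limits (or non-random patterns) trigger
# investigation -- the chart, not an AQL table, justifies the small n.
```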
 