Sample size for process validation

shimonv

Hi all,
We need to do process validation for a console (medical device) and I have an issue with the sample size.

According to our Statistical methods WI, the minimum sample size for variable data should be 30. This seems reasonable for V&V but quite high for PV.

It's a small startup company with a very small installed base. How can I justify doing process validation with 5-10 units?

I'd appreciate hearing about your experience on this matter.

Thanks,
Shimon
 
You are likely to get few responses right now, as it is Christmas Eve and most of the seasoned members are not online - they are spending time with family.

A statistical methods WI? I would strongly advise you to do a LOT of research here and update that WI. You also need to understand what your regulatory body will require and what kind of statistical review you will be under. If any.

But to answer your question we would need to know what kind of processes you are validating and why. Are these ‘special’ processes where the output cannot be verified (inspected/tested) 100% because the inspection/testing would be destructive or prohibitively expensive? Or are these simple processes, like mere assembly of parts? Is special alignment required?

What confidence do you want? What defect rate is OK? What defect rate is not OK?

To be honest, there are ways to use reduced sampling, but these are justified beforehand, not after you’ve decided what sample size you can afford.
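
As a rough illustration of how the answers to those questions drive the numbers (this is my own sketch, not something from the post above, and it assumes a zero-failure "success-run" binomial demonstration), the confidence you want and the defect rate you can tolerate translate directly into a minimum sample size:

```python
# Minimal sketch, not from the post above: a zero-failure "success-run" binomial
# demonstration, where n consecutive passes support a claim that the true
# defect rate is at most the stated maximum, at the stated confidence.
import math

def success_run_n(confidence: float, max_defect_rate: float) -> int:
    """Smallest n such that n passes in a row demonstrate the claim."""
    reliability = 1.0 - max_defect_rate
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

for conf, p_def in [(0.95, 0.05), (0.95, 0.10), (0.90, 0.10)]:
    n = success_run_n(conf, p_def)
    print(f"{conf:.0%} confidence, {p_def:.0%} max defect rate -> n = {n}")
# 95%/5% -> 59, 95%/10% -> 29, 90%/10% -> 22
```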
 
Thanks Bev, I especially appreciate your input during this time.

I agree with all that you wrote. An important point to mention: every console is checked at the end of production (acceptance / final test). I guess this can be the basis to claim that PV is not necessary? However, and not to shoot myself in the foot, how robust should the acceptance test be to determine that the process output is fully verified?

Shimon
 
If every function is tested and assembly items inspected at final acceptance, a ‘concurrent’ “PQ” is often acceptable. In other words, you would test/inspect each device and ship it if it passes. The processes would then be considered validated when 30 (if that’s your number) devices in a row pass. And corrective action is taken for any failures/defects found during this assessment.
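
As a hedged aside (the post doesn't state a statistical model, but a zero-failure success-run binomial calculation is one common way to read a concurrent PQ acceptance run), here is what 30 consecutive passing devices would actually demonstrate:

```python
# Sketch only - the model is assumed, not stated in the post. Under a
# zero-failure "success-run" binomial view, this is what a run of 30 passing
# devices demonstrates about the true pass rate.
def demonstrated_confidence(n_passes: int, reliability: float) -> float:
    """Confidence that the true pass rate is at least `reliability`,
    given n_passes consecutive passes and no failures."""
    return 1.0 - reliability ** n_passes

for rel in (0.90, 0.95, 0.99):
    c = demonstrated_confidence(30, rel)
    print(f"30 passes vs. {rel:.0%} reliability -> {c:.1%} confidence")
# roughly 95.8%, 78.5% and 26.0% confidence respectively
```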

The one concern here is for intermittent failures, most of which should have been caught during the device design validation. But it is possible for intermittent failures to arise from processes (still talking only about non-special processes), and this may actually be the source of failures during “PQ” testing. This is why “OQ” or ‘life’ testing is so important even for process validation…but it's something that many regulatory - and statistical - reviewers, or even internal Quality/R&D people, don't think about…
 
Without knowing the details of *what* the validation is trying to demonstrate, it is practically impossible to make any directly actionable recommendations. I can offer a few comments:

The use of "variable data" would have to be well-motivated. If the process outcomes are more binary "pass/no pass", analysis of attribute data is probably more appropriate.... and in such cases where the decision on pass/no-pass is based on a test, it is possible that a test method validation is most appropriate. The sample sizes could be as small as 11 for a 95% confidence 95% tolerance hypothesis test... if the TMV is constructed as one.

If you are convinced variable data is the way to go, the lower limit on sample size will most certainly be driven by convincing someone (yourself included) that the distribution of the data collected is well described by the normal distribution. Other distributions are possible of course, depending on the thing being measured, but most common tests assume/require a normal distribution. A sample size of 15 is the typical floor for assessing normality, although a literature review of Anderson-Darling or K-S tests will reveal that "fewer than 8" is the point at which you can't trust such tests. (This is from personal memory.)
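
For what it's worth, here is a minimal sketch (illustrative only, using simulated data - none of the numbers come from the thread) of the kind of normality check being described, along with the caveat about small samples:

```python
# Illustrative sketch with simulated data (n = 15), showing a normality check
# before leaning on normal-based tolerance limits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=0.5, size=15)   # stand-in measurement data

ad = stats.anderson(sample, dist="norm")            # Anderson-Darling
w, p_shapiro = stats.shapiro(sample)                # Shapiro-Wilk

print("Anderson-Darling statistic:", round(ad.statistic, 3))
print("5% critical value:", ad.critical_values[2])  # order follows ad.significance_level
print("Shapiro-Wilk p-value:", round(p_shapiro, 3))
# Caveat: at sample sizes this small, "failing to reject normality" is weak
# evidence - the tests simply don't have much power.
```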

Variable data for study designs are probably best leveraged when (a) there exists some historical data about the variable and its relationship to the process (or design) boundary AND (b) there is a reasonable expectation that the data being collected will be "far enough" away (from the boundaries) to satisfy the necessary "k-value". IIRC a 95/95 requirement with a sample size of 15 ends up with a (1-sided) k-value of over 2.5, and the 2-sided k-value will be almost 3. [Assuming a normal distribution!]
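
If it helps, the k-values recalled above can be reproduced with the standard normal tolerance-factor formulas (this sketch assumes the data really are normal; the formulas are textbook ones, not taken from the post itself):

```python
# Sketch of the standard normal tolerance-factor calculations.
import math
from scipy import stats

def k_one_sided(n: int, coverage: float = 0.95, confidence: float = 0.95) -> float:
    """Exact one-sided tolerance factor via the noncentral t distribution."""
    delta = stats.norm.ppf(coverage) * math.sqrt(n)
    return stats.nct.ppf(confidence, df=n - 1, nc=delta) / math.sqrt(n)

def k_two_sided(n: int, coverage: float = 0.95, confidence: float = 0.95) -> float:
    """Two-sided tolerance factor using Howe's approximation."""
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, df=n - 1)
    return z * math.sqrt((n - 1) * (1 + 1 / n) / chi2)

for n in (15, 30, 59):
    print(n, round(k_one_sided(n), 2), round(k_two_sided(n), 2))
# n = 15 gives roughly k1 ~ 2.57 and k2 ~ 2.95; larger n pulls both factors down
```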

I agree with all that you wrote. An important point to mention: every console is checked at the end of production (acceptance / final test). I guess this can be the basis to claim that PV is not necessary? However, and not to shoot myself in the foot, how robust should the acceptance test be to determine that the process output is fully verified?

If you aren't relying on a process to control any variability that can lead to non-conforming outputs, you don't need to worry so much about process validation. This approach could be triggering for some third parties, so there are two recommended options:
  1. 100% verification of process outputs
  2. 'process validation' that demonstrates that the process doesn't introduce variability in the outputs.
The latter is (I think) somewhat subjective... but I've seen many cases where a third party thought they were being clever (or playing 'gotcha') by pointing to some process step (let's say: "fastening") and asking to see "the validation", but we had an analysis that showed we actually validated whatever the largest source of variation was (the vital few), and did minimal testing to show that variation in the trivial many process steps couldn't contribute to non-conforming escapes.
 
Thank you @Tidge,
I appreciate your input; it's clear you have a lot of experience on this matter.

Since the consoles are capital equipment and the process outcomes are more likely to be binary, it seems to me that it's better to aim for 100% verification of process outputs.
You wrote: "but we had an analysis that showed we actually validated whatever the largest source of variation was (the vital few), and did minimal testing to show that variation in the trivial many process steps couldn't contribute to non-conforming escapes."

Can you share some tips on how you did that?

Thanks,
Shimon
 
"but we had an analysis that showed we actually validated whatever the largest source of variation was (vital many), and did minimal testing to show that variation in the trivial many process steps couldn't contribute to non-conforming escapes."
This is likely to be weirdly specific:

We had a product that included PCBA that we assembled and wave-soldered. Those PCBA ended up in the final assembly, and the final assembly included (manual) fastening. There was testing along the way, including testing of the assembled PCBA. Practically speaking... functional tests of the PCBA should have been sufficient, but the manager was on something like a "validation kick". We ended up challenging/verifying (via OQ) the established variance of the wave solder process... and we never got a non-conforming output. Had we started blindly... the OQ would have been used to establish the allowed range of process parameters, but this process was pretty tolerant of extremely wide (possible) parameters... we ended up just verifying this fact.
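
Purely as a hypothetical illustration (the post doesn't describe how the OQ output data were analyzed), one way to summarize an OQ challenge run at worst-case settings is a one-sided capability index against the relevant spec limit; the spec value and data below are invented placeholders:

```python
# Hypothetical sketch - not the analysis actually used in the case described.
import numpy as np

def cpk(data, lsl=None, usl=None):
    """Smallest one-sided capability index against whichever limits are given."""
    mean, sd = np.mean(data), np.std(data, ddof=1)
    indices = []
    if usl is not None:
        indices.append((usl - mean) / (3 * sd))
    if lsl is not None:
        indices.append((mean - lsl) / (3 * sd))
    return min(indices)

# e.g. a strength-type output measured at the worst-case process corner,
# with a lower spec limit only (all values simulated for illustration):
rng = np.random.default_rng(1)
challenge_run = rng.normal(loc=48.0, scale=0.5, size=10)
print("Cpk at challenge settings:", round(cpk(challenge_run, lsl=40.0), 2))
```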

The fastening steps got called into question by a third party ("why don't you have data about the (lack of) torque specs?"), but this person really had no mechanical experience(*1)... the fasteners had no purpose beyond holding things together... which wasn't a "critical" performance feature and was obvious to see if "done wrong". (This was documented in the Design FMEA.) It was the electrical performance that was most likely to be compromised by the PCBA (process validation showed we couldn't "improperly solder" them, testing verified functionality of each one) or in-house cable assemblies (which were all 100% tested). We weren't having non-conformances, so it isn't as if the third party was helping uncover weaknesses... as we had accounted for all the things that were actually likely to "go out of bounds" during assembly.

(*1) We did have to explain to the third party some of the mechanics behind the engineering discipline of "fastening"... I don't know whether explicitly including such things in the DFMEA (or an accompanying design review) would have been sufficient, as that guy was playing a common variant of "gotcha"/"let's see how they react to this question." It helped that we had folks who knew a thing or two about fasteners, but I've had similar experiences with auditors "amazed" by basic electrical engineering (e.g. "resistors as voltage dividers") or software concepts (e.g. "binary").
 
We still need more details about the process you are validating - as Bev D has mentioned, whether or not it is a special process - and also the purpose of your validation. Process validation may focus on whether certain process parameters can repeatedly produce consistent product, work-in-progress, or components.

If you are talking about a certain characteristic of a finished product that you would test on each final console, that may not require a PV to validate it. It may instead be covered by testing a final console in the corresponding type test for that device.

I totally understand the economic burden of producing 30 units simply for PV, but the key may be to think beyond your current constraints. Strictly speaking, in statistical terms 5-10 units may not be robust enough for most statistical models.

If you can provide more information, maybe there is a way to help you.

One more question: is your company preparing for a QMS audit or something similar?


 
Brilliant and much better than AI :)
The only challenge with this approach is that you need a good team of QA and Engineering working together, mapping out the entire manufacturing process to make sure it's bullet ("gotcha") proof.

Thanks again!
 
The only challenge with this approach is that you need a good team of QA and Engineering working together, mapping out the entire manufacturing process...
From the quality/project management side, I suggest the process mapping be the initial part of the Process FMEA construction. A Process FMEA (or a comprehensive set of Process FMEAs) is the most appropriate place to document the need for (and extent of) process validations, along with direct references to any that are performed.

There is always a chance that the team working on the PFMEA will overlook something(*1), but having a PFMEA makes it hard for third parties to claim that a company didn't consider risk to patients/users when deciding what elements of a production process needed to be validated (and at what level). If the production process doesn't rely on "non-verifiable" outputs (I prefer that term to "special" outputs) like sterilization, bag sealing, welding, etc. for safety... process validation (as a concept) is IMO primarily just establishing the effectiveness of the production process to minimize production delays, scrap, and rework. This is important, but it isn't directly related to safety.

(*1) The business' robust quality feedback mechanisms (NCR, CA/PA, Complaints Handling) will provide clues if something was missed. We don't have to rely on brainy third-parties to expose a missing (or incomplete) process validation... assuming that the process team has a basic awareness of verification v. validation.
 