Process Validation Sampling and Data Analysis Techniques (medical devices)

The circumstance described - the inferior firmware test - sounds more like a problem with test method validation than a problem with process validation.
 
You should be validating that the firmware is complete and correct. This is rarely, if ever, fully tested in an acceptance testing plan: the firmware can be quite large, and it is typically running a complex machine that is itself not fully tested, particularly for intermittent problems. I have decades of experience with this. (One complaint I have heard more times than I can count comes from software developers who insist they can’t possibly validate an entire software version as it is ‘just too large’. They just let the Customer do the ‘testing’ for them…)

Similarly, reliability failures are not a test method validation issue; they are due to a lack of reliability validation itself, and to a lack of real specifications derived from appropriate experimental designs.

Most acceptance sampling is time-zero testing only - and a single run of a function, not multiple runs to detect intermittent failures or early reliability failures.
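To put a rough number on why a single time-zero run misses intermittent faults: if a fault shows up independently with some probability per run, the run count needed to see it at least once follows directly from the binomial. A minimal sketch in Python (the 5% fault rate and 95% confidence are illustrative, not from this thread):

import math

def runs_needed(p_fault: float, confidence: float) -> int:
    """Smallest number of independent runs n such that an intermittent
    fault occurring with per-run probability p_fault is observed at
    least once with the given confidence:
    1 - (1 - p_fault)**n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_fault))

# A fault appearing in 5% of runs needs 59 runs for 95% confidence of
# being seen at least once; a single run catches it only 5% of the time.
print(runs_needed(0.05, 0.95))  # -> 59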

And my larger concern here is that if the OP works for an assembly-only house that just mounts a PC board and flashes the firmware, then they don’t have the ability to perform an adequate validation. They can only perform a basic MSA on their supplied parts, not on the actual product - that would belong to the Customer.

The key here is that it is often sufficient to validate a single device for every change or new ‘product’. (This is similar to the concept of validating under worst-case conditions to reduce sample size.) A robust risk assessment focused on severity (not probability, as that is nothing more than a self-serving guess) applied to a single device is usually sufficient. It may not be necessary to test every function, based on the risk assessment. I don’t know what else the OP has to validate, so I can make no other recommendations regarding sample size.
 
With respect: These quality complaints are scattered over a wide variety of areas that are not best addressed by process validation.

It is foolish to repeat design verification activities in production. If production processes can introduce specific failure modes, and/or if there is variation introduced by production processes that can lead to non-conforming products, those are things that need to be addressed in production. Retesting in production that design requirements are met is otherwise a waste of time and effort, and points to poor design verification.

When a production process can introduce a failure mode... implement controls to reduce the occurrence of the failure mode and/or improve the detectability of such failure modes... this is test method validation(*1). If the process of just making the product can have variability that leads to non-conformances, then the process is validated to understand/reduce that variation.

Similarly, reliability failures are not a test method validation issue; they are due to a lack of reliability validation itself, and to a lack of real specifications derived from appropriate experimental designs.

^This^ is a real concern, but reliability issues point to a defect in design verification, which cannot be adequately addressed in manufacturing - and manufacturing is where process validation occurs. I've seen companies ignore quality in their designs and push it to manufacturing. Those companies suffer; some go out of business.

(*1) The checksum verifying a memory flash/copy is a test method.
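For concreteness, a minimal sketch of what that test method amounts to, in Python (CRC32 and the sample bytes are illustrative; real flashing tools use their own checksum or a direct read-back compare):

import zlib

def verify_flash(source_image: bytes, read_back: bytes) -> bool:
    """Compare a CRC32 of the image we intended to write against a
    read-back of the device memory. A mismatch flags a failed flash
    (a process escape), independent of whether the firmware design
    itself is correct."""
    return zlib.crc32(source_image) == zlib.crc32(read_back)

image = b"\x00\x01\x02\x03"    # stand-in for the firmware binary
print(verify_flash(image, bytes(image)))         # True: clean read-back
print(verify_flash(image, b"\x00\x01\x02\xff"))  # False: corrupted byte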
 
:deadhorse:
Flashing is a process.
A failed flash is not a coding error or a design ‘flaw’. It cannot be detected by design verification or design validation.
Process validation is not in any way limited to a myopic, insular, parochial or dogmatic interpretation of ‘process validation’ within the medical device industry.

similar (definition): resembling without being identical
 
:deadhorse:
Flashing is a process.

What are the process-related variables that have margins to be studied during a "process validation"? It's not like the operator can adjust the wall voltage, or has a choice between "fast flash" and "slow flash".

Yes, we can construct a "process flow", but there aren't any variables to challenge for process validation. At best this is a tool validation.

A link to the de facto resource for process validation for medical device manufacturing (IMDRF)
 
You can validate a process like flashing without directly manipulating “input variables”. Physics matters.

Everything doesn’t fit into a single narrow box.
 
You can validate a process like flashing without directly manipulating “input variables”. Physics matters.
Once the flashing tool is established(*1) to be working, there is nothing on the manufacturing floor to be subjected to process validation. In terms of medical device process validation, you need to be able to point to the "physics" that (a) are part of the process and (b) can reasonably be expected to introduce variability, so that controls can be implemented and demonstrated to be effective in reducing/eliminating that variability.

(*1) This establishment is a tool qualification. Every flashing tool I've ever used has involved a checksum of the written memory... which is 100% verification.

Asking for OQs and PQs for something that is 100% verified is a waste of time and effort. If we need a statistical sample size on the ability of the flash device to (a) do what it is supposed to and (b) detect when something goes wrong... a simple hypothesis test is all that is needed to establish such a thing. This isn't complicated, and I'm not ignoring "physics".
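To make that hypothesis test concrete: the usual zero-failure "success run" form in device validation gives the attribute sample size n = ln(1 - C) / ln(R) for claiming reliability R at confidence C. A minimal Python sketch (the 99%/95% figures are illustrative, not from this thread):

import math

def success_run_n(reliability: float, confidence: float) -> int:
    """Zero-failure attribute sample size: the smallest n such that n
    consecutive passes support the stated reliability at the stated
    confidence (n >= ln(1 - confidence) / ln(reliability))."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Claiming 99% reliability at 95% confidence takes 299 consecutive passes.
print(success_run_n(0.99, 0.95))  # -> 299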
 