medtech.panda
Hello!
I am validating a manufacturing line (IQ/OQ/PQ) for a medical device for the first time and am looking for input on my sampling and data analysis techniques; I mainly want to focus on OQ and PQ, since IQ seems straightforward.
OQ:
The requirements we developed are not exclusively quantitative, so I am planning to use a combination of techniques covering both attribute and continuous data. This is a low-risk electronic device, so I am targeting 95% confidence / 90% reliability, which I know requires n=29 samples for my attribute requirements. I do not have prior data I can use for the quantitative requirements, so I am assuming I will need n=30 samples to perform a meaningful statistical analysis. However, these devices are expensive to manufacture, so I am planning to use n=3-5 "golden" devices (we inspect them from our supplier and sign off on them before putting them through our portion of the manufacturing process) and then test each one multiple times to get 29 or 30 total data points to analyze.
This feels like it might be a bit of overkill, since the tests we're running should not really have any variability across repeated runs (we are just flashing firmware and confirming that other components turn on and can communicate), but I am pretty sure we still need a statistically justified sample size?
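For context, my understanding is that the n=29 attribute figure falls out of the binomial zero-failure ("success-run") relationship n >= ln(1-C)/ln(R); a quick sketch of the arithmetic, just my own illustration:

```python
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """Minimum zero-failure sample size: if all n units pass, the data
    demonstrate `reliability` at `confidence` under a binomial
    success-run plan, i.e. n >= ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(success_run_n(0.95, 0.90))  # -> 29 for the 95% confidence / 90% reliability attribute plan
```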
PQ:
From my research, the standard approach seems to be building 3 lots and sampling from each; I am planning to take n=29 or n=30 samples from each lot using the same rationale as in OQ, since my PQ requirements will also be a mix of quantitative and qualitative checks. I was planning a similar data analysis (targeting 95% confidence / 90% reliability), but then I started learning about another parameter, Cpk, that gets referenced in various process validation forums.
Is there a preference for tolerance intervals vs. Cpk in the medical device world? Additionally, Cpk seems to require quantitative data, so I am unsure how it would apply to any qualitative requirements/specifications.
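The way I currently understand the two approaches for quantitative data: a tolerance interval supports a statement like "with 95% confidence, at least 90% of units fall within these limits," while Cpk compares the distance from the process mean to the nearest spec limit against three standard deviations (so it does need variables data). A rough sketch with made-up measurements and hypothetical spec limits of 9.0-11.0, using Howe's approximation for the two-sided k-factor (my assumption, not pulled from any standard):

```python
import numpy as np
from scipy import stats

def two_sided_tolerance_k(n: int, coverage: float = 0.90, confidence: float = 0.95) -> float:
    """Approximate two-sided normal tolerance-interval k-factor (Howe's method).
    mean +/- k*s is claimed to cover `coverage` of the population with `confidence`."""
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, n - 1)  # lower-tail chi-square quantile
    return float(np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2))

def cpk(data, lsl, usl):
    """Capability index: distance from mean to nearest spec limit over 3 sigma."""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical example: 30 simulated measurements against spec limits 9.0 - 11.0
rng = np.random.default_rng(0)
x = rng.normal(10.0, 0.25, size=30)
k = two_sided_tolerance_k(len(x))
lo, hi = np.mean(x) - k * np.std(x, ddof=1), np.mean(x) + k * np.std(x, ddof=1)
print(f"95/90 tolerance interval: ({lo:.2f}, {hi:.2f}) vs spec (9.0, 11.0)")
print(f"Cpk: {cpk(x, 9.0, 11.0):.2f}")
```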
Any and all feedback on my strategy beyond these specific questions is more than welcome; it was developed through online research, including a lot of the forums here. Thank you!