Sample Size for Design Validation

TSwilson

Hi All,

I am currently working on product validation testing for a medical device (US/FDA), and we are in the process of figuring out how many devices are needed for each test. All of the validation testing we have to do amounts to making sure that one or another of our product's properties (e.g., shear strength, crush strength, etc.) is greater than our specification or, in cases like the number of particles released during use, less than our specification.

We have already built enough prototypes, samples, etc. that we have a very good idea of the average and “standard deviation” of these properties for our product (some properties clearly do not fall on a normal distribution). The average for each is typically well above our spec (or, for particles, well below). For crush strength, the average is high enough, but the deviation is also quite high, mostly because some products have a very high crush strength; this was expected from the mechanical design, since some interferences can come into play that dramatically improve the crush tolerance. To be concrete, let's say our spec for crush is >=10, the average failure is 14, the standard deviation is 4.2, and our measurement precision is 0.5. (Some units can survive many times our spec, but again, none has ever failed to meet 10; in fact, the lowest crush strength we have encountered was on a device purposely built way out of spec, and it failed at 10.8.)
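For data like the crush strength that clearly aren't normal, one distribution-free option is an attribute (success-run) demonstration: test n units against the spec, accept only if all pass, and claim reliability R with confidence C. A minimal sketch of that calculation, assuming a zero-failure plan (the function name is illustrative, not from the thread):

```python
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """Smallest n such that n consecutive passes (zero failures)
    demonstrate `reliability` at `confidence`: n = ln(1-C)/ln(R)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# 95% confidence of 95% reliability -> 59 units, all of which must pass
print(success_run_n(0.95, 0.95))  # 59
print(success_run_n(0.95, 0.90))  # 29
```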

For other properties, such as shear strength, we again have to validate that the device survives up to at least our specified shear strength; again, we don't care if it's over, just if it's under. For properties like shear strength, our data look like they could be normally distributed. To give sample numbers, our shear spec is >=1.0, and our average shear at failure is 1.5 with a standard deviation of 0.25 and a measurement precision of 0.05.
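For data that do look normal, like the shear strength, a common variables approach is a one-sided tolerance bound: require (average - k*s) >= spec, where k is the exact one-sided tolerance factor from the noncentral t distribution. A sketch, assuming SciPy is available and using the shear numbers above (spec 1.0, mean 1.5, s 0.25):

```python
import math
from scipy import stats

def k_factor(n: int, confidence: float, reliability: float) -> float:
    """Exact one-sided tolerance factor: with `confidence`, at least
    `reliability` of a normal population lies above xbar - k*s."""
    z_p = stats.norm.ppf(reliability)
    nc = z_p * math.sqrt(n)
    return stats.nct.ppf(confidence, df=n - 1, nc=nc) / math.sqrt(n)

# Shear numbers from the post: spec >= 1.0, mean 1.5, s 0.25
margin = (1.5 - 1.0) / 0.25  # 2.0 standard deviations of headroom

# Smallest n whose 95/95 k-factor fits inside that margin
n = 2
while k_factor(n, 0.95, 0.95) > margin:
    n += 1
print(n, round(k_factor(n, 0.95, 0.95), 3))
```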

Lastly, for particles shed by our device, we are well under the spec, and our data do look vaguely normal. Our spec is <=100, and we measure an average of 30 with a standard deviation of 10 and a measurement precision of (about) 3.
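The same tolerance-bound idea applies symmetrically to an upper spec such as the particle count: require (average + k*s) <= spec. With the numbers above, the margin is (100 - 30)/10 = 7 sample standard deviations, so a small sample may already demonstrate 95/95; a sketch checking a few values of n (again assuming SciPy):

```python
import math
from scipy import stats

def k_factor(n, confidence, reliability):
    """One-sided tolerance factor, as in the sketch above."""
    z_p = stats.norm.ppf(reliability)
    return stats.nct.ppf(confidence, df=n - 1, nc=z_p * math.sqrt(n)) / math.sqrt(n)

# Particle numbers from the post: upper spec 100, mean 30, s 10
margin = (100 - 30) / 10  # 7 standard deviations of headroom
for n in (3, 5, 10):
    k = k_factor(n, 0.95, 0.95)
    print(n, round(k, 2), k <= margin)
```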

How would I calculate the required sample size for these situations, all one-sided specs, where one is very likely not normally distributed, one has a lower bound but no upper bound, and the other has an upper bound but no lower bound (down to zero)?

Our previous compliance consultants used a formula very similar to the one Bev D gave in an attachment to the thread entitled “Determining Sample Size for FDA Verification and Validation Activities,” but it's not clear to me that this formula applies to any of these situations. (I am, however, statistically challenged, so please feel free to set me straight.)

Thanks in advance for your help,

-Tom
 

michaelcwang2

Registered
I think there are several steps in general. It seems you have already done some preliminary experiments, so you have a few trial results available.
First, decide which distribution fits your data best.
The next step is to use the appropriate operating characteristic (OC) curve for that distribution. If your data are normal, or can be brought to standard form with a Z transformation, you can express your nominal and tolerance in standard units and work from the normal OC curves.
OC curves for other distributions, such as the lognormal, can also be found on the web, though they are not always publicly available.
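To make the OC-curve step concrete, here is a sketch (an illustration, assuming SciPy) of the OC curve for a one-sided variables plan that accepts when (xbar - LSL)/s >= k; the probability of acceptance as a function of the true fraction nonconforming comes from the noncentral t distribution:

```python
import math
from scipy import stats

def p_accept(p_bad: float, n: int, k: float) -> float:
    """OC curve of a one-sided variables plan: accept if (xbar - LSL)/s >= k.
    A true fraction p_bad below the spec means (mu - LSL)/sigma = z_{1-p_bad};
    sqrt(n)*(xbar - LSL)/s is then noncentral t (df n-1, nc sqrt(n)*z)."""
    z = stats.norm.ppf(1.0 - p_bad)
    return stats.nct.sf(k * math.sqrt(n), df=n - 1, nc=math.sqrt(n) * z)

# Illustrative plan: n = 20 with k = 2.396 (about a 95/95 one-sided factor)
for p in (0.005, 0.01, 0.05, 0.10, 0.20):
    print(f"p = {p:.3f}  P(accept) = {p_accept(p, 20, 2.396):.3f}")
```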
 
TSwilson

Ronen E said: "Sounds like you are conducting design verification, not design validation."

Hi Ronen,

Thanks for the speedy reply. These tests are all portions of tests being performed on devices under one of the relevant accepted FDA guidance documents for performance testing and labeling. As part of the determination of whether or not we meet the user needs, our compliance people are having us use the simulated use requirements from the guidance before some of the testing (and also comparing it to the predicate device). The three specs I mentioned above are some of the quantitative safety endpoints our compliance people gave us to test after simulated misuse of the devices based on risk analysis of the user needs. They have been calling this testing “design validation.” Maybe calling the series of tests "design verification and validation" is more accurate. I will bring this issue (validation/verification) up at the next meeting; thank you for bringing it to my attention. If calling it validation or verification affects the number of parts we should test, that would also be very useful to know.

-Tom
 

Ronen E

Problem Solver
Moderator
Hi Tom,

To be honest, I'm a little confused and somewhat overwhelmed by the description of the situation above (also taking into account your post #1). It is very important to sort out terms and definitions, roles and responsibilities, and a logical process flow if you want to gain clarity.

Design Verification and Design Validation are normally separate activities with different goals and different characteristics. Bundling them together doesn't add clarity, though it might "solve" some org-internal difficulties (I wouldn't know).

It's also important to understand who is the process/activity owner, and according to what methodology they act. Normally it's R&D / D&D / PD / Engineering staff who own Design Verification and Design Validation, with input from "Compliance" (Regulatory Affairs? QA?) staff; and normally they'd follow the Design Control flow (e.g. ISO 13485 s. 7.3 or 21 CFR 820.30). What exact standards / guidance (or parts thereof) to apply and what exact tests to conduct should initially be part of preparing the Design Input and later revisited when preparing the Verification protocol(s) (later, Validation protocols too).

It might be useful to lay down the thought process that guides the various activities so it's easier to understand what stage it actually is and what formal requirements should apply. Currently the terms seem a little mixed-up / mis-sequenced (or at least not that clearly laid out) and I wonder how clear you all are on the process flow and how all the pieces fit together.
 