Determining Sample Size for Design Verification and Design Validation

edisoar

What is the most common process for identifying sample sizes for Design Verification / Design Validation activities within the Medical Device Industry? I have seen some references to AQL which I believe is incorrect. I have a suite of testing to complete and I want to identify how many pieces I need for each test. Any help would be appreciated.
 
edisoar

Re: Sample size for Design Verification / Design Validation

To further clarify what I am looking for: there is a combination of variable and attribute data that I will be collecting. One option I am considering is a sample size of 20 pcs for variable data and 59 pcs for attribute data, as these are what are typically used to prove normality.
 

harry

Trusted Information Resource
edisoar

Re: Sample size for Design Verification / Design Validation

Thanks for this information. I did not see the attachment when I reviewed the original posts. I will use this and see what the outcome is.
 
jscholen

Re: Sample size for Design Verification / Design Validation

Here's how it's been done in some of my circles. You need to identify what is critical vs. "does it work as needed" (write it in your own terms), then establish what confidence level (or other criteria) you want for each category you identified, e.g.:
Critical 95/99
Acceptable 95/95
Adequate 95/90 or 95/80
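
For what it's worth, the confidence/reliability pairs above map directly to zero-failure ("success-run") attribute sample sizes via the relation R^n ≤ 1 − C. A minimal sketch in Python (the function name is mine, not standard terminology from this thread):

```python
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """Smallest zero-failure sample size n such that
    reliability**n <= 1 - confidence (success-run theorem)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# The confidence/reliability pairs listed above:
for c, r in [(0.95, 0.99), (0.95, 0.95), (0.95, 0.90), (0.95, 0.80)]:
    print(f"{int(c*100)}/{int(r*100)}: n = {success_run_n(c, r)}")
# 95/95 gives n = 59, which is where the commonly quoted
# "59 pcs for attribute data" comes from.
```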

As to variable data, I have seen it done two ways: treat it as attribute data (in spec = pass, out of spec = fail), or evaluate it statistically to establish a confidence interval, run t-tests, etc.

Typically in development there is a push to get a design out the door, so treating as much data as possible as attribute data helps. But if you are also focused on making sure you have a process that can build what you designed, work with manufacturing to develop test plans that give them confidence they can produce your design at higher volumes.
 
someguyinback

Re: Sample size for Design Verification / Design Validation

My comment concerns design verification ONLY.

What I see is that for design verification one has, at least in our case, a limited number of "near production" devices (ten) that "work". This means they don't blow up, the software runs long enough to complete a procedure, and the R&D folks believe that what is supposed to happen at the business end seems to be happening.
Now we want to verify the design: confirming that the as-designed and as-built system has specifications (design outputs) that meet the design inputs.

How many do we test (there are only 10 in existence)? How many times do we test them? How do we satisfactorily justify whatever we do?

There is no body of quantifiable data, the actual specifications still need to be written, the burn rate continues, and the clock is ticking.

No set of equations, handbook, or forum post has to date been able to help me answer these three questions.

Thanks for any assistance.
 
Burnett

Re: Sample size for Design Verification / Design Validation

Did you ever resolve this? I'm having the exact same question.
 

Bev D

Heretical Statistician
Leader
Super Moderator
Re: Sample size for Design Verification / Design Validation

We've had several discussions regarding this in other threads (some of which are shown at the bottom of this page).

The quick answer is that for verification and validation the "AQL" inspection tables are not sufficient. One must use the 'estimation' sample size formulas. The numbers used in these formulas are derived from the nature of the thing being validated. If it is a new thing, you must usually meet the label claim within some delta with some confidence, and this is often reviewed and agreed upon with your regulatory agency (if you have one) or your customer. If the thing is a revision, you are usually looking for no substantial difference (in layman's terms: equivalence), and this typically requires that the delta of interest and the alpha and beta errors be specified, again with agreement from your regulatory agency.

I know that prototype sample sizes can be relatively small, but this doesn't faze the FDA.

In the case of medical devices there are several aspects to sample size: one is the number of instruments and the other is the number of samples or patients, etc. Typically, regulatory agencies focus on the number of samples (or patients) over the number of instruments. If the instrument is a consumable, the question is moot; if it is re-usable, then it is highly advisable to consult your agency or customer.
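
As a concrete illustration of the 'estimation' sample size formulas Bev D refers to: a common normal-approximation formula for estimating a mean to within ± delta at a given confidence is n = (z·σ/δ)². A hedged sketch in Python (the function name and the example numbers are illustrative, not from the thread, and this assumes σ is known or reliably estimated):

```python
import math
from statistics import NormalDist

def n_for_mean(sigma: float, delta: float, confidence: float = 0.95) -> int:
    """Sample size to estimate a mean within +/- delta at the given
    two-sided confidence, using the normal approximation n = (z*sigma/delta)^2."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * sigma / delta) ** 2)

# Example: estimate a mean to within half a standard deviation at 95% confidence
print(n_for_mean(sigma=1.0, delta=0.5))  # 16
```

Note how the required n grows with the square of the precision demanded: halving delta quadruples the sample size, which is why the delta agreed with the regulatory agency drives everything.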
 
Burnett

Re: Sample size for Design Verification / Design Validation

Thanks Bev D. Basically I'm taking the approach of using tolerance intervals and a k table to develop my testing. The only hiccup is how to reconcile this: with the k table and other statistical tables, n can be very small, much lower than the standard practice of a 30 pcs minimum. How is this justified statistically? Must I always use a 30 pcs minimum in spite of the table?
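
For readers unfamiliar with the k tables Burnett mentions: the one-sided tolerance-interval factor can also be approximated in code rather than read from a table. This is a sketch using the well-known Natrella/Wallis approximation (the function name is mine; exact k factors, derived from the noncentral t distribution, run slightly higher than this approximation):

```python
import math
from statistics import NormalDist

def k_one_sided(n: int, coverage: float = 0.95, confidence: float = 0.95) -> float:
    """Approximate one-sided tolerance factor k such that
    xbar + k*s covers `coverage` of the population with `confidence`
    (Natrella/Wallis normal approximation)."""
    zp = NormalDist().inv_cdf(coverage)   # quantile for the coverage proportion
    za = NormalDist().inv_cdf(confidence) # quantile for the confidence level
    a = 1 - za**2 / (2 * (n - 1))
    b = zp**2 - za**2 / n
    return (zp + math.sqrt(zp**2 - a * b)) / a

# k shrinks as n grows: a small n is "justified" by demanding a wider interval
print(round(k_one_sided(10), 2), round(k_one_sided(30), 2))
```

This also hints at the answer to Burnett's question: a small n is statistically legitimate with tolerance intervals because the k factor inflates to compensate, so nothing forces a 30 pcs minimum.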
 

Bev D

Heretical Statistician
Leader
Super Moderator
Re: Sample size for Design Verification / Design Validation

Using tolerance intervals for V&V is not a typical approach. What are you physically trying to do in the testing? Can you describe it better? As with anything, the best answer depends on the specifics of your situation.

(And the old wives' tale about n=30 is just that: a rule of thumb for estimates when you know nothing about the process and have no real goal other than to 'get some idea' of what the average and/or SD might be. It certainly has no real statistical basis.)
 