Risk Based Sample Size and Standards Compliance

d_addams

A partner is suggesting adding something along the lines of: 'If a standard requires a sample size, that sample size can be used as the minimum, rather than the sample size listed in our risk-based sample size requirement table (which uses the risk category of the control being verified to determine the minimum sample size).'
I get that there are some type tests where a reduced sample size is appropriate, but are there standards that call out a sample size which aren't type tests? I'm somewhat familiar with IEC 60601 items and with getting UL/Intertek certifications, which may only require certain sample sizes, but don't those seem more like type tests?

We already have a provision for type tests. Am I missing something here? This seems like basically saying that if a standard calls out a sample size, you can forgo our risk-based sample size requirements. I don't think I've ever seen this before, so it feels like I'm missing something if this is really an appropriate approach.
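(For context, risk-based sample size tables of this kind are commonly built from confidence/reliability pairs assigned per risk level, using the success-run theorem for zero-failure attribute testing. A minimal sketch of that calculation; the C/R pairs in the table below are purely illustrative, not from any particular procedure or standard:)

```python
import math

# Success-run theorem: n units tested with zero failures demonstrates
# reliability R at confidence C when R**n <= 1 - C, so
# n = ceil(ln(1 - C) / ln(R)).
def success_run_n(confidence: float, reliability: float) -> int:
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Hypothetical risk-category table (illustrative C/R assignments only).
table = {
    "high":   (0.95, 0.99),   # 95% confidence / 99% reliability
    "medium": (0.95, 0.95),   # 95% confidence / 95% reliability
    "low":    (0.95, 0.90),   # 95% confidence / 90% reliability
}

for risk, (c, r) in table.items():
    print(f"{risk:>6}: n = {success_run_n(c, r)}")
```

This reproduces the familiar 95/95 = 59 and 95/90 = 29 zero-failure sample sizes, which is one way such a table ends up with much larger minimums than a standard's fixed count.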
 
I was going to bring up destructive testing (a la ASTM standards) of representative samples, but I think this may be an example of what you refer to here:

I'm somewhat familiar with IEC 60601 items and with getting UL/Intertek certifications, which may only require certain sample sizes, but don't those seem more like type tests?

Many ASTM test methods (for example) for materials and products have well-established sample sizes, so for such cases I think there is a general consensus that a medical device company wouldn't need to re-invent a test method (including the sample size determination), after some critical thinking, when an appropriate consensus standard already exists.

I have seen groups "sidestep" deriving sample sizes (that is: no study design was created to determine a necessary sample size) by referring to published papers, especially those which have been referenced by regulatory authorities. For example: Appendix B of this FDA Guidance on Human Factors/Usability calls out some sample sizes for human factors studies. The numbers quoted there are typically smaller than what many risk-based study designs would lead to, but this is in the area of "human factors" ¯\_(ツ)_/¯ .

One published paper I like to refer to (in place of doing my own formal study designs) is this one on binomial hypothesis testing by A'Hern (my interpretation of the title, not the paper's title). I find the tables in this paper to be a very handy reference for circumstances in which I have an effect/defect/circumstance (null hypothesis) occurring with a well-known frequency and I want to see if some sort of (design) change is eliminating it. If nothing else, the tables can serve as a 'gut check' on whether a sample size (for the test of a binomial outcome) is within the realm of reason.

The math is all spelled out in the A'Hern paper for anyone to do for themselves; referencing this paper saves me the trouble of doing it myself and acts as a sort of "appeal to authority" for anyone who may not be otherwise inclined to trust my math.
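(That sort of gut check is easy to reproduce with an exact binomial calculation. A minimal sketch, my own illustration rather than code from the paper: given a baseline rate p0 for the effect, find the smallest n such that seeing at most c occurrences would reject H0: p >= p0 at level alpha.)

```python
from math import comb

def binom_cdf(c: int, n: int, p: float) -> float:
    """P(X <= c) for X ~ Binomial(n, p), computed term by term."""
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(c + 1))

def min_n(p0: float, c: int, alpha: float = 0.05) -> int:
    """Smallest n such that observing at most c occurrences of an
    effect with baseline rate p0 rejects H0: p >= p0 at level alpha."""
    n = c + 1
    while binom_cdf(c, n, p0) > alpha:
        n += 1
    return n

# e.g. baseline defect rate 10%, accepting zero defects:
print(min_n(0.10, 0))  # 29 units
```

For c = 0 this reduces to (1 - p0)**n <= alpha, i.e. the same success-run arithmetic; allowing c > 0 grows n, which is what the tabulated designs in such papers capture.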
 
Yeah, it would be aligned with the first example, where a standard calls out a specific number of units. I'm being told there were one or two cases where the team went to a test house and asked for X units to be tested, and the response was 'are you sure? In 25 years we've never had anyone ask for more than what the standard requires.' So we're looking at adding language to explicitly allow following the sample size in a standard when applicable.
 