Testing to failure for design verification

DesignAssurance

Starting to get Involved
#1
Hi All,

I'm working on design verification of a medical electrical device that has an input requirement that the device will survive 1,000 cycles. The characteristic is high-risk if it fails, so per our internal procedures this calls for about 59 samples with pass/fail acceptance criteria. I'm thinking of testing the device to failure rather than to specification, thereby providing justification for a reduced sample size (substantially reduced, to maybe 3 or 5, since this is non-disposable equipment). If pre-verification tests show that failure occurs somewhere in the range of 5,000 cycles or upwards, what is a good mathematical equation, or any rationale, I could use to justify that because we know we are testing to 5X the specification, X samples are justified?
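(For reference: 59 is consistent with the standard zero-failure "success-run" sample size at 95% confidence / 95% reliability, n = ln(1 − C) / ln(R) = ln(0.05) / ln(0.95) ≈ 58.4, rounded up to 59.)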

Thanks
 
Daniel
#2
What you are suggesting is to go from a zero-failure reliability demonstration test to a life data test / Weibull analysis. This is a good idea. What you will have to do is take your failure times (and non-failure times, also called suspensions) and fit an appropriate distribution, often a Weibull distribution. In fitting the parameters of this distribution, you will also get a "standard error" on the parameters, which can be used to quantify uncertainty in your result. This could then directly tell you, for example, "the 90% lower bound on reliability at 1,000 cycles is 99.4%", which is a really nice, intuitive way to report these results. There is no way to say "I tested it 5 times as long so it's good" without taking into account sample size, life distribution, etc., and at that point you are essentially fitting the distribution anyway.

In short:

Test for a long time
Record failure and suspension times
Fit a distribution
Get results from that distribution
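To make the mechanics concrete, here is a minimal sketch in Python/SciPy of the fit described above. The cycle counts are made-up illustration values, and the confidence bound uses a crude normal approximation on the fitted parameters via the delta method; dedicated reliability tools (e.g. Weibull++ or the Python reliability package) can compute likelihood-ratio bounds, which behave better with only a few failures.

Code:
# Minimal sketch: maximum-likelihood fit of a 2-parameter Weibull to
# failure and suspension (right-censored) cycle counts, then a lower
# confidence bound on reliability at the 1,000-cycle specification.
# The cycle counts are made-up illustration values.
import numpy as np
from scipy.optimize import minimize

failures = np.array([5200.0, 6100.0, 7400.0])   # units run to failure
suspensions = np.array([8000.0])                # unit taken off test still working

def neg_log_lik(p):
    """Negative Weibull log-likelihood with right censoring.
    p = (log eta, log beta); the log scale keeps both parameters positive."""
    eta, beta = np.exp(p)
    zf = failures / eta
    zs = suspensions / eta
    ll = np.sum(np.log(beta / eta) + (beta - 1.0) * np.log(zf) - zf**beta)
    ll += np.sum(-(zs**beta))                   # suspensions contribute log S(t)
    return -ll

res = minimize(neg_log_lik, x0=np.log([6000.0, 2.0]), method="BFGS")
eta, beta = np.exp(res.x)

t_spec = 1000.0
rel = np.exp(-(t_spec / eta) ** beta)           # reliability R(t) at the spec

# Crude one-sided 90% lower bound via the delta method on
# u = log(-log R) = beta * log(t/eta), using the BFGS inverse Hessian
# as a stand-in for the parameter covariance matrix.
u = beta * np.log(t_spec / eta)
grad = np.array([-beta, u])                     # du/d(log eta), du/d(log beta)
var_u = grad @ res.hess_inv @ grad
u_hi = u + 1.2816 * np.sqrt(var_u)              # one-sided 90% normal quantile
rel_lo = np.exp(-np.exp(u_hi))

print(f"eta = {eta:.0f} cycles, beta = {beta:.2f}")
print(f"R(1000) = {rel:.4f}, ~90% lower bound = {rel_lo:.4f}")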
 

Bev D

Heretical Statistician
Leader
Super Moderator
#3
I think you are conflating two different verification activities: design and "product" verification. It's not just you - most people do.
Verifying the capability of the design isn't a statistical thing; it's a physics thing.
What I do first is verify that the design meets requirements when built at nominal and at worst-case conditions (conditions include user conditions and product components set at the max/min allowable tolerances). This substantially reduces the sample size, as you really only need one unit at each of these states. (You may miss some worst-case condition - especially for new designs - but that's OK, because you'll find it in the next step.) We may make 3 parts or so at the extremes, depending on the closeness of the product to the requirement. We will almost always use 3-6 parts if we are testing for "equivalence" to a prior version of the design. This doesn't have to be statistical, because you aren't relying on random or happenstance variation to catch the worst cases, which is what statistical sampling plans are made for. Sample size does come in when the device is a one-time-use device that we know has variation in performance within a lot. Sample sizes there are typically 100-200 and are based on estimating a % defective. For multi-use devices, sample size comes in as the number of runs or uses...
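(For reference: 100-200 units is consistent with a simple proportion-estimate sizing, n ≈ z² p(1 − p) / E². At 95% confidence with an expected defect rate near 5%, n = 1.96² × 0.05 × 0.95 / 0.03² ≈ 203 for a ±3% margin of error, or about 100 for roughly ±4%.)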

The next step is to verify (some call it validation) that the total variation that will be seen in normal manufacturing (both yours and your suppliers') is repeatable and capable of producing product that meets all requirements. I typically manufacture at least 3 lots of product under normal manufacturing conditions and test each lot against all release criteria. We must have 3 sequential lots that meet the requirements before we declare the design released to production. We will sell any product made during this time that meets the requirements; depending on the regulatory agency, we may hold this material for sale until we receive approval from the agency.

I think in your case it makes sense to create worst-case units and test them to failure to understand the basic performance. If you have very little history with this type of design, or very little data from the development process, it makes sense to create 3 units at each worst-case condition to 'range-find' your performance. If you have very little margin, you will need to either fix your design or gamble that a larger sample size will prove that, while you are marginal, you are above the limit.
 

DesignAssurance

Starting to get Involved
#4
Daniel said:
"What you are suggesting is to go from a zero-failure reliability demonstration test to a life data test / Weibull analysis. [...]"


Thanks for your response, Daniel. I'm fairly new to Weibull distribution fitting. Say I run a sample for 5,000 cycles and a failure occurs - maybe because a component failed/burnt. I replace that component on the same sample/device and run it again until the next failure occurs at 5,050 cycles, and this time I don't know what caused the failure. I then test another device and run it until failure occurs at 4,070 cycles. Now I have 3 failures (so no non-failure data), all around the 4,000-5,000 cycle range. What would the Weibull distribution look like?
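To sketch an answer: with all-failure data and no suspensions, the 2-parameter Weibull can be fitted directly, e.g. with SciPy. A minimal sketch using the three failure times mentioned above (keeping in mind, as noted below, that two failures from the same repaired device are not truly independent samples):

Code:
# Minimal sketch: 2-parameter Weibull fit to the three failure times
# mentioned above (all failures, no suspensions). Note that the two
# failures from the same repaired unit are not truly independent samples.
import numpy as np
from scipy import stats

fails = [4070.0, 5000.0, 5050.0]
beta, _, eta = stats.weibull_min.fit(fails, floc=0)  # floc=0 gives the 2-parameter form

print(f"beta (shape) = {beta:.1f}, eta (scale) = {eta:.0f} cycles")
print(f"point estimate R(1000) = {stats.weibull_min.sf(1000, beta, 0, eta):.6f}")
# With only 3 tightly clustered failures, beta comes out large (a steep,
# narrow distribution) and the point estimate of R(1000) is close to 1,
# but the confidence bounds around it are extremely wide -- hence the
# value of more units and of recording suspension times.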
 

DesignAssurance

Starting to get Involved
#5
Bev D said:
"I think you are conflating two different verification activities: design and 'product' verification. [...]"

Thanks for your response, Bev. In my world (rather, the way I'm used to differentiating between design and product verification), the former is design verification and the latter is process validation.
Per your response, to test at the worst-case conditions of the design in the first step, I need some information on what could cause the failure. That may or may not be possible, i.e. the cause of failure may not be accurately determined. So how do I determine the worst-case design conditions?
 

DesignAssurance

Starting to get Involved
#7
OK, good point, Bev. Even if the cause of failure is determined, there still needs to be some sample size we use to justify that we "verified" our device with a statistical rationale (the FDA stresses this). How is testing 3 samples, as opposed to any other number like 1 or 5, statistically justified?
 

Bev D

Heretical Statistician
Leader
Super Moderator
#8
It's justified because it's physics and probability, not inferential statistics. Again, the justification is that if you did 'directed testing' - deliberately creating units at the worst cases of the tolerances - then you don't need a statistically determined sample size. Statistical sample sizes are used when you are trying to estimate from 'random' samples. (In this case you are not looking at the accuracy of the device in dealing with different patient conditions or samples, right? That's where the FDA should be concerned about 'statistical' sample sizes.) You will need to discuss this rationale with your reviewer - or statistician - something you will need to do anyway. Remember that the Weibull approach above will also have a problem with sample size: although you will have many runs, if they are all on one device you have a sample size of 1. And if that device is built to nominal - something design engineers are wont to do - then you won't be verifying the design space.
 

Ronen E

Problem Solver
Moderator
#9
@Bev D :applause:

@DesignAssurance Just a terminology clarification:
The 1st step Bev was talking about is Design Verification.
The 2nd belongs in what is usually/officially termed Design Validation, though in my opinion Product Validation is a more appropriate term. This is not to be confused with Process Validation, which is about challenging the process(es) more than about challenging the design/product.

I would add a #0 step (preceding step 1) which is theoretical/analytical design verification, before ANY testing. Do you fully understand what in your design constitutes the failure modes and what mechanisms bring them about? If not, your design work is not complete and moving on to testing is a little like gambling in scientific disguise. It shouldn't be lip service; it's likely to be way more important and cost-effective than lots of bench work.

The other comment I had was that before you try to fit any distribution model to any data you have to make sure that your data is homogeneous. Otherwise it's just mumbo jumbo.
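A common homogeneity issue in life data is mixed failure modes: pooling a burnt-component failure with an unknown-cause failure (as in post #4) into one Weibull fit muddies the result. A minimal sketch of the standard competing-risks bookkeeping, assuming (hypothetically) that each failure's mode has been identified:

Code:
# Sketch of competing-risks bookkeeping (mode labels here are
# hypothetical): fit each failure mode separately, treating failures
# from the OTHER modes as right-censored suspensions for that fit.
records = [(4070.0, "mode A"), (5000.0, "mode B"), (5050.0, "mode A")]

for mode in sorted({m for _, m in records}):
    failures = [t for t, m in records if m == mode]
    suspensions = [t for t, m in records if m != mode]
    # ...fit a Weibull to (failures, suspensions) as in the sketch in #2
    print(mode, "-> failures:", failures, "suspensions:", suspensions)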
 

Bev D

Heretical Statistician
Leader
Super Moderator
#10
Thanks, Ronen - your clarifications of design verification, product validation (I like that terminology too!), and process validation are clear and concise - thank you!
 