Testing to failure for design verification

#1
Hi All,

I'm working on design verification of a medical electrical device that has an input characteristic requiring the device to survive 1,000 cycles. The characteristic is high risk if it fails, so per our internal procedures this calls for about 59 samples with pass/fail acceptance criteria. I'm thinking of testing the device to failure rather than to specification, thereby providing justification for a reduced sample size (substantially reduced, to maybe 3 or 5, since this is non-disposable equipment). If pre-verification tests show that failure occurs somewhere in the range of 5,000 cycles or more, what is a good mathematical equation or rationale I could use to justify that, because we know we are testing to 5X the specification, X samples are justified?
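For context, the 59-sample figure is what the standard zero-failure "success run" formula gives at 95% confidence / 95% reliability. A minimal sketch, assuming (my assumption, not stated in the procedure) that this is the basis of the internal sample-size rule:

```python
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """Zero-failure (success-run) sample size: smallest n such that, if all
    n units pass, reliability R is demonstrated at confidence C,
    i.e. the smallest n with 1 - R**n >= C."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(success_run_n(0.95, 0.95))  # -> 59
```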

Thanks
 
#2
What you are suggesting is to go from a 0 failure reliability demonstration test to a life data test / Weibull analysis. This is a good idea. What you will have to do is take your failure times (and non-failure times, also called suspensions) and fit an appropriate distribution, often a Weibull distribution. In fitting the parameters of this distribution, you will also get a "standard error" of the parameters, which can be used to quantify uncertainty in your result. This could then directly tell you, for example, "the 90% lower bound on reliability at 1,000 cycles is 99.4%" which is a really nice, intuitive way to report these results. There is no way to say "I tested it 5 times as long so it's good" without taking into account sample size, life distribution, etc. and at that point you are essentially fitting the distribution anyway.

In short:

Test for a long time
Record failure and suspension times
Fit a distribution
Get results from that distribution
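The steps above can be sketched in Python. The failure and suspension times below are hypothetical, and this gives only the point estimate; a real analysis would also attach confidence bounds (e.g. from the observed Fisher information or a likelihood-ratio method), which is what reliability packages report as the "90% lower bound":

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(log_params, failures, suspensions):
    """Negative log-likelihood of a 2-parameter Weibull with right-censored data."""
    beta, eta = np.exp(log_params)      # optimize in log-space for stability
    f = np.asarray(failures, dtype=float)
    s = np.asarray(suspensions, dtype=float)
    # Failures contribute log-density terms, suspensions log-survival terms.
    ll = np.sum(np.log(beta / eta) + (beta - 1) * np.log(f / eta) - (f / eta) ** beta)
    ll -= np.sum((s / eta) ** beta)
    return -ll

failures = [4200.0, 5100.0, 5600.0]     # hypothetical cycles-to-failure
suspensions = [6000.0]                  # hypothetical unit removed unfailed

res = minimize(neg_loglik, x0=np.log([2.0, 5000.0]),
               args=(failures, suspensions), method="Nelder-Mead")
beta_hat, eta_hat = np.exp(res.x)

# Point estimate of reliability at the 1,000-cycle requirement: R(t) = exp(-(t/eta)^beta)
r_1000 = np.exp(-(1000.0 / eta_hat) ** beta_hat)
print(f"beta = {beta_hat:.2f}, eta = {eta_hat:.0f} cycles, R(1000) = {r_1000:.4f}")
```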
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#3
I think you are conflating two different verification activities: design and "product" verification. It's not just you - most people do.
Verifying the capability of the design isn't a statistical thing; it's a physics thing.
What I do first is verify that the design meets requirements when built at nominal and at worst-case conditions (conditions include user conditions and product components set at the max/min allowable tolerances). This substantially reduces the sample size, as you really only need one unit at each of these states. (You may miss some worst-case condition, especially for new designs, but that's OK because you'll find it in the next step.) We may make 3 parts or so at the extremes, depending on the closeness of the product's performance to the requirement. We will almost always use 3-6 parts if we are testing for 'equivalence' to a prior version of the design. This doesn't have to be statistical, because you aren't relying on random or happenstance variation to catch the worst cases, which is what statistical sampling plans are made for. Sample size does come in when the device is a one-time-use device that we know has variation in performance within a lot; sample sizes here are typically 100-200 and are based on estimating a % defective. For multi-use devices, sample size comes in as the number of runs or uses...

The next step is to verify (some call it validation) that the total variation that will be seen in normal manufacturing (both your manufacturing and your suppliers') is repeatable and capable of producing product that meets all requirements. I typically manufacture at least 3 lots of product under normal manufacturing conditions and test each lot against all release criteria. We must have 3 sequential lots that meet the requirements before we declare the design released to production. We will sell any product made during this time that meets the requirements; depending on the regulatory agency, we may hold this material for sale until we receive approval from the agency.

I think in your case it makes sense to create worst-case units and test them to failure to understand the basic performance. If you have very little history with this type of design, or very little data from the development process, it makes sense to create 3 units at each worst-case condition to 'range-find' your performance. If you have very little margin, you will need to either fix your design or gamble that a larger sample size will prove that, while you are marginal, you are above the limit.
 
#4


Thanks for your response, Daniel. I'm fairly new to Weibull distribution fitting. Say I run a sample for 5,000 cycles and a failure occurs, maybe because a component failed/burnt. I replace that component on the same sample/device and run it again until the next failure occurs at 5,050 cycles, and I don't know what caused that failure. Then I test another device and run it until failure occurs at 4,070 cycles. Do I now have 3 samples tested, all failed (so no non-failure data), all toward the 5,000-cycle range? What would the Weibull distribution look like?
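For illustration, a two-parameter Weibull fit to just those three times (all treated as complete, independent failures, even though two came from the same physical device) looks something like the sketch below. The tight clustering forces a very steep shape parameter, and with only three points the uncertainty on both parameters is large:

```python
from scipy.stats import weibull_min

# The three failure times from the post, all treated as complete failures.
times = [5000.0, 5050.0, 4070.0]

# Two-parameter MLE fit (location fixed at 0). Tightly clustered data
# produce a large shape (beta), i.e. a steep, wear-out-like distribution.
beta, loc, eta = weibull_min.fit(times, floc=0)
r_1000 = weibull_min.sf(1000.0, beta, scale=eta)
print(f"beta = {beta:.1f}, eta = {eta:.0f} cycles, R(1000) = {r_1000:.4f}")
```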
 
#5

Thanks for your response, Bev. In my world (or rather, the way I'm used to differentiating between design and product verification), the former is design verification and the latter is process validation.
Per your response, to test at the worst-case conditions of the design I need some information on what could cause the failure. That may or may not be possible, i.e., the cause of failure may not be accurately determined. So how do I determine the worst-case design conditions?
 
#7
OK, good point, Bev. Even if the cause of failure is determined, there still needs to be some sample size we use to justify that we "verified" our device with a statistical rationale (the FDA stresses this). How is testing 3 samples, as opposed to any other number like 1 or 5, statistically justified?
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#8
It's justified because it's physics and probability, not inferential statistics. Again, the justification is that if you did 'directed testing', i.e. deliberately created units at the worst cases of the tolerances, then you don't need a statistically determined sample size. Statistical sample sizes are used when you are trying to estimate from 'random' samples. (In this case you are not looking at the accuracy of the device in dealing with different patient conditions or samples, right? That's where the FDA should be concerned about statistical sample sizes.) You will need to discuss this rationale with your reviewer, or a statistician; something you will need to do anyway. Remember that the Weibull approach above will also have a problem with sample size: although you will have many runs, if they are all on only one device you have a sample size of 1. And if that device is built to nominal, something design engineers are wont to do, then you won't be verifying the design space.
 

Ronen E

Problem Solver
Staff member
Moderator
#9
@Bev D :applause:

@DesignAssurance Just a terminology clarification:
The 1st step Bev was talking about is Design Verification.
The 2nd belongs in what is usually/officially termed Design Validation, though in my opinion Product Validation is a more appropriate term. This is not to be confused with Process Validation, which is about challenging the process(es) more than about challenging the design/product.

I would add a #0 step (preceding step 1) which is theoretical/analytical design verification, before ANY testing. Do you fully understand what in your design constitutes the failure modes and what mechanisms bring them about? If not, your design work is not complete and moving on to testing is a little like gambling in scientific disguise. It shouldn't be lip service; it's likely to be way more important and cost-effective than lots of bench work.

The other comment I had was that before you try to fit any distribution model to any data you have to make sure that your data is homogeneous. Otherwise it's just mumbo jumbo.
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#10
Thanks Ronen - your clarifications of design verification, product validation (I like that terminology too!) and process validation are clear and concise - thank you!
 