Testing to failure for design verification

#1
Hi All,

I'm working on design verification of a medical electrical device that has an input requirement that the device survive 1,000 cycles. The characteristic is high-risk if it fails, so per our internal procedures this calls for about 59 samples with pass/fail acceptance criteria. I'm thinking of testing the device to failure rather than to specification, thereby providing justification for a reduced sample size (substantially reduced, to maybe 3 or 5, since this is non-disposable equipment). If pre-verification tests show that failure occurs somewhere around 5,000 cycles or beyond, what is a good mathematical equation or rationale I could use to justify that, because we know we are testing to 5X the specification, X number of samples is justified?

Thanks
 
#2
What you are suggesting is to go from a 0 failure reliability demonstration test to a life data test / Weibull analysis. This is a good idea. What you will have to do is take your failure times (and non-failure times, also called suspensions) and fit an appropriate distribution, often a Weibull distribution. In fitting the parameters of this distribution, you will also get a "standard error" of the parameters, which can be used to quantify uncertainty in your result. This could then directly tell you, for example, "the 90% lower bound on reliability at 1,000 cycles is 99.4%" which is a really nice, intuitive way to report these results. There is no way to say "I tested it 5 times as long so it's good" without taking into account sample size, life distribution, etc. and at that point you are essentially fitting the distribution anyway.
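For reference, the textbook formalization of the "test m times longer" idea is the zero-failure extended-test relationship (the parametric binomial, based on the Lipson equality). It only works if you are willing to assume a Weibull shape parameter $\beta$ up front, which is exactly the life-distribution caveat above:

$$n = \frac{\ln(1 - C)}{m^{\beta}\,\ln R}$$

Here $n$ is the number of units tested failure-free, $C$ the confidence level, $R$ the reliability to be demonstrated at the specification life, and $m$ the ratio of test duration to specification life. At $m = 1$ this reduces to the familiar success-run formula $n = \ln(1 - C)/\ln R$; with $C = R = 0.95$ that gives $n \approx 59$, which is probably where your 59-sample plan comes from. With an assumed $\beta = 2$ and $m = 5$, the same 95/95 demonstration needs only $n = \ln(0.05)/(5^2 \ln 0.95) \approx 2.3$, i.e. 3 units, but the answer is only as credible as the assumed $\beta$.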

In short:

Test for a long time
Record failure and suspension times
Fit a distribution
Get results from that distribution
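To make that concrete, here is a minimal sketch in Python (the failure times and the single suspension are hypothetical placeholders, and the 90% lower bound uses a normal approximation based on BFGS's inverse-Hessian estimate, so treat the output as illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data: cycles to failure, plus one unit suspended (taken off
# test without failing) at 6,000 cycles.
failures = np.array([4070.0, 5000.0, 5050.0])
suspensions = np.array([6000.0])

def neg_log_lik(params):
    """Negative log-likelihood of a 2-parameter Weibull with right censoring.
    params = (log beta, log eta) so the optimizer works on an unconstrained scale."""
    beta, eta = np.exp(params)
    # Density contribution of the observed failures
    ll = np.sum(np.log(beta / eta) + (beta - 1.0) * np.log(failures / eta)
                - (failures / eta) ** beta)
    # Survival contribution of the suspensions
    ll += np.sum(-((suspensions / eta) ** beta))
    return -ll

res = minimize(neg_log_lik, x0=np.log([1.5, float(np.mean(failures))]), method="BFGS")
beta, eta = np.exp(res.x)

t = 1000.0  # specification: must survive 1,000 cycles
R = float(np.exp(-(t / eta) ** beta))  # point estimate of reliability at spec

# Delta method on g = ln(-ln R) = beta*(ln t - ln eta), using the BFGS
# inverse-Hessian as an approximate covariance of (log beta, log eta).
grad = np.array([beta * np.log(t / eta),  # dg/d(log beta)
                 -beta])                  # dg/d(log eta)
var_g = float(grad @ res.hess_inv @ grad)
g_upper = np.log(-np.log(R)) + norm.ppf(0.90) * np.sqrt(var_g)
R_lower = float(np.exp(-np.exp(g_upper)))  # 90% lower bound on R(1000)

print(f"beta = {beta:.2f}, eta = {eta:.0f} cycles")
print(f"R(1000) = {R:.4f}, 90% lower bound ~ {R_lower:.4f}")
```

The same likelihood handles any mix of failures and suspensions; with no suspensions the survival term simply drops out, and with more data the bound tightens.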
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#3
I think you are conflating two different verification activities: design and "product" verification. It's not just you - most people do.
Verifying the capability of the design isn't a statistical thing; it's a physics thing.
What I do first is verify that the design meets requirements when built at nominal and at worst-case conditions (conditions include user conditions and product components set at the max/min allowable tolerances). This substantially reduces the sample size, as you really only need one unit at each of these states. (You may miss some worst-case condition, especially for 'new' designs, but that's OK because you'll find it in the next step.) We may make 3 parts or so at the extremes depending on the 'closeness' of the product to the requirement. We will almost always use 3-6 parts if we are testing for 'equivalence' to a prior version of the design. This doesn't have to be statistical because you aren't relying on random or happenstance variation to catch the worst cases, which is what statistical sampling plans are made for. Sample size does come in when the device is a one-time-use device that we know has variation in performance within a lot; sample sizes here are typically 100-200 and are based on estimating a % defective. For multi-use devices, sample size comes in as the number of runs or uses...

The next step is to verify (some call it validation) that the total variation that will be seen in normal manufacturing (both your manufacturing and your suppliers') is repeatable and capable of producing product that meets all requirements. I typically manufacture at least 3 lots of product under normal manufacturing conditions and test each lot against all release criteria. We must have 3 sequential lots that meet the requirements before we declare that the design is released to production. We will sell any product made during this time that meets the requirements; depending on the regulatory agency, we may hold this material for sale until we receive approval from the agency.

I think in your case it makes sense to create worst-case units and test them to failure to understand the basic performance. If you have very little history with this type of design, or very little data from the development process, it makes sense to create 3 units at each worst-case condition to 'range find' your performance. If you have very little margin, you will need to either fix your design or gamble that a larger sample size will prove that, while you are marginal, you are above the limit.
 
#4

Thanks for your response Daniel. I'm fairly new to Weibull distribution fitting. Say I run a sample for 5000 cycles and a failure occurs; the failure may be due to a component that failed/burnt. I replace that component on the same sample/device and run it again until the next failure occurs at 5050 cycles, and I don't know what caused that failure. I then test another device and run it until failure occurs at 4070 cycles. Now I have 3 tested samples, all failed (so no non-failure data), all toward the 5000-cycle range. What would the Weibull distribution look like?
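(For concreteness, this is how I would naively fit just those three failure times in Python, treating all three as complete failures with no suspensions and the location fixed at zero; is that the right idea?)

```python
import numpy as np
from scipy.stats import weibull_min

cycles = np.array([4070.0, 5000.0, 5050.0])  # all three runs ended in failure
shape, loc, scale = weibull_min.fit(cycles, floc=0)  # 2-parameter Weibull MLE
print(f"shape (beta) = {shape:.1f}, scale (eta) = {scale:.0f} cycles")
print(f"R(1000) = {weibull_min.sf(1000.0, shape, loc, scale):.4f}")  # reliability at spec
```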
 
#5

Thanks for your response Bev. In my world (or rather, what I'm used to in differentiating between design and product verification), the former is design verification and the latter is process validation.
Per your response, for the first step, if I have to test at the worst-case conditions of the design, I need some information on what could cause the failure. That may or may not be possible, i.e., the cause of failure may not be accurately determined. So how do I determine the worst-case design conditions?
 
#7
Ok, good point, Bev. Even if the cause of failure is determined, there still needs to be some sample size we use to justify that we "verified" our device with a statistical rationale (the FDA stresses this). How is testing 3 samples, as opposed to any other number like 1 or 5, statistically justified?
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#8
It's justified because it's physics and probability, not inferential statistics. Again, the justification is that if you do 'directed testing', deliberately creating units at the worst cases of the tolerances, then you don't need a 'statistically determined' sample size. Statistical sample sizes are used when you are trying to estimate from 'random' samples. (In this case you are not looking at the accuracy of the device in dealing with different patient conditions or samples, right? That's where the FDA should be concerned about 'statistical' sample sizes.) You will need to discuss this rationale with your reviewer, or a statistician, something you will need to do anyway. Remember that the Weibull approach above will also have a problem with sample size: although you will have many runs, if they are on only one device you have a sample size of 1. And if that device is built to nominal, something design engineers are wont to do, then you won't be verifying the design space.
 

Ronen E

Problem Solver
Staff member
Moderator
#9
@Bev D :applause:

@DesignAssurance Just a terminology clarification:
The 1st step Bev was talking about is Design Verification.
The 2nd belongs in what is usually/officially termed Design Validation, though in my opinion Product Validation is a more appropriate term. This is not to be confused with Process Validation, which is about challenging the process(es) more than about challenging the design/product.

I would add a #0 step (preceding step 1) which is theoretical/analytical design verification, before ANY testing. Do you fully understand what in your design constitutes the failure modes and what mechanisms bring them about? If not, your design work is not complete and moving on to testing is a little like gambling in scientific disguise. It shouldn't be lip service; it's likely to be way more important and cost-effective than lots of bench work.

The other comment I had is that before you try to fit any distribution model to any data, you have to make sure your data is homogeneous. Otherwise it's just mumbo jumbo.
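A quick way to eyeball that before any formal fitting is a Weibull probability plot (a minimal sketch with made-up failure times; with only 3 points it is barely meaningful, but the principle scales):

```python
import numpy as np
import matplotlib.pyplot as plt

cycles = np.sort(np.array([4070.0, 5000.0, 5050.0]))  # hypothetical failure times
n = len(cycles)
median_rank = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # Benard's approximation

# On Weibull paper, ln(t) vs ln(-ln(1 - F(t))) is a straight line with slope beta.
# One straight line suggests a single, homogeneous failure population; a knee or
# two distinct slopes suggests mixed failure modes that should be separated first.
plt.plot(np.log(cycles), np.log(-np.log(1.0 - median_rank)), "o-")
plt.xlabel("ln(cycles)")
plt.ylabel("ln(-ln(1 - median rank))")
plt.title("Weibull probability plot (homogeneity check)")
plt.show()
```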
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#10
Thanks Ronen - your clarifications of design verification, product validation (I like that terminology too!) and process validation are clear and concise - thank you!
 