Defining Reliability and Confidence Levels

mschaller

#1
Hello,

I've always worked at medical device companies where an SOP (Standard Operating Procedure) defines the reliability and confidence levels for me based on a given risk index. For example, based on the output of an FMEA I will have the estimated severity, occurrence, and detection which combine to form some risk index (sometimes an RPN is used). And based on that, I look at a table in the SOP which says I need .9900 reliability at 95% confidence, or perhaps it's a different level.

How are these levels defined? Is there a standard which defines what typical reliability/confidence levels should be used?

Typically, I have used .99 reliability as tightest control, .975 reliability as medium control, and .95 as loosest control (all with 95% confidence).
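For concreteness (the sketch below is an editor's illustration, not part of the original post): under a zero-failure "success-run" attribute plan, each of those tiers implies a minimum sample size via n = ln(1 - C) / ln(R).

```python
import math

def zero_failure_sample_size(reliability: float, confidence: float) -> int:
    """Smallest n such that n conforming samples (zero failures) demonstrate
    the stated reliability at the stated confidence: reliability**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for r in (0.99, 0.975, 0.95):  # the tightest, medium, and loosest tiers above
    print(f"R = {r} at 95% confidence -> n = {zero_failure_sample_size(r, 0.95)}")
# R = 0.99 -> n = 299, R = 0.975 -> n = 119, R = 0.95 -> n = 59
```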

What do other companies use?

Thanks for the help!
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#2
:bigwave: Welcome to the Cove!

There is no set standard for the choice of reliability and confidence levels. Some companies will create their own standard. Some individuals (such as a regulatory statistical reviewer or a QA manager) may have their own 'favorites'.

The tradition of using 95% confidence - or a 5% alpha risk - dates back to Sir Ronald Fisher, although it is often misquoted: Fisher suggested that a 5% alpha risk would be sufficient for an analysis IF the experiment were replicated several times with the same results being statistically significant at the 5% alpha level each time. The Mil Std acceptance tables used a confidence level of 95% for AQL based plans, and since then 95% has traditionally been used for confidence levels. Reliability is not traditionally anchored...
 

mschaller

#3
Thank you Bev, that is very helpful. It sounds like keeping confidence at the industry standard of 95% is the safe thing to do, but reliability levels have a little more wiggle room.

If anyone else has any thoughts about what reliability levels they've seen across companies, chime in!
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#4
Remember that Reliability (in the context of sampling plans based on a stated reliability at a stated confidence level) has a very specific definition: it is the minimum acceptable quality level. Or, better stated, it is 1 minus the defect proportion that you wish to reject.

These plans typically require no defects in the sample. We then say "the process - or lot - has a quality level of XX% with XX% confidence," where the 'quality level' is the reliability. In essence this is a c=0, RQL sampling plan.

If you pick a reliability of 90% you are saying that you will accept up to 10% defects...
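A short sketch of that arithmetic (an editor's illustration, assuming the binomial zero-failure relationship C = 1 - R^n behind a c=0 plan):

```python
def demonstrated_reliability(n: int, confidence: float = 0.95) -> float:
    """Lower confidence bound on reliability after n samples with zero defects:
    solves reliability**n = 1 - confidence for reliability."""
    return (1 - confidence) ** (1 / n)

def confidence_demonstrated(n: int, reliability: float) -> float:
    """Confidence that quality is at least `reliability`, given n zero-defect samples."""
    return 1 - reliability ** n

print(demonstrated_reliability(59))       # ~0.9505: 59 passes -> 95%/95%
print(confidence_demonstrated(59, 0.95))  # ~0.9515: the same plan, read the other way
```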

If you are dealing with true reliability testing there is a time - or stress - component to the test, and you are saying that at least XX% of the product will still be working (will be reliable) at the stress or time point, with XX% confidence.


Typically the reliability is chosen by your customer OR is dictated by what you want to claim or can afford in your warranty program.

Unlike the confidence level, the choice of reliability is not 'customary', 'standard', mythological, or what others do. It has a meaning, and it must be based on what is important to you....
 

Apex Hao

Starting to get Involved
#5
Dear Covers, could I kindly ask for some opinions here? :) This is the closest thread I could find.

We have a part which needs to pass a mechanical robustness test. The test is destructive. The risk classification is major, so we are following our agreed target of 95% reliability at 95% confidence.
  • During validation, we tested 59 samples and found no failures. We are stating that our part is at least 95% reliable with 95% confidence.
  • A few months later, during routine surveillance testing (3 samples every 3 months), one of the samples failed the test (not a terrible failure, but a fail nonetheless). I am in a dilemma over whether we should stop production and determine if something is wrong. Essentially, should I have expected all of the surveillance samples to pass, given that we only validated to 95% reliability? Is this failure perhaps from the remaining 5%?
Thanks in advance for your response.
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#6
Well, 3 is a pretty small sample on its own. A 95% reliability means that you could have up to 5% defective, so you might take the 1 failure and divide it by all of the samples you have tested since validation - but this probably won't tell you much. Categorical data has so little resolution.
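To put a rough number on that (an editor's sketch, assuming the process sits exactly at the 5% defect ceiling the validation allowed):

```python
# Probability of seeing at least one failure among k pass/fail surveillance
# samples when the true defect rate is the worst case validated (5%).
p_defect = 0.05
for k in (3, 12):  # one quarterly check; a full year of quarterly checks
    print(f"k = {k}: P(at least 1 failure) = {1 - (1 - p_defect) ** k:.1%}")
# k = 3: 14.3%   k = 12: 46.0%
```

In other words, an occasional surveillance failure is entirely consistent with a process that still meets the validated 95% reliability.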

Since you said that the part is "only failing a little", I assume you are measuring continuous data of some sort and then just comparing the results to a specification for pass/fail? You will have much better insight and a much better response if you plot the continuous data in time series. You could try a control chart with n=3 subgroups, but it's better at first to plot all of the individual data points in time series in multi-vari format. This will show you whether the process is stable and the one failure is simply to be expected on occasion, or whether you have a shift or trend that results from a loss of process capability.
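If the measurement really were continuous, the n=3 chart mentioned above could be sketched like this (an editor's illustration with made-up data, using the published Xbar-R constants for subgroups of 3):

```python
# Xbar-R control limits for subgroups of n=3. The subgroup data are invented
# purely for illustration; standard constants for n=3: A2=1.023, D3=0, D4=2.574.
subgroups = [(10.2, 9.8, 10.1), (9.9, 10.3, 10.0), (10.4, 9.7, 10.2)]

xbars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
grand_mean = sum(xbars) / len(xbars)
r_bar = sum(ranges) / len(ranges)

A2, D3, D4 = 1.023, 0.0, 2.574
print(f"Xbar chart: CL = {grand_mean:.3f}, "
      f"UCL = {grand_mean + A2 * r_bar:.3f}, LCL = {grand_mean - A2 * r_bar:.3f}")
print(f"R chart:    CL = {r_bar:.3f}, UCL = {D4 * r_bar:.3f}, LCL = {D3 * r_bar:.3f}")
```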
 

Apex Hao

Starting to get Involved
#7
Hi Bev, thanks for the reply!

The testing unfortunately is a qualitative one - we subject the part to mechanical impacts and then assess its functionality. Sometimes the part deforms/cracks/shifts and causes unwanted leakage. A leak in this case is a failure (difficult to quantify), but from observation we know that the part isn't damaged too seriously (again, such damage is difficult to quantify).

I agree with you that 3 samples is pretty small, and it might take years to collect enough samples to obtain a meaningful percent defective. After this failure I might try to increase the surveillance frequency, and perhaps implement some sort of proper process control.

I love the idea of monitoring the process with at least a run chart, however my hands are pretty much tied with such categorical pass/fail data. I don't expect a proportion control chart to be of much use here, would it?
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#8
A proportion control chart won’t help much.
BUT I don't think this is a 'job for statistics'; it's a job for physics. At a sample size of 3 you should probably treat each failure as something to be investigated. How well do you understand the processes and material properties that create the 'strength' you are testing? How are you monitoring those characteristics? What is different about the part that was damaged and leaked compared to the other parts?
 