Examples of inherent safety by design

Tidge

Trusted Information Resource
#31
Specifying software (or hardware) can be the implementation of a risk control; testing (of software or hardware) can provide evidence of the effectiveness of a risk control. This is a different topic than the categorization of risk controls.
 

sagai

Quite Involved in Discussions
#32
Specifying software (or hardware) can be the implementation of a risk control; testing (of software or hardware) can provide evidence of the effectiveness of a risk control. This is a different topic than the categorization of risk controls.
How effective can the mitigation be if it fails with the same probability as the thing that it mitigates? :notme:
 

Tidge

Trusted Information Resource
#33
How effective can the mitigation be if it fails with the same probability as the thing that it mitigates? :notme:
The (assumption of) probability of failure (=1) is the assessment BEFORE testing the software solution, because it is extremely difficult to predict the probability of failure for software solutions in the absence of testing. If the software solution used as a risk control is tested and demonstrated to work, then you have the evidence of an effective risk control, and presumably the probability of the specific failure mode is now zero.

I don't want to have a discussion in this thread about the philosophy of pre- and post- control ratings in FMEA, but:

Contrast the software situation with a classical design FMEA which involves a hardware component. Because of historical experience with a specific component, it is possible to assess the probability of failure for a component (thus contributing to, or failing to control, one or more risks) to some (relatively) precise value less than unity. This would most likely be represented by the rating assigned to Occurrence (O) in a classic S x O x D dFMEA. To be clear: FMEA ratings do not typically range between 0 and 1 but are often on some integer scale representative of powers of 10 (logarithmic) when trying to be 'quantitative'.
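As an aside, the logarithmic occurrence scale described above is easy to sketch in code. The rating breakpoints, the 1-10 range, and the severity/detection values below are hypothetical illustrations, not taken from any particular standard:

```python
import math

def occurrence_rating(failure_rate: float) -> int:
    """Map an estimated failure rate (failures per unit) to a 1-10
    occurrence rating, roughly one rating step per decade -- a
    logarithmic scale, as is common in 'quantitative' FMEAs.
    Breakpoints are illustrative only."""
    if failure_rate <= 0:
        return 1
    return max(1, min(10, 10 + math.floor(math.log10(failure_rate))))

# Classic S x O x D risk priority number with hypothetical ratings:
severity, detection = 7, 4
occurrence = occurrence_rating(1e-4)   # one rating step per decade -> 6
rpn = severity * occurrence * detection
```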

Decades (or more) of experience with (many) hardware components has resulted in consensus about the behavior of those designs and (by extension) the effectiveness of those components (in specific designs). This consensus could be (for example) around the flammability and durability of plastic conduit, the protective ratings of power supplies, the strength of specific threaded fasteners. There is essentially no similar consensus on specific software solutions like there are for steel beams used for structural support in buildings (not a medical device!), and thus each medical device developer using software is obligated to verify that the software is meeting its intended use.
 

Watchcat

Trusted Information Resource
#35
I'm in a bit over my head here, but...

If the software solution used as a risk control is tested and demonstrated to work, then you have the evidence of an effective risk control, and presumably the probability of the specific failure mode is now zero.
Before a (hardware or software) medical device is put on the market, all risk control methods are tested and demonstrated to work. I've never seen anyone set the probability of occurrence of the failure mode to zero as a result.

Because of historical experience with a specific component, it is possible to assess probability of failure for a component
Yes, and before all of that history was experienced, those components were tested and demonstrated to work. Historical experience tells us that testing something and demonstrating that it works does not set the probability of its failure to zero. Moreover, historical experience tells us that whatever probability of failure you might calculate based on decades of experience, that probability will continue to change with more experience.

If there is a consensus that has been reached over decades of experience, it is that something works until it doesn't, and the probability that it won't work someday, somewhere, sometime, is never zero.
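The point that the estimated probability keeps changing with experience, yet never reaches zero, can be made concrete with a Beta-Binomial update — a standard statistical device, not something either poster prescribes; the prior and the usage counts below are hypothetical:

```python
def posterior_failure_prob(failures: int, successes: int,
                           prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of a failure probability under a Beta(a, b) prior.
    With a uniform prior and zero observed failures, the estimate shrinks
    as trouble-free experience accumulates, but never reaches exactly zero."""
    return (prior_a + failures) / (prior_a + prior_b + failures + successes)

# Each estimate is smaller than the last, and all are strictly positive:
estimates = [posterior_failure_prob(0, n) for n in (10, 1_000, 100_000)]
```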

There is essentially no similar consensus on specific software solutions
Which is why I would set the probability of their failure higher, instead of lower.
 

Ronen E

Problem Solver
Staff member
Moderator
#36
A few thoughts:

No amount of testing (or field experience for that matter) can remove uncertainty completely. With PROPER use of statistics, the uncertainty can be quantified. When available data is ABUNDANT, the statistical confidence may approach certainty, but it will almost never become certainty, and in most cases it's too impractical an approach anyway.
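One classical way to quantify the residual uncertainty after testing is an upper confidence bound on the failure probability. The sketch below uses the exact one-sided bound for zero observed failures; the well-known "rule of three" approximates it as 3/n at 95% confidence:

```python
def upper_bound_failure_prob(n_passed: int, confidence: float = 0.95) -> float:
    """Exact one-sided upper confidence bound on the true failure
    probability p when n_passed independent tests all passed.
    Derivation: (1 - p)**n >= 1 - confidence  =>  p <= 1 - (1 - conf)**(1/n)."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_passed)

# More testing tightens the bound, but never drives it to zero:
# n = 30  -> p <= ~0.095
# n = 300 -> p <= ~0.0099 (close to the rule-of-three value 3/300 = 0.01)
```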


Inherently Safe Design (ISD) vs. Protective Measures (PM) -

Categorisation is meaningless in the absence of specific context. A given means might be either.

A hazard is defined as a potential source of harm. For example, a device that employs electrical potential (for convenience, think about voltage, electric current, capacitors etc.) might have a potential for electric shock. If the device is designed to achieve the same purpose without employing (or potentially harbouring) electrical potential, it won't have the potential to cause an electric shock - no matter what. That's ISD - removal of the hazard altogether.

PM don't remove the hazard, but instead interfere with the formation/unfolding of the hazardous situation and its further eventuating in harm. The hazard is still there, but the probability of it eventuating in harm is reduced. This can be done through attacking any of the steps or ingredients necessary for the actual harm to come about.

I believe that this view may be used to neatly resolve this confusion. ISD - remove the hazard altogether; PM - interfere with the path from hazard to harm. According to this view most mitigation means will reveal themselves as PM. True ISD is quite rare as a retroactive risk mitigation path; it'd more often be built into the selected design concept from the outset.

As can be seen, the answer to whether a specific means is ISD or PM depends on how the hazards and hazardous situations have been called out.


P=1 for SW PM -

Mainstream risk analysis methodology addresses the Single Fault Condition (SFC). Why? Because otherwise it's quite likely that no complex device will ever be deemed safe while still being economically viable (which means it won't exist in reality). Under SFC one should not speculate that both the risky element and the PM intended to mitigate the same risk will fail simultaneously. So where the risky element fails, the SW PM should be assumed to work as intended (provided that it passed design verification), i.e. failure P=0. Where the SW PM fails (which may well be considered at P=1), the risky element that the SW PM was put in place to cover for should be assumed to be performing to spec. Problem solved.

If you reject the argument above you must be willing to conduct Multiple Fault Condition analysis across the board, at least for a number of simultaneous faults N=2.
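The combinatorial cost hinted at here is easy to see: with k independent elements, SFC analysis needs k fault scenarios, while allowing any two simultaneous faults needs k + C(k, 2). A minimal sketch (the element count of 20 is arbitrary):

```python
from math import comb

def fault_scenario_count(k_elements: int, max_simultaneous: int) -> int:
    """Number of distinct fault combinations to analyse when up to
    max_simultaneous elements may fail at once."""
    return sum(comb(k_elements, n) for n in range(1, max_simultaneous + 1))

# 20 elements: SFC needs 20 scenarios; allowing N=2 already needs 210.
single = fault_scenario_count(20, 1)   # 20
double = fault_scenario_count(20, 2)   # 20 + C(20, 2) = 210
```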
 

Tidge

Trusted Information Resource
#37
Which is why I would set the probability of their failure higher, instead of lower.
The probability can never be higher than 1, and never lower than 0. If the a priori probability of failure is 1, the only way to reduce it is through objective evidence that it does not fail (e.g. via comprehensive testing).
 

Watchcat

Trusted Information Resource
#38
If the a priori probability of failure is 1, the only way to reduce it is through objective evidence that it does not fail.
1 essentially means it is guaranteed to fail, first time, every time. In order to reduce that probability, you don't need evidence that it does not fail, only that it will not fail all the time. I would say that everything is guaranteed to fail, somewhere, sometime, so that no risk of failure can ever be at 0.
 

Tidge

Trusted Information Resource
#39
1 essentially means it is guaranteed to fail, first time, every time. In order to reduce that probability, you don't need evidence that it does not fail, only that it will not fail all the time. I would say that everything is guaranteed to fail, somewhere, sometime, so that no risk of failure can ever be at 0.
One of the subtle differences between software and hardware relates to the probabilities of failure. Hardware elements 'wear', software does not. If a defect is in software, it will be in all instances of the software. Contrast this with physical elements which follow the 'bath tub' curve for reliability. This difference has been one of the main drivers for avoiding the use of "Software FMEA" to support 14971.
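The 'bath tub' reliability curve mentioned above is commonly modelled with Weibull hazard rates, where the shape parameter selects the region of the curve; a minimal sketch with illustrative (invented) parameter values:

```python
def weibull_hazard(t: float, shape: float, scale: float) -> float:
    """Weibull hazard rate h(t) = (shape/scale) * (t/scale)**(shape - 1).
    shape < 1: decreasing hazard (infant mortality);
    shape = 1: constant hazard (useful life);
    shape > 1: increasing hazard (wear-out).
    A latent software defect has no such time dependence: it is present
    in every instance from the moment the software ships."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# Illustrative check of two bath-tub regions (arbitrary parameters):
early_failures_decay = weibull_hazard(1.0, 0.5, 100.0) > weibull_hazard(10.0, 0.5, 100.0)
wear_out_grows       = weibull_hazard(10.0, 3.0, 100.0) > weibull_hazard(1.0, 3.0, 100.0)
```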
 

Watchcat

Trusted Information Resource
#40
Theoretically at least, when a hardware medical device wears to the point of failure, the device no longer meets specifications, and therefore it is no longer the medical device, so the medical device did not fail. The length of time/amount of use necessary for a medical device to wear to the point of no longer being itself is addressed in wear and aging studies. Have similar studies been done with software? What about studies to confirm that the software cannot be corrupted via download or copying (i.e., that any defect must be in all instances of the software)?
 