Performance specification as a Risk Control Measure, EN 14971

DamienL

Involved In Discussions
#1
This is my 2nd post trying to decipher EN14971, so thanks to all for helping me out so far. My current problem is that I'm trying to construct a design FMEA but am totally confused about what type of risk control measures I have. Here's an example to explain my problem:

On my medical device, I have a polymer tube bonded to a plastic handle. There is potential for this bond to break, which would present a serious enough risk of harm to the patient. To mitigate this I have already identified the best known adhesives for the two materials, the optimal bond gaps, surface roughnesses, etc. - these will all be specified on engineering drawings for the device. I have also developed a tensile specification for the bond between the two components that will be tested during Design Verification.

So to summarise: User need (derived from Risk Analysis) = device doesn't break into two pieces. Corresponding Design Input = tensile specification for the joint between the two parts. Corresponding Design Output = engineering specifications describing the adhesive, bond gaps, bond overlap, surface finishes, etc.

My question is how to present the tensile specification in my Design FMEA. I've included an excerpt from our template. To me it seems that the tensile spec is the appropriate Risk Control Measure here, but maybe it's the bond design? Either way, EN 14971 doesn't seem to include tensile or other performance specifications as a control measure. It only talks about eliminating the risk altogether (inherent safety by design) or adding alarms/guards/shut-offs (protective measures) - and none of these match the type of control that I actually have.

Any thoughts on whether this type of control is a) or b) in the Risk Control section of the attached, and how best to present it in a Design FMEA? Thanks. DFMEA_1.JPG

Tidge

Trusted Information Resource
#2
I have some advice, but I want to disclose up front that my preference for distinguishing between "Inherent By Design (IBD)" and "Protective Measure in the Design (PMD)" is:

IBD: A characteristic inherent to the design concept, such that no matter which specific implementation is chosen, the risk control is present with the same effectiveness.
PMD: A specifically implemented characteristic of the concept, used as a risk control, such that different implementations of the characteristic could have different levels of effectiveness.

Different people have different approaches to IBD/PMD, and some will no doubt follow up here. In my practical experience there is a risk control spectrum IBD<->PMD that shifts depending on the approach taken for identifying the risks to be analyzed. I have settled on this position because, in my experience, it allows for a meaningful use of IBD and PMD in the sort of worksheet you have provided. I will also reference "Protective Measure in Manufacturing (PMM)" as a specific subset of PMD. Some elements of PMM (e.g. process validation) are best explored in a pFMEA, but I believe a dFMEA will motivate the process elements.

It is possible to have the risk control measure reference a specifically established (sub)requirement for tensile strength of the assembly. For example:

PMD: Subassembly exceeds tensile strength requirements (SUBREQ20200107)​

For folks who don't establish such requirements, or who want to see more engineering thought, my recommendation would be to identify the risk control measure(s) that reflect the engineering process of addressing the failure mode:

PMD (1): Individual mating surfaces designed to exceed tensile strength​
PMD (2): Adhesive chosen to exceed tensile strength​
PMM (3): Adhesive cure time established to exceed tensile strength​

Then each risk control should have verification(s) of implementation (VI), which are in the specs/work instructions:

VI (1a): drawing D0001 of 1st part (referencing material, dimensions, and finish)​
VI (1b): drawing D0002 of 2nd part (referencing material, dimensions, and finish)​
VI (2a): Specification S0010 for adhesive​
VI (3a): Manufacturing Work Instruction MWI4321 (referencing section 3.4 for cure time)​

Then each verification of implementation needs to feed into a verification of effectiveness (VE). If you intend to claim a reduction of occurrence you need VE; if you don't intend to claim a reduction, VE is essentially an academic exercise. I suppose an implementation might make the occurrence worse, so I don't stress this point too much.

VE (1a, 1b, 2a, 3a): Lab Study L0002-2020​

In my opinion, the risk file is doing much more than 'showing compliance to 14971'; the risk file provides a road map for the investigation of complaints, non-conformances and alternate design choices. If you were to get complaints that the tube is detaching from the handle, it should be obvious to an investigator which elements of the design were chosen to prevent this occurrence (VI), and what evidence was generated that convinced the design team that this wouldn't happen (VE). It would also be obvious from the risk file if the identified risk control turned out to be inappropriate (or incomplete) for the failure mode. (E.g. the tensile strength requirement is too low, or something other than tensile strength is required as a risk control.)
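The traceability described above (risk control -> VI -> VE) can be sketched as a simple data structure. This is only an illustration of the idea, reusing the hypothetical document IDs from the post (D0001, S0010, MWI4321, L0002-2020); none of these are real records, and a real risk file would of course live in a controlled system, not a Python dict.

```python
# Hypothetical traceability map: each risk control (PMD/PMM) points at its
# verifications of implementation (VI) and effectiveness (VE).
RISK_CONTROLS = {
    "PMD-1": {
        "description": "Individual mating surfaces designed to exceed tensile strength",
        "vi": ["D0001", "D0002"],       # drawings of the two parts
        "ve": ["L0002-2020"],           # lab study verifying effectiveness
    },
    "PMD-2": {
        "description": "Adhesive chosen to exceed tensile strength",
        "vi": ["S0010"],                # adhesive specification
        "ve": ["L0002-2020"],
    },
    "PMM-3": {
        "description": "Adhesive cure time established to exceed tensile strength",
        "vi": ["MWI4321 sec. 3.4"],     # work instruction, cure time
        "ve": ["L0002-2020"],
    },
}

def documents_for_investigation(control_ids):
    """Collect every VI/VE record an investigator would pull for the given
    risk controls, e.g. after a tube-detachment complaint."""
    docs = set()
    for cid in control_ids:
        ctrl = RISK_CONTROLS[cid]
        docs.update(ctrl["vi"])
        docs.update(ctrl["ve"])
    return sorted(docs)
```

The point of the structure is exactly the "road map" argument above: given a failure mode, the map immediately yields which design choices were meant to prevent it and what evidence backed them.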

yodon

Leader
Super Moderator
#3
Certainly agree with the last paragraph in the post @Tidge made. I don't disagree with the rest of the post but have other thoughts. First, I suggest you reconsider the effect to be the harm to the patient (or users, or the environment...). This will help better frame the rest of the risk, I believe (does inability to operate the device really indicate a severity score of 3?). Second, what are the use conditions? How much force is being applied to the device that could cause the separation? Knowing how much force is applied will give you a basis for the control (hold under 2x the force?) and will enable you to claim the control is effective in mitigating the risk (a specified value on its own doesn't relate back to the risk, so the mere existence of a spec cannot show effectiveness).
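The "hold under 2x the force" idea can be made concrete with a back-of-envelope calculation. The numbers below are entirely made up for illustration; the point is only that the tensile spec should be derived from measured or estimated use conditions, so that meeting the spec actually demonstrates effectiveness against the risk.

```python
# Hypothetical use-condition figures - not from the thread.
WORST_CASE_USE_FORCE_N = 15.0   # assumed worst-case pull on the tube during use
SAFETY_FACTOR = 2.0             # the "2x the force" suggestion above

# The tensile spec is derived from the use condition, not picked in isolation.
required_bond_strength_n = WORST_CASE_USE_FORCE_N * SAFETY_FACTOR

def spec_is_effective(verified_strength_n):
    """A tensile spec only mitigates the risk if the verified bond strength
    exceeds the worst-case use force with the chosen margin."""
    return verified_strength_n >= required_bond_strength_n
```

With these assumed numbers the required bond strength is 30 N; a Design Verification result of 35 N would support the effectiveness claim, while 25 N would not, even if 25 N happened to satisfy some spec chosen without reference to use conditions.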

Does the system have any means to detect when separation occurs (possible protective measures) - either partially (leaks?) or fully?

Do you need to tell the user to not apply force when using the device or not use it in a manner that would likely exceed the safe force limits (information for safety)?

In this case, there should indeed be a tie-in to the production process as @Tidge mentions and the bonding process should be validated to ensure it will effectively implement the control.
 

DamienL

Involved In Discussions
#4
my preference for distinguishing between "Inherent By Design (IBD)" and "Protective Measure in the Design (PMD)" is:

IBD: A characteristic inherent to the design concept, such that no matter which specific implementation is chosen, the risk control is present with the same effectiveness.
PMD: A specifically implemented characteristic of the concept, used as a risk control, such that different implementations of the characteristic could have different levels of effectiveness.

I really like this approach for distinguishing the two, as it removes any ambiguity and is very easy to get your head around. However, the standard lists three ways of designing for inherent safety: eliminating the hazard altogether, reducing the probability of occurrence, and reducing the severity. Doesn't the second one conflict with your definition of IBD above? For example, a better bond design will reduce the probability of occurrence, so wouldn't that make the design of a better bond "inherent safety by design" rather than a "protective measure"? I know this might seem like mere semantics, as the end result will probably be the same, but I do think it's important if we want a consistent approach to Design FMEAs across our organisation.
 

Tidge

Trusted Information Resource
#5
I don't have a specific answer to the very good questions immediately above!

One of the 'burrs' I encounter in 14971 is the definition of Severity = "measure of the possible consequences of a hazard". The systems I have most of my experience with have shortcut (over-simplified, perhaps) this exact definition by assigning a 'Severity' rating to Harms (also defined in the standard, distinct from hazards). An example might be "permanent loss of vision" assigned a specific numeric value on the severity scale. This approach is relatively easy to explain and straightforward to apply across multiple product lines, but once adopted, such a system has essentially lost the ability to reduce Severity. If blame is to be assigned, I am tempted to point a finger at figure C.1 in the informative annex of 14971, as it explicitly includes the term 'severity of harm'.

In the Risk Management system I use, there is a hierarchy with a master Hazard Analysis supported by subordinate files (Software HA, design/use/process FMEA). The Hazard Analysis is generally organized by hazards, with each line of analysis more or less a unique harm... and in this approach the assignment of severity has the advantage that the extent and nature of the harm(s) will be visible in the subordinate documents. However, as I concluded in the paragraph above, if the severity is assigned at the top HA level, it is not logically possible to claim a reduction of severity in a subordinate document. Keep in mind that FMEAs strictly analyze the effects of failure modes.

I can think of complicating factors when using a somewhat strict definition of IBD = "eliminating the hazard altogether". Most professionals would agree that it would border on absurdity to list risks from hazards that are not physically present in a device and then claim 'safety' (vis-a-vis those hazards) because "IBD, our device doesn't use that method". For example: "risks of users being overcome by exhaust fumes are eliminated because we didn't include a combustion engine in the design of the dental equipment." The risk files would end up very diluted (and impractical for investigative purposes) if such lines were present.

Another complicating factor in using IBD as the identifier of a chosen risk control: if the risk control for "better bond design" is an IBD control and it fails (or the design choice is later implicated in a failure, or simply does not provide the level of risk reduction that is claimed), what could that imply for the rest of the risk assessments? In such circumstances I personally wouldn't think there was evidence that the risk control was "inherent" in the implemented solution; instead I would think that the engineering choice of the adhesive was a specific implementation of a protective/preventative measure. I suppose I believe there is a subtle difference between "going back to the drawing board" and "going back to the stockroom".

All that being written: we do use both IBD and PMD in our risk files, but it is not always clear-cut which is more appropriate for certain risk controls. We try to apply critical thinking when applying our choice of identifiers and I encourage all teams involved with risk assessments to do the same.
 

DamienL

Involved In Discussions
#6
this system has essentially lost the ability to reduce Severity
I totally understand this - I don't think a reduction in severity is going to happen too often. I was focusing more on the reduction in occurrence by, for example, improving the bond design. And that doesn't seem to fit into your guideline above for what is IBD.

However, I take your point about the implications for your whole FMEA if you're stating "inherent by design" all over the place and subsequently get a failure. So I think I'll stick with your rule of thumb for IBD vs Protective Measure. I'm not sure it's a perfect fit with EN 14971, but it's the best I've seen so far for getting some consistency into the process. Thanks for your time.
 

Jean_B

Trusted Information Resource
#7
TLDR: P2 is an array [probability of low severity harm, probability of medium severity harm, probability of high severity harm], and unitless. P1 has a unit and scales the distribution of P2. You lose the distinction and its associated advantages if you simplistically collapse P2 into a single summed likelihood and link it to the maximum possible severity.
---
A concept, which is both intuitively and informatively (guidance-level) clear:
Take the relevant core definitions (publicly available through ISO OBP):
A) harm: injury or damage to the health of people, or damage to property or the environment
B) hazardous situation: circumstance in which people, property or the environment is/are exposed to one or more hazards
C) risk: combination of the probability of occurrence of harm (A) and the severity (D) of that harm (A)
D) severity: measure of the possible consequences of a hazard (E)
E) hazard: potential source of harm

Now amend your interpretation of C to result in an array of values, which is as big as the number of severity categories you employ within your risk management.
Let's take a (further disconnected; I'm not going to build a full risk management system here) six-bin scale of severity:
  1. No notable harm
  2. Superficial harm
  3. Partial healing
  4. Full but prolonged and incomplete (scarring) healing
  5. Permanent functional impairment
  6. Death (strictly speaking not a harm if you go nitty-gritty, but hey, standards aren't perfect and let's be pragmatic)
Now, given a (thermal?) hazard, construct an empty risk array: [null, null, null, null, null, null] (six times no value; I won't go into the differences between null, nil, zero, not available and such).
Any number of causes; the hazard could be kept general ("hot") or split into "hot liquid" and "hot surface". The sequence of events might include length of exposure, etc. Are the splits useful? Most likely only when you take into account how the design can control their levels somewhat independently.

Your risk management team, including the ever-relevant clinical expertise, assesses the probabilities (for hot liquid and hot surface).
Read these as: given that the harm (of a burn) occurs, what are the likelihoods of each severity of outcome? (P2. No particular reason ;) )
Results: the hazard arrays (These must sum to one.):
Hot Liquid[0, 0.1, 0.2, 0.5, 0.15, 0.05];
Hot Surface[0.4, 0.4, 0.19, 0.01, 0, 0];
Compared (roughly) visually:
_ - -≡=_
==-_ _ _

But wait: we forgot the occurrence of harm, which is an a priori condition prior to the casino of life distributing the severity (P1. No particular reason, again ;) ). If your scale for P1 is well thought out (i.e. normalizable to per service year, or per 1000 devices - basically any reasonably accurate quantity you can realistically normalize your statistics against), you can link post-market surveillance directly to your risk analysis. Then your feedback will show whether it is your distribution (P2) or your assessment of the occurrence of the hazardous situation (P1) that needs adjustment.

Your design and development department could act on the priority of severity of harm. Let's say additional screens are placed between the possible exit points for hot liquid in single-fault condition (ah, base assumptions of risk management, justify thee well) and all probable locations of users of the device. Let's imagine they're placed to prevent splashing on the most lethal areas entirely, and to reduce the likelihood of exposure a bit. Necessarily this moves the distribution (independent of actual occurrence; remember, we're dealing with the given presence of exposure at the moment) to the left.
Design assessed result:
Hot Liquid[0.2, 0.25, 0.2, 0.3, 0.05, 0]; (it was [0, 0.1, 0.2, 0.5, 0.15, 0.05];)
Hot Surface[0.4, 0.4, 0.19, 0.01, 0, 0];
Compared (roughly) visually:
_ - -≡=_ (what hot liquid was)
-===__ (what hot liquid is newly assessed with control; skipping over implementation and effectiveness verification steps for clarity)
==-_ _ _ (hot surface, still the same)
Yes, we're accepting a relatively higher occurrence of lower-degree harms for the benefit of practically eliminating the high-severity harms.
Funnily enough, this means that you've also reduced, under the single-fault assumption, the maximum severity you're likely to encounter, by adjusting the distribution to exclude the high-severity outcomes (no stupid user continuing to jump into the new, slowly growing hot liquid puddle).

Perhaps foreseen, perhaps unsuspected, you could see a change in P1 as well, which would affect the total number of occurrences you'd get, independent of the distribution of severity of the outcomes. Whether you analyze that deep is up to you. Note that in the split version only P1 has a unit; P2 is merely a probability distribution without a unit. P derives its unit (/year, /use) from P1.
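The P1/P2 split above can be sketched numerically. The P2 arrays are the illustrative burn-severity distributions from this post; the P1 value is a made-up occurrence rate (per 1000 device-years) added here only to show how the unit flows from P1 into the combined figure.

```python
# P2: unitless severity distributions over bins 1..6, each summing to 1.
P2_HOT_LIQUID_BEFORE = [0.0, 0.1, 0.2, 0.5, 0.15, 0.05]
P2_HOT_LIQUID_AFTER  = [0.2, 0.25, 0.2, 0.3, 0.05, 0.0]   # screens added
P2_HOT_SURFACE       = [0.4, 0.4, 0.19, 0.01, 0.0, 0.0]

# P1: hypothetical occurrence of the hazardous situation, per 1000 device-years.
P1_HOT_LIQUID = 0.5

def check_distribution(p2):
    """P2 must be a valid unitless distribution: non-negative, summing to 1."""
    assert all(p >= 0 for p in p2)
    assert abs(sum(p2) - 1.0) < 1e-9

def severity_rates(p1, p2):
    """Element-wise P1*P2: expected harms per 1000 device-years, per bin.
    The unit comes entirely from P1."""
    return [p1 * p for p in p2]

def expected_severity(p2):
    """Mean severity bin (1-indexed), given that the harm occurs."""
    return sum((i + 1) * p for i, p in enumerate(p2))
```

With these numbers the screens shift the mean hot-liquid severity from 3.85 down to 2.75 without touching P1, which is exactly the "accept more low-degree harms to eliminate the high-severity ones" trade described above.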

If you're really good you've excluded all risk, so that P2 results in [1, 0, 0, 0, 0, 0] ≡_ _ _ _ _. Give yourself a cookie. Your risk mitigation is not included in your design output, as it's not a risk for your device at all (completely negligible severity means socially acceptable within the norms and morals of the communities you distribute to).
But then it would not be in your design input. So it would not be in the characteristics of your medical device. So why is it in your risk management file? (Answer: iterative design. It was a risk in some previous design iteration, and you are remembering not to make that dangerous choice again by introducing a prohibition (a requirement to not do something) into your customer requirements.)

Note: the severities (roughly) match those of burn wounds. The probabilities don't (instructiveness above accuracy in this case).
The far more difficult question would be the attribution of (final) clinical conditions to specific yet related harms (a burn directly, smoke due to fire due to a faulty device, infection from skin exposed by the burn, or some other consequent harm) in your feedback mechanisms/post-market surveillance.
The medium-difficulty one (more effort than difficulty, really) is constructing the (mathematical) graph, as there are multiple connections from a single hazard node to multiple hazardous situations, and from a hazardous situation to multiple harms; and, in reverse, from a harm to the multiple hazardous situations that can cause it, and from a hazardous situation to the multiple failure modes of the system that can give rise to it.

To keep on-topic:
The type of control (design, protection, information) is merely an indication of how it tends to transform the distribution (P2) and the likelihood of exposure (P1), plus the preference of politicians, made to reflect the (loudest) voice of the people: make stuff as foolproof as possible, for fools are ingenious.
 

akp060

Involved In Discussions
#8
I can only give a qualitative comment. When we discuss FMEA/FMECA, we also have the element of Detectability. Contrary to an HA, where a PMD can reduce the impact of the hazard in terms of probability, in an FMEA it increases detectability, and hence can reduce the occurrence of the failure.

In your case, for regulatory purposes, you should consider "inherent safety by design" as a measure that improves your design effort for the given design input (which is the tensile strength, as mentioned), assuming you have a number for that in mind. As far as your concern about "protective measures" goes, please note that protective measures can be in the device or in the manufacturing process (which in your case is the bond design, surface finishes, etc.), as per Annexes ZA, ZB and ZC in EN ISO 14971:2012. So yes, you can cite them altogether as far as they are scientifically related to tensile strength (sorry, no idea about adhesive technology!).
 