At what level (harm, hazardous situation, seq. of events, etc) is "risk" estimated?

Hi_Its_Matt

Involved In Discussions
#1
I suspect this is one of those scenarios where different companies do things differently, but I figured what better way to confirm suspicion than to put it out to the cove.
At what "level" (i.e. hazard, harm, hazardous situation, or sequence of events) is risk estimated and evaluated for acceptability?

(I want to acknowledge that:
  1. This question is basically a repeat of this one, but that was from 2017, and I'm curious to see if anything about the changes in the 2019 version of the standard or its guidance document has caused any changes in thought/approach.
  2. There seem to be differences in interpretation as to what qualifies as a harm, a hazard, a hazardous situation, or a particular sequence of events (as evidenced by this thread), and
  3. It can be difficult to discuss these topics in the abstract, without intimate knowledge of the particulars of a specific device or use scenario/context
Having said all that, I think it's still a worthwhile discussion.)

Context:
The definition of Risk, per 14971:2019, is "the combination of the probability of occurrence of harm and the severity of that harm."

The standard states, in Section 5.5 Risk Estimation, "For each identified hazardous situation, the manufacturer shall estimate the associated risk(s) using available information or data."

I interpret this to mean that the manufacturer must estimate the probability of each hazardous situation occurring (P1), so that they can estimate the risk associated with that particular hazardous situation. (To further simplify this discussion, I'm going to assume that P2 = 100%; that is, when any particular hazardous situation occurs, it will always result in the occurrence of a harm. I know this is an invalid and dangerous assumption in real life.)

Practically speaking, this means that if there are theoretically only 2 harms associated with a particular device, and only 4 unique hazardous situations in which harm can occur, then the risk associated with each of the 4 situations must be analyzed independently. (See the top table in the attached image.)

However, it also seems highly beneficial (and I think this is what the Overall Residual Risk section is getting at) to consider the combined probability of a specific harm occurring given all the different hazardous situations in which it can occur. (Bottom table in the image.) Analyzing risk at this level seems more in line with the definition of risk, yet seems at odds with the requirement in Section 5.5.

Lastly, it seems to me that the latter approach would make analysis of post-market surveillance data much easier than the former, as PM data and investigations will almost certainly indicate whether a particular harm occurred and what type of harm it was, but may not always specify what actually caused the harm.
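To make the two tabulations concrete, here is a minimal sketch with made-up harm names, situation labels, and P1 values (not the ones in the attached image), assuming P2 = 100% and independent hazardous situations:

```python
def combined_probability(p1_values):
    """P(harm occurs via at least one situation), assuming independence."""
    p_none = 1.0
    for p1 in p1_values:
        p_none *= 1.0 - p1
    return 1.0 - p_none

# Top table: risk estimated per hazardous situation (the Section 5.5 view).
# All names and numbers below are hypothetical.
per_situation = {
    "Harm A": {"HS1": 0.02, "HS2": 0.01},
    "Harm B": {"HS3": 0.005, "HS4": 0.002},
}

# Bottom table: combined probability that each harm occurs at all
for harm, situations in per_situation.items():
    combined = combined_probability(situations.values())
    print(f"{harm}: combined P = {combined:.4f}")
```

Note the combined value is slightly below the naive sum of the individual P1s, since independent situations can overlap.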

So, given all that, I return to the actual question: what is your interpretation and approach for estimating and evaluating risks?

Risk Estimation & Evaluation.PNG
 

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
#2
Just to comment on the intent of estimating the risk: risk control is a living process. If you have sold a large volume of product and the feedback indicates the risk you suspected is more or less serious than estimated, the documentation should change. The same goes for probability.

List of risks --> Analysis --> Acceptable --> DONE
Not acceptable --> Mitigate to acceptable --> DONE
Cannot mitigate to acceptable --> Conduct risk-benefit analysis to determine acceptance
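That flow can be sketched as a tiny decision function (hypothetical names; the real "acceptable" and "mitigate" steps are documented judgments, not code):

```python
def disposition(risk, acceptable, mitigate, benefit_outweighs_risk):
    """Walk one risk through the acceptability flow; return the outcome."""
    if acceptable(risk):
        return "DONE"                     # acceptable as analyzed
    risk = mitigate(risk)
    if acceptable(risk):
        return "DONE"                     # mitigated to acceptable
    # Cannot mitigate to acceptable: risk-benefit analysis decides
    return "accepted via RBA" if benefit_outweighs_risk(risk) else "not acceptable"
```

The callables stand in for whatever acceptability table and control measures a given risk file actually uses.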
 

yodon

Leader
Super Moderator
#3
risk associated with each of the 4 situations must be analyzed independently
Correct.

consider the combined probability of a specific harm occurring given all the different hazardous situations in which it can occur
I did get a little lost here. Are you NOT considering the controls applied to determine acceptability? The approach @Ed Panek mentions (and please correct me if I misunderstood) does not take into account the :2012 content deviation (specific to the EU harmonized version) to reduce ALL risks to the greatest extent possible.

Another content deviation says to consider the benefit-risk analysis for ALL individual risks but I think pretty much everyone has backed down from that (we still use the 3-tiered acceptability table and would do a benefit-risk analysis for anything above generally acceptable). I could see possibly considering the combined probability as you describe as a means to approach this content deviation.

I think, though, that an assessment of the probabilities post-control would be more informative.

As you note, 14971 does require that you conduct an overall benefit-risk analysis but isn't exactly prescriptive in the approach. They do provide some approaches to consider. This is at the system level and wouldn't drill down to specific risks.
 

Tidge

Trusted Information Resource
#4
I don't know if this will be at all clear, but I will try to explain my preference. The top level document is called a "Hazard Analysis". I group Hazards into different sections with subsections as appropriate. Think a section for "mechanical hazards", with potential subsections for "entrapment", "drop", etc.

The beginning of each line of analysis starts with the Hazardous Situation (with a tie to appropriate use cases) and then the (various) Harms. These are used to assess/assign values for (S)everity, P1 and P2. I like to have individual lines up to this point for (related) reasons:
  • rarely occurring, high-severity harms may be a priori more acceptable than frequently occurring, low-severity harms
If you try to play the game of "just go with the 'worst' rating for whatever" you lose visibility and discriminating power when deciding on risk controls.
  • Periodic review of risks has the possibility of changing P1, P2 ratings and can reveal new harms
Trying to minimize lines early clouds later assessments. Declining to make updates because a team "thinks" the existing lines already explain customer complaints is often indistinguishable from simply not making updates. I also find it reassuring when the complaints investigations team knows "how the harm occurred" and can search via use case. I've found it to be a little trickier for complaints investigators to figure out other elements.

This was not asked, but I fully support "merged cells" when the same risk controls apply to (and improve the ratings of) a common group of risk analysis lines.
 

Hi_Its_Matt

Involved In Discussions
#5
Thank you @Tidge for your overview.
I don't know if this will be at all clear...
You were quite clear, and what you described aligns with my understanding and my historic practice.

What brought this question about was I recently reorganized a hazard analysis to be grouped by harm (as shown in the images above), rather than by hazard. (In hindsight, what the images don't capture is the fact that one hazard can often result in multiple different harms, oftentimes of different severity... but I digress.)
That then made me wonder whether it made more sense for the probability-of-occurrence estimate to be associated with the overall occurrence of a harm (bottom of the two tables), rather than the occurrence of a harm in the context of a particular hazardous situation (top of the two tables).

My thought behind this modified arrangement was two-fold: 1) it enables an easy understanding of the harms associated with the device, and comparison of their relative likelihoods, and 2) it may make analysis of post-market data easier, because we could easily compare reports of harm against our probability-of-occurrence estimates. (But this particular project is still somewhat early in design, and quite a ways away from commercial release, so there is time to hash out those details later.)


To answer @yodon's question:
Are you NOT considering the controls applied to determine acceptability?
We absolutely evaluate risks "post mitigation", taking into account the risk controls implemented.

What I was trying to get at is, if we are estimating and evaluating the acceptability of risk at the hazardous situation level, then how/where do we consider the total combined likelihood of a particular harm occurring?

Say, for example, a particular harm can result from three distinct hazardous situations. And let's assume that each of those three hazardous situations will occur on average 2 times out of every 100 uses of the device. That means that (again assuming P2 = 100% for simplicity's sake) in 100 uses of the device, the harm will occur on average 6 times (2 occurrences each of 3 different situations). Is it possible that 6 times out of 100 uses could be seen as unacceptable when looking at the total combined risk of the harm occurring, even if 2 out of 100 uses is acceptable for each of the individual hazardous situations? Or would this indicate some kind of logical breakdown in either our analysis or our acceptability criteria? Where is the acceptability of that combined risk evaluated? IS it evaluated?

I'm assuming it would be considered when performing the overall residual risk analysis, but I've never actually been intimately involved with one of those analyses before.
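The arithmetic in that example can be sketched directly (same assumptions as above: P2 = 100%, independent situations). Note the distinction between the expected number of harm occurrences, which adds up exactly, and the probability of at least one harm per use, which is slightly below the naive sum:

```python
# Three independent hazardous situations, each occurring 2 times per 100 uses
p1 = 2 / 100
n_situations = 3

# Expected number of harm occurrences per 100 uses (expectations add)
expected_per_100_uses = p1 * n_situations * 100

# Probability that at least one harm occurs in a single use
p_at_least_one = 1 - (1 - p1) ** n_situations

print(round(expected_per_100_uses))   # → 6
print(round(p_at_least_one, 4))       # → 0.0588
```

At these small probabilities the two views nearly coincide (0.0588 vs. 0.06), but the gap grows as individual P1s get larger.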
 

Tidge

Trusted Information Resource
#6
What I was trying to get at is, if we are estimating and evaluating the acceptability of risk at the hazardous situation level, then how/where do we consider the total combined likelihood of a particular harm occurring?
I don't think this is a well-formed (in the mathematical sense, no personal slight intended!) problem. Risk assessments (certainly those that include "(S)everity") are fundamentally qualitative assessments. I don't want to discourage innovation in this area; I have a somewhat deep appreciation for efforts to improve the state of knowledge from diverse lines of analysis. Setting aside the qualitative argument and just looking at the math: most ratings (S, P1, P2 or S, O, D) are integers that, when folded/multiplied, forbid certain values, but combining likelihoods is generally most informative when done with continuous functions. (Link to the PDG provided for general info, as well as for some insight on "counting" experiments, including ones with null results.) There is also the issue that the final decision point for acceptability is itself arbitrary.
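The point about folded integer ratings forbidding certain values can be seen directly. Assuming, purely for illustration, 1-5 scales for S, P1, and P2 combined by multiplication:

```python
from itertools import product

# All products S * P1 * P2 with each rating on a 1-5 integer scale
achievable = sorted({s * p1 * p2
                     for s, p1, p2 in product(range(1, 6), repeat=3)})

# Many values in 1..125 can never occur: any product with a prime
# factor larger than 5 (7, 11, 13, ...) is unreachable
print(7 in achievable)    # → False
print(125 in achievable)  # → True
```

The achievable set is sparse and unevenly spaced, which is one reason treating such products as a continuous risk measure is shaky.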

If I wanted to rationalize my personal reasons for not investigating this (intellectually interesting) approach I would appeal to the Pareto principle: from my PoV, nearly all of what I require from risk management (both in the regulatory sense, and in the public-health mission sense) can be done without trying to introduce (more complicated?) mathematical techniques... or flipped around... "that would be a lot of effort for little reward."

Maybe I am overthinking the suggested approach?
 

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
#7
You are correct. Risks must be reduced as far as possible. For example, wearing a lead vest during X-rays doesn't reduce the radiation to zero, but it does mitigate it.
 

Hi_Its_Matt

Involved In Discussions
#9
Take a look at section 7.4 in 24971:2020.
Thank you. I am familiar with the benefit-risk analysis, but understood this to be required only for risks which, after mitigation, are still deemed unacceptable. My thought was more around a scenario in which the individual risks were acceptable, but a higher-level assessment of the total harm was somehow unacceptable.
(Ex. I try to avoid driving near construction sites because I don't want to get a flat tire. Sure, the risks of getting a flat tire due to a nail, a screw, a rock, a piece of rebar, etc. are all sufficiently low individually, but the overall likelihood of at least one of those happening is high enough that I try to avoid construction sites.)

@Tidge you may be overthinking it a bit. (Although I'm likely overcomplicating things as well).
The real initiator is we were getting information from our client's medical officer and physician consultant along the lines of "My experience with similar devices on the market is that XXX complication/adverse event happens about ### times in a thousand surgeries, so I would expect your new device to be in that ballpark or better."
Which then led to the discussion of "should we be looking at risks/probabilities at the hazardous situation level, or at the harm level?"

Again, going back to the definitions, risk is defined as "the combination of the probability of occurrence of harm and the severity of that harm."
If a device can cause a burn, what is the severity of that burn? What is the probability that a burn happens? Those two things together determine the "risk" associated with "burn."
If there is only one hazardous situation that can cause a burn, then the analysis is straightforward.
But if there is more than one hazardous situation that can cause a burn, then the analysis could be more complex. (See final question below).
It seems like you would want to evaluate the risk of burn occurring due to each hazardous situation alone, as well as evaluate the overall risk of burn occurring taking all possible situations into account.

Honestly, the more I think about it, the more I'm convinced that the 2nd part of that is actually the purpose of the overall residual risk analysis, and we just ended up a bit too far down the risk-management rabbit hole trying to take a "pseudo-quantitative" approach within the hazard analysis, rather than a more qualitative approach within a standalone document.

So hopefully a final related question, is it appropriate/possible for one specific harm to be caused by more than one hazardous situation? Or would this indicate some breakdown in the risk management process (perhaps an overly broad harm, or overly specific hazardous situations)?
 

Tidge

Trusted Information Resource
#10
So hopefully a final related question, is it appropriate/possible for one specific harm to be caused by more than one hazardous situation?
Absolutely. I would need to know if the burn came from something overheating (like an exposed halogen bulb) or from excess electrical current.
 