At what level (harm, hazardous situation, seq. of events, etc) is "risk" estimated?

Hi_Its_Matt

Involved In Discussions
I suspect this is one of those scenarios where different companies do things differently, but I figured there's no better way to confirm that suspicion than to put it out to the Cove.
At what "level" (i.e. hazard, harm, hazardous situation, or sequence of events) is risk estimated and evaluated for acceptability?

(I want to acknowledge that:
  1. This question is basically a repeat of this one, but that was from 2017, and I'm curious to see whether the changes in the 2019 version of the standard or its guidance document have prompted any changes in thought/approach.
  2. There seem to be differences in interpretation as to what qualifies as a harm, a hazard, a hazardous situation, or a particular sequence of events (as evidenced by this thread), and
  3. It can be difficult to discuss these topics in the abstract, without intimate knowledge of the particulars of a specific device or use scenario/context
Having said all that, I think it's still a worthwhile discussion.)

Context:
The definition of Risk, per 14971:2019, is "the combination of the probability of occurrence of harm and the severity of that harm."

The standard states, in Section 5.5 Risk Estimation, "For each identified hazardous situation, the manufacturer shall estimate the associated risk(s) using available information or data."

I interpret this to mean that the manufacturer must estimate the probability of each hazardous situation occurring (P1), so that they can estimate the risk associated with that particular hazardous situation. (To further simplify this discussion, I'm going to assume that P2 = 100%; that is, when any particular hazardous situation occurs, it will always result in the occurrence of a harm. I know this is an invalid and dangerous assumption in real life.)

Practically speaking, this means that if there are theoretically only 2 harms associated with a particular device, and only 4 unique hazardous situations in which harm can occur, then the risk associated with each of the 4 situations must be analyzed independently. (See top table in the attached image.)

However, it also seems highly beneficial (and I think this is what the Overall Residual Risk section is getting at) to consider the combined probability of a specific harm occurring given all the different hazardous situations in which it can occur. (Bottom table in the image.) Analyzing risk at this level seems more in line with the definition of risk, yet seems at odds with the requirement in Section 5.5.
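To make the two views concrete, here's a minimal sketch (the situation names and probabilities are hypothetical, not taken from the attached image). With P2 assumed to be 100%, the per-situation probability of harm is just P1; the "bottom table" view then combines the P1 values of every hazardous situation that can lead to the same harm, assuming the situations are independent:

```python
from collections import defaultdict

# Hypothetical per-hazardous-situation estimates (P2 assumed = 100%, as above).
# Each row: (hazardous_situation, harm, P1 = probability the situation occurs per use)
rows = [
    ("HS-1", "Harm A", 0.02),
    ("HS-2", "Harm A", 0.01),
    ("HS-3", "Harm B", 0.005),
    ("HS-4", "Harm B", 0.02),
]

# "Top table" view: risk estimated row by row, per hazardous situation.
for hs, harm, p1 in rows:
    print(f"{hs}: P(harm '{harm}' via this situation) = {p1:.3f}")

# "Bottom table" view: combined probability of each harm across all the
# hazardous situations that can lead to it (assuming independence).
prob_no_harm = defaultdict(lambda: 1.0)
for _, harm, p1 in rows:
    prob_no_harm[harm] *= (1.0 - p1)  # probability the harm does NOT occur via this situation

for harm, p_none in prob_no_harm.items():
    print(f"{harm}: P(occurs via at least one situation) = {1.0 - p_none:.3f}")
```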

Lastly, it seems to me that the latter approach would make analysis of post-market surveillance data much easier than the former approach, as PM data and investigations will almost certainly contain information related to whether or not a particular harm occurred, and what type of harm occurred, but may not always specify what actually caused the harm.

So, given all that, I return to the actual question: what is your interpretation and approach for estimating and evaluating risks?

 

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
Just to comment on the intent of estimating the risk: risk control is a living process. If you have sold a large volume of product and the feedback indicates the risk you suspected is more or less serious than estimated, the documentation should change. The same goes for probability.

List of risks --> Analysis --> Acceptable --> DONE
Not Acceptable --> Mitigate to acceptable --> DONE
Cannot mitigate to acceptable --> Conduct Risk Benefit Analysis to determine acceptance
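
As a rough sketch, that flow could be expressed like this (purely illustrative; the inputs would come from your own acceptability criteria):

```python
def disposition_risk(acceptable_as_analyzed: bool,
                     can_be_reduced_to_acceptable: bool) -> str:
    """Illustrative sketch of the acceptance flow described above."""
    if acceptable_as_analyzed:
        return "Acceptable -> done"
    if can_be_reduced_to_acceptable:
        return "Apply risk controls until acceptable -> done"
    return "Conduct a benefit-risk analysis to determine acceptance"
```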
 

yodon

Leader
Super Moderator
risk associated with each of the 4 situations must be analyzed independently

Correct.

consider the combined probability of a specific harm occurring given all the different hazardous situations in which it can occur

I did get a little lost here. Are you NOT considering the controls applied to determine acceptability? The approach @Ed Panek mentions (and please correct me if I misunderstood) does not take into account the :2012 content deviation (specific to the EU harmonized version) to reduce ALL risks to the greatest extent possible.

Another content deviation says to consider the benefit-risk analysis for ALL individual risks but I think pretty much everyone has backed down from that (we still use the 3-tiered acceptability table and would do a benefit-risk analysis for anything above generally acceptable). I could see possibly considering the combined probability as you describe as a means to approach this content deviation.

I think, though, that an assessment of the probabilities post-control would be more informative.

As you note, 14971 does require that you conduct an overall benefit-risk analysis but isn't exactly prescriptive in the approach. They do provide some approaches to consider. This is at the system level and wouldn't drill down to specific risks.
 

Tidge

Trusted Information Resource
I don't know if this will be at all clear, but I will try to explain my preference. The top level document is called a "Hazard Analysis". I group Hazards into different sections with subsections as appropriate. Think a section for "mechanical hazards", with potential subsections for "entrapment", "drop", etc.

Each line of analysis starts with the Hazardous Situation (with a tie to appropriate use cases) and then the (various) Harms. These are used to assess/assign values for (S)everity, P1, and P2. I like to have individual lines up to this point for (related) reasons (see the sketch at the end of this post):
  • rarely occurring, high-severity harms may be a priori more acceptable than frequently occurring, low-severity harms
If you try to play the game of "just go with the 'worst' rating for whatever," you lose visibility and discriminating power when deciding on risk controls.
  • Periodic review of risks has the possibility of changing P1, P2 ratings and can reveal new harms
Trying to minimize lines early clouds later assessments. Not making updates because a team "thinks" that existing lines already explain customer complaints is often indistinguishable from simply not making updates. I also find it reassuring when the complaints investigation team knows "how the harm occurred" and can search via use case. I've found it to be a little trickier for complaints investigators to figure out other elements.

This was not asked, but I fully support "merged cells" when the same risk controls apply (and improve the ratings of) a common group of risk analysis lines.
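
For illustration, one line of this kind of analysis could be represented roughly like this (a sketch only; the field names and rating scales are assumptions, not a prescribed template):

```python
from dataclasses import dataclass, field

@dataclass
class HazardAnalysisLine:
    # Sketch only -- field names and integer rating scales are illustrative assumptions.
    hazard_group: str         # e.g. "Mechanical hazards / Drop"
    hazardous_situation: str  # tied to an appropriate use case
    use_case: str
    harm: str
    severity: int             # S rating
    p1: int                   # rating for probability the hazardous situation occurs
    p2: int                   # rating for probability the situation leads to the harm
    risk_controls: list = field(default_factory=list)  # may be shared ("merged cells") across related lines

example = HazardAnalysisLine(
    hazard_group="Mechanical hazards / Drop",
    hazardous_situation="Device dropped onto patient during transfer",
    use_case="Patient transfer",
    harm="Bruising",
    severity=2, p1=3, p2=2,
    risk_controls=["Grip texture on housing", "Handling warning in the IFU"],
)
```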
 

Hi_Its_Matt

Involved In Discussions
Thank you @Tidge for your overview.
I don't know if this will be at all clear...
You were quite clear, and what you described aligns with my understanding and my historic practice.

What brought this question about was that I recently reorganized a hazard analysis to be grouped by harm (as shown in the images above), rather than by hazard. (In hindsight, what the images don't capture is the fact that one hazard can often result in multiple different harms, oftentimes of different severity... but I digress.)
That then made me wonder whether it would make more sense for the probability of occurrence estimate to be associated with the overall occurrence of a harm (bottom of the two tables), rather than the occurrence of a harm in the context of a particular hazardous situation (top of the two tables).

My thought behind this modified arrangement was two-fold: 1) it enables an easy understanding of the harms associated with the device, and comparison of their relative likelihoods, and 2) it may make analysis of post-market data easier, because we could easily compare reports of harm against our probability of occurrence estimates. (But this particular project is still somewhat early in design, and quite a ways away from commercial release, so there is time to hash out those details later.)


To answer @yodon's question:
Are you NOT considering the controls applied to determine acceptability?
We absolutely evaluate risks "post mitigation", taking into account the risk controls implemented.

What I was trying to get at is, if we are estimating and evaluating the acceptability of risk at the hazardous situation level, then how/where do we consider the total combined likelihood of a particular harm occurring?

Say, for example, a particular harm can result from three distinct hazardous situations. And let's assume that each of those three hazardous situations will occur on average 2 times out of every 100 uses of the device. That means that (assuming again that P2 = 100%, for simplicity's sake) in 100 uses of the device, the actual harm will occur on average 6 times (2 occurrences each of 3 different situations). Is it possible that 6 times out of 100 uses could be seen as unacceptable when looking at the total combined risk of the harm occurring, even if 2 out of 100 uses is acceptable for each of the individual hazardous situations? Or would this indicate some kind of logical breakdown in either our analysis or our acceptability criteria? Where is the acceptability of that combined risk evaluated? IS IT evaluated?
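
Just to make the arithmetic in that example explicit (hypothetical numbers, independence assumed): the "6 out of 100" is the expected number of harm occurrences per 100 uses, which is slightly different from the probability that the harm occurs in any given use:

```python
# Three hazardous situations, each occurring ~2 times per 100 uses, with P2 = 100%.
p_per_situation = 0.02
n_situations = 3

# Expected number of harm occurrences per 100 uses (the "6 out of 100" above):
expected_per_100_uses = n_situations * p_per_situation * 100
print(expected_per_100_uses)  # 6.0

# Probability the harm occurs in a single use via at least one of the three
# situations (assuming the situations occur independently):
p_harm_per_use = 1 - (1 - p_per_situation) ** n_situations
print(round(p_harm_per_use, 4))  # ~0.0588, i.e. just under 6 in 100
```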

I'm assuming it would be considered when performing the overall residual risk analysis, but I've never actually been intimately involved with one of those analyses before.
 

Tidge

Trusted Information Resource
What I was trying to get at is, if we are estimating and evaluating the acceptability of risk at the hazardous situation level, then how/where do we consider the total combined likelihood of a particular harm occurring?

I don't think this is a well-formed (in the mathematical sense, no personal slight intended!) problem. Risk assessments (certainly those that include "(S)everity") are fundamentally qualitative assessments. I don't want to discourage innovation in this area; I have a somewhat deep appreciation for efforts to improve the state of knowledge from diverse lines of analysis. Setting aside the qualitative argument and just looking at the math: most ratings (S, P1, P2 or S, O, D) are integers that, when folded/multiplied, forbid certain values, but combining likelihoods is generally most informative when done with continuous functions. (Link to the PDG provided for general info, as well as for some insight on "counting" experiments, including ones with null results.) There is also the issue that the final decision point for acceptability is itself arbitrary.
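
To make the "forbidden values" point concrete, a tiny sketch with a hypothetical 5-point scale: multiplying two 1-5 ratings can only ever produce 14 distinct values, so the resulting "risk number" scale has gaps and isn't a continuous measure of likelihood:

```python
# Products of two 1-5 ordinal ratings (e.g. S x P) can only take certain values.
ratings = range(1, 6)
products = sorted({s * p for s in ratings for p in ratings})
print(products)
# [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 20, 25] -- 7, 11, 13, etc. can never occur
```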

If I wanted to rationalize my personal reasons for not investigating this (intellectually interesting) approach I would appeal to the Pareto principle: from my PoV, nearly all of what I require from risk management (both in the regulatory sense, and in the public-health mission sense) can be done without trying to introduce (more complicated?) mathematical techniques... or flipped around... "that would be a lot of effort for little reward."

Maybe I am overthinking the suggested approach?
 

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
You are correct. Risks must be reduced as far as possible. For example, wearing a lead vest during X-rays doesn't reduce the radiation to zero, but it does mitigate it.
 

Hi_Its_Matt

Involved In Discussions
Take a look at section 7.4 in 24971:2020.
Thank you. I am familiar with the benefit-risk analysis but understood this to be required only for risks which, after mitigation, are still deemed unacceptable. My thought was more around a scenario in which the individual risks were acceptable, but a higher-level assessment of the total harm was somehow unacceptable.
(Ex. I try to avoid driving near construction sites because I don't want to get a flat tire. Sure, the risks of getting a flat tire due to a nail, a screw, a rock, a piece of rebar, etc. are all sufficiently low individually, but the overall likelihood of at least one of those happening is high enough that I try to avoid construction sites.)

@Tidge you may be overthinking it a bit. (Although I'm likely overcomplicating things as well).
The real initiator is we were getting information from our client's medical officer and physician consultant along the lines of "My experience with similar devices on the market is that XXX complication/adverse event happens about ### times in a thousand surgeries, so I would expect your new device to be in that ballpark or better."
Which then led to the discussion of "should we be looking at risks/probabilities at the hazardous situation level, or at the harm level?"

Again, going back to the definitions, risk is defined as "the combination of the probability of occurrence of harm and the severity of that harm."
If a device can cause a burn, what is the severity of that burn? What is the probability that a burn happens? Those two things together determine the "risk" associated with "burn."
If there is only one hazardous situation that can cause a burn, then the analysis is straightforward.
But if there is more than one hazardous situation that can cause a burn, then the analysis could be more complex. (See final question below).
It seems like you would want to evaluate the risk of a burn occurring due to each hazardous situation alone, as well as the overall risk of a burn occurring, taking all possible situations into account.

Honestly, the more I think about it, the more I'm convinced that the 2nd part of that is actually the purpose of the overall residual risk analysis, and we just ended up a bit too far down the risk-management rabbit hole trying to take a "pseudo-quantitative" approach within the hazard analysis, rather than a more qualitative approach within a standalone document.

So hopefully a final related question, is it appropriate/possible for one specific harm to be caused by more than one hazardous situation? Or would this indicate some breakdown in the risk management process (perhaps an overly broad harm, or overly specific hazardous situations)?
 

Tidge

Trusted Information Resource
So hopefully a final related question, is it appropriate/possible for one specific harm to be caused by more than one hazardous situation?

Absolutely. I would need to know if the burn came from something overheating (like an exposed halogen bulb) or from excess electrical current.
 