Risk Analysis and Probability of Occurrence

Hi_Its_Matt

Involved In Discussions
Hi Everyone,
First, thanks to so many people on this forum for the information and perspective you provide. Ronen, Bev, Marcello, Ajit, Don, Marc, and others, you have been more valuable than you know. Anyways, on to my question.

When conducting risk analysis according to ISO 14971, and specifically when pulling together a hazard analysis, does it actually make sense to evaluate the risk prior to identification/implementation of risk control measures?

I have seen several times now where companies/individuals want to assign an “initial” probability of occurrence rating without taking any risk control measures into account. These are almost always 5s or 10s on the rating scale, indicating harm is all but certain if this imaginary device were to be used. Then they list risk controls and a “residual” probability rating, often very low on the scale, determined with the risk controls taken into account.

In my mind, the first probability rating adds no value. The reality is that nobody would ever release a medical device without addressing these risks, and people reliably end up debating back and forth about what probability of occurrence ratings to assign to these somewhat imaginary, unaddressed risks.

I understand that having an “initial” risk score could help the company prioritize which risks to address first or with more rigor, but in the end, the expectation is that ALL risks must be reduced as far as possible. The order in which they are addressed doesn’t matter, and the level of “initial risk” doesn’t matter.

I also understand that it's convenient to be able to say to an assessor/inspector/auditor, “our risk control measures reduced the risk from this early high rating to this new lower rating.” But again, does the amount of reduction from some starting point actually matter? In my perspective, what truly matters is that (1) risks are reduced as far as possible (without sacrificing performance or impacting the benefit-risk ratio), and (2) residual risks remaining after implementation of risk controls are deemed acceptable or justified by a benefit-risk analysis.

What I would like to propose on my current project is that we delay assigning any probability of occurrence of harm ratings until the risk controls are identified and we have actual data to support the ratings.

The flow should be (and I'm simplifying here, as I don't think this is a singular linear process, but rather an iterative one):
1. Identify intended use and foreseeable misuse.
2. Identify hazards, hazardous situations, and the sequences of events that can result in the hazardous situation occurring.
3. Identify the severity of the harms.
4. Identify risk control measures to address the aforementioned sequences of events.
5. Once these controls are identified/implemented, evaluate probability of occurrence of harm and calculate/determine/classify the residual risk that remains.
6. Evaluate acceptability of residual risks.
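The proposed flow can be sketched as a simple data model in which severity is assigned early but the probability field is deliberately left empty until controls exist and data supports a rating. This is only an illustration of the idea; the field names and the 1-5 rating scale are invented here, not taken from ISO 14971.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HazardousSituation:
    hazard: str
    sequence_of_events: str
    severity: int                       # step 3: derivable early, from the harm itself
    risk_controls: List[str] = field(default_factory=list)  # step 4
    probability: Optional[int] = None   # step 5: deferred until controls + data exist

    def residual_risk(self) -> Optional[int]:
        """Risk is only calculated once a probability rating is supportable."""
        if self.probability is None:
            return None
        return self.severity * self.probability

# Early in development: severity is known, probability is deliberately unset.
hs = HazardousSituation(
    hazard="Electrical energy",
    sequence_of_events="Insulation failure -> patient contact -> shock",
    severity=5,
)
assert hs.residual_risk() is None   # no rating without data

# After controls are implemented and verification data exists (step 5):
hs.risk_controls = ["Double insulation", "Ground fault interrupt"]
hs.probability = 1
assert hs.residual_risk() == 5      # step 6: compare against acceptability criteria
```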

Does this make sense? Am I missing something? Sorry for the wall of text, but this is obviously a loaded topic that doesn’t fit into single sentences. Thanks!
 

Tidge

Trusted Information Resource
When conducting risk analysis according to ISO 14971, and specifically when pulling together a hazard analysis, does it actually make sense to evaluate the risk prior to identification/implementation of risk control measures?

Yes. My blunt assessment: any group that 'blindly' (or rather, uninformedly) assesses all pre-control risks as equally 'ultimately bad' probably can't be trusted to do an appropriate, uniform job of assessing post-control risks for their actual design.

More subtly: one approach to evaluating risk reduction is to consider how powerful the verification of effectiveness needs to be to reduce risks to acceptable levels (the previous era of the standard) or to reduce risks as far as possible, to some arbitrary point of diminishing returns (the current era). If you start with all risks rated "ultimately bad" in a quantitative approach, you logically won't be able to defend your post-control ratings across the board as satisfying 'reduced as far as possible', because you never did a true assessment of the initial risk. (Not exactly the same thing, but: you need two defined points to draw a single line.)
 

Bev D

Heretical Statistician
Leader
Super Moderator
It never makes sense to assess 'probability of occurrence' without data. No data = wild a$$ guess.
It does make sense to assign severity, as that can be logically derived. This prioritizes the failure modes or hazards you need to focus on for control plans, mitigations, and design changes...
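A minimal sketch of the point above: with no occurrence data yet, hazards can still be ranked for control planning by severity alone, and no probability is guessed at. The hazard names and the 1-5 severity scale are illustrative assumptions.

```python
hazards = [
    {"hazard": "Sharp edge", "severity": 2},
    {"hazard": "Electrical shock", "severity": 5},
    {"hazard": "Biocompatibility", "severity": 4},
]

# Rank for control plans / design changes by severity, highest first.
# Note there is no probability column at all at this stage.
by_severity = sorted(hazards, key=lambda h: h["severity"], reverse=True)
assert [h["hazard"] for h in by_severity] == [
    "Electrical shock", "Biocompatibility", "Sharp edge"
]
```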
 
When conducting risk analysis according to ISO 14971, and specifically when pulling together a hazard analysis, does it actually make sense to evaluate the risk prior to identification/implementation of risk control measures?
Yes, it does, and this is why: you should be doing risk analysis activities from the very beginning of design and development. These activities provide design inputs which, when implemented, are risk control measures. Risk assessment informs design. If you cannot estimate probability, don't worry; you don't have to. For items where the probability of occurrence cannot be estimated, you can assess based on severity alone (as Bev states above). Just be sure to state this in your risk management plan.

Many companies start risk management activities too late. Maybe they had these risks in their heads while they were designing the device, but they did not document them. Now the device is fully designed, and the company is documenting pre-control risk levels as though risk assessment had actually started when it was supposed to. This just feels weird. If risk management is done the right way, the pre-control risk doesn't feel weird or wrong; it feels right and truthful.
 