# Risk Management ISO 14971 - Probability of Occurrence

#### Quality Engineer

##### Starting to get Involved
I am confused about the probability of a hazardous situation. My industry has an SOP listing the hazards and hazardous situations that can be used, and only those. The SOP also describes how the hazards and hazardous situations are to be paired.

Q1: Is P1 the "probability of the hazardous situation"?
Q2: In previous revisions of the risk analysis document I observed that P1 (the probability of the hazardous situation) differs for the same hazardous situation. Is this possible?
Example: "Compromised Graft Integrity" (a hazardous situation defined in the SOP) appears in the risk analysis document with P1 values of 0.01%, 0.05% and 0.10% on different lines. Is this correct? Or should P1 be the same for each hazardous situation?

#### yodon

Super Moderator
Maybe this excerpt from ISO/TR 24971 will help:

> Probability estimation encompasses the circumstances and the sequences of events from the occurrence of the initiating event through to the occurrence of the harm. The probability P of occurrence of harm can be decomposed into a probability P1 that a hazardous situation occurs (i.e. that persons are exposed to the hazard) and a probability P2 that the hazardous situation leads to harm. See Figure C.1 in ISO 14971:2019. A decomposition into P1 and P2 can be useful to estimate the probability P of occurrence of harm, but such decomposition is not mandatory.

The industry is moving to a P1 x P2 method for estimating probability.

#### kys123

##### Involved In Discussions
Q1:
I'm assuming your SOP took the definition of P1 from ISO 14971:2019 Annex C (which, by the way, is merely informative).

First, for harm to happen, a hazardous situation must arise; the probability of this is P1.
P1 is then multiplied by P2, the probability that the hazardous situation leads to actual harm, hence:
P (probability of occurrence of harm) = P1 (probability that the hazardous situation occurs) * P2 (probability of harm in the specific situation)

However, you can only know that for sure by looking at your SOP, since this notation is not a normative requirement.
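As a toy numerical sketch of that decomposition (the 0.1% and 20% figures below are invented for illustration, not taken from any standard or SOP):

```python
def probability_of_harm(p1: float, p2: float) -> float:
    """Combine P1 (probability that a hazardous situation occurs) with
    P2 (probability that the hazardous situation leads to harm), per the
    informative decomposition in ISO 14971:2019 Annex C."""
    if not (0.0 <= p1 <= 1.0 and 0.0 <= p2 <= 1.0):
        raise ValueError("P1 and P2 must be probabilities in [0, 1]")
    return p1 * p2

# Hypothetical numbers: the hazardous situation arises in 0.1% of uses,
# and leads to harm in 20% of those situations.
p = probability_of_harm(0.001, 0.20)
print(f"P(harm) = {p:.4%}")  # prints P(harm) = 0.0200%
```

Note this only illustrates the arithmetic; as discussed later in the thread, in practice P1 and P2 are usually coarse policy-defined ratings rather than measured probabilities.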

Q2:
Yes, different probabilities for the same hazardous situation are possible, since they can stem from different failure causes.

Using an in-vitro diagnostic product as an example:
The hazardous situation of insufficient sample volume can be caused by incorrect use of the pipette (probability score X) or by incorrect sample preparation (probability score Y).
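A minimal sketch of that idea (cause names and probability values are invented): each line of the risk analysis pairs the same hazardous situation with a different cause, so the same hazardous situation can legitimately carry different P1 values on different lines.

```python
# Hypothetical risk-analysis lines: the same hazardous situation,
# reached via different causes, hence a different P1 estimate per line.
risk_lines = [
    {"hazardous_situation": "Insufficient sample volume",
     "cause": "Incorrect pipette use",        "p1": 0.010},
    {"hazardous_situation": "Insufficient sample volume",
     "cause": "Incorrect sample preparation", "p1": 0.002},
]

for line in risk_lines:
    print(f'{line["hazardous_situation"]} via {line["cause"]}: '
          f'P1 = {line["p1"]:.1%}')

# If (and only if) the causes are assumed independent, the chance that
# the hazardous situation occurs by *any* cause is 1 - prod(1 - p1_i):
overall = 1.0
for line in risk_lines:
    overall *= 1.0 - line["p1"]
overall = 1.0 - overall
print(f"Overall P1 (any cause): {overall:.3%}")
```

The independence assumption at the end is a modeling choice, not something ISO 14971 requires; many risk files simply keep the per-cause lines separate.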

I wish I could give you a direct example but I have no experience with grafts.
What do you mean by "different lines"? Are they different products?

#### Quality Engineer

##### Starting to get Involved

Thank you for your time.

You mentioned that the P1 value can differ for the same hazardous situation based on different failure causes.

Can you confirm that you meant different failure causes and not failure modes?

But isn't the P1 value independent?

#### Tidge

Trusted Information Resource
I want to inject a note of caution regarding terminology:

> First, for harm to happen, a hazardous situation must arise, which would be probability P1.
> P1 would then be multiplied by P2, which would be the probability that the hazardous situation leads to actual harm, hence:
> P (probability of occurrence of harm) = P1 (probability that the hazardous situation occurs) * P2 (probability of harm in the specific situation)

We (in the industry of medical device risk management) use the term "multiply" liberally, and (almost always) only as shorthand to indicate that there are "multiple" factors contributing to an occurrence of HARM. An actual "multiplication" of P1 and P2 ratings (along with ratings of Severity) commonly appears in Risk Analyses, but those ratings are established by policy and do not typically represent precise measurements with well-established uncertainties.

For medical devices, it would be unusual to have precise measurements for all the circumstances where we need assessments of P1 and P2, and practically it would be futile to try to establish them given the wide variability in humans (and in medical device design possibilities). Typically there are simply broad (integer) ratings of P1 and P2, established such that the ordinal values of the ratings offer a quick, broad assessment of the relative magnitude of the probabilities; they are not meant to be precise estimates. It is common to use five ordinal ratings for P1 and P2 (and Severity). I believe this is the practice because "only three" implies a little too much simplicity (a la Goldilocks and the Three Bears) and "more than five" invites too much effort to categorize by degree.

For example, depending on the nature of the device, and the context of use, ratings can be established as appropriate...
> Example: "Compromised Graft Integrity" (this Hazardous situation is defined in the SOP) and in Risk analysis document the P1 value of this Hazardous Situation is 0.01%, 0.05% and 0.10% in different lines.

In my experience, with my personal biases, I can see a "powers of ten" difference between 0.01% and 0.10%. With more direct experience I might be persuaded that there is a meaningful difference between 0.01% and 0.05%, or between 0.05% and 0.10%, on some scale other than powers of ten. So understand that I am not passing judgement on these divisions, only on the implied precision.
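A hedged sketch of what such a policy-defined five-level ordinal scale might look like (the cut-points below are invented for illustration; ISO 14971 does not prescribe any particular scale):

```python
# Hypothetical 5-level ordinal scale. The probability cut-points are
# policy choices made by the manufacturer, not values from ISO 14971.
ORDINAL_SCALE = [
    (1e-6, 1),  # "improbable":  P < 0.0001%
    (1e-4, 2),  # "remote":      P < 0.01%
    (1e-3, 3),  # "occasional":  P < 0.1%
    (1e-2, 4),  # "probable":    P < 1%
]

def ordinal_rating(p: float) -> int:
    """Map a quantitative probability estimate onto an ordinal 1-5 rating."""
    for upper_bound, rating in ORDINAL_SCALE:
        if p < upper_bound:
            return rating
    return 5  # "frequent": P >= 1%

print(ordinal_rating(0.0001))  # 0.01% falls in the "occasional" band -> 3
```

The point of such a scale is exactly what is described above: the ratings are ordered labels for broad comparison, so arithmetic on them (e.g. "multiplying" a P1 rating by a P2 rating) is a policy convention, not a genuine multiplication of probabilities.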

#### Ed Panek

##### QA RA Small Med Dev Company
Super Moderator
Post Market Surveillance is intended to feed back into the Risk Control Plan. Despite human factors and other testing, a risk may be over- or underestimated at launch. You should update your risk assessment using user complaints and feedback.

We have a product overall risk review one month prior to the Management review. During this risk review, we look at our product's feedback along with industry feedback (MAUDE) for similar product codes. If in that meeting we determine a risk needs adjusting we make a suggested update. One month later during Management Review, we detail the changes proposed to all of the management and obtain buy-in via signature. We may also suggest investigations or new complaint buckets to better refine our understanding of the failure. Plan Do Check Act.

#### Quality Engineer

##### Starting to get Involved

I will be adding the post-market data and updating the risk. But if I am creating a new PFMEA (risk analysis per ISO 14971), what is the best way to estimate the P1 value (pre- and post-control), keeping in mind that the industry has pre-established hazardous situations that are the only ones allowed to be used?

#### Quality Engineer

##### Starting to get Involved

Even if only three ordinal values can be used, shouldn't there still be some estimation criteria behind them?

How would you estimate P1 (pre-control and post-control values)?

#### Tidge

Trusted Information Resource
> how would you be estimating the P1 (pre risk and post control values)?

In my opinion, one of the clever and beautiful features of the "reduce all risks" paradigm is that it requires (or should require) post-control objective evidence that firmly establishes the state of knowledge of the post-control risks. That means (to me, anyway) it is not necessary to fret about the values of the pre-control assessments, assuming the process verifies the state of those risks even if you don't put controls in place. In my mind this is like assigning a poor pre-control rating because the error bars (uncertainty) are large, even if the expected value is small, and then assigning an improved value because some experiment or analysis was done to shrink those error bars.

One of the reasons folks have a high-level document like a Hazard Analysis supported by FMEAs is that within an FMEA it is possible to do more quantitative analyses for specific failure modes (which, IMO, are easy to associate with P1, the probability that an event leads to a hazardous situation). The HA generally makes more subjective assessments, supported by FMEA data (often multiple relevant FMEA lines for any given risk) and explained in the HA's Risk Control Option Analysis.