How to Validate Effectiveness of Mitigations?


anna.olsson

Hi all!
I work for a medical device software company.

As part of our risk management scheme, we do risk analysis using different methods. For all hazards that are unacceptable according to our criteria, we require mitigations until we agree that the risk has been sufficiently reduced. Later in the process, we assess the total residual risk for the product and (hopefully) make a statement that it is acceptable. We trace hazards to mitigations and verify that the mitigations are implemented as specified. Fine...

Now, for a 510(k) application, we are asked to "include validation testing references to show that the mitigation factors have been validated to reduce the risk to an acceptable level". I think this refers to proving the effectiveness of the mitigations. Do you agree?
If so, I'm not sure what to do...

An example from our analysis: The user is required to enter a value. The user entering the wrong value may lead to patient harm. Our system has no way of verifying if the entered value is correct for a particular patient.
As mitigations, we would typically require that there is no default value and that only values within defined limits are acceptable. Perhaps we would add a labelling warning that it is crucial that this particular value is correctly entered or require that the value shall be visualized in different ways to increase the chance of error detection.
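
To make the example concrete, here is a minimal sketch of what the "no default value" and "defined limits" mitigations can look like in code. The parameter name and the limits are made up for illustration, not our actual requirements:

```python
# Minimal sketch of the "no default value" and "defined limits"
# mitigations. The parameter name and limits are hypothetical.

PATIENT_WEIGHT_MIN_KG = 0.5    # hypothetical lower limit
PATIENT_WEIGHT_MAX_KG = 300.0  # hypothetical upper limit

def read_patient_weight(raw: str) -> float:
    """Parse an operator-entered weight.

    There is deliberately no default: the caller gets an exception
    instead of a silently assumed value.
    """
    value = float(raw)  # raises ValueError on non-numeric input
    if not PATIENT_WEIGHT_MIN_KG <= value <= PATIENT_WEIGHT_MAX_KG:
        raise ValueError(
            f"{value} kg is outside the plausible range "
            f"[{PATIENT_WEIGHT_MIN_KG}, {PATIENT_WEIGHT_MAX_KG}]"
        )
    return value
```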

How can we verify the effectiveness of these mitigations? We add mitigations until we "feel" that the risk is reduced to "as low as reasonably practicable" and agree that the risk is acceptable, but I don't see how we can provide any objective evidence that the mitigations actually reduce the risk. It's all based on "expertise" and judgement in the risk management team...

Any ideas?

Thanks!
/A
 

QA_RA_Lady

Re: How to Validate Effectiveness of Mitigations?

Hey there Anna,

I'm a Management Rep for a medical imaging software company too... I disagree with your assessment. I don't think they're looking for you to verify the effectiveness of your mitigations. I've submitted 30+ 510(k)s for medical imaging software and never been asked for that. Do you have more info on that? Is it part of a longer paragraph that you can post?

Without more info... if one of my employees came to me with that, I'd tell them that the request is for traceability... that the FDA is looking for verification that our mitigations are tested. This is definitely a 510(k) requirement for software. I'd be willing to bet that's what they want.

All you do is identify the mitigations in your traceability matrix. It's simple. You can make them bold, then add a note at the bottom of the form indicating that all bold entries represent risk mitigations.

So in your example:
Risk:
The user entering the wrong value may lead to patient harm.
Mitigation:
We would typically require that there is 1) no default value and that 2) only values within defined limits are acceptable. Perhaps we would add 3) a labelling warning that it is crucial that this particular value is correctly entered or 4) require that the value shall be visualized in different ways to increase the chance of error detection.

Where each of these mitigations appears in the customer, functional, system, or software requirements in your traceability matrix, you'd make it bold. And add the note at the bottom of the traceability form...
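
As a toy illustration (the requirement and test IDs are made up), the matrix rows might look like this, with a flag standing in for the bold formatting:

```python
# Hypothetical excerpt of a traceability matrix kept as plain data.
# In practice this is usually a spreadsheet; rows flagged as
# mitigations are the ones you would print in bold.

trace_matrix = [
    # (requirement, description, is_mitigation, verifying test)
    ("SRS-041", "Value entry field has no default value",     True,  "TC-101"),
    ("SRS-042", "Values outside defined limits are rejected", True,  "TC-102"),
    ("SRS-043", "Entered value is shown in words and digits", True,  "TC-103"),
    ("SRS-050", "Report includes the entry timestamp",        False, "TC-110"),
]

for req, text, is_mitigation, test in trace_matrix:
    mark = "**" if is_mitigation else ""  # stand-in for bold
    print(f"{mark}{req}: {text} -> verified by {test}{mark}")
```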

If the FDA did ask me for verification of effectiveness of mitigations involving patient safety, I'd tell them that I cannot provide that information without using my device clinically... which is not possible without the 510(k).

Only exception: if you are doing clinical trials, then it's a whole different story. Are you doing clinical trials? If you are, please comment and I will give you my opinion on that.

Hope this helps!

Note: Not all FDA reviewers are masters of the English language. I can see how this may be misleading. Most likely it is a syntax error, and what was meant was: "include validation testing references to show that the mitigation factors that reduce the risk to an acceptable level have been validated".
 

Peter Selvey

Leader
Super Moderator
Re: How to validate effectivness of mitigations?

I think QA_RA_Lady is on the right track, but you might ask why.

Hypothetically, it is possible to spend an infinite amount of resources on risk management, from the initial analysis through to validating the effectiveness of risk control measures. In your example, it is possible to perform human-based non-clinical testing, and in the extreme case even clinical validation.

But the use of resources has a cost and delays getting the product to market. And both the cost and the delay carry risks of their own - each makes the medical device less available to perform its intended purpose.

Imagine if, before leaving for an overseas business trip, you analyzed all the possible risks of missing the flight, implemented countermeasures, verified their effectiveness and maintained documentation. The analysis and documentation take so long that you end up missing the flight...

Thus it is possible to demonstrate that excessive risk management is actually harmful, so we need to take statements about risk and risk management with a grain of salt. To put it more scientifically: we should limit the resources used for risk management to a level that is expected to give a net improvement in safety (a reduction in risk), taking into account the risks associated with the use of the resources themselves.

For sure, there is a huge amount of uncertainty about this, but there will be clear cases in both directions.

In your example, we don't know the risk (severity/probability) of harm associated with entering a wrong value.

If death were a reasonable possibility (i.e. a probability of more than 0.001), then it would clearly be wrong to rely only on a feeling that the mitigation is effective. Usability testing (with real people) would be justified.

On the other hand, if we are talking about a recoverable, moderate injury or illness at a probability of 0.001, usability testing would seem to be a waste of resources (overkill).

Sure, there is a big gap in between; that's why it is up to your top management to decide where to draw the line (i.e. to set the policy for deciding acceptable risk).
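
To put some (entirely invented) numbers on that line-drawing, here is a toy sketch. The severity labels, the thresholds and the rule itself are illustrations only, not values from any standard:

```python
# Toy sketch of a "where to draw the line" policy. Severity labels
# and probability thresholds are invented for illustration only.

THRESHOLDS = {
    "death":    1e-3,  # above this, usability testing is justified
    "serious":  1e-2,  # invented intermediate threshold
    "moderate": None,  # judgement alone is considered acceptable
}

def usability_testing_warranted(severity: str, probability: float) -> bool:
    """Decide whether validating a mitigation with real users is worth
    the resources, given severity and estimated probability of harm."""
    threshold = THRESHOLDS[severity]
    return threshold is not None and probability > threshold

print(usability_testing_warranted("death", 0.002))     # True
print(usability_testing_warranted("moderate", 0.001))  # False
```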
 

Marcelo

Inactive Registered Visitor
Great discussion... just a quick comment: what is really expected is, as Anna pointed out, the effectiveness of the mitigations, i.e. whether the mitigations really reduce the risks as intended. This is one of the two components of the implementation of risk control measures, as defined in clause 6.3 of ISO 14971, for example.
 

anna.olsson

QA_RA_Lady, I hope you are right! We have never before been required to submit verification of mitigation effectiveness either.

But... the thing is, in my opinion we already provide very clear traceability: hazard - mitigation - implementation (e.g. system requirement) - verification (e.g. system test) - verification result (which must be a pass, or we don't release the product). (But perhaps there is some misunderstanding or an erroneous reference, so that this traceability is not as clear to everyone else as I think it is.)

In addition, as Marcelo pointed out, we are really required by the standard to verify effectiveness, but at least for us, this has been based only on judgement and has never been documented. I am very curious to know how others do it!

I can easily see that for some types of risks, mitigation effectiveness can be truly verified. Say we see a risk of burns from a hot surface and require some thermal insulation. We could then validate the mitigation's effectiveness before implementation by checking the insulation properties of the selected material, and after implementation verify that the surface temperature is indeed not burning hot.

However, for the type of mitigations we require for our software, it's not as easy...

From ISO 14971:2007, clause 6.3:
Implementation of each risk control measure shall be verified. This verification shall be recorded in the risk management file.
The effectiveness of the risk control measure(s) shall be verified and the results shall be recorded in the risk management file.
NOTE The verification of effectiveness can include validation activities.

Again, I'm very curious to know how you all deal with this requirement!
 

Marcelo

Inactive Registered Visitor
Not being asked for documented effectiveness validation before (or even not being required to perform risk control effectiveness verification/validation) does not mean that it must not be done. Particularly in the case of ISO 14971, it's common knowledge that very few manufacturers can really comply with the standard. What I can say is: if you do not perform this step, you are not in compliance with ISO 14971, at least.

Oh, and regarding software, IEC/TR 80002-1 Ed. 1, Medical device software – Guidance on the application of ISO 14971 to medical device software, has some more thoughts on this issue.
 

anna.olsson

So, how did we solve the original issue (see the first post in the thread)?

In the end, we only added a rating of "effectiveness of mitigations" for each hazard, stating either that the risk was removed, the probability was reduced to an acceptable level, the severity was reduced to low, or the risk was still unacceptable.
This was accepted by the third-party reviewer and the FDA.

Just wanted to let you know...
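
In case it helps anyone, the rating is essentially a four-value field per hazard, roughly like this (the hazard ID is made up):

```python
# Sketch of the per-hazard effectiveness rating we added. The enum
# values mirror the wording above; the hazard ID is hypothetical.

from enum import Enum

class MitigationEffect(Enum):
    RISK_REMOVED = "risk removed"
    PROBABILITY_ACCEPTABLE = "probability reduced to acceptable"
    SEVERITY_LOW = "severity reduced to low"
    STILL_UNACCEPTABLE = "risk still unacceptable"

ratings = {
    "HAZ-007 (wrong value entered)": MitigationEffect.PROBABILITY_ACCEPTABLE,
}

# Release gate: no hazard may remain rated unacceptable.
assert all(r is not MitigationEffect.STILL_UNACCEPTABLE
           for r in ratings.values())
```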
 

Marcelo

Inactive Registered Visitor
anna.olsson said:

> So, how did we solve the original issue (see the first post in the thread)?
>
> In the end, we only added a rating of "effectiveness of mitigations" for each hazard, stating either that the risk was removed, the probability was reduced to an acceptable level, the severity was reduced to low, or the risk was still unacceptable. This was accepted by the third-party reviewer and the FDA.
>
> Just wanted to let you know...

Yep, nothing unusual, just one more case of accepting things without really knowing what to accept :).

Please note that, as the manufacturer, the responsibility for the product is still yours, not the FDA's. If an adverse event happens and it's discovered that it happened because you did not analyse the effectiveness of your mitigations as you should have, you will still be in trouble. This kind of thing has happened more than once.

Not to be pushy, just stating a fact :)
 

anna.olsson

Well... Now at least we require a statement of effectiveness for each hazard, which hopefully makes the risk management team think a little more about whether or not they are really satisfied with the required mitigations. Besides, nothing prevents us from referencing proof of effectiveness where we think it is relevant.

I'm thinking maybe we should also add a "proof of effectiveness" column, where we could reference e.g. publications on the checksum algorithm used, but in most cases we would probably just enter "professional judgement" anyway. Honestly, spending a lot of effort trying to prove the effectiveness of hundreds of detailed mitigations is not where I would like to put more resources.
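
The checksum case illustrates what such a reference could look like: the error-detection properties of, say, CRC32 are published, so the column could cite the literature rather than new testing. A minimal sketch, with a hypothetical record layout:

```python
# Mitigation whose effectiveness is documentable by reference:
# CRC32's error-detection properties are published, so a "proof of
# effectiveness" column can cite them. Record layout is hypothetical.

import zlib

def protect(payload: bytes) -> bytes:
    """Append a CRC32 so later corruption of the stored value is detectable."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def unprotect(record: bytes) -> bytes:
    payload, crc = record[:-4], int.from_bytes(record[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise ValueError("stored value corrupted")
    return payload

assert unprotect(protect(b"70.5 kg")) == b"70.5 kg"
```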

We do place a lot of responsibility on the risk management team. It is, in the end, up to their judgement (or rather our judgement, because I'm usually involved) to say whether a mitigation is "enough" and whether or not the product is "safe".

BTW, I'm still very interested in examples of how other companies handle these issues!
 

Peter Selvey

Leader
Super Moderator
Earlier this year I prepared training material on the changes between the 2000 and 2007 versions of ISO 14971. During this process, one of the things I realised was that the standard has carefully worded requirements for records. In some places you need objective evidence or more detail; in others, just the results (yes/no) are OK.

For example, Clause 6.3 covers both "implementation" and "effectiveness" of risk control measures. Both need to be "verified" (a defined term, meaning obtaining "objective evidence").

However, while you need to keep objective evidence for implementation as a record, for effectiveness only the result (effective / not effective) needs to be recorded. The objective evidence of effectiveness can be thrown away and you still comply with ISO 14971.

I believe this is deliberate. While keeping objective evidence of effectiveness sounds nice, it quickly gets messy in practice. I guess from a regulatory point of view what they are saying is that at least someone should take responsibility for saying that a risk control measure is effective.

I would agree with Anna: a system for linking to objective evidence is reasonable, but keep it optional, used when it makes sense to keep more records, such as borderline cases, or cases in which the effectiveness might reasonably be questioned by a regulatory reviewer.
 