Use Error Risk Controls and Control Verification

Dan123456

Registered
If you have use errors in your risk management file and you rely on the clinician's education and knowledge as a form of risk control, how can you verify that risk control?

-Usability studies are too costly and impractical for most class 1 and 2 devices.
-Information for use cannot be used to reduce risk; therefore, it cannot be used as a form of risk control

Example 1:
-Hazardous Situation: Clinician does not position implant properly.
-Sequence of Events: Clinician use error
-Risk Control: Clinician's education and certification process
-Control Verification: ????

Example 2:
-Hazardous Situation: Clinician forgets to remove ancillary device from patient post-surgery
-Sequence of Events: Clinician use error
-Risk Control: Clinician's education and certification process
-Control Verification: ????

Any help would be greatly appreciated.
 

RoxaneB

Change Agent and Data Storyteller
Super Moderator
Is an independent double check a possibility here?

We have nurses in the field. Prior to starting a pump that will auto-dose a patient, they need to call our clinical "hot line" to read off pump settings and send a picture. Only once the independent double check confirms can they start the pump.
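
In software terms, the interlock amounts to something like this minimal sketch (the names and the two-person workflow here are hypothetical, for illustration only, not our actual system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PumpSettings:
    drug: str
    rate_ml_per_hr: float
    dose_limit_ml: float

def confirm_and_arm(entered: PumpSettings, readback: PumpSettings,
                    entered_by: str, confirmed_by: str) -> bool:
    """Arm the pump only if an independent reviewer's readback matches exactly."""
    if confirmed_by == entered_by:
        # The heart of the control: the check must be independent.
        raise PermissionError("double check must come from someone other than the programmer")
    # Any mismatch blocks the start and is a use-error signal worth trending.
    return entered == readback

field_entry = PumpSettings("drug_x", rate_ml_per_hr=2.0, dose_limit_ml=30.0)
hotline_readback = PumpSettings("drug_x", rate_ml_per_hr=2.0, dose_limit_ml=30.0)
assert confirm_and_arm(field_entry, hotline_readback,
                       entered_by="field_rn", confirmed_by="hotline_rn")
```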
 

Bill Hansen

Registered
This is a tricky situation... we face the same thing often; we must rely on the user for certain mitigations.
One thing we do is identify any design elements that would reasonably help the user avoid the error. I don't know what your implant is, but perhaps some feature in the right place to aid proper positioning could be claimed as a control. Your clinical support team should be able to help identify such things.
Another possibility, if your device application lends itself to it, is a validation study that mimics a usability test but uses in-house personnel. If you can isolate the likely use error type/cause (see 62366 for a nice breakdown), you may be able to attack the cause the way you would in a DFMEA or PFMEA, though it probably still relies on the user. This is cheaper than a full clinical usability test and not as strong an argument, but it provides some support and shows you did your due diligence.
As a last resort, the risk(s) can go to the benefit-risk analysis process. There you can weigh them properly and explain 1) that, by its nature, the device cannot fully compensate for certain use errors, and 2) that the benefit, even with possible errors, still outweighs the risks of those errors.
Good luck!
 

yodon

Leader
Super Moderator
-Usability studies are too costly and impractical for most class 1 and 2 devices.

For Class I devices there's little risk, so it may be justifiable; Class II devices do carry risk, however, and if use errors can cause harm, I don't think saying it's too costly will fly very far.

Talking to the users early on would certainly help. For orientation, maybe some kind of indicator (an arrow, color coding, etc.) could easily guide the user to correct placement. Maybe training is in order. For the ancillary device, is there any way to make it obvious that it needs to be removed?

As @Bill Hansen points out, 62366 outlines a good approach (we use Use FMEAs to explore use error... and possible misuse).
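
To make that concrete, a Use FMEA line item might capture something like the sketch below (the 1-5 scales and S x O x D scoring are illustrative assumptions; substitute whatever your own procedure defines):

```python
from dataclasses import dataclass

@dataclass
class UseFmeaRow:
    task: str            # user task being analyzed
    use_error: str       # 62366-style use error or reasonably foreseeable misuse
    harm: str
    severity: int        # e.g. 1 (negligible) .. 5 (catastrophic)
    occurrence: int      # e.g. 1 (improbable) .. 5 (frequent)
    detectability: int   # e.g. 1 (always caught) .. 5 (undetectable)
    risk_control: str
    verification: str

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detectability."""
        return self.severity * self.occurrence * self.detectability

row = UseFmeaRow(
    task="Remove ancillary device post-surgery",
    use_error="Clinician forgets the removal step",
    harm="Retained device",
    severity=5, occurrence=2, detectability=4,
    risk_control="High-visibility 'remove before closing' tag plus count protocol",
    verification="Simulated-use study with in-house clinicians",
)
print(row.rpn)  # 40 -- compare against your acceptability threshold
```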

You can't eliminate use errors, but you should certainly show due diligence in ensuring safe and effective use. While validation may be expensive, think of the potential cost if patients are harmed and you're found to have been negligent in ensuring proper use. Even if no one is harmed, word getting around that your products are hard to use could be just as devastating.
 

ThatSinc

Quite Involved in Discussions
Information for use cannot be used to reduce risk; therefore, it cannot be used as a form of risk control

This is a common misconception, driven by poor wording between the old directives and 14971 and then carried into the content deviations of EN ISO 14971:2012.

Information for safe use can and does reduce risk ("Wear UV-protective eye protection when using this machinery"; "Wear a lead apron when using this equipment").

Disclosure of residual risk does not ("Exposure to UV light may cause skin cancer and irreparable eye damage"; "Exposure to X-rays may cause cancer").

The two are very different.

There was an NB consensus paper on the topic, and all the NBs I've worked with have accepted rationales written along these lines.

You should be able to provide evidence that users understand the information and take it on board, typically through some form of usability study.
I admit I am one of the few who *does* read the user manual for things I buy when they carry a significant risk of causing me irreversible harm, but I appreciate I may be in the minority.
 

Tidge

Trusted Information Resource
If you have use errors in your risk management file and you rely on the clinician's education and knowledge as a form of risk control, how can you verify that risk control?

-Usability studies are too costly and impractical for most class 1 and 2 devices.

I want to challenge one dimension of the quoted claim above. I firmly believe that while it is difficult (the required numbers may be large!) to construct study designs that demonstrate safety at some arbitrarily significant confidence, demonstrating that a risk control eliminates or reduces a specific, identified failure mode below some arbitrary rate does not necessarily require a large sample size (*1). It is also occasionally the case that a study can be performed to assess the state of affairs (i.e. the arbitrary assessment of risk) without trying to prove that some specific risk control reduces risk further... assuming no claim is made that risk has been further reduced.

In layman's terms: you "flip the script" and assess the level at which the clinician's expertise could be introducing the failure modes you have already identified, then implement methods to make sure you are monitoring for those failure modes (as well as others you may not have considered) as part of the risk management feedback loop for the device. This advice is subtly different from 62366 (all flavors, all generations) and is not intended to replace the necessary 62366 activities.

(*1) Small sample sizes are a natural outcome of hypothesis testing when the two hypotheses being compared have radically different outcomes. If the null hypothesis and the alternate hypothesis have somewhat similar outcomes (i.e. only a marginal improvement is predicted), then you are back to extremely large sample sizes. For the "qualitative" approach to scales used in medical device risk management, many organizations adopt a "powers of ten" description of their qualitative scales, and it is a rather straightforward exercise to construct study designs that can discriminate between "powers of ten" (or to demonstrate just how hard it would be to discriminate between certain "powers of ten"); the sketch below illustrates both cases.
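
As a minimal illustration (the rates, alpha, and power targets here are made up, not prescriptive), the exact binomial test plan below finds the smallest study that can tell an unacceptable failure rate p0 from a target rate p1. One power of ten of separation is tractable; a marginal improvement is not:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def plan(p0, p1, alpha=0.05, power=0.80):
    """Smallest (n, c) such that "pass if failures <= c" rejects the
    unacceptable rate p0 at level alpha while a true rate of p1
    passes with probability >= power."""
    n = 0
    while True:
        n += 1
        # Largest acceptance number c that still controls the type I error.
        c = -1
        while binom_cdf(c + 1, n, p0) <= alpha:
            c += 1
        if c >= 0 and binom_cdf(c, n, p1) >= power:
            return n, c

# One "power of ten" apart: a modest study.
print(plan(1e-2, 1e-3))   # -> (473, 1): 473 uses, pass on <= 1 failure
# Only a factor of two apart: n balloons into the thousands.
print(plan(1e-2, 5e-3))
```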
 