Is foreseeable misuse considered as single fault condition?

Roland chung

Trusted Information Resource
Hello folks,

As you can see from the title, I am a little confused about whether foreseeable misuse belongs under single fault condition. My view is that misuse is neither a normal condition nor a single fault condition; it is independent of both.

Please kindly advise.

Thanks and regards,
Roland
 

Mikishots

Trusted Information Resource
Risk analysis is intended to identify foreseeable hazards and their associated risks under normal condition AND single fault condition, during both intended use and foreseeable misuse. In that vein, I see it as applying to both.
 

Marcelo

Inactive Registered Visitor
Misuse is related to incorrect or improper use. This is tied to the usability engineering process.

Single-fault condition (it's a 60601-related term tied to risk management) relates to a failure of a risk control measure, or a single abnormal condition.


Reasonably foreseeable misuse is in principle independent of single fault condition. However, some misuses might lead to single fault conditions.
 

Peter Selvey

Leader
Super Moderator
The definition of single fault condition includes abnormal conditions, and misuse can reasonably be considered an abnormal condition.

For example, if equipment is rated for 1 min on / 10 min off, foreseeable misuse includes the user ignoring this rating and using it continuously. The rationale in Annex A for 13.2.13.4 specifically states this is "foreseeable misuse". The test also applies abnormal condition limits (i.e. higher limits than for normal use), thus making it equivalent to an SFC.

In practice, the exact definition is not really critical, provided the risk assessment is robust.

Misuse normally falls in a probability range that is below normal use but above typical single fault conditions: e.g. 0.01 times / procedure, but if there are 200 procedures / year, it means it happens 2 times / year. SFC rates are typically 0.01~0.001/device/year.
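The arithmetic above can be sketched as follows (a minimal illustration using the example figures from this post, which are illustrative estimates, not standard values):

```python
# Compare an annualized misuse rate against typical single-fault-condition
# (SFC) rates, using the illustrative figures from the post above.

misuse_per_procedure = 0.01       # misuse probability per procedure (example)
procedures_per_year = 200         # procedures per device per year (example)

# Annualized misuse rate: per-procedure probability times procedure count
misuse_per_year = misuse_per_procedure * procedures_per_year

# Typical SFC rate band quoted above, per device per year
sfc_rate_high = 0.01
sfc_rate_low = 0.001

print(f"Annualized misuse rate: {misuse_per_year} / device / year")
print(f"Misuse is {misuse_per_year / sfc_rate_high:.0f}x to "
      f"{misuse_per_year / sfc_rate_low:.0f}x more frequent than typical SFCs")
```

This is why a risk evaluation scheme tuned only for rare fault conditions can mishandle misuse: the annualized frequency lands orders of magnitude above the SFC band.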

On the other hand, typical misuse rarely causes severe direct harm.

So, it needs a risk evaluation scheme that handles high probability/low severity range effectively. If done properly, you can then decide if a risk control is necessary, and proceed from there.
 

tomshoup

A slightly different way to analyze this is to consider the sequence of events that might occur in foreseeable misuse. If a sequence of events following foreseeable misuse can lead to a hazardous situation in the presence of a single fault (or multiple faults), then you have your answer.

Tom
 

MediKit

Starting to get Involved
Hi all, my first post on this forum. I understand this is an old thread but interesting.

Peter, you mention the following which makes sense.

Misuse normally falls in a probability range that is below normal use but above typical single fault conditions: e.g. 0.01 times / procedure, but if there are 200 procedures / year, it means it happens 2 times / year. SFC rates are typically 0.01~0.001/device/year.

However, what about a misuse that can disable a risk control? For example, consider the following:
1) A device (with software) controls heating applied to the patient.
2) It has a temperature sensor to detect overheating of the patient and cut off heating to prevent a patient burn (serious harm).
3) However, the temperature sensor is a detachable probe, which relies on the nurse to plug it in.
4) Because this relies on user action, the probability of the risk control being disabled is ~1 time / year.
5) To mitigate the misuse, the software continuously monitors the probe connection during operation and alarms if a disconnection is detected.

The probe connection monitor is implemented in the same software as the control system. This type of configuration seems reasonably common, and it seems safe to me. But if we consider the disconnection a misuse, then the probability of harm would be something like control software failure (0.001/year) x probe disconnection (1/year) = 0.001/year, which is unacceptable for a serious injury.
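That estimate can be written out as a short sketch (the figures are this post's own assumptions, and treating the two events as independent is exactly the point under debate, since the monitor shares software with the control system):

```python
# Rough annual probability-of-harm estimate for the heating-device scenario.
# ASSUMPTION: the two events are treated as independent, which is questionable
# because the probe-connection monitor runs in the same software as the
# control system it is meant to protect against.

control_sw_failure_rate = 0.001   # control software failure, per year (assumed)
probe_disconnect_rate = 1.0       # probe left disconnected, per year (assumed)

# Harm requires both: the control fails AND the risk control is disabled
p_harm = control_sw_failure_rate * probe_disconnect_rate

print(f"Estimated probability of harm: {p_harm} / year")
```

If the independence assumption fails (a common-cause software fault takes out both the control and the monitor), the real figure could be closer to the control failure rate alone.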

Is the above analysis correct? Would you consider the system to be unsafe, such that a further control is required? Or would you consider the probe connection monitor algorithm independent of the control, even though they are implemented in the same software? Or would you consider the probe disconnection a single fault instead of a misuse?

Thanks.
 

tomshoup

Having the software monitor whether or not the probe is connected as part of a risk mitigation is flawed. One should assume that the software has a probability of failure of 100%, so there should be a hardware-only circuit that detects the missing probe and prevents the use of the device.

In terms of risk management, when a risk-control measure is added, it should be evaluated for new risks which it might add. Given that this probe is a risk-control measure, it brings along with it the scenario you describe, and the misuse of the temperature probe, whether accidental or willful, needs to be addressed.

Section 15.4.2 of 60601 describes the reliance on thermal cutouts, which is your situation. Section 15.4.2.1 c) applies to your situation since the missing probe can be viewed as the failure of a thermostat and you need an independent safety circuit. Detecting the missing probe and preventing operation would satisfy this.
 

MediKit

Starting to get Involved
Hi tomshoup, thanks for your reply.

However, assuming 100% failure of software does not seem reasonable to me in this situation. I believe cl. 15.4.2.1 c) refers to the thermostat being the normal control device, where a fault of the thermostat leads directly to a hazardous situation. In that case I agree that software self-checking (assuming it is possible) may not be sufficient to protect against a serious harm, and an additional control is required. Actually, even then I am not 100% sure an additional control is required if software can detect the fault, since the protection (software self-checking) is independent of the target failure (the thermostat). We probably need a risk control against the software itself failing during operation, though.

In my example, the temperature sensor is being used as the risk control against failure of the control system (with software). A disconnection of the probe will disable the risk control, but it does not lead to a hazardous situation UNLESS the normal control also fails (a double fault). Would you consider this to be different from cl. 15.4.2.1 c)? Thanks.
 