Information for safety can reduce risk now?

Jean_B

Trusted Information Resource
Disclaimer: I respect Tidge's views and experience enormously. Our concepts probably reach quite similar results, but our philosophies for getting there differ on irreconcilable points. I still regard his view as having a lot of value, and presume mine does too.

TL;DR: Yes, and it always should have been. However, a notification of residual risk is not the same as information for safety. That distinction is, however, intuitively clear.

Information for safety consists of imperative instructions ("do" or "do not"), including "do be aware", where awareness results in desired behavior that prevents harm (with some degree of certainty).

Information for safety is not intended to result in functional effectiveness. The only case where this statement is undone is where absence of effectiveness in itself causes harm. That latter case is called essential performance. Information for effectiveness can thus be information for safety, but not always.

Both kinds of information (as well as information for durability and probably dozens of other kinds, including the very important indications for use and intended use) are found in the set of information that can be defined as accompanying information. That set comprises both documentation (IFUs, manuals, reference sheets, inlays, e-help) and labels. Accompanying information is also often referred to as accompanying documentation (often by standards), or as labelling (most often by regulators).

Labelling is not the same as labels, though. Labels alert people: they draw attention either to a thing/event or to important information about a thing/event. In the same way, many warnings are alerts. When warnings become dynamic and demand (immediate) action, they become alarms.

To me, Protective Measures by Design (PMD) are independent of actors (be they human or computer) for being active/activated, but, as Tidge has stated, can still vary in effectiveness due to the circumstances and sequence/combination of events. E.g. insulation on wires is protective (the design still contains a hazard), but depending on the use conditions and where the insulation breaks, you can have different rates of exposure and perhaps even harm. The presence of that insulation is not dependent on actor action, though.

ISO 14971 considers alarms to be protective measures by design. ISO/TR 24971 gives context through the concept that protective measures either prevent hazards from leading to hazardous situations or prevent hazardous situations from leading to harm. It gives as an example that visual or acoustic alarms alert the user to a hazardous situation, which can then be prevented. Though unstated, it seems to do so also because this alarm information is transmitted circumstantially. To my mind this is not enough; there should also be a consideration of whether an alarm can be missed/ignored or must be taken notice of. An alarm message that needs to be appropriately acknowledged is different from an alarm message that is present but doesn't obstruct actions leading to hazardous situations.

ISO/TR 24971's notes on information for safety do not clearly link it to its place among the definitions, except for reducing the severity of harm. This it does in an example: informing the user of actions to take after harm has been inflicted (successful exposure?).

Also note that in Tidge's view, Inherent by Design (IbD) does not have variance. It is independent of implementation and (run-time) execution.
I do not regard variability and circumstance as leading, but rather the 'faith' one can have in the control. Inherent by design usually doesn't fail unless the product as a whole fails, in a manner which is fail-safe. Protective measures do fail, and often do so fail-safe, but perhaps not always. Refer back to my "can it be ignored" point about alarms. They are, however, somewhat predictable, as they are force-of-nature based, with only exposure being less predictable due to its human dependency.

Information (for safety) needs to be acted upon by someone. To me this can be either a human or a computer (e.g. a feedback loop). The effectiveness of acting upon information is what needs to be evaluated under risk management:
  • Does the actor notice (in time)?
  • Do they understand what they need to do (and what the urgency is)?
  • Can they perform the correct action, and abstain from wrong actions, given the information and what they already know?
For humans, usability engineering would look at this information and evaluate it under the Perception, Comprehension, Action (PCA) cycle.
Computers are then often elevated to PMD due to being perceived as more reliable, though the standard then again recognizes their unpredictable fallibility and discounts perfect reliability. Perception and comprehension tests for non-human actors would have specificity and sensitivity as measures. These would be akin to test method validation with added software validation (with an appropriate amount of boundary and edge case testing).
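As a minimal sketch of what those measures look like in practice (the function name and the counts are hypothetical, not from any standard), sensitivity and specificity for a computerized actor's "perception" stage would be computed from a verification run like this:

```python
def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Detection-test measures for an automated actor.

    Sensitivity: fraction of real hazardous conditions it detected.
    Specificity: fraction of safe conditions it correctly left un-alarmed.
    """
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical verification data: 95 of 100 hazardous events detected,
# 900 of 1000 safe events correctly left silent.
sens, spec = sensitivity_specificity(95, 5, 900, 100)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.95, specificity=0.90
```

Boundary and edge case testing would then probe where those numbers degrade, e.g. signals just at the detection threshold.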

Computerized alarms that block action until acknowledged force the perception stage. Proper acknowledgement and information design help with comprehension. That would be a PMD-level control. A vague alarm light in an odd spot, which can be ignored or missed while the user just proceeds, is more like information for safety in its reliability to change the course of events.
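The forcing-function character of such an alarm can be sketched as a tiny state machine (class and method names are illustrative only, not drawn from any standard or product):

```python
from enum import Enum

class AlarmState(Enum):
    SILENT = "silent"
    ACTIVE = "active"            # raised, not yet acknowledged
    ACKNOWLEDGED = "acknowledged"

class BlockingAlarm:
    """An alarm that blocks device actions until explicitly acknowledged."""

    def __init__(self):
        self.state = AlarmState.SILENT

    def raise_alarm(self):
        self.state = AlarmState.ACTIVE

    def acknowledge(self):
        # Acknowledgement is only meaningful for an active alarm.
        if self.state is AlarmState.ACTIVE:
            self.state = AlarmState.ACKNOWLEDGED

    def action_permitted(self):
        # The forcing function: no further action while unacknowledged.
        return self.state is not AlarmState.ACTIVE

alarm = BlockingAlarm()
alarm.raise_alarm()
print(alarm.action_permitted())  # False: perception is forced
alarm.acknowledge()
print(alarm.action_permitted())  # True: the user has taken notice
```

The ignorable alarm light, by contrast, would have `action_permitted()` return `True` unconditionally, which is exactly why its reliability drops to that of information for safety.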

Thus, given that information for safety can be evaluated and shown to reduce risk, you can claim it as a risk control measure. Not that odd.

But what isn't information for safety? Residual risk without context.

The confusion probably arose because an important part of warnings is noting what the hazard and harm are, just as residual risk notifications to users do in the IFU. Unlike residual risks, warnings should also carry a grade of criticality (DANGER, WARNING, CAUTION, NOTICE) and a call to action (even if the action is to abstain from something or to don some passive Personal Protective Equipment).

Significant residual risk in ISO 14971 is the "'any' residual risk" you must notify the user of. It doesn't need a specified call to action, and its grade of criticality is inherent: it is so critical as to be 'unacceptable' under your risk acceptability criteria. Those criteria derive from your risk management policy, which is informed by the social norms of acceptable risk for the markets and users you distribute to within the intended use; the risk is nevertheless accepted given the benefits that would be absent were it not for your device (with the obligation to be state-of-the-art, even benchmarked against non-typical alternatives).
You do so such that a user, prior to using your product, can decide whether to take upon themselves the inherent risk in the device above the presumed acceptable risk given social norms, choose an alternative with equal or less benefit, or abstain and live without the benefits.
The call to action of significant residual risk is: decide, knowing the additional risks beyond the ones you can presume to be there given the product, whether you want a shot at these benefits by applying the product at all. We then categorize the residual risks to help choose the type of information to put against them: limitations for proven effect, contraindications for definite don't-combines, precautions for preparedness (just remember: pre-caution, pre-cause (of harm), to know you should have done something even prior to harm).
And "warning" is the biggest offender, and remains ambiguous. My intuition says:
  1. Residual risk warnings should probably tell you what to do once something has happened. They would then be the complement to precautions, as they are useful after the harm has been caused.
  2. Information for safety warnings tell you what to do or not do.
The warning ambiguity is probably why you are expected to duplicate on-device warnings in the manual, even when the location or situation of the on-product warning is essential. Any good manual writer will tell you to list warnings at the appropriate level: front of the manual for always in effect, section for circumstance-specific, step for event-specific. That is also what is intuitively accepted and useful.

Yet given all this, we often put them in the same document, categorize them similarly: information for safety (danger/warning/caution/notice) versus residual risk information for deciding whether to use at all (limitations/warnings/precautions/contraindications), and expect people (makers and users alike) to know which they are dealing with.

Thus the regulations and standards themselves were not PCA tested against the diverse set of users, in implicit recognition of the constraints facing everyone in existence.

Apologies if my message does not make sense to you. This has not been PCA tested like an official training should have been.

--Rant over.
 

Tidge

Trusted Information Resource
What a nice chewy reply! I can offer the following extra piece of information that I am balancing with a 14971 perspective: my teams are also responsible for implementing 60601-1-8 compliant alarm systems. That particular standard is much more prescriptive than 14971, in the sense that if you don't establish a particular approach to risk assessment (involving a 'scale of injury' as well as a measure of 'how long before the injury occurs'), you will find it difficult to explain an alarm system in a manner compliant with 60601-1-8. 60601-1-8 explicitly requires an assessment of alarm conditions via a 2-dimensional matrix ("onset of potential harm" x "scale of injury"). The matrix is used to determine the priority of the alarm conditions. For example: an immediate loss of power may have a different priority than a loss of power "many minutes" from now.
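The shape of that 2-dimensional assessment can be sketched as a lookup table. Note the category names and priority assignments below are simplified placeholders to show the structure; the actual categories and table are defined in 60601-1-8 itself:

```python
# Illustrative only: onset/injury categories and priorities are placeholders,
# not IEC 60601-1-8's actual table. Consult the standard for the real mapping.
PRIORITY_MATRIX = {
    # (onset of potential harm, scale of injury) -> alarm condition priority
    ("immediate", "irreversible"): "HIGH",
    ("immediate", "reversible"):   "HIGH",
    ("immediate", "minor"):        "MEDIUM",
    ("delayed",   "irreversible"): "MEDIUM",
    ("delayed",   "reversible"):   "MEDIUM",
    ("delayed",   "minor"):        "LOW",
}

def alarm_priority(onset: str, injury: str) -> str:
    """Look up an alarm condition's priority from the two dimensions."""
    return PRIORITY_MATRIX[(onset, injury)]

# The loss-of-power example: same injury, different onset, different priority.
print(alarm_priority("immediate", "irreversible"))  # HIGH
print(alarm_priority("delayed", "irreversible"))    # MEDIUM
```

The point of the structure is that neither dimension alone determines priority; the same potential injury lands in different rows depending on how soon it would occur.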

I mention this because the disclosure of the priorities involved with the alarm system is 100% Information for Safety, completely independent of the nature of the alarm system... whereas the implementation of the alarm system is actually designed, and must therefore be a PMD. The implementation can (and must) be assessed for both functionality and usability... but let me warn readers that NRTLs are extremely reluctant to certify an alarm system as compliant with 60601-1-8 if formative and summative testing demonstrates that one or more clauses of 60601-1-8 introduce more risk than an alternate approach, despite the explicit allowance of clause 4.5 of 60601-1.

I do believe that the explanation of an alarm signal can fall into an IFS category, but that's because of my related experience with NRTLs on another matter. I'll just repeat myself: the alarm system itself must be treated like PMD. I've no doubt there may be people who think "alarms make things inherently safer"; if there are, I point them to their local copy of Google to search for "Alarm Fatigue".
 