New ISO 14971:2019 Harm: unreasonable psychological stress, and cybersecurity

Bill Hansen

Registered
As I'm trying to bring cybersecurity concerns into our safety risk management process, I am struggling with the right way to capture Loss of Confidentiality. Has anyone come up with a solid solution?

I think the addition of "unreasonable psychological stress" in 14971:2019 (see A.2.3) provides a good path. This is supported by 24971's example (table F.1), linking "loss of data confidentiality" to "psychological stress", as well as "deterioration of health"... but the latter has many options in current Harms Lists.

So, to add "psych stress" to a Harms List using a typical 5-point Severity scale, here is my thought. Someone tell me if this is sound, or suggest something better.

  • Unreasonable Psychological Stress, Minor; Severity 2 – patient or user is aware of issue (such as loss of PII), causes stress, distraction. No professional intervention is required.

  • UPS, Major; Severity 3 – issue causes patient/user stress, requiring professional intervention, such as counseling. Temporary condition.

  • UPS, Critical; Severity 4 – issue causes patient/user stress resulting in long-term/permanent effects (PTSD or similar); professional intervention, including psychiatric treatment, required for quality of life.

I don’t see a need for a Severity 5.

This remains consistent with the FDA’s definition of “serious harm”, aligning with S=3 and above: medical intervention is required.
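As a rough illustration (not a prescription), here's how those levels could sit in a harms-list lookup, with the FDA "serious harm" threshold at S=3 and above. All of the identifiers are made up:

```python
# Illustrative sketch only: "psych stress" harms-list entries on a 5-point
# severity scale, per the levels proposed above. Names are invented.

PSYCH_STRESS_HARMS = {
    "UPS-Minor": {
        "severity": 2,
        "description": ("Patient/user is aware of issue (e.g. loss of PII); "
                        "causes stress, distraction. No professional "
                        "intervention required."),
    },
    "UPS-Major": {
        "severity": 3,
        "description": ("Stress requiring professional intervention, such as "
                        "counseling. Temporary condition."),
    },
    "UPS-Critical": {
        "severity": 4,
        "description": ("Long-term/permanent effects (PTSD or similar); "
                        "psychiatric treatment required for quality of life."),
    },
}

def is_serious_harm(harm_id: str) -> bool:
    """FDA 'serious harm' alignment: severity 3 and above
    (medical intervention required)."""
    return PSYCH_STRESS_HARMS[harm_id]["severity"] >= 3

print(is_serious_harm("UPS-Minor"))     # False
print(is_serious_harm("UPS-Major"))     # True
```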

So we have to keep people safe... AND sane!
 

Mark Meer

Trusted Information Resource
As I'm trying to bring cybersecurity concerns into our safety risk management process, I am struggling with the right way to capture Loss of Confidentiality. Has anyone come up with a solid solution?

This is a great question!

I'm dubious about the idea of linking it to psychological stress in the way you've proposed, though, mainly because the link between a compromise of someone's confidential information and the severity levels you've proposed is confounded by so many factors.

My suggestion would be to link severity to the nature of the information that could be compromised, and the potential harmful ways it might be used by a nefarious actor. For example:
- Low Severity: Names, dates (assuming nothing else could be inferred). Outcome of breach: Knowledge that you use the device.
- Medium Severity: General information on common / non-sensitive diagnoses. Outcome of breach: Knowledge that you have a condition.
- High Severity: Home address, sensitive personal details, specific information regarding sensitive diagnoses and/or treatments. Outcome of breach: targeted harassment, fraud, extortion.

Obviously this is just an example, and would vary depending on the nature of the device, and what could possibly be inferred from the information compromised.
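To make the idea concrete, here's one way that classification could be encoded, rating a breach by the most sensitive class of data exposed. The data classes and ratings below are purely hypothetical and would depend on the device:

```python
# Illustrative only: rate a breach by the most sensitive class of data
# exposed, per the Low/Medium/High scheme sketched above. The classes
# and ratings are hypothetical and device-dependent.

DATA_CLASS_SEVERITY = {
    "name": "low",                  # knowledge that you use the device
    "date": "low",
    "common_diagnosis": "medium",   # knowledge that you have a condition
    "home_address": "high",         # enables targeted harassment/fraud
    "sensitive_diagnosis": "high",
}

RANK = {"low": 1, "medium": 2, "high": 3}

def breach_severity(exposed_fields):
    """Severity of a breach = severity of the most sensitive field exposed."""
    return max((DATA_CLASS_SEVERITY[f] for f in exposed_fields),
               key=RANK.get, default="low")

print(breach_severity(["name", "date"]))                 # low
print(breach_severity(["name", "sensitive_diagnosis"]))  # high
```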

Just my suggestion....curious how others approach cybersecurity within their 14971 frameworks...

MM
 

Tidge

Trusted Information Resource
My suggestion is to avoid addressing (non-medical(1)) cybersecurity harms in a 14971-based process. I suggest instead to establish a parallel risk-management process for cybersecurity concerns, and allow the two parallel processes to inform each other. As an example: design choices implemented specifically to address cybersecurity concerns could have an impact on the risk profile for medical harms, and vice versa. This is no different from asking if a 14971 risk control has introduced a new hazardous situation.

(1) I'm simplifying the description of "14971 harms" as "medical", because the medical device industry typically has some appreciation for harms (to patients, users, environment) that is derived from medical professionals, literature, etc. Such people/literature would not generally be able to make an honest assessment of the different types of cybersecurity harms, so my advice is to not confuse the 14971-compliant process with cybersecurity.
 

yodon

Leader
Super Moderator
...so my advice is to not confuse the 14971-compliant process with cybersecurity.

First off, I *completely* agree with your post. 14971:2019 is chock-full of mentions of cybersecurity and, as @Bill Hansen points out, the TR does, in fact, list "psychological stress" as a harm resulting from a data breach (and I also agree with @Mark Meer that linking data exposure to stress is a bit dubious, but if one considers worst case, in terms of the patient, that's understandable, I guess). Obviously there are some devices (implanted pacemakers) that, if hacked, could result in death.

Need to do some more noodling on this to come up with a solid approach. Good discussion.
 

Tidge

Trusted Information Resource
AAMI TIR57 (2016) spells out three areas of concern for cybersecurity (my paraphrasing follows; the terms are formally defined in the TIR):

Confidentiality: The concept of restricting access to and disclosure of information.
Integrity: The concept of guarding against improper modification or loss of system/information.
Availability: The concept of timely and reliable use of system/information.

TIR57 is based on the principles of 14971 (2007): It recognizes the potential overlap with (medical device) safety concerns, but it explicitly considers elements of the three areas above to be worth addressing outside of safety. If any of the areas can be shown to lead to patient/user/stakeholder harm, then I would expect them to appear in a 14971-compliant process. An example of an integrity issue leading to a safety issue could be an infusion pump that is hacked to deliver more of an opioid than intended. A hacked pacemaker that is shut off would be an example of availability leading to a safety issue. If a harm pathway can be established from loss of confidentiality to psychological stress, then I won't stand in the way of addressing it in a 14971 process.

My reading of the discussion in informative annex A.2.3 implies that potential psychological harms are meant to be considered (as part of 14971) in the context of intended use (dare I write "essential performance"?); the example specifically given is of "false positives" from some sort of diagnostic determination. I'm not privy to the discussions of the joint working group so I won't speculate to what level TIR57 was considered.
 

Mark Meer

Trusted Information Resource
...TIR57 is based on the principles of 14971 (2007): It recognizes the potential overlap with (medical device) safety concerns, but it explicitly considers elements of the three areas above to be worth addressing outside of safety. If any of the areas can be shown to lead to patient/user/stakeholder harm, then I would expect them to appear in a 14971-compliant process....

It seems to me that, with respect to risk-management, these can all be considered within a 14971 framework.

Otherwise, I'm curious what is intended by "addressing [confidentiality, integrity, availability] outside of safety"?
Presumably there is still the concept of severity assessment in cybersecurity risk-management, no? If taken outside a context of harm/safety, then what risks are we talking about? Strictly customer satisfaction and product reliability?
 

yodon

Leader
Super Moderator
For one, a data breach may well be considered "acceptable" within the framework of patient harm. But if you consider it in the framework of security, it's pegging out. (And given the fines associated with GDPR, you're darn well going to take actions to prevent.)
 

Bill Hansen

Registered
Good discussion so far. Thanks to all.

I agree with the point that not all security issues are safety issues... TIR57's "crossover" diagram illustrates the concept. We're looking to build a security process that is parallel to safety risk management, with distinct analyses, containing references back and forth when appropriate. As @yodon and @Tidge said, Loss of Availability or Integrity have some obvious related safety risks, like delay of procedure or inaccurate measurements.

Prior to 14971:2019 coming out I'd planned to keep Loss of Confidentiality in the security path, but it seems we're compelled to bridge the gap for that as well. With ransomware attacks as a prime example, how does one capture the safety risk? The possible Harms due to denied service while the HDO is locked down are endless... so that's why I'm leaning toward using Psych Stress. (Selected by our Clinical Expert, of course... it has to make sense! @Mark Meer , your suggestions for levels of severity would factor into the situation for sure.)

I've also struggled with the security process and what to call its version of "severity"... being a Risk Management guy for the last decade, I try to protect 14971's "back yard"... "Impact" seems to make sense. So, the Impact of a Security risk is not necessarily equal to the Severity of a Safety risk. One event could lead to either, or both.
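As a rough sketch of that separation, one triggering event could carry a security "Impact" in the security path alongside cross-referenced safety Severities in the 14971 path. Every field name and number below is invented for illustration:

```python
# Sketch of the "parallel processes" idea: a security risk record holds
# its own Impact rating (NOT 14971 Severity) plus cross-references to
# related safety risks. All names/values here are illustrative.

from dataclasses import dataclass, field

@dataclass
class SafetyRisk:
    harm: str
    severity: int            # 14971 Severity (1-5)

@dataclass
class SecurityRisk:
    threat: str
    impact: int              # security "Impact" scale, kept separate
    related_safety: list = field(default_factory=list)  # cross-references

ransomware = SecurityRisk(
    threat="Ransomware locks down the HDO network",
    impact=5,
    related_safety=[
        SafetyRisk(harm="Delay of procedure", severity=3),
        SafetyRisk(harm="Unreasonable psychological stress", severity=2),
    ],
)

# One event, two ratings: high security Impact, with its own safety risks.
print(ransomware.impact)                                       # 5
print(max(r.severity for r in ransomware.related_safety))      # 3
```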

But I've been told I'm overthinking this...
 

Tidge

Trusted Information Resource
Otherwise, I'm curious what is intended by "addressing [confidentiality, integrity, availability] outside of safety"?
Presumably there is still the concept of severity assessment in cybersecurity risk-management, no? If taken outside a context of harm/safety, then what risks are we talking about? Strictly customer satisfaction and product reliability?

For one, a data breach may well be considered "acceptable" within the framework of patient harm. But if you consider it in the framework of security, it's pegging out. (And given the fines associated with GDPR, you're darn well going to take actions to prevent.)

@yodon has hit the bullseye with his response.

I think it is important to recognize that safety-compliance (in a 14971-context) is 'mandated' through regulatory requirements for marketing medical devices. As regulations are added and modified (such as those that relate to cybersecurity), new standards will have to be developed to allow for consistent application of controls such that risks which are the focus of the new/modified regulations will be controlled in an acceptable manner.

I personally don't think that expanding the safety aspects of 14971 to cover cybersecurity is a good idea. The primary reason for my attitude is that cybersecurity is an important concept that is completely independent of patient/user/stakeholder safety. I expect devices that are not medical devices to also consider cybersecurity, and it doesn't make sense to me to have medical device manufacturers examining a security concept like confidentiality as a medical safety issue while a non-medical device manufacturer 'ignores' this potential for user harm. This is not to say that I'm downplaying the potential liability which can arise from poor (or non-) implementation of risk controls relating to cybersecurity... rather that we medical device professionals should recognize that we need a new tool to analyze these risks rather than reach for our comfortable 14971 tool.
 

cbartlett

Registered
If you check out AAMI ISO DTIR24971:2020, Table F.1 provides some examples, including:

Hazard = Loss of data confidentiality
Sequence of Events = The vulnerability of an unnecessarily opened network port is exploited, leading to disclosure of personal health information
Hazardous situation = Denial of insurance coverage leading to lack of treatment
Harm = psychological stress, deterioration of health

It's a draft, but it works in the meantime until we get the final version.

Personally, I think the distinction between the degrees of stress doesn't add value because it is much more subjective than physical harm, and your risk control measures are probably going to be designed-in cybersecurity features verified by standard testing regardless. As the manufacturer, you have very little ability to control what happens to the data once it is breached, so your RCMs will largely be preventive.

I wouldn't worry about the severity ranking too much; choose a reasonable estimate with your team. The production and post-production risk management reviews will tell you if you were wrong.

I'm not sure how you've defined the criteria for individual Benefit-Risk Analysis, but I think this would be a good candidate for one to plead your case that the risk is acceptable.
 