Cybersecurity and Risk Management: Loss of confidentiality

DanMann

Quite Involved in Discussions
I'm sorry if this has been answered before, I can't seem to find it in the many topics and guidance documents I've read through.

I'm trying to integrate Cybersecurity risks into a Risk Management system built for ISO 14971 and IEC 62304.

How do you determine the severity of loss of confidentiality of medical information?
e.g. if an unauthorised, external person (e.g. a hacker) gained access to a complete medical record (not including/considering financial information, but including patient-identifying information like name and address)? I don't think it is on the same level as death or limb amputation, but is it as severe as an electric shock or superficial laceration? Is it as serious as a first-degree burn or abrasion? Or does it not fall into the definition of harm at all (injury or damage to the health of people, or damage to property or the environment)?

Does it matter which condition the diagnosis relates to? People with HIV or mental illnesses face terrible stigma, so I assume unauthorised access to that information would be more harmful than access to a cancer diagnosis, which in turn would be more harmful than exposure of a common-cold diagnosis.
Do you lessen the severity (or the occurrence of consequences) if the condition has obvious symptoms (like a missing limb) vs something that is easier to keep private?

Also, should identity theft be included and if so, how severe do you classify identity theft?

I think I understand how things like loss of data, changes to data or ability to remote control a product could lead to a harm that I have a severity score for already (like misdiagnosis, delayed diagnosis), but I'm struggling with this specific aspect.

Thanks in advance.
 

Hemanth Kumar

Registered

Hi,

For vulnerability risk assessment, we use MITRE's Rubric for Applying CVSS to Medical Devices. The rubric is a medical-device-specific CVSS scoring tool that places value on data exposure and patient harm. The FDA also recognizes the rubric as a qualified Medical Device Development Tool (MDDT).
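For a feel of what sits underneath the rubric: the rubric guides how you choose the metric values (e.g. how much weight to give a confidentiality loss on a given device), and the arithmetic that turns those values into a 0–10 score is the published CVSS v3.1 base-score formula. A minimal sketch in Python, with the weight tables taken from the FIRST.org specification:

```python
import math

# CVSS v3.1 metric weights (from the FIRST.org specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}     # Privileges Required
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.5}
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability impact

def roundup(x):
    """Round up to one decimal place, per the spec's Roundup() pseudocode."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, scope, c, i, a):
    """CVSS v3.1 base score from metric letters, e.g. ('N','L','N','N','U','H','N','N')."""
    changed = scope == "C"
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15) if changed else 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * (PR_CHANGED if changed else PR_UNCHANGED)[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if changed else impact + exploitability
    return roundup(min(raw, 10))

# A network-exploitable, no-privileges, confidentiality-only exposure
# (vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N) scores "High":
print(base_score("N", "L", "N", "N", "U", "H", "N", "N"))  # → 7.5
```

Note how a pure confidentiality breach can still land in the "High" band on its own; the rubric's contribution is making the metric choices repeatable for medical devices rather than changing the math.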
 

Tidge

Trusted Information Resource
Short answer: Don't consider confidentiality in a 14971-compliant process. I realize how blunt this statement reads.

AAMI TIR57 recommends that a risk management process for security (rather than safety, per 14971) be established parallel to a 14971-compliant process. This won't help anyone specifically answer the question posed, but it does offer the advantages of not trying to assign "severities of harm" in a clinical sense. The parallel process will certainly direct the implementation of controls around vulnerabilities related to confidentiality, and you can then examine the impact of the controls in the 14971-compliant process.

I see great advantages to establishing a parallel RM process for security to consider vulnerabilities to confidentiality, integrity, and availability in a holistic manner (that is, by viewing the role the device/software plays in different use scenarios). My opinion is that this approach will only work if we recognize the specific elements of a familiar 14971-compliant process that don't directly apply. I will passionately plead to anyone participating in industry and regulatory groups considering the development of standards and regulations to avoid muddying the well-defined analysis of patient safety with computerized systems security.

I believe that there is a potentially useful mode of thinking that is similar to the classical evaluation of (safety) risks from software implementations (e.g. potential bugs) as opposed to safety risks from hardware implementations (e.g. physical performance characteristics of materials).
 

Tagin

Trusted Information Resource
Short answer: Don't consider confidentiality in a 14971-compliant process. I realize how blunt this statement reads.

So, should integrity and availability be considered in a 14971 RM? I imagine so, since integrity (e.g., electrical signal patterns are maliciously altered) and/or availability (e.g., it stops working) of, say, a pacemaker could certainly create harm.

I see how confidentiality could sit outside of 14971, I just wanted to understand how you viewed the other parts of CIA.
 

Tidge

Trusted Information Resource
So, should integrity and availability be considered in a 14971 RM? I imagine so, since integrity (e.g., electrical signal patterns are maliciously altered) and/or availability (e.g., it stops working) of, say, a pacemaker could certainly create harm.

I see how confidentiality could sit outside of 14971, I just wanted to understand how you viewed the other parts of CIA.

I do include analysis around integrity and availability within 14971 analyses, but only for the safety risks related to the medical device itself, with a slight expansion in the software hazard analysis to detail the potential impact (if any) of software system security risk controls that may have been added to the system. The security risk controls could have been added for any reason, but a medical device is presumed (by me, YMMV) not to require security risk controls around confidentiality in order to be safe.

The 14971-compliant analysis is, in my opinion, not required to go into an in-depth analysis of how secure/insecure the device is in the greater context of a healthcare delivery organization... including effectiveness of security controls... except of course within the scope of the intended (medical) use of the device.

I'm not sure if there is an appropriate analogy to be made involving radiated emissions and EM susceptibility for ME devices.
 

colinkmorgan

Managing Director
Great discussion on this! I tend to agree with the parallel process for cybersecurity, as discussed in TIR57. This approach is taken by many medical device manufacturers and seems to be working fairly well. The key element is to ensure the process clearly defines when cybersecurity risks cross the threshold of potentially impacting patient safety. Not all cybersecurity risks impact patient safety, and there are many cases where these types of risks still need to be addressed anyway. A typical flow:
  1. Cybersecurity risk is identified
  2. Using the healthcare CVSS rubric, the risk is assessed
  3. Based on pre-defined criteria, certain risks are evaluated for patient safety using FMEA or similar
  4. Treatment plans are defined and executed
If you look at this from a maturity standpoint, here is an example approach:
  • Maturity 1 – Use CVSS and state any risk that is CVSS High or Critical is required to be evaluated for patient safety
  • Maturity 2 – Use CVSS exploitability sub-score and any risk above a certain numerical value is required to be evaluated for patient safety (defined by manufacturer)
  • Maturity 3 – Custom scoring criteria defined by manufacturer and based on organization learnings and examples. Thresholds defined in the process to determine when something needs to be evaluated for potential impact to patient safety
Also remember, regardless of approach, it's important to define timeframes around these activities to ensure alignment with the FDA's 2016 postmarket cybersecurity guidance.
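The maturity levels above can be sketched as a simple triage function. This is purely illustrative: the threshold values and the function name are assumptions a manufacturer would define in its own process, not figures from any standard or guidance.

```python
# Illustrative triage: decide whether a cybersecurity risk must be escalated
# into the (separate) patient-safety risk process. All thresholds here are
# made-up examples a manufacturer would define, not values from a standard.

def needs_safety_evaluation(maturity, base_score=None, exploitability=None, custom_flag=None):
    if maturity == 1:
        # Maturity 1: escalate anything rated CVSS High (7.0-8.9) or Critical (9.0-10.0).
        return base_score >= 7.0
    if maturity == 2:
        # Maturity 2: escalate on the CVSS exploitability sub-score alone;
        # the 2.8 cutoff is an arbitrary example of a manufacturer-defined value.
        return exploitability >= 2.8
    # Maturity 3: custom criteria supplied by the manufacturer's own scoring scheme.
    return bool(custom_flag)

print(needs_safety_evaluation(1, base_score=7.5))      # → True
print(needs_safety_evaluation(2, exploitability=1.6))  # → False
```

The point of the maturity ladder is that the escalation rule gets more tailored over time, while the downstream patient-safety evaluation (FMEA or similar) stays in its own 14971-style process.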

Colin Morgan, CISSP, CISM, GPEN
 