Understanding FDA draft "Management of Cybersecurity in Medical Devices"

#1
Hi everyone, I'm trying to get an idea of what the new FDA draft guidance, "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices" (issued October 18, 2018), implies for our company and our new designs.

The way I see it, medical devices with complex user interfaces tend to "look & feel" more and more like consumer products (mobile devices, tablets, etc.). This is driving a trend towards advanced graphics frameworks (Qt or similar) running on top of a "big" OS (Android, a Linux-based OS, or a similar commercial OS), since the processors involved are themselves very complex (high-performance graphics engines, multi-core architectures, etc.).
My idea is to start a discussion and maybe collect alternative solutions for this trend (which is at least "new" to me).

a) Does anyone have experience using such complex environments?
b) Is it OK to use Linux?
c) Which one (Yocto, Ångström, a commercial distribution, etc.)?
d) Won't we end up with a heavy workload to validate SOUP, track cybersecurity issues, and handle mandatory cybersecurity updates (and possible recalls!!!)?

---

Let's suppose we are using a Linux-based OS on a multi-core processor, and after the risk analysis we end up with a "Tier 1 - Higher Cybersecurity Risk" classification (for example, because of a communication interface with the HIS, or because we support USB pen-drive connections).

Here come several questions:

1) The draft states: "... protection mechanisms should prevent all unauthorized use (through all interfaces); ensure code, data, and execution integrity ...". What is the meaning of "interfaces" in this context? Is it only communication interfaces (Ethernet, USB, serial, etc.), or does it refer to the HMI as well (touch-screen entries, keys, etc.)?

2) What does it mean when the draft states: "Consider physical locks on devices and their communication ports to minimize tampering"? Does this mean locking access to the communication ports with a key, or maybe enabling those ports only via a kind of authentication dongle?
Does anyone have an example of a device doing something like this?

3) Section B.1 says "Design the Device to Detect Cybersecurity Events in a Timely Fashion", and point (b) says "Devices should be designed to permit routine security and antivirus scanning". Really?? Should we put an antivirus inside the device? Are there any alternatives?

(Sorry if my post is messy; my understanding of the subject is also kind of messy at the moment.)
I'd appreciate any comments you may have!
 

yodon

Staff member
Super Moderator
#2
a) yes... but complexity is not necessarily a factor - a 'simple' interface can be vulnerable
b) I don't see why not
c) no specific experience
d) yes, but it should be commensurate with the risk. If someone hacking the device could lead to (potential) death or serious injury, or to a breach of medical information, you should WANT to take appropriate measures. If you follow 62304, you're still expected to do a lot of that anyway.

1) I think both.
2) I think both could be reasonable; you'd need to consider this on a case-by-case basis.
3) Potentially, if it makes sense. An embedded application is treated much differently than a desktop application. Bear in mind they're trying to be general here, covering ALL types of software, including stand-alone software.
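On (3): as a lighter-weight alternative to a full antivirus engine, many embedded devices do periodic file-integrity checks against a known-good manifest instead. A minimal sketch of that idea (the manifest format and file names here are invented for illustration, not from the FDA guidance):

```python
# Sketch of a file-integrity check as a lighter-weight alternative to a
# full antivirus engine on an embedded device. The manifest format and
# file names are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: dict[str, str], root: Path) -> list[str]:
    """Compare files under `root` against known-good digests.

    Returns the paths that are missing or whose digest differs, i.e.
    candidate cybersecurity events to log and report.
    """
    bad = []
    for rel, expected in manifest.items():
        p = root / rel
        if not p.is_file() or sha256_of(p) != expected:
            bad.append(rel)
    return bad
```

A check like this runs at boot and/or on a schedule and only detects modification of known files, so it complements (rather than replaces) the update-signing and access-control measures discussed elsewhere in the guidance.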

Yes, applying cybersecurity to medical devices is in its infancy and so there will be dirty diapers. :) Approach it all from a risk-based, common-sense perspective. What makes sense for your device? What's the potential harm (including access to protected info - you don't want to run afoul of GDPR!!) if hacked? What are the vulnerabilities and how have you mitigated likelihood of a breach?
 

pmg76

Registered
#3
Thanks Yodon for your answer!

I think one of the key questions here is:
"Are vulnerabilities only applicable when connecting the device using any communication port (ethernet, USB, serial, etc.) that is accessible without physically disassembling the device?"

This question has everything to do with the "Tier 1 Higher Cybersecurity Risk" definition:
"The device is capable of connecting (e.g., wired, wirelessly) to another medical or non-medical product, or to a network, or to the Internet"

Nowadays a lot of microcontrollers/microprocessors can be updated over JTAG, USB, and similar interfaces via a bootloader pre-loaded in internal ROM. This means you could connect the device to another, non-medical device and modify the firmware. However, these interfaces would not be exposed to the outside world. Would this be considered a cybersecurity risk? What do you think?
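To make that concrete: even if a debug/update port is reachable, a bootloader that authenticates images before accepting them reduces the firmware-modification risk. A rough sketch of the check — note that real secure boot uses asymmetric signatures (e.g. RSA/ECDSA) with a public key burned into ROM; the HMAC here is only a stdlib stand-in for the idea:

```python
# Sketch of a bootloader-style image check: before accepting a firmware
# update over USB/JTAG/serial, verify it was produced by someone holding
# the manufacturer's key. Real secure boot uses asymmetric signatures
# (RSA/ECDSA) with the public key in ROM; HMAC is a stdlib stand-in.
import hashlib
import hmac

def accept_update(image: bytes, tag: bytes, device_key: bytes) -> bool:
    """Return True only if `tag` authenticates `image` under `device_key`."""
    expected = hmac.new(device_key, image, hashlib.sha256).digest()
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(expected, tag)
```

With a check like this, physical access to the update interface alone is not enough to load modified firmware, which is one way to argue the residual risk in the file.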

Also, if Linux is considered a very vulnerable operating system (using the rationale from the beginning, that the software is vulnerable to attackers who can only get in through a communication port), then a possible workaround would be to remove the communication capabilities from Linux and give that responsibility to another, safer operating system running on "separate" hardware.
Does this make any sense?
Would it then be OK to use Linux without worrying about possible cyber-attacks?

[Just thinking out loud]
 

yodon

Staff member
Super Moderator
#4
Clearly, a big concern with a networked device is the potential for using it as an entry point to attack the larger (hospital) system. I think the first part there is trying to address all possible routes. So if you connect your device to a non-medical device (say, a networked computer) to do a software update, you need to consider the possibility of opening Pandora's box (and take measures to prevent that).

For your second part, yes, certainly architecting a system to isolate vulnerabilities is very much a recommended approach.
 
