Cybersecurity within SDLC and Software Unit Classification

AngelRose

QA is a thankless job
Dear all,

My humblest apologies in advance if my understanding of these topics isn't as in-depth as many of yours... I have nonconformities to solve, I'm at my wit's end, and I don't even have hands-on experience with software.

I'm currently exploring two interconnected topics related to software documentation for SiMD (software in a medical device). I’d like to understand better how cybersecurity should be structured within the software documentation... would you say it should have a dedicated structure (meaning its own standalone file)? Should it have its own PDCA cycle, or should it be treated as an integration into the existing IEC 62304 processes? Is cybersecurity expected to be transversally embedded throughout all SDLC documentation, or does it need to be a parallel process?

I would also appreciate any insight for this concrete example.
I'm currently assessing whether it's necessary to add a startup password for a non-networked, simplified HMI medical device used in controlled hospital environments... This device has no network connection, handles no sensitive data, provides no access to the source code, only uses a USB port for downloading non-critical technical logs, and physically prevents access to the controller via USB.

We are considering whether the absence of a boot-time password could be reasonably justified based on the controlled clinical context, the absence of a realistic software tampering risk (on which my NB could understandably push back), and the fact that requiring a password could reduce usability without real security benefit.
A real commercial concern is that clinicians may reject unnecessary barriers to use. We as the manufacturer would prefer to avoid implementing features that add burden without clear added value unless REALLY necessary from a regulatory standpoint.

On a related note, I’m also seeking advice on how to handle software unit classification under IEC 62304... do individual software units need to be classified separately? Based on what I read in point 4.3(b) of the standard, software items either inherit the overall software system’s classification or are declassified with a documented rationale. Is it required to include a unit-level classification matrix? The standard isn't very explicit about this, so any shared experience would be most appreciated.

Thank you so much in advance...
 
Easy question first:

On a related note, I’m also seeking advice on how to handle software unit classification under IEC 62304... do individual software units need to be classified separately? Based on what I read in point 4.3(b) of the standard, software items either inherit the overall software system’s classification or are declassified with a documented rationale. Is it required to include a unit-level classification matrix? The standard isn't very explicit about this, so any shared experience would be most appreciated.
Many development groups simply assign all units the same (most strict) classification, matching that of the system as a whole. This is almost always done because they don't want to put any effort "up front" into segregating an architecture... in my experience this is not just laziness, but sometimes because software development teams can't/won't stick to an architecture, because "we are wizards, our ways are mysterious and unexplainable."

If you decide you *want* to assign different ratings to different parts of the software system, you pretty much have to do it at the architecture level, and treat the architecture as a design input (as you should anyway).
 
I’d like to understand better how cybersecurity should be structured within the software documentation... would you say it should have a dedicated structure (meaning its own standalone file)?
Our approach is a stand-alone cybersecurity plan (addressing both development and postmarket considerations), a stand-alone cybersecurity risk model (based on the MITRE rubric) & a stand-alone cybersecurity report. We combine the SOUP cybersecurity assessment with the 62304-driven SOUP analysis (known issues, etc.). Penetration and fuzzing testing are stand-alone efforts. Other tests for cybersecurity controls are typically done in parallel with other design verification testing. (And remember that the report needs periodic updates to reflect all the vigilance work which may also drive updates to the other materials and drive additional development / test.)

Should it have its own PDCA cycle, or should it be treated as an integration into the existing IEC 62304 processes? Is cybersecurity expected to be transversally embedded throughout all SDLC documentation, or does it need to be a parallel process?
Realize there's a single software development lifecycle for a product so in that regard, everything is "integrated." We have a Cybersecurity Work Instruction that drives our actions, but everything is necessarily done in parallel with 'normal' software development activities.
 
Easy question first:


Many development groups simply assign all units the same (most strict) classification, matching that of the system as a whole. This is almost always done because they don't want to put any effort "up front" into segregating an architecture... in my experience this is not just laziness, but sometimes because software development teams can't/won't stick to an architecture, because "we are wizards, our ways are mysterious and unexplainable."

If you decide you *want* to assign different ratings to different parts of the software system, you pretty much have to do it at the architecture level, and treat the architecture as a design input (as you should anyway).

Good take! This made me smile for how it actually hits home.
So in my particular case, we did define two distinct software systems and assigned system-level classifications (B and A) but, as you said... we have not extended that logic down to software items. I believe this gap is exactly what our Notified Body called out...

On paper the architecture seems modular and reasonably clean, but it isn't treated as a formal input for classification, and unit-level classes and rationales aren't documented. We essentially fell into the situation you described to a T: neat system design w/ blanket classification simply because items aren't traced back to their safety role.

From what I understand now, I just need to map classification at the unit level based on criticality and declassify where justified. Hopefully this is not too heavy since it's mostly formalizing what's already there... thanks!
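In case a concrete shape helps anyone reading along: a unit-level classification matrix is often just a small table kept with (and traced to) the architecture. The items and rationales below are purely invented placeholders for illustration, not a template from the standard:

```
Software System: Therapy Control SW — Class B (system level)

Software Item         | Class | Rationale
----------------------|-------|---------------------------------------------
Therapy control logic | B     | Inherits system class; failure can
                      |       | contribute to a hazardous situation
HMI / display         | B     | Inherits system class; presents values
                      |       | the clinician acts on
USB log export        | A     | Declassified per 4.3: segregated from the
                      |       | control path; failure cannot contribute to
                      |       | a hazardous situation (segregation shown
                      |       | in the architecture)
```

The key point is that each declassification row cites the segregation argument, which only holds up if the architecture is a controlled design input.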

Our approach is a stand-alone cybersecurity plan (addressing both development and postmarket considerations), a stand-alone cybersecurity risk model (based on the MITRE rubric) & a stand-alone cybersecurity report. We combine the SOUP cybersecurity assessment with the 62304-driven SOUP analysis (known issues, etc.). Penetration and fuzzing testing are stand-alone efforts. Other tests for cybersecurity controls are typically done in parallel with other design verification testing. (And remember that the report needs periodic updates to reflect all the vigilance work which may also drive updates to the other materials and drive additional development / test.)


Realize there's a single software development lifecycle for a product so in that regard, everything is "integrated." We have a Cybersecurity Work Instruction that drives our actions, but everything is necessarily done in parallel with 'normal' software development activities.
Thanks a lot, yodon! I have to reiterate I’m not really familiar with cybersecurity, so I admittedly was a bit lost for a moment there. Thank you for helping me ground things.

Your point about there being one lifecycle makes total sense... I might be mentally over-separating it. Got it: so we’re looking at a Cybersecurity Plan, Risk Analysis, Test Reports (in support of risk control measures), and ultimately a Cybersecurity Report that will tie it all together, evolving with post-market inputs.

It sounds straightforward enough, though I would be lying if I said I have an idea of what to write or how to tell if it's good enough for the NB. If anything it still gives me a clearer direction, so thanks again...
 
From what I understand now, I just need to map classification at the unit level based on criticality and declassify where justified.
One word of caution: if your market is the US, the FDA has deviated from the documentation requirements based on 62304 safety classes and gone with a 2-tiered approach: basic and enhanced. If you classify something as Class A (62304) & submit in the US, you may be a little short in the submission. NOTE: 62304 is apparently moving towards the 2-tiered approach as well (presumably to be aligned with the FDA guidance).
 
One word of caution: if your market is the US, the FDA has deviated from the documentation requirements based on 62304 safety classes and gone with a 2-tiered approach: basic and enhanced. If you classify something as Class A (62304) & submit in the US, you may be a little short in the submission. NOTE: 62304 is apparently moving towards the 2-tiered approach as well (presumably to be aligned with the FDA guidance).
Indeed, 62304 is moving towards two 'rigor levels' to replace the three classes. You can never say it out loud in an IEC meeting, but alignment with a certain regulator is what was on everyone's mind.
 
I'm somewhat curious how 62304 will land. Historically, there was pretty good alignment between (old FDA) LoC Minor/Moderate/Major and SSSC A/B/C... including the subtle points about certain things not being mandated for Moderate/B but whose absence made it very difficult to explain other things that were required for Moderate/B.

My recollection is that where the current basic/enhanced FDA guidance differs most from (current) 62304 is that the FDA guidance requires an architecture, whereas current 62304 does not require one for Class A software. The FDA guidance contains approximately two pages explaining what to consider for an architecture, and I can see a LOT of folks with "Class A" software developing heartburn over this point.
 
Per the guidance, the intent is to "facilitate a clear understanding of:
  • The modules and layers that make up the system and software;
  • The relationships among the modules and layers;
  • The data inputs/outputs and flow of data among the modules and layers; and
  • How users or external products, including IT infrastructure and peripherals (e.g., wirelessly connected medical devices) interact with the system and software."
Documenting to a degree commensurate with the risk should not cause heartburn. Unless it's a complete hack, thinking through these things is done anyway. Like anything, there will be folks that overreact and make it a much bigger deal than it needs to be, and no doubt the consultants will stoke the FUD fires. :)
 
I agree with @yodon's post above... it shouldn't cause heartburn, but I know my people. They'll plan on writing a 6-page memo for the FDA explaining why they don't need an architecture before they'd consider developing an architecture.
 
The FDA premarket cybersecurity guidance requires the submission of security architecture views such as the global system view, the multi-patient harm view, and the updatability and patchability view. I have no idea what these views look like. Does anyone have examples of these views?

Any generic example will be helpful to get me started.
 