Software Safety Classification and Legacy Software

Hello,

I would like to discuss my problem with one of our devices.
It is a simple Class I medical device with a small control board running embedded software (released ~2010).
The device has a built-in actuator and can move up and down. Control is via a hand controller.

I am trying to bring the software documentation in line with IEC 62304, section 4.4 'Legacy Software'. The device has a Top Down risk analysis prepared in accordance with ISO 14971.

Thus, in order to meet the minimum requirements of the standard, I need to define the safety classification of the software. According to the standard:
- The probability of a software error is taken as 1.
- Only RISK CONTROL measures that have not been implemented in the SOFTWARE SYSTEM are taken into account.

I have identified all the software risks referenced in the Top Down (for example, the spontaneous movement of an actuator that is controlled by the firmware). Of course, the probability of this happening is quite low.

In the Safety Classification I have to assume that the probability of this event occurring is 100% (probability x, occurrence 5??).
This means that, based on the decision tree from IEC 62304, my software should be classified as C (!) if there is no hardware solution to this problem.

Am I interpreting the standard and the link between IEC 62304 and ISO 14971 correctly?
Even if so, could post-production data evaluation be one of the mitigations that lets me reduce the software class from C to B, or from B to A?
 

yodon

I got a little lost there... but safety class is first and foremost based on the level of risk. If you have a device that can cause no harm whatsoever, there's no way you can walk the tree and conclude Class C (that must result in death or serious injury). To put a finer point on it, even if it can contribute to a hazardous situation, if the risks are still acceptable, it's still class A. Presumably you have your Risk Management process set up where you define acceptable risk levels.

I think where you were going in the post is the opportunity to down-classify software based on controls external to the software (i.e., in your hardware) - which is legitimate.
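To make the tree-walk concrete, here is a rough paraphrase of the Amendment 1 decision tree in code. This is my own sketch, not the normative wording, and the boolean inputs are placeholders you'd answer from your own risk file:

```python
def classify_software(can_contribute_to_hazardous_situation: bool,
                      risk_acceptable_after_external_controls: bool,
                      possible_harm_is_death_or_serious_injury: bool) -> str:
    """Rough paraphrase of the IEC 62304 (Amd 1) classification tree.
    Not the normative text; the inputs come from your 14971 risk file."""
    if not can_contribute_to_hazardous_situation:
        return "A"
    # Evaluate the hazardous situation assuming the software HAS failed
    # (probability of software failure taken as 1); only risk controls
    # EXTERNAL to the software system may be credited here.
    if risk_acceptable_after_external_controls:
        return "A"
    return "C" if possible_harm_is_death_or_serious_injury else "B"

# Hypothetical spontaneous-movement scenario: serious possible harm, but an
# external (hardware) control keeps the residual risk acceptable -> Class A.
print(classify_software(True, True, True))   # A
print(classify_software(True, False, True))  # C (no external control)
```

Note that the occurrence rating of the software failure itself never enters this sketch: that probability is fixed at 1, so you can't buy the classification down with it; only the external controls and the severity of the possible harm matter.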

I would consider probability these days as a combination of P1 (the probability that the hazardous situation occurs) and P2 (the probability that, if it does occur, the harm results). For software, P1 must be your highest number. P2 can vary based on your device. The risk is then however you combine P1*P2 with the severity. ISO/TR 24971 gives a much better description.
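A toy illustration of that factoring (the bins, scales, and acceptability rule below are invented; use whatever your own risk management plan defines):

```python
def occurrence_rating(p_harm: float) -> int:
    """Map a probability of harm onto a 1..5 occurrence rating (toy bins)."""
    for upper, rating in [(1e-6, 1), (1e-4, 2), (1e-2, 3), (1e-1, 4)]:
        if p_harm <= upper:
            return rating
    return 5

P1 = 1.0      # probability the hazardous situation occurs (1 for software)
P2 = 1e-5     # probability that, given the hazardous situation, harm results
severity = 4  # on this example's 1..5 scale

rating = occurrence_rating(P1 * P2)
# Invented acceptability rule, purely for illustration:
acceptable = severity * rating <= 8
print(f"occurrence={rating}, severity={severity}, acceptable={acceptable}")
```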

Post-production data cannot be a mitigation; it's just data. It could, possibly, allow you to justify lowering your probability numbers (P2, since we're still talking about software). You might INCREASE your severity with new data, but hopefully those risks are sufficiently mitigated to reduce the likelihood.
 

Tidge

I'll make one additional contribution: there is another option under 62304, although I have only rarely seen it chosen:
  • A responsible party can (metaphorically) throw up their hands and say "we didn't want to spend too much time thinking about the classification, so we chose to treat the SSSC as Class C"
This advice is parallel to the FDA's (older) "Level of Concern" approach: You can always choose to generate a more complete set of documentation. This doesn't relieve anyone from doing a 14971 risk analysis, but frankly: if a team is going to spend months arguing about the SSSC, I would just as soon go ahead and generate all the documents necessary for Class C, as it will make the development and maintenance smoother. If the regulatory affairs group changes its mind, you don't have to submit all the extra paperwork.

There is, IMO, a near-zero risk that any external reviewer would point to a developer's internal documentation (even as part of a submission) and hold it against them. The absolute worst thing I can imagine from "over-classifying" is that it could slow down the review of a submission, but I can almost guarantee that a submission with a "lower" SSSC and any daylight between the classification and the paperwork in the submission is much more likely to cause a delay.
 
Hello, thank you for your replies. I will try to explain where my confusion lies.
I will describe an example:

The Top Down says that there is some risk connected with the software. This risk has a score of Severity: 4/5, Occurrence: 2/5, which in my case means it is still acceptable according to the Risk Management Plan. There is also a risk control measure defined in the Top Down that lowers the Occurrence to 1. This control measure is not software/hardware.

Then I start to prepare the Software Safety Classification. I consider this risk and try to classify the software (A/B/C).

I now see two ways:

1. My Top Down says that this risk has a severity of 4/5 and an occurrence of 1/5. According to my Risk Management Plan, this means the software shall be classified as A.
This decision is made based on this note from the standard:
"the SOFTWARE SYSTEM can contribute to a HAZARDOUS SITUATION which does not result in unacceptable RISK after consideration of RISK CONTROL measures external to the SOFTWARE SYSTEM."

2. My Top Down says that this risk has a severity of 4/5 and an occurrence of 1/5.
I see the note that "Probability of software failure shall be assumed to be 1". So I take my risk again, forgetting about the occurrence rating from the Top Down, because I assume the probability is 1 and therefore the occurrence is 5.
According to my Risk Management Plan, this means the software shall be classified as C, because I have a risk with a score of Severity: 4/5, Occurrence: 5/5 (unacceptable). Both readings are sketched below.
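To put numbers on it, here is a toy sketch of both readings (the acceptability rule is invented, just to illustrate my matrix logic):

```python
SEVERITY = 4

def acceptable(severity: int, occurrence: int) -> bool:
    # Invented threshold standing in for my acceptability matrix
    return severity * occurrence <= 8

# Reading 1: keep the occurrence reduced by the external risk control measure
print(acceptable(SEVERITY, 1))  # True  -> acceptable risk -> Class A?
# Reading 2: force occurrence to 5 because P(software failure) is assumed 1
print(acceptable(SEVERITY, 5))  # False -> unacceptable    -> Class C?
```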

Where is the mistake?
 

Tidge

The Top Down says that there is some risk connected with the software. This risk has a score of Severity: 4/5, Occurrence: 2/5, which in my case means it is still acceptable according to the Risk Management Plan. There is also a risk control measure defined in the Top Down that lowers the Occurrence to 1. This control measure is not software/hardware.

What is the risk control if it is not in software nor hardware?

Where is the mistake?

I hesitate to accept the use of the word "mistake". There are fundamentally two factors to consider when deciding on the correct software system safety classification:

(1) The software system has been allocated (via the design) functions to reduce risk.

There are many, many ways this happens. Typically, "code is written" to "do a thing". The elements of the code that "do the thing" are the elements of the software architecture which will bear the burden of the higher SSSC. A simple example: the software monitors line voltage and switches to battery / notifies the user that the unit has become unplugged.
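A hypothetical sketch of that kind of function (names, threshold, and the hardware hooks are all invented for illustration):

```python
LINE_VOLTAGE_MIN = 100.0  # volts; invented threshold

def check_power_source(read_line_voltage, switch_to_battery, notify_user):
    """The code that 'does the thing' - it is this element of the
    architecture that bears the burden of the higher SSSC."""
    if read_line_voltage() < LINE_VOLTAGE_MIN:
        switch_to_battery()
        notify_user("Mains power lost - running on battery")

# Toy usage with stand-in hardware hooks:
check_power_source(read_line_voltage=lambda: 0.0,
                   switch_to_battery=lambda: print("switched to battery"),
                   notify_user=print)
```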

(2) The software system can contribute to a hazardous situation in some way, even if it otherwise would not be "designed" to do this.

A possible analogy is how medical electrical devices need electricity to function, but we recognize that the electricity can "escape" and harm users/patients. We don't plan to shock patients (or provide a discharge path through patients), but we recognize that this is a possibility. A classic example in the realm of software comes from the AED whose software could go into a test cycle, which prevented the AED from being used to save a patient in cardiac distress. The "test function" barely qualified as a risk control on its own (although I can imagine a thought path that leads to a "self-test" being considered a risk control), but when it has the possibility of interfering with the intended use of the medical device, it is going to get a higher SSSC.

Special comment: Occasionally software systems can contribute to risk even if there is no code written specifically for a function. For example, resource contention, race conditions, asynchronous accesses, etc. can arise as "side effects" of using software.
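A generic illustration of that kind of side effect (nothing to do with any particular device): two threads doing an unsynchronized read-modify-write on shared state can lose updates even though neither was written to do anything hazardous.

```python
import threading

counter = 0  # imagine a shared position or dose counter

def bump(n):
    global counter
    for _ in range(n):
        counter += 1  # read-modify-write without a lock: not atomic

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# May print less than 200000 on some runs/interpreter versions because
# increments were silently lost in the race.
print(counter)
```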

NEVER FORGET: The SSSC is fundamentally a tool that describes the minimal amount of documentation necessary to provide objective evidence that the manufacturer understands the software at an appropriate level that is commensurate with the risk of the medical device. I catch no small amount of side-eye when I describe our "Class A" software as being "software that we don't really need that much evidence to convince ourselves is doing what we need it to do (because it isn't going to hurt anyone)." As soon as a programmer (of class A software) gets insulted by this characterization, I ask them to show me the software architecture!
 