Risk Analysis of Software - ISO 14971:2007


Micked

#21
You're spot on, 1 × 1 will always equal 1...
That's why I am using the strategy of combining a system and a software risk analysis. In the system risk analysis you can always use detectability of the anomaly, or estimate the probability that the operator will do something stupid given wrong input.

If your software system does things on its own, without operator intervention, then you really have to think about hardware protective measures or something else outside your software system.

As the standards are written today, I don't see any other viable way forward than to use the system + software approach.
Other than building a redundant system with non-programmable hardware of course...
 

Marcelo

Inactive Registered Visitor
#22
Hello all

IMHO validation can never be a Risk Control Measure, you have to design some protective measure in there up front.
As a last resort you can always use labeling (ever seen those first pages of warnings in operator manuals?)
You're right, validation cannot be a risk control measure, but you can use validation (and I mean the device validation, related to its intended use) to verify the effectiveness of the risk control measure.

As to probability = 1, I think the problem is how to handle it in the risk management process.
IEC 62304 does not allow us to reduce the probability even if there are tons of protective software mechanisms implemented. That feels unfair somehow.
Huh? How do we assert that the probability is lower? The checksum is still software with a failure probability of 1, so now we have the probability of the event as 1 × 1 = 1... no change
I'm a little confused here... do you know you are talking about two different things?

IEC 62304 says that, if the software SYSTEM, meaning the software as a whole, including the functions of the software and any software risk controls (for example, in this case, two software ITEMS), could fail to behave as specified and this failure could lead to a hazard, then the software cannot be "self-controlled". The risk could then be controlled by another, separate software SYSTEM or by hardware. This also means that, if this software SYSTEM is the only part in the SEQUENCE OF EVENTS that "transforms" the HAZARD into the HAZARDOUS SITUATION, or if the other components in this sequence cannot be estimated, then the probability of this HAZARD turning into a HAZARDOUS SITUATION has to be 1 (worst-case scenario). This is the probability called "Exposure (P1)" in the figure from Annex E of ISO 14971:2007. Note that you CAN reduce the probability P1 if the other events in the sequence of events can be estimated.

Also, in many cases probability P2 (hazardous situation turning into harm) cannot be estimated, and thus the general probability of occurrence of harm (P1 × P2, or whatever function is used) cannot be estimated either, and then you need to focus on the severity (outcome). Please note that this approach is not for software only... this is already dealt with in ISO 14971.
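To make the P1 × P2 arithmetic concrete, here is a minimal sketch. The function name and the numbers are my own illustrative assumptions, not taken from the standard; the multiplicative combination is just one common choice:

```python
# Illustrative sketch of the ISO 14971:2007 Annex E model:
# P1 = probability of the hazard leading to a hazardous situation,
# P2 = probability of the hazardous situation leading to harm.

def probability_of_harm(p1: float, p2: float) -> float:
    """Combine P1 and P2 into an overall probability of occurrence of harm."""
    return p1 * p2

# Worst case: the software failure is the only event in the sequence
# and cannot be estimated, so P1 is set to 1 -- no reduction possible.
print(probability_of_harm(p1=1.0, p2=0.01))  # -> 0.01

# If the other events in the sequence CAN be estimated, P1 may be
# reduced, and the overall probability drops accordingly.
print(probability_of_harm(p1=0.1, p2=0.01))
```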

The example cited from IEC 80002 is a little different. In this case, a software ITEM (not SYSTEM) fails, and the checksum (another software ITEM from the same software SYSTEM, which is also a risk control measure "inside" the software) controls the risk, reducing its probability. This probability reduction has to be a qualitative one because it cannot be quantitatively estimated.

In this case, the checksum is not a software with a failure probability of 1. I think this is the point that is not very clear.
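As a hedged illustration of that separation (all names here are made up, and the standard library's CRC32 stands in for whatever integrity check a real device would use), the protective checksum item can be a distinct piece of code from the item it monitors:

```python
import zlib

# Primary software item (hypothetical): produces and persists the data,
# attaching a CRC32 checksum at write time.
def store(payload: bytes) -> tuple[bytes, int]:
    return payload, zlib.crc32(payload)

# Protective software item (hypothetical): a separate risk control
# measure that detects corruption before the data is used.
def verify(payload: bytes, checksum: int) -> bool:
    return zlib.crc32(payload) == checksum

data, crc = store(b"image slice 001")
print(verify(data, crc))           # intact data passes the check
print(verify(b"corrupted!", crc))  # a corrupted payload is detected
```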
 

icare2much

#23
The example cited from IEC 80002 is a little different. In this case, a software ITEM (not SYSTEM) fails, and the checksum (another software ITEM from the same software SYSTEM, which is also a risk control measure "inside" the software) controls the risk, reducing its probability. This probability reduction has to be a qualitative one because it cannot be quantitatively estimated.
Yes, my interpretation matches. The issue is how to do it? How to qualitatively reduce the probability for the software system from a 1 to an acceptable probability?

In this case, the checksum is not a software with a failure probability of 1. I think this is the point that is not very clear.
If it were hardware control it would be the product of the probabilities... the leap that I made was to consider the software control item to be like a hardware item in order to assess the overall risk.

But if this is not valid, then how?
 

sagai

Quite Involved in Discussions
#24
OK... I went through the same principle not so long ago, so I'll try to explain the way we did it, which has so far received good comments...

First, when I started the risk analysis, I did it as if it had been done at the beginning of the product life-cycle. As a matter of fact, this is where the initial version of the risk analysis should be done (around the architectural/detailed design step), but constant revision of it shall be considered (especially when you have a design change). If you start from that principle, verification/validation cannot really be seen as a way of mitigation, since the product has not been built yet! Types of mitigation can be:

  1. Design choices preventing that specific hazard (or reducing the probability of occurrence)
  2. Addition of a protective measure
  3. Information on safety/hazards clearly passed on to user (instructions for use)
After the product is built, you are right that verification/validation is required to ensure safety and performance of the device, i.e. to verify that the software performs as it was intended to. Verification/validation should always be performed to verify design requirements, but design requirements should consider the outputs from the risk analysis, among other things.

I am not sure it is clear, but this is how I see it... ;-)
Good Luck!
I apologize, but I shall note that this is always a point of confusion in this industry: verification as a word does not go together with intended use. We validate that the intended use is fulfilled and verify that system/software requirements are met. Oh yes, sometimes we interchange them, but these are different things.
 

Marcelo

Inactive Registered Visitor
#25
How to qualitatively reduce the probability for the software system from a 1 to an acceptable probability?
I'm not sure I'm understanding your question... the example is self-explanatory: you use the checksum as a risk control measure (what the example didn't tell you is that usually this has to be segregated from the part it's controlling), and then you can expect the lowering of the probability. Just that.

If it were hardware control it would be the product of the probabilities... the leap that I made was to consider the software control item to be like a hardware item in order to assess the overall risk.

But if this is not valid, then how?
I really didn't understand what you are saying, sorry. If you can elaborate a little, maybe I can answer.
 

Michael Malis

Quite Involved in Discussions
#26
jscholen,
I would disagree; sometimes, especially with physical devices, the severity of an issue can be reduced. For instance, if an injury can be caused by something falling onto a person, we may not be able to reduce the risk that the object will fall, but we may be able to reduce the weight of the falling object (by making the power supply remote), thus reducing the severity of the injury caused. Similarly, our choices of chemicals used in a product can significantly affect the severity of incidents.

:applause:
Yes, if you modify the design of the product you can make it safer.
You also make it a new product design that may be better overall.

However, you are still talking about reducing the probability of occurrence, because your injury can still occur from "falling". You did not fix falling; you just reduced the probability (occurrence) of the injury by modifying the weight that is falling on the individual.

Also, if you are talking about poisoning by chemicals in your second example (the injury may occur from "poisoning" as the hazard), the probability of occurrence will be less if we use alcohol vs. lead...

I hope this helps,
Mike
 

Micked

#27
Let's get back to software now, folks :whip:

Does anyone have a good idea how to handle the very common configuration of a workstation running software that constitutes a medical device?
Just think about all those PACS systems out there...
In this kind of system it is impractical to add another processor for risk control, like IEC 62304 proposes.
How would you segregate the software for risk control, especially if it is a legacy system we are working with?

Let me give you a concrete example:
The software is supposed to write data to a remote database.
One protective measure against losing data is to store the data locally until the database transaction is finished.
The easiest, and I would boldly state, least error-prone way to implement it, would be to let the same software ITEM handle the local storage.
But to implement the wanted segregation, I could implement a separate ITEM in a separate process which handles the local storage. This is no rocket science, but much more inter-process communication and synchronization would be needed. In the end it would lead to a system that is harder to design, implement and verify.

My point here is that the simplest implementation does not give us any "risk control credit", but the complicated solution does.

Reading TR80002 closely, I think I can get some support for reducing probability. According to section 6.2.1.3 "the MANUFACTURER should demonstrate an adequate segregation between the protective measure and software features that provide essential performance".
Defining "adequate segregation", which of course is related to the HARM involved, could be a way out here. The trick will be to define "adequate" in my example above...
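For what it's worth, the simple (non-segregated) version of the protective measure described above can be sketched like this. `LocalSpool`, `send_with_spool`, and the callback are hypothetical names of my own; a segregated design would move the spool into a separate process, which this sketch deliberately does not do:

```python
import os
import tempfile

class LocalSpool:
    """Keeps a local copy of the data until the remote commit is confirmed."""
    def __init__(self, directory: str):
        self.directory = directory

    def hold(self, name: str, payload: bytes) -> str:
        # 1. Persist locally BEFORE attempting the remote transaction.
        path = os.path.join(self.directory, name)
        with open(path, "wb") as f:
            f.write(payload)
        return path

    def release(self, path: str) -> None:
        # 3. Discard the local copy only after the remote commit succeeded.
        os.remove(path)

def send_with_spool(spool, remote_write, name, payload):
    path = spool.hold(name, payload)
    try:
        remote_write(payload)   # 2. The remote database transaction; may raise.
    except Exception:
        return path             # The local copy survives for a later retry.
    spool.release(path)
    return None                 # Nothing left behind: the write was committed.

with tempfile.TemporaryDirectory() as d:
    spool = LocalSpool(d)
    received = []
    leftover = send_with_spool(spool, received.append, "img001", b"xray-bytes")
    print(received, leftover)   # data delivered; no local copy retained
```

Note that this is exactly the "same software ITEM handles the local storage" variant: simple and easy to verify, but earning no segregation credit.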
 

Marcelo

Inactive Registered Visitor
#28
The easiest, and I would boldly state, least error-prone way to implement it, would be to let the same software ITEM handle the local storage.
But to implement the wanted segregation, I could implement a separate ITEM in a separate process which handles the local storage.
Could you specify what is the hazardous situation here?

In the end it would lead to a system that is harder to design, implement and verify.

My point here is that the simplest implementation does not give us any "risk control credit", but the complicated solution does.
I would say that, in general, it IS harder to design, implement and verify software with safety related functions.

Defining "adequate segregation", which of course is related to the HARM involved, could be a way out here. The trick will be to define "adequate" in my example above.
The easiest example of software segregation is to run separate processors for the two functions you mentioned. But I'm not sure this kind of segregation is needed.

In fact, I would say that the initial problem is not how to define the adequate segregation, but to define what needs to be segregated.
 

Micked

#29
Let's see if I get it right here...
The hazardous situation is that the X-ray images from the PACS are lost because the database server crashes or something similar.

The harm is that the patient may need to take new X-ray images and thus be exposed to more radiation. Not a big thing for my client's modality BTW, but it still is an identified hazard. For some other modalities the exposures may be higher.

This is my point, Marcelo: just "adding a processor" is not easy in the workstation world. Even if the PCs and Macs out there have up to 4 or 8 CPU cores, they are not easily available to the application programmer. And I can assure you that programming an application that makes deliberate use of a multi-core CPU is one of the most difficult (error-prone) activities in the SW industry...

To repeat: In this example the simple solution will not receive any "risk control credits", but a complicated solution will. That is counter-productive for safety, in my opinion.
 

Marcelo

Inactive Registered Visitor
#30
This is my point, Marcelo: just "adding a processor" is not easy in the workstation world. Even if the PCs and Macs out there have up to 4 or 8 CPU cores, they are not easily available to the application programmer. And I can assure you that programming an application that makes deliberate use of a multi-core CPU is one of the most difficult (error-prone) activities in the SW industry...

To repeat: In this example the simple solution will not receive any "risk control credits", but a complicated solution will. That is counter-productive for safety, in my opinion.
I do understand that. I only said that the separate processor IS a risk control option, but not an easily implemented one; and, also, not the needed one in general. But you have to know, and justify, why you do not need it.


Back to the example (and I had the idea of using a practical application for this discussion):

The hazardous situation is that the X-ray images from the PACS are lost because the database server crashes or something similar.

The harm is that the patient may need to take new X-ray images and thus be exposed to more radiation. Not a big thing for my client's modality BTW, but it still is an identified hazard. For some other modalities the exposures may be higher
First of all, the "something similar" is a problematic statement here because, regarding software, the sequence of events that led to the hazardous situation is extremely important (I'd say more than in the hardware-only case), because you need to know in what part of the sequence the software failure might contribute to the risk.

To illustrate this, please take a look at the attached file. Please keep in mind that it's only a direct application of the ISO 14971 concepts to the situation you mentioned, not taking into consideration other factors.

In this case as illustrated (and it seems to illustrate the example you mentioned), I think we need more information because, as it's written, we do not know the cause of the database crash. Please note that probability P2 will probably be set to 1 (100% chance) because, if the database images are lost, the harm will surely occur.

But P2 might happen due to a lot of factors...

What you might need to lower, then, is probability P1; but as written, I do not know what the cause of the database failure is (it can be software, it can be hardware). Also, it's interesting to note that even P2 is not very clear, because if we determine that in this hazardous situation the fault was of the hardware, then P2 would also be due to the hardware fault, and then the software would have played no role in the hazardous situation.

You would need, then, to identify a hazardous situation which would be caused by software, and then you might need software hazard control.
 
