Risk Reduction by Risk Control: IEC 62304 Class C

cwilburn

Starting to get Involved
What you state in your post is what is indicated in Section 6.2.2.6 "Process as a risk control measure" of IEC/TR 80002-1. Do you know anyone who has successfully done this and what did they use to show effectiveness?

I find it interesting that the standard suggests providing "evidence that there are no or very infrequent software failures." Most software people know that just because defects aren't found doesn't mean they aren't there; in fact, finding defects is a good sign, because they are always there!

Perhaps the number of defects remaining when the software is released could be used, since finding and fixing defects is part of a rigorous software development process. It might also help to show that the rate of newly introduced defects decreases once you are finished adding new features, which is what you hope happens as you stabilize your software.
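If one wanted to make that argument quantitative, here is a minimal sketch (the release numbers and defect counts are entirely made up for illustration) of the kind of trend one would hope to show, with newly introduced defects falling off after the feature freeze:

Code:
#include <cstdio>

int main() {
    // Hypothetical counts of defects introduced per release, pulled from
    // a defect tracker; the feature freeze happens after release 4.
    const int introduced[] = {14, 11, 12, 9, 5, 2, 1};
    const int freeze = 4;
    for (int i = 0; i < 7; ++i)
        std::printf("release %d: %2d new defects%s\n",
                    i + 1, introduced[i],
                    (i + 1 > freeze) ? "  (post-freeze)" : "");
    return 0;
}

A flat or rising post-freeze trend would undercut the claim that the process is stabilizing the software.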
 

cwilburn

Starting to get Involved
What you state in your post is what is indicated in Section 6.2.2.6 "Process as a risk control measure" of IEC/TR 80002-1. Do you know anyone who has successfully done this and what did they use to show effectiveness?

It's been 2 years since I asked if anyone has successfully used process as a risk control measure. Anyone?
 

Tidge

Trusted Information Resource
It's been 2 years since I asked if anyone has successfully used process as a risk control measure. Anyone?

In my practice, I offer "process as a risk control measure" in the analysis of Medical Device Software, but I have never had a customer who could adequately defend it as a risk control measure. The inadequacy typically arises because medical device software development has not historically leveraged specific, repeatable development processes... and without repeatability it is impossible to assess the adequacy of the "process as a risk control measure."

Here is an example (ultra-simplified, but hopefully illustrative).

Suppose that within the context of a medical device there is an identified risk which is allocated to a specific element of software. The software requires a function to perform sorting operations, and in this ersatz medical device the sorting algorithm is the mechanism by which the identified risk is made acceptable, assuming the algorithm's run-time is no worse than n log n. If the development process explicitly requires the use of such a sort (e.g. heapsort) via a library function, and that library function has been demonstrated to be an acceptable 'control' for the identified risk, then "Software Process" would be a defendable risk control.
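As a minimal sketch of what such a process-mandated library sort might look like (the function and variable names here are my own invention, not from any real device), C++ is convenient because the standard itself guarantees the worst-case complexity:

Code:
#include <algorithm>
#include <vector>

// Sorts samples in-place using the standard-library heapsort.
// Per the C++ standard, std::make_heap is O(n) and std::sort_heap is
// O(n log n) in the worst case, so the combined bound is O(n log n),
// meeting the run-time requirement the risk control depends on.
void sort_samples(std::vector<double>& samples) {
    std::make_heap(samples.begin(), samples.end());
    std::sort_heap(samples.begin(), samples.end());
}

The complexity guarantee then comes from the library's documented specification rather than from testing a hand-rolled sort.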

In my experience there are two primary roadblocks:
1) Software developers tend to develop/modify software solutions in an ad hoc manner.
2a) It is time consuming to generate objective evidence that library functions can specifically address particular risks.
2b) It is not trivial to identify risks that can be directly addressed by such library functions.

When confronted with the effort required by (2a) and (2b), and the pull of (1), most software development teams default to direct testing of the software implementation.
 

Peter Selvey

Leader
Super Moderator
I think the use of a library function as a risk control is more in line with "classic" risk management than using a process. In hardware it is the same as using an off-the-shelf part that has specifications that fit the risk control needs.

A "process" refers to the support structure that sits outside the raw decision to use a library function or write a line of code. I think in the real world, the risk will always be a function of both parts, a literal "object" which is implemented (which can be lines of code in software, library functions or purchased controls), and quality of the the process that supports the decision, implementation and testing.

The situation is also confusing because sometimes there are very "small" discrete risk controls which can easily be referred to from the risk management file while other times we are talking about the failure of larger systems where there is no discrete risk control and really only the process to improve the quality and hence reduce the risk.

To give an (embarrassing) example from my own experience: in one of my testers, I used risk analysis and determined that when the user sets the output off, there should be two independent methods to shut off the power, to protect against a hardware fault. In software this was done by setting the DAC to zero and, separately, turning off the timer that drives the pulses needed for an output. Later I found that when the user changed the mode, there was a line of code that set up the timer and turned it back on. So the risk control was not effective, due to this software bug. For a while it stayed hidden because, as long as there were no faults in the system, everything worked normally. It only became apparent later when one device developed a fault in the power supply and the output was on when it should not have been, which then triggered the analysis.
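A hypothetical reconstruction of that kind of bug (every name below is invented for illustration, not taken from the actual device) might look like this:

Code:
#include <cstdio>

enum class Mode { A, B };

static bool dac_at_zero   = true;   // shutoff #1: DAC driven to zero
static bool timer_running = false;  // shutoff #2: pulse timer stopped

// Risk control: two independent shutoffs when the user sets output off.
void set_output_off() {
    dac_at_zero   = true;
    timer_running = false;
}

// The latent defect: changing mode reconfigures the timer and
// unconditionally restarts it, silently defeating shutoff #2.
void set_mode(Mode /*m*/) {
    // ...set the timer period for the new mode...
    timer_running = true;   // BUG: runs even when output is meant to be off
}

int main() {
    set_output_off();
    set_mode(Mode::B);      // user changes mode while the output is off
    std::printf("DAC at zero: %s, timer running: %s (output is off)\n",
                dac_at_zero ? "yes" : "no",
                timer_running ? "yes" : "no");  // timer prints "yes"
    return 0;
}

As long as the DAC path is healthy the device behaves normally, which is exactly why the defeated second shutoff only surfaces under a hardware fault.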

Now, the question is whether a 62304 system would have detected and fixed the problem earlier, which I think is true. As such it would have a role in risk reduction, and is therefore a genuine risk control measure.

But again there are two parts: one is the risk control object (the double switch in software) and the other is the process (systematic, controlled verification of correct, reliable implementation). So strictly speaking the risk control measure is both of these parts. But normally, in the case of simple risk controls like this, we just refer to the risk control object and assume that 62304 will be there in the background.

That's an example of a small, discrete risk control that can be easily referred to from a risk management file.

At the other extreme, consider complex software that analyses a 12 lead ECG and determines a diagnosis such as an arrhythmia or elevated ST segment. There is a risk that a software bug gives a misdiagnosis. But typically in that case, there is no simple discrete risk control to refer to. Instead, the process itself will contain a large number of small discrete activities which are in effect risk control measures but would be too numerous to refer to from the risk management file. So just referring to the process makes sense, for that situation.

Now, at the risk of making this already-too-long post even longer: the example from the tester is important precisely because it was a hidden function, something that only gets used in an abnormal situation. The result is that it is easy to overlook whether this function is correctly implemented, as happened to me, and I have also found many similar cases in functional safety testing of dialysis machines, infusion pumps, surgical lasers, NIBPs and so on: software bugs in protection systems that only get triggered in fault conditions and hence don't get a lot of attention in the normal design process.

In that sense, having a risk management system point to small discrete risk controls and then being forced to check they are implemented properly has a lot of value. On the other hand, having a risk management system vaguely point to another process provides virtually no value at all. So there is a possibility that in the future risk management might be restructured to eliminate such useless linkage. But the key point here is to highlight that risk management is most effective when it is helping in rarer, unusual or abnormal situations, rather than the main function of the device, which is better handled by general design processes as per 13485, 62304.
 

Tidge

Trusted Information Resource
I think the use of a library function as a risk control is more in line with "classic" risk management than using a process. In hardware it is the same as using an off-the-shelf part that has specifications that fit the risk control needs.

A "process" refers to the support structure that sits outside the raw decision to use a library function or write a line of code. I think in the real world, the risk will always be a function of both parts: a literal "object" which is implemented (which can be lines of code in software, library functions or purchased controls), and the quality of the process that supports the decision, implementation and testing.

I think Peter and I are aligned. The specific point I try to reinforce about "process as a risk control" is this: if someone claims "process as a risk control", then the process must have both pre-defined inputs and outputs, and there must be evidence that the process is always followed.

My experience is that most 62304-inspired development efforts have unique development and test plans, with custom solutions. This approach leads much more naturally to direct testing of the software solutions as the verification of effectiveness. It is much more straightforward to examine the evidence provided by direct testing of specific solutions than it is to examine the evidence that a given process is an effective risk control. One analogy I use is that this effort (claiming "process" as a risk control) is akin to validating a one-off process versus verifying a specific output.
 

Tidge

Trusted Information Resource
What you state in your post is what is indicated in Section 6.2.2.6 "Process as a risk control measure" of IEC/TR 80002-1. Do you know anyone who has successfully done this and what did they use to show effectiveness?

I'm not sure this will be of much interest to most people, but I found some old notes on my attempts to understand where the concept of "(software) process as a risk control measure" in TR 80002-1 may have come from (or at least, what it shares some DNA with), since this type of risk control clearly doesn't fit in the classic IBD/PMD/IFS framework.

In ISO/IEC Guide 63, section 4.6.2, there is this tidbit (emphasis mine):

"Examples of risk control measures may include the following:
–rigour of applied processes in design development and manufacturing: it is usually assumed that the more rigorous the processes used in the design and development or manufacturing, the lower the probability of systematic faults being introduced or remaining undetected;"

...and from a previous life, I know that "rigour of process" is a paraphrased risk control within the GAMP 5 step of Performing Functional Risk Assessments and Identifying Risk Controls (in section 5.3), e.g.:
– application of external procedures,
– increasing the extent or rigor of verification activities,
– increasing the number and level of detail of design reviews, etc.

I want to note that the section of GAMP 5 concludes with "where possible, elimination of risk by design is the preferred approach."
 