If I look through this thread, it seems the key point is the difference between "classification" and actual risk evaluation.
For the purpose of classification it is not unusual to assume something will fail with 100% probability. This is useful as it helps to highlight what is critical, what needs to be watched carefully in tests, and where risk controls should be applied.
However, once you move to the actual phase of adding risk controls and evaluating risks, you then need to switch to realistic values for probability of failure. Otherwise you get nonsensical results and infinite loops.
I believe that is the intention of IEC 60601-1 for essential performance, but it is poorly worded. In the classification phase it says to evaluate the "RISK from the loss or degradation of the identified performance beyond the limits specified".
Whereas after risk controls are implemented it is just "RISK from the loss or degradation of the identified performance".
So it seems the extra words "beyond the limits specified" are intended to mean an assumed loss/degradation with 100% probability.
But that is not clear.
IEC 62304 A1 has tied itself up in knots over the same issue, and it is useful to look at that to understand the parallel problem with essential performance.
In the pre-amendment version, in the initial classification phase you assume the software fails with 100% probability. Then, as a second stage, if you have an external hardware risk control you can drop one class (e.g. Class C to Class B). It's not explicitly stated, but it is obvious that in this second stage the software is no longer assumed to fail 100% of the time. Otherwise, a single external hardware risk control would not be enough to make the risk acceptable. With the external hardware risk control you now have two systems that both need to fail (a 2MOP structure, i.e. two means of protection), which achieves the very low probability expected for high risk devices. For this to work, though, it's necessary to assume that both systems have a fairly low probability of failure, including the software system. You can't assume the software fails at 100%.
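To put rough numbers on the 2MOP argument (the probabilities below are illustrative assumptions, not values from either standard), the difference between the two assumptions looks like this:

```python
# Illustrative values only -- not figures from IEC 60601-1 or IEC 62304.
p_sw_fail = 0.01    # realistic (low) probability the software function fails
p_hw_fail = 0.001   # probability the independent hardware protection also fails

# 2MOP: harm occurs only if BOTH means of protection fail.
p_harm_realistic = p_sw_fail * p_hw_fail   # ~1e-5: very low, as expected for high risk devices

# With the software assumed to fail 100% of the time, the hardware
# control carries the whole burden on its own.
p_harm_100pct = 1.0 * p_hw_fail            # 1e-3: three orders of magnitude worse

print(p_harm_realistic, p_harm_100pct)
```

The product only gets down to acceptable levels when both factors are well below 1, which is exactly why the 100% assumption has to be dropped in the second stage.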
In the A1 version, it starts out the same: for initial classification, assume the software fails 100% of the time. But they wanted to allow risk controls other than hardware, so they said it can be any external risk control measure. But ... this means that weak risk controls could be used, so it needs to be judged whether the risk control is effective. Up to here all OK, but here comes the mistake ... they said to just go back and do the classification again with the risk control in place. The problem is that we are still assuming the software fails with 100% probability, which means reasonable risk controls (like independent hardware protection) are no longer effective. For high severity applications, you would need two independent external protection systems in order to achieve acceptable risk. And if you did apply two external risk controls, the Class would drop from C to A, not C to B. For high severity harm, you can't get to Class B at all in the current version of IEC 62304.
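A quick sketch of why the A1 wording backfires (the control failure probability and acceptability threshold are made-up illustrative values, not from the standard):

```python
# Hypothetical numbers to illustrate the A1 logic flaw, where classification
# is re-run with the software STILL assumed to fail 100% of the time.
p_control_fail = 1e-3   # assumed failure probability of one external risk control
p_acceptable = 1e-5     # assumed acceptable probability for death/serious injury

def residual_risk(n_controls, p_sw_fail=1.0):
    # Harm requires the software AND every independent external control to fail.
    return p_sw_fail * p_control_fail ** n_controls

print(residual_risk(1) <= p_acceptable)  # False: one good control is "not enough"
print(residual_risk(2) <= p_acceptable)  # True: two controls needed -> Class C drops to A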
All of this makes no sense, and it occurs because "100% failure" is carried into the risk control phase, rather than being used only for initial classification.
I've already written a letter to the committee to fix this issue, and I proposed that for the initial phase it's OK to assume 100% failure, and then as a second stage say ... "if an external risk control measure is used, with an equivalent effectiveness as one means of protection in IEC 60601-1, then the software can be reduced one step in class."
It looks like something similar is needed for essential performance, i.e. to make it clear that to determine essential performance you assume 100% failure of the performance-related function, but in the risk control phase you use the actual probability of harm.