How to estimate probability in AI risk analysis?

Al2772

Registered
No, we are not considering the PCCP; we are actually in Europe. Here, something similar would be the "substantial modification" route, but that still looks very risky, and it seems too soon to go that way.
No idea how notified bodies might approach this topic of continuously learning software.
 

Ed Panek

QA RA Small Med Dev Company Leader
Super Moderator
That's the holy grail of AI: a model you deploy that improves itself with experience. I don't think any regulators are there yet, or at least not without very tight real-world controls.
 

d_addams

Involved In Discussions
I think what is missing here is that for software risk analysis it is common to assume the probability is 1, so stratification occurs based on severity alone. So 34971 isn't saying "you are forbidden from including probability"; it is more likely assuming that one uses a consistent probability score for software failure modes, so that the stratification (i.e., scoring) will only reflect the stratification of the severities. Assuming a probability of 1 (or any constant probability) has the practical significance of not including probability at all.
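To make that concrete, here is a minimal Python sketch (the severity scale, failure-mode names, and risk_score function are illustrative assumptions, not taken from 34971 or anyone's actual tooling): with a constant probability term, sorting by risk score produces exactly the same order as sorting by severity.

```python
# Minimal sketch: risk scoring with a constant probability for all
# software failure modes. Scales and names here are illustrative only.

SEVERITY = {
    "negligible": 1,
    "minor": 2,
    "serious": 3,
    "critical": 4,
    "catastrophic": 5,
}

P_SOFTWARE = 1  # constant probability assumed for every software failure mode


def risk_score(severity_label: str, probability: int = P_SOFTWARE) -> int:
    """Classic risk = probability x severity; with P constant, the score
    carries no more information than the severity itself."""
    return probability * SEVERITY[severity_label]


failure_modes = [
    ("wrong dose displayed", "critical"),
    ("UI freeze on startup", "minor"),
    ("silent data corruption", "catastrophic"),
]

# Sorting by risk score gives exactly the same order as sorting by severity.
for name, sev in sorted(failure_modes, key=lambda fm: risk_score(fm[1]), reverse=True):
    print(f"{name}: severity={sev}, risk={risk_score(sev)}")
```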
 

Tidge

Trusted Information Resource
Assuming a probability of 1 (or any constant probability) has the practical significance of not including probability at all.
Better to say that the practical significance is that you always assume the defect will occur, and thus it must be addressed. Medical device software is of course allowed to be released with anomalies, but the anomalies have to be disclosed. This is different from mechanical failure modes, where no one has to disclose the probabilities of things like gears wearing out... those sorts of things get baked into the service life.

Staying away from AI models: it really is not possible to assign probabilities to software defects, even those that manifest through some interplay with a more physical mechanism (buffers, signal latency, memory storage, register contents, whatever). Every software defect I've ever encountered (again, staying away from AI, LLMs, hallucinations) was 100% repeatable, even the ones that were difficult to make occur. Once the conditions were established, the defect would always manifest. This is fundamentally different from a mechanical failure... sometimes parts break even though other, identical parts don't... the famous bathtub curve comes to mind. Software doesn't have that sort of failure paradigm.
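A minimal sketch of that determinism, using a hypothetical read_register defect (invented for illustration, not from any real device): the triggering condition may be rare, but given the condition, the failure occurs on every single run.

```python
# Minimal sketch: a latent bounds defect. The triggering condition can be
# arbitrarily rare, yet once it occurs the failure is certain -- run it a
# thousand times and it fails a thousand times.

def read_register(registers: list[int], index: int) -> int:
    # Latent defect: no bounds check. Correct for index < len(registers),
    # so routine testing with in-range indices never reveals it.
    return registers[index]


regs = [0x00, 0x1F, 0xFF]

print(read_register(regs, 2))  # in-range: succeeds on every run

for attempt in range(3):
    try:
        read_register(regs, 3)  # out-of-range: fails on EVERY run, not "sometimes"
    except IndexError:
        print(f"attempt {attempt}: defect manifested (100% repeatable)")
```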

It is of course possible that the conditions needed to manifest a software defect may be thought incredibly unlikely... but that isn't the sort of probability that belongs in a software risk analysis, because the defect will be there until it is eliminated. Trying to argue that a known defect has some less-than-100% probability of manifesting will waste a lot of time defending the mechanism by which that lower probability was settled on... and now there are at least two things to explain to third parties (the risk analysis and the probability analysis).
 