Assuming a probability of 1 (or any constant probability) has the practical significance of not including probability at all. Better to say that the practical significance is to always assume that the defect will occur, and thus must be addressed. Medical device software is of course allowed to be released with anomalies, but the anomalies have to be disclosed. This is different from mechanical failure modes, where no one has to disclose the probabilities for things like gears wearing... those sorts of things get baked into the service life.
Staying away from AI models: it really is not possible to assign probabilities to software defects, even those that manifest through some interplay with a more physical mechanism (buffers, signal latency, memory storage, register contents, whatever). Every software defect I've ever encountered (again, staying away from AI, LLMs, and hallucinations) was 100% repeatable, even those that were difficult to make occur. But once the conditions were established, the defect would always manifest. This is fundamentally different from a mechanical failure... sometimes one part breaks while an identical part doesn't... the famous bathtub curve comes to mind. Software doesn't have that sort of failure paradigm.
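As a sketch of what I mean (a textbook deterministic defect, not from any particular device): a deadline check against a 32-bit millisecond tick counter. The tick source and function names here are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical 32-bit millisecond tick source (wraps at 2^32 ms,
 * roughly every 49.7 days of continuous uptime). */
extern uint32_t get_ticks(void);

/* Buggy deadline check: direct comparison of tick values.
 * Passes every test run shorter than the wrap interval, then
 * fails deterministically -- 100% of the time -- once `deadline`
 * lands on the far side of the wrap. */
bool deadline_passed_buggy(uint32_t deadline)
{
    return get_ticks() >= deadline;   /* wrong across the wrap */
}

/* Fixed check: subtract first, then interpret the (wrapped)
 * difference as signed. Correct for any interval < 2^31 ms. */
bool deadline_passed_fixed(uint32_t deadline)
{
    return (int32_t)(get_ticks() - deadline) >= 0;
}
```

Nothing fails in any run shorter than ~49.7 days of uptime, yet the buggy check misfires every single time once the counter wraps. The trigger conditions are rare; the defect, once triggered, is certain.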
It is of course possible that the conditions needed to manifest a software defect may be thought to be incredibly unlikely... but that isn't the sort of probability that belongs in a software risk analysis, because the defect will be there until it is eliminated. Trying to argue that there is some less-than-100% probability that a known defect will manifest wastes a lot of time defending the mechanism by which the lower probability was settled on... and now there are at least two things to explain to third parties (the risk analysis and the probability analysis).