The question in post #25 (Apex Hao) raises a good point.
Single fault safety is built around simple math: if X is very small, then X² is vanishingly small. Specifically, for high-severity harm (death, etc.) there should be two independent systems, each with a small probability of failure, so that the probability of two simultaneous faults is negligible. A reasonable threshold for this to be effective is each system having no more than 0.001 dangerous faults per year. This is not difficult with modern electronics, including software, keeping in mind that many if not most faults are benign. The X² approach has another benefit: it makes the system insensitive to the precise probabilities. They just have to be very small.
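To make the arithmetic concrete, here is a minimal sketch using the 0.001 faults/year figure from above (the 0.003 comparison rate is my own illustrative number, chosen to show the insensitivity point):

```python
# Two protection systems, assumed statistically independent.
rate_a = 0.001  # dangerous faults per year (threshold figure from the text)
rate_b = 0.001

# Rate of both faulting in the same year: X * X
double_fault = rate_a * rate_b  # ~1e-6 per year, i.e. negligible

# Insensitivity to the precise numbers: even if one estimate is off by 3x,
# the combined rate stays tiny.
double_fault_worse = 0.003 * 0.001  # ~3e-6 per year, still negligible
```

The point of the second calculation is that the squared term dominates: modest errors in the individual estimates leave the combined rate far below any plausible acceptability criterion.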
In practice, the failure rate of a single system needs to be small anyway to be economically viable. So, in effect, we are saying: take a single system with "normal" reliability for economic viability, and then double it up. In general that should be fine for high-severity harm.
I think the point Apex Hao is highlighting is that nowhere in the standard is there an explicit statement that for "single fault safe" to be effective, the faults have to be relatively rare. Failure rates on the order of 0.1 are not even in the ballpark.
The "hidden fault" scenario is often misunderstood. It does not mean that if a fault is hidden you have to assume a probability of 1. The correct view is to be aware that double-fault probabilities increase with time squared, the impact of which is not intuitive. Say a system has a simple flat failure rate of 0.02 events/year (this number is just to illustrate the effect). In the first year of use it is 0.02 events per year, and 10 years later it is still 0.02 events per year. Now, if you combine two such systems, the X² effect makes the double-fault rate 0.0004 events per year, but only in the first year. Even though the individual rates are flat, the double-fault rate starts to climb: in the second year it is 0.0008, and after 7 years it is 0.0028 per year (the formula is NX², where N is the number of years). That effect can push the rate above the criterion for acceptable risk. The typical solution is periodic checking of the protection system, which resets the cycle and brings it back to X². But it's important to note that this is just an extra step, a refinement. You have to start with a good X² in the first place before worrying about hidden faults.
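The NX² growth and the reset from periodic checking can be sketched with the numbers above (0.02 events/year per system is the text's illustrative figure; the yearly proof-test interval is my own assumption):

```python
x = 0.02  # flat dangerous-fault rate per system, events/year (from the text)

# Without checking, a hidden fault can persist from any earlier year, so the
# double-fault rate in year n is approximately n * x**2 (the NX² formula).
rates = {n: n * x * x for n in (1, 2, 7)}
# year 1 ~ 0.0004, year 2 ~ 0.0008, year 7 ~ 0.0028

# Periodic checking of the protection system every T years clears hidden
# faults, resetting the accumulation so the rate stays near T * x**2.
T = 1  # assumed yearly proof test
capped_rate = T * x * x  # back to ~0.0004 events/year
```

Note how the year-7 rate (0.0028) is seven times the year-1 rate even though neither individual system has degraded; that is the non-intuitive part the paragraph above describes.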
Software is nothing special. It has to be reasonably reliable in order to be economically effective. As long as normal design controls are applied, and the systems are independent, the X² idea works fine even if there is software involved.
For software, one way to think about it is to compare option (A), two independent systems with 100 hours each of formal software verification, against option (B), a single system with 1000 hours of formal software verification. Although the verification time in option B is much higher, it is still likely to carry more risk than option A. Having two independent systems is by far the most efficient way to make risk negligible.
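To put hypothetical numbers on the A-vs-B comparison (the residual rates below are my own illustrative assumptions, not from any standard; I generously assume the 10x verification effort in option B buys a 10x better failure rate):

```python
# Assumed residual dangerous-fault rates after verification (illustrative only).
rate_100h = 0.01    # per system, after 100 h of formal verification
rate_1000h = 0.001  # after 1000 h -- assumed 10x improvement

risk_option_a = rate_100h ** 2  # two independent systems: ~1e-4 per year
risk_option_b = rate_1000h      # one heavily verified system: ~1e-3 per year

# Option A uses 5x less total verification effort (200 h vs 1000 h),
# yet its double-fault risk comes out ~10x lower than option B's.
assert risk_option_a < risk_option_b
```

Under these (assumed) numbers the squaring effect outweighs even a very optimistic return on extra verification hours, which is the efficiency argument the paragraph makes.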
Note that any reference to having two independent systems is usually only needed for high-severity harm, and there are always special cases where it's not practical to apply. This is just discussing the general background behind "single fault safety".