@ Peter
Anyhow, you have to verify the effectiveness first and then record the results. But you are correct to point out that in most cases this verification is subjective, and most manufacturers do not have supporting evidence. For example, the probability is reduced from P=4 to P=2 because of the warnings in the IFU (implying the warning is 99% effective).
If we verify the mitigation(s) used for each risk, a heavy workload can be expected. So what is the best practice in this regard?
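To make the re-rating concrete, here is a minimal sketch of what the P=4 to P=2 example above amounts to numerically. The 1-5 scales, the severity value, and the function names are illustrative assumptions for this post, not taken from ISO 14971 or any particular manufacturer's procedure:

```python
# Hypothetical sketch: re-rating a risk after a control measure
# (e.g. an IFU warning), as in the P=4 -> P=2 example above.
# The 1-5 scales and the assumed severity are invented for illustration.

def risk_score(probability: int, severity: int) -> int:
    """Risk as probability x severity on illustrative 1-5 scales."""
    return probability * severity

severity = 3              # assumed severity of harm (unchanged by the warning)
p_before, p_after = 4, 2  # probability before/after adding the IFU warning

print(risk_score(p_before, severity))  # 12
print(risk_score(p_after, severity))   # 6
```

Note that only the probability changes here; the point of the thread is that the claim "the warning moves P from 4 to 2" is exactly the step that needs objective evidence.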
@ Marcelo
I don't quite get your point. The residual risk evaluation is related to risk acceptability; the effectiveness is the reason the risk can be reduced to the defined acceptable range.
@ Marcelo
The verification of effectiveness is related to risk acceptability, so it's not related to the risk (probability x severity) per se. If you can show that the risk control measure is acceptable based on your risk acceptability criteria, you do not need to re-check the risk (but obviously, in this case, you should not be using your risk matrix as your risk acceptability criteria, which unfortunately most people do :-/).
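The separation being described can be sketched as follows: the matrix cell (P x S) is one thing, and the acceptability decision is a separate policy applied to it. The threshold value and function names below are invented for illustration, not from any standard or the poster's actual criteria:

```python
# Hypothetical illustration of keeping risk acceptability criteria
# separate from the P x S matrix itself. The threshold is invented.

def risk_score(probability: int, severity: int) -> int:
    """The matrix cell: probability x severity on illustrative scales."""
    return probability * severity

def is_acceptable(probability: int, severity: int, threshold: int = 8) -> bool:
    """Acceptability is a separate policy decision (the threshold),
    applied to the score -- not a property of the matrix itself."""
    return risk_score(probability, severity) < threshold

print(is_acceptable(4, 3))  # False: 12 is not below the threshold of 8
print(is_acceptable(2, 3))  # True: 6 is below the threshold of 8
```

The design point is that you could change the acceptability policy (the threshold) without touching the matrix, which is why conflating the two, as the post warns against, causes confusion.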