Reconciling FMEA RPN ratings with Risk Acceptability

Enternationalist

Involved In Discussions
Personally, the usefulness of "bright line" methods as we're discussing them more or less begins and ends at creating a consistent process across large numbers of people when all or most of them do not have good statistical knowledge. ISO 14971 can be read to imply this approach in its call for clear "criteria for acceptability", though in reality I think that's just a concession to the lazy approach to these processes that is the norm.

Usually, if I'm going for that sort of method at all, I prefer to be extremely conservative - the bright line is for things that are definitely fine. Anything even remotely ambiguous should be subject to something deeper, looking at real values and real data. Basically, an easy method for the lazy to use to discard trivial cases.

In the case of estimating probability in particular, I find things get interesting. In most real-world cases, especially with novel designs and small teams, it's pretty much untenable to have robust data for every element in a sequence of events. I usually end up going for an estimate that is conservative and explicitly labelled as an estimate (or, if it's a guess, which sometimes they are, being upfront about that), with more realistic probabilities contingent on getting data. But I still often find myself in an uncomfortable spot where the analysis is either so rigid and resistant to "common-sense" reasoning that it doesn't really help the design process, or so underbaked that it may as well have been someone saying "I reckon this one's pretty bad".
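To make that concrete, here is a minimal sketch of what I mean by a conservative estimate for a sequence of events; the event names and numbers are hypothetical placeholders, not data from any real analysis:

    # Sketch: a harm that only occurs if a sequence of events all happen.
    # Each per-event probability is a conservative upper bound, or an
    # explicit guess flagged as such, pending real data.
    sequence_of_events = {
        "initiating fault occurs": 1e-3,                # upper bound (placeholder)
        "fault is not caught by self-test": 1e-1,       # guess - needs test data
        "user is exposed while fault is active": 1e-2,  # conservative usage estimate
    }

    # Assuming the events are independent, the probability of the whole
    # sequence is the product, and a product of upper bounds is itself an
    # upper bound - so the combined figure stays conservative.
    p_harm = 1.0
    for event, p_upper in sequence_of_events.items():
        p_harm *= p_upper
        print(f"{event}: <= {p_upper:.0e}")

    print(f"P(harm via this sequence) <= {p_harm:.0e}")

The point isn't the arithmetic, it's that every number carries a label saying whether it's data, a bound, or a guess, so the downstream acceptability call inherits that honesty.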

@Bev D , I'd be eager to hear if you have found a good consistent approach when it comes to the meat and potatoes of dredging through sequences of events in the context of initial design & development.
 

Bev D

Heretical Statistician
Leader
Super Moderator
Initial design and development is not the place for thorough, rigorous, statistically sound experimentation on every failure mode. BUT the focus should still be on exploring the high-severity failure modes, and that can be done with good engineering principles and directed (worst-case) testing. As designs mature, more thorough testing can and must be done. In many cases I have found that logical scientific and engineering thought can be brought to bear to work through many marginal design failure modes (process and product).

A famous case that I use in my training is United Flight 232, which crash-landed in Sioux City. A rear engine part broke apart in flight and the debris cut through the hydraulic lines in the one place where the redundant lines were all co-located. The engineers involved in the design guessed that the probability of such a failure was "one in a billion". They didn't say one in a billion "what" because they had no data. Even when they did get data (3 previous flights experienced breakups that cut co-located lines before 232, with all souls lost), they didn't update their 'beliefs' nor their designs. Avoiding this highest-severity failure mode didn't require experimentation (much less experimentation on the public) - just thought and respect for physics.

Frankly, in high-severity use markets there is no excuse for anything but rigorous, science- and data-based methods. (There will of course be some missed failure modes and 'probabilities'; the crash of USAir Flight 427 is a prime example. This is often used as an excuse to not even try: "Since I can't get everything, why do I need to try to get most of them?")

And here comes the flaw: It is never about "probability". Humans are terrible at probability, and probability is very tough even when a precise situation is drawn out (think of the Monty Hall game example). In most designs the probability situation cannot be drawn out. The correct statistical method is the frequency of occurrence. THIS can be established through sound experimentation. And large sample sizes can be avoided - in many cases - by worst-case testing…
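As a rough numerical sketch of what establishing a frequency of occurrence can look like (the target rate and confidence below are placeholders, not recommendations for any particular product):

    # Success-run (zero-failure) sample size: the smallest number of trials n
    # such that passing all n with no occurrences supports the claim, at the
    # stated confidence, that the occurrence rate is no worse than max_rate.
    import math

    def success_run_sample_size(max_rate: float, confidence: float) -> int:
        # Solve (1 - max_rate) ** n <= 1 - confidence for n
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_rate))

    # Example: demonstrating at most a 1% occurrence rate with 95% confidence
    print(success_run_sample_size(max_rate=0.01, confidence=0.95))  # 299 trials

Worst-case testing is how you shrink that burden in practice: a unit that survives the harshest credible conditions tells you more than one tested at nominal conditions, so fewer units are needed to make the same engineering argument.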

You are correct that many people are woefully inadequate in practical, statistically sound methods. So are many classically trained statisticians. (No offense intended, but the situations classical statistical training is intended for are not the same as the industrial manufacturing situations we discuss here.) But having researched and trained people in these methods for several decades, I can say there are many sources available and it really isn't that hard. If you are capable of learning a science or engineering discipline, you can learn these methods.
 

Tidge

Trusted Information Resource
I offer many, many agreements, especially with
And here comes the flaw: It is never about "probability". Humans are terrible at probability, and probability is very tough even when a precise situation is drawn out (think of the Monty Hall game example).

I have two common frustrations with (different) groups of colleagues:
  1. The first is the group of folks that want to believe every failure mode is a "black swan", without recognizing that we work at an animal sanctuary that raises black swans. This group's rose-tinted glasses can occasionally be shifted from line-of-sight by demonstration; the more catastrophic the demonstration, the more effective (unfortunately). In slightly more mathematical terms: a single occurrence is sometimes enough to disprove the null hypothesis of "it'll never happen" (see the sketch after this list).
  2. The second is far worse... they (correctly) see the first group as problematic, but their baseline belief is that NO AMOUNT of analysis and data collection can establish degrees-of-belief for failure modes. The worst members of this group are those who refuse to engage with the mathematics of hypothesis testing, even though their central tenet is often "I just don't trust the result." I have repeatedly heard the same statement from multiple "senior" quality engineers: "I don't care if we validated the process, I still don't believe it is going to work."
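A back-of-the-envelope version of the point in item 1, with a made-up claimed rate and exposure count:

    # If the "it'll never happen" belief amounts to a claimed rate of one in
    # a billion per exposure, a single observed occurrence in a realistic
    # number of exposures is already strong evidence against that belief.
    claimed_rate = 1e-9   # hypothetical "never happens" rate
    exposures = 100_000   # hypothetical number of opportunities observed

    # Probability of seeing at least one occurrence if the claimed rate held
    p_at_least_one = 1.0 - (1.0 - claimed_rate) ** exposures
    print(f"P(>=1 occurrence | claimed rate) ~ {p_at_least_one:.1e}")  # ~1.0e-04
    # That value is effectively the p-value against the null hypothesis
    # "the rate really is 1e-9": one observed occurrence is enough to reject it.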
 