FMEA Official Rankings S-O-D?

Aranxa.zh

Registered
Hi, I'm currently developing a process FMEA. I want to know if there is an official chart for the S-O-D criteria. I've found many different criteria, but I don't know which to use in my analysis. Thank you.
 

Tidge

Trusted Information Resource
My preference is to offer a qualitative description with only a small number of ranking categories; 3 may be too few, and with more than 5 everything becomes an argument about shades of color. 10 seems common, but I think that is too many.

The alternative is to offer some sort of quantitative assessment of rankings, based on "powers of x", where typically x = 10. I distrust quantitative ranking scales for FMEA because then the arguments become all about the evidence behind the ratings. I have no personal allergy to either the math or to collecting the data to support quantitative assessments, but I have not regularly encountered people who know how to digest such information. If an organization is "100% behind" quantitative analysis (at a confidence level of at least 80% *wink*) then I could support a larger number of discrete rankings, if only to allow something other than a "powers of 10" scale. It's more typically been my experience that practical studies can show differences between 1-in-256 versus 1-in-1024, as opposed to trying to show an improvement from 1-in-100,000 to 1-in-1,000,000. YMMV.
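To make the "powers of x" idea concrete, here is a minimal sketch of how such a scale could work. The 5-point scale, the `base` choice, and the helper name are all my own assumptions for illustration, not anything taken from a standard; note that with base=4 rather than base=10 the scale can separate rates like 1-in-256 from 1-in-1024:

```python
import math

def occurrence_rating(failure_prob: float, base: float = 10.0, max_rating: int = 5) -> int:
    """Map an observed failure probability onto an ordinal 1..max_rating
    occurrence rating using a 'powers of base' scale: each step down in
    rating corresponds to a base-fold drop in failure rate."""
    if not 0.0 < failure_prob < 1.0:
        raise ValueError("failure_prob must be in (0, 1)")
    # How many powers of `base` the failure rate sits below 1-in-1.
    decades = math.log(1.0 / failure_prob, base)
    # round() guards against float fuzz at exact power-of-base boundaries.
    steps = math.ceil(round(decades, 9))
    return max(1, min(max_rating, max_rating + 1 - steps))

# A base-4 scale distinguishes 1-in-256 (rating 2) from 1-in-1024 (rating 1).
print(occurrence_rating(1 / 256, base=4))   # -> 2
print(occurrence_rating(1 / 1024, base=4))  # -> 1
```

The point of the sketch is exactly the one above: the finer the base, the more discrete rankings you need, and the more evidence you need to defend each boundary.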
 

Bev D

Heretical Statistician
Leader
Super Moderator
What Tidge said!

Remember that FMEA is intended to be an iterative process used during development of either a product design or a manufacturing process. If you are filling out an FMEA form just to check some auditor’s box you are wasting everyone’s time.

My experience confirms what Tidge says about rating categories beyond 1-5. 1-10 only gives the toxic illusion of increased precision, especially with Occurrence and Detection ratings that are not based on experimental assessment but on the biased opinions of the guessing committee. Occurrence ratings are intended to be based on experiments during development that establish the tolerances that will guarantee no defects (or, for low-severity failures, an acceptable defect rate). Detection ratings should be based on MSA testing…
 

Matt's Quality Handle

Involved In Discussions
If you're automotive, check your AIAG core tools manual or your CSRs. I know that AIAG revised theirs to align with VDA since I've been out of the industry, but your customers may expect you to follow similar guidance.
 

Tidge

Trusted Information Resource
Occurrence ratings are intended to be based on experiments during development that establish the tolerances that will guarantee no defects (or, for low-severity failures, an acceptable defect rate). Detection ratings should be based on MSA testing…

I have an intellectual curiosity about whether anyone (in any industry) aligns FMEA specifically with quantitative experiments (or testing). I can imagine that for specific failure modes (or alternatively: potential design/process choices) someone like a "six sigma black belt" might do that sort of work to demonstrate some sort of return on investment... but for every FMEA exercise I've been part of, the pre-controls side of the analysis has never been more than a "gut check" (literally: it is often "what the team can stomach"), and at best there is some sort of effort based in "science" (often, the engineering science of process validation or design verification) that justifies an improved rating... again as a "gut check".

As a practical matter: Any reasonable FMEA effort will have enough discrete elements that it wouldn't be practical to insist on robust experimentation to justify specific ratings.

Some more targeted pieces of advice for @Aranxa.zh (for qualitative rankings):
  • Treat any numeric representations in the SOD scales as ordinal, with no rating having an absolute value.
  • When you settle on your SOD scales, try to make sure that the numeric ratings nominally represent a ranking (i.e. "1 is best", "5 is worst").
  • Try to make it so that "worst" and "best" mean the same thing for each category S/O/D, so that if you do the (common) multiplication of SOD assignments to decide on "action limits" you don't get tripped up.
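As an illustration of that last point, here is a minimal sketch of the common S×O×D multiplication on ordinal 1-5 scales where 1 is best and 5 is worst in every category. The failure-mode example and the action limit of 45 are purely hypothetical choices of mine, not values from any standard:

```python
from dataclasses import dataclass

# Assumed 1-5 ordinal scales, 1 = best and 5 = worst for all three
# categories, plus a hypothetical action limit; both are team choices.
SCALE = range(1, 6)
ACTION_LIMIT = 45  # hypothetical threshold on the S*O*D product

@dataclass
class FailureMode:
    description: str
    severity: int
    occurrence: int
    detection: int

    def __post_init__(self) -> None:
        for name in ("severity", "occurrence", "detection"):
            if getattr(self, name) not in SCALE:
                raise ValueError(f"{name} must be an integer in 1-5")

    @property
    def rpn(self) -> int:
        # Only meaningful if "worst" points the same way on all three scales.
        return self.severity * self.occurrence * self.detection

    @property
    def needs_action(self) -> bool:
        return self.rpn >= ACTION_LIMIT

fm = FailureMode("seal leak", severity=4, occurrence=3, detection=4)
print(fm.rpn, fm.needs_action)  # -> 48 True
```

If one category ran "backwards" (say, 5 = best detection), the same multiplication would silently push risky items below the action limit, which is exactly the trip-up warned about above.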
 

Ed Panek

QA RA Small Med Dev Company
Leader
Super Moderator
We just reviewed this internally - or at least our severity criteria. We use a 1-4 scale:

1) Mild - does not interfere with day-to-day activity - a rash or congested sinuses
2) Moderate - impaired day-to-day activity but no medical treatment is urgently required - e.g. a sprained ankle, bad cold, or flu
3) Serious - requires medical treatment - broken bone, deep cut, dizziness, serious COVID
4) Death - serious risk of death with or without medical treatment - a high-speed car crash, etc.
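For what it's worth, criteria like these transcribe directly into a lookup table. This sketch just restates the 1-4 scale above; the table and function names are my own:

```python
# A direct transcription of the 1-4 severity criteria above into a
# lookup table; category names and examples come from the post.
SEVERITY_SCALE = {
    1: ("Mild", "does not interfere with day-to-day activity (rash, congested sinuses)"),
    2: ("Moderate", "impaired day-to-day activity, no urgent medical treatment (sprained ankle, bad cold/flu)"),
    3: ("Serious", "requires medical treatment (broken bone, deep cut, serious COVID)"),
    4: ("Death", "serious risk of death with or without medical treatment (high-speed car crash)"),
}

def severity_label(rating: int) -> str:
    """Render a rating as the human-readable criterion it stands for."""
    name, description = SEVERITY_SCALE[rating]
    return f"{rating} - {name}: {description}"

print(severity_label(3))
```

Keeping the criteria in one shared table like this is one way to stop each reviewer from carrying a slightly different definition of "Serious" in their head.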
 

Bev D

Heretical Statistician
Leader
Super Moderator
Tidge, my experience is that many companies treat FMEA as just one more form to fill out, with a goal to game the system to have the fewest failure modes that need to be addressed. When that happens, FMEA is nothing more than a waste of time: fill it out, file it, and forget it. But most companies that I've worked for do experiment (some well, some not so much) to establish their spec limits. They just don't use the FMEA to guide the process; they use the FMEA to document their guesses and submit it to their customer if required, or to the filing cabinet…

I do know of a few companies that treat FMEA as an integral part of development - it guides where the teams must focus. The guidance is to focus on high-severity failure modes. Low-severity failure modes are typically not addressed until validation, or until actual high occurrence in use.

Here’s the big question: if you don’t perform experiments to establish specifications during development, how do you do it? Rule of thumb? Biggest ego wins? If that is a company’s approach, then how do they avoid a random mess?

:soap: Pejoratively, my experience is that the engineers who say “that‘s too much work” are lazy, not very good developers, and take the FMEA as an insult to their superior expertise and ego…. Sorry to be so critical, but I’m old and I’m just so tired of the excuses for not doing the right thing in development and then blaming manufacturing for not doing their jobs correctly.

By the way, the FDA has of course been pushing “quality by design,“ which outlines robust experimentation instead of just robust validation (verification). Part of OQ validation is testing at the extremes, which at least validates that the spec limits will result in few to no defects. The FDA and other bodies have also moved away from the ‘detection’ rating and simply require solid MSAs.
 

Matt's Quality Handle

Involved In Discussions
Tidge, my experience is that many companies treat FMEA as just one more form to fill out, with a goal to game the system to have the fewest failure modes that need to be addressed. When that happens, FMEA is nothing more than a waste of time. Fill it out, file it, and forget it.

It's slightly more complex than that. Customers and their SQEs play the same game at PPAP time, which incentivizes the cheating and cheapens the process. An SQE will demand action plans for any RPN > 100 (despite the fact that AIAG specifically advised against it in the 4th, or maybe 3rd, edition). At one point, it was official policy of one of the Big 3 (I don't remember which one) that all RPNs > 100 must have action plans before full PPAP approval was granted.

Of course, there was a launch I was involved with where the customer marked certain characteristics as severity 9 and 10 on the DFMEA. Their policy was to require automatic poka-yoke on everything with a severity of 9 or 10. Basically, that made the feature a safety-significant feature without marking it as such on the print.

When that happens, FMEA is nothing more than a waste of time. Fill it out, file it, and forget it.

Good organizations will faithfully do the work on the quality tools, then dumb them down for the SQE when required. Bad organizations treat these documents like a check in the box.

:soap:
 