Jim,
It seems to me that there are three things that need to be measured when looking at the FMEA work: (1) did the teams follow a disciplined approach (e.g., a standard like the AIAG standard)? (2) Does the product have proper FMEA coverage (was the approach systematic? Were FMEAs completed wherever something was new or changed, or where there was a new application or location)? And (3) how effective were the FMEAs (measured in terms of NPI unreliability spike and MTTF)? Here is how I go about it. First, recognize that RPN alone is a poor measure of FMEA effectiveness: RPN treats Severity, Occurrence, and Detection as equal, and they are not.
Consider the case where S=10 (Hazardous without Warning), O=10 (occurs 10% of the time or more), and D=1 (almost certain to detect) ==> RPN = 100.
Now consider the case where S=1 (no discernible effect), O=10 (occurs 10% of the time or more), and D=10 (cannot detect, or is not checked) ==> RPN = 100.
These two cases surely are not equal: one is a safety hazard and the other no one cares about. However, RPN says they are of equal risk. NO WAY!
Risk is defined as the possibility of something bad happening. So if you are to quantify risk in an FMEA, then you need to quantify two things: how bad is bad, and just how possible it is (probability). Therefore risk is really Severity x Occurrence.
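To make the contrast concrete, here is a minimal sketch comparing the two failure modes above under RPN versus Severity x Occurrence. The function names are my own, not from any standard:

```python
# The two example failure modes: equal RPN, very different real risk.
# RPN = S * O * D weights all three factors equally; risk = S * O does not.

def rpn(s, o, d):
    """Classic Risk Priority Number: Severity x Occurrence x Detection."""
    return s * o * d

def risk(s, o):
    """Risk as Severity x Occurrence only."""
    return s * o

# S=10, O=10, D=1: hazardous without warning, but almost certain to detect.
# S=1, O=10, D=10: no discernible effect, never checked.
print(rpn(10, 10, 1), rpn(1, 10, 10))   # 100 100 -- RPN calls them equal
print(risk(10, 10), risk(1, 10))        # 100 10  -- risk does not
```

Both cases score RPN = 100, yet one carries ten times the risk of the other.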
Severity and Occurrence can only be changed through a design or process change. (There are other debates raging on this but this is where I stand.)
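Since risk here is Severity x Occurrence, the decision of when a design or process change is required can be sketched as a simple zoning function. The red/yellow/green thresholds below are made up for illustration; your own matrix defines the real boundaries:

```python
# Hypothetical criticality-matrix lookup: maps (Severity, Occurrence) to an
# action zone. Thresholds are illustrative only, not from any standard.

def action_zone(severity, occurrence):
    """Return 'red' (change the design/process), 'yellow' (review), or 'green'."""
    if severity >= 9 or (severity >= 5 and occurrence >= 7):
        return "red"       # high-severity cells: must act regardless of RPN
    if severity * occurrence >= 20:
        return "yellow"    # moderate risk: review and justify
    return "green"         # acceptable risk

print(action_zone(10, 10))  # red   -- the safety-hazard case
print(action_zone(1, 10))   # green -- the nuisance case
```

Note that both cases had identical RPNs, yet the matrix sends only one of them to the design team.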
So let's look at risk on a chart. See the attachment. (Sorry, I can't add the attachment for some reason; maybe someone else has a copy they can post. It is a 2D matrix that maps Severity vs. Occurrence and shades cells red, yellow, and green to show where action must be taken.) This is called the criticality matrix, and it defines when the design or process team needs to change the design or process because the risk is too high.

Now that we have defined risk and when to make design or process improvements, it is easier to use the following SAE survey questions to understand whether an FMEA is considered good or not. Here is the list of those questions...
For Design:
1. The FMEA drives Design Improvements as primary objective.
2. The FMEA addresses all high risk Failure Modes, as identified by the FMEA team, with executable Action Plans. All other failure modes are considered. (Enter none where no action is required)
3. The Analysis/Development/Validation (A/D/V) and/or Design Verification Plan and Report (DVP&R) considers the failure modes from the Design FMEA.
4. The FMEA scope includes integration and interface failure modes in both block diagram and analysis.
5. The FMEA considers all major "lessons learned" (such as high warranty, campaigns, etc.) as input to failure mode identification.
6. The FMEA identifies appropriate Special Characteristic candidates, as input to the Special Characteristics selection process.
7. The FMEA is completed during the "window of opportunity" where it could most efficiently impact the product design.
8. The right people participate as part of the FMEA team throughout the analysis, and are adequately trained in the procedure. As appropriate, a facilitator should be utilized.
9. The FMEA document is completely filled out "by the book," including "Action Taken" and new RPN values.
10. Time spent by the FMEA team, as early as possible, is an effective and efficient use of time, with a value-added result. This assumes Recommended Actions are identified as required and the actions are implemented.
This means you will have to audit some of your FMEAs to understand if the teams are following this common disciplined approach.
For coverage you will have to understand what was new or changed in the product or process and then evaluate if those new or changed conditions were systematically covered with FMEA work. This assumes a previous FMEA or FMEAs were done on the previous product or process.
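The coverage check amounts to a set difference: everything new or changed should fall in the scope of some FMEA. A minimal sketch, with hypothetical item names:

```python
# Coverage gap check: changed items on the product/process that no FMEA covers.
# The item names here are hypothetical, for illustration only.

changed_items = {"new pump seal", "relocated sensor", "new supplier bracket"}
fmea_scope = {"new pump seal", "relocated sensor"}  # union of all FMEA scopes

gaps = changed_items - fmea_scope
if gaps:
    print("No FMEA coverage for:", sorted(gaps))
```

In practice the two sets come from your change control records and your FMEA index, respectively.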
Last of all you will have to do a postmortem once the product and/or process has been released. That means you will have to gather your quality data from production launch through the product life. You will have to enter those failures into the FMEA through your CPI projects (or whatever your corrective action process is). This is what is meant by keeping the FMEAs as living documents. Metrics should show your progress over time as your teams become more disciplined and effective with FMEAs.
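For the postmortem trend, one of the simplest effectiveness measures is MTTF from field data, tracked release over release. A sketch with hypothetical numbers:

```python
# Postmortem sketch: estimate MTTF from field quality data so FMEA
# effectiveness can be trended over releases. All numbers are hypothetical.

def mttf(total_operating_hours, failure_count):
    """Mean time to failure = cumulative operating hours / failures observed."""
    return total_operating_hours / failure_count

launch_1 = mttf(500_000, 125)  # 4000 h
launch_2 = mttf(500_000, 80)   # 6250 h -- improvement after FMEA updates
print(launch_1, launch_2)
```

Rising MTTF and a shrinking NPI unreliability spike across launches are the evidence that the FMEA discipline is paying off.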
That's my take on it.
I hope it helps.