What should the FMEA Occurrence Rating be based upon?



Sub: Occurrence rating
Should the occurrence rating be based on the number of times a failure occurs, or on the number of non-conforming parts? For example, if a particular undersize problem occurred twice while running a batch of 10,000 components, and on segregation we found 2,000 components non-conforming, should we take the failure rate as 2/10,000 (giving an occurrence rating of 5), or as 2,000/10,000 (giving an occurrence rating of 10)? Likewise, an undersize problem could occur 10 times while running 10,000 pieces, but segregation might find only 100 non-conforming pieces. In that case, should the failure rate be taken as 10/10,000 (occurrence rating of 7) or as 100/10,000 (occurrence rating of 10)?
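For concreteness, the two candidate calculations in the first scenario can be sketched as follows. This is a minimal illustration using the numbers above; the rating thresholds themselves come from the FMEA manual's occurrence table and are not reproduced here:

```python
# The two candidate failure rates from the first scenario in the question.
total_run = 10_000
failure_events = 2      # times the undersize problem occurred during the run
bad_parts = 2_000       # non-conforming parts found on segregation

event_rate = failure_events / total_run   # "2 in 10,000"
part_rate = bad_parts / total_run         # "1 in 5"

print(f"event-based rate: {event_rate}")
print(f"part-based rate:  {part_rate}")
```

The two rates differ by three orders of magnitude, which is exactly why the choice of basis matters so much for the rating.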

Atul Khandekar



Is it 2 out of 10,000 or 2 out of some sample size? In any case, I think you should take the maximum number that you found (in this case, after segregation) to decide the occurrence rating. You would get an occurrence rating of 10.

Occurrence must always be seen in conjunction with detection. Segregation or 100% inspection would be an inefficient and expensive method of detection. If you are doing sampling or SPC, you should be able to get a close enough estimate (PPM) of non-conforming product.

Randy Benedict


Occurrence happens before detection; therefore detection can't be assumed when rating occurrence, can it?

The detection may be the best in the world, but it only detects what has already occurred.

To reduce an occurrence rating, don't we need to implement improvement on the front end as opposed to the rear end (detection)? :)


What do you want to solve?

Andrews, I think you are mixing data (if I understand you properly). Here is the data as I see it:

10000 components
2000 nonconforming parts

2 process deviations (which lead to the 2000 n/c parts)
? possible process deviations

If you use the 10000 components, then your comparison must also be in components. If you are going to compare the number of process deviations, then you must have something to compare it with, for example the number of possible deviations. Let me explain.

Let’s say you produced 10 million parts. You had one process deviation that resulted in 100% of the parts being rejected. Would you say you had 1 deviation in 10 million parts? Probably not, it just would not make sense.

If your FMEA is looking for the number of possible deviations, then use the 2 versus whatever the number of runs is.
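A tiny sketch of the unit-matching point above (the run count here is an assumption for illustration):

```python
# Compare like with like: parts against parts, deviations against runs.
parts_made, parts_bad = 10_000, 2_000
runs, deviations = 1, 2           # assumed: both deviations happened in one run

part_rate = parts_bad / parts_made       # fraction of parts non-conforming
deviations_per_run = deviations / runs   # process deviations per production run

print(f"part-based rate: {part_rate}")
print(f"deviations per run: {deviations_per_run}")
```

Mixing the two (deviations over parts) produces a number with no physical meaning, which is the point of the 10-million-part example above.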

One last thing: normally, FMEAs are developed prior to manufacture. The historical data can then be used to test the validity of the FMEA.


Quality Assurance Supervisor

I would like to know details about
ISO 9000, 9001, 9002, 9003, 9004 and 9005.
Does that depend on the type of industry? If yes, let me know which standards are used for which types of industries.




"Occurrence is the likelihood that a specific cause/mechanism of failure will occur."

The fact that you had two failures during the run and 2,000 non-conforming parts after sorting only indicates that you chose the wrong occurrence number.

You chose a particular Occ. number based on something: past history, the detection process in place, or a "gut feeling". This was found to be incorrect during the sorting process.
The information you gathered during sorting is not used to change the Occ. number; it should be used to improve the process.


I feel the thread is going in a different direction, maybe because I did not project my problem properly. Let me explain my problem with a hypothetical example.

Example: Let us assume that we work from 8:30 a.m. to 5:00 p.m. (8 hours with a half-hour lunch break), and that job 'X' was running from 8:30 a.m. on 21.03.2002 after setting approval. We detected a hole-undersize problem during the hourly inspection (the DETECTION method) conducted at 11:30 a.m., whereas during the 10:30 inspection we did not have this failure. Since we keep the items that ran between the last hourly check and the present check separately, we were able to quarantine the quantity produced between the 10:30 and 11:30 inspections. So the maximum number of non-conforming products is the quantity run between the 10:30 and 11:30 checks (say 1,000 pieces).

After correcting the problem we ran the machine until 2:30 p.m. without a problem. But at the 2:30 hourly inspection we detected the SAME problem. We quarantined the quantity produced between the 1:30 and 2:30 inspections; let us say we again got 1,000 defective pieces. We corrected the problem and ran the machine until 9:00 the next day without this or any other problem. In total, we got the same failure (hole undersize) twice during a production run of 10,000 pieces, and the total number of defectives is 1,000 + 1,000 = 2,000.

Based on this case, what occurrence rating should I give, and why?



The Potential Failure Mode and Effects Analysis (FMEA Third Edition) Reference Manual gives pretty straightforward evaluation criteria on page 49.

Based on the numbers you give, you show a 1 in 5 occurrence (number of failed pieces in the total run). This would be classed as "very high" on the chart and result in a ranking of 10. You can also determine the ranking based on Ppk values, which is explained on page 71.
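As a hedged sketch of the Ppk route: under a normality assumption with a one-sided specification limit, the expected fraction non-conforming is roughly Φ(−3·Ppk). This is the standard textbook approximation, not the manual's own table:

```python
from statistics import NormalDist

def expected_fraction_nc(ppk: float) -> float:
    """Approximate one-sided fraction non-conforming for a given Ppk,
    assuming a normally distributed, stable process (textbook approximation)."""
    return NormalDist().cdf(-3 * ppk)

for ppk in (0.55, 1.00, 1.33, 1.67):
    print(f"Ppk {ppk:.2f} -> about {expected_fraction_nc(ppk) * 1e6:,.1f} PPM")
```

The estimated PPM can then be looked up against the manual's occurrence table to pick a ranking.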

Now to the question.

If you determine that a failure occurs every 3 hours, why not put a preventive maintenance program into effect that changes the (assumption here) drill every 2 hours? This would slow down the process for the change, but you would gain on the number of parts made without a nonconformance.

One puzzle, though. You say you can run from 2:30 until 9:00 the next day with no problem. I would be very interested in learning the off-shift secret of how they keep running with no problem. Either they are not measuring the same way to find failures, or they have developed a way to run the equipment that my day shift needs to learn. There is a definite pattern shown on days. I would spend a whole lot of time finding out why there is such a big difference.


Al Dyer

D. Scott,

Very astute. Why does it seem that if there are 3 shifts, there are 3 different "companies" and opinions? Like you say, there is a need to root out the problem between the shifts.

In doing this I would also look at the total fallout rate reported by the customer. If you have no customer complaints from any shift, maybe the first shift is being overly critical?

This situation just cries out for a mean-time-between-failures study; have all shifts adhere to the resulting data.

Any traceability data?

Internally, if you hold an hour's worth of production for sorting and find 2 bad pieces, what is your PPM based on? The hour's worth of production, or the 2 bad pieces?

Look at the real world when calculating PPM. If the Big 3 find one bad part and put an entire shipment of 10,000 pieces on hold, they will count 10,000 pieces against your PPM.

Is this right? Not in my opinion.

Don't bash yourself too much; it is your company, and you can do as you wish while staying within the guidelines you have chosen. If you hold an hourly batch, inspect it and report only the actual defects to your internal PPM.
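The two PPM conventions being contrasted can be sketched like this (the shipment size and period total are assumed numbers, not from the thread):

```python
# Counting only confirmed defects vs. counting the whole held shipment.
shipped_total = 100_000   # parts shipped in the period (assumed)
held_shipment = 10_000    # shipment placed on hold over one bad part (assumed)
actual_bad = 1            # defects actually confirmed

internal_ppm = actual_bad / shipped_total * 1_000_000
customer_style_ppm = held_shipment / shipped_total * 1_000_000

print(f"internal PPM: {internal_ppm:.0f}")
print(f"customer-style PPM: {customer_style_ppm:.0f}")
```

The same single defect yields wildly different PPM figures depending on which convention is used, which is why the reporting basis should be agreed up front.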
Hypothetical from left field:

We find 1 bad part laying on a table after production has been shut down for the day.

The part is not traceable.

We produce 100,000 parts per day.

Do we hold all production from that day and do a sort? I think not.

Since it is not traceable, was it even from today, or could it be from maybe 3 weeks ago? Possibly

Maybe call the customer and see how everything is doing before going into kamikaze mode? Sure

Think on a broad scope and control that which you can? Yes

Just some thoughts to invigorate the thought process!!!:bigwave: