Using RPN to Confirm Risk Reduced to an Acceptable Level

Rou|Model

Registered
Hi all,

Just wanting to survey the professional crowd on whether your organizations utilize the RPN (S x O x D) value for nonconformances this way:
1) To quantify initial risk (characterized by a Cause --> Failure Mode --> Effect).
2) Then develop mitigation actions that address the S, O, and D values, and
3) Then project the new RPN value (assuming the actions have been taken).
The purpose of this is to confirm whether the proposed actions will reduce risk to an acceptable level (a designated threshold).
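To make the arithmetic concrete, here is a minimal sketch of the before/after comparison being described (all ratings and the threshold value are hypothetical, just for illustration):

```python
# Hypothetical S/O/D ratings for one nonconformance, before and after
# the proposed mitigation actions (all values made up for illustration).
before = {"S": 8, "O": 6, "D": 5}   # initial ratings from the NC event
after = {"S": 8, "O": 3, "D": 2}    # projected ratings if actions are taken

THRESHOLD = 100  # hypothetical "acceptable risk" cutoff

def rpn(r):
    """RPN = Severity x Occurrence x Detection."""
    return r["S"] * r["O"] * r["D"]

print("initial RPN:  ", rpn(before))              # 240 -> above threshold
print("projected RPN:", rpn(after))               # 48  -> below threshold
print("actions sufficient?", rpn(after) <= THRESHOLD)
```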

This is not part of an FMEA project, but is supposedly meant to integrate quality risk management into the NC process. This risk assessment is being used solely to confirm that proposed risk mitigation actions would sufficiently reduce risk to an acceptable level. I haven't come across RPN being used this way, and I'm eager to hear what others have to share.

Thanks
 

Bev D

Heretical Statistician
Leader
Super Moderator
In my last organization (I’m retired :cool:) we did use severity, the actual frequency of occurrence, and the actual effectiveness of all detection methods to determine the current risk level, and we used actual data from actual testing of proposed solutions to determine their effectiveness. Using actual data (not guesses or ordinal ratings) was quite effective.

We never calculated the RPN as it is a bogus numerical calculation that violates almost every rule of mathematics.
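For contrast with the RPN, here is a minimal sketch of how measured rates, unlike ordinal ratings, can legitimately be combined. Bev D did not give a formula, so the escape-rate calculation and all numbers below are illustrative assumptions, not the method described above:

```python
# Measured data from actual testing (numbers are hypothetical):
occurrence_rate = 0.02          # defects per unit, from production data
detection_effectiveness = 0.90  # fraction of defects caught, from a trial

# Rates and probabilities are ratio-scale, so multiplying them is
# meaningful in a way that multiplying ordinal 1-10 ratings is not:
escape_rate = occurrence_rate * (1 - detection_effectiveness)
print(f"expected escapes per unit: {escape_rate:.4f}")  # 0.0020
```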
 

John Predmore

Trusted Information Resource
To build on what @Bev D said, S, O, D scores are often very subjective, and while the resultant RPN appears quantitative, it is no more objective than the subjective factors.

Actual testing is always a more solid predictor of effectiveness than judgement, even by so-called experts. If the experts were so wise, there wouldn't be as many problems in the first place. Many times, it is possible to demonstrate experimentally, or on a pilot scale, how well the improvement might work before full implementation. If the risk mitigation actions you mention reduce variation, let's say, you should be able to demonstrate improvement in variation without waiting for a system failure to occur before you judge effectiveness. If the risk mitigation actions are error-proofing, a trial of artificially induced mistakes can show that the error-proofing prevents X% of induced errors, and that is an improvement over not having the error-proofing in place. Actual testing also does a better job of finding unintended consequences, that is, new failure modes we did not anticipate.
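As a sketch of such an induced-error trial (the counts are made up, and the interval uses a simple normal approximation rather than anything exact):

```python
import math

# Hypothetical trial: deliberately introduce errors and count how many
# the error-proofing prevents.
induced = 200   # errors deliberately introduced
caught = 188    # errors the error-proofing stopped

p = caught / induced                   # observed prevention rate: 0.94
se = math.sqrt(p * (1 - p) / induced)  # standard error (normal approximation)
lo, hi = p - 1.96 * se, p + 1.96 * se  # rough 95% confidence interval

print(f"prevention rate: {p:.1%} (approx. 95% CI: {lo:.1%} to {hi:.1%})")
```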
 

Tagin

Trusted Information Resource
...if your organizations utilize RPN (S x O x D) value...

Multiplying S x O x D is gibberish: it confuses the use of a number as an ordinal rank with its use as a value amenable to multiplication.

If we have SOD ranking options of 1, 2, or 3, we can replace them with the equally valid rankings red, yellow, or green. Clearly, multiplying "Red x Green x Yellow" is gibberish. It is the same gibberish math with ordinal numbers; it's just that we allow ourselves to be sloppy and confused, and so RPN is a commonly used logical and mathematical falsehood.
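This can be demonstrated directly: any strictly increasing relabeling of an ordinal scale carries exactly the same ordinal information, yet it can reverse the RPN ordering. The two failure modes and the relabeling below are hypothetical:

```python
# Two hypothetical failure modes rated (S, O, D) on an ordinal 1-10 scale.
mode_a = (10, 1, 1)   # catastrophic but rare and easily detected
mode_b = (2, 2, 2)    # mild, occasional, moderately detectable

def rpn(s, o, d):
    return s * o * d

# Original labels: A looks riskier than B.
print(rpn(*mode_a), rpn(*mode_b))    # 10 vs 8   -> A > B

# An equally valid ordinal relabeling (strictly increasing, so it
# preserves every "less than" relationship the scale actually encodes):
relabel = {1: 1, 2: 5, 10: 11}
mode_a2 = tuple(relabel[x] for x in mode_a)
mode_b2 = tuple(relabel[x] for x in mode_b)

# Same ordinal information, opposite conclusion.
print(rpn(*mode_a2), rpn(*mode_b2))  # 11 vs 125 -> B > A
```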

I refer you to this excellent article, which I quote from below:


When we place a series of categories in order in some continuum such as severity, occurrence, or detectability, we may represent this ordering with numbers. Such numbers are rankings. If we assign the value of 1 to the lowest ranked category in the continuum, then 1 is below 2, 2 is below 3, 3 is below 4, and so on. Values with this property of order are called “ordinal-scale data.” The rankings on severity, occurrence, and detectability are intended to be ordinal-scale data.
However, before the operations of addition and subtraction are meaningful, you absolutely and positively must have interval-scale data. Interval-scale data are data that possess both ordering and distance—not only is 1 less than 2, and 2 less than 3, but also the distance from 1 to 2 is exactly the same as the distance from 2 to 3. It is this notion of distance that gives meaning to addition and subtraction. Without the metric imposed by distance, you are operating in Wonderland, where 1 + 2 is equal to whatever the Red Queen wants it to be today.
Before the operations of multiplication and division can be meaningful, you must have ratio-scale data. Ratio-scale data are data that possess ordering, distance, and an absolute zero point. A classic example of data that are interval-scale but not ratio-scale is temperatures in degrees Fahrenheit or Celsius. Since both of these scales use an arbitrary zero point, multiplication and division do not make sense. However, addition and subtraction do result in meaningful numbers. For example, in either system, the following is a true statement: 60° + 10° = 70°. But in either system the following equation is nonsense: 60°/80° = 0.75.
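The temperature example from the quote is easy to check numerically; this small sketch just verifies the quoted claim that the "ratio" depends on the (arbitrary) choice of unit, while differences survive the conversion:

```python
def f_to_c(f):
    return (f - 32) * 5 / 9

print(60 / 80)                  # 0.75 -- the Fahrenheit "ratio"
print(f_to_c(60) / f_to_c(80))  # ~0.583 -- same temperatures in Celsius

# Differences, by contrast, survive the unit change (up to the 5/9 factor):
print(70 - 60)                  # 10 F
print(f_to_c(70) - f_to_c(60))  # ~5.56 C, i.e. exactly 10 * 5/9
```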
 

Zero_yield

"You can observe a lot by just watching."
Just wanting to survey the professional crowd on whether your organizations utilize the RPN (S x O x D) value for nonconformances this way:
1) To quantify initial risk (characterized by a Cause --> Failure Mode --> Effect).
2) Then develop mitigation actions that address the S, O, and D values, and
3) Then project the new RPN value (assuming the actions have been taken).
The purpose of this is to confirm whether the proposed actions will reduce risk to an acceptable level (a designated threshold).

This is exactly the way I've seen RPN used, which concerns me given the amount of high quality, reasonable criticism of that approach I'm seeing in this thread.

I will say the outcomes I've seen from this method are decent: usually it ends up pointing out the highest-severity, least-effective human inspection and getting it replaced with a better control.

However, it is almost trivial to game the RPN table to make it say that the issue you want to pursue is the most critical one and that the issue you don't care about doesn't matter. It works best as a way of leading discussions and exploring/explaining which issues to tackle next.
 

Tidge

Trusted Information Resource
I would not use an RPN ("Risk Priority Number"; I wish we could just drop the 'R', as it confuses discussions) for QMS nonconformances:
  • I find the implication that QMS nonconformances are "tolerable" distasteful, especially when based on an arbitrary multiplicative number.
  • I don't want to argue with anyone over the values assigned to S, O, and D... it isn't as if there is an authority to appeal to!
  • The RPN approach in FMEA implies pre-controls and post-controls assessments of ratings, and I don't want to argue with people about the ratings on either side of the analysis.
If you must categorize QMS nonconformances, I recommend simply having three levels (e.g. minor/moderate/major), making it clear what belongs in each category, and taking actions based on the category.
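A minimal sketch of that kind of category-driven handling (the category names follow the example above, but the actions are hypothetical placeholders, not a recommendation):

```python
# Hypothetical mapping from NC category to required handling.
NC_ACTIONS = {
    "minor": "document and trend; no formal investigation required",
    "moderate": "investigate root cause; correction required",
    "major": "contain product, investigate root cause, open a CAPA",
}

def handle_nonconformance(category: str) -> str:
    return NC_ACTIONS[category]  # unknown categories raise KeyError loudly

print(handle_nonconformance("moderate"))
```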

I don't feel obligated to make any comments on RPN methodology as there are plenty of threads elsewhere.
 

Bev D

Heretical Statistician
Leader
Super Moderator
I didn’t interpret the original post as applying only to “QMS nonconformances,” which *I* interpret to mean procedural nonconformances. (Perhaps you have a different intent?) I interpreted the OP as referring to product nonconformances (defects and failures). Could the OP clarify?
 

Tidge

Trusted Information Resource
Could be, but if the question is about product nonconformances, I'd suggest updating the ratings in the product/process FMEA (if necessary, or adding a line) and considering new controls, as appropriate. I'm not sure why another FMEA would come into the picture to do an assessment.
 

Zero_yield

"You can observe a lot by just watching."
I missed in the original post that this was about justifying nonconformances. I've mostly heard about this methodology being used in a change control process. For nonconformances, I'd say severity, occurrence, and detection are all very important to defining the true impact of a deviation.

I have heard about this methodology being used for the "Case for Quality / Make CAPA Cool" initiative. My understanding was that it was about finding ways to minimize documentation of low-risk, low-impact deviations (not to justify mitigation of the impact of an existing deviation).
 

Rou|Model

Registered
Thanks everyone for your insights!


Uhmm, actually I am not sure the original post explained well what the RPN value (or, alternatively, some other sort of risk level indicator) is being used for.

Our NCs are already categorized (minor, major, or critical) based on a set of predetermined criteria (e.g., little to no potential to cause A, B, or C; has potential to cause A, B, or C; or already resulted or will clearly result in A, B, or C). So the risk assessment here is not used to categorize the NC (nor to determine the rigor of investigation or whether a CAPA is needed).

The risk assessment is used to lay out the current S, O, and D ratings based on the NC event, then to identify the mitigation actions that address each of these ratings (where possible), and then to re-assess the ratings assuming the actions have been taken. More or less to "confirm" that the proposed actions should be effective enough to reduce the risk level.
(Also, we are given a threshold value: if you get down to this value, the risk is acceptable. I know this is a problem.)

In the NCs, there is no comparison or prioritization of risks. The risk assessment is just for getting that risk level indicator value. We are evaluating the risk of a single event, so it is typically just one number (or indicator value). There can be more, but we are not really comparing between numbers; we are comparing the before and after numbers (for each risk).

So, practically, what I see in most NCs is that the new RPN (or risk level indicator) is always lower than the initial RPN (... why wouldn't it be, if we are using it to justify the action plans, right?).

We do have an NC Effectiveness Verification step after the mitigation actions have been implemented, to confirm that they are in place, are effective, and did not introduce new risk. The re-evaluation of the RPN (or risk level) surely would be more appropriate at this step.

So, long story short: I have been having a hard time understanding the value of this "risk assessment" in the NC process (or how it is being used for NCs).

I am also not sure whether this inclusion of a risk assessment is some misplaced attempt to integrate quality risk management into the NC process.

Further thoughts?

Thanks
 