My company has 5 suppliers engaged in manufacturing semi-finished mechanical products and 4 coating suppliers.
At the end of production, I would like to rank my suppliers.
Now I have data on actual delivery time compared to the contract (D), quality inspection results (Q), and product cost (C).
But I don't know of a standard, template, or method for scoring these criteria to quantify the results.
The result I want is a score (for example, supplier A gets 80/100 points, supplier B gets 75/100 points...) for easy ranking and comparison.
Please help me!
1) You need to determine which criterion is most important, which is second, and so on.
2) I am against the idea of combining everything into a single score. I know it is common, but it is mathematically incorrect because it treats the rating numbers, which are ordinal values, as if they were cardinal values, by performing add/divide/average operations on them. Ordinal values are just rankings, not magnitudes. You could use green/yellow/red instead of 3/2/1 rankings, for example; it is just more obvious with the color rankings that it is wrong to try to perform add/divide/average operations on the colors.
Combining rankings also causes a problem with lost information:
- Is a company with a D=90 Q=80 C=20 really the same as a company with D=20 Q=80 C=90?
What good is a better cost (C) if they cannot reliably deliver (D)?
- Or the same as a company with D=90 Q=20 C=80?
What good is better on-time delivery (D) and cost (C) if the quality (Q) is garbage?
But in the single score system you are proposing, all three of those scenarios would get the same rating.
See this article on why this type of ranking is problematic: Problems With Risk Priority Numbers (it requires a login, or just hit the Print button to see the full article).
So, instead of a single score value, consider using your prioritization of criteria to combine the three rankings. For example, let's say Quality is most important to us, Delivery next, and Cost third. Our supplier score is now Q-D-C, in that order. Then, use a scale of 0-9 instead of 0-100 for each criterion. Now the three scenarios above look like:
D=90 Q=80 C=20: S=63.3 QDC=892
D=20 Q=80 C=90: S=63.3 QDC=829
D=90 Q=20 C=80: S=63.3 QDC=298
So our QDC rating gives us a much more realistic and informative picture than the simple average S score.
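If you want to automate this, here is a minimal Python sketch of the Q-D-C comparison under the assumptions above (0-100 scores bucketed onto 0-9, Quality prioritized over Delivery over Cost). The supplier names and the to_digit/qdc_key helpers are just illustrative, not part of any standard.

```python
# Minimal sketch of the Q-D-C lexicographic rating described above.
# Assumes each criterion is scored 0-100; to_digit() buckets that onto 0-9.
# Supplier names and values are the hypothetical examples from the text.

def to_digit(score):
    """Map a 0-100 score onto the 0-9 scale (90 -> 9, 80 -> 8, ...)."""
    return min(score // 10, 9)

def qdc_key(supplier):
    """Sort key: Quality first, then Delivery, then Cost."""
    return (to_digit(supplier["Q"]), to_digit(supplier["D"]), to_digit(supplier["C"]))

suppliers = [
    {"name": "X", "D": 90, "Q": 80, "C": 20},
    {"name": "Y", "D": 20, "Q": 80, "C": 90},
    {"name": "Z", "D": 90, "Q": 20, "C": 80},
]

# Highest QDC first: X (892), then Y (829), then Z (298)
for s in sorted(suppliers, key=qdc_key, reverse=True):
    q, d, c = qdc_key(s)
    print(s["name"], f"QDC={q}{d}{c}")
```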
You can use weighted averages to approximate some of this prioritization, although they still lose information. For example, let's use the same prioritization as above (Quality, then Delivery, then Cost) for a weighted average score W = Q*0.5 + D*0.3 + C*0.2 (the weights already sum to 1, so no further division is needed):
D=90 Q=80 C=20: S=63.3 QDC=892 W=71.0
D=20 Q=80 C=90: S=63.3 QDC=829 W=64.0
D=90 Q=20 C=80: S=63.3 QDC=298 W=53.0
So the weighted average W score sorts out these three suppliers better than the S score, according to the prioritization of our criteria.
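For comparison, here is a small sketch that computes both the simple average S and the weighted average W for the same three hypothetical suppliers, using the assumed 0.5/0.3/0.2 weights. All three suppliers tie on S, while W separates them.

```python
# Sketch comparing the simple average S with the weighted average W
# for the same three hypothetical suppliers. The 0.5/0.3/0.2 weights
# encode the assumed Quality > Delivery > Cost prioritization.

suppliers = [
    {"name": "X", "D": 90, "Q": 80, "C": 20},
    {"name": "Y", "D": 20, "Q": 80, "C": 90},
    {"name": "Z", "D": 90, "Q": 20, "C": 80},
]

for s in suppliers:
    simple = (s["D"] + s["Q"] + s["C"]) / 3                 # S: all three tie at 63.3
    weighted = 0.5 * s["Q"] + 0.3 * s["D"] + 0.2 * s["C"]   # W: 71.0, 64.0, 53.0
    print(s["name"], f"S={simple:.1f}", f"W={weighted:.1f}")
```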