We are ISO 17025 accredited, but I thought I'd post here to reach a larger audience, since I am looking for feedback. We are a small government test and research lab, so our sole product is test reports.
As a govt lab we are subject to a lot of scrutiny, and many aspects of our work are monitored and measured, including our QMS. The boss recently asked me to develop a better rating scale for our QMS, so after much thought and some browsing around the Elsmar forums (great resource!) I developed a spreadsheet (screen capture attached). Previously we were simply rated by management as 5 out of 5 if our accreditation was reaffirmed, and zero if not (the latter, thankfully, has never happened!)
Since I am an engineer, the spreadsheet I developed is very simple, so I and other engineers can understand it. It rates a number of factors with weights that I have guesstimated. The first two are the biggest: 50 points for having the accreditation reaffirmed by SCC (Standards Council of Canada), but minus 50 if the SCC auditors find "serious or critical" nonconformities. If you mouse over these cells (not shown in the screen capture), a balloon appears with further information for management. And by the way, we have never had nonconformities of the "serious or critical" type.
After that, the rating can be whittled away by a number of negative factors as listed, with two areas in which we can gain some additional points. My rationale is that if we do our job, it should be possible to get a 5 out of 5. If we start to slip in various areas (being late on delivery, as you know, is the most likely negative outcome), we get docked accordingly, but not too much in any one area. I didn't feel that the fate of the overall rating should hinge on any one factor. The weight factors are based more or less on our own previous performance, so on this scale I wouldn't expect us to get a 5 out of 5 any time soon.
Because I haven't bothered to normalize all the factors, it is technically possible in extreme cases for the 0-to-5 scale to go over or under range, but I have added automatic limits in the spreadsheet logic so this never reaches the final score: negative totals are floored at zero, and superlative performance is capped at 5. In our case we will be occupying the middle ground anyway.
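For anyone who wants to see the arithmetic rather than the spreadsheet, here is a minimal sketch of the scoring logic described above. The factor values, the 50-point base, and the clamping behaviour are from the post; the specific deduction and bonus amounts, the function name, and the assumption that 50 points maps to a 5/5 are illustrative guesses, not the actual spreadsheet weights.

```python
def qms_rating(reaffirmed, serious_nc, deductions, bonuses, full_marks=50):
    """Return a 0-5 rating with the same clamping as the spreadsheet.

    Assumed scaling: reaffirmation alone (50 points) with no deductions
    corresponds to a 5 out of 5.
    """
    score = 0
    if reaffirmed:
        score += 50          # accreditation reaffirmed by SCC
    if serious_nc:
        score -= 50          # "serious or critical" nonconformities found
    score -= sum(deductions)  # e.g. points docked for late deliveries
    score += sum(bonuses)     # e.g. credit for preventive actions

    # Convert to the 0-5 scale and clamp: negatives floor at 0,
    # superlative performance caps at 5.
    rating = score / full_marks * 5
    return max(0.0, min(5.0, rating))

# Reaffirmed, two small deductions, one bonus: (50 - 5 - 3 + 2) / 50 * 5
print(qms_rating(True, False, deductions=[5, 3], bonuses=[2]))  # 4.4
```

The clamp at the end is the "automatic limit" mentioned above, so un-normalized weights can never push the reported rating outside 0 to 5.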
Preventive actions are given positive points because this relates to preventing problems before they occur, which is a good thing.
I didn't feel that any rating should be assigned to things like the number of reports issued; this is not under our control. Likewise revenue generation: although our financial performance is extremely good, we're not here to make a profit but to deliver a public service. All reports are peer-reviewed, so the quality of our reports is handled under a separate banner.
So - any comments, suggestions? And thank you in advance for your time.