I'm working in a company with many sites and lots of products. QMR processes are in place for all GxP-compliant products, sites and divisions, as well as on senior management level. Problem is, these processes have emerged from different practices and standards and are not harmonized. The result is some inconsistency in the metrics and measures and inefficiency in the process (double-work and some misaligned timing across levels).
I'm currently trying to harmonize this a bit, though I can only use persuasion and borrowed power to do this.
My idea is to let the lower levels continue to review detailed quality metrics and measures on divisional/site level, and only "review the reviews" on senior level. I.e., we would have to
1. align the overall framework,
2. define a set of minimum contents for all levels, and
3. mainly use the executive summaries of the divisional QMRs (plus a few company-wide quality KPIs, e.g., for processes that are aligned across the whole company) as the inputs for the top-level QMR.
Yes, that's part of the reason; acquisitions as well as divestments are frequent (several per year), and QMR was not managed globally. The resulting new or merged business units instead developed their processes according to individual needs. This approach is fine from a system point of view: IMO it is more effective to have local, specific review processes wrapped around key processes and sub-systems than a single standard top-down process that is detached from operations. But from the top decision makers' point of view it is not the best option, because they only get to see a patchwork of topics rather than the whole picture.
I have two or three ideas how to improve the situation, so I'm interested in sharing experiences.
Your challenge is common in this type of scenario. Establishing and flowing down enterprise-wide metrics has to be limited to a few, mostly regulatory-driven indicators, such as reportable complaints, product recalls, Notified Body (NB) reports, etc.
I agree with you that at a site level, they should have the latitude to establish their own indicators.
But one thing you can do, when you have so many sites and so much product portfolio diversity, is to assess which sites have very similar product lines and system maturity, and have corporate impose common metrics across them. That way you can benchmark performance while identifying best practices and improving low performers.
From my experience, since the top makes the decisions, it is important that you communicate in their language (and this includes their desired metrics). This also promotes standardization amongst the various sites, as they end up calculating and reporting metrics in a similar fashion. However, these metrics need to be defined in order to ensure consistency. For example, "Truck Loading Time" could be calculated from the time the truck enters the building or from the time the truck is in the designated loading spot... if this is not defined by the top, each site could still be reporting results that cannot be truly compared.
That said, the top is not always right nor do they always have insight into the local nuances. I've experienced this, as well, and we reported what we needed to AND kept the local, more applicable, metrics actioned by local leadership.
Doing this allowed for some meaningful discussions at process and leadership meetings and eventually led to changes made to the metrics desired by the top.
We also created tree diagrams that showed how our existing processes and metrics influenced the higher-up processes and metrics. For example, if the top wants to see fewer customer complaints, the local level could use "nonconformances detected in-house", "complaints over $x", "complaints/x product shipped", etc.
Agree - always use language and measures that are understandable and in the context of the discussion. Also, aggregating metrics bottom-up may produce nonsense: e.g., a maker of apples and software should not compare (or even average) complaint rates across the portfolio.
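To make the aggregation pitfall concrete, here is a minimal sketch with hypothetical figures (the product lines and numbers are illustrative, not from any real portfolio). Averaging per-line complaint rates and pooling complaints over all units give wildly different "portfolio" numbers, and neither describes either business:

```python
# Hypothetical figures: two dissimilar product lines.
lines = {
    "apples":   {"complaints": 40, "units_shipped": 1_000_000},
    "software": {"complaints": 12, "units_shipped": 300},
}

# Naive average of the two per-line complaint rates:
rates = {k: v["complaints"] / v["units_shipped"] for k, v in lines.items()}
naive_avg = sum(rates.values()) / len(rates)   # dominated by the tiny software line

# Pooled rate over all units shipped:
pooled = (sum(v["complaints"] for v in lines.values())
          / sum(v["units_shipped"] for v in lines.values()))  # dominated by apples volume

print(f"naive average: {naive_avg:.4%}, pooled: {pooled:.6%}")
# The two "portfolio" rates differ by orders of magnitude; better to
# report each product line separately than to roll them up.
```

Whichever aggregation you pick, one line's denominator swamps the other's signal, which is exactly why the post recommends against cross-portfolio averaging.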
So the approach I'm using is to keep the detailed metrics as backup or reference and only cite conclusions and exceptions (such as remarkable trends and signals) in the top-level QMR. It is also an option to calculate composite scores or measures, e.g. for process compliance and customer satisfaction; in that case, they need to be well introduced and used repeatedly before they become useful. Cost is a different story: it is usually immediately understandable to top management. Alas, many Quality employees may be unwilling or unable to map quality to top-line or bottom-line results, so good luck with that...
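A composite score of the kind mentioned above could be sketched as follows; the sub-metrics and weights here are purely hypothetical, and the real point is that the definition must be agreed once and then kept stable, or the score is noise:

```python
# Sketch of a composite "process compliance" score (0-100).
# Sub-metric names and weights are illustrative assumptions.
WEIGHTS = {            # agreed up front and kept stable; must sum to 1.0
    "capa_on_time_rate":   0.4,
    "deviation_closure":   0.3,
    "training_completion": 0.3,
}

def composite_score(metrics: dict) -> float:
    """Combine sub-metrics (each already normalized to [0, 1]) into 0-100."""
    return 100 * sum(WEIGHTS[name] * value for name, value in metrics.items())

site_a = {"capa_on_time_rate": 0.95,
          "deviation_closure": 0.80,
          "training_completion": 0.99}
print(round(composite_score(site_a), 1))  # one comparable number per site
```

The benefit is a single number that top management can track over time; the cost is that it only becomes meaningful after it has been reported the same way for several review cycles, as the post notes.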
Also, when aggregating or relating stats, I'd propose using units familiar to C-suite managers. Generally, output-oriented references such as brands, product families and markets are better candidates than input-oriented ones such as sites, supply chains or even batch counts. I'd also advise including success stories, such as effective CAPAs, and not just dwelling on problems. The top managers should leave the meeting thinking that the Quality dept. is part of the solution rather than the problem. Just my 5 cents.