KPIs or Metrics to Measure a New Complaint Handling Process

jlhaney63

Registered
We are working on developing and implementing a new medical device complaint handling process to assure that complaints are collected at the manufacturing sites, evaluated by them for the potential need to submit an MDR, and that the information is then forwarded to a centralized team to make the final MDR decision and submit the MDR if needed.

I'm trying to define some KPIs or metrics that would help assure that the manufacturing sites continue to process complaints as required, and in a timely fashion. For example, if a staff member who was responsible for reviewing and forwarding complaints to the central office left, is that still happening, or is there a sudden drop-off in submissions?

Some thoughts I've had follow below. Can you suggest others that would help monitor the process?
  • Total number of Open complaints at month-end. Monitor average - maybe trigger points if we fall below a "normal" lower limit.
  • Break down by state of complaint record. (Received, Waiting Return, Under Investigation, etc.)
  • Total number of Closed complaints at month-end. Again, based on average volume, is the level of closed records "normal"?
  • Average days to closure for records closed in the month.
  • Number of days from Aware Date to date the complaint was opened.
  • Number of days from Date Opened to 1st, 2nd, and 3rd attempts to have product returned for evaluation.
  • Number of days from date product received to date the investigation was completed. Are investigators, R&D, Engineering being timely with their evaluations?
  • Number of complaints closed without investigation – were rationales to do so valid?
  • Number of complaints mis-categorized upon initial intake – meaning how many were thought to be a minor issue (no harm) but further investigation revealed a more serious situation and a potential need for an MDR.
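As a minimal sketch of how a few of these measures could be computed from exported complaint records (field names like "aware", "opened", and "state" are illustrative, not from any particular complaint system):

```python
from collections import Counter
from datetime import date

# Hypothetical complaint records; field names are illustrative only.
complaints = [
    {"id": 1, "aware": date(2024, 1, 3), "opened": date(2024, 1, 5),
     "closed": date(2024, 1, 20), "state": "Closed"},
    {"id": 2, "aware": date(2024, 1, 10), "opened": date(2024, 1, 11),
     "closed": None, "state": "Under Investigation"},
    {"id": 3, "aware": date(2024, 1, 15), "opened": date(2024, 1, 15),
     "closed": None, "state": "Waiting Return"},
]

# Total open complaints at period-end
open_count = sum(1 for c in complaints if c["closed"] is None)

# Average days to closure, for records closed in the period
closure_days = [(c["closed"] - c["opened"]).days for c in complaints if c["closed"]]
avg_days_to_closure = sum(closure_days) / len(closure_days) if closure_days else None

# Days from Aware Date to Date Opened (intake lag), per record
intake_lag = {c["id"]: (c["opened"] - c["aware"]).days for c in complaints}

# Breakdown by state of complaint record
by_state = Counter(c["state"] for c in complaints)
```

The same per-record durations can feed trigger points (e.g., flag any intake lag over a set number of days) rather than only monthly averages.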
 

John Predmore

Trusted Information Resource
One approach which might generate fresh insights - consider a "value stream map" of how a complaint moves through your process from generation to resolution. Decide what aspects of the sequence you are interested in and how you might measure and monitor those in meaningful ways. Lean practitioners think about systems using two types of measures - speed by which items move through the sequence, and buffers (inventory) where items get held up. With complaints, do you always want FIFO or LIFO, or how is that best handled? Are there reject/rework loops for complaints? - you brought up mis-classification of majors as minors, for example.

Depending on how far upstream you want to go, you might also track aspects of the sequence like time between sale and problem noticed, or from date product is put into service to when complaint is received. Do your results vary significantly by the source of complaint, call center versus website?

I often find thinking in analogies is helpful for creative thinking, and a 10,000-foot view might lead you to consider aspects of the system which are important but less obvious.
 

RoxaneB

Change Agent and Data Storyteller
Super Moderator
My concern with metrics that are "number of..." is that they give little to no context. What if the volume of product being shipped increases? Odds are the "number of complaints" will increase. To be truly meaningful, I like to show both the number and the %...that's the story behind the process metrics.

Your list is rather long, in my opinion, and many of the items are more like control items, not KPIs. For example, the "# of days..." measures speak more to process flow, responsibility, and maybe even resources. But are they truly KEY performance indicators?

Ask yourself...rather, ask the team...what are you trying to achieve with this new complaint process? Why is it new? Why was the old way considered obsolete? From that discussion, you may discover some true KPIs, while everything else is a driver towards those KEY metrics.
 

Ron Rompen

Trusted Information Resource
I agree with Roxane, particularly about showing metrics as a percentage of ........, rather than just a number. Makes it much more relevant to what is actually happening.
I also agree that your list is far too long - pick out the 3 most important (to YOU and your company) metrics, and monitor those. After 6 months, revisit the list, and see if it needs to change. Your metrics don't need to be static - they should reflect what is really happening, and what you identify as important.
 

jlhaney63

Registered
Thanks for the input - great suggestions! I, too, was worried about the number of measurements (paralysis by analysis) and will work to pare them down from the team's first brainstorming session.
 

Jen Kirley

Quality and Auditing Expert
Leader
Admin
What matters to your customer? What would effectiveness and efficiency look like in customer complaint handling? What does failure look like?

Customer satisfaction can be maddeningly hard to measure, and it is possibly harder, yet even more important, during complaint handling, because the customer is already dissatisfied. This is your chance to keep them, or lose them. Focus on what is needed to do that.

Then have a follow up contact with the customer to learn if it worked and how they feel about the interaction and problem solving process. Use that feedback to understand opportunities for improvement and make them.
 

Big_Wheelz

Registered
(Quoting jlhaney63's original post above.)
I measure # complaints / # shipments, by site, region, and globally, so it's a ratio (or you can convert it to a %). What is interesting is when you treat this statistically: a complaint = a defect, and the entire process of taking the order through billing can be viewed as a "manufacturing process". We finished the year at 0.7%, which is a DPMO of 7,000, or roughly 4 sigma. Some of our complaints can be resolved quickly; others require the customer to send a sample of the product back for analysis. I set a target of a 21-working-day average for all complaints AND no complaints >45 days. From previous data, I saw we could achieve the 21-day target if we eliminated the very long (forgotten...) complaints.
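The complaints-per-shipments ratio and the conversion to DPMO and a sigma level can be sketched as follows (using the conventional 1.5-sigma shift from the Six Sigma literature; the function name is mine, not from any standard library):

```python
from statistics import NormalDist


def sigma_level(complaints: int, shipments: int, shift: float = 1.5):
    """Return (dpmo, sigma level), treating each complaint as a defect.

    dpmo = defects per million opportunities; sigma level uses the
    conventional 1.5-sigma long-term shift from Six Sigma practice.
    """
    dpmo = complaints / shipments * 1_000_000
    sigma = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift
    return dpmo, sigma


# A 0.7% complaint rate, as in the example above
dpmo, sigma = sigma_level(complaints=7, shipments=1000)
```

For 0.7%, this gives a DPMO of 7,000 and a sigma level just under 4, consistent with the "~4 sigma" figure quoted above.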
 

a.arjunvasan

Registered
There are two major metrics to consider, namely Wait Time and Cycle Time. Wait Time, or Age, is how long an action has been pending; if a limit is not met, this might lead to a non-compliance. Cycle Time reflects how productive the team is and how well the team is utilized for the task (including mapping and getting things done where there are dependencies). There are a lot of metrics that will need to be implemented in complaint handling; however, I may not be able to post them in public forums. I hope the information I shared is useful to you to some extent; think along these lines.
 

RoxaneB

Change Agent and Data Storyteller
Super Moderator
(Quoting a.arjunvasan's post above.)

I disagree with both of these metrics. Maybe they have not been explained properly, but:

- Wait time | What do you mean "might lead to a non-compliance"? There is already a customer complaint...there is nothing to "lead" to.
- Cycle time | Data that is too subject to human intervention. Receive complaint, immediately open, hit resolve button, complaint closed in under 10 seconds. An over-exaggeration, perhaps, of what could occur, but the cycle time is not a KEY indicator when it comes to the complaint process.

Again, it goes back to what is the intent of the complaint process? Is it to process as many complaints as possible in as short a time window as possible? ... OR ... is it to help identify trends with customer experiences in order to increase the overall satisfaction with the product/service? It cannot be both. One (I'd offer the 2nd option) is the WHY while the 1st is more the WHAT or HOW.

Using the complaint process to offer insight into items such as the nature of, and cause behind, the complaints is where the meaningful data lies.
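That focus on the nature and cause of complaints could be sketched as a simple Pareto ranking of cause codes (the cause labels below are hypothetical examples, not from the thread):

```python
from collections import Counter

# Hypothetical cause codes pulled from closed complaint investigations
causes = ["packaging", "labeling", "packaging", "device failure",
          "packaging", "shipping damage", "labeling"]

counts = Counter(causes)
total = sum(counts.values())

# Pareto view: causes ranked by frequency, with cumulative % of all complaints
cumulative = 0.0
pareto = []
for cause, n in counts.most_common():
    cumulative += n / total * 100
    pareto.append((cause, n, round(cumulative, 1)))
```

Reviewing the top of the ranking each period points improvement effort at the vital few causes, rather than at raw complaint counts.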
 

Tagin

Trusted Information Resource
Restructuring your sentences as below (which is how my mind reads what you wrote), it seems like you are attempting to compensate for weaknesses in the process being created with an abundance of metrics. Many of those metrics would be unnecessary if the process were more robust.

We are working on developing and implementing a new medical device complaint handling process to assure that complaints are:
  • collected at the manufacturing sites,
  • evaluated by them for the potential need to submit an MDR, and then
  • forwarding the information to a centralized team to
  • make the final MDR decision and to
  • submit the MDR if needed.

I'm trying to define some KPIs or metrics that would help:
  • assure that the manufacturing sites continue to process complaints as required, and
  • in a timely fashion.
Is it not possible, in this day and age, to create a central system that all the sites enter their information into directly, and that the centralized team can then take action on once complaints are marked as 'evaluated'? This also gives the central team immediate visibility into newly-entered complaints, and gives someone the ability to monitor and report on the time spent in each phase (entry > evaluate, evaluate > make decision, make decision > submit)... if ever needed for diagnostic purposes. However, monitoring overall time from entry > make decision should suffice as a higher-level metric of responsiveness, to meet the goal of "in a timely fashion".

As for "For example, if a staff member left who was responsible for reviewing and forwarding the complaint to the central office, is that still happening or is there a sudden drop-off in submissions": this is a symptom of a weakness in the process that metrics won't solve. You could, as a crude example, require that the sites report a weekly summary of the # of complaints entered that week. If no complaints and no summary are reported for a site, it raises a flag to the central team that the site is unresponsive and they need to check whether that person has left, is sick, etc. It also acts as a sanity check of the number the site thinks it submitted vs. the number the central team received.

Build those controls into the process, rather than trying to control-by-measurement a weak process after the fact. :) Since this is a new process, I think there is an opportunity here for your group to think big, and to think about how the process design itself can create confidence that activities are being performed as intended.
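The weekly-summary reconciliation check described above could look something like this (site names and counts are hypothetical; the point is the control logic, not the data):

```python
# Weekly "heartbeat" reconciliation: each site reports how many complaints
# it entered; the central system counts what it actually received; silent
# or mismatched sites are flagged for follow-up.

sites = ["Site A", "Site B", "Site C"]

# Hypothetical weekly figures; Site C sent no summary at all this week
site_reported = {"Site A": 4, "Site B": 0}
central_received = {"Site A": 4, "Site B": 0, "Site C": 2}

flags = []
for site in sites:
    if site not in site_reported:
        flags.append((site, "no weekly summary received"))
    elif site_reported[site] != central_received.get(site, 0):
        flags.append((site, "reported count does not match central count"))
```

Note that a report of zero complaints is still a valid heartbeat; only a missing summary or a count mismatch raises a flag.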

As for KPIs (rather than metrics), I agree with @RoxaneB: if you really want KPIs, then what you really need to look at is the purpose and value of this process.
 