Can we standardise our reporting of Customer Complaints to a % of sales?

Thread starter: peterd

Hi all,

Our new chief exec has asked whether we can standardise our reporting of customer complaints as a percentage of sales carried out. This is so we can compare between our different sites.

At present each site measures the number of complaints received on a weekly basis. However, the mistake/problem may have occurred at any time in the past, whether last week or last year.

Comparing it to the sales in the week it was received would not reflect the level of non-conformance either at the time of the problem or at the time of reporting.

This may be a 101 question, but how do people deal with this type of reporting? I fully believe it's ideal to measure the complaints created in a week, but how do we ensure all of them have been reported? And how do we deal with the delay between the error occurring and its being reported?

Hope this makes some sense. :read:
 
peterd said:
Hope this makes some sense.
Hullo Peter,

Yes, I think it makes sense. This is a common subject of discussion in many companies.

There is of course nothing preventing you from reporting customer complaints as a % of sales carried out, but I would advise against using it to compare between different sites. Surely the different sites have different conditions, making a comparison dicey?

I agree with you that it's better to just measure the complaints created in a week or whatever time period is suitable. That way each site can see its own trends and act accordingly.

How to ensure all customer complaints have been reported? I'm tempted to say that you can't. You will have to create as good a reporting process as possible... one that is easy to use.

How do we deal with the delay in reporting from the time of the error occurring? Since a certain delay is unavoidable, you will have to accept it and deal with individual complaints as you receive them.

/Claes
 
peterd said:
Hi all,

Our new chief exec has asked whether we can standardise our reporting of customer complaints as a percentage of sales carried out. This is so we can compare between our different sites.

At present each site measures the number of complaints received on a weekly basis. However, the mistake/problem may have occurred at any time in the past, whether last week or last year.

Comparing it to the sales in the week it was received would not reflect the level of non-conformance either at the time of the problem or at the time of reporting.

This may be a 101 question, but how do people deal with this type of reporting? I fully believe it's ideal to measure the complaints created in a week, but how do we ensure all of them have been reported? And how do we deal with the delay between the error occurring and its being reported?

Hope this makes some sense. :read:

Welcome to the Cove. . .

I have done what you are looking at in several ways in the past . . .

By applying the returns to the week/month shipped and charting the data over time, you will get a picture of the outgoing quality level at each time period (assuming that only rejects or errors are returned). You will have a living chart. The main benefit of this type of analysis is that you will be able to correlate events occurring in the company with the return rate during a given time period, and see the effect of changes. Percent of sales was what I chose to plot. I think you would be better to look at it on a monthly basis rather than weekly: less noise. This is also a financial indicator.
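A minimal sketch of that attribution, assuming each complaint record carries the month the goods originally shipped (all field names and figures here are hypothetical, just to show the mechanics):

```python
from collections import defaultdict

# Hypothetical records: each complaint is tagged with the month the
# goods shipped; sales values are per invoice month.
complaints = [
    {"ship_month": "2024-01", "credit": 500.0},
    {"ship_month": "2024-01", "credit": 120.0},
    {"ship_month": "2024-02", "credit": 900.0},
]
sales = {"2024-01": 80_000.0, "2024-02": 75_000.0}

def complaint_rate_by_ship_month(complaints, sales):
    """Attribute each complaint to the month the goods shipped and
    express the count as a percentage of that month's sales value."""
    counts = defaultdict(int)
    for c in complaints:
        counts[c["ship_month"]] += 1
    return {m: 100.0 * counts[m] / sales[m] for m in sorted(sales)}

print(complaint_rate_by_ship_month(complaints, sales))
```

Late-arriving complaints simply update the month they belong to, which is what makes the chart "living": past points move as stragglers come in.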

I programmed an Access database to do all the grunt work. A spreadsheet application will work, but it is more time consuming and requires a higher level of maintenance and coordination.

Tracking the number of complaints received per week/month/etc. will give you some indicators, but I think you would be better served by categorizing the types of returns and utilizing problem-solving techniques to address them (after you Pareto-ize them). Then set up your application to also look at individual categories of complaints. You can monitor the individual categories as well as the aggregate: the 2,000' view and the 100' view. The charting will, as I stated earlier, indicate the effectiveness of actions taken. You should use a timeline on the chart to indicate key events. You can also use a Paynter Chart to track the key issues, i.e. the progress of correction and also recurrence.
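The Pareto-ize step could look something like this: rank the complaint categories by frequency with a running cumulative percentage, so the vital few stand out (the category names are invented for illustration):

```python
from collections import Counter

# Hypothetical complaint categories logged against each return.
returns = ["wrong length", "surface damage", "wrong length", "late delivery",
           "wrong length", "surface damage", "paperwork error"]

def pareto(categories):
    """Rank complaint categories by frequency, adding a running
    cumulative % so the top contributors are obvious."""
    counts = Counter(categories).most_common()
    total = sum(n for _, n in counts)
    cumulative, rows = 0, []
    for cat, n in counts:
        cumulative += n
        rows.append((cat, n, round(100.0 * cumulative / total, 1)))
    return rows

for cat, n, cum in pareto(returns):
    print(f"{cat:18} {n:3} {cum:6.1f}%")
```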

You have a three-fold issue here: tracking, analyzing and correcting... and of course further tracking. One caution: you should get historic data, say at least a year's worth, to get you started. Your chart will continue on as long as you need it. You can start eliminating history when you have reached a point where returns from a given past time period are unlikely. I'd look at a 13-month time base at the get-go.

Hope this helps a bit.
 
Claes/Taz,

Thanks for the prompt responses

Claes - As a company we have a 'centres of excellence' approach where each site concentrates on a particular product. We are a metal stockist; my site specialises in tube and bar products, the other sites in sheet and plate. We work as a group, so the processes are pretty similar and the customers and contracts cover the group.

This should mean that comparing contract review or inspection errors across the sites would give some reflection of the management in place. I think this is what our new boss is after.

I agree that sticking with the received-date report is better, but how do we compare my site, doing 80,000 sales a year, with the other sites doing 10-20,000/yr? Also, if sales vary or grow, this has an impact on the % of errors that occur.

Taz - If you delayed the report and reported the complaints against when they occurred, you would get a more accurate reflection of improvement/degradation of the system. The downside is that during the delay you may miss trends that are developing. In our business some problems will be highlighted at the customer's goods-in, but many will take months to come to light as the material goes to stores before use. This would mean either that looking at the week shipped would inherently miss some of the problems you have created, giving a misleading indication, or that the report would be too delayed to be meaningful.

We already use the received complaints for trending and to trigger corrective actions etc.

Would a twin-track approach be appropriate? Report the received complaints on a weekly basis and use this as the trend indicator, especially by area of complaint. Then use a second, historic measure during management review that reflects when the problem actually occurred?
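The twin-track idea amounts to bucketing the same complaint log twice: once by the week received (live trend indicator) and once by the week the error occurred (historic view for management review). A minimal sketch, with hypothetical field names and dates:

```python
from collections import defaultdict

# Hypothetical complaint log: each entry records the week it was
# received and the (often earlier) week the underlying error occurred.
log = [
    {"received_week": "2024-W10", "occurred_week": "2024-W10"},
    {"received_week": "2024-W10", "occurred_week": "2024-W02"},
    {"received_week": "2024-W11", "occurred_week": "2024-W10"},
]

def bucket(log, key):
    """Count complaints per week, keyed either on 'received_week'
    (weekly trend) or 'occurred_week' (historic measure)."""
    counts = defaultdict(int)
    for entry in log:
        counts[entry[key]] += 1
    return dict(counts)

print(bucket(log, "received_week"))  # weekly trend indicator
print(bucket(log, "occurred_week"))  # historic view for management review
```

Because both views are derived from one log, there is no extra data entry; the historic view just keeps shifting as late complaints arrive.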

:)
 
The Taz! said:
By applying the returns to the week/month shipped and charting the data over time, you will get a picture of the outgoing quality level at each time period (assuming that only rejects or errors are returned). You will have a living chart. The main benefit of this type of analysis is that you will be able to correlate events occurring in the company with the return rate during a given time period, and see the effect of changes.
Ah yes, that would be a way to handle the delay in reporting, wouldn't it? Good one, Taz...:agree1:

Peterd said:
Would a twin-track approach be appropriate? Report the received complaints on a weekly basis and use this as the trend indicator, especially by area of complaint. Then use a second, historic measure during management review that reflects when the problem actually occurred?
I can't see why not. One does not exclude the other. By all means, use both.


/Claes
 
peterd said:
Would a twin-track approach be appropriate? Report the received complaints on a weekly basis and use this as the trend indicator, especially by area of complaint. Then use a second, historic measure during management review that reflects when the problem actually occurred? :)

Absolutely... maybe I wasn't clear. ACT on the issues immediately, of course, but track, trend and correlate over time.

This approach allowed me to add $100,000 / Month to the bottom line. I got canned for doing it though. . . you figure. . .
 
Here's our system...the executive summary of it.

First off, we are a multi-site steel mill company, with locations throughout the eastern US and in "central" Canada. Our Sales departments, however, are located at two different sites. All of our Canadian Customers deal with our centralized Canadian Sales office, and all of our American Customers deal with our centralized American Sales office.

Complaints can be either Sales-related or Mill-related. We cannot fix Sales complaints and they cannot fix Mill complaints (they can, however, issue credits).

Complaints are broken down into 4 types:

  • Sales
    • Invoice - pricing, miscommunications, paperwork
    • Service-Sales - wrong quantity/grade/length ordered, wrong Customer location entered
  • Mill
    • Service-Mill - wrong quantity shipped, shipped to wrong location
    • Quality - bent bars, out of spec dimensionally, wrong chemistry

When Sales receives a complaint, it is entered into the system and directed to the appropriate person. Each Mill has a Gateway who is responsible for assigning mill-related complaints to the suitable personnel for resolution.

Upon resolution of a mill-complaint, Gateway contacts Sales to inform them of closure and hopefully the Customer is notified.

So, complaints are resolved immediately (or as timely as possible).

The analysis of complaints is done on a monthly and yearly basis. It is tracked not only by the number of complaints received, but by tons shipped.

Example: 1 complaint / 5,000 tons shipped is worse than 1 complaint / 10,000 shipped.
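That per-tons comparison is just a normalisation to a common base; a sketch of the arithmetic (the 10,000-ton base and the figures are illustrative only):

```python
def complaints_per_10k_tons(n_complaints, tons_shipped):
    """Normalise the complaint count to a common base of 10,000 tons
    so mills with different throughput can be compared."""
    return 10_000.0 * n_complaints / tons_shipped

# 1 complaint in 5,000 tons is twice the rate of 1 in 10,000 tons.
print(complaints_per_10k_tons(1, 5_000))   # 2.0
print(complaints_per_10k_tons(1, 10_000))  # 1.0
```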

We cannot, however, compare our mill complaint numbers to those of other mills, as each mill has a different product mix. Some mills make just rebar; others make 5 products within a small range. Some, like us, have over 200 products that we can make, all in the smaller sizes... those products that no one else can or will make.

What we can compare are the Sales complaint numbers, especially as, for each of the mills, we have noticed that 2/3 of all complaints are Sales-based. The more a mill ships, the higher the number of Sales complaints, but the complaints-per-tons-shipped rate appears to be steady between the mills, and this has allowed us to recognize that more attention needs to be paid to our Customer service side.

On the mill side, we analyze our complaints in two ways. Classifications... were there more bent bars on a particular product? Were there more rust complaints with a particular carrier? That sort of thing. We also look at the credits issued per complaint and per ton, and we track the validity of the credit. We have noticed that Sales oftentimes issues the credit for a mill-based complaint without giving us time to analyze it, and sometimes we have found the complaint to be not valid... wasted money.

A weekly review of Customer Complaints, for us, would be overkill. First off (and thankfully), there are some weeks where there are no complaints... Sales or Mill based. But secondly, it doesn't allow us to see much. A monthly review is much better for us. It allows us to see the details of the month's performance without making us feel like we're banging our heads against the wall.
 
What do you all think about complaints per gross profit dollar?

Jaime
 
jaimezepeda said:
What do you all think about complaints per gross profit dollar? Jaime

Apples and Oranges. . . one complaint could be for $100 and another for $100,000. . .

I could see Return $'s or Credits / Gross Profit $.
 
I used to track complaints and complaint dollars as a percent of sales dollars for each month. Additionally, I did Paretos on complaint causes and product, and another by customer (some customers complain more than others, or complain to reduce their annual inventories before the counts).
 