SPC in Insurance Back Office Operations

The Loper

Hi -
Our insurance industry back-office operation processes customer-submitted requests. We currently inspect critical attributes of these processed transactions to detect and correct errors, and report findings in terms of proportion accurate. Depending on the risk associated with a particular transaction type, we may inspect up to 100 percent of the work.
As we improve processes, we would like to implement statistical tools that would allow us to reduce inspection.
For the sake of this discussion, let's say that an individual process is expected to be more than 99.5% accurate.
1. What method should we use to determine if the process is meeting that specification?
2. What triggers should we use to reduce sampling?
3. Using process control charts (p-charts is what I have been using for analysis purposes), how do I determine the sample size and frequency?

Thank you in advance for your help. I am more than happy to read and study, so I would appreciate references if possible.
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
1. What method should we use to determine if the process is meeting that specification?

SSP: Make the SPC chart(s) for the process. If the SPC chart is stable (no signals) then compare the average and control limits to your goal to determine if you are routinely achieving the goal, or if the process is incapable of achieving the goal.
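That first step can be sketched in a few lines of Python. A minimal p-chart example; the subgroup counts below are made up for illustration and are not data from this thread:

```python
# Minimal p-chart: each subgroup is (items inspected, items found in error).
# These counts are invented for illustration.
subgroups = [(150, 2), (140, 1), (160, 3), (155, 2), (145, 1), (150, 2)]

total_n = sum(n for n, _ in subgroups)
total_d = sum(d for _, d in subgroups)
p_bar = total_d / total_n  # center line: overall proportion in error

results = []
for n, d in subgroups:
    sigma = (p_bar * (1 - p_bar) / n) ** 0.5
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)
    results.append(lcl <= d / n <= ucl)  # True means no 3-sigma signal

stable = all(results)  # if stable, compare p_bar against the 99.5% goal
print(f"p-bar = {p_bar:.4f}, stable = {stable}")
```

If the chart is stable, `1 - p_bar` is the accuracy the process routinely delivers, and that is what gets compared to the 99.5 percent goal.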

2. What triggers should we use to reduce sampling?
SSP: If you have 25 points that are stable, and you are achieving the goal, I'd support reduction of sampling.

3. Using process control charts (p-charts is what I have been using for analysis purposes), how do I determine the sample size and frequency?

SSP: First, we do want enough items in the sample set that I am not plotting 100% for 1 of 1 and 0% for 0 of 1. Generally I go with at least 12 items sampled per interval. Then it comes down to what makes sense for the process and your review capability. For example, if you do a monthly management review of the charts and there are more than 12 things sampled per month, then plot monthly. If you do batch processing of work, use the batch size.
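The interaction between subgroup size and a very low defect rate can be checked numerically. This sketch uses the 0.5 percent figure discussed in this thread; the rule of thumb for a positive lower control limit is a general p-chart result (it follows from requiring p - 3*sqrt(p(1-p)/n) > 0), not something stated in the thread:

```python
# How subgroup size n interacts with a low defect rate p on a p-chart.
p = 0.005  # the 0.5 percent defect level discussed above

# Expected defects per subgroup; when this is near zero, most points plot 0%.
expected = {n: n * p for n in (12, 100, 500, 2000)}
for n, e in expected.items():
    print(f"n={n}: expect {e:.2f} defects per subgroup")

# The lower control limit is above zero only when n > 9 * (1 - p) / p,
# i.e. only then can the chart also signal improvement, not just decline.
n_min = 9 * (1 - p) / p
print(f"smallest n with a positive LCL: {n_min:.0f}")
```

At 0.5 percent defective, a subgroup of 12 would see a defect only about once every 17 intervals, which is why low-defect processes tend to need much larger subgroups (or batch-sized subgroups) than the bare minimum.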
 
T

The Loper

Thank you, Mr. Prevette.
Would the following information change the Sample Size answer?

  • Mean daily batch size is greater than 1200
  • Process control charts would be reviewed daily
  • Customer specifies less than 0.5 percent defective
  • Current state process yields approximately 1.0 percent defective
  • Cost of failure to meet specification is significant
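Plugging the figures above into the usual p-chart limits gives a quick feel for what a daily chart would look like; a sketch, assuming a p-bar of exactly 1.0 percent and a constant batch of 1200:

```python
# p-chart limits for one daily batch, using the figures quoted above.
p_bar = 0.010  # current state: about 1.0 percent defective
n = 1200       # mean daily batch size

sigma = (p_bar * (1 - p_bar) / n) ** 0.5
ucl = p_bar + 3 * sigma
lcl = max(0.0, p_bar - 3 * sigma)
print(f"LCL={lcl:.4f}  p-bar={p_bar:.4f}  UCL={ucl:.4f}")
```

With n around 1200 the lower limit is above zero, so the chart can signal improvement as well as degradation. Note also that the 0.5 percent specification falls inside these limits: a process stable at 1.0 percent will routinely miss the spec without producing any chart signal, which is the capability question rather than the stability question.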
 

Steve Prevette

Deming Disciple
Leader
Super Moderator
Sounds like the first step is to determine if the process is stable and making 1.0 percent defective. How do you know the process makes 1.0 percent defective? How many items were sampled to determine that, and can you plot that past data on a p-chart of groups of 100 to 200 to determine if it is stable, or if there are times with special causes at work?
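Re-grouping existing pass/fail records into subgroups of 100 to 200 might look like the following sketch, with simulated history standing in for the real past data:

```python
# Re-group historical pass/fail results into subgroups of 150 to check
# past stability. Simulated data stands in for real inspection records.
import random

random.seed(1)
# 0 = good, 1 = defective; simulate a history running near 1.0% defective.
history = [1 if random.random() < 0.01 else 0 for _ in range(3000)]

size = 150
points = [sum(history[i:i + size]) / size
          for i in range(0, len(history), size)]

p_bar = sum(history) / len(history)
sigma = (p_bar * (1 - p_bar) / size) ** 0.5
ucl = p_bar + 3 * sigma
signals = [p for p in points if p > ucl]  # subgroups above the upper limit
print(f"{len(points)} points, p-bar={p_bar:.4f}, {len(signals)} signal(s)")
```

Any subgroup above the upper limit marks a period worth investigating for a special cause before drawing conclusions about the overall rate.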

If there are significant changes in the rate, then you do need to look at what happened during the periods of high defect rate.

If the process is stable at 1.0 percent and since that is above goal, I'd start doing data collection about WHAT attribute caused the failure and see if some attributes are more common than others, and see what can be done to improve the process.
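That attribute tally can be as simple as a Pareto count. A sketch; the attribute names below are invented for illustration:

```python
# Tally which attribute failed on each defective transaction, then rank
# them Pareto-style. Attribute names are hypothetical examples.
from collections import Counter

defects = ["wrong policy number", "missing signature", "wrong policy number",
           "bad effective date", "wrong policy number", "missing signature"]

pareto = Counter(defects).most_common()  # sorted most frequent first
for attribute, count in pareto:
    print(attribute, count)
```

The top one or two attributes usually account for most of the failures, which tells you where process improvement effort pays off first.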

Once you get the process under 0.5 percent, we can talk about verifying that it is staying there. But first it sounds like you need to fix either the special causes (if they exist) or work on the common causes if the process is stable. And make use of data you've already paid for.
 