Getting started with statistically based process improvement


Scott Thor

Here's an article I wrote to explain the basics about statistically based process improvements. Feel free to give me any comments you may have.

Scott

Attachments

• Getting Started With Statistically Based Process Improvement, executive version.pdf
37.3 KB

Craig H.

Scott,

Nice work! Thanks for sharing!

Jim Wynne

Leader
Admin
Here's an article I wrote to explain the basics about statistically based process improvements. Feel free to give me any comments you may have.

Scott

Thanks for sharing. A few observations:
• In Step 2 you encourage determining the current state of the process through use of control charts, which is sound advice, but you also suggest that "A basic condition that must be satisfied before any improvements can be made is a stable and predictable process." There's a bit of redundancy there, because in this context "stable" and "predictable" are synonymous. Also, the problem might be instability, so in curing that issue, the problem might be solved. In other words, you can make improvements to an unstable process, contrary to your assertion that stability must be achieved as a prerequisite.
• The charts in figures 1 and 2 show signs of instability other than points beyond the control limits; you might want to point out the fact that there is more than one test for statistical control.
• You say, "Any points outside of the limits represent what are known as special causes..." Not necessarily; when using the normal curve as a model (which might not be advisable) we can predict that some points will naturally fall outside the ±3-sigma limits. It might be good to point this out, and advise against tampering.
• You say, "Typically most improvement projects identify that the process is unstable which then begins the task of identifying why that is?" I'm not at all sure that the statement is true. In many cases, analysis will reveal a predictable but incapable process. Also, the sentence should end with a period and not a question mark.
• You say, "If after getting you process under control the upper and/or lower control limits are outside of the specification limits you have a process that is near the brink of chaos." Clearly, the condition you describe is undesirable in most cases, but "near the brink of chaos" seems a bit strong. Although the instances are rare, there are times when a certain level of nonconforming output is acceptable and economically necessary.
• In addressing "Sustaining the Improvement," you say, "Without the proper controls all processes will tend to work back to where they began before the project." This is certainly true, but the object of the project should be installation and monitoring of controls. That is to say, the object should be to identify the process controls which, if maintained, will result in conforming output, and then monitor and measure those, rather than focusing on part/output measurement. The entire idea of process improvement should be making the process predictable by controlling the process variables that have been proven to contribute to conforming output.
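The point about there being more than one test for statistical control can be sketched in code. Below is a minimal Python illustration, with invented data, limits, and function names, of two common detection rules: a point beyond the 3-sigma control limits, and a run of eight consecutive points on one side of the center line.

```python
# Sketch of two common control-chart tests, assuming the center line and
# 3-sigma limits are already known. Data and limits are made up for
# illustration only.

def beyond_limits(points, lcl, ucl):
    """Test 1: indices of any points outside the control limits."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

def run_of_eight(points, center):
    """A common runs test: flag indices where 8 or more consecutive
    points sit on the same side of the center line."""
    signals, run, side = [], 0, 0
    for i, x in enumerate(points):
        s = 1 if x > center else (-1 if x < center else 0)
        if s != 0 and s == side:
            run += 1
        else:
            side, run = s, 1 if s != 0 else 0
        if run >= 8:
            signals.append(i)
    return signals

data = [10.2, 9.8, 10.1, 13.5, 10.0, 10.3, 10.4, 10.2, 10.5, 10.6,
        10.1, 10.2, 10.3, 10.4]
print(beyond_limits(data, lcl=9.0, ucl=11.0))  # -> [3]
print(run_of_eight(data, center=10.0))         # -> [12, 13]
```

A chart can therefore signal instability even when every point lies inside the limits, which is exactly why more than one test matters.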

Steve Prevette

Deming Disciple
Leader
Super Moderator
My suggestions would be:

Yes, I can improve a process that is not stable and predictable. I look at the special causes associated with the statistical signals and deal with them (corrective action when in the bad direction, reinforcing action when in the good direction). Actually, I'd much rather have this situation as compared to a stable and predictable process, as these special causes are usually easy to deal with, and fit with the paradigm of most managers that I must "do something" with the specific results.

It may be worth pointing out that changing a stable process is HARD!

On Figure 1, where did the control limits come from? They certainly do not fit the current data. Are they from some older data? If so, it is worth showing the older data (which should have been in control in order for the limits to be valid) and then showing the changing condition related to the trends (and yes, there is more than one) on Figure 1. Examples should be rigorous, good examples that you would want others to follow.

In Figure 2, I'd show the example with at least 25 points. Dr. Shewhart advised against declaring a process stable with fewer than 25 points, so showing your example chart with 25 points sets a good precedent. So many folks want to throw out old data prematurely.
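As an illustration of computing limits from a 25-point baseline, here is a minimal Python sketch of the standard XmR (individuals) chart calculation, center line ± 2.66 × average moving range, where 2.66 = 3/d2 with d2 = 1.128 for moving ranges of size 2. The baseline data are invented.

```python
# Sketch: XmR (individuals) chart limits from a 25-point baseline,
# using the standard constant 2.66 (= 3 / d2, d2 = 1.128 for moving
# ranges of size 2). The data are invented for illustration.

def xmr_limits(baseline):
    n = len(baseline)
    center = sum(baseline) / n
    # Moving ranges: absolute differences between successive points.
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    ucl = center + 2.66 * mr_bar
    lcl = center - 2.66 * mr_bar
    return center, lcl, ucl

baseline = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 50.1, 49.7, 50.0, 50.4,
            50.2, 49.9, 50.1, 50.0, 49.8, 50.3, 50.2, 50.0, 49.9, 50.1,
            50.0, 50.2, 49.8, 50.1, 50.0]
center, lcl, ucl = xmr_limits(baseline)
print(round(center, 3), round(lcl, 3), round(ucl, 3))
```

With fewer baseline points the moving-range estimate is noisy, which is one practical reason behind the 25-point rule of thumb.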

On the statement "Many projects make significant improvements only to fall back into the [previous] state". I would disagree with that. What I see more often is that people declare success on a "lucky" result, not a statistically significant trend on the control chart in the improving direction. Generally, once the control chart shows a significant trend, the improvement has stuck. At least, that is my empirical experience, and it is supported by theory.

Also, I'd suggest being careful with the "gut feeling" discussion. All data are flawed; Dr. Deming stated that there is no true value of any measurement. The best state is where you can reconcile your gut feelings with the data. I'd much rather have a doctor operate on me when their gut feeling and the data are in sync. How many times has that little red flag gone off in your head that something is amiss, but you ignore it due to the numbers, and come to find out . . .

Good luck, and Happy New Year

Steve Prevette

Deming Disciple
Leader
Super Moderator
You say, "Any points outside of the limits represent what are known as special causes..." Not necessarily; when using the normal curve as a model (which might not be advisable) we can predict that some points will naturally fall outside the ±3-sigma limits. It might be good to point this out, and advise against tampering.

I disagree with this disagreement. We use the 3-sigma limits as the operational definition of a signal. Just as we evacuate a building when the fire alarm sounds, and then check back to see whether it was a false alarm if we find no indication of a fire, we should still take immediate action on a point outside the control limits. Yes, it may be a false alarm, but we make a good-faith effort to take action. I would not consider taking action on a 3-sigma outlier to be "tampering".
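The fire-alarm analogy can be put in rough numbers. Assuming a normal, stable process (the model under discussion), the chance of a single point falling outside ±3 sigma is about 0.0027, so a 25-point chart would be expected to produce only about 0.07 false alarms; a short Python check of that arithmetic:

```python
# Rough arithmetic behind the 3-sigma "fire alarm": under a normal,
# stable process, a point beyond +/- 3 sigma is rare, so an
# out-of-limits point usually warrants investigation even though
# false alarms are possible.
import math

def normal_tail_beyond(z):
    """P(|Z| > z) for a standard normal variable."""
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

p_false = normal_tail_beyond(3.0)   # about 0.0027 per point
expected_in_25 = 25 * p_false       # about 0.07 in a 25-point chart
print(round(p_false, 4), round(expected_in_25, 3))
```

In other words, the alarm is cheap to check and rarely false, which supports treating every out-of-limits point as a signal.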

Jim Wynne

Leader
Admin
I disagree with this disagreement. We use the 3-sigma limits as the operational definition of a signal. Just as we evacuate a building when the fire alarm sounds, and then check back to see whether it was a false alarm if we find no indication of a fire, we should still take immediate action on a point outside the control limits. Yes, it may be a false alarm, but we make a good-faith effort to take action. I would not consider taking action on a 3-sigma outlier to be "tampering".

If you define "taking action" as looking to see what (if anything) happened, then I agree. I didn't mean to suggest that the potential signal should just be ignored. The statement in the article said that "Any points outside of the limits represent what are known as special causes..." And I said, "not necessarily," which is correct.

Steve Prevette

Deming Disciple
Leader
Super Moderator
If you define "taking action" as looking to see what (if anything) happened, then I agree. I didn't mean to suggest that the potential signal should just be ignored. The statement in the article said that "Any points outside of the limits represent what are known as special causes..." And I said, "not necessarily," which is correct.

Ah, okay. Yes, perhaps something like "theory and experience tells us that we likely will be able to find a special cause for any point outside of the limits".

Jim Wynne

Leader
Admin
Ah, okay. Yes, perhaps something like "theory and experience tells us that we likely will be able to find a special cause for any point outside of the limits".

BradM

Leader
Admin
Very nice summary.

Jim/ Steve: thanks for the insightful comments.

All I might suggest is a list of books/references for each section. Not only would that give some credence to your suggestions, but will give the casual reader an additional source of information should they be so inclined.


nitejava

oh my, what have I done?

We didn't do a Root Cause. Was that a mistake?

In the article, Preventive Action is described as addressing a problem, but I understood that a problem calls for a Corrective Action, not a PA.
In our situation, while preparing our system, one area was using uncontrolled samples and personal handwritten work instructions; these were first written up as CAs by an auditor from a sister site.
I came along and decided that the best way to control the different methods these operators were using to build was to marry them together, so that any change would force the revision of everything at the same time.
These operators each have no less than 10 years of tribal knowledge. The potential problem I saw with the way things were being done was the incredible number of errors that someone new would inevitably make.
My recommendations were immediately accepted, and a special PA team and funds were set aside for the project. It took three months to implement, and the operators are pleased with their new process method. BUT, I never root caused anything. I just knew there were going to be a lot of problems when retirements started to come up. (Between you and me, I did become privy to a lot of past and current problems during the PA process that the PA has fixed and will continue to fix.)
Does anyone have any idea how the auditor might view the lack of information we had in determining potential problems before implementing action?
Maybe this wasn't a PA; maybe it was developing an effective training tool?

I've been beating my chest about this PA project; I would hate to have the External Auditor write a CAR on it.
