George Box and the 1.5-Sigma Shift

Jim Wynne

Super Moderator
#1
This was prompted by a post by Miner, and in particular a paper he linked to on the 1.5-sigma shift by George Box and Alberto Luceño. It should be noted that the paper in question is dated 1999, and there is no immediate way of knowing whether or not the views expressed therein represent the current views of the authors. Much has been written and argued about the 1.5-sigma shift in the interim.

My issues with the paper have nothing to do with the mathematics, but rather the fact that the authors base their conclusions on a strawman representation of SPC and a basic misrepresentation of the Second Law of Thermodynamics used in support of the general concept of process drift.

A note about referenced page numbers: the file consists of seven unnumbered pages, so my page counts begin with the first page of the file. By that count, the article itself begins on page three, and so on.

We get off to a bad start in the third paragraph of the paper (page 3), wherein the authors uncritically accept a typically questionable claim about the financial rewards of Six Sigma (hereafter "SS"). They say:
It is not surprising that [SS] principles rigorously applied...have produced impressive results--for example, the 1997 annual report of Allied Signal attributes a savings of about 1.5 billion dollars to their Six Sigma initiative.
Quite simply, there is no support in evidence for the claim, other than corporate hyperbole. We have no way of knowing (nor do the authors, I suspect) anything about the veracity of how much, if anything, Allied Signal saved by implementing SS.

The strawman that the authors prop up basically consists of conflating theoretical concepts with common observation and practice. The basis for their contentions, and indeed the claims of SS devotees, is that SPC begins with "...the assumption that the process distribution is normal and has a fixed mean value that is on target..." (Page 3; emphasis is in the original in all quotes unless otherwise stated)

The authors go on to say, starting on page 3:
It is refreshing to see, at last, this acknowledgment that, even when best efforts are made using standard quality control methods, the process mean can be expected to drift. In the evolution of ideas about quality this new provision for process drift comes much closer to reality.
Well, no, it doesn't reflect reality. Those of us who've actually gotten our hands dirty on a production floor understand perfectly well that assumptions aren't based on a perpetually-fixed, unwavering mean. We know that processes drift, which is one of the prime reasons for doing SPC in the first place.
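As an aside, the point is easy to demonstrate with a toy simulation of my own (this is my illustration, not anything from the paper; the drift rate and run rule are arbitrary choices). A process whose mean drifts slowly upward is watched by a plain individuals chart with 3-sigma limits plus a simple run rule, and the chart flags the drift long before anything like a 1.5-sigma shift has accumulated:

```python
import random

SIGMA = 1.0
DRIFT_PER_PART = 0.02  # hypothetical slow drift: 0.02 sigma per part

def simulate(n=200, rng=random):
    """One production run: target 0, with a slowly drifting mean."""
    return [rng.gauss(i * DRIFT_PER_PART, SIGMA) for i in range(n)]

def first_signal(data, center=0.0, sigma=SIGMA):
    """Index of the first out-of-control signal on an individuals chart.

    Rule 1: a single point beyond the 3-sigma limits.
    Rule 2 (a common run rule): 8 consecutive points on one side
    of the center line.
    """
    run = 0
    for i, x in enumerate(data):
        if abs(x - center) > 3 * sigma:
            return i
        run = run + 1 if x > center else 0  # consecutive points above center
        if run >= 8:
            return i
    return None

# Average, over many runs, how far the mean has actually drifted
# by the time the chart first signals.
random.seed(1)
shifts = []
for _ in range(200):
    idx = first_signal(simulate())
    if idx is not None:
        shifts.append(idx * DRIFT_PER_PART)  # true shift, in sigma units

avg_shift = sum(shifts) / len(shifts)
print(f"Chart signals, on average, at a true shift of {avg_shift:.2f} sigma")
```

The exact numbers depend on the drift rate and the rules chosen; the point is only that drift is detectable, not invisible, under conventional SPC.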

The authors further muddy the waters by alleging that standard SPC practice assumes that a drifting mean is always the result of special causes (from page 4):
The ideas of Shewhart and Deming are based on the approximation that in the absence of "special causes" the process is in a state of control with fixed mean.
The authors say, on page 3:
Conventional wisdom would say that [process] drift must occur from special causes and that these ought to be tracked down and eliminated. But in practice such drift, although detectable, may be impossible to assign or to eliminate economically.
(My emphasis)
In other words, we might be able to see a process drifting, but there may be nothing we can do about it in a practical sense. Is it not logical to ask, at this point, how SS can help us solve a problem that we can't eliminate economically? Answer: it can't. The whole paper assumes some functionally small band of drift that's immune to "conventional wisdom" and can only be dealt with by use of what the authors refer to (and support mathematically) as "feedback adjustment." The problem is that the authors, and SS practitioners in general, have never shown that such adjustment is likely to be practical or helpful. The argument for feedback adjustment also assumes that operators judiciously using standard SPC won't be familiar enough with their processes to understand their inherent variation and tendencies toward instability, and to adjust when (and only when) adjustment is called for.
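To be concrete about what's being proposed (the math, again, is not my quarrel): feedback adjustment amounts to nudging the process back toward target by some fraction of each observed deviation. Here's a toy sketch of my own, an integral-style controller applied to a deterministically drifting process; the drift rate and gain are arbitrary, and this is not the authors' exact scheme:

```python
import random
import statistics

SIGMA = 1.0
DRIFT = 0.02  # hypothetical drift in the mean, per part
GAIN = 0.2    # fraction of each observed deviation removed by the controller

def run(adjust, n=500, seed=42):
    """Simulate a drifting process; optionally apply feedback adjustment."""
    rng = random.Random(seed)
    correction = 0.0  # cumulative adjustment applied to the process
    out = []
    for i in range(n):
        # observed value = drift + accumulated correction + noise
        y = i * DRIFT + correction + rng.gauss(0.0, SIGMA)
        out.append(y)
        if adjust:
            correction -= GAIN * y  # integral action: nudge back toward target 0
    return out

unadjusted = run(adjust=False)
adjusted = run(adjust=True)
print("mean deviation, unadjusted:", round(statistics.fmean(unadjusted), 2))
print("mean deviation, adjusted:  ", round(statistics.fmean(adjusted), 2))
```

Mechanically it works on paper: the adjusted output hovers near target while the unadjusted one walks away. My objection is to the practical premise, not the arithmetic--the scheme presumes a continuously, economically adjustable process and operators who can't be trusted to do the same thing with their eyes open.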

In the beginning I mentioned a characterization by the authors of the Second Law of Thermodynamics that's fundamentally in error. This is common among supporters of the 1.5σ shift; I recall in particular a paper by Keki Bhote I read some years ago that made the same mistake. The authors, on page 3, write:
...the second law of thermodynamics ensures that no process could ever be in a state of control about a fixed mean.
It ensures nothing of the kind. The second law states, in its simplest form, that in a closed thermodynamic system, entropy will never decrease, where "entropy" is defined as the amount of energy in the closed system that is not available to do work. A "closed thermodynamic system" is a theoretical construct in which (a) heat energy is present and (b) there is no replenishing source of energy from outside the system. In other words, it describes how, in a closed system, heat energy will always transfer from a warmer source to a cooler one, and while transfer is taking place, the energy in motion can be harnessed to do work. But without a replenishing source of energy, a state of equilibrium will be reached, at which point the transfer will stop--entropy is then at its maximum, and can increase no further.

Unfortunately for the authors' thesis, a manufacturing process is not a closed thermodynamic system. The concept of "entropy" is sometimes carelessly used in reference to the Second Law as a synonym for "disorder" (e.g., processes left to their own devices will "devolve" into a state of reckless abandon), but that use of the term is more apt in information theory, and has little or nothing to do with manufacturing processes. Have a look here for a good basic treatment of the idea. The takeaway should be the last line of that article:
Statistical mechanics, and by extension thermodynamics, has exactly nothing to say about the kind of order we think about intuitively in everyday life.
Moving right along: on page 4 the authors, quite surprisingly, don't seem to understand what "optimum" means, and use their erroneous definition to support their thesis. They write:
[It is an] obvious fact that models must be treated as approximations: all models are wrong but some models are useful. Notice that this implies with probability 1, that no "optimal" scheme is ever in practice optimal. What we should aim at therefore are schemes which are robust and good over a wide range of circumstances.
A bit further down the same page, the authors observe:
Using feedback adjustment it ought to be possible to remove a considerable part of the systematic drift which is allowed for in the Six Sigma specification. This could make possible tighter specification limits and production of an even better product.
First, about "optimal": that which has been optimized is, by definition, as close to ideal as possible given known constraints. Thus the authors' claim that something which has been optimized can't really be optimal makes no sense on any level. This is a critical concept because it highlights a fundamental problem in quality today, one especially present in SS implementations: the idea that we should always chase after an idyllic state, rather than recognizing that the objective in all cases should be to get processes as close to ideal as we can economically get them. Contrary to what seems to be popular belief, the concept of optimization--or leaving well enough alone--does not contravene the idea of continual improvement. "Optimum" means "as good as it can be now," not necessarily "as good as it can ever be."

Finally, the bit about tighter specifications and "...an even better product": there is not necessarily a favorable relationship between tightening specification limits and getting, as a result, a "better product." In fact, I would go so far as to say that in most cases finagling with specification limits will have no effect at all on making a product "better," whatever that means. If it's shown that adjusting spec limits will have a salutary effect on the utility (or saleability) of the product, the limits should be adjusted, provided it's economically responsible to do so. I don't see how SS in general, or the techniques explained in the subject paper in particular, will help much in that regard.

In summary, the problems I find in the Box/Luceño paper are mainly as follows:

  • The paper is more than ten years old and we have no way of knowing at this point whether it reflects the current views of the authors. Thus, use of the article as an appeal to authority fails, at least in the absence of access to the authors' current thinking on the subject.
  • The authors' thesis is built on a very shaky foundation and makes broad assumptions regarding facts not in evidence. Practical experience tells us that operators using conventional SPC won't wait for a significant shift in the mean to occur before something is done about the drift observed.
  • The authors uncritically repeat a common misconception regarding the second law of thermodynamics as a source of drift in manufacturing processes.
  • The authors appear to not understand the concept of optimization, and why it's important to understand it.
  • The authors erroneously connect reduced specification limits with "better product" without defining what "better" means in that context.
 

BradM

Staff member
Admin
#3
Jim, thank you. That was a great write-up. Not sure where the article was published, but there was either poor peer review or no peer review.
 

bobdoering

Stop X-bar/R Madness!!
Trusted
#4
We now know that the following is NOT TRUE for all processes - only those that are normal or near normal (as in set to a target and stay near it without operator intervention):
The basis for their contentions, and indeed the claims of SS devotees, is that SPC begins with "...the assumption that the process distribution is normal and has a fixed mean value that is on target..."

But I do like the statement:
[It is an] obvious fact that models must be treated as approximations: all models are wrong but some models are useful.

Models, or distributions fit to data, really are not perfect. But, just like I say about gages, every model is a bad model, but it might be good enough. The point is you need to determine if it is "good enough" and recognize the risk of the decisions made with the model based on how much it diverges from perfection.
 

bobdoering

Stop X-bar/R Madness!!
Trusted
#5
BradM;bt430 said:
Jim, thank you. That was a great write-up. Not sure where the article was published, but there was either poor peer review or no peer review.
This is a common problem with the peer review - if the "peers" are smitten with the idea, it will slide right by them.
 

Jim Wynne

Super Moderator
#6
BradM;bt430 said:
Jim, thank you. That was a great write-up. Not sure where the article was published, but there was either poor peer review or no peer review.
Thanks. Actually it's a little sloppy, but it was done in a hurry. I just noticed that I misspelled "George" in the post title (which I apparently can't correct), which also testifies to the haste.

As far as peer review is concerned, I don't think this was a journal paper, and probably wasn't subject to peer review beyond whatever the authors might have done informally.

ETA: I just noticed that the misspelling of "George" was corrected--thank you, whoever you are.
 
