It has been suggested that my previous comment was a "rant" with no information to back up its content. I will provide the details as requested.
The "six sigma" of Six Sigma is based on an assumed 1.5 sigma shift (or drift), otherwise known as a "fudge factor", "correction", "adjustment", or "operating window". This shift produces the claim of 3.4 defects per million opportunities (dpmo), or "3.4 bugs per million lines of code" in this instance.
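To see where the 3.4 figure comes from: a nominal six sigma specification limit, minus the assumed 1.5 sigma shift, leaves 4.5 sigma between the shifted mean and the nearer limit, and the normal tail beyond 4.5 sigma is about 3.4 per million. A minimal sketch of that arithmetic (standard Python only; this is the generic normal-tail calculation behind the published figure, not anything from Harry's own derivation):

```python
from math import erfc, sqrt

def normal_tail(z):
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * erfc(z / sqrt(2.0))

# A six sigma limit minus the assumed 1.5 sigma shift leaves 4.5 sigma of headroom.
dpmo = normal_tail(6.0 - 1.5) * 1_000_000
print(round(dpmo, 1))  # roughly 3.4 defects per million opportunities
```

Note that without the shift, the tail beyond a full 6 sigma is about 0.001 dpmo; the entire "3.4" headline depends on the 1.5 adjustment.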
Such Six Sigma claims are utter nonsense. People should take the time to investigate Six Sigma's farcical source.
The +/-1.5 shift was introduced by Mikel Harry, as most people are aware. Where did he get it? Harry refers to a 1975 paper by Evans, "Statistical Tolerancing: The State of the Art. Part 3. Shifts and Drifts". The paper is about tolerancing, that is, how the overall error in an assembly is affected by the errors in its components. Evans in turn refers to a 1962 paper by Bender, "Benderizing Tolerances - A Simple Practical Probability Method for Handling Tolerances for Limit Stack Ups". Bender looked at the classical situation of a stack of disks: how the overall error in the size of the stack relates to the errors in the individual disks. Based on "probability, approximations and experience", he suggests:
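As commonly quoted in later accounts of this history (a reconstruction from those accounts, not Bender's original wording), Bender's suggestion was to inflate the usual root-sum-of-squares stack-up by a factor of 1.5:

    T = 1.5 * sqrt(T1^2 + T2^2 + ... + Tn^2)

where T is the overall assembly tolerance and T1 to Tn are the component tolerances.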
What has this got to do with monitoring the myriad of processes that people are concerned about? Nothing! Harry then takes things a step further. Imagine a process where 5 samples are taken every half hour and plotted on a control chart. Harry considered the "instantaneous" initial 5 samples as "short term" (Harry's n=5) and the samples accumulated throughout the day as "long term" (Harry's g=50 points). Because of random variation in the first 5 points, the mean of the initial sample differs from the overall mean. Harry derived a relationship between short-term and long-term capability, using the equation above, to produce a capability shift, or "Z shift", of 1.5! Over time, the original meaning of "short term" and "long term" has been changed so that "long term" now refers to drifting means.
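To make the setup concrete, here is a minimal sketch (plain Python; the normal process model and the pooled-variance estimate are my illustrative assumptions, following the n=5, g=50 description above) comparing "short term" and "long term" variation estimates for a process that really is stable. For an in-control process the two estimates agree closely, so any capability gap between them is nowhere near a fixed 1.5:

```python
import random
import statistics

random.seed(1)
TARGET, SIGMA = 10.0, 1.0
N, G = 5, 50  # Harry's subgroup size and number of subgroups

# A stable (in statistical control) process: 50 half-hourly subgroups of 5.
subgroups = [[random.gauss(TARGET, SIGMA) for _ in range(N)] for _ in range(G)]
all_points = [x for sg in subgroups for x in sg]

# "Short term": pooled within-subgroup variation.
short_term = (sum(statistics.variance(sg) for sg in subgroups) / G) ** 0.5
# "Long term": overall variation of all g*n points.
long_term = statistics.stdev(all_points)

print(f"short-term sigma: {short_term:.3f}, long-term sigma: {long_term:.3f}")
```

Any systematic 1.5 sigma gap between short-term and long-term capability has to come from the process actually moving, not from the sampling arithmetic.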
Harry has clung tenaciously to the "1.5", but over the years its derivation has been modified. In a recent note, Harry wrote: "We employed the value of 1.5 since no other empirical information was available at the time of reporting." In other words, 1.5 has now become an empirical rather than a theoretical value. A further softening from Harry: "... the 1.5 constant would not be needed as an approximation".
Despite this, industry has fixed on the idea that it is impossible to keep processes on target. No matter what is done, process means will drift by +/-1.5 sigma. In other words, suppose a process has a target value of 10.0, and control limits work out to be, say, 13.0 and 7.0. "Long term" the mean will drift to 11.5 (or 8.5), with control limits changing to 14.5 and 8.5. This is nonsense.
The simple truth is that any process where the mean changes by 1.5 sigma or any other amount, is not in statistical control. Such a change can often be detected by a trend on a control chart. A process that is not in control is not predictable. It may begin to produce defects, no matter where specification limits have been set.
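The detectability point can be illustrated with a minimal Shewhart-style sketch (plain Python; the simulated process and the 3-sigma limits are standard control-charting conventions, not anything from the sources above). A sustained 1.5 sigma shift moves a subgroup mean of n=5 by 1.5*sqrt(5), roughly 3.35 of its own standard errors, so the mean chart flags it almost immediately:

```python
import random

random.seed(42)
TARGET, SIGMA, N = 10.0, 1.0, 5
SE = SIGMA / N ** 0.5                        # standard error of a subgroup mean
UCL, LCL = TARGET + 3 * SE, TARGET - 3 * SE  # 3-sigma limits for the mean chart

def subgroup_mean(mu):
    """Mean of one subgroup of N observations drawn from a process centred at mu."""
    return sum(random.gauss(mu, SIGMA) for _ in range(N)) / N

# 20 in-control subgroups, then the process mean shifts up by 1.5 sigma.
means = [subgroup_mean(TARGET) for _ in range(20)]
means += [subgroup_mean(TARGET + 1.5 * SIGMA) for _ in range(20)]

# Index of the first subgroup mean outside the control limits.
signal = next(i for i, m in enumerate(means) if not LCL < m < UCL)
print(f"out-of-control signal at subgroup {signal} (shift began at subgroup 20)")
```

Run rules (trends, runs on one side of the centre line) would catch smaller or slower shifts as well; the point is simply that a 1.5 sigma shift is large, visible, and a signal to act, not a permanent operating allowance.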
World Class Quality means "on target with minimum variation".