
Cpk Calculation - I analyse double seam cans

Bill Levinson

Involved In Discussions
#11
It looks like you have only 3 measurements per position, if I am reading your data correctly. There also appears to be a possible resolution issue, as the measurements range from 0.94 to 1.02; there are only eight possible distinct values.

If the spec limits are 0.90 and 1.03, AND we assume that the measurements are independently distributed, i.e. not correlated, Minitab gives me a Ppk of 1.16. (I cannot really estimate short-term variation because the measurements are not in the order in which they were made). Minitab also identifies the 1.02 as an outlier.

This again assumes I am dealing with 24 completely independent and uncorrelated measurements.
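
For anyone who wants to reproduce that kind of figure outside Minitab, here is a minimal Python sketch of the overall (long-term) capability calculation. The data values are invented placeholders at 0.01 resolution, not the original poster's seam measurements; only the spec limits of 0.90 and 1.03 come from the discussion above:

```python
import statistics

# 24 invented placeholder measurements at 0.01 resolution
# (NOT the original poster's data), with LSL = 0.90 and USL = 1.03.
data = [0.97, 0.98, 0.96, 0.99, 0.97, 0.98, 0.95, 0.97,
        0.98, 0.96, 0.97, 0.99, 0.94, 0.98, 0.97, 0.96,
        0.98, 0.97, 0.99, 0.96, 0.97, 0.98, 1.02, 0.97]
LSL, USL = 0.90, 1.03

mean = statistics.mean(data)
s = statistics.stdev(data)   # overall (sample) standard deviation

# Ppk uses the overall standard deviation. A Cpk figure would need a
# within-subgroup (short-term) estimate, which requires knowing the
# production order of the measurements.
ppk = min(USL - mean, mean - LSL) / (3 * s)
print(f"mean={mean:.4f}  s={s:.4f}  Ppk={ppk:.2f}")
```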
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#12
Crucial - Demonstrate homogeneity of the process data stream by plotting the data in time-series order on an individuals and moving range chart. If the data demonstrate a lack of stability, then the process is unpredictable and any characterisation is wishful thinking.
A word of caution here: While control charts are designed to detect non-homogeneity, a non-homogeneous process can be quite stable and predictable. Rational Subgrouping was developed to handle stable non-homogeneity. Again, all of this statistical alchemy comes down to a deep understanding of the physics of your process as well as a deep understanding of statistics and probability. Quality ‘statistics’ are simply not about plugging some data into a stats package and getting some magical answer...
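
For anyone wanting to run the homogeneity check described above without a stats package, here is a minimal sketch of the individuals and moving range (I-MR) chart limits using the standard chart constants. The data values are placeholders, not the thread's measurements, and they must be in time-series (production) order:

```python
# Minimal I-MR (individuals / moving range) chart limits - a sketch only.
# Data must be in time-series (production) order; values here are placeholders.
data = [0.97, 0.98, 0.96, 0.99, 0.97, 0.95, 0.98, 0.96,
        0.97, 0.99, 0.94, 0.98, 0.97, 0.96, 0.98, 1.02]

mr = [abs(b - a) for a, b in zip(data, data[1:])]   # moving ranges of size 2
x_bar = sum(data) / len(data)
mr_bar = sum(mr) / len(mr)

# Standard I-MR constants: 2.66 = 3/d2 (d2 = 1.128 for n = 2), 3.267 = D4.
x_ucl, x_lcl = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar
mr_ucl = 3.267 * mr_bar

print(f"Individuals:  CL={x_bar:.4f}  UCL={x_ucl:.4f}  LCL={x_lcl:.4f}")
print(f"Moving range: CL={mr_bar:.4f}  UCL={mr_ucl:.4f}")

# Points outside the limits (or obvious patterns) suggest the data stream is
# not behaving homogeneously, and any capability figure would be wishful thinking.
out = [i for i, x in enumerate(data) if not (x_lcl <= x <= x_ucl)]
print("Out-of-limit points:", out or "none")
```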
 

Welshwizard

Starting to get Involved
#13
A word of caution here: While control charts are designed to detect non-homogeneity, a non-homogeneous process can be quite stable and predictable. Rational Subgrouping was developed to handle stable non-homogeneity. Again, all of this statistical alchemy comes down to a deep understanding of the physics of your process as well as a deep understanding of statistics and probability. Quality ‘statistics’ are simply not about plugging some data into a stats package and getting some magical answer...

Could you please send me an example where a non-homogeneous process, as demonstrated by an appropriate control chart, is deemed to be predictable, along with your reasoning? Thanks.
I have not come across the term "stable non-homogeneity", or the implied property of predictability as you have put it, but I am open to being educated.

I do not encourage anyone to plug some data into a stats package to get a magical answer, but before publishing any statistic I do encourage a very simple check of homogeneity, which, in my training and experience, underpins any assumption of predictability. That is the thrust of my message.
 

Bev D

Heretical Statistician
Staff member
Super Moderator
#15
On non-homogeneity and predictability: when statistics and SPC are taught, and for many of the inferential statistical tests, the ideal state is a homogeneous random population. (Think about the math for a minute: a homogeneous random population - which is redundant - is one where the average (location) and the standard deviation (spread) are primarily caused by the same factor(s), so the mathematical tests that predict the variation of the average from the standard deviation work; and when an "assignable cause" occurs that moves the average (location) but not the standard deviation, the formulas will 'detect' that the variation in averages is not within the expected range (the standard error of the mean).) However, and this is a huge however, many processes are simply not homogeneous. There are two basic types one might think of:
  1. multiple process streams are combined and there is a systemic difference between the streams (such as with cavities in a mold or across a web of material, etc.)
  2. where there are multiple natural components of variation, such as lot-to-lot or within-piece, in addition to the 'ideal' piece-to-piece variation.
Rational subgrouping was developed to handle this. I have many examples of this - see my resource on 'profound statistical concepts' for two such examples. Sometimes these processes are unstable and sometimes they are stable; sometimes they are capable and sometimes they are not...
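
To make the multiple-stream case (type 1 above) concrete, here is a small simulation sketch - invented numbers, not data from any real process - showing how the choice of subgroup changes what the within-subgroup variation represents:

```python
import random
import statistics

random.seed(1)

# Hypothetical illustration: four mold cavities with systematic offsets
# (a stable but non-homogeneous process), sampled for 20 molding cycles.
offsets = {1: -0.02, 2: 0.00, 3: 0.01, 4: 0.03}
cycles = 20
parts = {cav: [1.00 + off + random.gauss(0, 0.005) for _ in range(cycles)]
         for cav, off in offsets.items()}

# Scheme A: subgroup ACROSS cavities (one part from each cavity per cycle).
# The systematic cavity-to-cavity offsets inflate the within-subgroup spread,
# widening the chart limits and hiding real signals.
across = [statistics.stdev([parts[cav][i] for cav in offsets])
          for i in range(cycles)]

# Scheme B: rational subgroups WITHIN a cavity (four consecutive parts).
# Within-subgroup spread now reflects only routine piece-to-piece variation;
# the cavity differences appear BETWEEN subgroups, where a chart can see them.
within = [statistics.stdev(parts[cav][i:i + 4])
          for cav in offsets
          for i in range(0, cycles, 4)]

print(f"avg within-subgroup sd, subgrouped across cavities: {statistics.mean(across):.4f}")
print(f"avg within-subgroup sd, subgrouped within a cavity: {statistics.mean(within):.4f}")
```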

(By the way, my comment regarding plugging numbers into statistical software was not aimed at you - it was a general reminder for the many poor souls who are given a job where statistics is required but who get no critical training...)

Some articles from Donald Wheeler:

Wheeler, Donald, “What is a Rational Subgroup?”, Quality Digest, October 1997

Wheeler, Donald, “Good Limits from Bad Data Part III”, Quality Digest, May 1997

Wheeler, Donald, “Can I Have Sloping Limits?”, Quality Magazine, May 1999

Wheeler, Donald, “The Three-Way Chart”, Quality Digest, March 2017
 

Welshwizard

Starting to get Involved
#16

Thanks for taking the time to reply.

On your examples of non-homogeneity:
- The two examples you have given are of course not rational, in the sense that you wouldn't expect them to demonstrate homogeneity for the reasons you have outlined; because of this, it would be futile to plot the data on a process behaviour chart.

Even if the notion of homogeneity is satisfied in our minds by the organisation of the data, it doesn't mean that the process will display predictability "straight out of the box". We will usually have to work at this by chasing down the causes of exceptional variation, and in this way constantly improve the process.

I think we differ on the definition of homogeneity. I find it easier to explain without the possibility of confusing people with standard deviations and process averages, thus:

Stability and Predictability are descriptions of the underlying process.

Homogeneity is the property of the data produced by a predictable process.

Thinking about variation and how it works:

The factors we control do not create the variation in the product stream. We choose the levels for the control factors to obtain the desired process average.

The uncontrolled factors create virtually all the variation in the product stream, but there will be a Pareto effect among these uncontrolled factors. There will be a few dominant factors and a large number of trivial factors.

The uncontrolled factors can be classified into two groups according to the type of variation present: dominant cause-and-effect relationships will be assignable causes, and these will move the process around to create exceptional variation. The large number of trivial uncontrolled factors will be the common causes of routine variation.

Thanks for your references to Don. I have had the privilege of studying with him for the last 15 years or so, and I am very aware of all these references. A few years ago I asked similar questions about homogeneity to those I have asked you, and this reply contains much of that thinking.

My confusion surrounded your initial statement below:

"While control charts are designed to detect nonhomogeneity, a nonhomogenous process can be quite stable and predictable. Rational Subgrouping was developed to handle stable nonhomogeneity"

A non-homogeneous process can be worked on to become homogeneous, and thus stable and predictable, but only if the data are rationally subgrouped in the first instance, i.e. the data are arranged in a rational, common-sense way.

Cheers
 