Non-Normal Data in a Historically Normal Process

Reme101

Hi Everyone,

I have a few questions where I would like to see what the industry consensus is for working with these problems.

Working with validations, a distribution can sometimes be identified and accepted at DOE and carried through to PQ successfully.
On occasion, however, two runs in a PQ come back normal and one run comes back non-normal, which does not follow the historical data.
Usually we perform an investigation: has anything changed between runs - the machine set-up, material, inspection equipment, etc.? If we don't find anything unusual, we repeat the run.

I'm wondering if there is an alternative accepted approach for working with non-normal data when the process is typically normal?
Looking forward to hearing your input.
 

Bev D

Heretical Statistician
Leader
Super Moderator
Normal and non-normal is not a valid indicator of difference or change. Many processes are naturally non-normal because they are not homogeneous. Sampling can create the illusion of normality, but remember that all distributional models are just that - man-made 'models' that approximate natural distributions. These models can be useful at times and misleading at others.
Can you elaborate on what you mean? What test are you performing? (For example, are you using SPC or ANOVA as a test of differences in means?) Can you provide example data?

I would caution you on 'consensus' - or majority opinion - it is not a reliable indicator of the validity of an approach; science is not a democracy.
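As a rough illustration of that sampling point, here is a minimal Python sketch (made-up data, not from this thread): the individual values come from a clearly skewed process, yet the subgroup averages can easily pass a normality test because of the central limit theorem.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A skewed, non-normal process: individual values drawn from a gamma distribution
individuals = rng.gamma(shape=2.0, scale=1.0, size=500)

# Averaging rational subgroups of 10 pulls the plotted points toward normality,
# even though the underlying process is not normal
subgroup_means = individuals.reshape(50, 10).mean(axis=1)

print(f"individuals (n=500):   Shapiro-Wilk p = {stats.shapiro(individuals).pvalue:.4f}")
print(f"subgroup means (n=50): Shapiro-Wilk p = {stats.shapiro(subgroup_means).pvalue:.4f}")
```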
 
Reme101

Thanks Bev D,
The test is just a standard capability analysis. We perform 3 runs during PQ and analyse 30 parts from each run.
The distribution identified at DOE was normal, and OQ returned normal as well. For the PQ, 2 runs are normal with p-values > 0.05, but run 3 is non-normal with p < 0.05.
Usually the response is to investigate the non-normality and determine the cause. If no cause is identified, we perform a re-run. But I'm wondering if there is a better way to address this.
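For readers following along, here is roughly what that per-run check amounts to, sketched in Python with hypothetical measurements and spec limits (Minitab-style reports typically use the Anderson-Darling test; Shapiro-Wilk is used here only because it returns a p-value directly):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical PQ data: three runs of 30 parts each (placeholder values only)
runs = {
    "Run 1": rng.normal(loc=10.00, scale=0.05, size=30),
    "Run 2": rng.normal(loc=10.00, scale=0.05, size=30),
    "Run 3": rng.normal(loc=10.00, scale=0.05, size=30),
}

LSL, USL = 9.85, 10.15  # hypothetical specification limits

for name, x in runs.items():
    p_norm = stats.shapiro(x).pvalue                      # normality check on the 30 parts
    s = x.std(ddof=1)
    ppk = min(USL - x.mean(), x.mean() - LSL) / (3 * s)   # overall capability index
    print(f"{name}: n = {len(x)}, normality p = {p_norm:.3f}, Ppk = {ppk:.2f}")
```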
 

Miner

Forum Moderator
Leader
Admin
Listen to Bev D's input. I have seen the following: Normally distributed processes that appear non-normal due to the process drifting, due to a mixture of process streams, and due to the wrong rational sub-grouping scheme. Likewise, a non-normal process may appear normal for the same reasons.
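A quick simulation of the mixture-of-streams case (hypothetical numbers): each stream is perfectly normal on its own, but the pooled data that a single capability study would see will typically fail a normality test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Two streams (e.g., two cavities or fill heads), each individually normal,
# but running at offset means - hypothetical values
stream_a = rng.normal(loc=10.00, scale=0.03, size=60)
stream_b = rng.normal(loc=10.12, scale=0.03, size=60)

# What a single pooled study would actually analyse
pooled = np.concatenate([stream_a, stream_b])

for label, x in [("Stream A", stream_a), ("Stream B", stream_b), ("Pooled", pooled)]:
    print(f"{label}: Shapiro-Wilk p = {stats.shapiro(x).pvalue:.4f}")
```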
 

Miner

Forum Moderator
Leader
Admin
What are you measuring? And on what type of equipment? We may be able to advise whether this should be normal or non-normal.
 

Bev D

Heretical Statistician
Leader
Super Moderator
Beyond the fact that normality is not a measure of process goodness, badness, or capability, remember that small sample sizes can pass or fail the "normality check" and still be normal - or non-normal.

What really matters is whether or not your process is actually capable and whether it is drifting or shifting out of control. SPC is the best approach to monitoring this; isolated capability studies are not.
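To make the SPC suggestion concrete, here is a bare-bones sketch of individuals/moving-range (I-MR) control limits in Python. The data values are placeholders; in practice you would plot all runs in time order and apply your chosen run rules.

```python
import numpy as np

def imr_limits(x):
    """Individuals and moving-range control limits (n=2 constants: d2=1.128, D4=3.267)."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))            # moving ranges between consecutive points
    mr_bar = mr.mean()
    sigma_within = mr_bar / 1.128      # short-term sigma estimated from the average moving range
    center = x.mean()
    return {
        "I chart":  (center - 3 * sigma_within, center, center + 3 * sigma_within),
        "MR chart": (0.0, mr_bar, 3.267 * mr_bar),
    }

# Placeholder measurements in time order (e.g., pooled across PQ runs)
data = [10.02, 9.98, 10.01, 10.05, 9.97, 10.03, 10.00, 10.04, 9.99, 10.06]
for chart, (lcl, cl, ucl) in imr_limits(data).items():
    print(f"{chart}: LCL = {lcl:.3f}, CL = {cl:.3f}, UCL = {ucl:.3f}")
```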
 

Miner

Forum Moderator
Leader
Admin
In addition, very large sample sizes will often fail a normality test because the test becomes overly sensitive to trivial departures from normality. What are your sample sizes?
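As an illustration of that sensitivity (made-up data): a process with a mild, practically irrelevant skew tends to pass the normality test at a PQ-style sample size but fails badly once the sample gets large.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# A mildly skewed process (lognormal, sigma = 0.1, skewness around 0.3):
# practically indistinguishable from normal for most capability purposes
data = rng.lognormal(mean=0.0, sigma=0.1, size=5000)

small = data[:30]  # a PQ-sized sample
print(f"n = 30:   D'Agostino K^2 p = {stats.normaltest(small).pvalue:.3f}")
print(f"n = 5000: D'Agostino K^2 p = {stats.normaltest(data).pvalue:.2e}")
```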
 