CPK with a P value less than 0.005

OmarEn

Hello everyone,

I am a bit confused about a Cpk study I am analysing (attached).
The Cpk and Ppk are very good. This is a tube cutter, and we collected the data with a caliper. For the 508.0 mm length the p-value is 0.177, which is consistent with a normal distribution, but for the 156.0 mm length the p-value is less than 0.005.
The indices are pretty good and there are no points out of control.
Does anyone know why my data is not normal, and what can I do with this information so that I can share it with my customer?

Thanks in Advance,
Regards
 

Attachments

  • CPK Cortadora Haven T1 Brake LD.xlsx
    89.3 KB

Miner

Forum Moderator
Leader
Admin
You may have a combination of two different issues. Your data appear "chunky", meaning the gage resolution produces many data points with the same reading, with gaps in between. With larger sample sizes this can trigger a false positive on a normality test.

In addition, a histogram of the data appears slightly right-skewed. While the control chart does not trigger any out-of-control tests, there is a hint of a long-term oscillation that might explain the skewness.

Add these two together, and it trips the normality test. If you are confident the process is stable, I see no concerns with the results. However, it will be difficult to explain to the typical customer.
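As a minimal sketch of the chunky-data effect described above (Python with numpy/scipy; the mean, sigma, and gage resolution are illustrative assumptions, not values from the attachment):

```python
# Simulate a stable, truly normal process read through a coarse gage.
# Rounding to the gage resolution collapses readings onto a few values,
# which can fail a normality test even though the process is normal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_values = rng.normal(loc=156.0, scale=0.05, size=200)  # stable process
chunky = np.round(true_values, 1)  # hypothetical 0.1 mm effective resolution

# Anderson-Darling statistic (larger = stronger evidence against normality)
print(stats.anderson(true_values, dist='norm').statistic)
print(stats.anderson(chunky, dist='norm').statistic)
```

The rounded readings typically produce a far larger AD statistic (hence a tiny p-value) than the underlying values, which is the false positive described above.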
 

Bev D

Heretical Statistician
Leader
Super Moderator
Difficult, yes, but it can and should be done. A reasonable customer should be able to understand the explanation. We need to do more of this. Blind acceptance of a statistical output or score is not good engineering, science, or business...
 
OmarEn

Thanks a lot for your answer. So, do you suggest that I transform the data so it becomes normal?
 

Miner

Forum Moderator
Leader
Admin
I am always reluctant to transform data, for a number of reasons. The biggest is that you lose a lot of the information contained in the data. In my role as a Master Black Belt, I see a lot of Black Belts transforming non-normal data without understanding why it is non-normal. In almost every case, the data were non-normal because the process was unstable or was a mixture of multiple process streams. In such cases it is wrong to transform the data; the focus should be on stabilizing the process or bringing the process streams closer together.

In your situation, I would evaluate the process over a longer time period to determine whether it is truly stable, and would evaluate the measurement system to determine whether improvements are warranted. If you find that the process is indeed stable and the measurement system is adequate, then I might consider transforming the data, though I prefer using a non-normal capability analysis if a rational non-normal distribution is appropriate.
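As a minimal sketch of the non-normal capability route mentioned above, using the percentile convention Ppk = min((USL - X50)/(X99.865 - X50), (X50 - LSL)/(X50 - X0.135)); the Weibull model, spec limits, and simulated data are illustrative assumptions, not values from the attached study:

```python
# Fit a candidate non-normal model and compute a percentile-based Ppk.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = stats.weibull_min.rvs(c=2.5, loc=155.7, scale=0.4, size=150,
                             random_state=rng)  # stand-in skewed sample

LSL, USL = 155.5, 156.5  # hypothetical spec limits for the 156.0 mm length

c, loc, scale = stats.weibull_min.fit(data)
x00135, x50, x99865 = stats.weibull_min.ppf([0.00135, 0.5, 0.99865],
                                            c, loc, scale)

ppu = (USL - x50) / (x99865 - x50)   # upper capability
ppl = (x50 - LSL) / (x50 - x00135)   # lower capability
print(f"non-normal Ppk = {min(ppu, ppl):.2f}")
```

The 0.135th and 99.865th percentiles play the same role here that the +/-3-sigma limits play in the normal-based index.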
 
OmarEn

Hello Miner.

Thank you very much for your answer. It was very useful for me.
Now I have an idea of what I have to do.

Regards.
 
OmarEn

Great words, Bev D; very useful.
 

isolytical

Involved In Discussions
I agree that a reasonable customer should be able to understand the suggested explanation, but I see tailing at the high end, so I suggest a range test for data elimination.
 

bobdoering

Stop X-bar/R Madness!!
Trusted Information Resource
My first analysis is always best-fit curve fitting. To me, assuming normality rather than evaluating the most correct model is weak. Then, once I have an adequate model, I start to consider what model I should be expecting. Remember, for a process to be stable and capable as a normal distribution, the normal must be the expected distribution! With tool wear, where the normal is not the expected distribution, a normal result is actually evidence that the process is unstable - probably from overcontrol or an incapable process. The "rubber stamp" of the normal assumption is beyond its usefulness these days. You need to apply some critical thinking.
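As a minimal sketch of this best-fit-first approach (Python/scipy; the candidate models and the skewed sample are illustrative, and AIC is just one reasonable way to rank the fits):

```python
# Fit several candidate distributions and rank them by AIC instead of
# assuming normality up front.  Lower AIC = better fit/complexity trade-off.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.gamma(shape=8.0, scale=0.02, size=150) + 155.8  # skewed sample

candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
    "gamma":     stats.gamma,
}
for name, dist in candidates.items():
    params = dist.fit(data)                      # maximum likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))
    aic = 2 * len(params) - 2 * loglik
    print(f"{name:10s} AIC = {aic:8.1f}")
```

Once the best-fitting model is known, the remaining question is whether it is also the model the physics of the process (e.g., tool wear) says you should expect.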
 

jmfv2791

Registered
I really appreciate Miner's comments; now I better understand why we can have a good Cpk with a p-value less than 0.05. However, I want to understand this issue mathematically. If we have a height measurement, for example, with LSL = 0.950 and USL = 1.05, N = 100, mean = 0.99896, and std dev = 0.0066832, how can we calculate the p-value?
Thank you
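One hedged note on this question: a normality-test p-value cannot be computed from N, the mean, and the standard deviation alone; the statistic (Anderson-Darling in Minitab's default normality test) compares the ordered raw readings against the fitted normal CDF, so it needs the individual measurements. A minimal Python/scipy sketch using simulated stand-in data matching the quoted summary statistics:

```python
# The p-value comes from the ordered raw data, not from summary statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
heights = rng.normal(loc=0.99896, scale=0.0066832, size=100)  # stand-in data

stat, p = stats.shapiro(heights)           # Shapiro-Wilk normality p-value
print(f"Shapiro-Wilk p = {p:.3f}")

ad = stats.anderson(heights, dist='norm')  # Anderson-Darling statistic
print(f"AD = {ad.statistic:.3f}, 5% critical value = {ad.critical_values[2]}")
```

Two different raw datasets with exactly the same mean and standard deviation can give very different p-values, which is why the summary numbers alone are not enough.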
 