
P-value less than 0.05, but everything else in control

bobdoering

Stop X-bar/R Madness!!
Trusted
#11
OK, let's put a little more gasoline on the fire.

First, examine the time-ordered sequence to see whether the data appear random or follow a function. (See attached chart.) In this case, there does not appear to be a function, BUT it could be that the tool wear is so slight that it takes 600 data points to see its effect. So, just because it appears random, it could be sampling error. How often do you have to adjust the process? How often do you have to change the cutter? How do you know it is time to change the cutter?
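For anyone who wants to try this first step themselves, here is a minimal sketch (my own illustration, not from the post) of checking a time-ordered sequence for a slow trend such as tool wear. The data file name is hypothetical; the data must be in production order.

```python
# Sketch: look for a slow tool-wear trend hiding in apparently random data.
# "cut_lengths.txt" is a hypothetical file with one measurement per part,
# in time (production) order.
import numpy as np
from scipy import stats

measurements = np.loadtxt("cut_lengths.txt")
t = np.arange(len(measurements))

slope, intercept, rvalue, pvalue, stderr = stats.linregress(t, measurements)
print(f"trend slope: {slope:.6f} per part (p = {pvalue:.3f})")
# A tiny but statistically significant slope over ~600 points suggests tool
# wear masked by other variation, even when a run chart "looks" random.
```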

Now, let's force the data into a normal distribution (see attached). The p-value truly does show that it is not a great model of the data. Does that mean it is not normal? No, because even though it is a bad model, it may be the best model...making it normal. Before losing our minds, note that the Ppk is 4.37. That means that if the model is even close, the process uses up so little of the tolerance that it has little risk of making out-of-specification parts. How do we really know? Look at the confidence data.
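As a rough sketch of what that normal-model step looks like in code (the spec limits below are placeholders, not the OP's actual tolerances):

```python
# Sketch: force a normal model onto the data and compute Ppk from it.
import numpy as np
from scipy import stats

data = np.loadtxt("cut_lengths.txt")   # hypothetical data file, as above
LSL, USL = 9.5, 10.5                   # placeholder specification limits

mu, sigma = data.mean(), data.std(ddof=1)
ppk = min(USL - mu, mu - LSL) / (3 * sigma)
w_stat, p_value = stats.shapiro(data)  # one common normality test
print(f"normal-model Ppk = {ppk:.2f}, normality p-value = {p_value:.4f}")
# A low p-value says the normal curve is a poor model; a huge Ppk says even
# a poor model leaves enormous margin inside the tolerance.
```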

So, let's take the next, more accurate step (which, actually, is usually my first step!) and find the best-fitting model to see how it compares to the normal distribution. We find the Johnson family is a better fit (see attached), well over that 0.05 p-value (although a perfect fit is 1.00). Using this statistically more accurate model for the decision on your process, is it capable? Ppk says yes: 1.83. Better yet, the confidence data confirms it.
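In code, the best-fit step might look like the sketch below. Minitab picks among the Johnson family automatically; here the Johnson SU form is assumed purely for illustration, and Ppk is computed by the percentile method used for non-normal capability.

```python
# Sketch: fit a Johnson SU curve and compute Ppk from its percentiles.
import numpy as np
from scipy import stats

data = np.loadtxt("cut_lengths.txt")         # hypothetical data file
LSL, USL = 9.5, 10.5                         # placeholder limits

params = stats.johnsonsu.fit(data)           # two shape params, loc, scale
fit = stats.johnsonsu(*params)
ks_stat, ks_p = stats.kstest(data, "johnsonsu", args=params)

median = fit.ppf(0.5)
hi, lo = fit.ppf(0.99865), fit.ppf(0.00135)  # the "+/-3 sigma" percentiles
ppk = min((USL - median) / (hi - median), (median - LSL) / (median - lo))
print(f"Johnson fit p-value = {ks_p:.3f}, percentile-method Ppk = {ppk:.2f}")
```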

So, what makes your data non-normal? There is a slight skew to the high side. How does the device locate the tube to cut? Against a stop? If so, a skew away from the stop is both expected and physically supported. The tube cannot go past the stop (a physical limit), but it can readily bounce or otherwise fail to nestle right up against the stop, causing a skewed distribution. In that case, a normal distribution is never expected. Much like 0 as a physical limit to runout (which is also only normal if your process is horrible and has very high runout), your stop is a physical limit that will create an expected skewed distribution. I don't know that this is the case. You have to go back to the true first step of the capability analysis: developing the total variance equation. It will help explain multimodality or skewness, if it is complete enough. You also may have tool wear, but it may be masked by location error.
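A quick toy simulation (mine, not from the post) shows why a locating stop guarantees skew: the part can never pass the stop, but it can sit some positive distance off it.

```python
# Toy simulation: a hard stop plus positive "bounce-back" gives a skewed,
# never-normal distribution. All numbers are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nominal = 10.0                                  # hypothetical target length
bounce = np.abs(rng.normal(0, 0.02, 5000))      # gap off the stop, always >= 0
lengths = nominal + bounce                      # skewed toward the high side

print(f"skewness = {stats.skew(lengths):.2f}")                  # positive
print(f"normality p = {stats.shapiro(lengths[:500])[1]:.2e}")   # rejects normal
```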

But, as you see, there are many reasons why you might not expect the process to be "normal", which makes the customer's expectation 100% statistically incorrect. In fact, Ford, in its customer-specific requirements, addresses expected distributions and non-normal distributions. They are not all correct, but they get partial credit for at least realizing not everything should be normal!


Unfortunately, not all suppliers have the statistical juice to call their customer's bluff on poor application of statistics, and that makes for a long, long day.....:bonk:
 


bobdoering

Stop X-bar/R Madness!!
Trusted
#13
Here is my favorite normality test article. I ALWAYS use best-fit curve fitting instead of a normality test. It gives you the most accurate model for your data; that is what you really want to know!!! That beats guessing whether the data might or might not be normal...yes, that is exactly all the normality test tells you!!! However, statistics are not plug-and-chug like you have been led to believe. Data and statistics do not TELL you anything; they CONFIRM what you need to know! You have to be clever enough to figure out whether the data should be normal or not normal. Then, you need to determine whether any of the variables in the total variance equation are masking the true process variation! Sorry, folks, that is real life!
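For the curious, best-fit curve fitting can be sketched as a simple shoot-out among candidate models (the candidate list below is my own choice, not from the article):

```python
# Sketch: rank several candidate distributions by goodness-of-fit p-value
# instead of asking only "is it normal, yes or no?"
import numpy as np
from scipy import stats

data = np.loadtxt("cut_lengths.txt")     # hypothetical data file
candidates = ["norm", "lognorm", "weibull_min", "johnsonsu", "uniform"]

for name in candidates:
    dist = getattr(stats, name)
    params = dist.fit(data)
    _, p = stats.kstest(data, name, args=params)
    print(f"{name:12s} KS p-value = {p:.3f}")
# The highest p-value marks the least-bad model; use that model's
# percentiles for capability decisions.
```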
 

Miner

Forum Moderator
Staff member
Admin
#15
In my experience, every time someone transformed the data, the reason the data were non-normal in the first place was either mixed process streams or an unstable process. They can't seem to grasp that you need to understand why the data are distributed the way they are before you use the data.
 

bobdoering

Stop X-bar/R Madness!!
Trusted
#16
Miner said: "In my experience, every time someone transformed the data, the reason the data were non-normal in the first place was either mixed process streams or an unstable process. They can't seem to grasp that you need to understand why the data are distributed the way they are before you use the data."
That can happen, if the process has an expected output of true random and independent output with no natural or physical barriers. But, as you know, you can have a very stable process that is non-normal. So, yes, understanding why the data exhibits the distribution - and what distribution is expected - is an art most practitioners do not have, or if they have do not use enough.
 

Matt33

Starting to get Involved
#17
You stated: "My two control charts (Mean/Range) are in control. No points beyond control limits, my histogram looks well, similar to a normal bell, symmetrical. Cpk and Ppk indices meet my customer's expectations too."
I would make a minor correction and state "... Cpk and Ppk indices EXCEED my customer's expectations ..."
If your process is stable, which it is …
And your process is expected to be reasonably bell-shaped, which it is …
And your Mean is close to the Target, which it is …
And you are nowhere near the USL or LSL, which is the case …
Is your management really concerned about the p-value? Don’t they have ‘bigger fish to fry?’
Your customer wants their parts to be consistent and at the target value. Your chart shows an exceptional system. Could it be improved? Sure. Is it worth your time to focus on this? I doubt it.

If those are the correct specs, I would be delighted to have the product you are providing.
 

bobdoering

Stop X-bar/R Madness!!
Trusted
#18
Miner said: "In my experience, every time someone transformed the data, the reason the data were non-normal in the first place was either mixed process streams or an unstable process."
Many times (I will certainly avoid the "every" time) I see that people have normal distributions because of their measurement error, which is often normal and can be very large, masking a predictable, expected, stable non-normal underlying distribution. Once the underlying non-normal distribution is exposed, its capability is often more easily understood with transformation. A little different experience on my behalf.
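A toy simulation (my own, to illustrate the point above) shows how normal gauge error can bury a stable non-normal process:

```python
# Toy simulation: a uniform (sawtooth tool-wear style) process plus large
# normal measurement error can pass a normality test. Numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_process = rng.uniform(-0.05, 0.05, 2000)    # stable, non-normal
gauge_error = rng.normal(0, 0.05, 2000)          # normal measurement error
observed = true_process + gauge_error

print(f"true process normality p = {stats.shapiro(true_process[:500])[1]:.2e}")
print(f"observed data normality p = {stats.shapiro(observed[:500])[1]:.3f}")
# The observed p-value is typically large: the gauge noise masks the
# uniform distribution underneath.
```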
 

stevegyro

Involved In Discussions
#19
Please, can you state a null hypothesis? P-values are used only to disprove a null hypothesis at a certain level of confidence.

Thanks for an excellent post!
 

stevegyro

Involved In Discussions
#20
My guess is that the customer wants a way to prove the new material is not different from (not unequal to) an earlier sample lot.
If this is true, then (please bear in mind that equality cannot be proven in a null hypothesis) the correct way is two one-sided tests: prove the difference is greater than a low limit (where the null hypothesis is <=), then prove it is less than a high limit (where the null hypothesis is >=). I would trust Minitab or JMP more than doing this by hand, but you may.
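That two one-sided tests (TOST) procedure might look like the sketch below; the +/-0.1 equivalence band and the lot data are placeholders you would replace with engineering limits and real measurements.

```python
# Sketch: TOST equivalence test between an old lot and a new-material lot.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(42)
old_lot = rng.normal(10.00, 0.03, 60)   # hypothetical earlier sample lot
new_lot = rng.normal(10.01, 0.03, 60)   # hypothetical new-material lot

p_overall, lower_test, upper_test = ttost_ind(new_lot, old_lot, -0.1, 0.1)
print(f"TOST p-value = {p_overall:.4f}")
# Lower test H0: difference <= -0.1; upper test H0: difference >= +0.1.
# Rejecting both (small overall p) shows the difference lies inside the band,
# i.e., the lots are equivalent within +/-0.1.
```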
Quoting my earlier post: "Please, can you state a null hypothesis? P-values are used only to disprove a null hypothesis at a certain level of confidence. Thanks for an excellent post!"
Was a GR&R performed? To Bob's point above, measurement error is significant most of the time.
No pun intended, but ... How’s that for an aberration of statistical terms?



FWIW, an earlier post (BobD.) is very true: machining does not exhibit any central tendency, so the best (IMHO) a person can do is use appropriate subgroups. Subgroup size is a whole topic in itself!

“May the force be with you ...”

-Steve
 