# Interpreting Normal vs Weibull Capabilities


#### drew88

Hello,

New to the forum. I tried reading up on related topics but I can't seem to wrap my head around this. Hope someone can help me! Sorry if I have butchered the world of statistics because I am trying to learn as I go and it's just too confusing...

I have 6 sets of pull force data of plastic heat staked posts (destructive tensile testing). Understanding that I will have a lot of difficulty justifying that the measurement system is reliable, we needed to go ahead and test this in some way to establish some level of process/product compliance to specification. Each set of data represents a specific location on the same processed part (6 posts per part).

I started by running basic capability analyses/histograms and found that posts 3 & 5 are not normal. The best fit (without transformation) seems to be a 3-parameter Weibull. Since all 6 posts should be the same, I thought I would perform the non-normal analysis uniformly.

This generally makes all the Ppk values go up. In 3 cases, the p-value goes up considerably, which means it is a better fit? In the 3 other cases, the normal distribution has a better p-value. Does this mean that Weibull cannot be used, or that it also fits but with lower confidence?
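For anyone wanting to reproduce this kind of comparison, here is a minimal Python sketch using scipy. The data array is hypothetical (a stand-in for one post's pull forces, since the attached spreadsheet isn't reproduced here), and note that a KS test using parameters estimated from the same sample gives only approximate p-values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical pull-force sample in newtons (stand-in for one post's data)
data = 30 + 10 * rng.weibull(1.5, size=40)

# Fit a normal distribution and a 3-parameter Weibull
# (weibull_min's loc parameter is the Weibull threshold)
norm_params = stats.norm.fit(data)
weib_params = stats.weibull_min.fit(data)

# Kolmogorov-Smirnov goodness-of-fit p-values for each candidate model
p_norm = stats.kstest(data, 'norm', args=norm_params).pvalue
p_weib = stats.kstest(data, 'weibull_min', args=weib_params).pvalue
print(f"normal p = {p_norm:.3f}, 3-parameter Weibull p = {p_weib:.3f}")
```

A higher p-value only means the test found less evidence against that model; it does not prove the model is "correct".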

As far as interpreting what these capability indices mean, does it matter that I have a single process represented by Weibull distributions with different parameters? I feel like this defeats the whole purpose of characterizing the statistical model.
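For context on how a non-normal Ppk is typically computed: common capability software uses a percentile method, where the 0.135th, 50th, and 99.865th percentiles of the fitted distribution stand in for mean ± 3σ. A minimal sketch, with an assumed spec limit and assumed fitted Weibull parameters (both hypothetical):

```python
from scipy import stats

# Assumed values for illustration only: a lower spec limit (minimum pull
# force) and a fitted 3-parameter Weibull (shape c, threshold loc, scale)
LSL = 25.0
c, loc, scale = 1.8, 24.0, 8.0

# Percentile method: the 0.135th percentile and the median replace
# (mean - 3*sigma) and the mean in the usual Ppk formula
x_lo = stats.weibull_min.ppf(0.00135, c, loc=loc, scale=scale)
x_med = stats.weibull_min.ppf(0.5, c, loc=loc, scale=scale)

# One-sided lower capability (PPL) for a minimum-force specification
ppl = (x_med - LSL) / (x_med - x_lo)
print(f"PPL = {ppl:.2f}")
```

Because each post's fitted parameters differ, each index is computed against a different model, which is exactly the interpretability concern in the question above.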

Also, for my own understanding and curiosity: why would I not always characterize the model as Weibull (even if the data are close to normal), if the data can be characterized more closely with the 3 parameters?

Data attached for reference. Any help would be greatly appreciated.

Andrew

#### Attachments

• pull test data.xlsx

#### Bev D

##### Heretical Statistician
Super Moderator
I'd like to understand the purpose of the capability study... Are you doing this because of a Customer reporting requirement (if so, which standard), or are you doing it to understand the capability for the good of the product? The answer we would give you is quite different.

With destructive testing, one approach to understanding measurement error relative to part variation is to perform a capability study on the parts. If the observed variation is capable (stable and within specification over a period of time), then your measurement system is also 'good enough', because the observed variation captures both the measurement error and the part variation.

My more important question is how you collected the samples and how many components of variation you included. This is far more critical to a good capability study than the underlying distribution (which is really just an exercise in statistical math, not an informative quality study...).

I can see from your data that you have captured within-piece variation, but how are you capturing piece-to-piece variation (for example, is it 8 sets of 5 sequential parts spread out over time, or is it one set of 40 sequential parts)? Other components of variation that you should be interested in: set-up to set-up of the staking operation, lot to lot of the plastic assemblies, lot to lot of the resin, and even operator to operator and equipment to equipment...

This data should also be plotted on a control chart - or multi-vari chart - to assess stability before doing any capability assessment....

It's not about the statistical math, it's about the performance of the process...
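As a sketch of the stability check suggested here, the individuals (I-MR) control limits can be computed from the average moving range. The sample values below are made up for illustration:

```python
import numpy as np

def imr_limits(x):
    """Individuals-chart center line and 3-sigma limits from the moving range.

    Sigma is estimated as mr_bar / d2 with d2 = 1.128 (subgroup size 2),
    which gives the familiar 2.66 * mr_bar limit width.
    """
    x = np.asarray(x, dtype=float)
    mr_bar = np.abs(np.diff(x)).mean()   # average moving range
    center = x.mean()
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar

# Hypothetical sequential pull-force readings (N)
center, lcl, ucl = imr_limits([31.2, 30.8, 32.1, 29.9, 31.5, 30.4, 31.0, 32.3])
print(f"CL = {center:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
```

Points outside these limits (or obvious trends between them) indicate instability, which should be resolved before any capability index is trusted.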


#### drew88

Hi BevD,

Thanks for the feedback. This is to understand the capability of the part relative to our design specification (functional requirement).

For destructive testing error, how do we run a capability study on the parts without a measurement system that we trust to do the measuring?

As far as the data shown, these are sequential parts from an assembly dial table (no operator influence, but several nests). Same batch of plastic parts, as far as we have the resolution to see.

In the I-MR charts, if a few posts are stable and a few are not, how should I interpret that (relative to the normal distribution)? If, by fitting the same data to a non-normal Weibull distribution, they fit the I-MR model, does that mean the process is stable?

Thanks.

#### optomist1

##### A Sea of Statistics
Super Moderator
Are the parts from just one piece of equipment? Are the parts assembled during one operation or shift? Depending on the goal of this endeavor (the question Bev alluded to earlier), you might want to ensure that the "parts" subjected to the destructive tests are indicative of the process as you know it: are there other machines/tools that are used to assemble the parts, are there other sources of components/materials, etc.?


#### drew88

> are the parts from just one piece of equipment? Are the parts assembled during one operation or shift? Depending on the goal of this endeavor the question Bev alluded to earlier, you might want to ensure that the "parts" subjected to the destructive tests are indicative of the process as you know it; are there other machines/tools that are used to assemble the parts, are there other sources of components/materials.....etc.

Parts were assembled during one shift through one production stream/machine stream. There are other assembly steps that occur prior to it, but this is as limited in variation as I can hope for.

The only known variation between the parts would be different nests and mold cavities. I've never tried looking at the mold-cavity route, but these samples are as production-representative as they come.

Is it agreed that if I see generally normally distributed data on 4 of the 6 data sets, I should expect to continue seeing normal behavior in similar testing? Back to my original point: does it make sense to analyze every stream with an independent non-normal distribution (and its own Weibull parameters), effectively 'forcing them to fit' a distribution?