Hard to change factors in DOE (Split Plot Designs) - Injection Molded Parts

davis007

All:

I have been asked to put together a DOE to study the effects of some machine settings on some injection molded parts. At the same time we would like to obtain information on the differences between two suppliers of the compound we run.

I think that I should set this up as a split plot (?) design. Changing from one batch of compound to another is very difficult and time consuming.

My plan is to take two lots of material from each manufacturer and run a full factorial (or central composite design; see question below) on the three machine settings for each lot.

When analyzing I will consider the batches nested within the manufacturer, and treat batch as a random factor.
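A sketch of the run sheet this implies (illustrative Python; the names and coded levels are my own, not from the thread):

```python
from itertools import product

# Illustrative run sheet for the proposed split-plot design: lots nested
# within manufacturer are the hard-to-change whole plots; the 2^3
# factorial on the machine settings runs within each lot.
manufacturers = ["A", "B"]   # two compound suppliers
lots_per_mfr = 2             # two lots from each manufacturer
levels = [-1, 1]             # coded low/high machine settings

runs = []
for mfr in manufacturers:
    for lot in range(1, lots_per_mfr + 1):
        for x1, x2, x3 in product(levels, repeat=3):
            runs.append({"mfr": mfr, "lot": f"{mfr}{lot}",
                         "x1": x1, "x2": x2, "x3": x3})

print(len(runs))  # 2 manufacturers x 2 lots x 8 factorial points = 32
```

Each lot gets all 8 factorial points, so lot-to-lot changes stay at 3 (between the 4 lots) instead of being randomized into every run.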

Now for my questions.

1. I am able to set this up in Minitab, using some tricks, for a factorial design on the machine settings, but I would really rather do a central composite design on the machine settings. Does anyone know if this will cause an issue? I do not think so, but just want to check.

2. Does anyone know of a good reference for split plot designs, or for how to treat hard-to-change variables?

3. The response will be the number of defects of a specific type. Does the response being limited to integer values cause any concerns? I think that I should report this as a percentage. Does limiting the range to 0 to 1 raise any issues?


Thanks in advance.
 

Miner

Forum Moderator
Leader
Admin
davis007 said:
Now for my questions.

1. I am able to set this up in Minitab, using some tricks, for a factorial design on the machine settings, but I would really rather do a central composite design on the machine settings. Does anyone know if this will cause an issue? I do not think so, but just want to check.

2. Does anyone know of a good reference for split plot designs, or for how to treat hard-to-change variables?

3. The response will be the number of defects of a specific type. Does the response being limited to integer values cause any concerns? I think that I should report this as a percentage. Does limiting the range to 0 to 1 raise any issues?

1. Minitab is capable of creating central composite designs. Search Minitab Help for information and examples. It should not cause any issues if properly set up.

2. Split plot designs are the correct approach for this scenario. Try these links http://www.itl.nist.gov/div898/handbook/pri/section5/pri55.htm and http://ansc.umd.edu/wwwfaculty/Douglass/Lecture Notes/08SplitPlot S04 Lec.pdf

3. DOEs are normally analyzed using ANOVA, which requires discrete levels of the x-variable and a continuous y-variable. If you must use a discrete y-variable, you may be forced to use a chi-square analysis, or a technique such as Taguchi's accumulation analysis.
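As a rough sketch of the chi-square option (the counts below are purely hypothetical, and I'm assuming SciPy rather than Minitab):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts, purely for illustration: defective vs. good parts
# observed at two machine settings.
table = [[12, 988],   # setting A: 12 defects in 1000 parts
         [3, 997]]    # setting B:  3 defects in 1000 parts

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```

This only tests whether the defect rate differs between settings; it does not model interactions the way ANOVA on a continuous response would.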

Can you use defect size, or defect density (# / unit area)? If you can somehow transform your response into a continuous variable, your chances of success are much greater.
 
davis007

Thank you Miner

The defect either is or is not present in the part, so the best I can do is a continuous variable between 0 and 1: the defect frequency. I will look into transforming the response.


New information.

We have agreed on 3 factors on the equipment side, and to only test one manufacturer. Unfortunately the lot size of material will only allow us to test 4 set points per lot.

I am thinking of running a full factorial on the 3 machine settings and replicating it. I would then randomize these runs and do the testing across 4 lots of material. I will lose randomization of the lots as a factor but will limit the number of lot changes. When I analyze, I will treat the lot as a fourth factor.
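A quick sketch of that plan (illustrative Python; the coded -1/+1 levels and seed are my own):

```python
import random
from itertools import product

random.seed(1)  # for a reproducible illustration only

# Replicated 2^3 factorial (16 runs), randomized, then split into blocks
# of 4 runs per material lot, since the lot size allows only 4 set points.
runs = [{"x1": a, "x2": b, "x3": c, "rep": r}
        for r in (1, 2)
        for a, b, c in product([-1, 1], repeat=3)]
random.shuffle(runs)              # randomize the run order...
for i, run in enumerate(runs):
    run["lot"] = i // 4 + 1       # ...then assign 4 consecutive runs per lot

print(len(runs), sorted({run["lot"] for run in runs}))
```

Note the caveat already raised in the thread: because the lots are not randomized, lot effects are confounded with whatever run-order blocks they happen to contain.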
 

Miner

davis007 said:
The defect either is or is not present in the part, so the best I can do is a continuous variable between 0 and 1: the defect frequency. I will look into transforming the response.

What is the defect that you are evaluating?
 
davis007

The part has a tendency to stick to part of the mold. When the mold halves pull apart, the plastic in that region can tear, creating a kind of rough patch on the inside of the part. This seems to happen more frequently with certain raw materials, and on one type of mold vs. others. We have learned that the type of mold the problem occurs on more frequently is made from a different alloy, thus our thought that thermal control may be an issue.
 

Tim Folkerts

Trusted Information Resource
I agree with what Miner has been saying so far. Let me throw out a few more ideas.

The two designs that came to mind first for me are the two you have mentioned: a 2^3 full factorial or a 3 factor CCD. I kind of lean toward the CCD.

For one thing, the CCD design is a 2^3 factorial + extras. Rather than replicating the 2^3 design, you could run the 2^3 design once (using 2 lots) and see how it looks. Then you could add the extra points - the center and the 6 axial points (using another 3 lots). It is a few more runs, but it gives additional info. The repeated center points would tell you about variation within a lot and the variation between lots. The axial points would allow you to estimate the "curvature" of the dependence of the process on these variables. This might allow you to optimize the settings, rather than just choosing between "high" and "low".

CCD requires that the process can be set to multiple levels. Ideally the variables would be continuously adjustable to meet 5 unevenly spaced values. If that is a problem, you can adjust "alpha" so that 5 evenly spaced values would work. (You can even do 3 evenly spaced values, but that loses some of the power of the design.)
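The alpha arithmetic for 3 factors can be sketched as (plain Python, nothing Minitab-specific):

```python
# Alpha arithmetic for a 3-factor CCD. A rotatable design uses
# alpha = (2**k) ** 0.25; choosing alpha = 2 instead gives five evenly
# spaced levels (-2, -1, 0, 1, 2), and alpha = 1 (face-centered) needs
# only three levels, at some cost in design properties.
k = 3
alpha_rotatable = (2 ** k) ** 0.25
print(round(alpha_rotatable, 3))  # ~1.682

def axial_points(k, alpha):
    """The 2*k axial (star) points: one factor at +/-alpha, others at 0."""
    pts = []
    for i in range(k):
        for sign in (-alpha, alpha):
            p = [0.0] * k
            p[i] = sign
            pts.append(p)
    return pts

print(len(axial_points(k, alpha_rotatable)))  # 6 axial points for k = 3
```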

It would be slightly better experimentally to run all the trials in a more random order, but I don't think it would be a major concern here, unless you believe that the variations from one lot to the next are fairly significant.


You say that the defect is a rough patch left on some parts. Would it be possible to measure the area of the defect rather than just the presence of the defect? Or measure the surface finish? Those could give a continuous variable.

Tim F
 

Miner

Tim Folkerts said:
You say that the defect is a rough patch left on some parts. Would it be possible to measure the area of the defect rather than just the presence of the defect? Or measure the surface finish? Those could give a continuous variable.

Tim F

I agree.
 
davis007

Not sure I understand. The defect is present in maybe 1 out of 250 parts on average under current processing conditions. If I could measure the size of the defect, I would take the average size, using 0 as the size for the parts with no defect. To me, the numbers would be biased by all the parts with no defects.

For example, the two cases below would give an identical numerical response even though the first is far worse for my company.

1. 100 parts, 1 has the defect, covering 10 sq mm - average size 0.1 sq mm
2. 100 parts, 10 have the defect, but only 1 sq mm each - average size 0.1 sq mm

If factor setting A gives result 1 and factor setting B gives result 2, the analysis would not show a difference when using the average size of the defect.

Have I mixed things up?

It seems that if a small defect is just as "bad" as a larger defect, then I should use the number of defects per 1,000 as my basis. But this value is not unbounded. On the low side, 0 is seen on occasion (and I hope to identify the conditions that cause this). While the range is bounded at 100% on the high side, in practice the value does not go above 5%, so I think I can treat this data as bounded only on the low side.

I have looked for a transform, albeit not that hard, that would convert a one-sided bounded response into a pseudo-unbounded value, without success.

Do you have any ideas?
 

Miner

I understand your predicament better now.

If your sample size (number of repeats) is large enough that the percentage of defects is effectively continuous, and such that some level of defects is seen in most of the runs, you can use the percentage of defects.

Be very careful on your sample size per run because your biggest risk is that the number of defects seen varies by pure chance. Your best defense against this is a large number of repeats.

Another consideration is to make sure that you are using machine setting levels that use the full range that you would expect to use when developing a machine setup sheet.
 

Tim Folkerts

The defect is present in maybe 1 out of 250 parts on average using current processing conditions.
This could be a challenge. If you are hoping to use %defective as a pseudo-continuous variable, then you need a good number of defects to get the discrimination you want, and to overcome random variations.

I was just playing with the "Power and Sample Size" feature of Minitab. Suppose you run a 2^3 full factorial. With the above defect rate, if you ran 1,000 parts, you would expect 4 defects per run. Since these should be distributed approximately binomially, the standard deviation would be about 4^0.5 = 2. If you want to be 95% certain to detect changes of 4 (i.e., you'd like to know if a change to 0 or 8 defects is real), then two replicates won't quite do it - you need three reps. (Or you could add 4 center points!)
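The rough arithmetic above can be reproduced in a couple of lines (plain Python):

```python
from math import sqrt

# Reproducing the rough numbers above: a 1-in-250 defect rate and
# 1,000 parts per run.
p = 1 / 250
n = 1000
mean_defects = n * p                 # expected defects per run
sd_defects = sqrt(n * p * (1 - p))   # binomial standard deviation

print(f"expect ~{mean_defects:.0f} defects per run, sd ~{sd_defects:.2f}")
```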

Obviously, the answers will change as you change the size of the run, the % defective, the size of change you hope to detect, or the power you hope to achieve.

Actually, that's not as bad as my original intuition. I was thinking you might need ~30 defects in each trial. Instead, you only need about 30 defects for each SET of trials (e.g., the trials with Factor 1 high and the trials with Factor 1 low).


If I could measure the size of the defect I would take the average size using 0 as the size of the defect in the parts with no defect. To me the numbers would be biased by all the parts with no defects.

For example, the two cases below would give an identical numerical response even though the first is far worse for my company.

1. 100 parts, 1 has the defect, covering 10 sq mm - average size 0.1 sq mm
2. 100 parts, 10 have the defect, but only 1 sq mm each - average size 0.1 sq mm

That is certainly a concern. The area method would work better in a situation where most parts have some defect, or where the size of the defects doesn't change a lot.

I was trying to brainstorm some alternate variable that might correlate with what you want to know. That's where the idea of surface finish came in. Perhaps as you go from low to medium to high on some setting, you go from no defects, to no defects but a hint that defects are coming, to some defects. That hint might be surface finish, discoloration, a change in some other dimension, etc.

Minitab also has an ability to try to optimize several responses. You could try to minimize BOTH the number of defects AND the average size of the defects.


It seems that if a small defect is just as "bad" as a larger defect, then I should use the number of defects per 1,000 as my basis.
My one question would be whether a small defect is indeed as bad as a big defect. That of course is a judgement call. (Perhaps small defects are worse because they are harder to spot. :rolleyes:)


I have looked for a transform, albeit not that hard, that would convert a one-sided bounded response into a pseudo-unbounded value, without success.

Do you have any ideas?
Have you considered the log-normal transformation?

And actually, if you run enough samples to get a reasonable number of defects, then the results ought to be close enough to normal that it would not make a big difference.
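One concrete way to apply that, as a sketch (the 0.5/(n+1) offset is my own assumption - a common convention for handling zero-defect runs, not something from this thread):

```python
import math

# Log-transform the defect fraction, with a small offset so runs with
# zero defects are still defined. The 0.5 / (n + 1) style offset is a
# common convention, used here purely for illustration.
def log_fraction(defects, n):
    return math.log((defects + 0.5) / (n + 1))

for d in (0, 4, 8):
    print(d, round(log_fraction(d, 1000), 3))
```

The transform is monotone, so the ordering of runs by defect rate is preserved while the low-side bound at 0 is stretched out.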


Tim Folkerts
 