Re: Some help on Minitab - 4 Factor Optimization
- we are doing a three-factor optimization, all of them numeric
Sorry for being confused, but didn't you state in the title of this thread that you're doing a 4 Factor Optimization? Or are you referring to 3 responses which should be optimized in a process where 4 factors should be evaluated?
- we don't know what levels to use; I'll assume 2 levels for the factorial and 3 levels for the response surface design (RSD). (If we choose 3 levels for the factorial, isn't it more like an RSD then?)
A factorial design requires 2 levels per factor. If you want to test for curvature (and this is highly recommended to avoid misinterpreting the results), you have to run additional center points. In a center-point run every numeric factor is set (exactly!) to the midpoint between the 2 levels (e.g. temperature low = 30°C, high = 90°C -> center point setting = 60°C).
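If it helps, here is a minimal sketch in plain Python (my own illustration, not Minitab output) of a 2^3 full factorial with 4 center points, in coded and actual units. Only the temperature levels come from the example above; the other two factors are hypothetical placeholders:

```python
import itertools
import numpy as np

# Low/high settings in actual units; the center point is the exact midpoint.
levels = {
    "temperature": (30.0, 90.0),   # °C, from the example above
    "factor_B":    (1.0, 5.0),     # hypothetical
    "factor_C":    (10.0, 20.0),   # hypothetical
}

# All 2^3 = 8 corner runs in coded units (-1 = low, +1 = high).
corners = list(itertools.product([-1, 1], repeat=len(levels)))

# Add e.g. 4 center-point runs: every numeric factor at coded 0.
design = corners + [(0, 0, 0)] * 4

# Translate coded settings into actual units: midpoint + half-range * coded.
for run in design:
    actual = [
        (lo + hi) / 2 + (hi - lo) / 2 * c
        for c, (lo, hi) in zip(run, levels.values())
    ]
    print(run, "->", np.round(actual, 2))
```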
For a response surface design the number of levels required depends on your type of design. Box-Behnken and face-centered central composite designs (CCD) require 3 levels (which are identical to the levels from a factorial design plus center points). Inscribed or circumscribed CCDs require 5 different levels for every numeric factor (see e.g. the Engineering Statistics Handbook, chapter 5.3.3, "How do you select an experimental design?", for details).
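For orientation, here is a small sketch (my own, not from the handbook) of the five coded levels a circumscribed CCD uses per numeric factor; for a rotatable design the axial distance is alpha = (2^k)^(1/4) with k numeric factors:

```python
# Five coded levels per factor in a circumscribed CCD: -alpha, -1, 0, +1, +alpha.
k = 3                          # number of numeric factors
alpha = (2 ** k) ** 0.25       # rotatable axial distance, ~1.68 for k = 3

coded = [-alpha, -1.0, 0.0, 1.0, alpha]
print([round(c, 3) for c in coded])

# Back-transformed to the temperature example (low = 30 °C, high = 90 °C):
lo, hi = 30.0, 90.0
mid, half = (lo + hi) / 2, (hi - lo) / 2
print([round(mid + half * c, 1) for c in coded])
# The axial runs (~9.5 °C and ~110.5 °C) lie outside the factorial cube;
# an inscribed CCD shrinks the cube instead when such settings are infeasible.
```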
A common approach when a curvature might be present (but isn't yet confirmed to be there) is to start with a factorial design with center points. If the analysis shows a significant curvature, the factorial design can be augmented with further runs into a CCD. If there is no vital curvature, the results can be used to build a process model for optimization.
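A rough sketch of what that decision step can look like (invented response values; Minitab reports an equivalent curvature test in the ANOVA of a factorial design with center points):

```python
import numpy as np
from scipy import stats

# Responses at the 8 factorial corners and at 4 center-point replicates
# (all values made up for illustration).
corner_y = np.array([12.1, 14.8, 13.2, 15.9, 11.8, 14.5, 13.0, 15.6])
center_y = np.array([16.2, 16.5, 15.9, 16.4])

# A large gap between the two means (relative to the noise) signals curvature.
t, p = stats.ttest_ind(corner_y, center_y, equal_var=False)
print(f"curvature check: t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("significant curvature -> augment the design with axial runs (CCD)")
else:
    print("no vital curvature -> fit the linear + interaction model directly")
```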
- regarding the linearity, some choose one factor as the main one, others choose another; but we want to determine if there is an interaction between them, or if, as other researchers found, one factor is dominating
Factorial and response surface designs allow different questions about your process to be answered:
- Full or fractional factorial designs:
- Resolution III: Only main effects can be evaluated; 2-factor and higher interactions are confounded with the main effects (a small sketch after this list shows how this aliasing arises). These designs should therefore be avoided if possible (red cells in Minitab), because the risk of misleading results is simply too high.
- Resolution IV: Main effects and some of the 2-way interactions can be evaluated. Resolution IV (yellow cells in Minitab) is recommended for screening designs to find the vital factors among all factors.
- Resolution V: Main effects, 2-way interactions and some of the 3-factor interactions can be analyzed. Resolution V and above (green cells in Minitab) is recommended for optimization.
- Response surface designs: main effects, 2-way interactions and quadratic effects can be evaluated. (Minitab does not provide methods for analyzing interactions higher than 2-way in response surface designs.)
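To make the confounding concrete, here is a minimal Python sketch (my own generic illustration, not a Minitab feature):

```python
# In a 2^(3-1) Resolution III design the third factor is generated as
# C = A*B, so the column for main effect C is identical to the column
# for the AB interaction -> C is aliased with AB.
import itertools

design = [(a, b, a * b) for a, b in itertools.product([-1, 1], repeat=2)]
for a, b, c in design:
    assert c == a * b          # C and AB can never be told apart
    print(f"A={a:+d}  B={b:+d}  C(=AB)={c:+d}")

# Higher-resolution fractions use generators with longer words (e.g. D = ABC),
# which pushes the confounding into higher-order interactions instead.
```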
- Accurate results would be preferable. The number of experiments is not a major concern; we wish the results to be as accurate as possible, but if the experimental runs can be reduced while remaining effective, that would be great.
What's your definition of "accurate"?
- regarding the measurement, the machine we use is not a problem, but I think errors are usually introduced by the person conducting the experiment
Measurement uncertainty does not primarily cover human errors; it evaluates the natural or common variation which you simply get if a part is measured several times, by different operators, with different set-ups, etc. It is highly recommended to assure a sufficiently small measurement uncertainty before running experiments like those in a DoE, so that you can trust the numbers in the table and therefore work with the results of the analysis (e.g. in optimization).
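To give an idea of what such an evaluation measures, here is a simplified sketch (invented numbers, crude averages instead of the ANOVA a real Minitab Gage R&R study uses):

```python
import numpy as np

rng = np.random.default_rng(1)
n_parts, n_ops, n_reps = 5, 3, 2
part_true = rng.normal(100, 5, n_parts)   # real part-to-part variation
op_bias   = rng.normal(0, 0.5, n_ops)     # operator-to-operator shifts

# Same parts, measured twice by each operator.
y = np.empty((n_parts, n_ops, n_reps))
for p in range(n_parts):
    for o in range(n_ops):
        y[p, o] = part_true[p] + op_bias[o] + rng.normal(0, 0.3, n_reps)

# Rough variance-component estimates:
repeatability   = y.std(axis=2, ddof=1).mean()               # within operator+part
reproducibility = y.mean(axis=2).std(axis=1, ddof=1).mean()  # between operators
part_to_part    = y.mean(axis=(1, 2)).std(ddof=1)            # between parts
print(f"repeatability ~ {repeatability:.2f}, "
      f"reproducibility ~ {reproducibility:.2f}, "
      f"part-to-part ~ {part_to_part:.2f}")
```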
One of the worst cases in experimentation is a huge measurement uncertainty which isn't known beforehand. The results then simply do not provide any information about your process: not because there are no effects that could be assigned to the factor settings, but because all effects are drowned out by the measurement process. You simply can't see them if the measurement uncertainty is too high. (Just try to see the tree in front of you if you're in a sand storm.) Another problem is a bias or a non-linearity in the measurement system, which could lead to wrong "optimal" settings.
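A tiny simulation sketch of that masking (all numbers invented): the same true effect of 2 units, estimated once with a capable and once with a very noisy measurement:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.array([-1, -1, -1, -1, 1, 1, 1, 1])   # one coded factor, 8 runs
true_effect = 2.0                             # real change from low to high

for sigma in (0.5, 5.0):                      # small vs huge measurement noise
    y = 10 + (true_effect / 2) * x + rng.normal(0, sigma, x.size)
    effect = y[x == 1].mean() - y[x == -1].mean()
    se = sigma * np.sqrt(1 / 4 + 1 / 4)       # standard error with 4 runs/level
    print(f"sigma={sigma}: estimated effect = {effect:+.2f} (SE ~ {se:.2f})")
# With sigma = 5 the standard error (~3.5) is larger than the true effect (2),
# so the effect is invisible - the "sand storm" in numbers.
```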
A good starting point for an accurate and effective DoE is therefore a measurement system analysis. If a small uncertainty can be confirmed (with a negligible bias), you can rely on your experimental results with a small number of runs and/or replicates, and you get interesting information about your process from the results as well as reliable optimization settings. The higher the uncertainty, the more runs have to be made to find the vital effects; sometimes the preparation of a DoE reveals the real problem of a process: an incapable and highly volatile measurement process.
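As a back-of-the-envelope illustration of that trade-off (the factor 3 below is my own rough rule of thumb, not a Minitab default): the standard error of an effect in a two-level design is 2*sigma/sqrt(N), so the required number of runs grows with the square of the uncertainty:

```python
import math

delta = 2.0                       # smallest effect worth detecting
for sigma in (0.5, 1.0, 2.0):     # measurement/process standard deviation
    # Require SE = 2*sigma/sqrt(N) <= delta/3  ->  N >= 36*(sigma/delta)^2.
    n = math.ceil(36 * (sigma / delta) ** 2)
    print(f"sigma={sigma}: need roughly N >= {n} runs")
```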
Regards,
Barbara