


           may indicate that important factors were missed in the experi-
           ment. DOE data analysis can separate significant from insignificant
           factors by using analysis of variance (ANOVA).
             2. Ranking of relative importance of factor effects and interactions.
           ANOVA can rank the relative importance of each factor and interac-
           tion by assigning each a numerical score, such as an F ratio or a
           percentage contribution.
             3. Empirical mathematical model of response versus experimental
           factors. DOE data analysis can fit an empirical mathematical model
           relating the output y to the experimental factors. The model may be
           linear or polynomial in the factors, with interaction terms added as
           needed. DOE data analysis can also present this relationship graph-
           ically, in the form of main-effects charts and interaction charts.
             4. Identification of best factor level settings and optimal output per-
           formance level. If there is an ideal goal for the output, for example, if y
           is the yield in an agricultural experiment, then the ideal goal for y
           would be “the larger, the better.” By using the mathematical model
           described in item 3, DOE data analysis can identify the factor settings
           that achieve the best possible result for the output, as illustrated in
           the sketch following this list.
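
             As a concrete illustration of items 2 through 4, the following is a
           minimal sketch in Python using pandas and statsmodels. The two-level
           factors A and B, the response y, and all data values are illustrative
           placeholders rather than an example taken from this chapter.

               # Minimal sketch: ANOVA, an empirical model with an interaction
               # term, and selection of the best factor settings.
               import pandas as pd
               import statsmodels.formula.api as smf
               from statsmodels.stats.anova import anova_lm

               # A 2^2 factorial with two replicates; A and B coded at -1/+1.
               df = pd.DataFrame({
                   "A": [-1, -1, 1, 1, -1, -1, 1, 1],
                   "B": [-1, 1, -1, 1, -1, 1, -1, 1],
                   "y": [20.1, 22.4, 27.8, 35.6, 19.7, 23.0, 28.2, 36.1],
               })

               # Empirical model: main effects plus the A:B interaction (item 3).
               model = smf.ols("y ~ A * B", data=df).fit()

               # ANOVA table ranks the relative importance of each term (item 2).
               print(anova_lm(model))

               # Fitted coefficients: y = b0 + b1*A + b2*B + b12*A*B.
               print(model.params)

               # Main-effects summary: mean response at each level of each factor.
               print(df.groupby("A")["y"].mean())
               print(df.groupby("B")["y"].mean())

               # Best setting for a "larger the better" response (item 4):
               # predict over the candidate settings and pick the maximum.
               candidates = pd.DataFrame({"A": [-1, -1, 1, 1], "B": [-1, 1, -1, 1]})
               candidates["y_hat"] = model.predict(candidates)
               print(candidates.sort_values("y_hat", ascending=False).head(1))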

           Step 7: Conclusions and recommendations
            Once the data analysis is completed, the experimenter can draw prac-
            tical conclusions about the project. If the data analysis provides
            enough information, the experimenter may be able to recommend
            changes to the process to improve its performance. Sometimes the
            data analysis does not provide enough information, in which case
            more experiments may be needed.
              When the analysis of the experiment is complete, we must verify the
            conclusions by running additional tests at the recommended settings;
            these are called confirmation runs.
              The interpretation and conclusions from an experiment may
            include a “best” setting for meeting the goals of the experiment.
           Even if this “best” setting were included in the design, you should
           run it again as part of the confirmation runs to make sure that noth-
           ing has changed and that the response values are close to their pre-
           dicted values.
             In an industrial setting, it is very desirable to have a stable process.
           Therefore, one should run more than one test at the “best” settings. A
           minimum of three runs should be conducted. If the time between actu-
           ally running the experiments and conducting the confirmation runs is
           more than a few hours, the experimenter must be careful to ensure
           that nothing else has changed since the original data collection.
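
              One simple way to carry out this check is sketched below: the con-
            firmation-run results are compared with the value predicted by the
            empirical model. The predicted value, the run results, and the use of a
            one-sample t test are illustrative assumptions, not prescriptions from
            this chapter.

               # Minimal sketch: compare confirmation runs at the "best" setting
               # with the model prediction. All numbers are placeholders.
               from scipy import stats

               predicted = 36.0                    # model prediction at the "best" setting
               confirmation = [35.4, 36.5, 35.9]   # three confirmation runs

               mean = sum(confirmation) / len(confirmation)
               t_stat, p_value = stats.ttest_1samp(confirmation, predicted)
               print(f"confirmation mean = {mean:.2f}, predicted = {predicted:.2f}, "
                     f"p-value = {p_value:.3f}")

               # A large p-value suggests the process still behaves as the model
               # predicts; a small p-value warns that something may have changed
               # since the original data were collected.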