                                Suppose α is chosen to be 0.05. The researcher then has a 5 percent chance
                                of declaring a significant result just by chance, even when no real effect
                                exists. So if he does 100 tests, each with a 5 percent chance of such an
                                error, on average 5 of those 100 tests will turn up a statistically
                                significant result purely by chance. However, researchers who don't know
                                that (or who know and go ahead anyway) report results they claim are
                                significant even though they're really bogus.
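
                                To see the arithmetic behind that claim, here's a minimal Python sketch
                                (my own illustration, not from the book) that computes the expected number
                                of false positives and, assuming the tests are independent, the chance of
                                getting at least one:

                                # Back-of-the-envelope check of the "5 out of 100" claim (illustrative sketch)
                                alpha = 0.05   # per-test significance level
                                k = 100        # number of tests run on the same data

                                expected_false_positives = alpha * k
                                prob_at_least_one = 1 - (1 - alpha) ** k   # assumes the k tests are independent

                                print(f"Expected false positives: {expected_false_positives:.0f} out of {k}")
                                print(f"Chance of at least one: {prob_at_least_one:.3f}")   # about 0.994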

                                An Italian mathematician named Carlo Emilio Bonferroni (1892–1960) said
                                “enough already” and created something statisticians call the Bonferroni
                                adjustment in 1950 to control the madness. The Bonferroni adjustment simply
                                says that if you're doing k tests on your data, you can't do each one at
                                level α = 0.05; instead, you need to use an α level for each test equal to
                                0.05 ÷ k.
                                For example, someone who conducts 20 tests on one data set needs to do
                                each one at level α = 0.05 ÷ 20 = 0.0025. This adjustment makes it harder to
                                find a conclusion that’s significant because the p-value for any test must be
                                less than 0.0025. The Bonferroni adjustment curbs the chance of data
                                snooping until you find something bogus.
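
                                Here's a minimal Python sketch of how the adjusted cutoff gets applied
                                (the 20 p-values below are hypothetical, made up for illustration):

                                # Bonferroni adjustment applied to 20 hypothetical p-values (illustrative sketch)
                                p_values = [0.003, 0.011, 0.0009, 0.04, 0.20] + [0.5] * 15

                                alpha = 0.05
                                k = len(p_values)              # number of tests, here 20
                                adjusted_alpha = alpha / k     # 0.05 / 20 = 0.0025

                                for i, p in enumerate(p_values, start=1):
                                    verdict = "significant" if p < adjusted_alpha else "not significant"
                                    print(f"Test {i:2d}: p = {p:.4f} -> {verdict} at level {adjusted_alpha}")

                                Only one of the 20 hypothetical tests (p = 0.0009) clears the 0.0025
                                cutoff, even though four of them would have passed the unadjusted 0.05
                                level.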

                                The downside of Bonferroni’s adjustment is that it’s very conservative.
                                Although it reduces the chance of concluding two means differ when they
                                really don’t, it fails to catch some differences that really are there. In statistical
                                terms, Bonferroni has power issues. (See your Stats I text or Statistics For
                                Dummies for a discussion on power.)


                                Comparing combinations by using Scheffé's method

                                Scheffe’s method was developed in 1953 by Henry Scheffe (1907–1977). This
                                method doesn’t just compare two means at time, like Tukey’s and Fisher’s
                                tests do; it compares all different combinations (called contrasts) of the
                                means. For example, if you have the means from four populations, you may
                                want to test to see if their sum equals a certain value, or if the average of two
                                of them equals the average of the two others.
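
                                Here's a minimal Python sketch of that last kind of contrast (the average
                                of two means versus the average of the other two), using the standard
                                Scheffé criterion; the data, group sizes, and variable names are made up
                                for illustration:

                                # Scheffé test for one contrast among k = 4 group means (illustrative sketch)
                                import numpy as np
                                from scipy import stats

                                groups = [
                                    np.array([23.0, 25.1, 24.3, 26.0, 25.5]),
                                    np.array([27.2, 26.8, 28.1, 27.5, 26.9]),
                                    np.array([22.1, 21.8, 23.0, 22.5, 21.6]),
                                    np.array([24.9, 25.3, 24.1, 25.8, 24.6]),
                                ]
                                k = len(groups)
                                n = np.array([len(g) for g in groups])
                                N = n.sum()
                                means = np.array([g.mean() for g in groups])

                                # MSE (within-group mean square) from the one-way ANOVA
                                sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
                                mse = sse / (N - k)

                                # Contrast: average of groups 1-2 versus average of groups 3-4
                                c = np.array([0.5, 0.5, -0.5, -0.5])
                                L = (c * means).sum()                       # estimated contrast
                                se_L = np.sqrt(mse * ((c ** 2) / n).sum())  # standard error of the contrast

                                # Scheffé criterion: significant if (L / se_L)^2 > (k - 1) * F critical value
                                alpha = 0.05
                                f_crit = stats.f.ppf(1 - alpha, k - 1, N - k)
                                scheffe_stat = (L / se_L) ** 2
                                print(f"Contrast estimate: {L:.3f}")
                                print(f"Scheffé statistic: {scheffe_stat:.2f} vs. cutoff {(k - 1) * f_crit:.2f}")
                                print("Significant" if scheffe_stat > (k - 1) * f_crit else "Not significant")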


                                Finding out whodunit with Dunnett’s test

                                Dunnett’s test was developed in 1955 by Charles Dunnett (1921–1977).
                                Dunnett’s test is a special multiple comparison procedure used in a designed
                                experiment that contains a control group. The test compares each treatment
                                group to the control group and determines which treatments do better than
                                others.
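
                                Here's a minimal Python sketch of that comparison, assuming SciPy 1.11 or
                                later (which provides scipy.stats.dunnett; older releases don't include
                                it). The data are made up for illustration:

                                # Dunnett's test: each treatment vs. the control (illustrative sketch)
                                import numpy as np
                                from scipy.stats import dunnett

                                control     = np.array([10.2, 9.8, 10.5, 10.1, 9.9])    # control group
                                treatment_a = np.array([11.4, 11.9, 11.1, 12.0, 11.6])  # treatment A
                                treatment_b = np.array([10.3, 10.0, 10.6, 10.4, 10.2])  # treatment B

                                # One family of comparisons: A vs. control and B vs. control
                                result = dunnett(treatment_a, treatment_b, control=control)

                                for name, p in zip(["A", "B"], result.pvalue):
                                    verdict = "differs from control" if p < 0.05 else "no significant difference"
                                    print(f"Treatment {name}: adjusted p = {p:.4f} -> {verdict}")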








