Page 78 - Statistics for Dummies
Part I: Vital Statistics about Statistics
headlines.) Statisticians measure the amount by which a result is out of the
ordinary using hypothesis tests (see Chapter 14). They define a statistically
significant result as a result with a very small probability of happening just by
chance, and provide a number called a p-value to reflect that probability (see
the previous section on p-values).
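To make the idea of a p-value concrete, here's a small simulation sketch (the coin-flip numbers are just an illustration, not an example from the book). Suppose you flip a coin 100 times and get 60 heads; the p-value answers the question, "If the coin were fair, how often would chance alone produce a result at least this extreme?"

```python
import random

random.seed(0)

observed_heads = 60   # hypothetical result: 60 heads in 100 flips
n_flips, n_sims = 100, 10_000

# Simulate a fair coin many times and count how often chance alone
# produces a result at least as extreme as the one observed.
extreme = sum(
    sum(random.random() < 0.5 for _ in range(n_flips)) >= observed_heads
    for _ in range(n_sims)
)
p_value = extreme / n_sims
print(p_value)  # a small number, around 0.03
```

Because the p-value comes out well under the usual 0.05 cutoff, a statistician would call 60 heads out of 100 a statistically significant departure from a fair coin.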
For example, if a drug is found to be more effective at treating breast cancer
than the current treatment is, researchers say that the new drug shows a
statistically significant improvement in the survival rate of patients with
breast cancer. That means that based on their data, the difference in the
overall results from patients on the new drug compared to those using the
old treatment is so big that it would be hard to say it was just a coincidence.
However, proceed with caution: You can’t say that these results necessarily
apply to each individual or to each individual in the same way. For full details
on statistical significance, see Chapter 14.
When you hear that a study’s results are statistically significant, don’t automatically assume that the study’s results are important. Statistically significant means the results were unusual, but unusual doesn’t always mean important.
For example, would you be excited to learn that cats move their tails more
often when lying in the sun than when lying in the shade, and that those
results are statistically significant? This result may not even be important to
the cat, much less anyone else!
Sometimes statisticians draw the wrong conclusion about the null hypothesis because a sample doesn’t represent the population (just by chance).
For example, a positive effect that’s experienced by a sample of people who
took the new treatment may have just been a fluke; or in the example in the
preceding section, the pizza company really was delivering those pizzas on
time and you just got an unlucky sample of slow ones. However, the beauty
of research is that as soon as someone issues a press release saying that she
found something significant, the rush is on to try to replicate the results.
If the results can’t be replicated, the original results were probably wrong
for some reason (including being wrong just by chance).
Unfortunately, a press release announcing a “major breakthrough” tends
to get a lot of play in the media, but follow-up studies refuting those results
often don’t show up on the front page.
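The "fluke sample" problem can be simulated, too. Here's a sketch (again using made-up coin-flip numbers, not an example from the book) showing that even when nothing at all is going on, a predictable fraction of experiments will look "significant" just by chance, which is exactly why replication matters:

```python
import random

random.seed(1)

# Flip a perfectly fair coin in many separate "experiments" and see how
# often a single experiment crosses a one-sided p < 0.05 cutoff anyway.
# (59 or more heads in 100 fair flips happens roughly 4-5% of the time.)
n_experiments, n_flips, cutoff = 5_000, 100, 59

false_alarms = sum(
    sum(random.random() < 0.5 for _ in range(n_flips)) >= cutoff
    for _ in range(n_experiments)
)
rate = false_alarms / n_experiments
print(rate)  # close to 0.05: "significant" flukes, with no real effect
```

Roughly 1 in 20 of these fair-coin experiments comes out "statistically significant" even though the coin is fair, so a single headline-grabbing result can easily be one of those flukes.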
One statistically significant result shouldn’t lead to quick decisions on any-
one’s part. In science, what most often counts is not a single remarkable
study, but a body of evidence that is built up over time, along with a variety
of well-designed follow-up studies. Take any major breakthroughs you hear
about with a grain of salt and wait until the follow-up work has been done
before using the information from a single study to make important decisions
in your life. The results may not be replicable, and even if they are, you can’t
know if they necessarily apply to each individual.