4. If your Ha is “not equal to,” double the percentage that you got in Step 3 because your test statistic could have gone either way before the data was collected. (See your Stats I textbook or Statistics For Dummies for full details on obtaining p-values for hypothesis tests.)
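Here's a minimal sketch of that doubling step in Python (my own, not from the book). The z value is a hypothetical test statistic, and scipy's norm.cdf supplies the area under the standard normal curve:

from scipy.stats import norm

z = 2.17  # hypothetical test statistic

# One-tail percentage: the area beyond |z| in one tail of the standard normal
one_tail = 1 - norm.cdf(abs(z))

# For a "not equal to" Ha, double the one-tail percentage, because extreme
# results in either direction count as evidence against Ho
p_value = 2 * one_tail
print(f"p-value = {p_value:.4f}")  # about 0.0300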
Your friend α is the cutoff for your p-value. (α is typically set at 0.05, but
sometimes it’s 0.10.) If your p-value is less than your predetermined value of α,
reject Ho because you have sufficient evidence against it. If your p-value is
greater than or equal to α, you can’t reject Ho.
For example, if your p-value is 0.002, your test statistic is so far away from
Ho that the probability of getting such a result by chance alone is only 2 out of 1,000. So,
you conclude that Ho is very likely to be false. If your p-value turns out to be
0.30, this same result is expected to happen 30 percent of the time anyway,
so you see no red flags there, and you can’t reject Ho. You don’t have enough
evidence against it. If your p-value is close to the cutoff line, say p = 0.049 or
0.051, you say the result is marginal and let the reader make her own conclusions. That's the main advantage of the p-value: It lets other folks determine
whether your evidence is strong enough to reject Ho in their minds.
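That cutoff comparison is easy to put into code. The following Python sketch (mine, not the book's; decide is a hypothetical helper name) runs the p-values from this section through the decision rule:

def decide(p_value, alpha=0.05):
    """Compare a p-value to the predetermined alpha cutoff."""
    if p_value < alpha:
        return "reject Ho (sufficient evidence against it)"
    return "can't reject Ho (not enough evidence against it)"

# The hypothetical p-values discussed above
for p in (0.002, 0.30, 0.049, 0.051):
    print(f"p = {p:.3f}: {decide(p)}")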
False alarms and missed opportunities: Type I and II errors
Any technique you use in statistics to make a conclusion about a population
based on a sample of data has the chance of making an error. The errors I am
talking about, Type I and Type II errors, are due to random chance.
The way you set up your test can help to reduce these kinds of errors, but
they’re always out there. As a data analyst, you need to know how to measure
and understand the impact of the errors that can occur with a hypothesis test
and what you can do to possibly make those errors smaller. In the following
sections, I show you how you can do just that.
Making false alarms with Type I errors
A Type I error is rejecting Ho when Ho is actually true; the chance of making one is the conditional probability of rejecting Ho, given that Ho is true. I think of a Type I error as a false alarm: You blew the whistle when you
shouldn't have.
The chance of making a Type I error is equal to α, which is predetermined
before you begin collecting your data. This α is the same α that represents
the chance of missing the boat in a confidence interval. It makes some sense
that these two probabilities are equal: the probability of rejecting Ho when
you shouldn't (a Type I error) is the same as the chance that the true
population parameter falls outside the range of likely values when it
shouldn't. That chance is α.
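If you want to see that relationship in action, here's a quick simulation sketch (mine, not the book's). It assumes a two-sided z-test on samples of size 30 drawn from a population where Ho really is true (μ = 0, σ = 1), so the long-run false-alarm rate should land near α = 0.05:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=1)
alpha, n, trials = 0.05, 30, 10_000
false_alarms = 0

for _ in range(trials):
    # Draw a sample from a population where Ho is actually true (mu = 0)
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    z = sample.mean() / (1.0 / np.sqrt(n))  # test statistic
    p_value = 2 * (1 - norm.cdf(abs(z)))    # two-sided p-value
    if p_value < alpha:
        false_alarms += 1                   # Type I error: rejected a true Ho

print(f"false-alarm rate = {false_alarms / trials:.3f}")  # close to 0.05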