Page 82 - Intermediate Statistics for Dummies
Chapter 3: Building Confidence and Testing Models
that these two probabilities are equal, because the probability of rejecting
Ho when you shouldn't (a Type I error) is the same as the chance that
the true population parameter falls outside the range of likely values when it
shouldn't. That chance is α.
Say someone claims that the mean time to deliver packages for a company is
3.0 days on average (so Ho is µ = 3.0), but you believe it’s not equal to that (so
Ha is µ ≠ 3.0). Your alpha level is 0.05, and because you have a two-sided test,
this means you have 0.025 on each side. Your sample of 100 packages has a
mean of 3.5 days with a standard deviation of 1.5 days. You find the test
statistic

$$t_{n-1} = \frac{\bar{x} - \mu_0}{s / \sqrt{n}} = \frac{3.5 - 3.0}{1.5 / \sqrt{100}}$$

which equals 3.33. This value falls beyond
1.96 (the value on the last row and the 0.025 column of the t-distribution,
Table A-1 in the Appendix). So you don’t think 3.0 is a likely value for the mean
time of delivery, over all possible packages, and you reject Ho. Your data led
you to that decision and you stick to it.
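The arithmetic in this example can be checked with a short Python sketch (the 1.96 cutoff is the large-sample critical value from Table A-1 in the Appendix):

```python
import math

# Sample statistics from the package-delivery example
mu0 = 3.0    # hypothesized mean under Ho (days)
xbar = 3.5   # sample mean (days)
s = 1.5      # sample standard deviation (days)
n = 100      # sample size

# Test statistic: (sample mean - hypothesized mean) / standard error
t = (xbar - mu0) / (s / math.sqrt(n))
print(round(t, 2))  # 3.33

# Two-sided test at alpha = 0.05: reject Ho when |t| exceeds the cutoff
critical = 1.96
print(abs(t) > critical)  # True, so reject Ho
```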
But suppose your sample just by chance contained some longer than normal
delivery times, and that in reality, the company’s claim is right. You just made
a Type I error. You made a false alarm about the company’s claim.
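A quick simulation can show how often this false alarm happens. Assuming (hypothetically) that the company's claim is true and delivery times are roughly normal with mean 3.0 and standard deviation 1.5, the rejection rate should land near α = 0.05:

```python
import math
import random

random.seed(0)  # reproducible illustration

def rejects(mu_true, mu0=3.0, sigma=1.5, n=100, critical=1.96):
    """Draw one sample of delivery times; return True if the two-sided test rejects Ho."""
    sample = [random.gauss(mu_true, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))
    t = (xbar - mu0) / (s / math.sqrt(n))
    return abs(t) > critical

# Ho is actually true here (mu = 3.0), so every rejection is a false alarm
trials = 2000
false_alarms = sum(rejects(3.0) for _ in range(trials)) / trials
print(false_alarms)  # hovers around alpha = 0.05
```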
To reduce the chance of a Type I error, reduce your value of α. However, I
wouldn't recommend reducing α too far. On the positive side, this reduction
makes it harder to reject Ho, because you need more evidence in your data to
do so. On the negative side, by reducing your chance of a Type I error, you
increase the chance of another type of error — the Type II error. To tackle
Type II errors, keep reading!
Missing an opportunity with a Type II error
A Type II error represents the situation where (continuing with the coin
example) the coin was actually unfair, but your data didn’t have enough evi-
dence to catch it, just by chance. You can think of a Type II error as a missed
opportunity — you didn’t blow the whistle when you should have. In statisti-
cal terms, a Type II error is the conditional probability of not rejecting Ho,
given that Ho is false. I call it a missed opportunity, because you were sup-
posed to be able to find a problem with Ho and reject it, but you didn’t.
The chance of making a Type II error depends on a couple of things:
Sample size: If you have more data, you’re less likely to miss something
that’s going on. For example, if a coin actually is unfair (and you don’t
know it), flipping the coin only ten times may not reveal the problem,
because results can go all over the place when the sample size is small.
But if you flip the coin 1,000 times, you have a good chance of seeing a
pattern that favors heads over tails or vice versa.
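That intuition can be sketched with a simulation. Assuming (hypothetically) an unfair coin with a 55 percent chance of heads, and using a normal-approximation z test of Ho: p = 0.5, the estimated Type II error rate drops sharply as the number of flips grows:

```python
import math
import random

random.seed(1)  # reproducible illustration

def rejects_fairness(p_true, n, critical=1.96):
    """Flip the coin n times; test Ho: p = 0.5 with a rough normal-approximation z test."""
    heads = sum(random.random() < p_true for _ in range(n))
    z = (heads / n - 0.5) / math.sqrt(0.25 / n)
    return abs(z) > critical

def type2_rate(p_true, n, trials=2000):
    """Estimated Type II error rate: how often the test FAILS to reject a false Ho."""
    return sum(not rejects_fairness(p_true, n) for _ in range(trials)) / trials

# The coin really is unfair (p = 0.55), so any non-rejection is a missed opportunity
print(type2_rate(0.55, 10))    # very high: 10 flips usually miss the unfairness
print(type2_rate(0.55, 1000))  # much lower: 1,000 flips rarely miss it
```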