Page 153 - Statistics for Environmental Engineers
For the particular values of this example:
–0.326 – 0.132(2.160) < δ < –0.326 + 0.132(2.160)
–0.61 mg/L < δ < –0.04 mg/L
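These limits can be reproduced numerically. A minimal sketch using SciPy, assuming the tabulated t-value of 2.160 corresponds to 13 degrees of freedom (the sample size behind the example is an assumption here, inferred from the t-value):

```python
from scipy import stats

# Summary statistics from the example:
# d_bar = mean of the paired differences (electrode minus Winkler context)
# se    = standard error of the mean difference
d_bar = -0.326   # mg/L
se = 0.132       # mg/L
df = 13          # assumed: t = 2.160 matches 13 degrees of freedom

t_crit = stats.t.ppf(0.975, df)          # two-sided 95% t-value, about 2.160
lower = d_bar - se * t_crit
upper = d_bar + se * t_crit
print(f"{lower:.2f} mg/L < delta < {upper:.2f} mg/L")
```

Rounded to two decimals this reproduces the interval −0.61 mg/L to −0.04 mg/L quoted above.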
We are highly confident that the difference between the two methods is not zero because the confidence
interval does not include zero. The methods give different results and, furthermore, the
electrode method has given higher readings than the Winkler method.
If the confidence interval had included zero, the interpretation would be that we cannot say with a
high degree of confidence that the methods are different. We should be reluctant to report that the methods
are the same or that the difference between the methods is zero because what we know about chemical
measurements makes it unlikely that these statements are strictly correct. We may decide that the difference
is small enough to have no practical importance. Or the range of the confidence interval might be large
enough that the difference, if real, would be important, in which case additional tests should be done to
resolve the matter.
An alternate but equivalent evaluation of the results is to test the null hypothesis that the difference
between the two averages is zero. The way of stating the conclusion when the 95% confidence interval
does not include zero is to say that “the difference was significant at the 95% confidence level.”
Significant, in this context, has a purely statistical meaning. It conveys nothing about how interesting
or important the difference is to an engineer or chemist. Rather than reporting that the difference was
significant (or not), communicate the conclusion more simply and directly by giving the confidence
interval. Some reasons for preferring to look at the confidence interval instead of doing a significance
test are given at the end of this chapter.
Why Pairing Eliminates Uncontrolled Disturbances
Paired experiments are used when it is difficult to control all the factors that might influence the outcome.
A paired experimental design ensures that the uncontrolled factors contribute equally to both of the
paired observations. The difference between the paired values is unaffected by the uncontrolled
disturbances, whereas the differences of unpaired tests would reflect the additional component of experimental
error. The following example shows how a large seasonal effect can be blocked out by the paired design.
Block out means that the effects of seasonal and day-to-day variation are removed from the comparison.
Blocking works like this. Suppose we wish to test for differences in two specimens, A and B, that are
to be collected on Monday, Wednesday, and Friday (M, W, F). It happens, perhaps because of differences
in production rate, that Wednesday is always two (2) units higher than Monday, and Friday is always
three (3) units higher than Monday. The data are:
                Method
Day           A       B     Difference
M             5       3         2
W             7       5         2
F             8       6         2
Averages    6.67    4.67        2
Variances    2.3     2.3        0
This day-to-day variation is blocked out if the analysis is done on the paired differences
(A − B)_M, (A − B)_W, and (A − B)_F instead of on the method averages
(A_M + A_W + A_F)/3 and (B_M + B_W + B_F)/3. The difference between A and B is two (2) units.
This is true whether we calculate the average of the differences [(2 + 2 + 2)/3 = 2]
or the difference of the averages [6.67 − 4.67 = 2]. The variance of the differences is zero, so it is clear
that the difference between A and B is 2.0.
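The arithmetic above can be checked directly. This short sketch, using only the table's numbers, shows that the day effects inflate the variance within each method (about 2.3) but cancel completely in the paired differences:

```python
a = [5, 7, 8]   # method A on M, W, F
b = [3, 5, 6]   # method B on M, W, F

def mean(x):
    return sum(x) / len(x)

def var(x):
    # sample variance with n - 1 in the denominator
    m = mean(x)
    return sum((xi - m) ** 2 for xi in x) / (len(x) - 1)

diffs = [ai - bi for ai, bi in zip(a, b)]   # [2, 2, 2]

print(mean(a) - mean(b))   # difference of averages: 2.0
print(mean(diffs))         # average of differences: 2.0
print(var(a), var(b))      # about 2.33 each, inflated by day-to-day variation
print(var(diffs))          # 0.0: the day effects cancel in the pairs
```

Either route gives the same estimate of the difference, but only the paired differences are free of the day-to-day component of variance.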
© 2002 By CRC Press LLC