include ANOVA, tolerance intervals, prediction intervals, control charts, confidence intervals, Cohen’s adjustment, nonparametric ANOVA, tests of proportions, alpha error, power curves, and serial correlation. Air pollution standards and regulations also rely heavily on statistical concepts and methods.
One burden of these environmental laws is a huge investment in collecting environmental data. No nation can afford to pour that money into programs built on badly designed sampling plans or on laboratories with insufficient quality control. The cost of poor data is not only the price of collecting the samples and making the laboratory analyses; it also includes investments wasted on remedies for non-problems and damage done to the environment when real problems go undetected. One way to eliminate these inefficiencies in the environmental measurement system is to learn more about statistics.




                       Truth and Statistics
                       Intelligent decisions about the quality of our environment, how it should be used, and how it should be
                       protected can be made only when information in suitable form is put before the decision makers. They,
                       of course, want facts. They want truth. They may grow impatient when we explain that at best we can
                       only make inferences about the truth. “Each piece, or part, of the whole of nature is always merely an
                       approximation to the complete truth, or the complete truth so far as we know it.…Therefore, things
                       must be learned only to be unlearned again or, more likely, to be corrected” (Feynman, 1995).
                        By making carefully planned measurements and using them properly, our level of knowledge is
                       gradually elevated. Unfortunately, regardless of how carefully experiments are planned and conducted,
                       the data produced will be imperfect and incomplete. The imperfections are due to unavoidable random
                       variation in the measurements. The data are incomplete because we seldom know, let alone measure,
                       all the influential variables. These difficulties, and others, prevent us from ever observing the truth exactly.
                        The relation between truth and inference in science is similar to that between guilty and not guilty in
                       criminal law. A verdict of not guilty does not mean that innocence has been proven; it means only that
guilt has not been proven. Likewise, the truth of a hypothesis cannot be firmly established. We can only test whether the data contradict it. If the hypothesis seems plausible in light of the available data, we must make decisions based on the likelihood of the hypothesis being true. Also,
                       we assess the consequences of judging a true, but unproven, hypothesis to be false. If the consequences
                       are serious, action may be taken even when the scientific facts have not been established. Decisions to
                       act without scientific agreement fall into the realm of mega-tradeoffs, otherwise known as politics.
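To make the courtroom analogy concrete, here is a minimal sketch (not from the text; the measurements and the hypothesized mean are invented for illustration) of a one-sample t-test in Python. A large p-value means we fail to reject the hypothesis, which, like a not-guilty verdict, does not prove the hypothesis true.

```python
# Hypothetical illustration of "not guilty" versus "innocent" in hypothesis testing.
# The data and the hypothesized mean are invented for this sketch.
from scipy import stats

measurements = [4.8, 5.3, 5.1, 4.9, 5.4, 5.0, 4.7, 5.2]  # imperfect observations
hypothesized_mean = 5.0                                   # the hypothesis "on trial"

t_stat, p_value = stats.ttest_1samp(measurements, hypothesized_mean)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")

# A large p-value means the data do not dispute the hypothesis.
# It does NOT prove the true mean equals 5.0; the verdict is "not proven wrong,"
# never "proven true."
if p_value > 0.05:
    print("Fail to reject the hypothesis (analogous to a 'not guilty' verdict).")
else:
    print("Reject the hypothesis at the 5% level.")
```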
                        Statistics are numerical values that are calculated from imperfect observations. A statistic estimates a
                       quantity that we need to know about but cannot observe directly. Using statistics should help us move
                       toward the truth, but it cannot guarantee that we will reach it, nor will it tell us whether we have done so.
                       It can help us make scientifically honest statements about the likelihood of certain hypotheses being true.
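As a small illustration of this idea (not from the text; the concentration values are simulated), the sketch below generates imperfect measurements around a "true" mean that in practice could never be observed, then computes the sample mean and an approximate 95% confidence interval: the statistic and an honest statement of its uncertainty.

```python
# Hypothetical sketch: a statistic (the sample mean) estimates a quantity
# (the true mean concentration) that we can never observe directly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_mean = 10.0   # unknown in practice; assumed here only to simulate data
measurements = true_mean + rng.normal(0.0, 1.5, size=12)  # imperfect observations

n = len(measurements)
xbar = measurements.mean()
s = measurements.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * s / np.sqrt(n)

print(f"sample mean = {xbar:.2f}")
print(f"approximate 95% CI: [{xbar - half_width:.2f}, {xbar + half_width:.2f}]")
# The interval quantifies our uncertainty; it does not tell us whether,
# in this particular case, it actually contains the true mean.
```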



                       The Learning Process

Richard Feynman (1995) said, “The principle of science, the definition almost, is the following. The test of all knowledge is experiment. Experiment is the sole judge of scientific truth. But what is the source of knowledge? Where do the laws that are to be tested come from? Experiment itself helps to
                       produce these laws, in the sense that it gives us hints. But also needed is imagination to create from
                       these hints the great generalizations — to guess at the wonderful, simple, but very strange patterns beneath
                       them all, and then to experiment again to check whether we have made the right guess.”
                        An experiment is like a window through which we view nature (Box, 1974). Our view is never perfect.
                       The observations that we make are distorted. The imperfections that are included in observations are
                       “noise.” A statistically efficient design reveals the magnitude and characteristics of the noise. It increases
                       the size and improves the clarity of the experimental window. Using a poor design is like seeing blurred
                       shadows behind the window curtains or, even worse, like looking out the wrong window.


