Page 262 - Applied statistics and probability for engineers
240 Chapter 7/Point Estimation of Parameters and Sampling Distributions
Now let’s consider a different type of question. Suppose that two different reaction temperatures, t₁ and t₂, can be used in a chemical process. The engineer conjectures that t₁ will result in higher yields than t₂. If the engineer can demonstrate that t₁ results in higher yields, then a process change can probably be justified. Statistical hypothesis testing is the framework for solving problems of this type. In this example, the engineer would be interested in formulating hypotheses that allow him or her to demonstrate that the mean yield using t₁ is higher than the mean yield using t₂. Notice that there is no emphasis on estimating yields; instead, the focus is on drawing conclusions about a hypothesis that is relevant to the engineering decision.
This chapter and Chapter 8 discuss parameter estimation. Chapters 9 and 10 focus on
hypothesis testing.
Learning Objectives
After careful study of this chapter, you should be able to do the following:
1. Explain the general concepts of estimating the parameters of a population or a probability
distribution
2. Explain the important role of the normal distribution as a sampling distribution
3. Understand the central limit theorem
4. Explain important properties of point estimators, including bias, variance, and mean square error
5. Know how to construct point estimators using the method of moments and the method of maximum
likelihood
6. Know how to compute and explain the precision with which a parameter is estimated
7. Know how to construct a point estimator using the Bayesian approach
7-1 Point Estimation
Statistical inference always focuses on drawing conclusions about one or more parameters of
a population. An important part of this process is obtaining estimates of the parameters. Sup-
pose that we want to obtain a point estimate (a reasonable value) of a population parameter.
We know that before the data are collected, the observations are considered to be random variables, say, X₁, X₂, …, Xₙ. Therefore, any function of the observations, or any statistic, is also a random variable. For example, the sample mean X̄ and the sample variance S² are statistics and random variables.
Another way to visualize this is as follows. Suppose we take a sample of n = 10 observations from a population and compute the sample average, getting the result x̄ = 10.2. Now we
repeat this process, taking a second sample of n = 10 observations from the same population
and the resulting sample average is 10.4. The sample average depends on the observations in
the sample, which differ from sample to sample because they are random variables. Conse-
quently, the sample average (or any other function of the sample data) is a random variable.
Because a statistic is a random variable, it has a probability distribution. We call the prob-
ability distribution of a statistic a sampling distribution. The notion of a sampling distribution
is very important and will be discussed and illustrated later in the chapter.
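The repeated-sampling idea described above is easy to see in a short simulation. The sketch below (a minimal illustration, not from the text) draws many samples of n = 10 from the same population and records each sample average; the normal population with mean 10 is an invented choice made only to mirror the x̄ = 10.2 example.

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

# Draw 1000 independent samples of n = 10 from the same population
# (a normal population with mean 10 and standard deviation 1,
# chosen purely for illustration) and record each sample average.
sample_means = []
for _ in range(1000):
    sample = [random.gauss(10, 1) for _ in range(10)]
    sample_means.append(statistics.mean(sample))

# The sample average changes from sample to sample: it is itself a
# random variable, and the histogram of these 1000 means approximates
# its sampling distribution.
print(min(sample_means), max(sample_means))
print(statistics.mean(sample_means))   # close to the population mean
print(statistics.stdev(sample_means))  # much smaller than the population sd
```

Note how the spread of the sample means is well below the population standard deviation; Section 7-2’s discussion of the sampling distribution of X̄ makes this precise.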
When discussing inference problems, it is convenient to have a general symbol to represent
the parameter of interest. We will use the Greek symbol θ (theta) to represent the parameter.
The symbol θ can represent the mean μ, the variance σ², or any parameter of interest to us. The
objective of point estimation is to select a single number based on sample data that is the most
plausible value for θ. A numerical value of a sample statistic will be used as the point estimate.
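As a concrete illustration, the sketch below computes point estimates of μ and σ² from a small hypothetical data set (the ten values are invented for illustration only):

```python
import statistics

# Hypothetical sample of n = 10 yield measurements (illustrative values).
x = [10.1, 10.4, 9.8, 10.2, 10.3, 9.9, 10.0, 10.5, 10.1, 9.7]

# Point estimate of the population mean μ: the sample mean x̄.
x_bar = statistics.mean(x)

# Point estimate of the population variance σ²: the sample variance s²
# (statistics.variance uses the n - 1 divisor).
s_sq = statistics.variance(x)

print(x_bar, s_sq)
```

Each run of the process would yield different data, and hence different numerical values of x̄ and s²; that variability is exactly why the sampling distribution of an estimator matters.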
In general, if X is a random variable with probability distribution f(x), characterized by the unknown parameter θ, and if X₁, X₂, …, Xₙ is a random sample of size n from X, the statistic