We can show that all of these are unbiased estimators of μ. Because there is not a unique unbiased
estimator, we cannot rely on the property of unbiasedness alone to select our estimator. We need
a method to select among unbiased estimators. We suggest a method in the following section.
7-3.2 Variance of a Point Estimator
Suppose that $\hat{\Theta}_1$ and $\hat{\Theta}_2$ are unbiased estimators of θ. This indicates that the distribution of each estimator is centered at the true value of θ. However, the variances of these distributions may differ. Figure 7-7 illustrates the situation. Because $\hat{\Theta}_1$ has a smaller variance than $\hat{\Theta}_2$, the estimator $\hat{\Theta}_1$ is more likely to produce an estimate close to the true value of θ.
A logical principle of estimation when selecting among several unbiased estimators is to
choose the estimator that has minimum variance.
Minimum Variance Unbiased Estimator
If we consider all unbiased estimators of θ, the one with the smallest variance is called the minimum variance unbiased estimator (MVUE).
In a sense, the MVUE is most likely among all unbiased estimators to produce an estimate $\hat{\theta}$
that is close to the true value of θ. It has been possible to develop methodology to identify the
MVUE in many practical situations. Although this methodology is beyond the scope of this
book, we give one very important result concerning the normal distribution.
If $X_1, X_2, \ldots, X_n$ is a random sample of size n from a normal distribution with mean μ and variance $\sigma^2$, the sample mean $\bar{X}$ is the MVUE for μ.
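A short simulation can make this result concrete. The following sketch (not from the text; it assumes NumPy is available) compares the sample mean with the sample median, another unbiased estimator of μ for symmetric normal data. Both are centered at μ, but the mean shows the smaller sampling variance, consistent with its MVUE property.

```python
import numpy as np

# Sketch: compare the sample mean and sample median as estimators of mu
# for normal data. Both are unbiased, but the mean has the smaller
# variance, consistent with X-bar being the MVUE for mu.
rng = np.random.default_rng(seed=1)
mu, sigma, n, reps = 10.0, 2.0, 25, 20_000

samples = rng.normal(mu, sigma, size=(reps, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

print("average of X-bar:  ", means.mean())    # both centered near mu = 10
print("average of median: ", medians.mean())
print("variance of X-bar: ", means.var())     # close to sigma^2/n = 0.16
print("variance of median:", medians.var())   # larger (about pi/2 times sigma^2/n)
```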
When we do not know whether an MVUE exists, we could still use a minimum variance
principle to choose among competing estimators. Suppose, for example, we wish to estimate
the mean of a population (not necessarily a normal population). We have a random sample
of n observations $X_1, X_2, \ldots, X_n$, and we wish to compare two possible estimators for μ: the sample mean $\bar{X}$ and a single observation from the sample, say, $X_i$. Note that both $\bar{X}$ and $X_i$ are unbiased estimators of μ; for the sample mean, we have $V(\bar{X}) = \sigma^2/n$ from Chapter 5, and the variance of any single observation is $V(X_i) = \sigma^2$. Because $V(\bar{X}) < V(X_i)$ for sample sizes $n \ge 2$, we would conclude that the sample mean is a better estimator of μ than a single observation $X_i$.
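A quick numerical check (a sketch assuming NumPy; the exponential population is chosen only to emphasize that normality is not required) confirms the comparison: the simulated variance of $\bar{X}$ is close to $\sigma^2/n$, well below the variance $\sigma^2$ of a single observation.

```python
import numpy as np

# Sketch: verify V(X-bar) = sigma^2/n versus V(X_i) = sigma^2 by simulation.
# The population need not be normal; exponential data are used here.
rng = np.random.default_rng(seed=2)
n, reps = 10, 50_000
samples = rng.exponential(scale=3.0, size=(reps, n))  # variance sigma^2 = 9

xbar_var = samples.mean(axis=1).var()  # about sigma^2/n = 0.9
xi_var = samples[:, 0].var()           # about sigma^2 = 9

print(f"V(X-bar) ~ {xbar_var:.3f}  (theory: 0.9)")
print(f"V(X_i)   ~ {xi_var:.3f}   (theory: 9.0)")
```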
7-3.3 Standard Error: Reporting a Point Estimate
When the numerical value or point estimate of a parameter is reported, it is usually desirable
to give some idea of the precision of estimation. The measure of precision usually employed
is the standard error of the estimator that has been used.
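As a preview of how such a report looks in practice, the following sketch (the data are hypothetical, and NumPy is assumed) pairs a point estimate of μ with the usual estimated standard error of the sample mean, $s/\sqrt{n}$, where s is the sample standard deviation.

```python
import numpy as np

# Sketch: report a point estimate together with its estimated standard
# error. For the sample mean, the standard error sigma/sqrt(n) is
# estimated by s/sqrt(n) when sigma is unknown.
x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0, 10.3])  # hypothetical data

n = x.size
xbar = x.mean()
s = x.std(ddof=1)      # sample standard deviation
se = s / np.sqrt(n)    # estimated standard error of X-bar

print(f"point estimate of mu:     {xbar:.3f}")
print(f"estimated standard error: {se:.3f}")
```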
[Figure: two density curves over θ, labeled "Distribution of $\hat{\Theta}_1$" and "Distribution of $\hat{\Theta}_2$".]
FIGURE 7-7 The sampling distributions of two unbiased estimators $\hat{\Theta}_1$ and $\hat{\Theta}_2$.