Before proceeding, a remark is in order regarding the notation to be used. As seen in Equation (9.2), our objective in parameter estimation is to determine a statistic

$$\hat{\Theta} = h(X_1, X_2, \ldots, X_n), \qquad (9.20)$$

which gives a good estimate of parameter $\theta$. This statistic will be called an estimator for $\theta$, whose properties, such as mean, variance, or distribution, provide a measure of the quality of this estimator. Once we have observed the sample values $x_1, x_2, \ldots, x_n$, the observed estimator,

$$\hat{\theta} = h(x_1, x_2, \ldots, x_n), \qquad (9.21)$$

has a numerical value and will be called an estimate of parameter $\theta$.
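To make the distinction between an estimator and an estimate concrete, here is a minimal Python sketch (an illustration added here, not from the text), assuming the function $h$ is simply the sample mean; the sample values are hypothetical.

# A minimal sketch of Equations (9.20) and (9.21), assuming h is the sample mean.
# The observed values below are hypothetical, chosen only for illustration.
def h(sample):
    # The statistic h(X1, X2, ..., Xn): here, the arithmetic mean.
    return sum(sample) / len(sample)

x = [2.1, 1.8, 2.4, 2.0, 1.9]   # observed sample values x1, ..., xn
theta_hat = h(x)                # the estimate: a single numerical value
print(theta_hat)                # approximately 2.04

Applied to the random sample $(X_1, X_2, \ldots, X_n)$, the same function $h$ defines the estimator $\hat{\Theta}$, a random variable; applied to the observed values it produces the estimate $\hat{\theta}$.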
9.2.1 UNBIASEDNESS
An estimator $\hat{\Theta}$ is said to be an unbiased estimator for $\theta$ if

$$E\{\hat{\Theta}\} = \theta, \qquad (9.22)$$

for all $\theta$. This is clearly a desirable property for $\hat{\Theta}$, which states that, on average, we expect $\hat{\Theta}$ to be close to the true parameter value $\theta$. Let us note here that the requirement of unbiasedness may lead to other undesirable consequences. Hence, the overall quality of an estimator does not rest on any single criterion but on a set of criteria.
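As a rough numerical illustration of Equation (9.22) (added here, not part of the text), the Python sketch below averages many simulated realizations of the sample mean $\overline{X}$ drawn from a population with known mean $m$; the average comes out close to $m$, consistent with $\overline{X}$ being an unbiased estimator. The population parameters, sample size, and number of repetitions are arbitrary choices.

import random

# Monte Carlo check that E{X_bar} = m for the sample mean (illustrative assumption:
# a normal population with m = 5.0 and standard deviation 2.0).
m, sigma, n, repetitions = 5.0, 2.0, 10, 100_000

estimates = []
for _ in range(repetitions):
    sample = [random.gauss(m, sigma) for _ in range(n)]  # one random sample X1, ..., Xn
    estimates.append(sum(sample) / n)                    # one realization of X_bar

print(sum(estimates) / repetitions)  # close to m = 5.0, up to sampling error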
We have studied two statistics, $\overline{X}$ and $S^2$, in Sections 9.1.1 and 9.1.2. It is seen from Equations (9.5) and (9.8) that, if $\overline{X}$ and $S^2$ are used as estimators for the population mean $m$ and population variance $\sigma^2$, respectively, they are unbiased estimators. This nice property for $S^2$ suggests that the sample variance defined by Equation (9.7) is preferred over the more natural choice obtained by replacing $1/(n-1)$ by $1/n$ in Equation (9.7). Indeed, if we let

$$S^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \overline{X})^2, \qquad (9.23)$$
its mean is

$$E\{S^2\} = \frac{n-1}{n}\,\sigma^2,$$

and the estimator $S^2$ defined by Equation (9.23) has a bias indicated by the coefficient $(n-1)/n$.
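The bias factor can be checked numerically. The following Python sketch (an added illustration, with arbitrary population parameters and sample size) averages both forms of the sample variance over many simulated samples: the $1/(n-1)$ form of Equation (9.7) centers on $\sigma^2$, while the $1/n$ form of Equation (9.23) centers on $(n-1)\sigma^2/n$.

import random

# Compare the mean of the two variance estimators:
#   ss / (n - 1)  -- Equation (9.7), unbiased for sigma^2
#   ss / n        -- Equation (9.23), biased by the factor (n - 1)/n
m, sigma, n, repetitions = 0.0, 3.0, 5, 200_000

unbiased, biased = [], []
for _ in range(repetitions):
    sample = [random.gauss(m, sigma) for _ in range(n)]
    x_bar = sum(sample) / n
    ss = sum((xi - x_bar) ** 2 for xi in sample)
    unbiased.append(ss / (n - 1))
    biased.append(ss / n)

print(sum(unbiased) / repetitions)   # approximately sigma^2 = 9.0
print(sum(biased) / repetitions)     # approximately (n - 1)/n * sigma^2 = 7.2
print((n - 1) / n * sigma ** 2)      # theoretical value: 7.2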