Page 380 - Probability and Statistical Inference

7. Point Estimation  357

For any T ∈ D we have

$E_\theta(T) = \sum_{i=1}^{k} \alpha_i E_\theta(T_i) = \tau(\theta) \sum_{i=1}^{k} \alpha_i = \tau(\theta)$ for all $\theta$,

which shows that any estimator T chosen from D is an unbiased estimator of $\tau(\theta)$. Hence, it is obvious that D ⊆ C. Now we wish to address the following question.

What is the best estimator of $\tau(\theta)$ within the smaller class D?

That is, which estimator from class D has the smallest variance? From the following theorem one will see that the answer is indeed very simple.
Theorem 7.3.2 Within the class of estimators D, the one which has the smallest variance corresponds to $\alpha_i = k^{-1}$, i = 1, ..., k. That is, the best unbiased estimator of $\tau(\theta)$ within the class D turns out to be $\bar{T} = k^{-1}\sum_{i=1}^{k} T_i$, which is referred to as the best linear (in $T_1, ..., T_k$) unbiased estimator (BLUE) of $\tau(\theta)$.

Proof Since the $T_i$'s are pairwise uncorrelated, for any typical estimator T from the class D, we have


$V_\theta(T) = \sum_{i=1}^{k} \alpha_i^2 V_\theta(T_i) = V_\theta(T_1) \sum_{i=1}^{k} \alpha_i^2,$   (7.3.6)

since the $T_i$'s share a common variance. From (7.3.6) it is now clear that we need to minimize $\sum_{i=1}^{k} \alpha_i^2$ subject to the restriction that $\sum_{i=1}^{k} \alpha_i = 1$. But, observe that

$\sum_{i=1}^{k} \alpha_i^2 = \sum_{i=1}^{k} \left(\alpha_i - k^{-1}\right)^2 + k^{-1}.$   (7.3.7)

Hence, $\sum_{i=1}^{k} \alpha_i^2 \geq k^{-1}$ for all choices of $\alpha_i$, i = 1, ..., k, such that $\sum_{i=1}^{k} \alpha_i = 1$. But, from (7.3.7) we see that $\sum_{i=1}^{k} \alpha_i^2$ attains $k^{-1}$, the smallest possible value, if and only if $\alpha_i - k^{-1} = 0$ for every i. That is, $V_\theta(T)$ would be minimized if and only if $\alpha_i = k^{-1}$, i = 1, ..., k. ■
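The identity (7.3.7) and the resulting lower bound can be checked numerically. A minimal sketch (the dimension k, the number of trials, and the random weights are illustrative choices, not from the text):

```python
import random

k = 5  # number of uncorrelated unbiased estimators T_1, ..., T_k

def sum_sq(alpha):
    """Sum of squared weights, proportional to V_theta(T) by (7.3.6)."""
    return sum(a * a for a in alpha)

# Check identity (7.3.7) for many random weight vectors with sum(alpha_i) = 1.
for _ in range(1000):
    raw = [random.random() for _ in range(k)]
    total = sum(raw)
    alpha = [a / total for a in raw]      # normalize so the weights sum to 1
    lhs = sum_sq(alpha)
    rhs = sum((a - 1.0 / k) ** 2 for a in alpha) + 1.0 / k
    assert abs(lhs - rhs) < 1e-12         # identity (7.3.7) holds
    assert lhs >= 1.0 / k - 1e-12         # hence 1/k is a lower bound

# Equality is attained exactly at alpha_i = 1/k for every i:
assert abs(sum_sq([1.0 / k] * k) - 1.0 / k) < 1e-12
```

The loop only illustrates the bound empirically; the proof above establishes it for all admissible weights.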
                                  i
Example 7.3.4 Suppose that $X_1, ..., X_n$ are iid $N(\mu, \sigma^2)$ where $\mu$, $\sigma$ are both unknown, $-\infty < \mu < \infty$, $0 < \sigma < \infty$. Among all linear (in $X_1, ..., X_n$) unbiased estimators of $\mu$, the BLUE turns out to be $\bar{X}$, the sample mean. This follows immediately from Theorem 7.3.2. ■
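The conclusion of this example can be illustrated by simulation: the sample mean has smaller variance than any other linear unbiased estimator of $\mu$. A hedged sketch, with $\mu$, $\sigma$, n, the competing weights, and the replication count chosen arbitrarily for illustration:

```python
import random

random.seed(7)
mu, sigma, n = 2.0, 3.0, 4
reps = 20000

# Two linear unbiased estimators of mu: equal weights (the sample mean, BLUE)
# and an arbitrary set of unequal weights that still sum to one.
w_blue = [1.0 / n] * n
w_other = [0.4, 0.3, 0.2, 0.1]

def estimate(weights):
    """One draw of the linear estimator sum(w_i * X_i) from an iid N(mu, sigma^2) sample."""
    x = [random.gauss(mu, sigma) for _ in range(n)]
    return sum(w * xi for w, xi in zip(weights, x))

def mc_variance(weights):
    """Monte Carlo estimate of the estimator's variance."""
    vals = [estimate(weights) for _ in range(reps)]
    m = sum(vals) / reps
    return sum((v - m) ** 2 for v in vals) / (reps - 1)

v_blue = mc_variance(w_blue)     # theory: sigma^2 / n = 2.25
v_other = mc_variance(w_other)   # theory: sigma^2 * sum(w_i^2) = 2.7
assert v_blue < v_other
```

Both estimators are unbiased for $\mu$ because the weights sum to one; only their variances differ, in line with $V(\sum w_i X_i) = \sigma^2 \sum w_i^2$.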