
One may run the experiment a little differently as follows. Let N be the number of runs needed to observe the first success, so that the pmf of the random variable N is given by P(N = y) = p(1 − p)^{y−1}, y = 1, 2, 3, ..., which means that N is a Geometric(p) random variable. One can verify that E_p[N] = p^{−1} for all p ∈ (0, 1). That is, the sample size N is an unbiased estimator of p^{−1}. This method of data collection, known as inverse binomial sampling, is widely used in the applications mentioned before. In his landmark paper, Petersen (1896) gave the foundation of capture-recapture sampling. The 1956 paper of the famous geneticist J. B. S. Haldane is cited frequently. Look at the closely related Exercise 7.3.8.
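A quick Monte Carlo check of this unbiasedness claim can be run as follows (a minimal sketch, not from the text; the seed, the value p = 0.25, and the number of replications are arbitrary choices):

    import random

    def runs_until_first_success(p, rng):
        # Run independent Bernoulli(p) trials and return the number of
        # runs needed to observe the first success: a Geometric(p) draw.
        n = 1
        while rng.random() >= p:   # failure: try again
            n += 1
        return n

    rng = random.Random(7)
    p = 0.25
    reps = 200_000
    mean_N = sum(runs_until_first_success(p, rng) for _ in range(reps)) / reps
    print(f"average of N over {reps} runs: {mean_N:.4f} (target 1/p = {1/p:.4f})")

The average settles near 1/p = 4, in line with E_p[N] = p^{−1}.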

                                           How should we go about comparing performances of
                                                  any two unbiased estimators of  T(θ)?

   This is done by comparing the variances of the rival estimators. Since the rival estimators are assumed unbiased for T(θ), it is clear that a smaller variance will indicate a smaller average (squared) error. So, if T_1, T_2 are two unbiased estimators of T(θ), then T_1 is preferable to (or better than) T_2 if V_θ(T_1) ≤ V_θ(T_2) for all θ ∈ Θ but V_θ(T_1) < V_θ(T_2) for some θ ∈ Θ. Now, in the class of unbiased estimators of T(θ), the one having the smallest variance is called the best unbiased estimator of T(θ). A formal definition is given shortly.
   Using such a general principle, by looking at (7.3.2), it becomes apparent that among the unbiased estimators T_2, T_3, T_5 and T_6 for the unknown mean µ, the estimator T_3 is the best one to use because it has the smallest variance.
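Since the explicit forms of T_2, T_3, T_5 and T_6 appear in (7.3.2) and are not reproduced here, the following simulation uses stand-in unbiased estimators of µ built from three observations (a sketch under assumed forms, not the text's own estimators); it illustrates how rival unbiased estimators are compared through their simulated means and variances:

    import random, statistics

    rng = random.Random(11)
    mu, sigma, reps = 5.0, 2.0, 100_000

    # Hypothetical unbiased estimators of mu based on X1, X2, X3;
    # each has expectation mu, but their variances differ.
    estimators = {
        "X1":           lambda x: x[0],
        "(X1+X2)/2":    lambda x: (x[0] + x[1]) / 2,
        "(X1+2*X2)/3":  lambda x: (x[0] + 2 * x[1]) / 3,
        "(X1+X2+X3)/3": lambda x: sum(x) / 3,
    }

    values = {name: [] for name in estimators}
    for _ in range(reps):
        x = [rng.gauss(mu, sigma) for _ in range(3)]
        for name, T in estimators.items():
            values[name].append(T(x))

    for name, vals in values.items():
        print(f"{name:>13}: mean = {statistics.fmean(vals):.3f}, "
              f"var = {statistics.variance(vals):.3f}")

All four estimators average close to µ = 5, while the average of all three observations shows the smallest variance (σ²/3), mirroring the conclusion drawn about T_3.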
   Definition 7.3.4 Assume that there is at least one unbiased estimator of the unknown real valued parametric function T(θ). Consider the class C of all unbiased estimators of T(θ). An estimator T ∈ C is called the best unbiased estimator or the uniformly minimum variance unbiased estimator (UMVUE) of T(θ) if and only if for all estimators T* ∈ C, we have

                     V_θ(T) ≤ V_θ(T*) for all θ ∈ Θ.
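As a concrete illustration of this definition (a standard example, not taken from this section): if X_1, ..., X_n are iid Poisson(λ), then both the sample mean X̄ and the unbiased sample variance S² estimate λ without bias, since the Poisson mean and variance coincide; X̄ is in fact the UMVUE here, anticipating the tools of Section 7.4. A simulation sketch, with λ = 4 and n = 10 chosen arbitrarily:

    import math, random, statistics

    def poisson(lam, rng):
        # Knuth's multiplication method; adequate for small lam.
        L = math.exp(-lam)
        k, prod = 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= L:
                return k
            k += 1

    rng = random.Random(3)
    lam, n, reps = 4.0, 10, 50_000
    xbar_vals, s2_vals = [], []
    for _ in range(reps):
        x = [poisson(lam, rng) for _ in range(n)]
        xbar_vals.append(statistics.fmean(x))
        s2_vals.append(statistics.variance(x))  # unbiased for Var = lam

    print(f"Xbar: mean = {statistics.fmean(xbar_vals):.3f}, "
          f"var = {statistics.variance(xbar_vals):.4f}")
    print(f"S^2 : mean = {statistics.fmean(s2_vals):.3f}, "
          f"var = {statistics.variance(s2_vals):.4f}")

Both estimators average near λ = 4, but the variance of X̄ (about λ/n = 0.4) is far smaller than that of S², so X̄ is the better unbiased estimator of the two.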
   In Section 7.4 we will introduce several approaches to locate the UMVUE. But first let us focus on a smaller subset of C for simplicity. Suppose that we have located estimators T_1, ..., T_k which are all unbiased for T(θ) such that V_θ(T_i) = δ² and the T_i's are pairwise uncorrelated, 0 < δ < ∞, i = 1, ..., k. Denote a new subclass of estimators