
7.3 Criteria to Compare Estimators

Let us assume that a population's pmf or pdf depends on some unknown vector valued parameter $\boldsymbol{\theta} \in \Theta\ (\subseteq \Re^{k})$. In many instances, after observing the random variables $X_1, \ldots, X_n$, we will come up with several competing estimators for $\tau(\boldsymbol{\theta})$, a real valued parametric function of interest. How should we then proceed to compare the performances of rival estimators and ultimately decide which one is perhaps the “best”? The first idea is introduced in Section 7.3.1 and the second one is formalized in Sections 7.3.2-7.3.3.



7.3.1 Unbiasedness, Variance, and Mean Squared Error

In order to set the stage, we start right away with two simple definitions. These are followed by some examples as usual. Recall that $\tau(\boldsymbol{\theta})$ is a real valued parametric function of $\boldsymbol{\theta}$.
Definition 7.3.1 A real valued statistic $T \equiv T(X_1, \ldots, X_n)$ is called an unbiased estimator of $\tau(\boldsymbol{\theta})$ if and only if $E_{\boldsymbol{\theta}}(T) = \tau(\boldsymbol{\theta})$ for all $\boldsymbol{\theta} \in \Theta$. A statistic $T \equiv T(X_1, \ldots, X_n)$ is called a biased estimator of $\tau(\boldsymbol{\theta})$ if and only if $T$ is not unbiased for $\tau(\boldsymbol{\theta})$.
Definition 7.3.2 For a real valued estimator $T$ of $\tau(\boldsymbol{\theta})$, the amount of bias or simply the bias is given by

$$B_T(\boldsymbol{\theta}) = E_{\boldsymbol{\theta}}(T) - \tau(\boldsymbol{\theta}), \quad \boldsymbol{\theta} \in \Theta.$$
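The defining property $E_{\boldsymbol{\theta}}(T) = \tau(\boldsymbol{\theta})$ is easy to probe numerically. The following is a minimal simulation sketch (ours, not from the text): it contrasts the sample variance with divisor $n - 1$, which is unbiased for $\sigma^2$, against the divisor-$n$ version, whose bias works out to $-\sigma^2/n$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 5.0, 2.0, 4, 200_000

# Draw `reps` independent samples, each of size n, from N(mu, sigma^2).
X = rng.normal(mu, sigma, size=(reps, n))

# Divisor (n - 1): the usual sample variance, unbiased for sigma^2.
s2_unbiased = X.var(axis=1, ddof=1)
# Divisor n: biased, with expectation ((n - 1)/n) * sigma^2.
s2_biased = X.var(axis=1, ddof=0)

print("mean of divisor-(n-1) variance:", s2_unbiased.mean())  # ~ 4.0 = sigma^2
print("mean of divisor-n variance:   ", s2_biased.mean())     # ~ 3.0
print("estimated bias:", s2_biased.mean() - sigma**2)         # ~ -1.0 = -sigma^2/n
```

The long-run averages estimate $E_{\boldsymbol{\theta}}(T)$, so the first estimator's bias hovers near zero while the second settles near $-\sigma^2/n$.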
Gauss (1821) originally introduced the concept of an unbiased estimator in the context of his theory of least squares. Intuitively speaking, an unbiased estimator of $\tau(\boldsymbol{\theta})$ hits its target $\tau(\boldsymbol{\theta})$ on the average, and the corresponding bias is then exactly zero for all $\boldsymbol{\theta} \in \Theta$. In statistical analysis, the unbiasedness property of an estimator is considered very attractive. The class of unbiased estimators can be fairly rich. Thus, when comparing rival estimators, initially we restrict ourselves to consider only the unbiased ones. Then we choose the estimator from this bunch which appears to be the “best” according to an appropriate criterion.
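As a simple worked instance of “hitting the target on the average” (a standard calculation, not tied to any particular model in the text): if $X_1, \ldots, X_n$ are iid with mean $\mu$, then

$$E_\mu(\bar{X}) = \frac{1}{n}\sum_{i=1}^{n} E_\mu(X_i) = \mu, \quad \text{so that } B_{\bar{X}}(\mu) = 0 \text{ for all } \mu,$$

that is, the sample mean is unbiased for $\mu$ regardless of the underlying distribution, so long as the mean exists.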
In order to clarify the ideas, let us consider a specific population or universe which is described as $N(\mu, \sigma^2)$ where $\mu \in \Re$ is unknown but $\sigma \in \Re^{+}$ is assumed known. Here $\mathcal{X} = \Re$ and $\Theta = \Re$. The problem is one of estimating the population mean $\mu$. From this universe, we observe the iid random variables $X_1, \ldots, X_4$. Let us consider several rival estimators of $\mu$.
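To see concretely how rival unbiased estimators can differ, here is a Monte Carlo sketch under the setup above; the four candidate estimators are our own illustrative choices (they need not match the ones the text goes on to list). Each is a weighted average with weights summing to one, hence unbiased for $\mu$, yet their variances differ markedly.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, reps = 5.0, 2.0, 200_000

# reps samples of size n = 4 from N(mu, sigma^2), sigma known.
X = rng.normal(mu, sigma, size=(reps, 4))

# Rival estimators of mu; each uses weights summing to 1, hence unbiased.
# Variance of each = sigma^2 * (sum of squared weights).
T1 = X[:, 0]                                         # first observation alone
T2 = X[:, :2].mean(axis=1)                           # mean of the first two
T3 = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]   # unequal weights
T4 = X.mean(axis=1)                                  # sample mean of all four

for name, T in [("T1", T1), ("T2", T2), ("T3", T3), ("T4", T4)]:
    print(f"{name}: average ~ {T.mean():.3f}, variance ~ {T.var():.3f}")
# Expected variances: 4.0, 2.0, 1.52, 1.0 -- all four center near mu = 5,
# but the sample mean T4 attains the smallest variance, sigma^2/4.
```

Since every candidate is unbiased, the natural tie-breaker is the variance, which is exactly where the comparison developed in this section is headed.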