Page 372 - Probability and Statistical Inference

7. Point Estimation  349

wish to find the MLE for θ. The likelihood function is

   L(θ) = θ^(-n) I(x_{n:n} ≤ θ), 0 < θ < ∞,

which is maximized at an end point. By drawing a simple picture of L(θ) it
should be apparent that L(θ) is maximized when θ = x_{n:n}. That is, the MLE
of θ is θ̂ = X_{n:n}, the largest order statistic. !
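The endpoint argument can be checked numerically. The sketch below (the sample values are hypothetical, not from the text) evaluates L(θ) = θ^(-n) for θ ≥ x_{n:n}, and 0 otherwise, over a grid and confirms that the grid maximizer sits at the largest order statistic.

```python
# Numerical sketch: for an assumed Uniform(0, theta) sample, the likelihood
# L(theta) = theta**(-n) on theta >= x_{n:n} (and 0 below it) is strictly
# decreasing, so its maximum is attained at the endpoint theta = x_{n:n}.
def likelihood(theta, xs):
    n = len(xs)
    return theta ** (-n) if theta >= max(xs) else 0.0

xs = [0.8, 2.1, 1.4, 3.3, 0.5]                 # hypothetical observed sample
grid = [max(xs) + 0.01 * k for k in range(500)]
best = max(grid, key=lambda t: likelihood(t, xs))
print(best, max(xs))                           # grid maximizer equals x_{n:n}
```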
   Example 7.2.10 Suppose that X_1, ..., X_n are iid Poisson(λ) where 0 < λ <
∞ is the unknown parameter. Here 𝒳 = {0, 1, 2, ...} and Θ = ℜ⁺. We wish to
find the MLE for λ. The likelihood function is

   L(λ) = ∏_{i=1}^{n} e^(-λ) λ^(x_i) / x_i! = e^(-nλ) λ^(Σx_i) (∏_{i=1}^{n} x_i!)^(-1),   (7.2.5)

so that one has

   log L(λ) = -nλ + (Σ_{i=1}^{n} x_i) log λ - Σ_{i=1}^{n} log(x_i!).



Now L(λ) is to be maximized with respect to λ and that is equivalent to
maximizing log L(λ) with respect to λ. We have d/dλ log L(λ) = -n +
λ^(-1) Σ_{i=1}^{n} x_i, which when equated to zero provides the solution
λ = x̄. If x̄ > 0, then d/dλ log L(λ) = 0 when λ = x̄. In this situation it is
easy to verify that d²/dλ² log L(λ) = -λ^(-2) Σ_{i=1}^{n} x_i is
negative. Hence the MLE is λ̂ = X̄ whenever x̄ > 0. But there is a fine point
here. If x̄ = 0, which is equivalent to saying that x_1 = ... = x_n = 0, the
likelihood function in (7.2.5) does not have a global maximum. In this case
L(λ) = e^(-nλ), which becomes larger as λ(> 0) is allowed to become smaller. In
other words, if x̄ = 0, an MLE for λ can not be found. Observe, however,
that P_λ{X_i = 0 for all i = 1, ..., n} = exp(-nλ), which will be
negligible for “large” values of nλ.
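As a quick numerical illustration (the sample counts below are hypothetical, not from the text), one can maximize log L(λ) over a grid, dropping the constant term -Σ log(x_i!), and see the grid maximizer agree with x̄:

```python
# Numerical check: for a hypothetical Poisson sample, the log-likelihood
# -n*lam + sum(x)*log(lam), ignoring the lam-free term -sum(log x_i!),
# is maximized at lam = x-bar.
import math

xs = [2, 0, 3, 1, 4]                        # hypothetical observed counts
n, s = len(xs), sum(xs)
xbar = s / n

def loglik(lam):
    return -n * lam + s * math.log(lam)

grid = [0.01 * k for k in range(1, 2000)]   # lam on (0, 20)
best = max(grid, key=loglik)
print(round(best, 2), xbar)                 # grid maximizer agrees with x-bar
```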

                  Table 7.2.1. Values of exp(-nλ)

          nλ:          3       4       5       6       7
          exp(-nλ):  .0498   .0183   .0067   .0025   .0009


   By looking at these entries, one may form some subjective opinion about
what values of nλ should perhaps be considered “large” in a situation like
this. !
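The entries of Table 7.2.1 are simply exp(-nλ) rounded to four places; a two-line check reproduces them:

```python
# Reproduce Table 7.2.1: exp(-n*lam) for n*lam = 3, 4, 5, 6, 7.
import math
for nlam in range(3, 8):
    print(nlam, round(math.exp(-nlam), 4))
```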
   In the Binomial(n, p) situation where 0 < p < 1 is the unknown parameter,
the problem of deriving the MLE of p would hit a snag similar to what we
found in Example 7.2.10 when x/n is 0 or 1. Otherwise the MLE of p
would be p̂ = x/n. If the parameter space is replaced by 0 ≤ p ≤ 1, then of
course the MLE of p would be p̂ = x/n. We leave this as Exercise 7.2.8.
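A grid-based sketch (the values of n and x below are made up for illustration) shows both the regular case, where the maximizer of the log-likelihood x log p + (n - x) log(1 - p) over (0, 1) is x/n, and the x = 0 snag, where the likelihood (1 - p)^n only grows as p shrinks toward 0:

```python
# Sketch with hypothetical n and x: maximize the Binomial log-likelihood
# x*log(p) + (n - x)*log(1 - p) over a grid in the open interval (0, 1).
import math

def loglik(p, n, x):
    return x * math.log(p) + (n - x) * math.log(1 - p)

n, x = 10, 4
grid = [k / 1000 for k in range(1, 1000)]
best = max(grid, key=lambda p: loglik(p, n, x))
print(best, x / n)                          # grid maximizer agrees with x/n

# With x = 0 the log-likelihood n*log(1 - p) keeps increasing as p -> 0,
# so no maximizer exists on the open interval 0 < p < 1.
ps = [0.5, 0.1, 0.01, 0.001]
print([round(loglik(p, n, 0), 3) for p in ps])
```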