
Consider $U = \sum_{i=1}^{n} X_i$, the sufficient statistic for $\lambda$. The domain space for $U$ is $\mathcal{U} = \{0, 1, 2, \ldots\}$. Now, for all $u \in \mathcal{U}$, conditionally given $U = u$, the statistic $T$ can take one of the possible values 0 or 1. Thus, for $u \in \mathcal{U}$, we can write

$$
E_\lambda[T \mid U = u] = P_\lambda\{T = 1 \mid U = u\}
= \frac{P_\lambda\{X_1 = 0 \ \text{and} \ \sum_{i=2}^{n} X_i = u\}}{P_\lambda\{\sum_{i=1}^{n} X_i = u\}}. \tag{7.4.8}
$$
Note that $\sum_{i=1}^{n} X_i$ is Poisson$(n\lambda)$, $\sum_{i=2}^{n} X_i$ is Poisson$((n-1)\lambda)$, whereas $X_1$ and $\sum_{i=2}^{n} X_i$ are independently distributed. Thus, from (7.4.8) we rewrite $E_\lambda[T \mid U = u]$ as

$$
E_\lambda[T \mid U = u]
= \frac{P_\lambda\{X_1 = 0\}\, P_\lambda\{\sum_{i=2}^{n} X_i = u\}}{P_\lambda\{\sum_{i=1}^{n} X_i = u\}}
= \frac{e^{-\lambda}\, e^{-(n-1)\lambda}\{(n-1)\lambda\}^{u}/u!}{e^{-n\lambda}(n\lambda)^{u}/u!}
= \left(1 - n^{-1}\right)^{u},
$$
and hence the Rao-Blackwellized version of the estimator $T$ is $W = (1 - n^{-1})^{U}$. Now we know the expression for $W$ and so we can directly evaluate $E_\lambda[W]$. We use the form of the mgf of a Poisson random variable, namely $E_\lambda[e^{sU}] = \exp\{n\lambda(e^{s} - 1)\}$, and then replace $s$ with $\log(1 - n^{-1})$ to write

$$
E_\lambda[W] = E_\lambda\big[(1 - n^{-1})^{U}\big] = E_\lambda\big[e^{U\log(1 - n^{-1})}\big]
= \exp\{n\lambda(1 - n^{-1} - 1)\} = e^{-\lambda}.
$$
In other words, $W$ is an unbiased estimator of $e^{-\lambda}$, but this should not be surprising. Part (i) of the Rao-Blackwell Theorem leads to the same conclusion. Was there any way to guess the form of the estimator $W$ before actually going through Rao-Blackwellization? How should one estimate $e^{-2\lambda}$ or $e^{-3\lambda}$? One should attack these problems via Rao-Blackwellization or the mgf as we just did. We leave these and other related problems as Exercise 7.4.3. !
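As an illustrative aside (not part of the text), the following minimal Monte Carlo sketch in Python checks the two facts above numerically: both $T = I(X_1 = 0)$ and $W = (1 - n^{-1})^{U}$ average out near $e^{-\lambda}$, while $W$ has the smaller variance, in line with part (i) of the Rao-Blackwell Theorem. The values of $n$, $\lambda$, and the number of replications are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed only for reproducibility
n, lam, reps = 10, 1.5, 200_000         # illustrative n, lambda, and replication count

x = rng.poisson(lam, size=(reps, n))    # each row is one sample X_1, ..., X_n
T = (x[:, 0] == 0).astype(float)        # initial unbiased estimator T = I(X_1 = 0)
U = x.sum(axis=1)                       # sufficient statistic U = X_1 + ... + X_n
W = (1 - 1 / n) ** U                    # Rao-Blackwellized estimator W = (1 - 1/n)^U

print("target exp(-lambda) :", np.exp(-lam))
print("mean of T, mean of W:", T.mean(), W.mean())   # both close to exp(-lambda)
print("var of T,  var of W :", T.var(), W.var())     # Var(W) is the smaller one
```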
Example 7.4.6 Suppose that $X_1, \ldots, X_n$ are iid $N(\mu, \sigma^2)$ where $\mu$ is unknown but $\sigma^2$ is known, with $-\infty < \mu < \infty$, $0 < \sigma < \infty$, and $\mathcal{X} = \Re$. We wish to estimate $T(\mu) = \mu$ unbiasedly. Consider $T = X_1$, which is an unbiased estimator of $\mu$. Consider $U = \bar{X}$, the sufficient statistic for $\mu$. The domain space for $U$ is $\mathcal{U} = \Re$. Now, for $u \in \mathcal{U}$, conditionally given $U = u$, the distribution of the statistic $T$ is $N(u, \sigma^2(1 - n^{-1}))$. Refer to Section 3.6 on the bivariate normal distribution as needed. Now, $E_\mu[T \mid U = u] = u$. That is, the Rao-Blackwellized version of the initial unbiased estimator $T$ turns out to be $\bar{X}$. !
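Again as an illustrative aside (not from the text), a short Python simulation confirms the conclusion: $T = X_1$ and $W = \bar{X}$ are both unbiased for $\mu$, while the Rao-Blackwellized version has variance $\sigma^2/n$ instead of $\sigma^2$. The values of $n$, $\mu$, and $\sigma$ below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)               # fixed seed only for reproducibility
n, mu, sigma, reps = 8, 2.0, 3.0, 200_000    # illustrative choices

x = rng.normal(mu, sigma, size=(reps, n))    # each row is one sample X_1, ..., X_n
T = x[:, 0]                                  # initial unbiased estimator T = X_1
W = x.mean(axis=1)                           # Rao-Blackwellized version W = X-bar

print("target mu           :", mu)
print("mean of T, mean of W:", T.mean(), W.mean())   # both close to mu
print("var of T,  var of W :", T.var(), W.var())     # sigma^2 versus sigma^2 / n
```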