Page 510 - Probability and Statistical Inference

10. Bayesian Methods

error loss function, the determination of the Bayes estimator happens to be
very simple.
   Theorem 10.4.2 In the case of the squared error loss function, the Bayes
estimate δ* ≡ δ*(t) is the mean of the posterior distribution k(θ; t), that is,

   δ*(t) = E[ϑ | T = t] = ∫Θ θ k(θ; t)dθ,                (10.4.7)

for all possible observed data t ∈ T.
   Proof In order to determine the estimator δ*(T), we need to minimize
∫Θ L(θ, δ(t))k(θ; t)dθ with respect to δ, for every fixed t ∈ T. Let us now
rewrite

   ∫Θ (θ − δ)²k(θ; t)dθ = ∫Θ θ²k(θ; t)dθ − 2δ∫Θ θk(θ; t)dθ + δ²
                        = a(t) − b²(t) + (δ − b(t))²,                (10.4.8)

where we denote a(t) = ∫Θ θ²k(θ; t)dθ and b(t) = ∫Θ θk(θ; t)dθ. In (10.4.8)
we used the fact that ∫Θ k(θ; t)dθ = 1 because k(θ; t) is a probability
distribution on Θ. Now, we look at the expression a(t) − b²(t) + (δ − b(t))²
as a function of δ ≡ δ(t) and wish to minimize this with respect to δ. One can
accomplish this task easily. We leave out the details as an exercise. !
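The content of the theorem can be checked numerically. The following sketch (not from the book; the Beta(4, 6) posterior is an arbitrary choice for the demonstration) approximates a posterior on a grid over Θ = (0, 1) and verifies that the δ minimizing the posterior expected squared error loss coincides with the posterior mean:

```python
# Midpoint grid approximation of a posterior on Theta = (0, 1).
N = 500
theta = [(i + 0.5) / N for i in range(N)]
w = [t ** 3 * (1 - t) ** 5 for t in theta]        # Beta(4, 6) density, unnormalized
total = sum(w)
post = [x / total for x in w]                     # normalized probability vector

def expected_sq_loss(d):
    # posterior expected loss under L(theta, d) = (theta - d)^2
    return sum((t - d) ** 2 * p for t, p in zip(theta, post))

candidates = [i / 2000 for i in range(2001)]
d_star = min(candidates, key=expected_sq_loss)    # numerical Bayes estimate
posterior_mean = sum(t * p for t, p in zip(theta, post))

print(d_star, posterior_mean)                     # both close to 0.4
```

Completing the omitted detail in the proof is just as quick by hand: (10.4.8) is a quadratic in δ whose only δ-dependent term is (δ − b(t))² ≥ 0, so it is minimized uniquely at δ = b(t), the posterior mean.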
   Example 10.4.1 (Example 10.3.1 Continued) Suppose that we have the
random variables X₁, ..., Xₙ which are iid Bernoulli(θ) given that ϑ = θ, where
ϑ is the unknown probability of success, 0 < ϑ < 1. Given that ϑ = θ, the
statistic T = X₁ + ... + Xₙ is minimal sufficient for θ. Suppose that the prior
distribution of ϑ is Beta(α, β) where α(> 0) and β(> 0) are known numbers.
From (10.3.2), recall that the posterior distribution of ϑ given the data T = t
happens to be Beta(t + α, n − t + β) for t ∈ T = {0, 1, ..., n}. In view of
Theorem 10.4.2, under the squared error loss function, the Bayes estimator of
ϑ would be the mean of the posterior distribution, namely, the mean of the
Beta(t + α, n − t + β) distribution. One can check easily that the mean of this
beta distribution simplifies to (t + α)/(α + β + n) so that we can write:

   δ*(t) = (t + α)/(α + β + n).                (10.4.9)
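The Bayes estimate in this Beta–Bernoulli setting can be computed in a few lines (a sketch; the trial counts and prior below are made-up illustrative numbers):

```python
from fractions import Fraction

def bayes_estimate(t, n, alpha, beta):
    # mean of the Beta(t + alpha, n - t + beta) posterior,
    # which equals (t + alpha)/(alpha + beta + n)
    a = Fraction(t + alpha)
    b = Fraction(n - t + beta)
    return a / (a + b)

# e.g. n = 10 Bernoulli trials with t = 7 successes and a Beta(2, 2) prior:
print(bayes_estimate(7, 10, 2, 2))    # 9/14, i.e. (7 + 2)/(2 + 2 + 10)
```

With no data at all (t = n = 0), the estimate reduces to the prior mean α/(α + β), as it should.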
Now, we can rewrite the Bayes estimator as follows:

   δ*(t) = {n/(α + β + n)}(t/n) + {(α + β)/(α + β + n)}{α/(α + β)}.                (10.4.10)
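This rewriting exhibits the Bayes estimator as a weighted average of the sample proportion t/n and the prior mean α/(α + β), with the weight on the data tending to one as n grows. A small sketch (made-up numbers) makes the structure explicit:

```python
def bayes_weighted(t, n, alpha, beta):
    # weight on the sample mean t/n and weight on the prior mean alpha/(alpha + beta);
    # the two weights sum to one
    w_data = n / (alpha + beta + n)
    w_prior = (alpha + beta) / (alpha + beta + n)
    return w_data * (t / n) + w_prior * (alpha / (alpha + beta))

# same value as (t + alpha)/(alpha + beta + n):
print(bayes_weighted(7, 10, 2, 2))        # about 0.642857 = 9/14
# with more data at the same success rate, the estimate moves toward t/n = 0.7:
print(bayes_weighted(700, 1000, 2, 2))    # about 0.699203 = 702/1004
```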
From the likelihood function, that is, given ϑ = θ, the maximum likelihood
estimate or the UMVUE of θ would be T/n, whereas the mean of the prior