Page 328 - Probability and Statistical Inference

6. Sufficiency, Completeness, and Ancillarity

we only discuss the case of a two-dimensional parameter. Suppose that X is an observable real valued random variable with the pmf or pdf f(x; θ), where the parameter θ = (θ₁, θ₂) ∈ Θ, an open rectangle ⊆ ℜ², and the χ space does not depend upon θ. We assume throughout that ∂/∂θᵢ f(x; θ) exists for i = 1, 2, for all x ∈ χ, θ ∈ Θ, and that we can also interchange the partial derivatives (with respect to θ₁, θ₂) and the integral (with respect to x).
   Definition 6.4.2 Let us extend the earlier notation as follows. Denote

      Iᵢⱼ(θ) = E_θ[{∂/∂θᵢ log f(X; θ)}{∂/∂θⱼ log f(X; θ)}], for i, j = 1, 2.

The Fisher information matrix or simply the information matrix about θ is given by

      I_X(θ) = ( I₁₁(θ)   I₁₂(θ) )
               ( I₂₁(θ)   I₂₂(θ) ), a 2 × 2 matrix.

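As an illustration (not taken from the text), the entries Iᵢⱼ(θ) of Definition 6.4.2 can be evaluated symbolically. The sketch below uses sympy with the N(μ, σ²) pdf and θ = (μ, σ); this particular model is our choice of example, not the book's.

```python
# Symbolic computation of the 2x2 Fisher information matrix for the
# N(mu, sigma^2) pdf with theta = (mu, sigma). Illustrative example only.
import sympy as sp

x = sp.Symbol('x', real=True)
mu = sp.Symbol('mu', real=True)
sigma = sp.Symbol('sigma', positive=True)

# pmf/pdf f(x; theta): here the normal density
f = sp.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sp.sqrt(2 * sp.pi))
logf = sp.log(f)
params = (mu, sigma)

I = sp.zeros(2, 2)
for i in range(2):
    for j in range(2):
        # I_ij(theta) = E_theta[(d/d theta_i log f)(d/d theta_j log f)]
        integrand = sp.diff(logf, params[i]) * sp.diff(logf, params[j]) * f
        I[i, j] = sp.simplify(sp.integrate(integrand, (x, -sp.oo, sp.oo)))

# For this model the matrix is diagonal: 1/sigma^2 and 2/sigma^2 on the
# diagonal, 0 off-diagonal.
print(I)
```

The off-diagonal entries vanish here, i.e. the information matrix for (μ, σ) in the normal model is diagonal; that is a feature of this example, not a general fact.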
   Remark 6.4.2 In situations where ∂²/(∂θᵢ∂θⱼ) f(x; θ) exists for all x ∈ χ, for all i, j = 1, 2, and for all θ ∈ Θ, we can alternatively write

      Iᵢⱼ(θ) = -E_θ[∂²/(∂θᵢ∂θⱼ) log f(X; θ)], for i, j = 1, 2,

and express the information matrix I_X(θ) accordingly. We have left this as an exercise.
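The equivalence of the two expressions for Iᵢⱼ(θ) can be checked symbolically in a concrete model. The sketch below, again using sympy and our own choice of the N(μ, σ²) pdf with θ = (μ, σ), evaluates the second-derivative form; it yields the same matrix as the first-derivative form.

```python
# Alternative form of the Fisher information matrix:
#   I_ij(theta) = -E_theta[ d^2/(d theta_i d theta_j) log f(X; theta) ],
# evaluated for the N(mu, sigma^2) pdf with theta = (mu, sigma).
import sympy as sp

x = sp.Symbol('x', real=True)
mu = sp.Symbol('mu', real=True)
sigma = sp.Symbol('sigma', positive=True)

f = sp.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sp.sqrt(2 * sp.pi))
logf = sp.log(f)
params = (mu, sigma)

I = sp.zeros(2, 2)
for i in range(2):
    for j in range(2):
        # negative expected second partial derivative of log f
        second = sp.diff(logf, params[i], params[j])
        I[i, j] = sp.simplify(-sp.integrate(second * f, (x, -sp.oo, sp.oo)))

# Matches the outer-product form: diagonal entries 1/sigma^2 and 2/sigma^2.
print(I)
```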











   Having a statistic T = T(X₁, ..., Xₙ), however, the associated information matrix about θ will simply be calculated as I_T(θ), where one would replace the original pmf or pdf f(x; θ) by that of T, namely g(t; θ), t ∈ 𝒯. When we compare two statistics T₁ and T₂ in terms of their information content about a single unknown parameter θ, we simply look at the two one-dimensional quantities I_T₁(θ) and I_T₂(θ), and compare these two numbers. The statistic associated with the larger information content would be more appealing. But when θ is two-dimensional, in order to compare the two statistics T₁ and T₂, we have to consider their individual two-dimensional information matrices I_T₁(θ) and I_T₂(θ). It would be tempting to say that T₁ is more informative about θ than T₂ provided that