
Let us now go back for a moment to (6.4.10) for the definition of the information matrix I_X(θ). Now suppose that Y = h(X) where the function h(.) : χ → Y is one-to-one. It should be intuitive enough to guess that I_X(θ) = I_Y(θ). For the record, we now state this result formally.
Theorem 6.4.4 Let X be an observable random variable with its pmf or pdf f(x; θ) and the information matrix I_X(θ). Suppose that Y = h(X) where the function h(.) : χ → Y is one-to-one. Then, the information matrix about the unknown parameter θ contained in Y is the same as that in X, that is,

   I_Y(θ) = I_X(θ).
Proof In order to keep the deliberations simple, we consider only a real-valued continuous random variable X and a real-valued unknown parameter θ. Recall that we can write

   I_X(θ) = E_θ[{∂/∂θ log f(X; θ)}²].

Note that x = h⁻¹(y) is well-defined since h(.) is assumed one-to-one. Now, using the transformation techniques from (4.4.1), observe that the pdf of Y can be expressed as

   g(y; θ) = f(h⁻¹(y); θ) |d h⁻¹(y)/dy|.
Thus, one immediately writes

   I_Y(θ) = E_θ[{∂/∂θ log g(Y; θ)}²]
          = E_θ[{∂/∂θ (log f(h⁻¹(Y); θ) + log |d h⁻¹(y)/dy|_{y=Y})}²]
          = E_θ[{∂/∂θ log f(h⁻¹(Y); θ)}²]
          = E_θ[{∂/∂θ log f(X; θ)}²]
          = I_X(θ),

since the Jacobian term log |d h⁻¹(y)/dy| does not involve θ and h⁻¹(Y) = X with probability one.
The vector valued case and the case of discrete X can be disposed of with minor modifications. These are left out as Exercise 6.4.12. ■
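As a quick sanity check of Theorem 6.4.4, the following sketch (not from the text) estimates I_X(θ) and I_Y(θ) by Monte Carlo for one concrete, assumed setup: X distributed as Exponential(θ) with pdf f(x; θ) = θe^{-θx}, the one-to-one map Y = h(X) = log X, and the illustrative choices θ = 2.5 and n = 200,000 simulated values. Both estimates should land near the exact value 1/θ².

```python
# A sanity-check sketch for Theorem 6.4.4 (illustrative setup, not from the text):
# X ~ Exponential(theta) with pdf f(x; theta) = theta * exp(-theta * x), x > 0,
# and the one-to-one transformation Y = h(X) = log X.
import numpy as np

rng = np.random.default_rng(0)
theta = 2.5          # assumed parameter value, chosen only for illustration
n = 200_000          # Monte Carlo sample size
eps = 1e-5           # step for a numerical derivative in theta

x = rng.exponential(scale=1 / theta, size=n)   # draws from f(x; theta)
y = np.log(x)                                  # the transformed data Y = h(X)

def log_f(x, th):
    # log pdf of X
    return np.log(th) - th * x

def log_g(y, th):
    # log pdf of Y = log X via the transformation technique (4.4.1):
    # g(y; th) = f(exp(y); th) * |d exp(y)/dy| = f(exp(y); th) * exp(y)
    return log_f(np.exp(y), th) + y

# Fisher information = expected squared score; estimate the score by a
# central difference in theta and average its square over the simulated data.
score_x = (log_f(x, theta + eps) - log_f(x, theta - eps)) / (2 * eps)
score_y = (log_g(y, theta + eps) - log_g(y, theta - eps)) / (2 * eps)

print(np.mean(score_x**2))   # approx 0.16
print(np.mean(score_y**2))   # approx 0.16
print(1 / theta**2)          # exact I_X(theta) = I_Y(theta) = 0.16
```

The two scores agree term by term up to rounding, which mirrors the proof: the Jacobian factor e^y contributes nothing once the log pdf is differentiated with respect to θ.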



6.5 Ancillarity



The concept called ancillarity of a statistic is perhaps the furthest away from sufficiency. A sufficient statistic T preserves all the information about θ contained in the data X. In contrast, an ancillary statistic T by itself provides no information about the unknown parameter θ. We are not implying that an ancillary statistic is necessarily bad or useless. Individually, an ancillary statistic would not provide any information about θ, but