0 < σ < ∞. Here, we write θ = µ, χ = ℜ and Θ = ℜ. It is easy to verify that the statistic T = X̄, the sample mean, is minimal sufficient for θ. Now consider the statistic U = X_1X_2 + X_3 and suppose that the question is whether U is sufficient for θ.
   Assume that U is sufficient for θ. But then T must be a function of U, by Definition 6.3.1 of minimal sufficiency. That is, knowing an observed value of U, we must be able to come up with a unique value of T. One can proceed in the spirit of the earlier Example 6.3.2 and easily arrive at a contradiction. So, U cannot be sufficient for θ. !
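As an illustration (a minimal numerical sketch, not part of the text, assuming n = 3 and the reconstructed form U = X_1X_2 + X_3), the two hypothetical samples below share the same observed value of U yet give different values of T = X̄, so no rule can recover T from U alone:

```python
# Sketch only: two samples of size three with equal U = X1*X2 + X3 but
# different sample means.  The minimal sufficient statistic T = X-bar would
# have to be a function of any sufficient statistic, so U cannot be sufficient.

def U(x):
    x1, x2, x3 = x
    return x1 * x2 + x3

def T(x):
    return sum(x) / len(x)      # sample mean

x = (1.0, 2.0, 3.0)             # U = 1*2 + 3 = 5, T = 2.0
y = (0.0, 7.0, 5.0)             # U = 0*7 + 5 = 5, T = 4.0

assert U(x) == U(y)             # same observed U ...
assert T(x) != T(y)             # ... but different T
print(U(x), T(x), T(y))         # 5.0 2.0 4.0
```

Knowing only that U = 5, one cannot decide between T = 2.0 and T = 4.0, which is exactly the contradiction with Definition 6.3.1.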
   Example 6.3.6 (Example 6.2.13 Continued) Suppose that X_1, ..., X_n are iid Uniform(0, θ), where θ (> 0) is unknown. Here, χ = (0, θ) and Θ = ℜ⁺. We wish to find a minimal sufficient statistic for θ. For two arbitrary data points x = (x_1, ..., x_n) and y = (y_1, ..., y_n), both from χ, we have:

   f(x; θ)/f(y; θ) = {θ^{-n} I(0 < x_{n:n} < θ)} / {θ^{-n} I(0 < y_{n:n} < θ)} = I(0 < x_{n:n} < θ)/I(0 < y_{n:n} < θ).

   Let us denote a(θ) = I(0 < x_{n:n} < θ)/I(0 < y_{n:n} < θ). Now, the question is this: Does the term a(θ) become free from θ if and only if x_{n:n} = y_{n:n}?
   If we assume that x_{n:n} = y_{n:n}, then certainly a(θ) does not involve θ. It remains to show that when a(θ) does not involve θ, then we must have x_{n:n} = y_{n:n}. Let us assume that x_{n:n} ≠ y_{n:n}, and then show that a(θ) must depend on the value of θ. Now, suppose that x_{n:n} = 2, y_{n:n} = .5. Then, a(θ) = 0, 1 or 0/0 when θ = 1, 3 or .1 respectively. More generally, whenever x_{n:n} ≠ y_{n:n}, a value of θ lying between the two maxima makes exactly one of the two indicators vanish, while a value of θ exceeding both makes a(θ) = 1; so a(θ) clearly depends upon the value of θ whenever x_{n:n} ≠ y_{n:n}. We assert, therefore, that the term a(θ) becomes free from θ if and only if x_{n:n} = y_{n:n}. Hence, by the Lehmann-Scheffé theorem, we claim that T = X_{n:n}, the largest order statistic, is minimal sufficient for θ. !
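This dependence on θ is easy to check numerically. The short sketch below (illustrative only; the helper names are not from the text) evaluates a(θ) at the maxima and θ values quoted above, reporting the 0/0 case as undefined:

```python
# Sketch only: evaluate a(theta) = I(0 < x_max < theta) / I(0 < y_max < theta)
# for maxima that disagree, reproducing the values 0, 1 and 0/0 quoted above.

def indicator(condition):
    return 1.0 if condition else 0.0

def a(theta, x_max, y_max):
    num = indicator(0.0 < x_max < theta)
    den = indicator(0.0 < y_max < theta)
    return num / den if den != 0.0 else "0/0 (undefined)"

x_max, y_max = 2.0, 0.5
for theta in (1.0, 3.0, 0.1):
    print(theta, a(theta, x_max, y_max))
# theta = 1.0 -> 0.0,  theta = 3.0 -> 1.0,  theta = 0.1 -> 0/0 (undefined)
# The value of a(theta) changes with theta, so it is not free from theta.
```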
   Remark 6.3.1 Let X_1, ..., X_n be iid with the pmf or pdf f(x; θ), x ∈ χ ⊆ ℜ, θ = (θ_1, ..., θ_k) ∈ Θ ⊆ ℜ^k. Suppose that a statistic T = T(X) = (T_1(X), ..., T_r(X)) is minimal sufficient for θ. In general, can we claim that r = k? The answer is no; we cannot necessarily say that r = k. Suppose that f(x; θ) corresponds to the pdf of the N(θ, θ²) random variable with the unknown parameter θ > 0 so that k = 1. The reader should verify that T = (Σ_{i=1}^{n} X_i, Σ_{i=1}^{n} X_i²) is minimal sufficient for θ so that r = 2. Here, we have r > k. On the other hand, suppose that X_1 is N(µ, σ²) where µ and σ² are both unknown parameters. In this case one has θ = (µ, σ²) so that k = 2. But, T = X_1 is minimal sufficient so that r = 1. Here, we have r < k. In many situations, of course, one would find that r = k. But, the point is that there is no guarantee that r would necessarily be the same as k.
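To see where the two-dimensional statistic comes from, here is a sketch (not from the text) of the likelihood ratio for X_1, ..., X_n iid N(θ, θ²), written out in LaTeX:

```latex
\frac{f(\mathbf{x};\theta)}{f(\mathbf{y};\theta)}
  = \frac{(2\pi\theta^{2})^{-n/2}
          \exp\!\bigl\{-\tfrac{1}{2\theta^{2}}\sum_{i=1}^{n}(x_i-\theta)^{2}\bigr\}}
         {(2\pi\theta^{2})^{-n/2}
          \exp\!\bigl\{-\tfrac{1}{2\theta^{2}}\sum_{i=1}^{n}(y_i-\theta)^{2}\bigr\}}
  = \exp\!\Bigl\{-\tfrac{1}{2\theta^{2}}\Bigl(\textstyle\sum_i x_i^{2}-\sum_i y_i^{2}\Bigr)
                 +\tfrac{1}{\theta}\Bigl(\textstyle\sum_i x_i-\sum_i y_i\Bigr)\Bigr\}.
```

Since 1/θ and 1/θ² are linearly independent functions of θ on (0, ∞), this ratio stays free from θ for every θ > 0 if and only if Σx_i = Σy_i and Σx_i² = Σy_i², which is exactly the criterion delivering T = (ΣX_i, ΣX_i²) as minimal sufficient.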
   The following theorem provides a useful tool for finding minimal sufficient