Page 331 - Probability and Statistical Inference
6. Sufficiency, Completeness, and Ancillarity
Since $(n-1)S^{2}/\sigma^{2} \sim \chi^{2}_{n-1}$, the pdf of $W = S^{2}$ is
$$g(w;\sigma^{2}) = \frac{\{(n-1)/(2\sigma^{2})\}^{(n-1)/2}}{\Gamma\left(\tfrac{n-1}{2}\right)}\, w^{(n-3)/2} \exp\left\{-\frac{(n-1)w}{2\sigma^{2}}\right\}, \quad w > 0,$$
so that one has
$$\frac{\partial}{\partial\sigma^{2}} \log g(w;\sigma^{2}) = -\frac{n-1}{2\sigma^{2}} + \frac{(n-1)w}{2\sigma^{4}}.$$
Hence we obtain
$$E_{\sigma^{2}}\left[\left\{\frac{\partial}{\partial\sigma^{2}} \log g(W;\sigma^{2})\right\}^{2}\right] = \frac{(n-1)^{2}}{4\sigma^{8}}\, V[S^{2}] = \frac{n-1}{2\sigma^{4}}. \qquad (6.4.21)$$
Obviously, $I_{12}(\theta) = I_{21}(\theta) = 0$ corresponding to $S^{2}$. Utilizing (6.4.21), we obtain the information matrix corresponding to the statistic $S^{2}$, namely,
$$I_{S^{2}}(\theta) = \begin{pmatrix} 0 & 0 \\ 0 & \dfrac{n-1}{2\sigma^{4}} \end{pmatrix}. \qquad (6.4.22)$$
Comparing (6.4.17) and (6.4.22), we observe that
$$I_{X}(\theta) - I_{S^{2}}(\theta) = \begin{pmatrix} n/\sigma^{2} & 0 \\ 0 & 1/(2\sigma^{4}) \end{pmatrix}, \qquad (6.4.23)$$
which is a positive semidefinite matrix. That is, if we summarize the whole data X only through $S^{2}$, then there is certainly some loss of information when $\mu$ and $\sigma^{2}$ are both assumed unknown. !
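The information $(n-1)/(2\sigma^{4})$ in (6.4.21) can be checked numerically: since the Fisher information equals the variance of the score, simulating $S^{2}$ repeatedly and taking the sample variance of $\partial \log g(S^{2};\sigma^{2})/\partial\sigma^{2}$ should recover it. The following is a minimal Monte Carlo sketch; the sample size, true variance, and replication count are illustrative choices, not values from the text.

```python
import random
import math

random.seed(7)

n = 10            # sample size per replication (illustrative)
sigma2 = 2.0      # true value of sigma^2 (illustrative)
reps = 100_000    # Monte Carlo replications

scores = []
for _ in range(reps):
    xs = [random.gauss(0.0, math.sqrt(sigma2)) for _ in range(n)]
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    # score of S^2 with respect to sigma^2, evaluated at the true sigma^2,
    # as in the derivative of log g(w; sigma^2) above:
    score = -(n - 1) / (2 * sigma2) + (n - 1) * s2 / (2 * sigma2 ** 2)
    scores.append(score)

mean = sum(scores) / reps
# sample variance of the score = Monte Carlo estimate of the information
info_hat = sum((s - mean) ** 2 for s in scores) / (reps - 1)
info_theory = (n - 1) / (2 * sigma2 ** 2)   # (6.4.21): (n-1)/(2 sigma^4)
print(info_hat, info_theory)  # the two values should agree closely
```

With these settings the estimate typically lands within about one percent of $(n-1)/(2\sigma^{4}) = 1.125$, and the score averages to roughly zero, as it must at the true parameter value.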
Example 6.4.10 (Examples 6.4.8-6.4.9 Continued) Individually, whether we consider the statistic $\bar{X}$ or $S^{2}$, both lose some information in comparison with $I_{X}(\theta)$, the information contained in the whole data X. This is clear from (6.4.20) and (6.4.23). But recall that $\bar{X}$ and $S^{2}$ are independently distributed, and hence we note that
$$I_{\bar{X}}(\theta) + I_{S^{2}}(\theta) = \begin{pmatrix} n/\sigma^{2} & 0 \\ 0 & 1/(2\sigma^{4}) \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & (n-1)/(2\sigma^{4}) \end{pmatrix} = I_{X}(\theta).$$
That is, the lost information when we consider only $\bar{X}$ or $S^{2}$ is picked up by the other statistic. !
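The matrix identity claimed in this example can be verified by exact arithmetic: adding the two information matrices entry by entry reproduces the full-data matrix $\mathrm{diag}(n/\sigma^{2},\, n/(2\sigma^{4}))$. A minimal sketch, using rational arithmetic so the check is exact (the values of $n$ and $\sigma^{2}$ are illustrative assumptions):

```python
from fractions import Fraction as F

n = 10       # illustrative sample size
s2 = F(2)    # illustrative sigma^2, kept rational for an exact check

# Fisher information matrices with theta = (mu, sigma^2):
I_xbar = [[F(n) / s2, F(0)], [F(0), F(1) / (2 * s2 ** 2)]]       # from X-bar
I_s2   = [[F(0), F(0)], [F(0), F(n - 1) / (2 * s2 ** 2)]]        # from S^2
I_X    = [[F(n) / s2, F(0)], [F(0), F(n) / (2 * s2 ** 2)]]       # whole data

# entrywise sum of the two matrices:
total = [[I_xbar[i][j] + I_s2[i][j] for j in range(2)] for i in range(2)]
assert total == I_X  # the two statistics jointly recover all the information
```

The design point is simply that $1/(2\sigma^{4}) + (n-1)/(2\sigma^{4}) = n/(2\sigma^{4})$ in the lower-right entry, while $\bar{X}$ alone supplies the full $n/\sigma^{2}$ in the upper-left entry.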
In Example 6.4.10, we tacitly used a particular result which is fairly easy to prove. For the record, we merely state this result; its proof is left as Exercise 6.4.11.
Theorem 6.4.3 Suppose that $X_{1}, ..., X_{n}$ are iid with the common pmf or pdf given by $f(x;\theta)$. We denote the whole data $\mathbf{X} = (X_{1}, ..., X_{n})$. Suppose that we have two statistics $T_{1} = T_{1}(\mathbf{X})$, $T_{2} = T_{2}(\mathbf{X})$ at our disposal and $T_{1}, T_{2}$ are distributed independently. Then, the information matrix $I_{T}(\theta)$, where $T = (T_{1}, T_{2})$, is given by
$$I_{T}(\theta) = I_{T_{1}}(\theta) + I_{T_{2}}(\theta).$$