Example 6.4.8 (Example 6.4.7 Continued) Let $\bar{X}$ denote the sample mean. We know that $\bar{X}$ is distributed as $N(\mu, n^{-1}\sigma^{2})$ and its pdf is given by
$$g(\bar{x};\theta) = \frac{\sqrt{n}}{\sigma\sqrt{2\pi}}\exp\left\{-\frac{n(\bar{x}-\mu)^{2}}{2\sigma^{2}}\right\}, \quad \bar{x}\in\mathbb{R},$$
so that one has
$$\frac{\partial}{\partial\mu}\log g(\bar{x};\theta) = \frac{n(\bar{x}-\mu)}{\sigma^{2}}, \qquad \frac{\partial}{\partial\sigma^{2}}\log g(\bar{x};\theta) = -\frac{1}{2\sigma^{2}} + \frac{n(\bar{x}-\mu)^{2}}{2\sigma^{4}}.$$
Hence we have
$$I_{\bar{X},11}(\theta) = E_{\theta}\left[\left\{\frac{n(\bar{X}-\mu)}{\sigma^{2}}\right\}^{2}\right] = \frac{n}{\sigma^{2}}, \qquad I_{\bar{X},22}(\theta) = E_{\theta}\left[\left\{-\frac{1}{2\sigma^{2}} + \frac{n(\bar{X}-\mu)^{2}}{2\sigma^{4}}\right\}^{2}\right] = \frac{1}{2\sigma^{4}}.$$
We can again show that $I_{12}(\theta) = I_{21}(\theta) = 0$ corresponding to $\bar{X}$. Utilizing (6.4.18), we obtain the following information matrix corresponding to the statistic $\bar{X}$:
$$I_{\bar{X}}(\theta) = \begin{pmatrix} n/\sigma^{2} & 0 \\ 0 & 1/(2\sigma^{4}) \end{pmatrix}. \tag{6.4.19}$$
Comparing (6.4.17) and (6.4.19), we observe that
$$I_{X}(\theta) - I_{\bar{X}}(\theta) = \begin{pmatrix} 0 & 0 \\ 0 & (n-1)/(2\sigma^{4}) \end{pmatrix},$$
which is a positive semidefinite matrix. That is, if we summarize the whole data X only through $\bar{X}$, then there is some loss of information. In other words, $\bar{X}$ does not preserve all the information contained in the data X when $\mu$ and $\sigma^{2}$ are both assumed unknown. !
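As a quick symbolic sanity check of the matrix in (6.4.19), the short sketch below, which is not part of the text, recomputes $I_{\bar{X}}(\theta)$ under the parameterization $\theta = (\mu, \sigma^{2})$ used above; the symbol `v` standing for $\sigma^{2}$ and the use of sympy are merely convenient choices for carrying out the expectations.

```python
# Sketch (not from the text): symbolic check of the information matrix of the
# sample mean Xbar ~ N(mu, v/n), with v standing for sigma^2.
import sympy as sp
from sympy.stats import Normal, E

x, mu = sp.symbols('x mu', real=True)
n, v = sp.symbols('n v', positive=True)

# pdf of Xbar and the score vector (d/dmu, d/dv) of its log-pdf
g = sp.sqrt(n) / sp.sqrt(2 * sp.pi * v) * sp.exp(-n * (x - mu) ** 2 / (2 * v))
score = [sp.simplify(sp.diff(sp.log(g), t)) for t in (mu, v)]

Xbar = Normal('Xbar', mu, sp.sqrt(v / n))  # random variable used for the expectations

def info_entry(i, j):
    """Entry (i, j) of I_Xbar(theta): E_theta[score_i * score_j]."""
    return sp.simplify(E(sp.expand((score[i] * score[j]).subs(x, Xbar))))

I_xbar = sp.Matrix(2, 2, info_entry)
print(I_xbar)  # expected: Matrix([[n/v, 0], [0, 1/(2*v**2)]])
```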
Example 6.4.9 (Example 6.4.7 Continued) Suppose that we consider the sample variance, $S^{2} = (n-1)^{-1}\sum_{i=1}^{n}(X_i - \bar{X})^{2}$. We know that $Y = (n-1)S^{2}/\sigma^{2}$ is distributed as $\chi^{2}_{n-1}$ for $n \geq 2$, and so with $c = \{2^{(n-1)/2}\,\Gamma(\tfrac{1}{2}(n-1))\}^{-1}$, the pdf of $Y$ is given by
$$h(y) = c\, e^{-y/2}\, y^{(n-3)/2}, \quad 0 < y < \infty.$$
Hence with $d = (n-1)^{(n-1)/2}\, c$, the pdf of $S^{2}$ is given by
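As a numerical cross-check that is not part of the text: substituting $y = (n-1)s/\sigma^{2}$ into $h(y)$ and multiplying by $dy/ds = (n-1)/\sigma^{2}$ gives the density $d\,\sigma^{-(n-1)}\, s^{(n-3)/2}\exp\{-(n-1)s/(2\sigma^{2})\}$ for $S^{2}$; the short sketch below compares this expression with the chi-square density supplied by scipy at illustrative values of $n$ and $\sigma^{2}$.

```python
# Sketch (not from the text): numerical check that transforming the chi-square pdf of
# Y = (n-1) S^2 / sigma^2 yields d * sigma^{-(n-1)} * s^{(n-3)/2} * exp(-(n-1) s / (2 sigma^2)).
import numpy as np
from scipy.special import gamma
from scipy.stats import chi2

n, sigma2 = 8, 2.5  # illustrative values; any n >= 2 and sigma^2 > 0 will do
c = 1.0 / (2.0 ** ((n - 1) / 2) * gamma((n - 1) / 2))
d = (n - 1) ** ((n - 1) / 2) * c

def pdf_s2_direct(s):
    """Density of S^2 written with the constant d defined above."""
    return d * sigma2 ** (-(n - 1) / 2) * s ** ((n - 3) / 2) * np.exp(-(n - 1) * s / (2 * sigma2))

def pdf_s2_via_chi2(s):
    """Same density obtained from h(y) by the change of variable y = (n-1) s / sigma^2."""
    return chi2.pdf((n - 1) * s / sigma2, n - 1) * (n - 1) / sigma2

s = np.linspace(0.1, 12.0, 50)
print(np.allclose(pdf_s2_direct(s), pdf_s2_via_chi2(s)))  # expected: True
```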