since $\mathrm{E}_{\mu}[(X - \mu)^2] = V(X) = \sigma^2$. That is, as we contemplate larger and larger values of $\sigma$, the variability built into $X$ increases, and hence it seems natural that the information about the unknown parameter $\mu$ contained in the data $X$ will go down further and further. ▲
The following result quantifies the information about the unknown parameter $\theta$ contained in a random sample $X_1, \ldots, X_n$ of size $n$.
Theorem 6.4.1 Suppose that $X_1, \ldots, X_n$ are iid with the common pmf or pdf given by $f(x;\theta)$. We denote by
$$I_{X_1}(\theta) = \mathrm{E}_{\theta}\left[\left\{\frac{\partial}{\partial\theta} \log f(X_1;\theta)\right\}^2\right]$$
the information contained in the observation $X_1$. Then, the information $I_{\mathbf{X}}(\theta)$ contained in the random sample $\mathbf{X} = (X_1, \ldots, X_n)$ is given by
$$I_{\mathbf{X}}(\theta) = n\, I_{X_1}(\theta).$$
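As a quick check, consider the $N(\mu, \sigma^2)$ situation from the discussion above, with $\sigma$ known (a standard illustration, not part of the theorem itself). Here
$$\frac{\partial}{\partial\mu} \log f(x;\mu) = \frac{x - \mu}{\sigma^2}, \qquad \text{so that} \qquad I_{X_1}(\mu) = \mathrm{E}_{\mu}\left[\frac{(X_1 - \mu)^2}{\sigma^4}\right] = \frac{1}{\sigma^2},$$
and the theorem then gives $I_{\mathbf{X}}(\mu) = n/\sigma^2$: the information grows linearly in the sample size $n$ and, as noted earlier, shrinks as $\sigma$ increases.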
Proof Denote the observed data $\mathbf{x} = (x_1, \ldots, x_n)$ and rewrite the likelihood function from (6.2.4) as
$$L(\theta) = \prod_{i=1}^{n} f(x_i;\theta).$$
Hence we have $\frac{\partial}{\partial\theta} \log L(\theta) = \sum_{i=1}^{n} \frac{\partial}{\partial\theta} \log f(x_i;\theta)$. Now, utilizing (6.4.1), one can write down the information contained in the data $\mathbf{X}$ as follows:
$$I_{\mathbf{X}}(\theta) = \mathrm{E}_{\theta}\left[\left\{\sum_{i=1}^{n} \frac{\partial}{\partial\theta} \log f(X_i;\theta)\right\}^2\right] = \sum_{i=1}^{n} \mathrm{E}_{\theta}\left[\left\{\frac{\partial}{\partial\theta} \log f(X_i;\theta)\right\}^2\right] + \mathop{\sum\sum}_{1 \le i \neq j \le n} \mathrm{E}_{\theta}\left[\frac{\partial}{\partial\theta} \log f(X_i;\theta) \cdot \frac{\partial}{\partial\theta} \log f(X_j;\theta)\right]. \tag{6.4.6}$$
Since the $X_i$'s are iid, we have $\mathrm{E}_{\theta}\left[\left\{\frac{\partial}{\partial\theta} \log f(X_i;\theta)\right\}^2\right] = I_{X_1}(\theta)$ for each $i = 1, \ldots, n$, and hence the first term at the end of (6.4.6) amounts to $n I_{X_1}(\theta)$. Next, because the $X_i$'s are independent, the second term at the end of (6.4.6) can be expressed as
$$\mathop{\sum\sum}_{1 \le i \neq j \le n} \mathrm{E}_{\theta}\left[\frac{\partial}{\partial\theta} \log f(X_i;\theta)\right] \mathrm{E}_{\theta}\left[\frac{\partial}{\partial\theta} \log f(X_j;\theta)\right] = n(n-1)\left\{\mathrm{E}_{\theta}\left[\frac{\partial}{\partial\theta} \log f(X_1;\theta)\right]\right\}^2,$$
since the $X_i$'s are identically distributed. But the score has zero expectation, that is, $\mathrm{E}_{\theta}\left[\frac{\partial}{\partial\theta} \log f(X_1;\theta)\right] = 0$, so that the second term vanishes and $I_{\mathbf{X}}(\theta) = n I_{X_1}(\theta)$, as claimed.
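Readers who wish to see Theorem 6.4.1 in action numerically can verify it by simulation. The short Python sketch below (not from the text; the model, parameter values, and sample sizes are arbitrary choices for illustration) estimates $I_{\mathbf{X}}(\mu)$ for the $N(\mu, \sigma^2)$ model with $\sigma$ known by averaging the squared total score over many replications, and compares the result with the theoretical value $n/\sigma^2$.

import numpy as np

# Monte Carlo check of I_X(mu) = n * I_X1(mu) for N(mu, sigma^2), sigma known.
# All numerical settings below are arbitrary illustration choices.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 2.0, 1.5, 10, 200_000

# Draw `reps` independent samples, each of size n.
x = rng.normal(mu, sigma, size=(reps, n))

# Total score of one sample: sum_i (d/dmu) log f(X_i; mu) = sum_i (X_i - mu) / sigma^2.
score = ((x - mu) / sigma**2).sum(axis=1)

# By definition, I_X(mu) = E_mu[score^2]; estimate it by the Monte Carlo average.
print("Monte Carlo estimate:  ", np.mean(score**2))
print("Theoretical n/sigma^2: ", n / sigma**2)

With these settings the estimate should come out close to $10/1.5^2 \approx 4.44$, illustrating that the information contained in $n$ iid observations is exactly $n$ times the information contained in a single one.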