and Θ = (0, 1). We had verified that the statistic was sufficient for p. Let us
consider another statistic U = X₁X₂ + X₃. The question is whether U is a
sufficient statistic for p. Observe that

P_p{U = 0} = P_p{X₁X₂ = 0 ∩ X₃ = 0} = (1 − p²)(1 − p).

Now, since {X₁ = 1 ∩ X₂ = 0 ∩ X₃ = 0} is a subset of {U = 0}, we have

P_p{X₁ = 1 ∩ X₂ = 0 ∩ X₃ = 0 | U = 0}
= P_p{X₁ = 1 ∩ X₂ = 0 ∩ X₃ = 0} / P_p{U = 0}
= p(1 − p)² / {(1 − p²)(1 − p)} = p/(1 + p).

This conditional probability depends on the true value of p and so we claim
that the statistic U is not sufficient for p. That is, after the completion of the
n trials of the Bernoulli experiment, if one is merely told the observed value of
the statistic U, then some information about the unknown parameter p would
be lost!
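As a quick numerical check (a minimal sketch, not part of the original text; the
function name conditional_prob_given_U0 and the simulation sizes are illustrative
choices), the following Python snippet estimates P_p{X₁ = 1 ∩ X₂ = 0 ∩ X₃ = 0 | U = 0}
by simulation for two values of p. The estimates track p/(1 + p), so the conditional
probability moves with p, which is exactly why U cannot be sufficient.

import numpy as np

def conditional_prob_given_U0(p, n_sim=200_000, seed=0):
    # Simulate n_sim independent triples (X1, X2, X3) of Bernoulli(p) trials
    rng = np.random.default_rng(seed)
    x = rng.binomial(1, p, size=(n_sim, 3))
    u = x[:, 0] * x[:, 1] + x[:, 2]              # U = X1*X2 + X3
    given = (u == 0)                             # condition on the event {U = 0}
    event = (x[:, 0] == 1) & (x[:, 1] == 0) & (x[:, 2] == 0)
    return (event & given).sum() / given.sum()   # relative frequency within {U = 0}

for p in (0.2, 0.8):
    print(f"p = {p}: simulated {conditional_prob_given_U0(p):.3f}, "
          f"exact p/(1 + p) = {p / (1 + p):.3f}")

Conditioning on {U = 0} by simple rejection is crude but adequate here, since that
event has substantial probability for every p in (0, 1).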
In the continuous case, we work with the same basic idea. If for some
data x₁, ..., xₙ, the conditional pdf given T = t, f_{X|T=t}(x₁, ..., xₙ),
involves the parameter θ, then the statistic T can not be sufficient
for θ. Look at the Example 6.2.6.
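Before the analytical argument in Example 6.2.6 below, here is a hedged simulation
sketch of this criterion for T = X₁ + 2X₂ with X₁, X₂ iid N(θ, 1); the function name,
the slice width tol, and the simulation size are illustrative choices, and conditioning
on {T ≈ t} is only an approximation to conditioning on {T = t}. The empirical mean of
X₁ on the slice shifts with θ, in line with the conditional mean (t + 2θ)/5 derived in
the example.

import numpy as np

def conditional_mean_X1_given_T(theta, t=0.0, tol=0.1, n_sim=1_000_000, seed=1):
    # Draw X1, X2 iid N(theta, 1), form T = X1 + 2*X2, and keep only the draws
    # with T within tol of t (a crude stand-in for conditioning on T = t)
    rng = np.random.default_rng(seed)
    x1 = rng.normal(theta, 1.0, n_sim)
    x2 = rng.normal(theta, 1.0, n_sim)
    T = x1 + 2.0 * x2
    keep = np.abs(T - t) < tol
    return x1[keep].mean()                       # empirical E[X1 | T ≈ t]

for theta in (0.0, 2.0):
    print(f"theta = {theta}: simulated E[X1 | T ≈ 0] = "
          f"{conditional_mean_X1_given_T(theta):.2f}, "
          f"(t + 2*theta)/5 = {(0.0 + 2 * theta) / 5:.2f}")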
Example 6.2.6 (Example 6.2.4 Continued) Suppose that X₁, X₂ are iid
N(θ, 1) where θ is unknown, −∞ < θ < ∞. Here, χ = ℜ and Θ = ℜ. Let us
consider a statistic, for example, T = X₁ + 2X₂ while its values are denoted
by t ∈ 𝒯 = ℜ. Let us verify that T is not sufficient for θ by showing that the
conditional distribution of X₁ given T = t involves θ. Now, following the
Definition 4.6.1 and the Example 4.6.1, we can claim that the joint distribu-
tion of (X₁, T) is N₂(θ, 3θ, 1, 5, 1/√5), and hence the conditional distribu-
tion of X₁ given T = t is normal with its mean = θ + (1/5)(t − 3θ) =
(1/5)(t + 2θ) and variance = 1(1 − 1/5) = 4/5, for t ∈ ℜ. Refer to the
Theorem 3.6.1 as needed. Since this conditional distribution depends on
the unknown parameter θ, we conclude that T is not sufficient for θ. That