(i) f(x; θ) = exp{−(x − θ)}/[1 + exp{−(x − θ)}]², x ∈ ℜ, θ ∈ ℜ,
which is called the logistic distribution;
(ii) f(x; θ) = 1/[π{1 + (x − θ)²}], x ∈ ℜ, θ ∈ ℜ;
(iii) f(x; θ) = ½ exp{−|x − θ|}, x ∈ ℜ, θ ∈ ℜ.
In each case, show that the vector of order statistics (Xₙ:₁, ..., Xₙ:ₙ) is minimal sufficient for the unknown location parameter θ. That is, we do not achieve any significant reduction of the original data X = (X₁, ..., Xₙ). {Hint: Part (i) is proved in Lehmann (1983, p. 43). Parts (ii) and (iii) can be handled similarly.}
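As a numerical warm-up (this Python sketch and its sample values are illustrative additions, not part of the exercise), one can check with the Cauchy density of part (ii) that two samples sharing the same mean give different likelihood ratios, so no statistic as coarse as the sample mean can be sufficient:

```python
import math

def cauchy_loglik(x, theta):
    # log joint density from part (ii): product over i of 1/[pi*(1 + (x_i - theta)^2)]
    return sum(-math.log(math.pi * (1.0 + (xi - theta) ** 2)) for xi in x)

sample_a = [-1.0, 0.0, 1.0]   # sample mean 0
sample_b = [-3.0, 1.0, 2.0]   # sample mean 0 as well

for x in (sample_a, sample_b):
    log_lr = cauchy_loglik(x, 1.0) - cauchy_loglik(x, 0.0)
    print(x, "log likelihood ratio (theta = 1 vs 0):", round(log_lr, 4))
# The two ratios differ (about -0.916 vs 1.079) even though the means agree,
# so the mean discards information; the same check works for parts (i) and (iii).
```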
6.4.1 Let X₁, ..., Xₙ be iid Bernoulli(p) where 0 < p < 1 is the unknown parameter. Evaluate I_X(p), the information content in the whole data X = (X₁, ..., Xₙ). Compare I_X(p) with I_{X̄}(p). Can Theorem 6.4.2 be used here to claim that X̄ is sufficient for p?
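A numerical check of this comparison (an illustrative addition; the finite-difference score and the values n = 5, p = 0.3 are arbitrary choices) uses the fact that X̄ is a one-to-one function of ΣXᵢ ~ Binomial(n, p):

```python
from math import comb, log

def binom_information(n, p, eps=1e-6):
    # E[(d/dp log pmf)^2] for Binomial(n, p), score by central finite difference
    info = 0.0
    for t in range(n + 1):
        pmf = comb(n, t) * p ** t * (1 - p) ** (n - t)
        logpmf = lambda q: log(comb(n, t) * q ** t * (1 - q) ** (n - t))
        score = (logpmf(p + eps) - logpmf(p - eps)) / (2 * eps)
        info += pmf * score ** 2
    return info

n, p = 5, 0.3
print(binom_information(n, p))   # ~ 23.81
print(n / (p * (1 - p)))         # exact I_X(p) = n/(p(1-p)) = 23.809...
```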
6.4.2 (Exercise 6.4.1 Continued) Let X₁, ..., Xₙ be iid Bernoulli(p) where 0 < p < 1 is the unknown parameter, n ≥ 3. Let T = X₁ + X₂, U = X₁ + X₂ + 2X₃. Compare I_X(p) with I_T(p) and I_U(p). Can T be sufficient for p? Can U be sufficient for p? {Hint: Try to exploit Theorem 6.4.2.}
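The comparison can be carried out numerically (an illustrative sketch; p = 0.4 is arbitrary) by enumerating the pmf of U over the eight outcomes of (X₁, X₂, X₃) and noting that T ~ Binomial(2, p):

```python
from itertools import product
from math import log

def pmf_of_U(p):
    # distribution of U = X1 + X2 + 2*X3 over the eight Bernoulli outcomes
    pmf = {}
    for x1, x2, x3 in product((0, 1), repeat=3):
        u = x1 + x2 + 2 * x3
        prob = p ** (x1 + x2 + x3) * (1 - p) ** (3 - x1 - x2 - x3)
        pmf[u] = pmf.get(u, 0.0) + prob
    return pmf

def information(pmf_fn, p, eps=1e-6):
    # I(p) = E[(d/dp log pmf)^2], with scores by central finite difference
    hi, lo, mid = pmf_fn(p + eps), pmf_fn(p - eps), pmf_fn(p)
    return sum(prob * ((log(hi[u]) - log(lo[u])) / (2 * eps)) ** 2
               for u, prob in mid.items())

p = 0.4
print(information(pmf_of_U, p))   # I_U(p) ~ 11.5
print(2 / (p * (1 - p)))          # I_T(p) = 2/(p(1-p)) ~ 8.333, since T ~ Binomial(2, p)
print(3 / (p * (1 - p)))          # I_X(p) with n = 3 is 12.5; both statistics fall short
```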
6.4.3 Verify the results given in equations (6.4.9) and (6.4.11).
6.4.4 (Exercise 6.2.1 Continued) Let X₁, X₂ be iid Geometric(p) where 0 < p < 1 is the unknown parameter. Let X = (X₁, X₂) and T = X₁ + X₂, which is sufficient for p. Evaluate I_X(p) and I_T(p), and then compare these two information contents.
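If one adopts the pmf convention f(x; p) = p(1 − p)^(x−1), x = 1, 2, ... (check the parameterization used in Exercise 6.2.1), then T = X₁ + X₂ has pmf (t − 1)p²(1 − p)^(t−2), t ≥ 2, and the sketch below (an illustrative addition) verifies numerically that I_T(p) matches I_X(p) = 2/[p²(1 − p)], as sufficiency requires:

```python
from math import log

def info_T(p, tmax=1000, eps=1e-6):
    # E[(d/dp log pmf(T))^2], truncating the infinite support at tmax;
    # assumes the pmf convention f(x; p) = p(1-p)^(x-1), x = 1, 2, ...
    pmf = lambda t, pp: (t - 1) * pp ** 2 * (1 - pp) ** (t - 2)
    total = 0.0
    for t in range(2, tmax):
        score = (log(pmf(t, p + eps)) - log(pmf(t, p - eps))) / (2 * eps)
        total += pmf(t, p) * score ** 2
    return total

p = 0.3
print(info_T(p))                # ~ 31.746
print(2 / (p ** 2 * (1 - p)))   # I_X(p) = 2/(p^2 (1-p)) = 31.746...
```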
6.4.5 (Exercise 6.2.10 Continued) In a N(µ, σ²) distribution where −∞ < µ < ∞, 0 < σ < ∞, suppose that only µ is known. Show that I_{U²}(σ²) > I_{S²}(σ²) where U² = n⁻¹ Σᵢ₌₁ⁿ (Xᵢ − µ)² and S² = (n − 1)⁻¹ Σᵢ₌₁ⁿ (Xᵢ − X̄)², n ≥ 2. Would it then be fair to say that there is no point in using the statistic S² while making inferences about σ² when µ is assumed known?
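The inequality follows from the chi-square representations U² ~ (σ²/n)χ²ₙ and S² ~ (σ²/(n − 1))χ²ₙ₋₁: for a Gamma(a, scale = cσ²) scale family the information about σ² is a/σ⁴, giving n/(2σ⁴) versus (n − 1)/(2σ⁴). The Monte Carlo sketch below (an illustrative addition assuming scipy is available; sample sizes, seed, and the values n = 10, σ² = 2 are arbitrary) checks this numerically:

```python
import numpy as np
from scipy import stats

def mc_information(shape, sigma2, c, n_draws=400_000, eps=1e-5, seed=0):
    # Monte Carlo estimate of E[score^2]; score by finite difference in sigma2
    rng = np.random.default_rng(seed)
    y = stats.gamma.rvs(shape, scale=c * sigma2, size=n_draws, random_state=rng)
    logpdf = lambda s2: stats.gamma.logpdf(y, shape, scale=c * s2)
    score = (logpdf(sigma2 + eps) - logpdf(sigma2 - eps)) / (2 * eps)
    return float(np.mean(score ** 2))

n, sigma2 = 10, 2.0
print(mc_information(n / 2, sigma2, 2 / n))              # ~ n/(2 sigma2^2) = 1.25
print(mc_information((n - 1) / 2, sigma2, 2 / (n - 1)))  # ~ (n-1)/(2 sigma2^2) = 1.125
```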
6.4.6 (Exercise 6.2.11 Continued) In the two-parameter negative exponential distribution, if only µ is known, show that I_V(σ) > I_T(σ) where V = n⁻¹ Σᵢ₌₁ⁿ (Xᵢ − µ) and T = (n − 1)⁻¹ Σᵢ₌₁ⁿ (Xᵢ − Xₙ:₁), n ≥ 2. Would it then be fair to say that there is no point in using the statistic T while making inferences about σ when µ is assumed known?
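Here V is a one-to-one function of Σ(Xᵢ − µ) ~ Gamma(n, scale = σ) and T of Σ(Xᵢ − Xₙ:₁) ~ Gamma(n − 1, scale = σ), so the informations are n/σ² and (n − 1)/σ². The simulation below (an illustrative addition; n, µ, σ, and the seed are arbitrary) checks the two Gamma representations through their means and variances:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, sigma, reps = 8, 2.0, 1.5, 200_000
x = mu + rng.exponential(sigma, size=(reps, n))           # two-parameter neg. exponential
v_sum = (x - mu).sum(axis=1)                              # should be ~ Gamma(n, scale=sigma)
t_sum = (x - x.min(axis=1, keepdims=True)).sum(axis=1)    # should be ~ Gamma(n-1, scale=sigma)

print(v_sum.mean(), v_sum.var())   # ~ n*sigma = 12.0,     n*sigma^2 = 18.0
print(t_sum.mean(), t_sum.var())   # ~ (n-1)*sigma = 10.5, (n-1)*sigma^2 = 15.75
```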
6.4.7 (Exercise 6.3.17 Continued) Suppose that X₁, ..., Xₘ are iid N(µ, σ²), Y₁, ..., Yₙ are iid N(0, σ²), and the X's are independent of the Y's, where −∞ < µ < ∞ and 0 < σ < ∞ are the unknown parameters. Suppose that T = T(X, Y) is the minimal sufficient statistic for θ = (µ, σ²). Evaluate the expressions for the information matrices I_{X,Y}(θ) and I_T(θ), and then compare these two information contents.
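For the full data, each Xᵢ contributes diag(1/σ², 1/(2σ⁴)) to the information matrix while each Yⱼ, having known mean 0, contributes 1/(2σ⁴) to the σ² entry only; since T is sufficient, I_T(θ) should come out equal to I_{X,Y}(θ). The sketch below (an illustrative addition; m, n, σ² are arbitrary) just assembles this matrix:

```python
import numpy as np

def info_matrix(m, n, sigma2):
    # I_{X,Y}(theta): m copies of diag(1/sigma2, 1/(2 sigma2^2)) from the Xs
    # plus n contributions of 1/(2 sigma2^2) to the sigma2 entry from the Ys
    return np.array([[m / sigma2, 0.0],
                     [0.0, (m + n) / (2 * sigma2 ** 2)]])

print(info_matrix(m=10, n=5, sigma2=2.0))
# [[5.    0.   ]
#  [0.    1.875]]
```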
6.4.8 (Example 6.2.5 Continued) Consider the statistics X₁ − X₂ and U.