The tables from Cornish (1954, 1962), Dunnett (1955), Dunnett and Sobel
(1954, 1955), Krishnaiah and Armitage (1966), and Hochberg and Tamhane
(1987) provide the values of h in various situations. For example, if α = .05
and p = 2, then from Table 5 of Hochberg and Tamhane (1987) we can read
off h = 2.66 and 2.54 when n = 4 and 5, respectively. Next, we simply rephrase
(9.4.13) to make the following joint statements:
The simultaneous confidence intervals given by (9.4.14) jointly have
100(1 − α)% confidence. !
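To fix ideas, here is a minimal numerical sketch in Python. It assumes that (9.4.14) takes the familiar many-to-one form $\bar{X}_i - \bar{X}_0 \pm h\,S_P\sqrt{2/n}$, $i = 1, \ldots, p$, with $S_P^2$ the variance estimator pooled across all $p + 1$ samples; that form, the simulated data, and the helper name dunnett_type_intervals are illustrative assumptions, while h = 2.66 is the tabulated value quoted above for α = .05, p = 2, n = 4.

import numpy as np

def dunnett_type_intervals(X0, X, h):
    # Simultaneous intervals for mu_i - mu_0, i = 1, ..., p, assumed to have the
    # form (Xbar_i - Xbar_0) +/- h * S_P * sqrt(2/n), with S_P^2 pooled over all
    # p + 1 samples and the multiplier h read from the tables cited in the text.
    X0, X = np.asarray(X0, dtype=float), np.asarray(X, dtype=float)
    p, n = X.shape                        # X: (p, n) treatment samples; X0: (n,) control
    diffs = X.mean(axis=1) - X0.mean()
    pooled_ss = ((X0 - X0.mean()) ** 2).sum() \
        + ((X - X.mean(axis=1, keepdims=True)) ** 2).sum()
    S_P = np.sqrt(pooled_ss / ((p + 1) * (n - 1)))
    half = h * S_P * np.sqrt(2.0 / n)
    return np.column_stack((diffs - half, diffs + half))

# Hypothetical data: p = 2 treatments and a control, n = 4 observations each.
rng = np.random.default_rng(0)
X0 = rng.normal(10.0, 1.0, size=4)
X = rng.normal([[11.0], [12.5]], 1.0, size=(2, 4))
print(dunnett_type_intervals(X0, X, h=2.66))

Each row of the output is one interval for µ_i − µ_0; under the assumed form of (9.4.14), the two intervals hold jointly with confidence 100(1 − α)%.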
Example 9.4.5 Suppose that $X_{i1}, \ldots, X_{in}$ are iid random samples from the
$N(\mu_i, \sigma^2)$ population, $i = 1, \ldots, 4$. The observations $X_{i1}, \ldots, X_{in}$ refer to the $i^{th}$
treatment, $i = 1, \ldots, 4$. Let us assume that all the observations from the treat-
ments are independent and that all the parameters are unknown.
Consider, for example, the problem of jointly estimating the parameters
$\theta_1 = \mu_1 - \mu_2$, $\theta_2 = \mu_3 + \mu_4 - 2\mu_2$ by means of a simultaneous confidence region.
How should one proceed? Let us denote
$$\mathbf{Y}_i = \begin{pmatrix} X_{1i} - X_{2i} \\ X_{3i} + X_{4i} - 2X_{2i} \end{pmatrix}, \quad i = 1, \ldots, n.$$
The random variables $\mathbf{Y}_1, \ldots, \mathbf{Y}_n$ are obviously iid. Also observe that any
linear function of $\mathbf{Y}_i$ is a linear function of the independent normal variables
$X_{1i}, \ldots, X_{4i}$. Thus, $\mathbf{Y}_1, \ldots, \mathbf{Y}_n$ are iid 2-dimensional normal variables.
The common distribution is given by $N_2(\boldsymbol{\theta}, \Sigma)$ where $\boldsymbol{\theta}' = (\theta_1, \theta_2)$ and
$$\Sigma = \sigma^2 \begin{pmatrix} 2 & 2 \\ 2 & 6 \end{pmatrix}.$$
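The entries of $\Sigma$ follow from the independence of the $X_{ji}$'s, the common variance $\sigma^2$, and the definition of $\mathbf{Y}_i$ displayed above:
$$\operatorname{Var}(X_{1i} - X_{2i}) = 2\sigma^2, \qquad \operatorname{Var}(X_{3i} + X_{4i} - 2X_{2i}) = (1 + 1 + 4)\sigma^2 = 6\sigma^2,$$
$$\operatorname{Cov}(X_{1i} - X_{2i},\; X_{3i} + X_{4i} - 2X_{2i}) = \operatorname{Cov}(-X_{2i},\, -2X_{2i}) = 2\sigma^2.$$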
Now, along the lines of Example 9.4.3, one can construct a 100(1 − α)%
joint elliptic confidence region for the parameters $\theta_1, \theta_2$. On the other hand,
one may proceed along the lines of Example 9.4.4 to derive 100(1 − α)%
joint confidence intervals for the parameters $\theta_1, \theta_2$. The details are left out as
Exercise 9.4.1. !
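For a numerical illustration of the two routes, the following Python sketch works with made-up data. It assumes the elliptic region of Example 9.4.3 is the usual Hotelling $T^2$-based region $\{\boldsymbol{\theta} : n(\bar{\mathbf{Y}} - \boldsymbol{\theta})' S^{-1} (\bar{\mathbf{Y}} - \boldsymbol{\theta}) \le 2(n-1)(n-2)^{-1} F_{2, n-2, \alpha}\}$, and it substitutes Bonferroni-corrected $t$ intervals for the joint intervals of Example 9.4.4; both choices, together with the sample size and the parameter values, are assumptions made only for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Made-up data: 4 treatments with a common variance and common sample size n.
n = 12
mu = np.array([5.0, 4.0, 6.0, 7.0])
sigma = 2.0
X = rng.normal(loc=mu[:, None], scale=sigma, size=(4, n))   # X[i-1, j] plays the role of X_{ij}

# Y_j = (X_{1j} - X_{2j}, X_{3j} + X_{4j} - 2 X_{2j})', j = 1, ..., n
Y = np.column_stack((X[0] - X[1], X[2] + X[3] - 2 * X[1]))  # shape (n, 2)

ybar = Y.mean(axis=0)              # point estimate of (theta_1, theta_2)
S = np.cov(Y, rowvar=False)        # sample dispersion matrix of the Y_j's
alpha = 0.05

# Route 1 (elliptic region, in the spirit of Example 9.4.3): Hotelling T^2 cutoff.
c = 2 * (n - 1) / (n - 2) * stats.f.ppf(1 - alpha, 2, n - 2)
def inside_region(theta):
    d = ybar - np.asarray(theta, dtype=float)
    return float(n * d @ np.linalg.solve(S, d)) <= c

# Route 2 (joint intervals, substituting a Bonferroni correction): each interval is
# built at level 1 - alpha/2, so the pair holds jointly with probability >= 1 - alpha.
t = stats.t.ppf(1 - alpha / 4, n - 1)
half = t * np.sqrt(np.diag(S) / n)
intervals = np.column_stack((ybar - half, ybar + half))

theta_true = np.array([mu[0] - mu[1], mu[2] + mu[3] - 2 * mu[1]])
print("estimate of (theta_1, theta_2):", ybar)
print("joint Bonferroni intervals:\n", intervals)
print("true theta inside the ellipse?", inside_region(theta_true))

Example 9.4.4's own construction may well be sharper than the Bonferroni substitute; the point of the sketch is only to show how the region and the joint intervals would be computed and reported from $\mathbf{Y}_1, \ldots, \mathbf{Y}_n$.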
9.4.3 Comparing the Variances
Example 9.4.6 Suppose that $X_{i1}, \ldots, X_{in}$ are iid random samples from the
$N(\mu_i, \sigma_i^2)$ population, $i = 0, 1, \ldots, p$. The observations $X_{01}, \ldots, X_{0n}$ refer to
a control population with its mean $\mu_0$ and variance $\sigma_0^2$ whereas $X_{i1}, \ldots, X_{in}$

