Example 9.2.10 Joint Confidence Intervals for the Normal Mean and Variance: Suppose that X_1, ..., X_n are iid N(µ, σ²) with both unknown parameters µ ∈ ℜ and σ² ∈ ℜ⁺, n ≥ 2. Given some α ∈ (0, 1), we wish to construct (1 − α) joint two-sided confidence intervals for both µ and σ². Let X̄ = n⁻¹ ∑_{i=1}^{n} X_i be the sample mean and S² = (n − 1)⁻¹ ∑_{i=1}^{n} (X_i − X̄)² be the sample variance. The statistic T ≡ (X̄, S) is minimal sufficient for (µ, σ).
From Example 9.2.8, we claim that

J₁ ≡ [ X̄ − t_{n−1,γ/2} S/√n, X̄ + t_{n−1,γ/2} S/√n ]

is a (1 − γ) confidence interval for µ for any fixed γ ∈ (0, 1), where t_{n−1,γ/2} is the upper 100(γ/2)% point of the Student's t distribution with n − 1 degrees of freedom. Similarly, from Example 9.2.9, we claim that

J₂ ≡ [ (n − 1)S²/χ²_{n−1,δ/2}, (n − 1)S²/χ²_{n−1,1−δ/2} ]

is a (1 − δ) confidence interval for σ² for any δ ∈ (0, 1), where χ²_{n−1,δ/2} and χ²_{n−1,1−δ/2} are respectively the upper 100(δ/2)% and 100(1 − δ/2)% points of the χ² distribution with n − 1 degrees of freedom. Now, we can write

P_{µ,σ²}({µ ∈ J₁} ∩ {σ² ∈ J₂}) ≥ 1 − P_{µ,σ²}(µ ∉ J₁) − P_{µ,σ²}(σ² ∉ J₂) = 1 − (γ + δ),

by the Bonferroni inequality.
Now, if we choose 0 < γ, δ < 1 so that γ + δ = α, then we can think of {J₁, J₂} as the two-sided joint confidence intervals for the unknown parameters µ, σ² respectively with the joint confidence coefficient at least (1 − α). Customarily, we pick γ = δ = ½α. !
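As a quick numerical illustration (not part of the text), here is a minimal Python sketch of the Bonferroni construction above with the customary choice γ = δ = α/2, using SciPy quantile functions; the helper name joint_normal_ci and the simulated data are introduced here purely for illustration.

```python
import numpy as np
from scipy import stats


def joint_normal_ci(x, alpha=0.05):
    """Bonferroni joint two-sided confidence intervals for (mu, sigma^2).

    Uses gamma = delta = alpha/2, so the joint confidence coefficient
    is at least 1 - alpha.  (Hypothetical helper for illustration only.)
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    s2 = x.var(ddof=1)            # sample variance with divisor n - 1
    s = np.sqrt(s2)

    gamma = delta = alpha / 2.0

    # J1: Student's t interval for mu at level 1 - gamma
    t_crit = stats.t.ppf(1.0 - gamma / 2.0, df=n - 1)
    j1 = (xbar - t_crit * s / np.sqrt(n), xbar + t_crit * s / np.sqrt(n))

    # J2: chi-square interval for sigma^2 at level 1 - delta
    chi_hi = stats.chi2.ppf(1.0 - delta / 2.0, df=n - 1)   # upper delta/2 point
    chi_lo = stats.chi2.ppf(delta / 2.0, df=n - 1)          # lower delta/2 point
    j2 = ((n - 1) * s2 / chi_hi, (n - 1) * s2 / chi_lo)

    return j1, j2


if __name__ == "__main__":
    rng = np.random.default_rng(seed=1)
    sample = rng.normal(loc=10.0, scale=2.0, size=25)
    j1, j2 = joint_normal_ci(sample, alpha=0.05)
    print("J1 (mu):     ", j1)
    print("J2 (sigma^2):", j2)
```

Dividing by the larger χ² quantile gives the lower endpoint of J₂; by the Bonferroni argument, both intervals cover their respective parameters simultaneously with probability at least 1 − α.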
One will find more closely related problems on joint confidence intervals in Exercise 9.2.7 and Exercises 9.2.11-9.2.12.
9.2.3 The Interpretation of a Confidence Coefficient
Next, let us explain in general how we interpret the confidence coefficient
or the coverage probability defined by (9.1.1). Consider the confidence
interval J for θ. Once we observe particular data X = x, a two-sided confidence interval estimate of θ is going to be (T_L(x), T_U(x)), a fixed subinterval of the real line. Note that there is nothing random about this observed interval estimate (T_L(x), T_U(x)), and recall that the parameter θ is unknown (∈ Θ) but it is a fixed entity. The interpretation of the phrase (1 − α) confidence simply means this: Suppose hypothetically that we keep observing different data X = x₁, x₂, x₃, ... for a long time, and we

