Sampling Distributions: The Multivariate Normal Case
Now, we briefly touch upon some aspects of sampling distributions in the
context of a multivariate normal population. Suppose that $X_1, \ldots, X_n$ are iid
$N_p(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ where $\boldsymbol{\mu} \in \Re^p$ and $\boldsymbol{\Sigma}$ is a $p \times p$ p.d. matrix, $n \geq 2$. Let us denote
$$\overline{X} = n^{-1} \sum_{i=1}^{n} X_i \quad \text{and} \quad W = \sum_{i=1}^{n} (X_i - \overline{X})(X_i - \overline{X})'.$$
Observe that $\overline{X}$ is a $p$-dimensional column vector whereas $W$ is a $p \times p$
matrix, both functionally depending on the random samples $X_1, \ldots, X_n$.
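To fix the notation, here is a minimal numerical sketch (ours, not from the text; NumPy, with arbitrary choices of $p$, $n$, $\boldsymbol{\mu}$, and $\boldsymbol{\Sigma}$) computing $\overline{X}$ and $W$ from a simulated sample:

```python
import numpy as np

# Compute Xbar and W for n iid N_p(mu, Sigma) observations.
# The particular p, n, mu, Sigma below are arbitrary choices.
rng = np.random.default_rng(0)
p, n = 3, 50
mu = np.zeros(p)
Sigma = 0.5 * np.eye(p) + 0.5 * np.ones((p, p))  # p.d. equicorrelation matrix

X = rng.multivariate_normal(mu, Sigma, size=n)   # n rows, each row a p-vector

Xbar = X.mean(axis=0)        # the p-dimensional sample mean vector
D = X - Xbar                 # observations centered at Xbar
W = D.T @ D                  # W = sum_i (X_i - Xbar)(X_i - Xbar)'
```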
Theorem 4.6.1 Suppose that $X_1, \ldots, X_n$ are iid $N_p(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ where $\boldsymbol{\mu} \in \Re^p$ and
$\boldsymbol{\Sigma}$ is a $p \times p$ p.d. matrix, $n \geq 2$. Then, we have the following sampling
distributions:
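As a point of reference, the classical results in this setting, to which parts (i)-(iv) of the theorem correspond, may be stated as follows; the ordering and the Wishart parametrization below are one common convention and our own paraphrase, not a quotation:

(i) $\overline{X} \sim N_p(\boldsymbol{\mu}, n^{-1}\boldsymbol{\Sigma})$;
(ii) $\overline{X}$ and $W$ are independently distributed;
(iii) $W \sim W_p(n-1, \boldsymbol{\Sigma})$, a Wishart distribution with $n - 1$ degrees of freedom;
(iv) $n(\overline{X} - \boldsymbol{\mu})'\boldsymbol{\Sigma}^{-1}(\overline{X} - \boldsymbol{\mu}) \sim \chi^2_p$.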
Part (i) in this theorem is easily proved by applying Definition
4.6.1. Also, part (ii) can be proved easily when p = 2. We leave these
verifications as Exercise 4.6.10. Proofs of parts (ii)-(iv) in their fullest
generality are, however, beyond the scope of this book. The reader should
nonetheless exploit these results to avoid laborious calculations whenever possible.
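A quick Monte Carlo check (ours; NumPy, arbitrary parameters) illustrates how such results replace laborious calculation: since $\sqrt{n}(\overline{X} - \boldsymbol{\mu}) \sim N_p(\mathbf{0}, \boldsymbol{\Sigma})$ by part (i), the quadratic form $Q = n(\overline{X} - \boldsymbol{\mu})'\boldsymbol{\Sigma}^{-1}(\overline{X} - \boldsymbol{\mu})$ is $\chi^2_p$, whose mean is $p$:

```python
import numpy as np

# Monte Carlo: Q = n (Xbar - mu)' Sigma^{-1} (Xbar - mu) should be chi^2_p.
# All parameter choices below are arbitrary.
rng = np.random.default_rng(2)
p, n, reps = 3, 40, 2000
mu = np.array([1.0, -1.0, 0.5])
Sigma = 0.5 * np.eye(p) + 0.5 * np.ones((p, p))
Sinv = np.linalg.inv(Sigma)

q = np.empty(reps)
for r in range(reps):
    X = rng.multivariate_normal(mu, Sigma, size=n)
    d = X.mean(axis=0) - mu
    q[r] = n * d @ Sinv @ d      # one replicate of the quadratic form Q

print(q.mean())                  # should be close to p = 3
```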
4.6.2 The t Distribution
This distribution comes up frequently in the areas of multiple comparisons
and selection and ranking. Let the random vector $X = (X_1, \ldots, X_p)$ have a
$p$-dimensional normal distribution with the mean vector $\mathbf{0}$ and the $p \times p$
dispersion matrix $\boldsymbol{\Sigma}$. We assume that each diagonal entry in $\boldsymbol{\Sigma}$ is 1. That
is, the random variables $X_1, \ldots, X_p$ have each been standardized. In other
words, the $(i, j)^{th}$ entry in the matrix $\boldsymbol{\Sigma}$ corresponds to the population
correlation coefficient $\rho_{ij}$ between the two variables $X_i, X_j$, $1 \leq i \neq j \leq p$. In
this special situation, one also refers to $\boldsymbol{\Sigma}$ as a correlation matrix. Suppose
that $Y$ is a positive real valued random variable distributed as $\chi^2_\nu$. It is also
assumed that $Y$ and $(X_1, \ldots, X_p)$ are independent. Let us denote $p$ new random
variables
$$T_i = X_i \Big/ \sqrt{Y/\nu}, \quad i = 1, \ldots, p.$$
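A minimal simulation sketch of this construction (ours; NumPy, with arbitrary $p$, $\nu$, and a common correlation 0.5) follows: each $T_i$ is marginally Student's $t$ with $\nu$ degrees of freedom, while jointly $(T_1, \ldots, T_p)$ forms a $p$-variate $t$ vector.

```python
import numpy as np

# Draw X ~ N_p(0, Sigma) with unit diagonals, draw Y ~ chi^2_nu
# independently of X, and set T_i = X_i / sqrt(Y / nu).
# The choices of p, nu, and the correlation 0.5 are arbitrary.
rng = np.random.default_rng(1)
p, nu = 3, 5
Sigma = 0.5 * np.eye(p) + 0.5 * np.ones((p, p))   # a correlation matrix

X = rng.multivariate_normal(np.zeros(p), Sigma)   # one standardized normal vector
Y = rng.chisquare(nu)                             # positive, independent of X
T = X / np.sqrt(Y / nu)                           # the p new variables T_1, ..., T_p
```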