Proof. Let λ_1, ..., λ_{r_1} denote the nonzero eigenvalues of A_1 and let e_1, ..., e_{r_1} denote the corresponding eigenvectors; similarly, let γ_1, ..., γ_{r_2} denote the nonzero eigenvalues of A_2 and let v_1, ..., v_{r_2} denote the corresponding eigenvectors. Then

A_1 = λ_1 e_1 e_1^T + ··· + λ_{r_1} e_{r_1} e_{r_1}^T

and

A_2 = γ_1 v_1 v_1^T + ··· + γ_{r_2} v_{r_2} v_{r_2}^T.
Suppose A_1 A_2 = 0. Then

e_k^T A_1 A_2 v_j = λ_k γ_j e_k^T v_j = 0,

so that, since λ_k γ_j ≠ 0, e_k^T v_j = 0 for all j = 1, ..., r_2 and k = 1, ..., r_1. Let P_1 denote the matrix with columns e_1, ..., e_{r_1} and let P_2 denote the matrix with columns v_1, ..., v_{r_2}. Then

P_1^T P_2 = 0.

It follows that P_1^T X and P_2^T X are independent. Since Q_1 is a function of P_1^T X and Q_2 is a function of P_2^T X, it follows that Q_1 and Q_2 are independent, proving part (i).
The proof of part (ii) is similar. As above, Q_1 is a function of P_1^T X. Suppose that A_1 M = 0. Since A_1 = P_1 D P_1^T, where D is a diagonal matrix with diagonal elements λ_1, ..., λ_{r_1},

P_1 D P_1^T M = 0.

Multiplying on the left by D^{-1} P_1^T and using the fact that P_1^T P_1 = I, it follows that P_1^T M = 0; hence, by part (vi) of Theorem 8.1, P_1^T X and MX are independent. The result follows.
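To illustrate these results in a simple case, take d = 2, A_1 = diag(1, 0), and A_2 = diag(0, 1), so that A_1 A_2 = 0, Q_1 = X_1^2, and Q_2 = X_2^2; the independence asserted in part (i) is immediate here since X_1 and X_2 are independent. Similarly, taking M = diag(0, 1) gives A_1 M = 0; then Q_1 = X_1^2 depends only on X_1 and MX = (0, X_2)^T depends only on X_2, so their independence is again clear.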
The following result gives a simple condition for showing that two quadratic forms are
independent chi-squared random variables.
Theorem 8.8. Let X denote a d-dimensional random vector with a multivariate normal distribution with mean 0 and covariance matrix I_d. Let A_1 and A_2 be d × d nonnegative-definite, symmetric matrices and let Q_j = X^T A_j X, j = 1, 2. Suppose that

X^T X = Q_1 + Q_2.

Let r_j denote the rank of A_j, j = 1, 2. Q_1 and Q_2 are independent chi-squared random variables with r_1 and r_2 degrees of freedom, respectively, if and only if r_1 + r_2 = d.
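As an illustration, take A_1 = d^{-1} 1 1^T, where 1 denotes the d-dimensional vector of ones, and A_2 = I_d − d^{-1} 1 1^T. Both matrices are symmetric and nonnegative-definite, with ranks r_1 = 1 and r_2 = d − 1, and

X^T X = X^T A_1 X + X^T A_2 X = d X̄^2 + (X_1 − X̄)^2 + ··· + (X_d − X̄)^2,

where X̄ = (X_1 + ··· + X_d)/d. Since r_1 + r_2 = d, the theorem shows that d X̄^2 and the sum of squared deviations from X̄ are independent chi-squared random variables with 1 and d − 1 degrees of freedom, respectively.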
Proof. Suppose Q_1 and Q_2 are independent chi-squared random variables with r_1 and r_2 degrees of freedom, respectively. Since, by Theorem 8.6, X^T X has a chi-squared distribution with d degrees of freedom, clearly we must have r_1 + r_2 = d; for example, E(X^T X) = d, E(Q_1) = r_1, E(Q_2) = r_2, and E(X^T X) = E(Q_1) + E(Q_2).
Suppose that r_1 + r_2 = d. Let λ_1, ..., λ_{r_1} denote the nonzero eigenvalues of A_1 and let e_1, ..., e_{r_1} denote the corresponding eigenvectors; similarly, let γ_1, ..., γ_{r_2} denote the nonzero eigenvalues of A_2 and let v_1, ..., v_{r_2} denote the corresponding eigenvectors. Then

A_1 = λ_1 e_1 e_1^T + ··· + λ_{r_1} e_{r_1} e_{r_1}^T