may be viewed as the squared length of the residual vector, $X$ minus its projection. Theorem 8.9 states that the projection $\bar{X} m_0$ and the length of the residual vector are independent random variables and that any linear function of the projection has a normal distribution. This result holds much more generally and, in fact, this generalization follows almost immediately from the results given above.
Theorem 8.10. Let $X$ denote an $n$-dimensional random vector with a multivariate normal distribution with mean vector 0 and covariance matrix given by $\sigma^2 I_n$. Let $M$ denote a $p$-dimensional linear subspace of $\mathbb{R}^n$ and let $P_M$ be the matrix representing orthogonal projection onto $M$. Let $a \in \mathbb{R}^n$ be such that $a^T P_M a > 0$.
(i) $a^T P_M X$ has a normal distribution with mean 0 and variance $(a^T P_M a)\sigma^2$.
(ii) Let $S^2 = X^T (I_n - P_M) X / (n - p)$. Then $(n - p) S^2 / \sigma^2$ has a chi-squared distribution with $n - p$ degrees of freedom.
(iii) $S^2$ and $P_M X$ are independent.
(iv)
$$\frac{a^T P_M X}{(a^T P_M a)^{\frac{1}{2}} \, [X^T (I_n - P_M) X / (n - p)]^{\frac{1}{2}}}$$
has a t-distribution with $n - p$ degrees of freedom.
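As a point of reference (a specialization not worked out here, assuming $m_0 = (1, \dots, 1)^T$ denotes the vector of ones, as in the discussion above), taking $M$ to be the one-dimensional subspace spanned by $m_0$, so that $p = 1$, recovers the sample-mean setting of Theorem 8.9:
$$P_M = \frac{1}{n} m_0 m_0^T, \qquad P_M X = \bar{X} m_0, \qquad X^T (I_n - P_M) X = \sum_{j=1}^{n} (X_j - \bar{X})^2,$$
so that $S^2 = \sum_{j=1}^{n} (X_j - \bar{X})^2 / (n - 1)$ and, by part (ii), $(n - 1) S^2 / \sigma^2$ has a chi-squared distribution with $n - 1$ degrees of freedom.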
Proof. Let $Y = a^T P_M X$ and $S^2 = X^T (I_n - P_M) X / (n - p)$. Since $P_M (I_n - P_M) = 0$, it follows from Theorem 8.7 that $P_M X$ and $S^2$ are independent. From Theorem 8.1, $a^T P_M X$ has a normal distribution with mean 0 and variance $(a^T P_M a)\sigma^2$. From Theorem 8.6, $(n - p) S^2 / \sigma^2$ has a chi-squared distribution with $n - p$ degrees of freedom. Part (iv) follows from the definition of the t-distribution.
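The following is a minimal numerical sketch of Theorem 8.10, written in Python with NumPy and SciPy; the matrix $Z$ whose column space plays the role of $M$, the vector $a$, and the values of $n$, $p$, and $\sigma$ are arbitrary illustrative choices rather than anything prescribed by the theorem. It compares the empirical quantiles of the statistic in part (iv) with those of the t-distribution with $n - p$ degrees of freedom.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, sigma = 20, 3, 2.0                      # illustrative values only

Z = rng.normal(size=(n, p))                   # arbitrary full-rank n x p matrix; M = column space of Z
P_M = Z @ np.linalg.solve(Z.T @ Z, Z.T)       # orthogonal projection onto M
a = rng.normal(size=n)                        # any a with a' P_M a > 0 (almost surely)

t_stats = []
for _ in range(20000):
    X = rng.normal(scale=sigma, size=n)       # X ~ N(0, sigma^2 I_n)
    S2 = X @ (np.eye(n) - P_M) @ X / (n - p)  # S^2 = X'(I_n - P_M) X / (n - p)
    t_stats.append(a @ P_M @ X / np.sqrt((a @ P_M @ a) * S2))

qs = [0.05, 0.25, 0.50, 0.75, 0.95]
print(np.quantile(t_stats, qs))               # empirical quantiles of the statistic in (iv)
print(stats.t.ppf(qs, df=n - p))              # quantiles of the t-distribution with n - p df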
Example 8.12 (Simple linear regression). Let $Y_1, Y_2, \ldots, Y_n$ denote independent random variables such that, for each $j = 1, 2, \ldots, n$, $Y_j$ has a normal distribution with mean $\beta_0 + \beta_1 z_j$ and variance $\sigma^2$. Here $z_1, z_2, \ldots, z_n$ are fixed scalar constants, not all equal, and $\beta_0$, $\beta_1$, and $\sigma$ are parameters.
Let $Y = (Y_1, \ldots, Y_n)$ and let $Z$ denote the $n \times 2$ matrix with $j$th row $(1 \; z_j)$, $j = 1, \ldots, n$. Let $M$ denote the linear subspace spanned by the columns of $Z$. Then
$$P_M = Z (Z^T Z)^{-1} Z^T.$$
Let $\beta = (\beta_0 \; \beta_1)^T$ and let
$$\hat{\beta} = (Z^T Z)^{-1} Z^T Y$$
so that $P_M Y = Z \hat{\beta}$.
Consider the distribution of
$$T = \frac{c^T (\hat{\beta} - \beta)}{[c^T (Z^T Z)^{-1} c]^{\frac{1}{2}} \, S}$$
where $S^2 = Y^T (I_n - P_M) Y / (n - 2)$ and $c \in \mathbb{R}^2$.
Let $X = Y - Z\beta$. Then $X$ has a multivariate normal distribution with mean vector 0 and covariance matrix $\sigma^2 I_n$. Note that
$$\hat{\beta} - \beta = (Z^T Z)^{-1} Z^T X$$
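A minimal computational sketch of this example follows, again in Python with NumPy; the parameter values, the covariates, and the choice $c = (0 \; 1)^T$ (which singles out the slope $\beta_1$) are supplied only for concreteness, and the statistic $T$ is computed with the denominator $[c^T (Z^T Z)^{-1} c]^{1/2} S$ as written above.

import numpy as np

rng = np.random.default_rng(1)
n, beta0, beta1, sigma = 30, 1.0, 0.5, 1.5      # illustrative values only
z = rng.uniform(0, 10, size=n)                  # fixed covariates, not all equal

# Simulate Y_j ~ N(beta0 + beta1 * z_j, sigma^2), independently.
Y = beta0 + beta1 * z + rng.normal(scale=sigma, size=n)

Z = np.column_stack([np.ones(n), z])            # n x 2 design matrix, jth row (1, z_j)
ZtZ_inv = np.linalg.inv(Z.T @ Z)
beta_hat = ZtZ_inv @ Z.T @ Y                    # beta-hat = (Z'Z)^{-1} Z'Y
P_M = Z @ ZtZ_inv @ Z.T                         # projection onto the column space of Z
S2 = Y @ (np.eye(n) - P_M) @ Y / (n - 2)        # S^2 = Y'(I_n - P_M)Y / (n - 2)

c = np.array([0.0, 1.0])                        # c in R^2, here picking out beta_1
beta = np.array([beta0, beta1])
T = c @ (beta_hat - beta) / np.sqrt(c @ ZtZ_inv @ c * S2)
print(beta_hat, S2, T)                          # T follows a t-distribution with n - 2 df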