Appendix B: The Normal Distribution
Here we take σ > 0. The general identity
\[
E\, e^{is(\mu+\sigma X)} \;=\; e^{is\mu}\, E\, e^{i(\sigma s)X}
\]
permits us to write the characteristic function of Y as
\[
e^{is\mu}\, \hat\psi(\sigma s) \;=\; e^{is\mu - \frac{\sigma^2 s^2}{2}}.
\]
The mean and variance of Y are µ and σ².
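As an informal check of this formula, the short sketch below (our illustration, not part of the text) approximates E e^{isY} for Y = µ + σX by Monte Carlo and compares it with the closed form e^{isµ − σ²s²/2}; the NumPy usage, the sample size, and the particular values of µ, σ, and s are all arbitrary choices of ours.

```python
# Minimal Monte Carlo sketch: the sample mean of exp(i*s*Y) should be close to
# exp(i*s*mu - sigma**2 * s**2 / 2).  All parameter values below are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0
x = rng.standard_normal(1_000_000)   # draws of the standard normal X
y = mu + sigma * x                   # Y = mu + sigma * X

for s in (0.3, 0.7, 1.2):
    empirical = np.mean(np.exp(1j * s * y))            # Monte Carlo estimate of E exp(isY)
    exact = np.exp(1j * s * mu - sigma**2 * s**2 / 2)  # characteristic function derived above
    print(s, empirical, exact)
```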
One of the most useful properties of normally distributed random vari-
ables is that they are closed under the formation of independent linear
combinations. Thus, if Y and Z are independent and normally distributed,
then aY + bZ is normally distributed for any choice of the constants a and
b. To prove this result, it suffices to assume that Y and Z are standard
normal. In view of the form of ψ̂(s), we then have
\[
E\, e^{is(aY+bZ)} \;=\; E\, e^{i(as)Y}\, E\, e^{i(bs)Z} \;=\; \hat\psi\!\left(\sqrt{a^2+b^2}\,s\right).
\]
Thus, if we accept the fact that a distribution function is uniquely defined
by its characteristic function, aY + bZ is normally distributed with mean
0 and variance a² + b².
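In the same spirit, the sketch below (again ours, with arbitrary constants a and b and an arbitrary Monte Carlo sample size) compares the sample characteristic function of aY + bZ with e^{−(a²+b²)s²/2}, the characteristic function of a normal with mean 0 and variance a² + b².

```python
# Simulation sketch: for independent standard normal Y and Z, the empirical
# characteristic function of a*Y + b*Z should match exp(-(a**2 + b**2) * s**2 / 2).
# The constants a, b and the grid of s values are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
a, b = 2.0, -1.0
y = rng.standard_normal(1_000_000)
z = rng.standard_normal(1_000_000)
w = a * y + b * z                     # the linear combination aY + bZ

for s in (0.5, 1.0, 1.5):
    empirical = np.mean(np.exp(1j * s * w))      # imaginary part is ~0 by symmetry
    exact = np.exp(-(a**2 + b**2) * s**2 / 2)    # N(0, a^2 + b^2) characteristic function
    print(s, empirical.real, exact)
```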
Doubtless the reader is also familiar with the central limit theorem. For
the record, recall that if X_n is a sequence of i.i.d. random variables with
common mean µ and common variance σ², then
\[
\lim_{n\to\infty} \Pr\left( \frac{\sum_{j=1}^{n}(X_j - \mu)}{\sqrt{n\sigma^2}} \le x \right)
\;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{u^2}{2}}\, du.
\]
Of course, there is a certain inevitability to the limit being standard normal;
namely, if the X_n are standard normal to begin with, then the standardized
sum n^{−1/2} ∑_{j=1}^{n} X_j is also standard normal.
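To see the central limit theorem at work on a decidedly non-normal starting distribution, the sketch below (our example; the uniform summands, the value of n, the number of replicates, and the evaluation points are all arbitrary choices) standardizes sums of i.i.d. uniform(0,1) variables, which have mean 1/2 and variance 1/12, and compares the resulting empirical probabilities with the standard normal distribution function.

```python
# CLT sketch: standardized sums of i.i.d. uniform(0,1) variables should have
# probabilities close to the standard normal distribution function.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
n, reps = 100, 50_000
u = rng.uniform(size=(reps, n))                  # reps independent samples of size n
mu, sigma2 = 0.5, 1.0 / 12.0                     # mean and variance of uniform(0,1)
standardized = (u - mu).sum(axis=1) / np.sqrt(n * sigma2)

for x in (-1.0, 0.0, 1.0):
    empirical = np.mean(standardized <= x)          # estimate of Pr(standardized sum <= x)
    normal_cdf = 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal distribution function
    print(x, empirical, normal_cdf)
```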
B.2 Multivariate Normal Random Vectors
We now extend the univariate normal distribution to the multivariate nor-
mal distribution. Among the many possible definitions, we adopt the one
most widely used in stochastic simulation. Our point of departure will be
random vectors with independent, standard normal components. If such a
random vector X has n components, then its density is
\[
\prod_{j=1}^{n} \frac{1}{\sqrt{2\pi}}\, e^{-x_j^2/2}
\;=\; \left(\frac{1}{2\pi}\right)^{n/2} e^{-x^t x/2}.
\]
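As a quick numerical confirmation (our own sketch, with an arbitrarily chosen point x in R^4), the following code evaluates both sides of this identity, the product of univariate standard normal densities and the vector form (1/(2π))^{n/2} e^{−x^t x/2}, and verifies that they agree.

```python
# Density sketch: the product of univariate standard normal densities equals the
# joint density (1/(2*pi))**(n/2) * exp(-x.T @ x / 2).  The test point is arbitrary.
import numpy as np

x = np.array([0.3, -1.2, 0.8, 2.1])              # a point in R^n with n = 4
n = x.size

product_form = np.prod(np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))
vector_form = (1 / (2 * np.pi)) ** (n / 2) * np.exp(-x @ x / 2)
print(product_form, vector_form)                 # the two forms coincide
```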
Because the standard normal distribution has mean 0, variance 1, and
characteristic function e^{−s²/2}, it follows that X has mean vector 0, variance