Suppose that $d = 2$. Let $a$ and $b$ denote constants and let $Z = aX_1 + bX_2$. Then the cumulant-generating function of $Z$ is given by
\[
K_Z(s) = K((as, bs)).
\]
It follows that the second cumulant of $Z$, $\mathrm{Var}(Z)$, is given by
\[
\mathrm{Var}(Z) = \frac{\partial^2}{\partial s^2} K_X((as, bs)) \bigg|_{s=0}
= a^2 \frac{\partial^2}{\partial t_1^2} K_X(t_1, 0) \bigg|_{t_1=0}
+ b^2 \frac{\partial^2}{\partial t_2^2} K_X(0, t_2) \bigg|_{t_2=0}
+ 2ab\, \frac{\partial^2}{\partial t_1\, \partial t_2} K_X(t_1, t_2) \bigg|_{t=0}.
\]
Hence, by part (i) of the theorem,
\[
\mathrm{Var}(Z) = a^2 \mathrm{Var}(X_1) + b^2 \mathrm{Var}(X_2) + 2ab\,\kappa_{11};
\]
comparing this with the standard expansion $\mathrm{Var}(aX_1 + bX_2) = a^2 \mathrm{Var}(X_1) + b^2 \mathrm{Var}(X_2) + 2ab\,\mathrm{Cov}(X_1, X_2)$, it follows that $\kappa_{11} = \mathrm{Cov}(X_1, X_2)$.
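Since the argument identifies $\kappa_{11}$ with a mixed second partial derivative of the cumulant-generating function at $t = 0$, the identity is easy to check symbolically in a case where that function is available in closed form. The following sketch (in Python with SymPy, and assuming the standard bivariate normal cumulant-generating function $K(t) = t^T\mu + \tfrac{1}{2}t^T\Sigma t$, which is not part of the text above) recovers the covariance entry from the mixed partial.

```python
# Symbolic check of kappa_11 = Cov(X_1, X_2) for a bivariate normal test case,
# whose cumulant-generating function K(t) = t'mu + t'Sigma t / 2 is assumed here.
import sympy as sp

t1, t2 = sp.symbols("t1 t2")
mu1, mu2, s11, s22, s12 = sp.symbols("mu1 mu2 sigma11 sigma22 sigma12")

# CGF of a bivariate normal with mean (mu1, mu2) and covariance entries
# sigma11, sigma22, sigma12.
K = (mu1 * t1 + mu2 * t2
     + sp.Rational(1, 2) * (s11 * t1**2 + 2 * s12 * t1 * t2 + s22 * t2**2))

# Mixed second partial derivative evaluated at t = 0.
kappa_11 = sp.diff(K, t1, t2).subs({t1: 0, t2: 0})
print(kappa_11)   # sigma12, i.e. Cov(X_1, X_2)
```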
Now consider the case of general $d$; without loss of generality we may assume that $j = 1$ and $k = 2$. Part (ii) of the theorem now follows from an argument analogous to the one used in the proof of part (i): the cumulant-generating function of $(X_1, X_2)$ is given by $K_X((t_1, t_2, 0, \ldots, 0))$ so that, from the result above, $\mathrm{Cov}(X_1, X_2) = \kappa_{110\cdots 0}$.
Consider part (iii). Without loss of generality we may take $j = 1$ and $k = 2$. Let $K_1$ denote the cumulant-generating function of $X_1$ and let $K_2$ denote the cumulant-generating function of $(X_2, \ldots, X_d)$. Then
\[
K(t) = K_1(t_1) + K_2(\bar t)
\]
where $t = (t_1, \ldots, t_d)$ and $\bar t = (t_2, \ldots, t_d)$. It follows that
\[
\frac{\partial^2 K}{\partial t_1\, \partial t_2}(t) = 0,
\]
proving the result.
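A small symbolic illustration of this step: whenever the cumulant-generating function separates as $K(t) = K_1(t_1) + K_2(t_2)$, any mixed partial in $t_1$ and $t_2$ vanishes identically. The particular component distributions below (Poisson and standard normal, with their well-known cumulant-generating functions) are arbitrary choices made only for the sketch.

```python
# If X_1 is independent of the remaining components, K(t) = K_1(t_1) + K_2(t_2),
# so the mixed partial in t_1 and t_2 is identically zero.
import sympy as sp

t1, t2, lam = sp.symbols("t1 t2 lam")

K1 = lam * (sp.exp(t1) - 1)      # Poisson(lam) cumulant-generating function
K2 = t2**2 / 2                   # standard normal cumulant-generating function
K = K1 + K2                      # CGF of the pair under independence

print(sp.diff(K, t1, t2))        # 0 for all (t1, t2), so the mixed cumulant is 0
```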
The proof of part (iv) follows from the same argument used in the scalar random variable
case (Theorem 4.15).
Example 4.24 (Multinomial distribution). Let $X = (X_1, \ldots, X_m)$ denote a random vector with a multinomial distribution, as in Example 2.2. The frequency function of the distribution is given by
\[
p(x_1, \ldots, x_m) = \binom{n}{x_1, x_2, \ldots, x_m} \theta_1^{x_1} \theta_2^{x_2} \cdots \theta_m^{x_m},
\]
for $x_j = 0, 1, \ldots, n$, $j = 1, \ldots, m$, $\sum_{j=1}^m x_j = n$; here $\theta_1, \ldots, \theta_m$ are nonnegative constants satisfying $\theta_1 + \cdots + \theta_m = 1$.
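As a quick sanity check on this frequency function, one can verify numerically that it sums to 1 over the support $\{x : x_j \geq 0,\ \sum_j x_j = n\}$. The following Python sketch does so by direct enumeration, for arbitrarily chosen values $n = 4$, $m = 3$, and $\theta = (0.2, 0.3, 0.5)$.

```python
# Direct enumeration check that the multinomial frequency function sums to 1.
from math import factorial
from itertools import product

n, theta = 4, (0.2, 0.3, 0.5)   # arbitrary illustrative values
m = len(theta)

def p(x):
    """Multinomial frequency function at x = (x_1, ..., x_m)."""
    coef = factorial(n)
    for xj in x:
        coef //= factorial(xj)
    prob = coef
    for xj, th in zip(x, theta):
        prob *= th ** xj
    return prob

total = sum(p(x) for x in product(range(n + 1), repeat=m) if sum(x) == n)
print(total)   # approximately 1.0
```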
For $t = (t_1, \ldots, t_m)$,
\[
\mathrm{E}\left[\exp\left(\sum_{j=1}^m t_j X_j\right)\right]
= \sum_{x_1, \ldots, x_m} \binom{n}{x_1, \ldots, x_m} \prod_{j=1}^m \exp(t_j x_j)\, \theta_j^{x_j}
\]
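By the multinomial theorem, this sum evaluates to $\bigl(\sum_{j=1}^m \theta_j e^{t_j}\bigr)^n$, the familiar closed form of the multinomial moment-generating function. The sketch below checks this numerically by enumerating the support; the values of $n$, $\theta$, and $t$ are arbitrary, and the frequency function comes from scipy.stats.multinomial rather than the hand-coded version above.

```python
# Numeric check: the enumerated expectation agrees with (sum_j theta_j e^{t_j})^n.
from itertools import product
from math import exp
from scipy.stats import multinomial

n, theta, t = 4, (0.2, 0.3, 0.5), (0.1, -0.4, 0.7)   # arbitrary illustrative values
m = len(theta)

lhs = sum(exp(sum(tj * xj for tj, xj in zip(t, x))) * multinomial.pmf(x, n, theta)
          for x in product(range(n + 1), repeat=m) if sum(x) == n)
rhs = sum(th * exp(tj) for th, tj in zip(theta, t)) ** n
print(lhs, rhs)   # the two values agree up to floating-point error
```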