
of (X_i, X_j). Let

        σ_ij = Cov(X_i, X_j),   i, j = 1, ..., d.

It is often convenient to work with these covariances in matrix form. Hence, let Σ denote
the d × d matrix with (i, j)th element given by σ_ij; note that this matrix is symmetric. The
matrix Σ is known as the covariance matrix of X. The following theorem gives some basic
properties of covariance matrices; the proof is left as an exercise.


Theorem 4.4. Let X denote a d-dimensional random vector such that E(X^T X) < ∞ and
let Σ denote the covariance matrix of X. Then
   (i) Var(a^T X) = a^T Σ a, a ∈ R^d;
  (ii) Σ is nonnegative definite.
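A brief sketch of one standard argument: writing a^T X = a_1 X_1 + ··· + a_d X_d and using
the bilinearity of covariance,

        Var(a^T X) = ∑_{i=1}^d ∑_{j=1}^d a_i a_j Cov(X_i, X_j) = a^T Σ a.

Since the left-hand side is a variance, a^T Σ a ≥ 0 for every a ∈ R^d, which is exactly the
statement that Σ is nonnegative definite.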


                                4.3 Laplace Transforms and Moment-Generating Functions
                        Let X denote a real-valued random variable. Consider the expected value E[exp{tX}] where
                        t is a given number, −∞ < t < ∞. Since exp{tX} is a nonnegative random variable, this
                        expected value always exists, although it may be infinite.
The expected value of exp{tX} is closely related to the characteristic function of X,
the expected value of exp{itX}. However, there is an important difference between the
functions exp{tx} and exp{itx}. Although exp{itx} is bounded, with |exp{itx}| = 1, the
function exp{tx} is unbounded for any nonzero t and grows very fast as either x → ∞
(if t > 0) or x → −∞ (if t < 0). Hence, for many random variables, the set of values of t
for which E[exp{tX}] is finite is quite small.
Suppose there exists a number δ > 0 such that E[exp{tX}] < ∞ for |t| < δ. In this case,
we say that X, or, more precisely, the distribution of X, has moment-generating function

        M_X(t) = E[exp{tX}],   |t| < δ.

As noted above, it is not unusual for a random variable not to have a moment-generating
function.
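For instance, if X has a standard normal distribution, then E[exp{tX}] = exp{t²/2} < ∞ for
every t, so that M_X(t) = exp{t²/2}, −∞ < t < ∞, and any δ > 0 may be used. If instead X
has a standard Cauchy distribution, with density 1/[π(1 + x²)], −∞ < x < ∞, then, for any
t > 0, exp{tx}/(1 + x²) → ∞ as x → ∞, so that E[exp{tX}] = ∞; a similar argument applies
for t < 0. Hence, a Cauchy random variable does not have a moment-generating function.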

                        Laplace transforms
The situation is a little better if X is nonnegative. In that case, tX ≤ 0 whenever t ≤ 0,
so that exp{tX} ≤ 1 and E[exp{tX}] < ∞ for all t ≤ 0. Hence, we may define a function

        L(t) = E[exp{−tX}],   t ≥ 0;

we will refer to this function as the Laplace transform of the distribution or, more simply,
the Laplace transform of X.
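Note that if X is nonnegative and has a moment-generating function M_X, then L(t) = M_X(−t)
for 0 ≤ t < δ; the Laplace transform, however, is available for every nonnegative random
variable, whether or not a moment-generating function exists.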


Example 4.6 (Gamma distribution). Consider the gamma distribution with parameters α
and β, as discussed in Example 3.4. The Laplace transform of this distribution is given by

        L(t) = ∫_0^∞ exp(−tx) [β^α/Γ(α)] x^{α−1} exp(−βx) dx = β^α/(β + t)^α,   t ≥ 0.
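The final equality follows from the fact that, for β + t > 0,

        ∫_0^∞ x^{α−1} exp{−(β + t)x} dx = Γ(α)/(β + t)^α,

which is simply the statement that the gamma density with parameters α and β + t
integrates to 1.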