Page 114 - Elements of Distribution Theory
                            Moments and Cumulants

                            Example 4.7 (Inverse gamma distribution). Let X denote a scalar random variable with
                            an absolutely continuous distribution with density function
                                                      x^{-3} exp(−1/x), x > 0;
                            this is an example of an inverse gamma distribution. The Laplace transform of this distri-
                            bution is given by

                                      L(t) = ∫_0^∞ exp(−tx) x^{-3} exp(−1/x) dx = 2t K_2(2√t), t ≥ 0.
                            Here K_2 denotes the modified Bessel function of order 2; see, for example, Temme
                            (1996).
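As a numerical sanity check (not part of the text), the closed form above can be compared against direct integration of the density; the sketch below assumes SciPy's `quad` integrator and `kv` (the modified Bessel function of the second kind).

```python
# Verify numerically that L(t) = 2 t K_2(2 sqrt(t)) for the inverse gamma
# density x^{-3} exp(-1/x) on (0, infinity).
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # kv(2, z) is the modified Bessel function K_2

def laplace_numeric(t):
    """Compute L(t) by numerical integration of exp(-t x) against the density."""
    integrand = lambda x: np.exp(-t * x) * x**-3 * np.exp(-1.0 / x)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

def laplace_closed_form(t):
    """The closed form 2 t K_2(2 sqrt(t)) from the text."""
    return 2.0 * t * kv(2, 2.0 * np.sqrt(t))

for t in (0.5, 1.0, 3.0):
    assert abs(laplace_numeric(t) - laplace_closed_form(t)) < 1e-6
```

Note also that the closed form tends to 1 as t → 0, as it must, since L(0) is the integral of the density.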

                              As might be expected, the properties of the Laplace transform of X are similar to those
                            of the characteristic function of X; in particular, if two random variables have the same
                            Laplace transform, then they have the same distribution.

                            Theorem 4.5. Let X and Y denote real-valued, nonnegative random variables. If L_X(t) =
                            L_Y(t) for all t > 0, then X and Y have the same distribution.

                            Proof. Let X_0 = exp{−X} and Y_0 = exp{−Y}. Then X_0 and Y_0 are random variables
                            taking values in the interval [0, 1]. Since L_X(t) = L_Y(t), it follows that E[X_0^t] = E[Y_0^t] for
                            all t > 0; in particular, this holds for t = 1, 2, .... Hence, for any polynomial g, E[g(X_0)] =
                            E[g(Y_0)].
                              From the Weierstrass Approximation Theorem (see Appendix 3), we know that any
                            continuous function on [0, 1] may be approximated to arbitrary accuracy by a polynomial.
                            More formally, let h denote a bounded, continuous function on [0, 1]. Given ε > 0, there
                            exists a polynomial g_ε such that
                                                       sup_{z∈[0,1]} |h(z) − g_ε(z)| ≤ ε.
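The Weierstrass Approximation Theorem has a constructive proof via Bernstein polynomials; the following illustration (an addition, not part of the text) approximates a bounded continuous function on [0, 1] by polynomials of increasing degree and watches the sup-norm error shrink.

```python
# Illustration: Bernstein polynomials give a constructive Weierstrass
# approximation of a continuous function h on [0, 1].
import numpy as np
from scipy.special import comb

def bernstein_approx(h, n, z):
    """Degree-n Bernstein polynomial of h, evaluated at the points z in [0, 1]."""
    k = np.arange(n + 1)
    # basis[i, k] = C(n, k) z_i^k (1 - z_i)^(n - k)
    basis = comb(n, k) * z[:, None]**k * (1 - z[:, None])**(n - k)
    return basis @ h(k / n)  # sum_k h(k/n) C(n, k) z^k (1 - z)^(n - k)

h = lambda z: np.exp(-z) * np.cos(3 * z)   # a bounded continuous test function
z = np.linspace(0.0, 1.0, 201)

# Sup-norm error over the grid for increasing polynomial degree.
errors = [np.max(np.abs(h(z) - bernstein_approx(h, n, z))) for n in (5, 50, 500)]
assert errors[0] > errors[1] > errors[2]
```

For any ε > 0, taking the degree n large enough makes the sup-norm error at most ε, which is exactly the polynomial g_ε used in the proof.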
                            Then

                              |E[h(X_0) − h(Y_0)] − E[g_ε(X_0) − g_ε(Y_0)]| = |E[h(X_0) − g_ε(X_0)] − E[h(Y_0) − g_ε(Y_0)]|
                                                                            ≤ E[|h(X_0) − g_ε(X_0)|] + E[|h(Y_0) − g_ε(Y_0)|]
                                                                            ≤ 2ε.
                            Since E[g_ε(X_0) − g_ε(Y_0)] = 0,

                                                      |E[h(X_0)] − E[h(Y_0)]| ≤ 2ε


                            and, since ε is arbitrary, it follows that E[h(X_0)] = E[h(Y_0)] for any bounded continuous
                            function h. It follows from Theorem 1.14 that X_0 and Y_0 have the same distribution. That
                            is, for any bounded continuous function f, E[f(X_0)] = E[f(Y_0)]. Let g denote a bounded,
                            continuous, real-valued function on the range of X and Y. Since X = −log(X_0) and
                            Y = −log(Y_0), g(X) = f(X_0) where f(t) = g(−log(t)), 0 < t < 1. Since g is bounded
                            and continuous, f is bounded and continuous as well; hence

                                              E[g(X)] = E[f(X_0)] = E[f(Y_0)] = E[g(Y)]

                            so that X and Y have the same distribution.
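As an informal numerical companion to Theorem 4.5 (an addition, not part of the text), one can construct the inverse gamma distribution of Example 4.7 in two different ways and check that the empirical Laplace transforms, and the empirical distributions, agree; the sketch assumes SciPy's `invgamma` family and NumPy's gamma sampler.

```python
# Two constructions of the same inverse gamma law: X = 1/G with G ~ Gamma(2, 1),
# and SciPy's invgamma(a=2), whose density is x^{-3} exp(-1/x).  Matching Laplace
# transforms should go with matching distributions, as Theorem 4.5 asserts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000
x = 1.0 / rng.gamma(shape=2.0, scale=1.0, size=n)        # X = 1/G, G ~ Gamma(2, 1)
y = stats.invgamma(a=2.0).rvs(size=n, random_state=rng)  # same law, sampled directly

def emp_laplace(sample, t):
    """Empirical Laplace transform E[exp(-t X)] estimated from a sample."""
    return np.exp(-t * sample).mean()

# Empirical Laplace transforms agree at several points t > 0 ...
for t in (0.5, 1.0, 2.0):
    assert abs(emp_laplace(x, t) - emp_laplace(y, t)) < 0.01

# ... and the empirical distributions are close in Kolmogorov-Smirnov distance.
assert stats.ks_2samp(x, y).statistic < 0.01
```

This is only a consistency check, of course; the theorem's content is the converse direction, that equality of the Laplace transforms forces equality of the distributions.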