
Example 7.4 (Weibull distribution). Let X denote a random variable with a standard exponential distribution so that X has an absolutely continuous distribution with distribution function
$$F_X(x) = 1 - \exp(-x), \quad x > 0$$
and density
$$p_X(x) = \exp(-x), \quad x > 0.$$
Let $Y = X^{1/\theta}$ where $\theta > 0$. The function $g(x) = x^{1/\theta}$ has inverse $x = y^{\theta}$. It follows that Y has an absolutely continuous distribution with distribution function
$$F_Y(y) = 1 - \exp(-y^{\theta}), \quad y > 0$$
and density function
$$p_Y(y) = \theta y^{\theta - 1} \exp(-y^{\theta}), \quad y > 0.$$
The distribution of Y is called a standard Weibull distribution with index $\theta$.
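
As an aside not in the original text, this transformation is easy to check numerically. The sketch below assumes NumPy and an arbitrary choice of $\theta = 2$; it transforms standard exponential draws and compares the empirical distribution function of Y with the Weibull distribution function derived above.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                                    # arbitrary index chosen for illustration
x = rng.exponential(scale=1.0, size=100_000)   # X ~ standard exponential
y = x ** (1.0 / theta)                         # Y = X^(1/theta)

# Compare the empirical distribution function of Y with the Weibull
# distribution function F_Y(y) = 1 - exp(-y^theta) derived above.
grid = np.linspace(0.1, 3.0, 30)
ecdf = np.array([(y <= t).mean() for t in grid])
cdf = 1.0 - np.exp(-grid ** theta)

print("max |empirical - theoretical| discrepancy:", np.abs(ecdf - cdf).max())
```

The reported discrepancy should shrink toward zero as the sample size grows, consistent with the distribution function obtained in the example.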



                                                7.3 Functions of a Random Vector

In this section, we consider the extension of Theorem 7.1 to the case of a random vector. Let X denote a random vector with range $\mathcal{X}$; consider a function g on $\mathcal{X}$ and let Y = g(X). Because of the possible complexity of the function g, even when it is one-to-one, an analogue of part (i) of Theorem 7.1 is not available. Part (ii) of Theorem 7.1, which does not use the dimension of X in any meaningful way, is simply generalized to the vector case. Part (iii) of the theorem, which is essentially the change-of-variable formula for integration, is also easily generalized by using the change-of-variable formula for integrals on a multidimensional space.
Recall that if g is a function from $\mathbb{R}^d$ to $\mathbb{R}^d$, then the Jacobian matrix of g is the $d \times d$ matrix with $(i, j)$th element given by $\partial g_i / \partial x_j$, where $g_i$ denotes the ith component of the vector-valued function g; this matrix will be denoted by $\partial g / \partial x$. The Jacobian of g at x is the absolute value of the determinant of the Jacobian matrix at x, and is denoted by
$$\left| \frac{\partial g(x)}{\partial x} \right|.$$
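
As a concrete illustration of these definitions (an addition, not from the text), the following sketch uses SymPy to compute the Jacobian matrix and the Jacobian for the polar-coordinate map $g(r, \varphi) = (r \cos \varphi, r \sin \varphi)$; the map and the symbol names are arbitrary choices for the example.

```python
import sympy as sp

# Polar-coordinate map g(r, phi) = (r cos phi, r sin phi), chosen only for illustration.
r, phi = sp.symbols('r phi', positive=True)
g = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi)])

jacobian_matrix = g.jacobian([r, phi])       # (i, j)th entry is dg_i/dx_j
jacobian = sp.Abs(jacobian_matrix.det())     # absolute value of the determinant

print(jacobian_matrix)        # Matrix([[cos(phi), -r*sin(phi)], [sin(phi), r*cos(phi)]])
print(sp.simplify(jacobian))  # r, since r > 0
```

The output reproduces the familiar fact that the Jacobian of the polar-coordinate map is r.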

Theorem 7.2. Let X denote a d-dimensional random vector with an absolutely continuous distribution with density function $p_X$. Suppose that Y = g(X), where $g: \mathcal{X} \to \mathbb{R}^d$ denotes a one-to-one, continuously differentiable function. Let $\mathcal{X}_0$ denote an open subset of $\mathcal{X}$ such that $\Pr(X \in \mathcal{X}_0) = 1$ and such that the Jacobian of g is nonzero on $\mathcal{X}_0$. Then Y = g(X) has an absolutely continuous distribution with density function $p_Y$, given by
$$p_Y(y) = p_X(h(y)) \left| \frac{\partial h(y)}{\partial y} \right|, \quad y \in \mathcal{Y}_0,$$
where $h = g^{-1}$ and $\mathcal{Y}_0 = g(\mathcal{X}_0)$.
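
To make the theorem concrete (this example is an addition, not taken from the text), the sketch below applies the formula to the map $g(x_1, x_2) = (x_1 + x_2,\ x_1/(x_1 + x_2))$ of two independent standard exponential variables. Here $h(y_1, y_2) = (y_1 y_2,\ y_1(1 - y_2))$, the Jacobian of h is $y_1$, and the theorem gives $p_Y(y_1, y_2) = y_1 \exp(-y_1)$ on $y_1 > 0$, $0 < y_2 < 1$; the code checks this density by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# X = (X1, X2) with independent standard exponential components;
# g(x1, x2) = (x1 + x2, x1 / (x1 + x2)), so h(y1, y2) = (y1*y2, y1*(1 - y2)).
x = rng.exponential(size=(200_000, 2))
y1 = x[:, 0] + x[:, 1]
y2 = x[:, 0] / y1

# The Jacobian matrix of h is [[y2, y1], [1 - y2, -y1]] with determinant -y1,
# so Theorem 7.2 gives p_Y(y1, y2) = exp(-y1) * y1 for y1 > 0, 0 < y2 < 1.
def p_Y(u, v):
    return np.exp(-u) * u * ((u > 0) & (v > 0) & (v < 1))

# Monte Carlo check: the proportion of samples falling in a small rectangle
# should be close to the density at its centre times its area.
u0, v0, du, dv = 1.0, 0.3, 0.1, 0.1
in_box = (y1 > u0) & (y1 < u0 + du) & (y2 > v0) & (y2 < v0 + dv)
print("empirical proportion:", in_box.mean())
print("density formula * area:", p_Y(u0 + du / 2, v0 + dv / 2) * du * dv)
```

The two printed values should agree to within Monte Carlo error, as predicted by the change-of-variable formula.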