11.2 Basic Properties of Convergence in Distribution

and $t^T Y$ have the same distribution for any vector $t$. Similarly, a very useful consequence of Theorem 11.5 is the result that convergence in distribution of a sequence of $d$-dimensional random vectors $X_1, X_2, \ldots$ to a random vector $X$ may be established by showing that, for any vector $t \in \mathbb{R}^d$, $t^T X_1, t^T X_2, \ldots$ converges in distribution to $t^T X$. Thus, a multidimensional problem may be converted to a class of one-dimensional problems. This is often called the Cramér–Wold device.
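For example, suppose $X$ has a multivariate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$, so that, for each $t \in \mathbb{R}^d$, $t^T X$ is normally distributed with mean $t^T \mu$ and variance $t^T \Sigma t$. Then convergence in distribution of $X_1, X_2, \ldots$ to $X$ may be established by verifying, for each fixed $t$, the one-dimensional statement
$$t^T X_n \xrightarrow{D} t^T X \quad \text{as } n \to \infty,$$
that is, by showing that the scalar sequence $t^T X_1, t^T X_2, \ldots$ converges in distribution to a normal distribution with mean $t^T \mu$ and variance $t^T \Sigma t$.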
Theorem 11.6. Let $X, X_1, X_2, \ldots$ denote $d$-dimensional random vectors. Then
$$X_n \xrightarrow{D} X \quad \text{as } n \to \infty$$
if and only if
$$t^T X_n \xrightarrow{D} t^T X \quad \text{as } n \to \infty$$
for all $t \in \mathbb{R}^d$.
Proof. Let $\varphi_n$ denote the characteristic function of $X_n$ and let $\varphi$ denote the characteristic function of $X$. Then $t^T X_n$ has characteristic function
$$\tilde{\varphi}_n(s) = \varphi_n(st), \quad s \in \mathbb{R}$$
and $t^T X$ has characteristic function
$$\tilde{\varphi}(s) = \varphi(st), \quad s \in \mathbb{R}.$$

Suppose $X_n \xrightarrow{D} X$. Then
$$\varphi_n(t) \to \varphi(t) \quad \text{for all } t \in \mathbb{R}^d$$
so that, fixing $t$,
$$\varphi_n(st) \to \varphi(st) \quad \text{for all } s \in \mathbb{R},$$
proving that $t^T X_n \xrightarrow{D} t^T X$.

Now suppose that $t^T X_n \xrightarrow{D} t^T X$ for all $t \in \mathbb{R}^d$. Then
$$\varphi_n(st) \to \varphi(st) \quad \text{for all } s \in \mathbb{R}, \ t \in \mathbb{R}^d.$$
Taking $s = 1$ shows that
$$\varphi_n(t) \to \varphi(t) \quad \text{for all } t \in \mathbb{R}^d,$$
proving the result.
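As a brief computational sketch of how the device may be checked in practice (this is only an illustration, assuming the Python libraries NumPy and SciPy; the random vectors, projection vectors $t$, and variable names below are arbitrary choices, not taken from the text), the following code simulates standardized sample means of i.i.d. non-normal random vectors, which are approximately multivariate normal for large sample sizes, and compares a few one-dimensional projections $t^T X_n$ with the corresponding normal limits.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, d, reps = 500, 3, 2000

    def sample_standardized_mean():
        # One draw of sqrt(n) * (bar W_n - E[W]) for i.i.d. vectors W with
        # dependent, non-normal (exponential-based) components.
        W = rng.exponential(size=(n, d))
        W[:, 1] += W[:, 0]                 # make components 0 and 1 dependent
        mean = np.array([1.0, 2.0, 1.0])   # E[W] under this construction
        return np.sqrt(n) * (W.mean(axis=0) - mean)

    X = np.array([sample_standardized_mean() for _ in range(reps)])

    # Covariance matrix of a single vector W under the construction above.
    Sigma = np.array([[1.0, 1.0, 0.0],
                      [1.0, 2.0, 0.0],
                      [0.0, 0.0, 1.0]])

    # For each fixed t, t^T X_n should be approximately N(0, t^T Sigma t);
    # a small Kolmogorov-Smirnov distance is consistent with that limit.
    for t in (np.array([1.0, 0.0, 0.0]), np.array([1.0, -1.0, 2.0])):
        proj = X @ t
        sd = np.sqrt(t @ Sigma @ t)
        ks = stats.kstest(proj, "norm", args=(0.0, sd)).statistic
        print("t =", t, " KS distance:", round(ks, 3))

Small Kolmogorov–Smirnov distances for each choice of $t$ are consistent with convergence in distribution of the corresponding projections; by Theorem 11.6, such convergence for every $t \in \mathbb{R}^d$ is equivalent to convergence in distribution of the random vectors themselves.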
                          Thus, according to Theorem 11.6, convergence in distribution of the component random
                        variables of a random vector is a necessary, but not sufficient, condition for convergence in
                        distribution of the random vector. This is illustrated in the following example.

Example 11.10. Let $Z_1$ and $Z_2$ denote independent standard normal random variables and, for $n = 1, 2, \ldots$, let $X_n = Z_1 + \alpha_n Z_2$ and $Y_n = Z_2$, where $\alpha_1, \alpha_2, \ldots$ is a sequence of real numbers. Clearly, $Y_n \xrightarrow{D} Z_2$ as $n \to \infty$ and, if $\alpha_n \to \alpha$ as $n \to \infty$, for some real