
Example 2.6 (Multinomial distribution). Let $X = (X_1, \ldots, X_m)$ denote a random vector with a multinomial distribution, as in Example 2.2. Then $X$ has frequency function
$$
p(x_1, \ldots, x_m) = \binom{n}{x_1, x_2, \ldots, x_m} \theta_1^{x_1} \theta_2^{x_2} \cdots \theta_m^{x_m},
$$
for $x_j = 0, 1, \ldots, n$, $j = 1, \ldots, m$, $\sum_{j=1}^{m} x_j = n$; here $\theta_1, \ldots, \theta_m$ are nonnegative constants satisfying $\theta_1 + \cdots + \theta_m = 1$.
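As a concrete illustration, the following minimal sketch evaluates this frequency function numerically (Python with NumPy; the values $n = 5$ and $\theta = (0.2, 0.3, 0.5)$ are illustrative choices, not taken from the text):

```python
import numpy as np
from math import factorial

def multinomial_pmf(x, theta):
    """Evaluate p(x_1, ..., x_m) for counts x; n is recovered as sum(x)."""
    x = np.asarray(x)
    coef = factorial(int(x.sum())) / np.prod([factorial(int(k)) for k in x])
    return coef * np.prod(np.asarray(theta, dtype=float) ** x)

theta = [0.2, 0.3, 0.5]                   # illustrative; nonnegative, summing to 1
print(multinomial_pmf([1, 2, 2], theta))  # p(1, 2, 2) with n = 5: 0.135
```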
According to Example 2.2, for $j = 1, \ldots, m$, $X_j$ has a binomial distribution with parameters $n$ and $\theta_j$, so that $X_j$ has frequency function
$$
\binom{n}{x_j} \theta_j^{x_j} (1 - \theta_j)^{n - x_j}, \qquad x_j = 0, 1, \ldots, n.
$$
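The claim that each marginal is binomial can also be checked by simulation; a minimal sketch, assuming NumPy and SciPy are available (the parameters, seed, and sample size are illustrative):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)            # illustrative seed
n, theta = 5, [0.2, 0.3, 0.5]
draws = rng.multinomial(n, theta, size=100_000)

# Empirical distribution of X_1 against the binomial(n, theta_1) pmf
for k in range(n + 1):
    print(k, (draws[:, 0] == k).mean(), binom.pmf(k, n, theta[0]))
```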
Suppose there exists a $j = 1, 2, \ldots, m$ such that $0 < \theta_j < 1$; it then follows from part (iv) of Theorem 2.2 that $X_1, X_2, \ldots, X_m$ are not independent. This is most easily seen by noting that $\Pr(X_j = 0) > 0$ for all $j = 1, 2, \ldots, m$, while
$$
\Pr(X_1 = X_2 = \cdots = X_m = 0) = 0,
$$
since the components must sum to $n$.
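The same argument can be seen numerically; a short sketch under the same illustrative parameters:

```python
import numpy as np

n, theta = 5, np.array([0.2, 0.3, 0.5])

# Each marginal probability Pr(X_j = 0) = (1 - theta_j)^n is positive, so
# under independence their product would equal Pr(X_1 = ... = X_m = 0).
marginals = (1 - theta) ** n
print(np.prod(marginals))   # strictly positive

# But the joint event is impossible: the counts must sum to n > 0, so
# Pr(X_1 = ... = X_m = 0) = 0, contradicting independence.
```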
If all $\theta_j$ are either 0 or 1, then $X_1, \ldots, X_m$ are independent. To see this, suppose that $\theta_1 = 1$ and $\theta_2 = \cdots = \theta_m = 0$. Then, with probability 1, $X_1 = n$ and $X_2 = \cdots = X_m = 0$. Hence,
$$
E[g_1(X_1) \cdots g_m(X_m)] = g_1(n) g_2(0) \cdots g_m(0) = E[g_1(X_1)] \cdots E[g_m(X_m)],
$$
and independence follows from part (ii) of Theorem 2.2.


Random variables $X_1, X_2, \ldots, X_n$ are said to be independent and identically distributed if, in addition to being independent, each $X_j$ has the same marginal distribution. Thus, in Example 2.5, $X_1, X_2, X_3$ are independent and identically distributed. The assumption of independent, identically distributed random variables is often used in the specification of the distribution of a vector $(X_1, X_2, \ldots, X_n)$.

Example 2.7 (Independent standard exponential random variables). Let $X_1, X_2, \ldots, X_n$ denote independent, identically distributed, real-valued random variables such that each $X_j$ has a standard exponential distribution; see Example 1.16. Then the vector $(X_1, \ldots, X_n)$ has an absolutely continuous distribution with density function
$$
p(x_1, \ldots, x_n) = \prod_{j=1}^{n} \exp(-x_j) = \exp\!\left(-\sum_{j=1}^{n} x_j\right), \qquad x_j > 0, \ j = 1, \ldots, n.
$$
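To make the factorization concrete, here is a minimal sketch evaluating the joint log-density of an i.i.d. standard exponential sample both ways (Python with NumPy; the sample size and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)            # illustrative seed
x = rng.exponential(scale=1.0, size=4)    # i.i.d. standard exponential draws

# Independence lets the joint density factor into marginal densities,
# so the joint log-density reduces to -sum_j x_j.
print(-x.sum())
print(np.sum(np.log(np.exp(-x))))         # term-by-term version, same value
```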
It is often necessary to refer to infinite sequences of random variables, particularly in the development of certain large-sample approximations. An important result, beyond the scope of this book, is that such a sequence can be defined in a logically consistent manner. See, for example, Feller (1971, Chapter IV) or Billingsley (1995, Section 36). As might be expected, technical issues, such as measurability of sets, become much more difficult in this setting. An infinite sequence of random variables $X_1, X_2, \ldots$ is said to be independent if each finite subset of $\{X_1, X_2, \ldots\}$ is independent.