
it follows that $\Pr(X \in \mathcal{X}(\tau)) = 1/n!$ for each $\tau$ and, hence, that
$$
E[h(R)] = \frac{1}{n!} \sum_{\tau} h(\tau),
$$
proving part (ii).
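The uniformity of $R$ over the $n!$ permutations can also be illustrated numerically. The following Python sketch (an illustration added here, not part of the text; the choice of $n$ and the number of trials are arbitrary) estimates the probability of each rank vector for i.i.d. uniform draws and compares it with $1/n!$.

    import itertools
    import math
    import random

    def ranks(xs):
        # ranks[j] = rank of xs[j] among xs (1-based); ties occur with
        # probability 0 for an absolutely continuous distribution
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return tuple(r)

    n, trials = 3, 200_000
    counts = {p: 0 for p in itertools.permutations(range(1, n + 1))}
    for _ in range(trials):
        x = [random.random() for _ in range(n)]  # i.i.d., absolutely continuous
        counts[ranks(x)] += 1

    for perm, cnt in sorted(counts.items()):
        print(perm, round(cnt / trials, 4),
              "vs 1/n! =", round(1 / math.factorial(n), 4))

Each estimated frequency should be close to $1/3! \doteq 0.1667$, up to Monte Carlo error.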
Part (iii) follows along similar lines. Let $g$ denote a bounded function of $X^{(\cdot)}$ and let $h$ denote a bounded function of $R$.
Note that

$$
\begin{aligned}
E[g(X^{(\cdot)})h(R)] &= \sum_{\tau} E\big[g(X^{(\cdot)})h(R(X)) I_{\{X \in \mathcal{X}(\tau)\}}\big] \\
&= \sum_{\tau} E\big[g(\tau X)h(\tau) I_{\{X \in \mathcal{X}(\tau)\}}\big] \\
&= \sum_{\tau} h(\tau)\, E\big[g(\tau X) I_{\{\tau X \in \mathcal{X}(\tau_0)\}}\big] \\
&= \sum_{\tau} h(\tau)\, E\big[g(X) I_{\{X \in \mathcal{X}(\tau_0)\}}\big] \\
&= E\big[g(X) I_{\{X \in \mathcal{X}(\tau_0)\}}\big] \sum_{\tau} h(\tau) \\
&= n!\, E\big[g(X) I_{\{X \in \mathcal{X}(\tau_0)\}}\big]\, \frac{1}{n!} \sum_{\tau} h(\tau) \\
&= E[g(X^{(\cdot)})]\, E[h(R)],
\end{aligned}
$$

                            proving part (iii).
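As a rough numerical check of this factorization (again an added sketch, with $g$, $h$, and the normal sampling distribution chosen arbitrarily), one can estimate $E[g(X^{(\cdot)})h(R)]$ and the product $E[g(X^{(\cdot)})]\,E[h(R)]$ by simulation; the two estimates should agree up to Monte Carlo error.

    import math
    import random

    def order_and_ranks(xs):
        # return the order statistics of xs and the rank vector of xs
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return [xs[i] for i in order], r

    n, trials = 4, 200_000
    g = lambda xo: math.sin(sum(xo))        # bounded function of the order statistics
    h = lambda r: 1.0 if r[0] == 1 else 0.0  # bounded function of the ranks

    s_gh = s_g = s_h = 0.0
    for _ in range(trials):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]
        xo, r = order_and_ranks(x)
        s_gh += g(xo) * h(r)
        s_g += g(xo)
        s_h += h(r)

    print(s_gh / trials, "vs", (s_g / trials) * (s_h / trials))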
Finally, part (iv) follows from the fact that any statistic $T \equiv T(X)$ may be written as $\bar{T}(R, X^{(\cdot)})$ and, by part (iii) of the theorem,
$$
E[T \mid R = r] = E[\bar{T}(R, X^{(\cdot)}) \mid R = r] = E[\bar{T}(r, X^{(\cdot)})] = E\big[T(X_{(r_1)}, \ldots, X_{(r_n)})\big].
$$
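Part (iv) can likewise be checked by simulation. In the sketch below (an added illustration; the statistic $T$, the rank vector $r$, and the normal sampling distribution are arbitrary choices), $E[T \mid R = r]$ is estimated by averaging $T(X)$ over draws whose rank vector equals $r$, and compared with an unconditional estimate of $E[T(X_{(r_1)}, \ldots, X_{(r_n)})]$.

    import random

    def order_and_ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return [xs[i] for i in order], tuple(r)

    T = lambda x: x[0] * x[1] + x[2]  # an arbitrary statistic of X = (X_1, X_2, X_3)
    r = (2, 1, 3)                     # the rank vector conditioned on
    n, trials = 3, 400_000

    cond_sum = cond_cnt = 0.0
    uncond_sum = 0.0
    for _ in range(trials):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]
        xo, rv = order_and_ranks(x)
        if rv == r:                   # condition on the event R = r
            cond_sum += T(x)
            cond_cnt += 1
        uncond_sum += T([xo[ri - 1] for ri in r])  # T(X_(r_1), ..., X_(r_n))

    print(cond_sum / cond_cnt, "vs", uncond_sum / trials)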
Example 7.27 (Mean and variance of linear rank statistics). Let $R_1, R_2, \ldots, R_n$ denote the ranks of a sequence of independent and identically distributed real-valued random variables, each distributed according to an absolutely continuous distribution. Consider a statistic of the form
$$
T = \sum_{j=1}^{n} a_j R_j
$$
where $a_1, a_2, \ldots, a_n$ is a sequence of constants. Here we consider determination of the mean and variance of $T$.
Note that each $R_j$ has the same marginal distribution. Since
$$
\sum_{j=1}^{n} R_j = \frac{n(n+1)}{2}, \tag{7.2}
$$
it follows that $E(R_j) = (n+1)/2$, $j = 1, \ldots, n$. Also, each pair $(R_i, R_j)$ has the same marginal distribution, so that $\mathrm{Cov}(R_i, R_j)$ does not depend on the pair $(i, j)$.
Let $\sigma^2 = \mathrm{Var}(R_j)$ and $c = \mathrm{Cov}(R_i, R_j)$, $i \neq j$. By (7.2),
$$
\mathrm{Var}\left(\sum_{j=1}^{n} R_j\right) = n\sigma^2 + n(n-1)c = 0
$$
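Since $(R_1, \ldots, R_n)$ is uniformly distributed over the $n!$ permutations of $1, \ldots, n$, both $\sigma^2$ and $c$ can be computed by exact enumeration for small $n$. The sketch below (added here, not part of the text) does so, confirming that $n\sigma^2 + n(n-1)c = 0$ and agreeing with the standard closed forms $\sigma^2 = (n^2 - 1)/12$ and $c = -(n+1)/12$.

    import itertools
    import math

    n = 4
    perms = list(itertools.permutations(range(1, n + 1)))
    N = math.factorial(n)  # each permutation has probability 1/n!

    mean = (n + 1) / 2
    sigma2 = sum((p[0] - mean) ** 2 for p in perms) / N        # Var(R_1)
    c = sum((p[0] - mean) * (p[1] - mean) for p in perms) / N  # Cov(R_1, R_2)

    print("sigma^2 =", sigma2, " (n^2 - 1)/12 =", (n * n - 1) / 12)
    print("c =", c, " -(n + 1)/12 =", -(n + 1) / 12)
    print("n*sigma^2 + n*(n-1)*c =", n * sigma2 + n * (n - 1) * c)  # should be 0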