Page 45 - Elements of Distribution Theory
                                                      1.8 Expectation

                        Theorem 1.10. Let X denote a random variable with range 𝒳.
                           (i) If g is a nonnegative real-valued function on 𝒳, then E[g(X)] ≥ 0, and E[g(X)] = 0
                               if and only if Pr[g(X) = 0] = 1.
                           (ii) If g is the constant function identically equal to c, then E[g(X)] = c.
                          (iii) If g_1, g_2, ..., g_m are real-valued functions on 𝒳 such that E[|g_j(X)|] < ∞,
                               j = 1, ..., m, then

                                        E[g_1(X) + ··· + g_m(X)] = E[g_1(X)] + ··· + E[g_m(X)].

                           (iv) Let g_1, g_2, ... denote an increasing sequence of nonnegative, real-valued functions
                               on 𝒳 with limit g. Then

                                                      lim_{n→∞} E[g_n(X)] = E[g(X)].
                           (v) Let g_1, g_2, ... denote a sequence of nonnegative, real-valued functions on 𝒳. Then

                                             E[lim inf_{n→∞} g_n(X)] ≤ lim inf_{n→∞} E[g_n(X)].
                           (vi) Let g_1, g_2, ... denote a sequence of real-valued functions on 𝒳. Suppose there exist
                               real-valued functions g and G, defined on 𝒳, such that, with probability 1,

                                                   |g_n(X)| ≤ G(X),  n = 1, 2, ...,

                               and

                                                       lim_{n→∞} g_n(X) = g(X).

                               If E[G(X)] < ∞, then

                                                      lim_{n→∞} E[g_n(X)] = E[g(X)].
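Part (iv) of the theorem (monotone convergence) can be checked by hand in a small discrete case. The following sketch assumes, as a hypothetical example not taken from the text, that X is uniform on {1, ..., 6} (a fair die) and uses the truncations g_n(x) = min(x, n), which increase pointwise to g(x) = x; exact rational arithmetic makes the convergence of E[g_n(X)] to E[X] = 7/2 visible.

```python
from fractions import Fraction

# Hypothetical example: X uniform on {1, ..., 6}, so E[X] = 7/2.
# The truncations g_n(x) = min(x, n) increase pointwise to g(x) = x,
# so by monotone convergence E[g_n(X)] increases to E[g(X)].

support = range(1, 7)
p = Fraction(1, 6)          # Pr(X = x) for each x in the support

def expect(g):
    """Expectation of g(X) under the discrete distribution above."""
    return sum(p * g(x) for x in support)

for n in range(1, 8):
    g_n = lambda x, n=n: min(x, n)
    print(n, expect(g_n))
# The printed expectations are nondecreasing in n and equal 7/2 for n >= 6.
```

The same finite-support computation also verifies parts (i)-(iii) directly, since every expectation here is a finite sum.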

                          An important property of expectation is that the expectation operator E(·) completely
                        defines a probability distribution. A formal statement of this fact is given in the following
                        theorem.

                        Theorem 1.11. Let X and Y denote random variables. Then

                                                    E[g(X)] = E[g(Y)]

                        for all bounded, continuous, real-valued functions g, if and only if X and Y have the same
                        probability distribution.
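The force of Theorem 1.11 is that two distinct distributions can always be separated by some bounded, continuous test function. As a minimal sketch (the distributions are hypothetical, chosen for illustration): X uniform on {−1, 1} and Y identically 0 have the same mean, yet g(x) = cos(x) detects the difference.

```python
import math

# Two hypothetical distributions with equal means but different laws:
# X uniform on {-1, 1}, and Y a point mass at 0.

def expect_X(g):
    return 0.5 * g(-1) + 0.5 * g(1)

def expect_Y(g):
    return g(0)

# The (unbounded) identity function does not separate them ...
print(expect_X(lambda x: x), expect_Y(lambda x: x))   # both 0

# ... but the bounded continuous function g(x) = cos(x) does,
# as Theorem 1.11 guarantees some such g must.
print(expect_X(math.cos))   # cos(1), about 0.5403
print(expect_Y(math.cos))   # 1.0
```

Moments alone need not determine a distribution, which is why the theorem quantifies over all bounded continuous g rather than over polynomials.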

                        Proof. If X and Y have the same distribution, then clearly E[g(X)] = E[g(Y)] for all
                        functions g for which the expectations exist. Since these expectations exist for bounded g,
                        the first part of the result follows.
                          Now suppose that E[g(X)] = E[g(Y)] for all bounded continuous g. We will show that
                        in this case X and Y have the same distribution. Note that we may assume that X and Y have
                        the same range, neglecting sets with probability 0, for if they do not, it is easy to construct
                        a function g for which the expected values differ.
                          The proof is based on the following idea. Note that the distribution function of a random
                        variable X can be written as the expected value of an indicator function:
                                                   Pr(X ≤ z) = E[I_{X≤z}].
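This indicator identity is immediate to verify in a finite case. A short sketch, again assuming the hypothetical fair-die distribution (X uniform on {1, ..., 6}): the expectation of the indicator of {X ≤ z} equals the distribution function at z for every z.

```python
from fractions import Fraction

# Hypothetical example: X uniform on {1, ..., 6}.
support = range(1, 7)
p = Fraction(1, 6)

def expect(g):
    """Expectation of g(X) under the discrete distribution above."""
    return sum(p * g(x) for x in support)

def cdf(z):
    """Distribution function Pr(X <= z), computed directly."""
    return sum(p for x in support if x <= z)

# Pr(X <= z) = E[I_{X <= z}] holds at every z.
for z in range(0, 7):
    indicator = lambda x, z=z: 1 if x <= z else 0
    assert expect(indicator) == cdf(z)

print(cdf(3))   # prints 1/2
```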