
where, as usual, I(X > x) stands for the indicator function of the event X > x. If X happens to be one, then I(X > 0) = 1, but I(X > x) = 0 for all x = 1, 2, .... Hence the rhs of (2.2.12) is also one; that is, when X = 1, the two sides of (2.2.12) agree. Now if X happens to be two, then I(X > 0) = I(X > 1) = 1, but I(X > x) = 0 for all x = 2, 3, .... Hence the rhs of (2.2.12) is then two; that is, when X = 2, the two sides of (2.2.12) again agree. The reader should check that (2.2.12) holds whatever value of X is observed from the set {1, 2, ...}.
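   The identity (2.2.12) is easy to verify numerically. The following is a minimal sketch (ours, not part of the original text; Python, with illustrative variable names) that checks X = Σ_{x=0}^∞ I(X > x) pointwise for a few observed values, truncating the sum once the indicators vanish:

    # Pointwise check of the identity (2.2.12): X = sum over x >= 0 of I(X > x).
    # Once x >= X the indicator I(X > x) vanishes, so the infinite sum is effectively finite.
    for observed_x in [1, 2, 5, 10]:
        indicator_sum = sum(1 if observed_x > x else 0 for x in range(observed_x + 20))
        print(observed_x, indicator_sum)  # the two columns always agree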
   For any fixed x ∈ {0, 1, 2, ...}, the random variable I(X > x) takes only one of the values 0 or 1. Hence, from Definition 2.2.1 of the expectation, or using Example 2.2.3 which follows shortly, we can claim that E[I(X > x)] = P(X > x), which equals 1 – F(x).
   Now the desired result will follow provided that the expectation operation "E" can be taken inside the summation, that is, provided that

    E[Σ_{x=0}^∞ I(X > x)] = Σ_{x=0}^∞ E[I(X > x)].

Since I(X > x) ≥ 0 w.p.1, the Monotone Convergence Theorem (refer to Exercise 2.2.24) indeed justifies exchanging the expectation and the infinite summation. The proof is now complete. ∎
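   As an illustration of the theorem just proved (our sketch, not from the text), one can check E(X) = Σ_{x=0}^∞ P(X > x) numerically for a Geometric(p) variable supported on {1, 2, ...}, for which P(X > x) = (1 – p)^x and E(X) = 1/p are available in closed form:

    # Tail-sum check of the theorem: E(X) = sum over x >= 0 of P(X > x),
    # for X ~ Geometric(p) on {1, 2, ...}, where P(X > x) = (1 - p)**x and E(X) = 1/p.
    p = 0.3
    tail_sum = sum((1 - p) ** x for x in range(2000))  # terms decay geometrically, so truncation is safe
    print(tail_sum, 1 / p)  # both print (approximately) 3.333...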
   Exercise 2.2.21 provides an alternative sufficient condition for the conclusion of Theorem 2.2.5 to hold. Exercises 2.2.22-2.2.23 give interesting applications of Theorems 2.2.5-2.2.6.
   At this point we consider some of the standard distributions from Section 1.7 and derive expressions for their means and variances. The exact calculation of a mean and a variance needs special care and attention to detail, and in what follows we initially emphasize this.


                           2.2.1   The Bernoulli Distribution
From (1.7.1) recall that a Bernoulli(p) random variable X takes the values 1 and 0 with probabilities p and 1 – p, respectively. Then, according to Definitions 2.2.1-2.2.3 and using (2.2.9), we have

    μ = E(X) = (1)(p) + (0)(1 – p) = p,

    E(X²) = (1)²(p) + (0)²(1 – p) = p,

and hence

    σ² = V(X) = E(X²) – μ² = p – p² = p(1 – p).

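   As a quick sanity check (a sketch of ours, not part of the text), a Monte Carlo simulation can be used to compare the sample mean and variance of Bernoulli(p) draws against p and p(1 – p):

    import random

    # Monte Carlo check that E(X) = p and V(X) = p(1 - p) for X ~ Bernoulli(p).
    random.seed(0)
    p, n = 0.4, 200_000
    draws = [1 if random.random() < p else 0 for _ in range(n)]
    mean = sum(draws) / n
    var = sum((d - mean) ** 2 for d in draws) / n
    print(mean, p)           # sample mean vs. p = 0.4
    print(var, p * (1 - p))  # sample variance vs. p(1 - p) = 0.24
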
   Example 2.2.3 (Exercise 1.7.22 Continued) Consider an arbitrary random variable Y which may be discrete or continuous. Let A be an arbitrary event (Borel set) defined through the random variable Y. For example, the event A may stand for the set where Y ≥ 2, or the set {|Y| > 4} ∪ {|Y| ≤ 1/2}, or one of