Page 439 - A First Course In Stochastic Models

434                           APPENDICES

                using the generating-function technique to be discussed in Appendix C. Omitting
                the details, we state
                           E(L_n) = −pn + ((1 − p)/(1 − 2p)) [ (2(1 − p))^n − 1 ],   n ≥ 1,

                           E(A_n) = (1/(1 − 2p)) E(L_n),   n ≥ 1.

                Indeed E(L_n)/E(A_n) = 1 − 2p = 1/37, in accordance with the fact that the
                house percentage in European roulette is 2.702%.
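As a quick numerical sanity check (my own sketch, not from the text), the formulas above can be evaluated directly with p = 18/37, the win probability of an even-money bet in European roulette:

```python
# Evaluate E(L_n) and E(A_n) from the formulas above for p = 18/37.
p = 18 / 37

def EL(n):
    # E(L_n) = -pn + ((1-p)/(1-2p)) * ((2(1-p))^n - 1)
    return -p * n + (1 - p) / (1 - 2 * p) * ((2 * (1 - p)) ** n - 1)

def EA(n):
    # E(A_n) = (1/(1-2p)) * E(L_n)
    return EL(n) / (1 - 2 * p)

# Consistency: after one unit bet, E(A_1) = 1 and E(L_1) = 1 - 2p = 1/37.
assert abs(EA(1) - 1.0) < 1e-12
assert abs(EL(1) - 1 / 37) < 1e-12

# The ratio E(L_n)/E(A_n) equals 1 - 2p = 1/37 for every n,
# i.e. a house percentage of about 2.702%.
for n in range(1, 25):
    assert abs(EL(n) / EA(n) - 1 / 37) < 1e-12
assert abs(100 * (1 - 2 * p) - 2.702) < 1e-2
```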

                Convolution formula
                Let X_1 and X_2 be two independent, non-negative random variables with respective
                probability distribution functions F_1(x) and F_2(x). For ease of exposition,
                assume that F_2(x) has a probability density f_2(x). Then, by a direct application
                of the law of total probability, we have the convolution formula
                           P{X_1 + X_2 ≤ x} = ∫_0^x F_1(x − y) f_2(y) dy,   x ≥ 0.
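As an illustration (my own sketch, not from the text), take X_1 and X_2 to be independent unit exponentials, so F_1(x) = 1 − e^(−x) and f_2(y) = e^(−y); the convolution integral must then reproduce the Erlang-2 distribution function 1 − e^(−x)(1 + x):

```python
import math

# Illustrative assumption: X1, X2 independent exponentials with rate 1.
def F1(x):
    return 1.0 - math.exp(-x)   # distribution function of X1

def f2(y):
    return math.exp(-y)         # density of X2

def conv_cdf(x, n=20000):
    # Midpoint-rule approximation of  integral_0^x F1(x - y) f2(y) dy.
    h = x / n
    return h * sum(F1(x - (i + 0.5) * h) * f2((i + 0.5) * h) for i in range(n))

x = 2.0
exact = 1.0 - math.exp(-x) * (1.0 + x)   # Erlang-2 distribution function at x
assert abs(conv_cdf(x) - exact) < 1e-6
```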

                Moments of a non-negative random variable
                Let N be a non-negative, integer-valued random variable. A useful formula is
                                  E(N) = Σ_{k=0}^∞ P{N > k}.                  (A.6)
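A small numerical illustration of (A.6) (my own sketch): for a geometric N with P{N = j} = (1 − q) q^j, j = 0, 1, . . . , the tail probabilities are P{N > k} = q^(k+1), and the tail sum indeed reproduces E(N) = q/(1 − q):

```python
# Check of (A.6) for a geometric N with P{N = j} = (1 - q) * q**j, j = 0, 1, ...
q = 0.3

# P{N > k} = q**(k + 1); truncate the tail sum where terms are negligible.
tail_sum = sum(q ** (k + 1) for k in range(200))

mean = q / (1 - q)   # E(N) computed directly for this distribution
assert abs(tail_sum - mean) < 1e-12
```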
                To verify this result, write Σ_{k=0}^∞ P{N > k} = Σ_{k=0}^∞ Σ_{j=k+1}^∞ P{N = j} and
                interchange the order of summation. The relation (A.6) can be generalized. For
                any non-negative random variable X with probability distribution function F(x),
                                E(X) = ∫_0^∞ [1 − F(x)] dx.               (A.7)
                A probabilistic proof of (A.7) is as follows. Imagine that X is the lifetime of a
                machine. Define the indicator variable I(t) by I(t) = 1 if the machine is still
                working at time t and by I(t) = 0 otherwise. Then, by E[I(t)] = P{I(t) = 1}
                and P{I(t) = 1} = P{X > t}, it follows that
                      E(X) = E[ ∫_0^∞ I(t) dt ] = ∫_0^∞ E[I(t)] dt = ∫_0^∞ P{X > t} dt,
                which proves (A.7). The interchange of the order of expectation and integration is
                justified by the non-negativity of I(t). The result (A.7) can be extended to


                           E(X^k) = k ∫_0^∞ x^{k−1} [1 − F(x)] dx,   k = 1, 2, . . . .  (A.8)
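Both (A.7) and (A.8) can be checked numerically. As an illustration (my own sketch, assuming an exponential lifetime with rate λ, so 1 − F(x) = e^(−λx), E(X) = 1/λ and E(X^2) = 2/λ^2):

```python
import math

lam = 1.5   # assumed rate of an exponential lifetime X; 1 - F(x) = exp(-lam*x)

def tail_moment(k, T=60.0, n=600000):
    # Midpoint-rule approximation of  k * integral_0^T x^(k-1) (1 - F(x)) dx,
    # which by (A.8) approximates E(X^k); the tail beyond T is negligible here.
    h = T / n
    return k * h * sum(((i + 0.5) * h) ** (k - 1) * math.exp(-lam * (i + 0.5) * h)
                       for i in range(n))

assert abs(tail_moment(1) - 1.0 / lam) < 1e-6        # (A.7): E(X)   = 1/lam
assert abs(tail_moment(2) - 2.0 / lam ** 2) < 1e-6   # (A.8): E(X^2) = 2/lam^2
```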