Page 502 - Discrete Mathematics and Its Applications

7.4 Expected Value and Variance  481


                                                        To prove part (ii), note that


                                                        E(aX + b) = \sum_{s \in S} p(s)(aX(s) + b)

                                                                  = a \sum_{s \in S} p(s)X(s) + b \sum_{s \in S} p(s)

                                                                  = aE(X) + b, because \sum_{s \in S} p(s) = 1.
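The identity just proved can be checked numerically. The following sketch (assuming a fair six-sided die as the sample space, with X(s) = s and illustrative constants a = 3 and b = 5, none of which come from the text) verifies E(aX + b) = aE(X) + b with exact rational arithmetic:

```python
from fractions import Fraction

# Illustrative sample space: a fair six-sided die, with X(s) = s.
S = range(1, 7)
p = Fraction(1, 6)          # uniform probability of each outcome

a, b = 3, 5                 # arbitrary constants for the check
E_X = sum(p * s for s in S)                  # E(X) = 7/2
E_aXb = sum(p * (a * s + b) for s in S)      # E(aX + b), computed directly

assert E_aXb == a * E_X + b                  # linearity holds exactly
```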

                                                        Examples 4 and 5 illustrate how to use Theorem 3.

                                      EXAMPLE 4      Use Theorem 3 to find the expected value of the sum of the numbers that appear when a pair of
                                                     fair dice is rolled. (This was done in Example 3 without the benefit of this theorem.)

                                                     Solution: Let X_1 and X_2 be the random variables with X_1((i, j)) = i and X_2((i, j)) = j, so
                                                     that X_1 is the number appearing on the first die and X_2 is the number appearing on the second die.
                                                     It is easy to see that E(X_1) = E(X_2) = 7/2 because both equal (1 + 2 + 3 + 4 + 5 + 6)/6 =
                                                     21/6 = 7/2. The sum of the two numbers that appear when the two dice are rolled is
                                                     X_1 + X_2. By Theorem 3, the expected value of the sum is E(X_1 + X_2) = E(X_1) + E(X_2) =
                                                     7/2 + 7/2 = 7.                                                                 ▲
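Both computations in this example, the direct one over all 36 equally likely outcomes and the one via Theorem 3, can be reproduced in a few lines (a sketch using exact fractions; the variable names are illustrative):

```python
from fractions import Fraction
from itertools import product

# Direct computation: average of i + j over all 36 outcomes (i, j).
p = Fraction(1, 36)
E_direct = sum(p * (i + j) for i, j in product(range(1, 7), repeat=2))

# Via linearity: E(X1 + X2) = E(X1) + E(X2), each equal to 7/2.
E_single = sum(Fraction(1, 6) * k for k in range(1, 7))

assert E_direct == 2 * E_single == 7   # both methods agree
```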



                                      EXAMPLE 5      In the proof of Theorem 2 we found the expected value of the number of successes when n
                                                     independent Bernoulli trials are performed, where p is the probability of success on each trial
                                                     by direct computation. Show howTheorem 3 can be used to derive this result where the Bernoulli
                                                     trials are not necessarily independent.

                                                     Solution: Let X_i be the random variable with X_i((t_1, t_2, ..., t_n)) = 1 if t_i is a suc-
                                                     cess and X_i((t_1, t_2, ..., t_n)) = 0 if t_i is a failure. The expected value of X_i is E(X_i) =
                                                     1 · p + 0 · (1 − p) = p for i = 1, 2, ..., n. Let X = X_1 + X_2 + ··· + X_n, so that X counts
                                                     the number of successes when these n Bernoulli trials are performed. Theorem 3, applied to the
                                                     sum of n random variables, shows that E(X) = E(X_1) + E(X_2) + ··· + E(X_n) = np.  ▲
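To see concretely that independence is not needed, consider a deliberately dependent pair of trials in which the second trial always copies the first (a hypothetical construction, not from the text). Each trial still succeeds with probability p, so linearity still gives E(X) = np:

```python
from fractions import Fraction

# Two perfectly dependent Bernoulli trials: the outcome (t1, t2) is
# (1, 1) with probability p and (0, 0) with probability 1 - p, so each
# trial succeeds with probability p but the trials are not independent.
p = Fraction(1, 3)    # illustrative value of p
n = 2
outcomes = {(1, 1): p, (0, 0): 1 - p}

# E(X) computed directly over this (dependent) sample space.
E_X = sum(prob * (t1 + t2) for (t1, t2), prob in outcomes.items())

assert E_X == n * p   # linearity gives np even without independence
```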


                                                        We can take advantage of the linearity of expectations to find the solutions of many seemingly
                                                     difficult problems. The key step is to express a random variable whose expectation we wish to
                                                     find as the sum of random variables whose expectations are easy to find. Examples 6 and 7
                                                     illustrate this technique.

                                      EXAMPLE 6      Expected Value in the Hatcheck Problem A new employee checks the hats of n people at a
                                                     restaurant, forgetting to put claim check numbers on the hats. When customers return for their
                                                     hats, the checker gives them back hats chosen at random from the remaining hats. What is the
                                                     expected number of hats that are returned correctly?

                                                     Solution: Let X be the random variable that equals the number of people who receive the correct
                                                     hat from the checker. Let X_i be the random variable with X_i = 1 if the ith person receives the
                                                     correct hat and X_i = 0 otherwise. It follows that

                                                        X = X_1 + X_2 + ··· + X_n.

                                                     Because it is equally likely that the checker returns any of the hats to this person, it follows that
                                                     the probability that the ith person receives the correct hat is 1/n. Consequently, by Theorem 1,
                                                     for all i we have

                                                        E(X_i) = 1 · p(X_i = 1) + 0 · p(X_i = 0) = 1 · 1/n + 0 = 1/n.
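The probability 1/n computed above can be checked exhaustively for small n by averaging the number of correctly returned hats over all n! equally likely assignments (the helper name `expected_correct` is illustrative, not from the text):

```python
from fractions import Fraction
from itertools import permutations

def expected_correct(n):
    """Exact average number of fixed points over all n! permutations,
    i.e., the expected number of hats returned to their owners."""
    perms = list(permutations(range(n)))
    total = sum(sum(1 for i, h in enumerate(sigma) if i == h)
                for sigma in perms)
    return Fraction(total, len(perms))

# Linearity predicts E(X) = n * (1/n) = 1 for every n.
assert all(expected_correct(n) == 1 for n in range(1, 7))
```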