Proof: Let X be the random variable equal to the number of successes in n trials. By Theorem 2 of Section 7.2 we see that $p(X = k) = C(n, k)p^k q^{n-k}$. Hence, we have

$$
\begin{aligned}
E(X) &= \sum_{k=1}^{n} k\,p(X = k) && \text{by Theorem 1} \\
&= \sum_{k=1}^{n} k\,C(n, k)\,p^k q^{n-k} && \text{by Theorem 2 in Section 7.2} \\
&= \sum_{k=1}^{n} n\,C(n-1, k-1)\,p^k q^{n-k} && \text{by Exercise 21 in Section 6.4} \\
&= np \sum_{k=1}^{n} C(n-1, k-1)\,p^{k-1} q^{n-k} && \text{factoring $np$ from each term} \\
&= np \sum_{j=0}^{n-1} C(n-1, j)\,p^j q^{n-1-j} && \text{shifting index of summation with } j = k-1 \\
&= np\,(p + q)^{n-1} && \text{by the binomial theorem} \\
&= np && \text{because } p + q = 1.
\end{aligned}
$$
This completes the proof because it shows that the expected number of successes in n mutually
independent Bernoulli trials is np.
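As a quick numerical sanity check (our own illustration, not part of the text), the closed form $E(X) = np$ can be compared against the defining sum computed directly from the binomial distribution. The following Python sketch, with function names of our own choosing, does this for a few parameter values.

```python
from math import comb

def binomial_expectation(n: int, p: float) -> float:
    """Compute E(X) = sum over k of k * C(n, k) * p^k * q^(n-k) directly."""
    q = 1 - p
    return sum(k * comb(n, k) * p**k * q**(n - k) for k in range(n + 1))

# Compare the defining sum with the closed form np from Theorem 2.
for n, p in [(10, 0.5), (20, 0.3), (7, 0.9)]:
    print(f"n={n}, p={p}: direct sum = {binomial_expectation(n, p):.6f}, "
          f"np = {n * p:.6f}")
```

Up to floating-point rounding, the direct sum agrees with $np$ in every case, as the proof guarantees.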
We will also show that the hypothesis in Theorem 2 that the Bernoulli trials are mutually independent is not necessary.
Linearity of Expectations
Theorem 3 tells us that expected values are linear. For example, the expected value of the sum
of random variables is the sum of their expected values. We will find this property exceedingly
useful.
THEOREM 3  If $X_i$, $i = 1, 2, \ldots, n$, with $n$ a positive integer, are random variables on $S$, and if $a$ and $b$ are real numbers, then
(i) $E(X_1 + X_2 + \cdots + X_n) = E(X_1) + E(X_2) + \cdots + E(X_n)$
(ii) $E(aX + b) = aE(X) + b$.
Proof: Part (i) follows for n = 2 directly from the definition of expected value, because

$$
\begin{aligned}
E(X_1 + X_2) &= \sum_{s \in S} p(s)\bigl(X_1(s) + X_2(s)\bigr) \\
&= \sum_{s \in S} p(s)X_1(s) + \sum_{s \in S} p(s)X_2(s) \\
&= E(X_1) + E(X_2).
\end{aligned}
$$
The case for n random variables follows easily by mathematical induction using the case of two
random variables. (We leave it to the reader to complete the proof.)
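Note that the proof never uses independence of $X_1$ and $X_2$; this is exactly why the independence hypothesis of Theorem 2 can be dropped, as remarked earlier. A minimal Python sketch (our own illustration, not from the text) checks both parts of Theorem 3 on a small sample space with *dependent* random variables: two draws without replacement from {1, 2, 3}, taken to be equally likely.

```python
from itertools import product
from fractions import Fraction

# Sample space: ordered pairs drawn without replacement from {1, 2, 3}.
# The two coordinates are dependent random variables.
outcomes = [(x, y) for x, y in product([1, 2, 3], repeat=2) if x != y]
prob = Fraction(1, len(outcomes))  # uniform probability on the 6 ordered pairs

def expectation(f):
    """E(f) = sum over s in S of p(s) * f(s), as in the definition above."""
    return sum(prob * f(s) for s in outcomes)

def X1(s):  # value of the first draw
    return s[0]

def X2(s):  # value of the second draw
    return s[1]

# Part (i): E(X1 + X2) equals E(X1) + E(X2) even though X1, X2 are dependent.
print(expectation(lambda s: X1(s) + X2(s)))   # 4
print(expectation(X1) + expectation(X2))      # 2 + 2 = 4

# Part (ii): E(aX + b) = aE(X) + b, with a = 3 and b = 1 chosen arbitrarily.
a, b = 3, 1
print(expectation(lambda s: a * X1(s) + b))   # 7
print(a * expectation(X1) + b)                # 3 * 2 + 1 = 7
```

Here $X_1$ and $X_2$ are clearly dependent (knowing the first draw rules out one value for the second), yet the expected values still add, illustrating that linearity of expectations requires no independence assumption.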

