


                                                    Probability has many applications to genetics, as Examples 6 and 7 illustrate.
                                 EXAMPLE 6      Assume, as in Example 4, that each of the four ways a family can have two children is equally
                                                likely. Are the events E, that a family with two children has two boys, and F, that a family with
                                                two children has at least one boy, independent?

                                                Solution: Because E = {BB}, we have p(E) = 1/4. In Example 4 we showed that p(F) = 3/4
                                                and that p(E ∩ F) = 1/4. But p(E)p(F) = (1/4) · (3/4) = 3/16. Therefore p(E ∩ F) ≠ p(E)p(F),
                                                so the events E and F are not independent.                                     ▲
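                                                Readers who want to check such a computation by brute force can do so with a few lines of
                                                Python. The following sketch (an illustration added here, not part of the text) enumerates the
                                                four equally likely two-child families and compares p(E ∩ F) with p(E)p(F):

                                                    from itertools import product
                                                    from fractions import Fraction

                                                    outcomes = list(product("BG", repeat=2))        # BB, BG, GB, GG, each with probability 1/4
                                                    prob = Fraction(1, len(outcomes))

                                                    E = {o for o in outcomes if o.count("B") == 2}  # two boys
                                                    F = {o for o in outcomes if "B" in o}           # at least one boy

                                                    p_E, p_F, p_EF = prob * len(E), prob * len(F), prob * len(E & F)
                                                    print(p_EF, p_E * p_F, p_EF == p_E * p_F)       # prints 1/4 3/16 False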
                                 EXAMPLE 7      Are the events E, that a family with three children has children of both sexes, and F, that this
                                                family has at most one boy, independent? Assume that the eight ways a family can have three
                                                children are equally likely.

                                                Solution: By assumption, each of the eight ways a family can have three children,
                                                BBB, BBG, BGB, BGG, GBB, GBG, GGB, and GGG, has a probability of 1/8. Because
                                                E ={BBG, BGB, BGG, GBB, GBG, GGB}, F ={BGG, GBG, GGB, GGG}, and
                                                E ∩ F ={BGG, GBG, GGB}, it follows that p(E) = 6/8 = 3/4, p(F) = 4/8 = 1/2, and
                                                p(E ∩ F) = 3/8. Because

                                                    p(E)p(F) = (3/4) · (1/2) = 3/8,

                                                it follows that p(E ∩ F) = p(E)p(F), so E and F are independent. (This conclusion may seem
                                                surprising. Indeed, if we change the number of children, the conclusion may no longer hold.
                                                See Exercise 27.)                                                              ▲
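                                                The same brute-force check can be repeated for families with n children. The sketch below,
                                                added here for illustration (the function name is ours), confirms independence for n = 3 and
                                                shows that the conclusion fails for the other small values of n, in line with the remark about
                                                Exercise 27:

                                                    from itertools import product
                                                    from fractions import Fraction

                                                    def independent(n):
                                                        # E: children of both sexes; F: at most one boy (all 2**n outcomes equally likely)
                                                        outcomes = list(product("BG", repeat=n))
                                                        prob = Fraction(1, len(outcomes))
                                                        E = {o for o in outcomes if "B" in o and "G" in o}
                                                        F = {o for o in outcomes if o.count("B") <= 1}
                                                        return prob * len(E & F) == (prob * len(E)) * (prob * len(F))

                                                    for n in range(2, 6):
                                                        print(n, independent(n))   # independence holds only for n = 3 in this range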
                                                PAIRWISE AND MUTUAL INDEPENDENCE We can also define the independence of
                                                more than two events. However, there are two different types of independence, given in
                                                Definition 5.


                              DEFINITION 5       The events E_1, E_2, ..., E_n are pairwise independent if and only if
                                                 p(E_i ∩ E_j) = p(E_i)p(E_j) for all pairs of integers i and j with 1 ≤ i < j ≤ n. These events
                                                 are mutually independent if p(E_{i_1} ∩ E_{i_2} ∩ ··· ∩ E_{i_m}) = p(E_{i_1})p(E_{i_2}) ··· p(E_{i_m})
                                                 whenever i_j, j = 1, 2, ..., m, are integers with 1 ≤ i_1 < i_2 < ··· < i_m ≤ n and m ≥ 2.

                                                    From Definition 5, we see that every set of n mutually independent events is also pairwise
                                                independent. However, n pairwise independent events are not necessarily mutually independent,
                                                as we see in Exercise 25 in the Supplementary Exercises. Many theorems about n events include
                                                the hypothesis that these events are mutually independent, and not just pairwise independent.
                                                We will introduce several such theorems later in this chapter.
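                                                A classic illustration of the distinction (not the example of Exercise 25, but in the same
                                                spirit) uses two fair coin flips: let E1 be the event that the first flip is heads, E2 that the
                                                second flip is heads, and E3 that the two flips agree. The sketch below, added here for
                                                illustration, verifies that these three events are pairwise independent but not mutually
                                                independent:

                                                    from itertools import combinations, product
                                                    from fractions import Fraction

                                                    outcomes = list(product("HT", repeat=2))       # two fair flips, each outcome has probability 1/4
                                                    prob = Fraction(1, len(outcomes))
                                                    p = lambda A: prob * len(A)

                                                    E1 = {o for o in outcomes if o[0] == "H"}      # first flip is heads
                                                    E2 = {o for o in outcomes if o[1] == "H"}      # second flip is heads
                                                    E3 = {o for o in outcomes if o[0] == o[1]}     # the two flips agree

                                                    pairwise = all(p(A & B) == p(A) * p(B) for A, B in combinations([E1, E2, E3], 2))
                                                    mutual = pairwise and p(E1 & E2 & E3) == p(E1) * p(E2) * p(E3)
                                                    print(pairwise, mutual)                        # prints True False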

                                                Bernoulli Trials and the Binomial Distribution

                                                Suppose that an experiment can have only two possible outcomes. For instance, when a bit is
                                                generated at random, the possible outcomes are 0 and 1. When a coin is flipped, the possible
                                                outcomes are heads and tails. Each performance of an experiment with two possible outcomes is
                                                called a Bernoulli trial, after James Bernoulli, who made important contributions to probability
                                                theory. In general, a possible outcome of a Bernoulli trial is called a success or a failure. If p
                                                is the probability of a success and q is the probability of a failure, it follows that p + q = 1.
                                                    Many problems can be solved by determining the probability of k successes when an
                                                experiment consists of n mutually independent Bernoulli trials. (Bernoulli trials are mutually
                                                independent if the conditional probability of success on any given trial is p, given any
                                                information whatsoever about the outcomes of the other trials.) Consider Example 8.
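                                                The probability of exactly k successes in n independent Bernoulli trials, each with success
                                                probability p and failure probability q = 1 − p, is C(n, k) p^k q^(n−k); this is the formula the
                                                rest of this section develops. A minimal sketch of that computation (the function name and the
                                                sample values are ours, added for illustration):

                                                    from math import comb

                                                    def binomial_prob(n, k, p):
                                                        # C(n, k) * p**k * q**(n - k): probability of exactly k successes
                                                        # in n independent Bernoulli trials with success probability p
                                                        q = 1 - p
                                                        return comb(n, k) * p**k * q**(n - k)

                                                    print(binomial_prob(7, 4, 0.5))   # 35/128 = 0.2734375, e.g. exactly 4 heads in 7 fair flips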