
TRANSIENT ANALYSIS                       91

                approach to analysing success runs is given in Appendix C and uses generating
                functions.



                Example 3.2.3 A coin-tossing surprise

                A fair coin is repeatedly flipped until the last three tosses either show the combi-
                nation TTH or the combination THH. Here H means that the outcome of a toss
                is a head and T that it is a tail. What is the probability that the combination TTH
                occurs before the combination THH?
                  To answer this question, we define a Markov chain with eight states, including
                two absorbing states. Let state 0 mean the beginning of a game, state 1 = the first
                toss is H, state 2 = the first toss is T, state 3 = the last two tosses show HH,
                state 4 = the last two tosses show HT, state 5 = the last two tosses show TT, state
                6 = the last two tosses show TH, state 7 = the last three tosses show TTH and
                state 8 = the last three tosses show THH. The states 7 and 8 are taken to be absorbing.
                It is implicit in the definition of the states 3, 4, 5, 6 that the combinations TTH
                and THH have not appeared before. The Markov chain that describes the evolution
                of the state of the system has the one-step transition probabilities

                            p01 = p02 = 1/2,   p13 = p14 = 1/2,   p25 = p26 = 1/2,
                            p33 = p34 = 1/2,   p45 = p46 = 1/2,   p55 = p57 = 1/2,
                            p64 = p68 = 1/2,   p77 = 1,   p88 = 1,   the other pij = 0.
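As a sanity check, these one-step probabilities can be collected into a matrix and verified to be stochastic; iterating the matrix then shows the chain being driven into the absorbing states. This is an illustrative sketch (not from the book), assuming numpy is available:

```python
import numpy as np

# One-step transition matrix for states 0..8; unlisted entries are 0.
# From state 6 (last two tosses TH) a tail gives last two tosses HT,
# i.e. state 4, while a head completes THH, i.e. state 8.
P = np.zeros((9, 9))
P[0, 1] = P[0, 2] = 0.5
P[1, 3] = P[1, 4] = 0.5
P[2, 5] = P[2, 6] = 0.5
P[3, 3] = P[3, 4] = 0.5
P[4, 5] = P[4, 6] = 0.5
P[5, 5] = P[5, 7] = 0.5
P[6, 4] = P[6, 8] = 0.5
P[7, 7] = 1.0   # TTH has occurred: absorbing
P[8, 8] = 1.0   # THH has occurred: absorbing

# Every row must sum to 1 for P to be a proper transition matrix.
assert np.allclose(P.sum(axis=1), 1.0)

# After many steps essentially all probability mass starting from
# state 0 sits in the two absorbing states 7 and 8.
pn = np.linalg.matrix_power(P, 200)
print(pn[0, 7], pn[0, 8])  # absorption probabilities from state 0
```

The entry `pn[0, 7]` approximates the probability f0 computed below.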
                The Markov chain will ultimately be absorbed in one of the states 7 and 8 (this
                fact can formally be proved by proceeding as in the proof of Theorem 3.2.2 below
                and replacing the states 7 and 8 by a single absorbing state). Denote by f i the
                probability that the Markov chain is ultimately absorbed in state 7 starting from
                state i. The probability f 0 gives the desired probability that the combination TTH
                occurs before the combination THH. The probabilities f 0 , . . . , f 6 satisfy a system
                of linear equations. The equation for f i follows by conditioning on the next state
                after the current state i. This gives

                         f0 = (1/2)f1 + (1/2)f2,   f1 = (1/2)f3 + (1/2)f4,   f2 = (1/2)f5 + (1/2)f6,
                         f3 = (1/2)f3 + (1/2)f4,   f4 = (1/2)f5 + (1/2)f6,
                         f5 = (1/2)f5 + (1/2) × 1,   f6 = (1/2)f4 + (1/2) × 0.
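This linear system can also be solved numerically. Writing f = Qf + b, where Q holds the transition probabilities among the transient states 0, ..., 6 and b the one-step absorption probabilities into state 7, gives (I − Q)f = b. A minimal sketch, assuming numpy:

```python
import numpy as np

# Transient-to-transient transition probabilities (states 0..6).
Q = np.zeros((7, 7))
Q[0, 1] = Q[0, 2] = 0.5
Q[1, 3] = Q[1, 4] = 0.5
Q[2, 5] = Q[2, 6] = 0.5
Q[3, 3] = Q[3, 4] = 0.5
Q[4, 5] = Q[4, 6] = 0.5
Q[5, 5] = 0.5
Q[6, 4] = 0.5

# One-step absorption into state 7: only possible from state 5 (TT),
# where a head completes TTH.
b = np.zeros(7)
b[5] = 0.5

# f = Qf + b  <=>  (I - Q) f = b
f = np.linalg.solve(np.eye(7) - Q, b)
print(f)  # f0, ..., f6
```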
                The solution of these equations is (f0, . . . , f6) = (2/3, 2/3, 2/3, 2/3, 2/3, 1, 1/3). The desired
                probability is thus 2/3, a result that surprises many people. Can you give a simple
                explanation why the sought probability is not equal to 1/2?