
EXERCISES

three times his stake with respective probabilities 0.25 and 0.15. His strategy is to bet $5 each time his bankroll is more than $50 and $10 otherwise. Define an appropriate Markov chain to compute the probability that Joe reaches his goal. Also calculate the expected number of bets placed by Joe until he has gone broke or reached his goal.
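The opening lines of this exercise fall on the previous page, so the starting capital and the goal in the sketch below are hypothetical placeholders, as is the reading that winning "two (three) times his stake" raises the capital by twice (three times) the stake. Under those assumptions, a minimal Python sketch of the absorbing-chain computation, using the fundamental matrix for both the absorption probability and the expected number of bets:

```python
import numpy as np

# Hypothetical placeholders: the exercise's actual starting capital and
# goal appear on the previous page.
START, GOAL = 20, 100
P_WIN2, P_WIN3 = 0.25, 0.15          # win two / three times the stake
P_LOSE = 1 - P_WIN2 - P_WIN3

def stake(c):
    # Bet $5 when the bankroll exceeds $50, otherwise $10 (as in the exercise).
    return 5 if c > 50 else 10

states = list(range(0, GOAL + 1, 5))  # capital moves in multiples of $5
idx = {c: i for i, c in enumerate(states)}
P = np.zeros((len(states), len(states)))
for c in states:
    if c == 0 or c == GOAL:           # broke or goal reached: absorbing
        P[idx[c], idx[c]] = 1.0
        continue
    s = stake(c)
    for gain, prob in [(2 * s, P_WIN2), (3 * s, P_WIN3), (-s, P_LOSE)]:
        # Capital is clamped to [0, GOAL]; overshooting counts as reaching
        # the goal (a simplification of this sketch).
        P[idx[c], idx[min(max(c + gain, 0), GOAL)]] += prob

transient = [idx[c] for c in states if 0 < c < GOAL]
Q = P[np.ix_(transient, transient)]
N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix
reach = N @ P[np.ix_(transient, [idx[GOAL]])]   # absorption at the goal
bets = N @ np.ones(len(transient))              # expected number of bets
k = transient.index(idx[START])
print(reach[k, 0], bets[k])
```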
3.9 A training program consists of three parts, each having a length of one month. Fifty percent of the starting students pass the first part at the end of the first month, 30% drop out before the end of the first month and 20% take the first part again. Seventy percent of this last group pass the first part after a second trial and the other 30% still drop out. Eighty percent of the students taking the second part pass it after the first trial, 10% drop out after the first trial and the other 10% move on after a second trial of the second part. Any student entering the third part of the training program will complete it successfully. Calculate the probability that a starting student will be successful.
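A small Python sketch of one chain that answers this exercise: the state records where the student stands in the program, with "dropped out" and "success" absorbing (the state encoding is illustrative, not the book's):

```python
import numpy as np

# States: 0 = part 1 (first trial), 1 = part 1 (second trial),
#         2 = part 2 (first trial), 3 = part 2 (second trial),
#         4 = part 3, 5 = dropped out, 6 = success.
P = np.zeros((7, 7))
P[0, [2, 5, 1]] = [0.5, 0.3, 0.2]   # pass / drop out / retake part 1
P[1, [2, 5]] = [0.7, 0.3]           # second trial of part 1
P[2, [4, 5, 3]] = [0.8, 0.1, 0.1]   # pass / drop out / retake part 2
P[3, 4] = 1.0                       # moves on after second trial of part 2
P[4, 6] = 1.0                       # part 3 is always completed successfully
P[5, 5] = P[6, 6] = 1.0             # absorbing states

# Push the initial distribution through the chain until all probability
# mass has been absorbed, then read off the success probability.
dist = np.zeros(7)
dist[0] = 1.0
for _ in range(100):
    dist = dist @ P
print(dist[6])   # 0.576
```

Multiplying out by hand gives the same number: a student passes part 1 with probability 0.5 + 0.2 · 0.7 = 0.64, passes part 2 with probability 0.8 + 0.1 = 0.9, and part 3 always, so 0.64 · 0.9 = 0.576.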
3.10 Consider a finite-state Markov chain {X_n} with no two disjoint closed sets. The matrix of one-step transition probabilities is called doubly stochastic when for each column the sum of the column elements equals 1. Verify that the equilibrium distribution of such a Markov chain is the uniform distribution.
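One line of reasoning behind the claim: with u = (1/n, . . . , 1/n), the j-th component of uP is (1/n) Σ_i p_ij = 1/n whenever column j sums to 1, so uP = u, and the absence of two disjoint closed sets makes this equilibrium distribution unique. A quick numerical check in Python with an arbitrary doubly stochastic matrix:

```python
import numpy as np

# An arbitrary doubly stochastic matrix: rows and columns both sum to 1.
P = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.1, 0.4],
              [0.3, 0.4, 0.3]])
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

n = P.shape[0]
u = np.full(n, 1 / n)          # the uniform distribution
print(np.allclose(u @ P, u))   # True: uniform is the equilibrium distribution
```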
3.11 A gambling device is tuned such that a player who wins (loses) on a given play will win on the next play with probability 0.25 (0.50). The player pays $1 for each play and receives $2.50 for each play that is won. Use Markov chain analysis to find out whether the game is fair or not.
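A sketch of the two-state analysis in Python: take the outcome of the current play (won or lost) as the state, solve πP = π, and compare the long-run payout rate with the $1 stake per play:

```python
import numpy as np

# States: 0 = last play won, 1 = last play lost.
P = np.array([[0.25, 0.75],
              [0.50, 0.50]])

# Equilibrium distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

win_fraction = pi[0]                         # long-run fraction of plays won
gain_per_play = 2.50 * win_fraction - 1.00   # $2.50 per win, $1 per play
print(win_fraction, gain_per_play)           # 0.4 and 0.0: the game is fair
```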
3.12 A factory has a storage tank with a capacity of 4 m³ for temporarily storing waste produced by the factory. Each week the factory produces 0, 1, 2 or 3 m³ of waste with respective probabilities p_0 = 1/8, p_1 = 1/2, p_2 = 1/4, and p_3 = 1/8. If the amount of waste produced in one week exceeds the remaining capacity of the tank, the excess is specially removed at a cost of $30 per cubic metre. At the end of each week there is a regular opportunity to remove waste from the storage tank at a fixed cost of $25 and a variable cost of $5 per cubic metre. The following policy is used: if at the end of the week the storage tank contains more than 2 m³ of waste, the tank is emptied; otherwise no waste is removed. Use Markov chain analysis to find the long-run average cost per week.
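One possible Python formulation, taking the tank content at the start of a week (after any removal) as the state; this timing convention is one reasonable reading of the exercise, not necessarily the book's intended one:

```python
import numpy as np

p = {0: 1/8, 1: 1/2, 2: 1/4, 3: 1/8}    # weekly waste production (m³)
CAP = 4                                  # tank capacity (m³)

# State: tank content at the start of a week, i.e. after any removal at the
# end of the previous week, so only 0, 1 or 2 m³ can be carried over.
P = np.zeros((3, 3))
cost = np.zeros(3)      # expected cost incurred during a week, per state
for i in range(3):
    for k, pk in p.items():
        level = i + k
        c = 30 * max(level - CAP, 0)    # excess specially removed, $30/m³
        level = min(level, CAP)
        if level > 2:                   # policy: empty a tank holding > 2 m³
            c += 25 + 5 * level         # fixed $25 plus $5 per m³
            level = 0
        P[i, level] += pk
        cost[i] += pk * c

# Equilibrium distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
print(pi @ cost)        # long-run average cost per week (about $17.77 here)
```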
3.13 In a series of repeated plays, you can choose each time between games A and B. During each play you win $1 or you lose $1. You are also allowed to play when your capital is not positive (a negative capital corresponds to a debt). In game A there is a single coin. This coin lands heads with probability 1/2 − ε (ε = 0.005) and tails with probability 1/2 + ε. In game B there are two coins. One coin lands heads with probability 1/10 − ε and the other coin lands heads with probability 3/4 − ε. If you play game B, then you must take the first coin when your current capital is a multiple of 3 and you must take the other coin otherwise. In each play of either game you win $1 if the coin lands heads and you lose $1 otherwise.
  (a) Use Markov chain analysis to verify that the long-run fraction of plays you win is 0.4957 when you always play game B. (Hint: a three-state Markov chain suffices.)
  (b) Suppose you alternately play the games A, A, B, B, A, A, B, B, . . . . Use an appropriate Markov chain to verify that the long-run fraction of plays you win is 0.5064.
  This problem shows that in special cases with dependencies, a combination of two unfavourable games may result in a favourable game. This paradox is called Parrondo's paradox after the Spanish physicist Juan Parrondo.
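A sketch that covers both parts at once: take as state the capital modulo 3 paired with the position in the repeating pattern of games, so that pattern 'B' reproduces the three-state chain of part (a) and pattern 'AABB' the twelve-state chain of part (b):

```python
import numpy as np

eps = 0.005
pA = 0.5 - eps                           # game A: the single coin
pB0, pB_other = 0.10 - eps, 0.75 - eps   # game B: coin for capital % 3 == 0, else

def heads(game, c):
    if game == 'A':
        return pA
    return pB0 if c == 0 else pB_other

def win_fraction(pattern):
    # State: (capital mod 3, position in the repeating pattern of games).
    m = len(pattern)
    n = 3 * m
    P = np.zeros((n, n))
    for c in range(3):
        for t in range(m):
            p = heads(pattern[t], c)
            P[c * m + t, ((c + 1) % 3) * m + (t + 1) % m] += p      # heads: +$1
            P[c * m + t, ((c - 1) % 3) * m + (t + 1) % m] += 1 - p  # tails: -$1
    # Equilibrium distribution: solve pi P = pi together with sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    pi = np.linalg.lstsq(A, np.concatenate([np.zeros(n), [1.0]]), rcond=None)[0]
    return sum(pi[c * m + t] * heads(pattern[t], c)
               for c in range(3) for t in range(m))

print(win_fraction('B'))     # ~0.4957: game B alone is unfavourable
print(win_fraction('AABB'))  # ~0.5064: alternating A, A, B, B is favourable
```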
3.14 At the beginning of each day, a crucial piece of electronic equipment is inspected and then classified as being in one of the working conditions i = 1, . . . , N. Here working condition i is better than working condition i + 1. If the working condition is i = N, the piece must be replaced by a new one and such an enforced replacement takes two days. If the working condition is i with i < N, there is a choice between preventively replacing