
The Markov chain $\{X_n, n = 0, 1, \ldots\}$ is completely determined by the probability
distribution of the initial state $X_0$ and the one-step transition probabilities $p_{ij}$. In
applications of Markov chains the art is:

                (a) to choose the state variable(s) such that the Markovian property (3.1.1) holds,
(b) to determine the one-step transition probabilities $p_{ij}$.

Once this (difficult) modelling step is done, the rest is simply a matter of applying
the theory that will be developed in the next sections. The student cannot be urged
strongly enough to try the problems at the end of this chapter to acquire skills to
model new situations.
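To make the above concrete: once the initial distribution and the one-step transition probabilities $p_{ij}$ have been specified, sample paths of the chain can be generated mechanically. The following Python sketch is an illustration, not part of the text; the function name and the small example matrix are assumptions.

```python
import numpy as np

def simulate_markov_chain(init_dist, P, n_steps, rng=None):
    """Simulate a path X_0, X_1, ..., X_{n_steps} of a discrete-time
    Markov chain with initial distribution init_dist and one-step
    transition matrix P, where P[i, j] = p_ij."""
    rng = np.random.default_rng() if rng is None else rng
    states = np.arange(len(init_dist))
    path = [rng.choice(states, p=init_dist)]
    for _ in range(n_steps):
        # The next state depends only on the current state:
        # the Markovian property (3.1.1) in computational form.
        path.append(rng.choice(states, p=P[path[-1]]))
    return path

# Illustrative two-state chain (the numbers are assumptions).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(simulate_markov_chain(np.array([1.0, 0.0]), P, n_steps=10))
```

Let us return to the drunkard’s walk.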

                Example 3.1.1 (continued) The drunkard’s random walk
                In this example we have already defined the state variable as the position of the
drunkard. The process $\{X_n\}$ with $X_n$ denoting the state just after the $n$th step of the
drunkard is indeed a discrete-time Markov chain. The one-step transition probabilities
are as follows. For any interior state $(x, y)$ with $-N < x, y < N$, we have

$$
p_{(x,y)(v,w)} = \begin{cases}
\frac{1}{4} & \text{for } (v, w) = (x + 1, y),\, (x - 1, y),\, (x, y + 1),\, (x, y - 1), \\
0 & \text{otherwise.}
\end{cases}
$$
For any boundary state $(x, N)$ with $-N < x < N$, we have

$$
p_{(x,N)(v,w)} = \begin{cases}
\frac{1}{3} & \text{for } (v, w) = (x + 1, N),\, (x - 1, N),\, (x, N - 1), \\
0 & \text{otherwise.}
\end{cases}
$$
For the boundary states $(x, -N)$ with $-N < x < N$, and $(N, y)$ and $(-N, y)$ with
$-N < y < N$, the one-step transition probabilities follow similarly. For the corner
point $(x, y) = (N, N)$, we have

$$
p_{(N,N)(v,w)} = \begin{cases}
\frac{1}{2} & \text{for } (v, w) = (N - 1, N),\, (N, N - 1), \\
0 & \text{otherwise.}
\end{cases}
$$
The one-step transition probabilities for the corner points $(-N, N)$, $(-N, -N)$ and $(N, -N)$ follow similarly.
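Collecting the interior, boundary and corner cases, the transition probabilities admit a single description: from any state, each neighbouring grid point that lies in $\{-N, \ldots, N\}^2$ is reached with equal probability. A minimal Python sketch of this (an illustration, not from the text) is:

```python
from itertools import product

N = 2  # grid covers x, y in {-N, ..., N}; a small value for illustration

def transition_probs(x, y):
    """Return the dictionary {(v, w): p_{(x,y)(v,w)}} of positive
    one-step transition probabilities of the drunkard's random walk."""
    # Candidate moves: one step north, south, east or west.
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    # Keep the moves that stay on the grid: interior states keep all
    # four, boundary states three and corner states two.
    feasible = [(v, w) for (v, w) in neighbours
                if -N <= v <= N and -N <= w <= N]
    return {vw: 1 / len(feasible) for vw in feasible}

# Sanity check: out of every state the probabilities sum to 1.
for x, y in product(range(-N, N + 1), repeat=2):
    assert abs(sum(transition_probs(x, y).values()) - 1.0) < 1e-12

print(transition_probs(0, 0))   # interior state: four moves, each 1/4
print(transition_probs(0, N))   # boundary state: three moves, each 1/3
print(transition_probs(N, N))   # corner state: two moves, each 1/2
```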
A variant of the drunkard’s random walk problem is the problem in which the
drunkard never chooses the same direction as was chosen in the previous step.
Then we have to augment the state with an extra state variable in order to satisfy
the Markovian property. The state of the drunkard after each step is now defined as
$(x, y, z)$, where $(x, y)$ denotes the position of the drunkard and $z \in \{N, S, E, W\}$
denotes the direction of the last step. Letting $X_n$ be the state of the drunkard’s
process just after the $n$th step (with the convention $X_0 = (0, 0)$), the stochastic
process $\{X_n\}$ is a discrete-time Markov chain. It is left to the reader to write down
the one-step transition probabilities of this process.
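As a sketch of one possible answer (the exercise is the reader’s; the encoding of the directions below and the rule that the remaining feasible directions are chosen with equal probability are assumptions consistent with the description above), the transition probabilities of the augmented chain simply exclude the direction of the previous step:

```python
N = 2  # grid covers x, y in {-N, ..., N}

# Unit steps labelled by compass direction (an assumed encoding of z).
STEPS = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}

def transition_probs_no_repeat(x, y, z):
    """Positive one-step transition probabilities from state (x, y, z)
    for the variant in which the drunkard never repeats the direction
    z of the previous step."""
    feasible = []
    for d, (dx, dy) in STEPS.items():
        if d == z:
            continue  # the previous direction is excluded
        v, w = x + dx, y + dy
        if -N <= v <= N and -N <= w <= N:
            feasible.append((v, w, d))
    # The remaining feasible moves are assumed equally likely.
    return {state: 1 / len(feasible) for state in feasible}

print(transition_probs_no_repeat(0, 0, 'N'))  # interior: three moves, each 1/3
```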