Page 90 - A First Course In Stochastic Models

82                   DISCRETE-TIME MARKOV CHAINS

                Several applications will be discussed as well. For didactical reasons not all of
                the results that are stated in Section 3.3 are proved in this section. Some of the
                proofs are deferred to a later section. In Section 3.4 we discuss computational
                methods for solving the equilibrium equations of the Markov chain. In particular,
                we give a simple but powerful method for computing the equilibrium distribution
                of an infinite-state Markov chain whose state probabilities exhibit a geometric tail
                behaviour. Section 3.5 deals with theoretical issues such as the state classification
                for Markov chains and proofs of the ergodic theorems used in earlier sections.


                                        3.1  THE MODEL

                A discrete-time Markov chain is a stochastic process which is the simplest
                generalization of a sequence of independent random variables. A Markov chain is a
                random sequence in which the dependency of the successive events goes back only
                one unit in time. In other words, the future probabilistic behaviour of the process
                depends only on the present state of the process and is not influenced by its past
                history. This is called the Markovian property. Despite its very simple structure the
                Markov chain model is extremely useful in a wide variety of practical probability
                problems. Let us first give an illustrative example.
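                Stated in symbols, the Markovian property says that for any time n and any
                states i_0, . . . , i_{n-1}, i, j,

                     P{X_{n+1} = j | X_0 = i_0, . . . , X_{n-1} = i_{n-1}, X_n = i} = P{X_{n+1} = j | X_n = i},

                where X_n denotes the state of the process at time n; the conditioning on the
                whole past collapses to conditioning on the present state alone.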


                Example 3.1.1 The drunkard’s random walk
                A drunkard starts a random walk in the middle of a square; see Figure 3.1.1. He
                performs a sequence of independent unit steps. Each step has equal probability 1/4
                of going north, south, east or west as long as the drunkard has not reached the edge
                of the square. The drunkard never leaves the square. Should he reach the boundary
                of the square, his next step is equally likely to be in one of the three remaining
                directions if he is not at a corner point, and is equally likely to be in two remaining
                directions otherwise. What stochastic process describes the drunkard’s walk? What
                is the expected number of steps he needs to return to his starting point?
                  For n = 0, 1, . . . , we define the random variable

                          X_n = the position of the drunkard just after the nth step

                with the convention X_0 = (0, 0). Let us say that the process {X_n} is in state
                (x, y) when the current position of the drunkard is described by point (x, y). Then
                {X_n, n = 0, 1, . . . } is a discrete-time stochastic process with state space

                               I = {(x, y) | x, y integer, −N ≤ x, y ≤ N}.
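                The walk is easy to simulate. The following minimal Python sketch (the function
                names are ours, not from the text) draws each step uniformly from the allowed
                directions and estimates the expected return time to the origin by Monte Carlo:

```python
import random

def neighbors(x, y, N):
    """Allowed unit steps from (x, y) on the grid {-N, ..., N}^2:
    candidate moves that would leave the square are discarded, so the
    drunkard never leaves it."""
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for (a, b) in cand if -N <= a <= N and -N <= b <= N]

def simulate_return_time(N, rng):
    """One realization of the walk started at (0, 0); returns the number
    of steps until the first return to (0, 0)."""
    pos = (0, 0)
    steps = 0
    while True:
        pos = rng.choice(neighbors(*pos, N))
        steps += 1
        if pos == (0, 0):
            return steps

rng = random.Random(123)  # fixed seed for reproducibility
times = [simulate_return_time(2, rng) for _ in range(20000)]
print(sum(times) / len(times))  # Monte Carlo estimate of the mean return time
```

                As a sanity check: for N = 1 the chain is a random walk on a 3 × 3 grid, and
                since the stationary probability of a node in such a walk is proportional to
                its number of neighbours, the mean recurrence time of the centre works out to
                (4 + 4 · 3 + 4 · 2)/4 = 6; the simulated average should be close to this value.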

                The successive states of the drunkard’s process are not independent of each other;
                however, the dependence goes only one step back. The next
                position of the drunkard depends only on the current position and is not influenced
                by the earlier positions in the path of the drunkard. In other words, the drunkard’s