
CHAPTER 3


Discrete-Time Markov Chains

                                      3.0  INTRODUCTION

The notion of what is nowadays called a Markov chain was devised by the Russian mathematician A.A. Markov when, at the beginning of the twentieth century, he investigated the alternation of vowels and consonants in Pushkin’s poem Onegin. He developed a probability model in which the outcomes of successive trials are allowed to be dependent on each other such that each trial depends only on its immediate predecessor. This model, being the simplest generalization of the probability model of independent trials, appeared to give an excellent description of the alternation of vowels and consonants and enabled Markov to calculate a very accurate estimate of the frequency at which consonants occur in Pushkin’s poem.
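As a small illustration of the kind of calculation such a model makes possible (the transition probabilities below are purely hypothetical, not Markov's actual estimates), suppose that a vowel is followed by a consonant with probability 0.87 and a consonant is followed by a vowel with probability 0.66. The long-run fractions \pi_v of vowels and \pi_c of consonants then satisfy

    \pi_c = 0.87\,\pi_v + 0.34\,\pi_c   and   \pi_v + \pi_c = 1,

giving \pi_c = 0.87/(0.87 + 0.66) \approx 0.57. Equations of this type for the equilibrium distribution are taken up in Section 3.3.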
The Markov model is no exception to the rule that simple models are often the most useful models for analysing practical problems. The theory of Markov processes has applications to a wide variety of fields, including biology, computer science, engineering and operations research. A Markov process allows us to model the uncertainty in many real-world systems that evolve dynamically in time. The basic concepts of a Markov process are those of a state and of a state transition. In specific applications the modelling ‘art’ is to find an adequate state description such that the associated stochastic process indeed has the Markovian property that the knowledge of the present state is sufficient to predict the future stochastic behaviour of the process. In this chapter we consider discrete-time Markov processes in which state transitions only occur at fixed times. Continuous-time Markov processes in which the state can change at any time are the subject of Chapter 4. The discrete-time Markov chain model is introduced in Section 3.1. In this section considerable attention is paid to the modelling aspects. Most students find the modelling more difficult than the mathematics. Section 3.2 deals with the n-step transition probabilities and absorption probabilities. The main interest, however, is in the long-run behaviour of the Markov chain. In Section 3.3 we discuss both the existence of an equilibrium distribution and the computation of this distribution.
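As a preview of the formal definition given in Section 3.1, the Markovian property of a discrete-time process \{X_n, n = 0, 1, \ldots\} with state space I can be expressed as

    P\{X_{n+1} = j \mid X_0 = i_0, \ldots, X_{n-1} = i_{n-1}, X_n = i\} = P\{X_{n+1} = j \mid X_n = i\}

for all states i_0, \ldots, i_{n-1}, i, j \in I: given the present state X_n = i, the past states carry no further information about the next state (the notation here anticipates Section 3.1).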
