
CHAPTER 4


                     Continuous-Time Markov Chains








                                      4.0  INTRODUCTION

In the continuous-time analogue of discrete-time Markov chains the times between successive state transitions are not deterministic, but exponentially distributed. However, the state transitions themselves are again governed by a (discrete-time) Markov chain. Equivalently, a continuous-time Markov chain can be represented by so-called infinitesimal transition rates. This is in analogy with the ‘∆t-representation’ of the Poisson process. The representation by infinitesimal transition rates leads naturally to the flow rate equation approach. This approach is easy to visualize and is widely used in practice. The continuous-time Markov chain model is introduced in Section 4.1. In Section 4.2 we discuss the flow rate equation approach. The discussion in Section 4.2 concentrates on giving insights into this powerful approach, but no proofs are given. The proofs are given in Section 4.3. Results for discrete-time Markov chains are the basis for the proofs of the ergodic theorems for continuous-time Markov chains.
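To give a concrete preview of the flow rate equation approach treated in Sections 4.1 and 4.2, the following sketch sets up and solves the balance equations (‘rate out of a state equals rate into that state’) for a small, made-up birth-death example. The state space, the rates and the code are purely illustrative and are not taken from the text.

```python
import numpy as np

# Illustrative birth-death generator on states 0..3 (the rates are made up).
arrival, service = 1.5, 2.0
n = 4
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1] = arrival       # rate of jumping up:   q(i, i+1)
    Q[i + 1, i] = service       # rate of jumping down: q(i+1, i)
np.fill_diagonal(Q, -Q.sum(axis=1))   # each row of a generator sums to zero

# Flow rate (balance) equations: p Q = 0 together with sum_j p(j) = 1.
# Replace one balance equation by the normalization to obtain a unique solution.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
p = np.linalg.solve(A, b)

print(p)        # equilibrium distribution p(0), ..., p(3)
print(p @ Q)    # numerically zero: rate out of each state = rate into it
```

For this reversible birth-death example the same answer follows from the detailed balance relations p(i) · arrival = p(i+1) · service; the linear-system formulation above, however, applies to any finite irreducible continuous-time Markov chain.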
  In Section 4.4 we discuss specialized methods to solve the equilibrium equations for continuous-time Markov chains on a semi-infinite strip in two-dimensional space. Many applications of continuous-time Markov chains have this structure. Section 4.5 deals with transient analysis for continuous-time Markov chains. The basic tools for the computation of the transient state probabilities and first passage time probabilities are Kolmogoroff’s method of linear differential equations and the probabilistic method of uniformization. Both methods will be discussed. In Section 4.6 we give algorithms for the computation of the transient probability distribution of the cumulative reward in a continuous-time Markov chain model with a reward structure. A special case of this model is the computation of the transient distribution of the sojourn time of the process in a given set of states.
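As a preview of the uniformization method discussed in Section 4.5, the sketch below computes transient state probabilities by weighting powers of the uniformized transition matrix with Poisson probabilities. The function name, the truncation rule and the iteration cap are our own illustrative choices; for very large values of νt the direct evaluation of e^{-νt} used here would need rescaling in practice.

```python
import numpy as np
from math import exp

def transient_probs(Q, p0, t, eps=1e-8, max_terms=10_000):
    """Sketch of uniformization: p(t) = sum_k e^{-nu t} (nu t)^k / k! * p0 P^k,
    where nu >= max_i |q_ii| and P = I + Q / nu."""
    nu = max(-np.diag(Q))               # uniformization rate
    P = np.eye(Q.shape[0]) + Q / nu     # transition matrix of the uniformized chain
    weight = exp(-nu * t)               # Poisson weight for k = 0
    pk = np.array(p0, dtype=float)      # p0 P^k, starting with k = 0
    probs = weight * pk
    total = weight                      # accumulated Poisson mass
    k = 0
    # Add Poisson-weighted terms until the neglected tail mass is below eps.
    while total < 1.0 - eps and k < max_terms:
        k += 1
        pk = pk @ P
        weight *= nu * t / k
        probs = probs + weight * pk
        total += weight
    return probs

# Hypothetical usage with the illustrative generator Q from the previous sketch:
# p_t = transient_probs(Q, np.array([1.0, 0.0, 0.0, 0.0]), t=0.5)
```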



