Page 149 - A First Course In Stochastic Models

142                 CONTINUOUS-TIME MARKOV CHAINS

                                        4.1  THE MODEL

                In Chapter 3 we considered Markov processes in which changes of state occur
                only at the fixed times t = 0, 1, . . . . In numerous practical situations, however,
                changes of state may occur at any point in time. One of the most appropriate
                models for analysing such situations is the continuous-time Markov chain model.
                In this model the times between successive transitions are exponentially distributed,
                while the succession of states is described by a discrete-time Markov chain. A
                wide variety of applied probability problems can be modelled as a continuous-time
                Markov chain by an appropriate state description.
                  In analogy with the definition of a discrete-time Markov chain, a continuous-time
                Markov chain is defined as follows.
                Definition 4.1.1 A continuous-time stochastic process {X(t), t ≥ 0} with discrete
                state space I is said to be a continuous-time Markov chain if

                              P{X(t_n) = i_n | X(t_0) = i_0, . . . , X(t_{n−1}) = i_{n−1}}
                                    = P{X(t_n) = i_n | X(t_{n−1}) = i_{n−1}}

                for all 0 ≤ t_0 < · · · < t_{n−1} < t_n and i_0, . . . , i_{n−1}, i_n ∈ I.

                  Just as in the discrete-time case, the Markov property expresses that the condi-
                tional distribution of a future state given the present state and past states depends
                only on the present state and is independent of the past. In the following we
                consider time-homogeneous Markov chains for which the transition probability
                P {X(t + u) = j | X(u) = i} is independent of u. We write
                                  p_ij(t) = P{X(t + u) = j | X(u) = i}.
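To make the transition function p_ij(t) concrete, consider the standard two-state example (not taken from this text): a chain on states {0, 1} that leaves state 0 at rate α and state 1 at rate β. For this chain p_01(t) has the well-known closed form α/(α+β)·(1 − e^{−(α+β)t}). A minimal sketch, with the rates α and β chosen purely for illustration:

```python
import math

def p01(t, alpha, beta):
    """Transition function p_01(t) of the two-state chain with
    leaving rates alpha (state 0) and beta (state 1).
    Closed form: alpha/(alpha+beta) * (1 - exp(-(alpha+beta)*t))."""
    return alpha / (alpha + beta) * (1.0 - math.exp(-(alpha + beta) * t))

def p00(t, alpha, beta):
    # The row probabilities must sum to 1: p_00(t) = 1 - p_01(t).
    return 1.0 - p01(t, alpha, beta)

# At t = 0 the chain has not yet moved, so p_01(0) = 0; as t grows,
# p_01(t) tends to alpha/(alpha+beta), independently of the initial state.
```

Note that p_01(t) depends only on the elapsed time t and not on the starting epoch u, which is exactly the time-homogeneity assumed above.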
                The theory of continuous-time Markov chains is much more intricate than that
                of discrete-time Markov chains. It involves very difficult technical problems,
                some of which are still unsolved. Fortunately, these staggering technical
                problems do not arise in practical applications. In our treatment of
                continuous-time Markov chains we proceed pragmatically: we impose a regularity
                condition that is not too strong from a practical point of view but avoids all
                technical problems.
                  As an introduction to the modelling by a continuous-time Markov chain, let us
                construct the following Markov jump process. A stochastic system with a discrete
                state space I jumps from state to state according to the following rules:
                Rule (a)  If the system jumps to state i, it then stays in state i for an exponentially
                distributed time with mean 1/ν_i, independently of how the system reached state i
                and how long it took to get there.

                Rule (b)  If the system leaves state i, it jumps to state j (j ≠ i) with probability
                p_ij, independently of the duration of the stay in state i, where Σ_{j≠i} p_ij = 1
                for all i ∈ I.
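Rules (a) and (b) translate directly into a simulation: draw an exponential sojourn time with rate ν_i, then pick the next state from the jump probabilities p_ij. The helper below is a hypothetical sketch of this construction (the function name and data layout are my own, not from the book):

```python
import random

def simulate_jump_process(nu, P, start, t_end, rng=random):
    """Simulate the Markov jump process of rules (a) and (b).

    nu[i]  -- leaving rate of state i (mean sojourn time 1/nu[i]),
    P[i]   -- dict {j: p_ij} of jump probabilities (j != i, summing to 1),
    start  -- initial state, t_end -- simulation horizon.
    Returns the list of (jump epoch, state) pairs visited up to t_end.
    """
    t, state = 0.0, start
    path = [(0.0, state)]
    while True:
        # Rule (a): exponentially distributed sojourn with mean 1/nu[state],
        # independent of the past.
        t += rng.expovariate(nu[state])
        if t > t_end:
            return path
        # Rule (b): jump to state j with probability p_ij,
        # independently of the sojourn's duration.
        u, acc = rng.random(), 0.0
        for j, p in P[state].items():
            acc += p
            if u <= acc:
                state = j
                break
        path.append((t, state))
```

For instance, a two-state system that always jumps to the other state is obtained with P = {0: {1: 1.0}, 1: {0: 1.0}}; the resulting path alternates between the states at exponentially spaced epochs.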