Page 151 - A First Course In Stochastic Models

144                 CONTINUOUS-TIME MARKOV CHAINS

                The state transitions are governed by a discrete-time Markov chain whose one-step
                transition probabilities have the simple form

\[
p_{i,i-1} = 1 \quad \text{for } i = 1, \dots, Q, \qquad p_{0Q} = 1, \quad \text{and all other } p_{ij} = 0.
\]
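As a small illustration of this cyclic structure, the sketch below simulates the corresponding Markov jump process: the state decreases by one at each transition until state 0 is reached, from which the process jumps back to state Q. The leaving rates ν_i are illustrative values, not taken from the text.

```python
import random

def simulate_cyclic_jump_process(Q, nu, t_end, state=0, seed=42):
    """Simulate the Markov jump process with the cyclic transition
    structure p_{i,i-1} = 1 for i = 1..Q and p_{0Q} = 1.
    nu[i] is the (assumed, illustrative) leaving rate of state i."""
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, state)]
    while True:
        # Exponential sojourn in the current state, mean 1/nu[state].
        t += rng.expovariate(nu[state])
        if t >= t_end:
            return path
        # Deterministic next state: i -> i-1, and 0 -> Q.
        state = state - 1 if state >= 1 else Q
        path.append((t, state))

# Example with Q = 3 and illustrative rates nu_i = 1 for all i.
path = simulate_cyclic_jump_process(Q=3, nu=[1.0, 1.0, 1.0, 1.0], t_end=10.0)
```

The sample path alternates exponential sojourns with the deterministic jumps prescribed by the one-step transition probabilities above.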


                Infinitesimal transition rates
                Consider the general Markov jump process {X(t)} that was constructed above. The
                sojourn time in any state i has an exponential distribution with mean 1/ν_i, and
                the state transitions are governed by a Markov chain having one-step transition
                probabilities p_ij for i, j ∈ I with p_ii = 0 for all i. The Markov process allows for
                an equivalent representation involving the so-called infinitesimal transition rates.
                To introduce these rates, let us analyse the behaviour of the process in a very small
                time interval of length Δt. Recall that the exponential (sojourn-time) distribution
                has a constant failure rate; see Appendix B. Suppose that the Markov process
                {X(t)} is in state i at the current time t. The probability that the process will leave
                state i in the next Δt time units, with Δt very small, equals ν_i Δt + o(Δt) by the
                constant failure rate representation of the exponential distribution. If the process
                leaves state i, it jumps to state j (≠ i) with probability p_ij. Hence, for any t > 0,

\[
P\{X(t + \Delta t) = j \mid X(t) = i\} =
\begin{cases}
\nu_i \Delta t \, p_{ij} + o(\Delta t), & j \neq i,\\[2pt]
1 - \nu_i \Delta t + o(\Delta t), & j = i,
\end{cases}
\]
                as Δt → 0. One might argue that in the next Δt time units state j could be reached
                from state i by first jumping from state i to some state k and next jumping in the
                same time interval from state k to state j. However, the probability of two or more
                state transitions in a very small time interval of length Δt is of the order (Δt)^2
                and is thus o(Δt); that is, this probability is negligibly small compared with Δt as
                Δt → 0. Define now
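The o(Δt) claim can be checked numerically. For an exponential sojourn with rate ν, the exact probability of leaving within Δt is 1 − e^{−νΔt}; the sketch below (with an illustrative rate ν = 2) shows that the remainder νΔt − (1 − e^{−νΔt}), divided by Δt, vanishes as Δt shrinks.

```python
import math

nu = 2.0  # assumed leaving rate of state i (illustrative value)
ratios = []
for dt in [0.1, 0.01, 0.001]:
    exact = 1.0 - math.exp(-nu * dt)       # exact P(leave state i within dt)
    ratios.append((nu * dt - exact) / dt)  # o(dt) remainder, relative to dt
    print(f"dt={dt}: remainder/dt = {ratios[-1]:.5f}")
```

The printed ratios decrease roughly linearly in Δt, consistent with the remainder being of order (Δt)^2.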

\[
q_{ij} = \nu_i p_{ij}, \qquad i, j \in I \text{ with } j \neq i.
\]
                The non-negative numbers q_ij are called the infinitesimal transition rates of the
                continuous-time Markov chain {X(t)}. Note that the q_ij uniquely determine the
                sojourn-time rates ν_i and the one-step transition probabilities p_ij by
\[
\nu_i = \sum_{j \neq i} q_{ij} \quad \text{and} \quad p_{ij} = q_{ij}/\nu_i.
\]
                The q_ij themselves are not probabilities but transition rates. However, for Δt
                very small, q_ij Δt can be interpreted as the probability of moving from state i to
                state j within the next Δt time units when the current state is state i.
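The recovery of ν_i and p_ij from the rates q_ij is a direct computation. The sketch below applies it to a hypothetical 3-state rate matrix (the numerical values are chosen purely for illustration).

```python
# Hypothetical rate matrix q[i][j] for a 3-state chain; q[i][i] = 0 by convention.
q = [[0.0, 2.0, 1.0],
     [0.5, 0.0, 0.5],
     [1.0, 3.0, 0.0]]

# nu_i = sum over j != i of q_ij  (the diagonal zeros contribute nothing)
nu = [sum(row) for row in q]

# p_ij = q_ij / nu_i; each row of p sums to 1, with p_ii = 0
p = [[q[i][j] / nu[i] for j in range(len(q))] for i in range(len(q))]
```

Each row of p is a probability distribution over the next state, recovering the one-step transition probabilities of the embedded discrete-time chain.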
                  In applications one usually proceeds in the reverse direction. The infinitesimal
                transition rates q_ij are determined in a direct way; they are typically the result
                of the interaction of two or more elementary processes of the Poisson type. Contrary
                to the discrete-time case, in which the one-step transition probabilities
                unambiguously determine a discrete-time Markov chain, it is not generally true that
                the infinitesimal transition rates determine a unique continuous-time Markov chain.