Page 173 - A First Course In Stochastic Models

                CONTINUOUS-TIME MARKOV CHAINS

                  In general the linear differential equations (4.5.1) have to be solved numerically.
                Let us assume in the remainder of this section that the state space I of the Markov
                chain is finite. There are several ways to solve the homogeneous system (4.5.1)
                of linear differential equations with constant coefficients numerically. In most
                applications the matrix of coefficients has distinct eigenvalues and is thus
                diagonalizable. In those situations one might compute the eigenvalues λ_1, ..., λ_n
                of the matrix and the corresponding eigenvectors. The transient probabilities p_ij(t)
                are then a linear combination of the pure exponential functions e^{λ_1 t}, ..., e^{λ_n t}
                (zero is always among the eigenvalues, and the corresponding eigenvector gives the
                equilibrium distribution of the Markov process up to a multiplicative constant). In
                general, however, one uses a so-called Runge–Kutta method to solve the linear
                differential equations numerically. Standard codes for this method are widely
                available. From a numerical point of view, the Runge–Kutta method is in general
                superior to the eigenvalue approach. The Runge–Kutta method has the additional
                advantage that it can also be applied to continuous-time Markov processes with
                time-dependent transition rates. Another possible way to compute the p_ij(t) is to
                use the discrete FFT method when an explicit expression for the generating function
                P(t, z) = Σ_{j∈I} p_ij(t) z^j, |z| ≤ 1, is available.
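                The eigenvalue approach described above can be sketched in a few lines of code.
                The following is a minimal illustration, not taken from the text: the function name,
                the generator matrix Q, and the two-state example (leave state 0 at rate a, state 1
                at rate b) are my own choices, and the code assumes Q is diagonalizable.

```python
import numpy as np

def transient_probs_eigen(Q, t):
    """Transient probabilities P(t) = exp(Qt) for a finite-state CTMC,
    computed via the eigendecomposition Q = V diag(lam) V^{-1}
    (assumes the generator Q is diagonalizable)."""
    lam, V = np.linalg.eig(Q)
    # each p_ij(t) is a linear combination of e^{lambda_k t};
    # scaling column k of V by e^{lambda_k t} realizes V diag(e^{lam t}) V^{-1}
    Pt = (V * np.exp(lam * t)) @ np.linalg.inv(V)
    return Pt.real

# illustrative two-state chain: leave state 0 at rate a = 2, state 1 at rate b = 1
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
Pt = transient_probs_eigen(Q, t=0.5)
```

                For this two-state chain the closed form p_00(t) = b/(a+b) + (a/(a+b)) e^{-(a+b)t}
                is available, which gives a direct check on the computed matrix; note also that each
                row of Pt must sum to one.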
                4.5.2 The Uniformization Method
                This method falls back on an earlier construction of a continuous-time Markov
                chain in Section 4.1. In this construction the process leaves state i after an
                exponentially distributed time with mean 1/ν_i and then jumps to another state
                j (j ≠ i) with probability p_ij. Letting X_n denote the state of the process just
                after the nth state transition, the discrete-time stochastic process {X_n} is an
                embedded Markov chain with one-step transition probabilities p_ij. The jump
                probabilities p_ij and the infinitesimal transition rates q_ij are related to
                each other by


                                   q_ij = ν_i p_ij,   i, j ∈ I with j ≠ i.   (4.5.2)

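                Relation (4.5.2) makes it easy to recover the leaving rates and the embedded
                chain from a generator. A small sketch (the helper name and the example
                generator are illustrative, not from the text), using ν_i = -q_ii and
                p_ij = q_ij/ν_i for j ≠ i:

```python
import numpy as np

def embedded_chain(Q):
    """Split a CTMC generator Q into leaving rates nu_i and the jump
    probabilities p_ij of the embedded chain, using q_ij = nu_i * p_ij."""
    nu = -np.diag(Q)               # nu_i = -q_ii = sum of off-diagonal rates in row i
    P = Q / nu[:, None]            # p_ij = q_ij / nu_i for j != i
    np.fill_diagonal(P, 0.0)       # the embedded chain makes no self-jumps
    return nu, P

# illustrative two-state generator
nu, P_emb = embedded_chain(np.array([[-2.0, 2.0], [1.0, -1.0]]))
```

                For this example the leaving rates are ν = (2, 1) and the embedded chain simply
                alternates between the two states.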
                To introduce the uniformization method, consider first the special case in which
                the leaving rates ν_i of the states are identical, say ν_i = ν for all i. Then the
                transition epochs are generated by a Poisson process with rate ν. In this situation
                an expression for p_ij(t) is obtained directly by conditioning on the number of
                Poisson events up to time t and using the n-step transition probabilities of the
                embedded Markov chain {X_n}. However, the leaving rates ν_i are in general not
                identical. Fortunately, there is a simple trick for reducing the case of
                non-identical leaving rates to the case of identical leaving rates. The
                uniformization method transforms the original continuous-time Markov chain with
                non-identical leaving rates into an equivalent stochastic process in which the
                transition epochs are generated by a Poisson process at a uniform rate. To achieve
                this, however, the discrete-time Markov chain describing the state transitions in
                the transformed process has to allow for self-transitions that leave the state of
                the process unchanged.
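                Anticipating the construction just described, the transformed process can be
                sketched numerically: conditioning on the number of Poisson(ν) transition epochs
                in (0, t] gives p_ij(t) as a Poisson-weighted sum of powers of the uniformized
                transition matrix. This particular implementation is my own illustration, under
                the standard choices ν = max_i ν_i and P̄ = I + Q/ν (so that P̄ has the required
                self-transition probabilities); it is not code from the text.

```python
import numpy as np

def uniformized_probs(Q, t, tol=1e-12):
    """Transient probabilities via uniformization: sum over the number n
    of Poisson(nu) transition epochs in (0, t], with the uniformized
    chain P_bar = I + Q/nu allowing self-transitions."""
    nu = max(-np.diag(Q))                      # uniform rate nu >= nu_i for all i
    P_bar = np.eye(len(Q)) + Q / nu            # self-transition prob 1 - nu_i/nu in state i
    term = np.exp(-nu * t) * np.eye(len(Q))    # n = 0 term: e^{-nu t} * I
    total = term.copy()
    n = 0
    while term.max() > tol or n < nu * t:      # run past the Poisson mode, then to tolerance
        n += 1
        term = term @ P_bar * (nu * t / n)     # next Poisson-weighted power of P_bar
        total += term
    return total

# illustrative two-state generator as before
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
Pu = uniformized_probs(Q, t=0.5)
```

                Because every term in the sum is a nonnegative matrix, the truncation error is
                easy to control, which is one reason the method is numerically attractive; the
                result agrees with the closed form p_00(t) = 1/3 + (2/3)e^{-3t} for this example.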