Page 174 - A First Course In Stochastic Models

TRANSIENT STATE PROBABILITIES

To formulate the uniformization method, choose a finite number ν with

    ν ≥ ν_i,   i ∈ I.

Define now {X̄_n} as the discrete-time Markov chain whose one-step transition probabilities p̄_ij are given by

    p̄_ij = (ν_i/ν) p_ij,   j ≠ i,
    p̄_ii = 1 − ν_i/ν,

for any i ∈ I. Let {N(t), t ≥ 0} be a Poisson process with rate ν such that the process is independent of the discrete-time Markov chain {X̄_n}. Define now the continuous-time stochastic process {X̄(t), t ≥ 0} by

    X̄(t) = X̄_{N(t)},   t ≥ 0.                                    (4.5.3)
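The construction in (4.5.3) translates directly into a simulation: draw event epochs from a Poisson process with rate ν and step the uniformized chain at each epoch. Below is a minimal numerical sketch; the 3-state generator Q and the seed are hypothetical choices for illustration, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generator of a 3-state continuous-time Markov chain.
Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])
nu_i = -np.diag(Q)       # leaving rates nu_i
nu = nu_i.max()          # any finite nu >= nu_i for all i in I works

# One-step matrix of the uniformized chain {X_bar_n}:
# (nu_i/nu) p_ij = q_ij/nu off the diagonal, 1 - nu_i/nu on it,
# which in matrix form is P_bar = I + Q/nu.
P_bar = np.eye(len(Q)) + Q / nu

def sample_X_bar(t_end, i0):
    """Sample a path of X_bar(t) = X_bar_{N(t)} on [0, t_end],
    with {N(t)} a Poisson process with rate nu."""
    t, state, path = 0.0, i0, [(0.0, i0)]
    while True:
        t += rng.exponential(1.0 / nu)          # next Poisson epoch
        if t > t_end:
            return path                          # list of (epoch, state)
        state = rng.choice(len(Q), p=P_bar[state])  # step of {X_bar_n}
        path.append((t, state))

path = sample_X_bar(t_end=5.0, i0=0)
```

Note that self-transitions (where `state` does not change) appear in the sampled path; they are exactly the probability-(1 − ν_i/ν) moves that make uniformization work.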

In other words, the process {X̄(t)} makes state transitions at epochs generated by a Poisson process with rate ν and the state transitions are governed by the discrete-time Markov chain {X̄_n} with one-step transition probabilities p̄_ij. Each time the Markov chain {X̄_n} is in state i, the next transition is the same as in the Markov chain {X_n} with probability ν_i/ν and is a self-transition with probability 1 − ν_i/ν. The transitions out of state i are in fact delayed by a time factor of ν/ν_i, while the time itself until a state transition from state i is condensed by a factor of ν_i/ν. This heuristically explains why the continuous-time process {X̄(t)} is probabilistically identical to the original continuous-time Markov chain {X(t)}. Another heuristic way to see that the two processes are identical is as follows. For any i, j ∈ I with j ≠ i,

    P{X̄(t + Δt) = j | X̄(t) = i} = νΔt × p̄_ij + o(Δt)
                                  = ν_i Δt × p_ij + o(Δt) = q_ij Δt + o(Δt)
                                  = P{X(t + Δt) = j | X(t) = i}   for Δt → 0.
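Conditioning on the number of Poisson events N(t) in (4.5.3) gives P{X̄(t) = j | X̄(0) = i} = Σ_n e^{−νt} (νt)^n/n! (P̄^n)_ij, and the heuristic above says this should not depend on which ν ≥ max_i ν_i is chosen. The sketch below checks that numerically; the generator Q, the time t, the two values of ν and the truncation at 60 terms are all illustrative assumptions.

```python
import numpy as np
from math import exp

# Hypothetical 3-state generator (rows sum to zero, q_ii = -nu_i).
Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])

def transient_probs(Q, t, nu, terms=60):
    """Truncated uniformization series:
    P(t) ~ sum_{n<terms} e^{-nu t} (nu t)^n / n! * P_bar^n."""
    P_bar = np.eye(len(Q)) + Q / nu
    out = np.zeros_like(Q)
    Pn = np.eye(len(Q))              # P_bar^n, starting at n = 0
    weight = exp(-nu * t)            # Poisson(nu*t) probability of n events
    for n in range(terms):
        out += weight * Pn
        Pn = Pn @ P_bar
        weight *= nu * t / (n + 1)   # recursion for the next Poisson weight
    return out

# The transient probabilities must not depend on the choice of nu.
P1 = transient_probs(Q, t=0.7, nu=4.0)    # nu = max_i nu_i
P2 = transient_probs(Q, t=0.7, nu=10.0)   # a strictly larger nu
assert np.allclose(P1, P2, atol=1e-10)
assert np.allclose(P1.sum(axis=1), 1.0)
```

A larger ν spreads the same probability mass over more Poisson terms, so more terms are needed for the same accuracy; the agreement of `P1` and `P2` is the point of the check.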
In the next theorem we give a formal proof that the two processes {X̄(t)} and {X(t)} are probabilistically equivalent.

Theorem 4.5.2 Suppose that the continuous-time Markov chain {X(t)} satisfies Assumption 4.1.2. Then

    p_ij(t) = P{X̄(t) = j | X̄(0) = i},   i, j ∈ I and t > 0.

Proof   For any t > 0, define the matrix P(t) by P(t) = (p_ij(t)), i, j ∈ I. Denote by Q the matrix Q = (q_ij), i, j ∈ I, where the diagonal elements q_ii are defined by

    q_ii = −ν_i.
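With the diagonal convention q_ii = −ν_i, the definitions above link the generator and the uniformized one-step matrix by P̄ = I + Q/ν, i.e. Q = ν(P̄ − I): off the diagonal p̄_ij = (ν_i/ν)p_ij = q_ij/ν, and on it p̄_ii = 1 − ν_i/ν = 1 + q_ii/ν. A quick numerical check of this identity, with a hypothetical generator and a hypothetical choice ν = 5:

```python
import numpy as np

# Hypothetical generator with q_ii = -nu_i; each row sums to zero.
Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])
nu = 5.0                               # any nu >= max_i nu_i = 4

P_bar = np.eye(len(Q)) + Q / nu        # uniformized one-step matrix
# The two definitions are equivalent: Q = nu * (P_bar - I).
assert np.allclose(Q, nu * (P_bar - np.eye(len(Q))))
assert np.allclose(Q.sum(axis=1), 0.0)
```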