Page 156 - A First Course In Stochastic Models

THE FLOW RATE EQUATION METHOD

independently of the initial state i. The interested reader is referred to Chung
(1967) for a proof. The limiting probability p_j can be interpreted as the probability
that an outside observer finds the system in state j when the process has reached
statistical equilibrium and the observer has no knowledge of the past evolution
of the process. The notion of statistical equilibrium relates not only to the length
of time the process has been in operation but also to our knowledge of the past
evolution of the system. A more concrete interpretation, which better serves our
purposes, is that

    the long-run fraction of time the process will be in state j
        = p_j   with probability 1,                                  (4.2.1)

                independently of the initial state X(0) = i. More precisely, denoting for fixed j
                the indicator variable I j (t) by


$$
I_j(t) = \begin{cases} 1 & \text{if } X(t) = j, \\ 0 & \text{otherwise}, \end{cases}
$$
                it holds for any j ∈ I that

$$
\lim_{t \to \infty} \frac{1}{t} \int_0^t I_j(u)\, du = p_j \quad \text{with probability 1},
$$
                independently of the initial state X(0) = i. A proof of this result will be given in
                Section 4.3 using the theory of renewal-reward processes. In Section 4.3 we also
                prove the following important theorem.
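The time-average interpretation above can be checked numerically by simulating a continuous-time Markov chain and recording the fraction of time it spends in each state. The following sketch uses a hypothetical three-state chain with illustrative rates q_kj (not taken from the text); with exponential sojourn times of rate ν_j and jump probabilities q_jk/ν_j, the observed time fractions should approach the equilibrium probabilities p_j regardless of the starting state.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-state chain; q[k, j] is the transition rate q_kj (illustrative values).
q = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])
nu = q.sum(axis=1)                  # nu_j: total rate of leaving state j

T = 50_000.0                        # simulation horizon
t, state = 0.0, 0                   # arbitrary initial state X(0) = 0
time_in_state = np.zeros(3)

while t < T:
    hold = rng.exponential(1.0 / nu[state])    # exponential sojourn in current state
    time_in_state[state] += min(hold, T - t)   # truncate the last sojourn at T
    t += hold
    # jump to state j with probability q_{state,j} / nu_{state}
    state = rng.choice(3, p=q[state] / nu[state])

print(time_in_state / T)            # long-run fractions of time, close to p_j
```

Rerunning with a different initial state gives (up to simulation noise) the same fractions, which is exactly the statement that the limit does not depend on X(0) = i.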

                Theorem 4.2.1 Suppose the continuous-time Markov chain {X(t)} satisfies
                Assumptions 4.1.2 and 4.2.1. Then the probabilities p j , j ∈ I are the unique solution
                to the linear equations


$$
\nu_j x_j = \sum_{k \neq j} q_{kj} x_k, \qquad j \in I, \tag{4.2.2}
$$

$$
\sum_{j \in I} x_j = 1 \tag{4.2.3}
$$
in the unknowns x_j, j ∈ I. Moreover, let {x_j, j ∈ I} be any solution to (4.2.2)
with $\sum_j x_j < \infty$. Then, for some constant c, x_j = c p_j for all j ∈ I.
                  The linear equations (4.2.2) are called the equilibrium equations or balance
                equations of the Markov process. The equation (4.2.3) is a normalizing equation.
The probabilities p_j are called the equilibrium probabilities of the continuous-time
                Markov chain. They can be computed by solving a system of linear equations.
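For a finite state space, solving (4.2.2)-(4.2.3) is a small linear-algebra exercise: write the balance equations as (diag(ν) − qᵀ)x = 0 and replace one (redundant) equation by the normalization Σ_j x_j = 1. The sketch below does this for a hypothetical three-state chain whose rates q_kj are illustrative, not from the text.

```python
import numpy as np

# Hypothetical 3-state example; q[k, j] is the transition rate q_kj (k != j).
q = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])
nu = q.sum(axis=1)                  # leaving rates nu_j

n = len(nu)
# Balance equations (4.2.2): nu_j x_j = sum_{k != j} q_kj x_k,
# i.e. (diag(nu) - q^T) x = 0.  The rows sum to the zero vector, so one
# equation is redundant; replace the last row by the normalization (4.2.3).
A = np.diag(nu) - q.T
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0

p = np.linalg.solve(A, b)
print(p)                            # equilibrium probabilities, p ≈ [0.4167, 0.3333, 0.25]
```

The same recipe works for any irreducible finite-state chain; for large or infinite state spaces one instead exploits structure in the equations, as later sections of the book illustrate.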