

Theorem 4.2.2 Suppose the continuous-time Markov chain $\{X(t)\}$ satisfies Assumptions 4.1.2, 4.2.1 and 4.2.2. Then, for each initial state $X(0) = i$,

$$\lim_{t\to\infty} \frac{R(t)}{t} \;=\; \sum_{j\in I} r(j)\,p_j \;+\; \sum_{j\in I} p_j \sum_{k\neq j} q_{jk} F_{jk} \quad\text{with probability 1.}$$
A proof of this ergodic theorem will be given in Section 4.3. Intuitively, the theorem can be understood by noting that $p_j$ gives the long-run fraction of time the process is in state $j$ and $p_j q_{jk}$ gives the long-run average number of transitions from state $j$ to state $k$ per time unit.
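To see the theorem at work numerically, the following sketch simulates a small two-state chain and compares the empirical average reward $R(T)/T$ with the right-hand side of the theorem. All rates, reward rates and lump rewards here are invented for illustration; this is a minimal check of the formula, not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state chain: off-diagonal rates q[j][k], reward
# rates r(j) per unit time in state j, and lump rewards F[j][k]
# earned at each j -> k transition. All numbers invented.
q = np.array([[0.0, 2.0],
              [3.0, 0.0]])
r = np.array([1.0, 5.0])
F = np.array([[0.0, 0.5],
              [2.0, 0.0]])
nu = q.sum(axis=1)                        # leaving rates nu_j

# Equilibrium probabilities p_j from p Q = 0 together with sum(p) = 1.
Qgen = q - np.diag(nu)
A = np.vstack([Qgen.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
p = np.linalg.lstsq(A, b, rcond=None)[0]

rhs = (r * p).sum() + sum(p[j] * q[j, k] * F[j, k]
                          for j in range(2) for k in range(2) if k != j)

# Simulate the chain up to time T, accumulating the reward R(t).
T, t, state, R = 10_000.0, 0.0, 0, 0.0
while t < T:
    hold = min(rng.exponential(1.0 / nu[state]), T - t)
    R += r[state] * hold                  # reward earned at rate r(state)
    t += hold
    if t < T:                             # a transition occurs: lump reward
        nxt = rng.choice(2, p=q[state] / nu[state])
        R += F[state, nxt]
        state = nxt

print(f"simulated R(T)/T = {R / T:.4f}, theorem gives {rhs:.4f}")
```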


                Example 4.1.1 (continued) Inventory control for an inflammable product

Suppose that the following costs are incurred in the inventory model. For each unit kept in stock, a holding cost $h > 0$ is incurred per unit of time. A penalty cost $R > 0$ is incurred for each demand that is lost, and a fixed cost $K > 0$ is incurred for each inventory replenishment. Then the long-run average cost per time unit equals

$$h \sum_{j=0}^{Q} j\,p_j + R\lambda p_0 + K\mu p_0.$$

Strictly speaking, the cost term $R\lambda p_0$ is not covered by Theorem 4.2.2. However, by using part (a) of Theorem 2.4.1 it can be shown that the long-run average amount of demand that is lost per time unit equals $\lambda p_0$.
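As a concrete check, the following sketch computes the equilibrium probabilities $p_j$ and this average cost numerically. It assumes the transition rates of Example 4.1.1 (a demand at rate $\lambda$ moves the stock from $j$ to $j-1$ for $j \geq 1$, and an exponential($\mu$) replenishment moves it from $0$ back to $Q$); the parameter values are illustrative only.

```python
import numpy as np

# Equilibrium probabilities and average cost for the inventory model,
# assuming the rates of Example 4.1.1. Parameter values are invented.
Qcap, lam, mu = 5, 2.0, 0.8
h, Rpen, K = 1.0, 10.0, 25.0              # holding, penalty, fixed cost

n = Qcap + 1                              # states 0, 1, ..., Q
Qgen = np.zeros((n, n))
for j in range(1, n):
    Qgen[j, j - 1] = lam                  # demand: j -> j-1
Qgen[0, Qcap] = mu                        # replenishment: 0 -> Q
Qgen -= np.diag(Qgen.sum(axis=1))         # diagonal = minus leaving rate

A = np.vstack([Qgen.T, np.ones(n)])       # p Q = 0 together with sum(p) = 1
b = np.zeros(n + 1)
b[-1] = 1.0
p = np.linalg.lstsq(A, b, rcond=None)[0]

avg_cost = (h * sum(j * p[j] for j in range(n))
            + Rpen * lam * p[0] + K * mu * p[0])
print(f"p_0 = {p[0]:.4f}, average cost = {avg_cost:.4f}")
print(f"lost demand per time unit = {lam * p[0]:.4f}")
```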



                                   4.3  ERGODIC THEOREMS

                In this section we prove Theorems 4.2.1 and 4.2.2. The proofs rely heavily on
                earlier results for the discrete-time Markov chain model. In our analysis we need
the embedded Markov chain $\{X_n,\, n = 0, 1, \ldots\}$, where $X_n$ is defined by

$$X_n = \text{the state of the continuous-time Markov chain just after the } n\text{th state transition}$$

with the convention that $X_0 = X(0)$. The one-step transition probabilities of the discrete-time Markov chain $\{X_n\}$ are given by


$$p_{ij} = \begin{cases} q_{ij}/\nu_i, & j \neq i, \\ 0, & j = i; \end{cases} \qquad (4.3.1)$$
see Section 4.1. It is readily verified that Assumption 4.2.1 implies that the embedded Markov chain $\{X_n\}$ satisfies the corresponding Assumption 3.3.1 and thus state $r$ is a positive recurrent state for the Markov chain $\{X_n\}$.
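As a small illustration of (4.3.1), the following sketch builds the embedded chain's one-step transition matrix from a given matrix of transition rates $q_{ij}$; the three-state rate matrix used here is hypothetical.

```python
import numpy as np

def embedded_chain(rates):
    """One-step transition matrix of the embedded chain, as in (4.3.1):
    p_ij = q_ij / nu_i for j != i and p_ii = 0, with nu_i = sum_k q_ik."""
    q = np.array(rates, dtype=float)
    np.fill_diagonal(q, 0.0)              # only off-diagonal rates count
    nu = q.sum(axis=1)                    # leaving rates nu_i (assumed > 0)
    return q / nu[:, None]

# Hypothetical three-state rate matrix, for illustration only.
P = embedded_chain([[0.0, 1.0, 2.0],
                    [0.5, 0.0, 0.5],
                    [3.0, 1.0, 0.0]])
print(P)                                  # zero diagonal, rows sum to 1
```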