
RENEWAL-REWARD PROCESSES                     45

                periodic discrete-time Markov chains; see Chapter 3. For completeness we state
                the following theorem.

                Theorem 2.2.4 For the regenerative process {X(t), t ∈ T},

                                lim_{t→∞} P{X(t) ∈ B} = E(T_B) / E(C_1),

                provided that the probability distribution of the cycle length has a continuous part
                in the continuous-time case and is aperiodic in the discrete-time case.

                  A distribution function is said to have a continuous part if it has a positive
                density on some interval. A discrete distribution {a_j, j = 0, 1, . . . } is said to
                be aperiodic if the greatest common divisor of the indices j ≥ 1 for which
                a_j > 0 is equal to 1. The proof of Theorem 2.2.4 requires deep mathematics
                and is beyond the scope of this book. The interested reader is referred to Miller
                (1972). It is remarkable that the proof of Theorem 2.2.3 for the time-average limit
                lim_{t→∞} (1/t) ∫_0^t I_B(u) du is much simpler than the proof of Theorem 2.2.4 for
                the ordinary limit lim_{t→∞} P{X(t) ∈ B}. This is all the more striking when we
                take into account that the time-average limit is in general much more useful
                for practical purposes than the ordinary limit. Another advantage of the time-
                average limit is that it is easier to understand than the ordinary limit. In interpret-
                ing the ordinary limit one should be quite careful. The ordinary limit represents
                the probability that an outside person will find the process in some state of the
                set B when inspecting the process at an arbitrary point in time after the process
                has been in operation for a very long time. It is essential for this interpretation
                that the outside person has no information about the past of the process when
                inspecting the process. How much more concrete is the interpretation of the time-
                average limit as the long-run fraction of time the process will spend in the set B
                of states!
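The aperiodicity condition above is easy to check computationally: a discrete cycle-length distribution is aperiodic precisely when the greatest common divisor of its support indices j ≥ 1 equals 1. As a minimal illustrative sketch (the function name and example supports are ours, not from the text):

```python
from math import gcd
from functools import reduce

def is_aperiodic(support):
    """Return True if a discrete cycle-length distribution is aperiodic,
    i.e. the gcd of the indices j >= 1 with a_j > 0 equals 1."""
    return reduce(gcd, support) == 1

# A support of {2, 4, 6} has gcd 2, so the distribution is periodic;
# enlarging the support to include 3 brings the gcd down to 1.
print(is_aperiodic([2, 4, 6]))     # False
print(is_aperiodic([2, 3, 4, 6]))  # True
```

For instance, a cycle length that is always an even number of time units is periodic with period 2, and the ordinary limit in Theorem 2.2.4 then fails to exist.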
                  To illustrate Theorem 2.2.4, consider again Example 2.2.1. In this example we
                analysed the long-run average behaviour of the regenerative process {X(t)}, where
                X(t) = 1 if the machine is up at time t and X(t) = 0 otherwise. It was shown that
                the long-run fraction of time the machine is down equals E(D)/[E(U) + E(D)],
                where the random variables U and D denote the lengths of an up-period and a
                down-period. This result does not require any assumption about the shapes of the
                probability distributions of U and D. However, some assumption is needed in order
                to conclude that
                        lim_{t→∞} P{the system is down at time t} = E(D) / [E(U) + E(D)].       (2.2.2)
                It is sufficient to assume that the distribution function of the length of an up-period
                has a positive density on some interval.
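A small simulation makes (2.2.2) concrete. The sketch below (our own illustration; the exponential choice for U and D is an assumption that satisfies the positive-density condition, with E(U) = 3 and E(D) = 1) estimates the probability that the machine is down at a fixed large time t by simulating many independent copies of the alternating up/down process:

```python
import random

def sim_down_probability(t, n_runs=20000, mean_up=3.0, mean_down=1.0):
    """Estimate P{machine is down at time t} by simulating alternating
    exponential up-periods (mean mean_up) and down-periods (mean mean_down).
    The exponential up-period has a positive density, so Theorem 2.2.4 applies."""
    down_count = 0
    for _ in range(n_runs):
        clock, up = 0.0, True          # each run starts with a fresh up-period
        while True:
            mean = mean_up if up else mean_down
            length = random.expovariate(1.0 / mean)
            if clock + length > t:     # time t falls inside the current period
                break
            clock += length
            up = not up                # alternate between up and down
        if not up:
            down_count += 1
    return down_count / n_runs

# Theoretical limit (2.2.2): E(D) / (E(U) + E(D)) = 1 / (3 + 1) = 0.25
print(sim_down_probability(t=100.0))  # estimate close to 0.25
```

The estimate converges to E(D)/[E(U) + E(D)] = 0.25 as t and the number of runs grow, in agreement with the theorem.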
                  We state without proof a central limit theorem for the renewal-reward process.