Also, define the random variable
$$T_B = \text{the amount of time the process spends in the set } B \text{ of states during one cycle.}$$
Note that $T_B = \int_0^{S_1} I_B(u)\,du$ for a continuous-time process $\{X(t)\}$; otherwise, $T_B$ equals the number of indices $0 \le k < S_1$ with $X(k) \in B$. The following theorem
is an immediate consequence of the renewal-reward theorem.
Theorem 2.2.3 For the regenerative process $\{X(t)\}$ it holds that the long-run fraction of time the process spends in the set $B$ of states is $E(T_B)/E(C_1)$ with probability 1.
That is,
$$\lim_{t\to\infty} \frac{1}{t}\int_0^t I_B(u)\,du = \frac{E(T_B)}{E(C_1)} \quad \text{with probability 1}$$
for a continuous-time process $\{X(t)\}$ and
$$\lim_{n\to\infty} \frac{1}{n}\sum_{k=0}^{n} I_B(k) = \frac{E(T_B)}{E(C_1)} \quad \text{with probability 1}$$
for a discrete-time process $\{X(n)\}$.
Proof  The long-run fraction of time the process $\{X(t)\}$ spends in the set $B$ of states can be interpreted as a long-run average reward per time unit by assuming that a reward at rate 1 is earned while the process is in the set $B$ and a reward at rate 0 is earned otherwise. Then
$$E(\text{reward earned during one cycle}) = E(T_B).$$
The desired result next follows by applying the renewal-reward theorem.
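As an illustration, the ratio $E(T_B)/E(C_1)$ of Theorem 2.2.3 can be checked numerically for a simple hypothetical regenerative process in which each cycle consists of an exponentially distributed "on" period spent in $B$ followed by a uniformly distributed "off" period spent outside $B$. The following sketch (Python, with assumed parameters not taken from the text) estimates the long-run fraction of time in $B$ by simulation and compares it with $E(T_B)/E(C_1)$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical regenerative process (assumed for illustration): each cycle is
# an exponential "on" period spent in B followed by a uniform "off" period.
mean_on, off_low, off_high = 2.0, 0.0, 3.0   # assumed parameters
n_cycles = 200_000

on = rng.exponential(mean_on, n_cycles)          # T_B for each cycle
off = rng.uniform(off_low, off_high, n_cycles)   # remainder of each cycle
cycle = on + off                                 # C_1 for each cycle

# Long-run fraction of time spent in B over the simulated horizon
frac_sim = on.sum() / cycle.sum()

# The ratio E(T_B)/E(C_1) from Theorem 2.2.3
frac_theory = mean_on / (mean_on + (off_low + off_high) / 2)

print(f"simulated fraction of time in B: {frac_sim:.4f}")
print(f"E(T_B)/E(C_1)                  : {frac_theory:.4f}")
```

With these assumed parameters both numbers should be close to $2.0/3.5 \approx 0.571$, in agreement with the theorem.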
Since $E(I_B(t)) = P\{X(t) \in B\}$, we have as a consequence of Theorem 2.2.3 and the bounded convergence theorem that, for a continuous-time process,
$$\lim_{t\to\infty} \frac{1}{t}\int_0^t P\{X(u) \in B\}\,du = \frac{E(T_B)}{E(C_1)}.$$
t
Note that (1/t) P {X(u) ∈ B} du can be interpreted as the probability that an
0
outside observer arriving at a randomly chosen point in (0, t) finds the process in
the set B.
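To connect this with the random-observer interpretation, the sketch below (again Python, using the same assumed on/off process as above) simulates a single long path and checks that the fraction of uniformly arriving observers who find the process in $B$ is close to $E(T_B)/E(C_1)$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Same hypothetical on/off process as above (assumed parameters); simulate one
# path up to a long horizon t and let outside observers arrive at uniformly
# chosen epochs in (0, t).
mean_on, off_low, off_high = 2.0, 0.0, 3.0
t_horizon = 50_000.0

epochs, t = [0.0], 0.0
while t < t_horizon:
    on = rng.exponential(mean_on)
    off = rng.uniform(off_low, off_high)
    epochs += [t + on, t + on + off]     # end of "on" period, end of cycle
    t += on + off

# An observer at time u finds the process in B iff u lies in an "on" interval,
# i.e. iff u falls between an even-indexed epoch and the next odd-indexed one.
observers = rng.uniform(0.0, t_horizon, 100_000)
idx = np.searchsorted(epochs, observers)
found_in_B = (idx % 2 == 1)

print("fraction of observers finding the process in B:", found_in_B.mean())
print("E(T_B)/E(C_1):", mean_on / (mean_on + (off_low + off_high) / 2))
```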
In many situations the ratio $E(T_B)/E(C_1)$ can be interpreted both as the long-run fraction of time the process $\{X(t)\}$ spends in the set $B$ of states and as the probability of finding the process in the set $B$ when the process has reached statistical equilibrium. This raises the question of whether $\lim_{t\to\infty} P\{X(t) \in B\}$ always exists. This ordinary limit need not exist. A counterexample is provided by