when X_0 = ℓ. However, this result also holds when the Markov chain starts in state s. To see this, define the indicator variable I_k equal to 1 if X_k = ℓ and I_k equal to 0 otherwise. Let ω = (s, i_1, i_2, ...) be any realization of the Markov chain with i_k denoting the realized state at the kth state transition. Since f_{sℓ} = 1, we have for almost all ω that there is a finite integer t = t(ω) such that i_t = ℓ. Hence, for n > t(ω),
\[
\frac{1}{n} \sum_{k=1}^{n} I_k(\omega) \;=\; \frac{1}{n} \sum_{k=1}^{t(\omega)} I_k(\omega) \;+\; \frac{1}{n} \sum_{k=t(\omega)+1}^{n} I_k(\omega).
\]
Letting n → ∞, the first term on the right-hand side of this equation converges to zero, and the second term converges to π_ℓ, since by the Markov property the process beyond epoch t(ω) behaves as a Markov chain starting in state ℓ, for which (3.5.13) has already been established. This proves that (3.5.13) also holds when X_0 = s. Together (3.5.12), (3.5.13) and the relation 1/µ_{ss} = π_s yield

\[
\pi_\ell = \pi_s \, E(\text{number of visits to state } \ell \text{ in one cycle}),
\]

which proves the desired result.
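To see the lemma at work numerically, the following minimal Python sketch simulates a small irreducible Markov chain, estimates π_ℓ/π_s from the long-run fractions of time spent in the two states, and compares it with the average number of visits to ℓ between successive returns to s. The three-state transition matrix, the choices s = 0 and ℓ = 2, and the function names are illustrative assumptions only.

```python
import random

# Illustrative three-state transition matrix (an arbitrary choice).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

def step(state, rng):
    """Draw the next state from row `state` of P."""
    u, acc = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u <= acc:
            return j
    return len(P) - 1

def equilibrium_estimate(n=10**6, seed=1):
    """Estimate pi_j as the long-run fraction of transitions ending in state j."""
    rng = random.Random(seed)
    counts = [0] * len(P)
    state = 0
    for _ in range(n):
        state = step(state, rng)
        counts[state] += 1
    return [c / n for c in counts]

def visits_per_cycle(s, ell, cycles=10**5, seed=2):
    """Average number of visits to `ell` between two successive returns to `s`."""
    rng = random.Random(seed)
    total, state = 0, s
    for _ in range(cycles):
        while True:
            state = step(state, rng)
            if state == ell:
                total += 1
            if state == s:
                break
    return total / cycles

pi = equilibrium_estimate()
print(pi[2] / pi[0])           # pi_ell / pi_s  (s = 0, ell = 2)
print(visits_per_cycle(0, 2))  # estimated E(visits to ell in one cycle)
```

The two printed numbers should agree up to simulation error, in accordance with π_ℓ = π_s E(number of visits to state ℓ in one cycle).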
In Example 3.1.3, dealing with the GI/M/1 queue, we tried a solution of the form π_j = γ τ^j, j ≥ 0, for the equilibrium distribution of the Markov chain {X_n} describing the number of customers present just prior to the arrival epochs.
This geometric form can be proved by using Lemma 3.5.10. Since the arrival rate
is less than the service rate, Assumption 3.3.1 is satisfied with the regeneration
state 0. Since any two states of the Markov chain {X_n} communicate, it follows
from Lemma 3.5.2 and Theorem 3.5.3 that the state space I is an irreducible set
consisting of (positive) recurrent states. Hence, by Lemma 3.5.10, we have for the
GI/M/1 queue that
\[
E(\text{number of visits to state } j+1 \text{ between two successive returns to state } j)
= \frac{\pi_{j+1}}{\pi_j} \quad \text{for } j = 0, 1, \ldots. \tag{3.5.14}
\]
Some reflection shows that the left-hand side of this equation is independent of j by the memoryless property of the exponential distribution for the service times. Hence, for some constant η, π_{j+1}/π_j = η for all j ≥ 0, showing that π_j = π_0 η^j for j ≥ 0.
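The geometric form can also be checked by simulation. The sketch below records the embedded state just before each arrival epoch for a single-server queue with deterministic interarrival times (a particular case of GI) and exponential services, and prints the ratios π_{j+1}/π_j, which should be roughly the same constant η for every j. The parameter values (interarrival time 1, service rate 1.5, so that the arrival rate is below the service rate) and the function name are illustrative assumptions.

```python
import random

def embedded_state_counts(n_arrivals=200_000, interarrival=1.0, mu=1.5, seed=3):
    """Record the number of customers present just before each arrival epoch in a
    single-server FCFS queue with deterministic interarrival times and
    exponential(mu) service times (a D/M/1 instance of the GI/M/1 queue)."""
    rng = random.Random(seed)
    t = 0.0
    departures = []   # departure times of customers still present, in arrival order
    counts = {}
    for _ in range(n_arrivals):
        j = len(departures)                            # state seen just before this arrival
        counts[j] = counts.get(j, 0) + 1
        start = departures[-1] if departures else t    # service starts when the server frees up
        departures.append(start + rng.expovariate(mu))
        t += interarrival                              # next (deterministic) arrival epoch
        departures = [d for d in departures if d > t]  # drop customers served by then
    return counts

counts = embedded_state_counts()
total = sum(counts.values())
for j in range(6):
    # pi_{j+1} / pi_j should be roughly the same constant eta for every j
    print(j, counts.get(j, 0) / total, counts.get(j + 1, 0) / max(counts.get(j, 0), 1))
```

For these parameter values all printed ratios should cluster around a single value of η, in agreement with π_j = π_0 η^j.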
Next we prove Theorem 3.3.3. The proof is very similar to that of Lemma 3.5.10.
Assume that the Markov chain earns a reward f(j) each time it visits state j.
Theorem 3.5.11 Suppose that the Markov chain {X_n} satisfies the Assumptions 3.3.1 and 3.3.2. Then
\[
\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} f(X_k) = \sum_{j \in I} f(j) \pi_j \quad \text{with probability 1}
\]

for each initial state X_0 = i.
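The ergodic statement of the theorem also lends itself to a numerical check. In the sketch below, assuming the same kind of small illustrative chain as before together with an arbitrary reward function f, the sample-path average of the rewards is compared with the sum Σ_{j∈I} f(j)π_j, where π is obtained by iterating π ← πP.

```python
import random

# Illustrative chain with a reward per visit (arbitrary choices).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
f = [1.0, 4.0, 2.0]   # reward f(j) earned on each visit to state j

def equilibrium(iterations=1000):
    """Approximate the equilibrium distribution by repeated multiplication pi <- pi P."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

def long_run_average_reward(n=10**6, seed=4):
    """Sample-path average (1/n) * sum_{k=1}^n f(X_k), starting from state 0."""
    rng = random.Random(seed)
    state, total = 0, 0.0
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            acc += p
            if u <= acc:
                state = j
                break
        else:
            state = len(P) - 1
        total += f[state]
    return total / n

pi = equilibrium()
print(sum(f[j] * pi[j] for j in range(len(P))))  # sum over j in I of f(j) pi_j
print(long_run_average_reward())                 # should agree up to simulation error
```

Changing the initial state in long_run_average_reward leaves the limit unchanged, in line with the theorem.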