Interpretation of the $\pi_j$
Using elementary results from renewal theory, we have already seen from the proof
of Theorem 3.3.1 that for any state j,
the long-run average number of visits to state $j$ per time unit $= \pi_j$ with probability 1 (3.3.9)
when the process starts in state j. Under Assumption 3.3.1, the interpretation (3.3.9)
can easily be shown to hold for each starting state i ∈ I (this is obvious for a
transient state j and, by Lemma 3.5.8, a recurrent state j will be reached from
each initial state $X_0 = i$ after finitely many transitions with probability 1).
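A minimal simulation sketch makes (3.3.9) concrete: started from different states, the observed fraction of time the chain spends in a fixed state $j$ settles to the same value $\pi_j$. The three-state transition matrix below is a made-up illustration, not an example from the text.

```python
import numpy as np

# Hypothetical three-state chain, chosen only for illustration.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

rng = np.random.default_rng(42)

def visit_fraction(P, start, j, n_steps=200_000):
    """Long-run average number of visits to state j per time unit,
    estimated from a single simulated path of length n_steps."""
    state, visits = start, 0
    for _ in range(n_steps):
        state = rng.choice(len(P), p=P[state])
        visits += (state == j)
    return visits / n_steps

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0, 0, 0, 1.0]), rcond=None)[0]

for start in (0, 2):   # the estimate should not depend on the starting state
    print(start, visit_fraction(P, start, j=0), pi[0])
```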
The proof of Theorem 3.3.1 also showed that
$$\pi_j = \frac{1}{\mu_{jj}} \qquad \text{for each recurrent state } j, \tag{3.3.10}$$
where $\mu_{jj}$ is the mean recurrence time from state $j$ to itself.
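Relation (3.3.10) can also be checked numerically. The sketch below (again with a made-up matrix, not one from the text) estimates $\mu_{00}$ by averaging simulated return times to state 0 and compares $1/\mu_{00}$ with the stationary probability $\pi_0$.

```python
import numpy as np

# Illustrative three-state chain (an assumption, not from the text).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
rng = np.random.default_rng(0)

def mean_recurrence_time(P, j, n_cycles=20_000):
    """Average number of steps the chain, started in j, needs to return to j."""
    total = 0
    for _ in range(n_cycles):
        state, steps = j, 0
        while True:
            state = rng.choice(len(P), p=P[state])
            steps += 1
            if state == j:
                break
        total += steps
    return total / n_cycles

mu_00 = mean_recurrence_time(P, j=0)
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0, 0, 0, 1.0]), rcond=None)[0]
print(1 / mu_00, pi[0])   # the two numbers should nearly agree
```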
The interpretation (3.3.9) is most useful for our purposes. Using this interpretation, we can also
give a physical interpretation of the equilibrium equation (3.3.5). Each visit to
state j means a transition to state j (including self-transitions) and subsequently a
transition from state j. Thus
the long-run average number of transitions from state $j$ per time unit $= \pi_j$
and
the long-run average number of transitions from state $k$ to state $j$ per time unit $= \pi_k p_{kj}$.
This latter relation gives
the long-run average number of transitions to state $j$ per time unit $= \sum_{k \in I} \pi_k p_{kj}$.
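This balance of flows can be confirmed in a few lines. The sketch below (with the same made-up three-state matrix used above) solves the equilibrium equations for $\pi$ and checks that the average inflow $\sum_k \pi_k p_{kj}$ equals the average outflow $\pi_j$ for every state $j$.

```python
import numpy as np

# Illustrative transition matrix (an assumption, not taken from the text).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0, 0, 0, 1.0]), rcond=None)[0]

inflow = pi @ P                  # component j equals sum_k pi_k p_kj
print(np.allclose(inflow, pi))   # True: inflow equals outflow for each state
```

In vector form the balance property is simply $\pi = \pi P$, which is the equilibrium equation (3.3.5).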
By physical considerations, the long-run average number of transitions to state j
per time unit must be equal to the long-run average number of transitions from
state j per time unit. Why? Hence the equilibrium equations express that the
long-run average number of transitions from state j per time unit equals the long-run average number of transitions to state j per time unit for all $j \in I$. The
simplest way to memorize the equilibrium equations is provided by the following
heuristic. Suppose that $\lim_{n\to\infty} p_{ij}^{(n)}$ exists so that $\pi_j = \lim_{n\to\infty} p_{ij}^{(n)}$. Next apply