The preceding problem is an example of what is known as a Markov process. For an understanding of this concept some knowledge of elementary probability is necessary. A Markov process is a system which has a finite set of states S_1, ..., S_n. At any instant the system is in a definite state and over a fixed period of time it changes to another state. The probability that the system changes from state S_j to state S_i over one time period is assumed to be a constant p_ij. The matrix

P = [p_ij]_{n,n}
is called the transition matrix of the system. In Example 8.2.4
there are two states: a person is either in or not in California.
The transition matrix is the matrix A.
Clearly all the entries of P lie in the interval [0, 1]; more importantly P has the property that the sum of the entries in any column equals 1. Indeed sum_{i=1}^n p_ij = 1 since it is certain that the system will change from state S_j to some state S_i. This property guarantees that 1 is an eigenvalue of P: indeed det(P - I) = 0, because the sum of the entries in any column of the matrix P - I is equal to zero, so that the rows of P - I add up to the zero row; the rows are therefore linearly dependent and the determinant is zero.
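These observations are easy to verify numerically. The short sketch below, written in Python with NumPy, uses a hypothetical two-state transition matrix (its entries are purely illustrative and are not those of the matrix A in Example 8.2.4); it checks that the columns sum to 1, that det(P - I) vanishes, and that 1 occurs among the eigenvalues of P.

    import numpy as np

    # Column j holds the probabilities of passing from state S_j to each
    # state S_i, so every column sums to 1 (entries are illustrative only).
    P = np.array([[0.9, 0.2],
                  [0.1, 0.8]])

    print(P.sum(axis=0))                 # column sums: [1. 1.]
    print(np.linalg.det(P - np.eye(2)))  # det(P - I) = 0 (up to rounding)
    print(np.linalg.eigvals(P))          # eigenvalues 1.0 and 0.7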
Suppose that we are interested in the behavior of the system over two time periods. For this we need to know the probability of going from state S_j to state S_i over two periods. Now the probability of the system going from S_j to S_i via S_k is p_ik p_kj, so the probability of going from state S_j to S_i over two periods is

sum_{k=1}^n p_ik p_kj.

But this is immediately recognizable as the (i,j) entry of P^2; therefore the transition matrix for the system over two time periods is P^2. More generally the transition matrix for the system over k time periods is seen to be P^k by similar considerations.
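As a further numerical check, the sketch below (again using NumPy, with an illustrative three-state matrix that does not come from the text) confirms that the (i, j) entry of P^2 equals the sum sum_{k=1}^n p_ik p_kj, and that the k-period transition matrix P^k, computed as a matrix power, is still column-stochastic.

    import numpy as np

    # An illustrative 3 x 3 transition matrix; each column sums to 1.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.3, 0.4, 0.3],
                  [0.2, 0.3, 0.5]])

    P2 = P @ P
    i, j = 0, 2
    two_step = sum(P[i, k] * P[k, j] for k in range(3))
    print(np.isclose(P2[i, j], two_step))   # True: the (i,j) entry of P^2

    P5 = np.linalg.matrix_power(P, 5)       # transition matrix over 5 periods
    print(P5.sum(axis=0))                   # columns still sum to 1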