Page 173 - Probability, Random Variables and Random Processes
CHAP. 5] RANDOM PROCESSES
G. Ergodic Processes:
Consider a random process {X(t), −∞ < t < ∞} with a typical sample function x(t). The time
average of x(t) is defined as
$$\bar{x} = \langle x(t) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, dt$$
Similarly, the time autocorrelation function $\bar{R}_X(\tau)$ of x(t) is defined as
$$\bar{R}_X(\tau) = \langle x(t)\,x(t + \tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\,x(t + \tau)\, dt$$
A random process is said to be ergodic if it has the property that the time averages of sample
functions of the process are equal to the corresponding statistical or ensemble averages. The subject
of ergodicity is extremely complicated. However, in most physical applications, it is assumed that
stationary processes are ergodic.
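As a numerical sketch of ergodicity in the mean (an illustrative example, not from the text), the following Python snippet uses the assumed random-phase process X(t) = cos(2πt + Θ), with Θ uniform on [0, 2π), and compares the time average of one sample function with the ensemble average at a fixed time. For an ergodic process the two estimates should agree, here both near the true mean 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) process: X(t) = cos(2*pi*t + Theta),
# Theta ~ Uniform[0, 2*pi); it is ergodic in the mean with E[X(t)] = 0.
t = np.linspace(0.0, 200.0, 200_001)  # 200 full periods

# Time average <x(t)> of ONE sample function (one draw of Theta),
# approximated by averaging over a fine grid on [0, 200].
theta = rng.uniform(0.0, 2.0 * np.pi)
time_avg = np.mean(np.cos(2.0 * np.pi * t + theta))

# Ensemble average E[X(t0)] at a fixed time t0, over many draws of Theta.
t0 = 0.5
thetas = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
ensemble_avg = np.mean(np.cos(2.0 * np.pi * t0 + thetas))

print(time_avg, ensemble_avg)  # both should be near the true mean 0
```

Both estimates converge to 0 as the averaging window and the number of ensemble draws grow, which is exactly the equality of time and ensemble averages that defines ergodicity.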
5.5 DISCRETE-PARAMETER MARKOV CHAINS
In this section we treat a discrete-parameter Markov chain {X_n, n ≥ 0} with a discrete state
space E = {0, 1, 2, ...}, where this set may be finite or infinite. If X_n = i, then the Markov chain is
said to be in state i at time n (or at the nth step). A discrete-parameter Markov chain {X_n, n ≥ 0} is
characterized by [Eq. (5.27)]
$$P(X_{n+1} = j \mid X_0 = i_0, X_1 = i_1, \ldots, X_n = i) = P(X_{n+1} = j \mid X_n = i) \tag{5.32}$$
where $P(X_{n+1} = j \mid X_n = i)$ are known as one-step transition probabilities. If $P(X_{n+1} = j \mid X_n = i)$ is
independent of n, then the Markov chain is said to possess stationary transition probabilities and the
process is referred to as a homogeneous Markov chain. Otherwise the process is known as a nonhomogeneous
Markov chain. Note that the concepts of a Markov chain's having stationary transition
probabilities and being a stationary random process should not be confused. The Markov process, in
general, is not stationary. We shall consider only homogeneous Markov chains in this section.
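The one-step dynamics above can be sketched numerically. The following Python snippet (an illustrative sketch, not from the text) simulates a hypothetical two-state homogeneous chain, using the same assumed transition probabilities at every step, and then estimates those probabilities back from the sample path.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state homogeneous chain on E = {0, 1}; the entries
# are illustrative values, not from the text.  Row i holds
# P(X_{n+1} = j | X_n = i), the same at every step n (homogeneity).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def simulate(P, x0, n, rng):
    """Simulate n steps: X_{k+1} is drawn from row X_k of P."""
    x = [x0]
    for _ in range(n):
        x.append(rng.choice(len(P), p=P[x[-1]]))
    return np.array(x)

x = simulate(P, x0=0, n=200_000, rng=rng)

# Estimate the one-step transition probabilities from the sample path:
# among visits to state i, count the fraction of transitions into j.
P_hat = np.empty_like(P)
for i in range(len(P)):
    successors = x[1:][x[:-1] == i]
    for j in range(len(P)):
        P_hat[i, j] = np.mean(successors == j)

print(P_hat)  # should be close to P
```

Because the chain is homogeneous, a single long path suffices to estimate the whole matrix; for a nonhomogeneous chain the empirical frequencies would mix different transition laws across steps.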
A. Transition Probability Matrix:
Let {X_n, n ≥ 0} be a homogeneous Markov chain with a discrete infinite state space E = {0, 1,
2, ...}. Then
$$P(X_{n+1} = j \mid X_n = i) = p_{ij}$$
regardless of the value of n. A transition probability matrix of {X_n, n ≥ 0} is defined by
$$P = [p_{ij}] = \begin{bmatrix} p_{00} & p_{01} & p_{02} & \cdots \\ p_{10} & p_{11} & p_{12} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$
where the elements satisfy
$$p_{ij} \ge 0, \qquad \sum_{j} p_{ij} = 1, \qquad i = 0, 1, 2, \ldots$$
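The two defining conditions on the elements, nonnegativity (p_ij ≥ 0) and unit row sums (Σ_j p_ij = 1), are easy to check in code. The following Python sketch is illustrative; the matrix values and the helper name `is_stochastic` are hypothetical, not from the text.

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """Check the two defining conditions of a transition probability
    matrix: p_ij >= 0 and every row sums to 1."""
    P = np.asarray(P, dtype=float)
    return bool(np.all(P >= 0.0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))

# Hypothetical 3-state matrix with illustrative values (not from the text).
P = [[0.5, 0.25, 0.25],
     [0.2, 0.60, 0.20],
     [0.0, 0.30, 0.70]]

print(is_stochastic(P))            # True: each row is a valid distribution
print(is_stochastic([[0.5, 0.6],
                     [0.5, 0.5]])) # False: first row sums to 1.1
```

Each row of P is the conditional distribution of the next state given the current one, which is why every row, but not necessarily every column, must sum to 1.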