Page 365 - Numerical Methods for Chemical Engineering
7 Probability theory and stochastic simulation
depends only upon the values of the previous m members of the sequence,
$$P\bigl(q^{[k]} \,\big|\, q^{[k-1]}, q^{[k-2]}, \ldots, q^{[1]}, q^{[0]}\bigr) = P\bigl(q^{[k]} \,\big|\, q^{[k-1]}, q^{[k-2]}, \ldots, q^{[k-m]}\bigr) \tag{7.228}$$
Of particular importance are first-order Markov processes,
$$P\bigl(q^{[0]}, q^{[1]}, \ldots, q^{[N]}\bigr) = P\bigl(q^{[0]}\bigr) \times P\bigl(q^{[1]} \,\big|\, q^{[0]}\bigr) \times P\bigl(q^{[2]} \,\big|\, q^{[1]}\bigr) \times P\bigl(q^{[3]} \,\big|\, q^{[2]}\bigr) \times \cdots \times P\bigl(q^{[N]} \,\big|\, q^{[N-1]}\bigr) \tag{7.229}$$
Here, the conditional probabilities take the form of transition probabilities,
$$P\bigl(q^{[k]} \,\big|\, q^{[k-1]}\bigr) \equiv T\bigl(q^{[k-1]} \to q^{[k]}\bigr) \tag{7.230}$$
Each sequence $q^{[0]}, q^{[1]}, \ldots, q^{[N]}$ generated by a Markov process is called a Markov
chain. Markov chain Monte Carlo (MCMC) simulation is the general term given to the
use of computers (with hopefully good random number generators) to generate Markov
chains. Here, we focus on the case where the transition probabilities are defined such that
the members of the Markov chain are distributed according to some specified distribution
P(q).
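As a minimal sketch of generating a Markov chain on a computer, consider the following (the two-state system, its transition probabilities, and the function name are hypothetical choices for illustration):

```python
import random

# Hypothetical two-state Markov process; T[i][j] is the transition
# probability T(i -> j), so each row of probabilities must sum to 1.
T = {0: {0: 0.9, 1: 0.1},
     1: {0: 0.5, 1: 0.5}}

def markov_chain(T, q0, N, rng=random.Random(0)):
    """Generate a Markov chain q[0], q[1], ..., q[N] from transitions T."""
    chain = [q0]
    for _ in range(N):
        states, probs = zip(*T[chain[-1]].items())
        # First-order process: the next state depends only on the current one.
        chain.append(rng.choices(states, weights=probs)[0])
    return chain

chain = markov_chain(T, q0=0, N=100_000)
# The long-run fraction of time spent in state 0 should approach the
# stationary probability pi_0 = 5/6, obtained by solving pi = pi T.
print(sum(1 for q in chain if q == 0) / len(chain))
```

For this simple chain the stationary distribution is known in closed form, so the long-run state frequencies provide a direct check that the sampler is working.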
Monte Carlo simulation in statistical mechanics
Consider a system at constant temperature and volume, with a state vector q and an energy
function U(q), such that at equilibrium, the probability of finding the system in state q is
described by the Boltzmann distribution
$$P(q) = \frac{1}{Z}\exp\left(-\frac{U(q)}{k_\mathrm{B} T}\right), \qquad Z = \sum_q \exp\left(-\frac{U(q)}{k_\mathrm{B} T}\right) \tag{7.231}$$
As $T \to 0$, the system is confined more closely to its minimum potential energy state, until at $T = 0^{+}$, only the minimum energy state has a nonzero probability of being observed.
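This limiting behavior is easy to verify numerically. The sketch below evaluates (7.231) for a hypothetical three-state system (the energies are illustrative, with $k_\mathrm{B} = 1$):

```python
import math

# Hypothetical three-state system; state 0 is the minimum-energy state.
# Energies are in units where k_B = 1.
U = [0.0, 1.0, 2.0]

def boltzmann(U, T):
    """Equilibrium probabilities P(q) from eq. (7.231), with k_B = 1."""
    weights = [math.exp(-u / T) for u in U]
    Z = sum(weights)              # partition function
    return [w / Z for w in weights]

for T in (10.0, 1.0, 0.1):
    print(T, boltzmann(U, T))
```

At high temperature the three probabilities approach the uniform value 1/3, while as T decreases essentially all of the probability concentrates on the minimum-energy state, as described above.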
In Monte Carlo simulation, we generate a random sequence $q^{[1]}, q^{[2]}, q^{[3]}, \ldots$ of states that
are sampled according to the probability distribution P(q). That is, the number of members
of such a sequence within some region of volume dq around q is proportional to P(q)dq.
Let us assume that we have some way of generating a random sequence with the correct
probability distribution, and that at iteration k we are at the state $q^{[k]}$. We then propose at
random a move to a new state $q^{(\mathrm{new})}$, where the probability of proposing this move is $\gamma\bigl(q^{[k]} \to q^{(\mathrm{new})}\bigr)$. We now either accept or reject this move in such a way as to sample correctly
from P(q).
To do so, we define an acceptance probability $\alpha\bigl(q^{[k]} \to q^{(\mathrm{new})}\bigr)$, and if we generate a
random number u, distributed uniformly on [0, 1], that is less than or equal to this probability,
we accept the move. That is, the new state is generated by the rule
$$q^{[k+1]} = \begin{cases} q^{(\mathrm{new})}, & \text{if } u \le \alpha\bigl(q^{[k]} \to q^{(\mathrm{new})}\bigr) \\[4pt] q^{[k]}, & \text{if } u > \alpha\bigl(q^{[k]} \to q^{(\mathrm{new})}\bigr) \end{cases} \tag{7.232}$$
Note that if we reject the move, we then count the current state again.
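The accept/reject rule (7.232) can be sketched in code. The acceptance probability has not yet been specified here, so this example assumes the common Metropolis choice $\alpha = \min\bigl(1, \exp(-\Delta U / k_\mathrm{B} T)\bigr)$ with a symmetric proposal $\gamma$; the energy function and all parameter values are illustrative:

```python
import math
import random

def sample_boltzmann(U, q0, N, step=1.5, T=1.0, rng=random.Random(0)):
    """Sample states q ~ exp(-U(q)/T) (k_B = 1) via the rule (7.232).

    Assumes a symmetric uniform proposal and the Metropolis acceptance
    alpha = min(1, exp(-dU/T)); these are illustrative choices, since the
    text has not yet specified alpha.
    """
    chain = [q0]
    q = q0
    for _ in range(N):
        q_new = q + rng.uniform(-step, step)     # symmetric proposal gamma
        alpha = min(1.0, math.exp(-(U(q_new) - U(q)) / T))
        u = rng.random()                         # u uniform on [0, 1)
        if u <= alpha:
            q = q_new                            # accept the move
        chain.append(q)                          # a rejected move recounts q

    return chain

# Harmonic energy U(q) = q^2/2 at T = 1, so P(q) is a standard normal.
chain = sample_boltzmann(lambda q: 0.5 * q * q, q0=0.0, N=200_000)
mean = sum(chain) / len(chain)
var = sum(q * q for q in chain) / len(chain)
print(mean, var)   # sample mean and variance should be near 0 and 1
```

Note that the rejected states are appended again, exactly as the rule above requires; dropping them instead would bias the sampled distribution away from P(q).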