Statistical Mechanics and the Metropolis Algorithm

Monte Carlo Simulation
Dr. Vikas Malik
Department of Physics and Material Science,
Jaypee Institute of Information Technology,
Noida.
The average value of an observable O is

⟨O⟩ = Σ_s O(s) exp(−H(s)/kT) / Σ_s exp(−H(s)/kT),

where the sum runs over all states s of the system. For N spins there are 2^N possible spin states, far too many to sum over exactly. The best we can do is to choose a subset of states and average over them. This will always introduce an error into the calculation. We choose a subset of M states {s_1, ..., s_M} at random from some probability distribution P(s). The estimate of the average value of the observable O becomes

O_M = Σ_{i=1}^{M} O(s_i) P(s_i)^{−1} exp(−H(s_i)/kT) / Σ_{i=1}^{M} P(s_i)^{−1} exp(−H(s_i)/kT).
The energy of most of these states is very high, and they contribute almost nothing to the mean value. Very few states (those close to the ground state) actually contribute to the mean. If we do simple sampling (the probability of picking each state is equal), we will pick very few states with low energies and mostly the useless high-energy states. So one once again needs to do importance sampling. If we choose our M states with the Boltzmann probability P(s) ∝ exp(−H(s)/kT), the weights cancel and the average value reduces to a simple arithmetic mean:

O_M = (1/M) Σ_{i=1}^{M} O(s_i).
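As a minimal sketch (not from the slides), consider a single spin s = ±1 in a field B, with E(s) = −B·s. This toy system is simple enough to sample exactly from the Boltzmann distribution, so the plain arithmetic mean of s over the sampled states should approach the exact magnetization tanh(B/kT), illustrating the estimator above.

    import math
    import random

    def sample_spin(B, kT):
        """Draw s = +-1 with Boltzmann probability exp(-E(s)/kT)/Z."""
        p_up = math.exp(B / kT) / (math.exp(B / kT) + math.exp(-B / kT))
        return 1 if random.random() < p_up else -1

    B, kT, M = 0.5, 1.0, 100_000
    estimate = sum(sample_spin(B, kT) for _ in range(M)) / M
    print(estimate, math.tanh(B / kT))  # the two values should be close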
Question: how exactly do we pick states so that each one appears with the correct Boltzmann probability?
Monte Carlo schemes rely on Markov processes as the
generating engine for the set of states used.
For our purposes, a Markov process is a mechanism which, given a system in one state u, generates a new state v of that system. It does so in a random fashion; it will not generate the same new state every time it is given the initial state u. The probability of generating the state v given u is called the transition probability P(u→v) for the transition from u to v, and for a true Markov process all the transition probabilities should satisfy two conditions: (a) they should not vary over time, and (b) they should depend only on the properties of the current states u and v, and not on any other states the system has passed through. These conditions mean that the probability of the Markov process generating the state v on being fed the state u is the same every time it is fed the state u, irrespective of anything else that has happened.
The transition probabilities P(u→v) must also satisfy the constraint

Σ_v P(u→v) = 1,

since on being fed the state u the process must generate some state v. P(u→u) can be nonzero: the system will either go to a new state or remain in the same state.
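As a small illustration (not from the slides), the transition probabilities of a finite chain can be stored as a matrix and the constraint checked directly; note the nonzero diagonal entries P(u→u):

    import numpy as np

    # Hypothetical 3-state chain: P[u, v] is the probability of going
    # from state u to state v. Each row must sum to 1, and the diagonal
    # entries P(u -> u) may be nonzero.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])

    print(P.sum(axis=1))  # every row sums to 1: [1. 1. 1.]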
The Markov process is chosen specially so that when it is run for long enough, starting from any state of the system, it will eventually produce a succession of states which appear with probabilities given by the Boltzmann distribution (the system will equilibrate). For this, two additional requirements must be met: ergodicity and detailed balance.
The condition of ergodicity is the requirement that it should be possible for our Markov process to reach any state of the system from any other state, if we run it for long enough. This is necessary to achieve our stated goal of generating states with their correct Boltzmann probabilities. Every state v appears with some non-zero probability p_v in the Boltzmann distribution, and if state v were inaccessible from another state u no matter how long we continue our process, then our goal is thwarted if we start in state u: the probability of finding v in our Markov chain of states will be zero, and not p_v as we require it to be.
The condition of ergodicity tells us that we are allowed to make some of the transition probabilities of our Markov process zero, but that there must be at least one path of non-zero transition probabilities between any two states that we pick.
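For a finite chain this condition can be checked directly. A standard reachability test (a sketch, not from the slides) is that the chain is irreducible, i.e. every state can reach every other, exactly when (I + P)^(n−1) has all entries positive:

    import numpy as np

    def is_ergodic(P):
        """Irreducibility test: every state reachable from every other."""
        n = P.shape[0]
        reach = np.linalg.matrix_power(np.eye(n) + P, n - 1)
        return bool(np.all(reach > 0))

    P_good = np.array([[0.5, 0.5],
                       [0.5, 0.5]])
    P_bad = np.array([[1.0, 0.0],   # two isolated states: a chain started
                      [0.0, 1.0]])  # in one can never reach the other
    print(is_ergodic(P_good), is_ergodic(P_bad))  # True False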
The probability w_v(t) of finding the system in state v at time t evolves according to the master equation

dw_v/dt = Σ_u [ w_u(t) P(u→v) − w_v(t) P(v→u) ].   (1.1)

As t goes to infinity the system equilibrates (reaches equilibrium) and w_v(t) tends to p_v. In equilibrium the LHS of eq. (1.1) is zero. To achieve this one sets the term in the brackets on the RHS equal to zero for every pair of states u, v. The resulting condition is called detailed balance:

p_u P(u→v) = p_v P(v→u).
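A quick numerical illustration (a sketch with a made-up 3-state chain, not from the slides): iterating the discrete-time analogue of eq. (1.1), w(t+1) = w(t) P, drives any starting distribution toward the stationary one.

    import numpy as np

    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])

    w = np.array([1.0, 0.0, 0.0])  # start with all probability in state 0
    for t in range(200):
        w = w @ P                  # one discrete time step

    print(w)          # the stationary distribution p
    print(w @ P - w)  # ~0: the LHS of eq. (1.1) vanishes in equilibrium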
Metropolis Algorithm
Detailed Balance:
Since we want our equilibrium probability to be the Boltzmann distribution, p_u = exp(−E_u/kT)/Z and p_v = exp(−E_v/kT)/Z. The detailed balance equation tells us that the transition probabilities should satisfy

P(u→v)/P(v→u) = p_v/p_u = exp(−(E_v − E_u)/kT).
We break the transition probability into two parts:

P(u→v) = g(u→v) A(u→v).

g(u→v) is the selection probability: the probability that, given an initial state u, our algorithm will generate a new target state v. A(u→v) is the acceptance ratio: the probability that we accept the new state v generated from u.
What we need from g and A (see the sketch after this list):

g: the energy of the new state should not differ much from that of the previous state, because we want to stay close to the equilibrium we have reached.

A: the acceptance ratio should be large, because otherwise we will rarely move to new states, which defeats the whole idea. We want to sample as many states as we can in a given time.
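In code, the split P(u→v) = g(u→v) A(u→v) corresponds to a propose/accept step. A minimal sketch, with hypothetical select() and accept() functions standing in for g and A:

    import random

    def markov_step(u, select, accept):
        """One step: propose v with probability g(u->v), accept with A(u->v)."""
        v = select(u)                     # selection probability g
        if random.random() < accept(u, v):
            return v                      # accepted: move to the new state
        return u                          # rejected: stay in u, so P(u->u) > 0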
Ising Model
Spins s_i = ±1 on a lattice interact through a nearest-neighbour coupling J, with B the applied magnetic field:

H = −J Σ_⟨ij⟩ s_i s_j − B Σ_i s_i,

where ⟨ij⟩ runs over nearest-neighbour pairs.
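A minimal sketch of this Hamiltonian for a two-dimensional square lattice, assuming periodic boundary conditions (the slides do not specify the lattice or boundaries):

    import numpy as np

    def ising_energy(s, J=1.0, B=0.0):
        """H = -J * sum_<ij> s_i s_j - B * sum_i s_i for a 2D spin array s."""
        # np.roll pairs each spin with its right and down neighbours, so
        # every nearest-neighbour bond is counted exactly once.
        bonds = np.sum(s * np.roll(s, 1, axis=0)) + np.sum(s * np.roll(s, 1, axis=1))
        return -J * bonds - B * np.sum(s)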
Single-spin-flip dynamics: select one spin at random from the lattice and flip it.
In equilibrium the LHS of the master equation is zero and we get the condition of detailed balance,

P_eq(s) P(s→s′) = P_eq(s′) P(s′→s),

where P_eq(s) is the Boltzmann distribution, P_eq(s) = exp(−H(s)/kT)/Z.
Now for single-spin-flip dynamics one chooses any spin on the lattice with equal probability. So if the total number of lattice sites is N, the selection probability is g(u→v) = 1/N for any pair of states u, v that differ by a single spin flip. The selection probabilities cancel, and the condition of detailed balance becomes

A(u→v)/A(v→u) = exp(−(E_v − E_u)/kT).
Metropolis Algorithm

Any acceptance ratio which satisfies the above equation can be used. One of the most efficient acceptance ratios was given by Metropolis:

A(u→v) = exp(−(E_v − E_u)/kT)   if E_v − E_u > 0,
A(u→v) = 1                      otherwise.

That is, a move that lowers (or preserves) the energy is always accepted, while a move that raises the energy is accepted with probability exp(−ΔE/kT).
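Putting the pieces together, here is a minimal sketch of single-spin-flip Metropolis for the 2D Ising model (square lattice and periodic boundaries assumed; the lattice size and temperature are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def metropolis_sweep(s, J=1.0, B=0.0, kT=2.0):
        """One sweep = N attempted single-spin flips, N = number of spins."""
        L = s.shape[0]
        for _ in range(s.size):
            i, j = rng.integers(0, L, size=2)  # g(u->v) = 1/N: random spin
            nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] \
               + s[i, (j + 1) % L] + s[i, (j - 1) % L]
            dE = 2.0 * s[i, j] * (J * nn + B)  # E_v - E_u for flipping s[i, j]
            if dE <= 0 or rng.random() < np.exp(-dE / kT):
                s[i, j] = -s[i, j]             # accept with probability A(u->v)
        return s

    s = rng.choice([-1, 1], size=(16, 16))     # random initial configuration
    for sweep in range(200):
        s = metropolis_sweep(s)
    print(s.mean())                            # magnetization per spin

Note that dE depends only on the flipped spin and its four neighbours, so the move need not recompute the full Hamiltonian; this locality is what makes single-spin-flip Metropolis cheap per step.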