Computational Project, 2022 PHYS3020/3920/7021 Statistical Mechanics

Introduction

In this assessment piece, you will be guided through a numerical simulation of ferromagnetic materials using the Ising model, and will investigate various thermodynamic properties with your simulation. You will start by simulating a finite line of spins whose spin direction depends only on nearest-neighbour interactions (a 1D Ising model), and will eventually simulate a 2D Ising model with periodic boundary conditions (which, in effect, approximates an 'infinite' 2D lattice of spins). You will then measure some thermodynamic properties of these systems using your simulation. Like all good physicists, we ask that you report any uncertainties or variance in your simulations, and that you compare your simulated results to expected physical quantities and phenomena.

The purpose of this assessment is not to get everything perfect, nor is it to simulate absolutely realistic models of atomic spin; that would be too difficult in a third-year undergraduate course whose focus isn't solely on computational simulations. Instead, it is to introduce you to concepts and approaches in writing and measuring physical simulations. It is also to cement some of the concepts you are studying in PHYS3020 by having you "discover" these properties yourself. Furthermore, it should convince you of how powerful simulations are in physics, as you will see how even very complicated physical phenomena can emerge from simple "rules". Finally, it will also allow you to demonstrate your insight and creativity in ways that other assessment pieces simply couldn't.

We recommend that you perform your simulations in MATLAB, a powerful scientific programming language. Support will be offered in lectures and tutorials to assist you in learning how to use the language, and there is a general MATLAB guide (including some exercises on physics simulations using MATLAB) available on the LearnX platform. You should feel free to use an alternative programming language if you are experienced with one; however, be aware that tutor/staff assistance may be more limited in this case. For all questions, you should provide any code snippets that are relevant inside the body of your answer. You should also upload your entire code as a separate m-file in your submission.

For those of you enrolled in the postgraduate version of the course (PHYS7021), this project is an opportunity to learn graduate attributes, including "the ability to work and learn independently and effectively" and "the ability to formulate and investigate problems, create solutions, innovate and improve current practices". The grade you receive will be based partly on your demonstration of these attributes.

Project structure and grading

Core project: Questions 1 and 2 form the core of the project. The idea is that you'll submit the core project in your first submission, due September 12, 2 pm, via a Blackboard submission portal. Each answer should not exceed five pages (so you may use up to ten pages total, limited to five pages per question), though you may include an appendix for additional working and code. You will be provided feedback and may, if you wish, resubmit improved answers to be remarked together with your project extension for your second submission. You may submit the entire project instead, but note that Question 3 has a ten-page limit (not five).


Open-ended extension: Question 3 extends the core of the project to more open-ended simulations. This question should be submitted with the second submission, due October 17, 2 pm, via a Blackboard submission portal. Your answer to Question 3 should not exceed ten pages, though you may include an appendix for additional working and code.

Grading and Style: The report is marked out of a total of 100%, with Questions 1 and 2 graded holistically out of 70%, and Question 3 graded out of 30%. Importantly, note that the core project marks are not evenly split; the idea is that you will attempt to write a good report first and foremost, rather than merely answering the questions asked. As such, keep in mind that the format of the submission should be report-style, similar to a lab report. If it helps, you can think of your simulation as the experiment, though of course you should provide code snippets, where relevant, to aid in completing the tasks required of you.

As always, please try to keep your work readable, relevant and concise. This is a good habit to get into as future physicists. It also makes the marking easier and quicker for the tutors! And, where relevant, don't forget to include data analysis (error bars, discussion of sources of error, error propagation and estimation, linking simulation results to, or explaining discrepancies from, theory, etc.).


Background Physics

Ferromagnets and the Ising Model

A simple model of a paramagnet is a substance consisting of magnetic dipoles whose spins may align in one of two directions (we will call these spin-up and spin-down). For these substances, there is no inherent preference for dipoles to align parallel or antiparallel to their neighbours unless an external magnetic field is applied.

However, not all substances are like paramagnets since, in the real world, dipoles are influenced by their neighbours' spin directions. Without considering the complicated physical reasons for it, let's simply introduce into our model an energy contribution to the system's Hamiltonian that depends on the relative alignment of neighbouring dipoles.

When these energy contributions cause neighbouring dipoles to align parallel to each other, we call the substance ferromagnetic (the most famous example being iron, Fe). When they cause them to align antiparallel, we call the substance antiferromagnetic (for example, nickel oxide, NiO). By merely adding some neighbouring interactions, we will see that, under certain conditions, a net, long-range magnetisation can emerge within our lattice.

Of course, iron is not ordinarily a permanent magnet because a lump of iron consists of millions of domains (microscopic regions containing billions of dipoles). The net spins of the domains align in different directions from one another, and hence the entire material has no net magnetisation. However, we can make a lump of iron a permanent magnet by heating it in an external magnetic field, then cooling it back down to room temperature. This 'permanent' magnet, of course, is not truly permanent, since heating our lump of iron above a certain temperature (called the Curie temperature, which is 1043 K for iron) will cause it to become a paramagnet.

We will model the behaviour of a ferromagnetic domain using a very simple model: the Ising model. We account only for the neighbouring interactions between dipoles, and ignore any long-range magnetic interactions between them. We will also assume a preferred axis of magnetisation, and that dipoles can only align parallel or antiparallel to this axis.

Next, we introduce some notation: let N be the total number of dipoles, and s_i be the spin of the ith dipole, where s_i = 1 signifies spin-up and s_i = -1 signifies spin-down. The energy contribution when neighbouring dipoles are aligned parallel will be -\epsilon, and when aligned antiparallel will be +\epsilon. Hence, regardless of spin direction, the energy contribution can be written as -\epsilon s_i s_j, where s_j is the spin of the jth dipole (assumed to be a neighbouring dipole).

The total energy of the system due to all nearest-neighbour interactions is thus:

U = -\epsilon \sum_{\text{neighbouring pairs } i,j} s_i s_j .    (1)

Now, to predict the thermal behaviour of the system, we should compute the partition function (in accordance with the canonical formalism):

Z = \sum_{\{s_i\}} e^{-\beta U} ,    (2)


where \beta = 1/kT, with k = 1.38 \times 10^{-23} J/K being Boltzmann's constant and T the temperature of the system. Note that this sum is over all possible sets of dipole alignments. For N dipoles, each with two possible alignments, the number of terms in the sum is 2^N. For even relatively small values of N, adding up all the terms by brute force is not practical.
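To make the 2^N growth concrete, here is a small illustrative sketch (written in Python rather than the recommended MATLAB; the function name is ours, not part of the project) that evaluates Z by brute force for a tiny periodic chain. Doubling N squares the amount of work, so N = 100 is hopeless:

```python
import itertools
import numpy as np

def brute_force_Z(N, beta, eps=1.0):
    """Sum Boltzmann factors over all 2^N configurations of a
    1D periodic Ising chain -- feasible only for small N."""
    Z = 0.0
    for spins in itertools.product([1, -1], repeat=N):
        s = np.array(spins)
        # periodic nearest-neighbour energy: U = -eps * sum_i s_i s_{i+1}
        U = -eps * np.sum(s * np.roll(s, -1))
        Z += np.exp(-beta * U)
    return Z

# For a finite periodic chain the exact result (transfer-matrix method) is
# Z = (2 cosh(beta*eps))^N + (2 sinh(beta*eps))^N; the second term becomes
# negligible as N grows, recovering the large-N result quoted later (Eq. 11).
N, beta = 10, 0.5
Z_exact = (2 * np.cosh(beta))**N + (2 * np.sinh(beta))**N
```

For N = 10 this loop runs over 1024 states in a fraction of a second; for N = 100 it would need roughly 10^30 iterations.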

Monte Carlo Simulations

Consider an Ising model with about 100 dipoles. Even the fastest computer in the world couldn't compute the probabilities of all possible states (at least, not within our lifetimes), but then maybe it isn't necessary to consider all states. Perhaps a random sample of about a million states is sufficient to compute the values we need. This is the idea behind Monte Carlo methods (named after the Casino de Monte-Carlo in Monaco): by repeatedly sampling possible states of deterministic systems at random, we can build up a sufficiently large sample to compute the values we are interested in.

This procedure (called the naive Monte Carlo method) does not work for the Ising model, however. Recall that the fundamental assumption of statistical mechanics is that, given enough time, an isolated system in a given macrostate has an equal likelihood of being in any of the microstates that produce that macrostate. However, the naive Monte Carlo method implicitly assumes that every possible microstate (drawn from every possible macrostate of our system) is equally probable (despite the fact that the macrostates themselves are not equally probable, and hence neither are the microstates, since this is a canonical ensemble and not a microcanonical one). Hence, even if we compute a billion states, this represents only a tiny fraction of the states of an Ising model with N = 100 (for which there are about 10^30 such states). Even worse, at low temperatures, where we are interested in the important (and most probable) states, which are strongly magnetised, we are likely to miss such states entirely (since, even though these states are the most probable, a billion states is simply too small a sample to ensure it is representative of the actual distribution of possible states).

A better idea is to use the Boltzmann factors themselves to weight our random distribution. For the Ising model, this can be achieved as follows. First, start with any state you like. Second, choose a dipole at random (using a uniform random distribution). Third, compute the energy difference \Delta U that would result from flipping that dipole: if \Delta U \leq 0, go ahead and flip it (thus generating the next system state); otherwise, if \Delta U > 0, decide at random whether to flip the dipole, where the probability of a flip is e^{-\beta \Delta U}. If the dipole isn't flipped, then the next state is the same as the last one; otherwise, the next state has that dipole flipped. In either case, we then choose another dipole at random (again uniformly) and repeat the process until every dipole has had many chances to be flipped. This method is called either the Metropolis algorithm or Monte Carlo summation with importance sampling.
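The steps above can be sketched as follows (a minimal Python illustration for a 1D periodic chain, in units where \epsilon = k = 1; the name metropolis_sweep is ours, not part of the project):

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_sweep(s, beta, eps=1.0):
    """One 'sweep' = N single-dipole flip attempts on a 1D periodic chain."""
    N = len(s)
    for _ in range(N):
        i = rng.integers(N)  # choose a dipole uniformly at random
        # energy change if s[i] were flipped (periodic neighbours via modulo)
        dU = 2 * eps * s[i] * (s[(i - 1) % N] + s[(i + 1) % N])
        # flip if dU <= 0; otherwise flip with probability exp(-beta * dU)
        if dU <= 0 or rng.random() < np.exp(-beta * dU):
            s[i] = -s[i]
    return s

s = rng.choice([-1, 1], size=100)      # start from any state you like
for _ in range(1000):                  # ~1000 flip chances per dipole
    s = metropolis_sweep(s, beta=2.0)  # beta*eps = 2, i.e. T = 0.5 eps/k
```

Note that \Delta U never needs the total energy: only the chosen dipole and its two neighbours enter, which is what makes each update cheap.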

The Metropolis algorithm generates a subset of states in which low-energy states are more common than high-energy ones. Indeed, it can be shown that the probability of a transition to a new state under this algorithm is identical to the probability of that transition according to the Boltzmann probabilities for the system (in other words, it avoids "clumping" of highly improbable states by "weighting" the states with more probable energy values in our random distribution). Thus, our subset of states better represents the distribution of possible macrostates of our system than that of the naive Monte Carlo method.


There is one caveat: this method will only generate states with precisely the correct Boltzmann probabilities once the algorithm has been running long enough for every state to have been generated multiple times. We want to run it for a much shorter time than this (such that almost all states are never generated at all). Under such circumstances, there is no guarantee that the subset of states actually generated accurately represents all possible system states. But then again, this is analogous to what happens in the real world, where a large system never has time to explore all its possible microstates. So, provided we give every dipole many chances to flip (say, between 10^3 and 10^5 such chances for each dipole), we can be relatively sure our simulation will represent the expected thermodynamics of a real-world system.

To mitigate any edge effects in our system (since the edges have fewer neighbouring dipoles), it is a good idea to impose periodic boundaries on our lattice. In the 1D case, this involves setting, e.g., the right neighbour of the last dipole to be the first dipole (such that the lattice effectively 'wraps around' on itself, like a closed loop). This approximates an infinite lattice, with the approximation improving as the number of dipoles increases.
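In code, the wrap-around is conveniently handled by a cyclic shift or modular indexing. For instance (a Python sketch, assuming the spins are stored as +/-1 in an array; the function name is ours):

```python
import numpy as np

def energy_1d(s, eps=1.0):
    """Total nearest-neighbour energy of a periodic 1D chain (Equation 1).
    np.roll shifts the array cyclically, so the last dipole pairs with the
    first -- exactly the 'wrap-around' boundary described above."""
    return -eps * np.sum(s * np.roll(s, -1))

s_aligned = np.ones(8)
U_aligned = energy_1d(s_aligned)  # all dipoles parallel gives U = -N * eps
```

Each neighbouring pair is counted exactly once, since every dipole is multiplied only by its right-hand neighbour.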

Relevant Equations and the Effect of Dimensionality

Although this model seems rather simple, attempting to solve it exactly can prove tedious and difficult. Hence, we present the exact results without their derivations (although these can easily be found online and in textbooks).

In one dimension, our spins are organised in a line, and each dipole has only two neighbours (one to its left and one to its right). The average energy of the system from nearest-neighbour interactions is:

\langle U \rangle = -N\epsilon \tanh(\beta\epsilon) ,    (3)

which goes to -N\epsilon as T \to 0 and goes to 0 as T \to \infty. Thus, the dipoles are all parallel at T = 0, and are randomly aligned at high temperatures. The transition to alignment is gradual and continuous, however, and so it does not exhibit the spontaneous alignment we expect of true ferromagnetic systems.

Let's turn to two dimensions now. Here, each dipole has four neighbours: one above, one below, one to the left and one to the right (we assume diagonal neighbours don't interact). Unfortunately, though an exact solution for the energy exists in two dimensions, there is no simple closed-form representation of it. Something that does emerge is the spontaneous alignment of the dipoles at a non-zero critical temperature, given by:

T_c = \frac{2\epsilon}{k \ln(1 + \sqrt{2})} \approx 2.27\,\epsilon/k .    (4)

This is the temperature at which a phase transition occurs for our 2D Ising model, and it is analogous to the Curie temperature of ferromagnetic materials.

Let us now consider the number of nearest neighbours as a function of dimensionality. Note that in three dimensions there are three different simple lattice structures we could adopt. The number of nearest neighbours (represented by n) is then:

n = \begin{cases}
2 & \text{in one dimension;}\\
4 & \text{in two dimensions (square lattice);}\\
6 & \text{in three dimensions (simple cubic lattice);}\\
8 & \text{in three dimensions (body-centred cubic lattice);}\\
12 & \text{in three dimensions (face-centred cubic lattice).}
\end{cases}    (5)

In general, our Hamiltonian for the energy is then given by:

H = -\epsilon \sum_{\langle i,j \rangle} s_i s_j - \mu_B B \sum_i s_i ,    (6)

where \langle i,j \rangle refers to a sum over all neighbouring pairs, \mu_B is the magnetic moment of each dipole, and B is the strength of the external magnetic field. Note that this is the same as Equation 1, with an added term for the energy from an external magnetic field (and it reduces to the same thing when either B or \mu_B equals 0). Note also that \langle U \rangle is given by the time-average value of H.

Further, we define the net magnetisation as:

M = \mu_B N s ,    (7)

where s = \frac{1}{N} \sum_i s_i is the average spin of our lattice. We also note that the heat capacity of our system is:1

C = \left( \frac{\partial U}{\partial T} \right)_{N,V,B} = \frac{\langle U^2 \rangle - \langle U \rangle^2}{kT^2} .    (8)
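In a simulation, the fluctuation form of Equation 8 is convenient because C can be estimated directly from the spread of the sampled energies, with no numerical derivative. A minimal Python sketch (function name ours; units with \epsilon = k = 1):

```python
import numpy as np

def heat_capacity(U_samples, T, k=1.0):
    """Estimate C from the fluctuations of sampled total energies,
    using Equation 8: C = (<U^2> - <U>^2) / (k T^2)."""
    U = np.asarray(U_samples, dtype=float)
    return (np.mean(U**2) - np.mean(U)**2) / (k * T**2)

# Usage: record the total energy after each Metropolis sweep (once the
# system has equilibrated) and pass the list of samples to this function.
```

Since the estimator is a variance, it needs many equilibrated samples to be reliable; the scatter between independent runs gives a rough error bar.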

Our entropy can be computed using the definition:

S = k \ln \Omega ,    (9)

where \Omega is the multiplicity of our system (the number of microstates corresponding to a particular macrostate). With \langle U \rangle, T and S, we can then compute the Helmholtz free energy F from the definition:

F = U - TS .    (10)

This is all the information you need to get started on the core project. As such, you don't need to look up external literature to help you (though of course you are more than welcome to). For the open-ended extension, however, we recommend you do some external reading.

1 This equality holds since we assume that our system is in equilibrium with a reservoir at temperature T; for more information, see Schroeder, An Introduction to Thermal Physics (3rd ed.), Problem 6.18.


Question 1

(Core project. Five pages maximum. Questions 1 & 2 graded together out of 70%.)

In this question, you will simulate a 1D Ising model and compare your results to exact solutions that you will derive from Z. You will also discuss the behaviours of finite and infinite 1D Ising models based on your simulations. There are five parts in this question.

(a) Code up a one-dimensional Ising model with periodic boundaries using the Metropolis algorithm for N = 100; for the time being, ignore any effects from an external magnetic field. Provide the initial and final outputs from the simulation for at least three different temperatures (we recommend T = 2.0, T = 1.0 and T = 0.5 \epsilon/k). We recommend storing your spin values in a row vector labelled s, where spin-up is 1 and spin-down is -1, and using the image command in MATLAB as follows:

image('CData',s*256)
colormap('gray')
xlim([1 length(s)])
set(gcf,'position',[200 200 600 200])

If you see a blank plot, remember that s has to be a row vector. Your output, provided your simulation is working, should look like a barcode at sufficiently high temperatures. Make sure you give every dipole many chances to flip to ensure the system has reached approximate thermodynamic equilibrium (we recommend at least 1,000 chances for every dipole to flip). What do you notice about the size of the 'chunks' of colour at low temperatures compared to at high temperatures?

Your task: Provide code which meets the requirements outlined above; provide figures of the initial and final states of the system at three different temperatures; and comment on the size of the 'chunks' you see at low vs. high temperatures.

(b) For a sufficiently large 1D Ising model, the partition function can be computed exactly. We present the result without proof:2

Z = \left( 2 \cosh(\beta\epsilon) \right)^N .    (11)

Using Equation 11, derive equations for the internal energy per dipole, u = U/N; the free energy per dipole, f = F/N; the entropy per dipole, S (i.e. the total entropy divided by N); and the specific heat capacity, c = C/N. Note that you should be using relations for a canonical ensemble. You should obtain the following results:

u = -\epsilon \tanh(\beta\epsilon) ,    (12)

f = -\epsilon - kT \ln\left( 1 + e^{-2\epsilon\beta} \right) ,    (13)

S = \frac{\epsilon}{T} \left( 1 - \tanh(\beta\epsilon) \right) + k \ln\left( 1 + e^{-2\epsilon\beta} \right) ,    (14)

c = \frac{\epsilon^2 \beta}{T \cosh^2(\beta\epsilon)} .    (15)

Your task: Provide derivations, starting from Equation 11, for u, f, S, and c.
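As a reminder, the standard canonical-ensemble relations (general results, not specific to the Ising model) that connect these quantities to Z are:

```latex
U = -\frac{\partial \ln Z}{\partial \beta}, \qquad
F = -kT \ln Z, \qquad
S = \frac{U - F}{T}, \qquad
C = \left(\frac{\partial U}{\partial T}\right)_{N,V,B}.
```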

2 A relatively simple proof can be found in Schroeder, An Introduction to Thermal Physics (3rd ed.), pg. 343.
