<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://chemwiki.ch.ic.ac.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cg1417</id>
	<title>ChemWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://chemwiki.ch.ic.ac.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cg1417"/>
	<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/wiki/Special:Contributions/Cg1417"/>
	<updated>2026-05-16T06:38:15Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796605</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796605"/>
		<updated>2019-11-20T10:44:04Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of N=3 lattice sites with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically, the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \in \mathrm{neighbours}(i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is the coupling constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product of the spins on two adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \in \mathrm{neighbours}(i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be written as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \in \mathrm{neighbours}(i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not physically adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice; this double counting is why the prefactor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; is needed. The sum therefore reduces to: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \in \mathrm{neighbours}(i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \in \mathrm{neighbours}(i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \times 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites, &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
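This result can also be checked numerically. Below is a minimal sketch (standalone code, not part of the assessed IsingLattice.py script; the function name ising_energy_1d is my own) that evaluates the double-counted sum for a periodic 1D chain of all-up spins and recovers &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

def ising_energy_1d(spins, J=1.0):
    """Halved double-counted interaction sum for a periodic 1D chain."""
    spins = np.asarray(spins)
    # each spin interacts with its left and right neighbour (periodic wrap),
    # so every pair is counted twice; the factor of 1/2 corrects for this
    pair_sum = np.sum(spins * np.roll(spins, 1)) + np.sum(spins * np.roll(spins, -1))
    return -0.5 * J * pair_sum

print(ising_energy_1d([+1, +1, +1]))  # -3.0
```

For [+1][+1][+1] this returns -3.0, matching &amp;lt;math&amp;gt;-DNJ&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;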
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \, N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of spin-up and spin-down sites. In the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \, 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbouring spins reverses sign, so each of these six interactions changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt; and the total energy of the system increases. The total energy therefore increases by &amp;lt;math&amp;gt;6 \times 2J = +12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \, 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999! \, 1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
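These values can be reproduced in a few lines of Python (an illustrative sketch, working with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt; so that entropies are in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import math

# multiplicities Omega = N!/(N_up! N_down!) expressed via the binomial coefficient
omega_before = math.comb(1000, 0)  # all spins up: 1000!/(1000! 0!) = 1
omega_after = math.comb(1000, 1)   # one spin flipped: 1000!/(999! 1!) = 1000

# entropy change in units of kB: Delta S = ln(Omega_after) - ln(Omega_before)
delta_S = math.log(omega_after) - math.log(omega_before)
print(omega_after, round(delta_S, 2))  # 1000 6.91
```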
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the Third Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. For the entropy to be zero there must be only one accessible configuration, which requires all spins to be parallel, so that the magnetisation is &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S =k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #local reference to the lattice&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of the lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through the elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #appends each spin value to the mag list&lt;br /&gt;
		return sum(mag)	#sums all spins in the mag list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #local reference to the lattice&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies each spin by the spin to its left (index -1 wraps around, giving periodic boundaries)&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies each spin by the spin above it (also wraps periodically)&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products and negates to give the total energy (J=1)&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations of the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; to evaluate a single average, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
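The arithmetic is quick to verify (a sketch; the year conversion assumes a 365-day year):&lt;br /&gt;

```python
n_configs = 2**100            # two states per spin, 100 spins
rate = 1e9                    # configurations analysed per second
seconds = n_configs / rate    # total time in seconds
years = seconds / (60 * 60 * 24 * 365)
print(f"{seconds:.3g} s, about {years:.3g} years")
```

This gives roughly &amp;lt;math&amp;gt;1.27\times 10^{21}&amp;lt;/math&amp;gt; seconds, on the order of &amp;lt;math&amp;gt;10^{13}&amp;lt;/math&amp;gt; years, compared with a universe age of roughly &amp;lt;math&amp;gt;1.4\times 10^{10}&amp;lt;/math&amp;gt; years.&lt;br /&gt;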
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts the spin if the move is rejected, otherwise the flip is kept&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
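The acceptance rule used inside montecarlostep() can also be isolated as a small standalone function (a sketch for illustration; the name accept_flip is my own and does not appear in the lab script). A move that lowers the energy is always accepted, while an uphill move is accepted only with probability exp(-deltaE/T):&lt;br /&gt;

```python
import math

def accept_flip(deltaE, T, random_number):
    """Metropolis criterion: accept if deltaE <= 0, otherwise accept
    only when the random number falls below exp(-deltaE/T)."""
    if deltaE <= 0:
        return True
    return random_number <= math.exp(-deltaE / T)

print(accept_flip(-4.0, 1.0, 0.99))  # True: downhill moves are always kept
print(accept_flip(4.0, 1.0, 0.5))    # False: exp(-4) is only ~0.018
```

This is the same logic as in montecarlostep(), where the flip is reverted whenever the random number exceeds exp(-deltaE/T).&lt;br /&gt;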
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel; this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms that, as expected, spontaneous magnetisation occurs, and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=np.multiply(np.roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left&lt;br /&gt;
		top=np.multiply(np.roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with the spin above it&lt;br /&gt;
&lt;br /&gt;
		energy = -np.sum(left+top) #sums every spin product and negates to give the total energy (J=1)&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return np.sum(self.lattice) #adds up all spins in the lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; it is therefore expected to complete the ILtimetrial.py run faster than the initial code.&lt;br /&gt;
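One way to gain confidence in the vectorised version is to check that it gives exactly the same energy as the original double loop on random lattices. The sketch below (standalone functions rather than class methods, for illustration only) compares the two implementations:&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    """Loop-based energy: each spin multiplied by its left and top neighbours."""
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j] * lat[i][j - 1]  # left neighbour (wraps periodically)
            total += lat[i][j] * lat[i - 1][j]  # top neighbour (wraps periodically)
    return -total

def energy_numpy(lat):
    """Vectorised energy using np.roll and np.multiply."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))  # random test lattice of +/-1 spins
assert energy_loops(lat) == energy_numpy(lat)  # the two versions agree
```

Rolling by +1 or -1 along an axis gives the same total, since the sum runs over every bond either way.&lt;br /&gt;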
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, which uses the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1, and the initial large drop in energy has also been overcome by this point for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps, as by this point the energy and magnetisation have largely converged, and change little afterwards, for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 11&#039;&#039;, a cut-off of 50000 steps was chosen, as the energy and magnetisation have largely converged by this point, although not as completely as they would at 100000 steps. I chose a slightly lower value to ensure that the run times of my Monte Carlo simulations in later tasks did not become excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if the move is rejected&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graph, from CG1417IsingModelGraphs.ipynb:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph in Figure 12, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-range interactions are more significant in the smaller lattices, where there are fewer of the stronger short-range interactions. As a result, I expect long-range interactions to be important in square lattices up to 4x4 in size.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
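The result can be verified numerically on a toy two-level system. This is a sketch in reduced units with kB = 1; the level energies 0 and 1 are arbitrary choices, not taken from the report.&lt;br /&gt;

```python
import numpy as np

# Hedged numerical check (not part of the original report): for a toy two-level
# system with energies 0 and eps, compare C from the fluctuation formula
# Var[E]/(kB*T^2) with the direct definition C = d<E>/dT. Reduced units, kB = 1.
def thermal_stats(T, eps=1.0):
    beta = 1.0 / T
    q = 1.0 + np.exp(-beta * eps)      # partition function
    p1 = np.exp(-beta * eps) / q       # probability of the upper level
    return p1 * eps, p1 * eps**2       # <E>, <E^2>

T = 1.5
E_avg, E2_avg = thermal_stats(T)
C_fluct = (E2_avg - E_avg**2) / T**2   # fluctuation formula

dT = 1e-5                              # central finite difference for d<E>/dT
C_deriv = (thermal_stats(T + dT)[0] - thermal_stats(T - dT)[0]) / (2 * dT)

assert abs(C_fluct - C_deriv) < 1e-8
```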
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #returns the heat capacity per spin at each temperature (reduced units, kB = 1)&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #squares each average energy&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #variance of the energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of squared temperatures&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2) #Var[E]/T^2, normalised per spin&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend in the graphs above is that the peak shifts towards lower temperatures as the lattice size increases, meaning the Curie temperature decreases with increasing lattice size. In addition, the noise around the peak grows with lattice size, which reduces the accuracy with which the maximum heat capacity and Curie temperature can be determined for the larger lattices.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
The curves produced from the C++ data are much smoother and less noisy than those from my Python code. This is likely because the C++ code uses more Monte Carlo steps per temperature, reducing the effect of random fluctuations on the averages, and a smaller temperature step, which makes the curve smoother as the points are closer together.&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;data2x2C = np.loadtxt(&amp;quot;2x2C.dat&amp;quot;) #reads data from C++ file (filename assumed, following the 16x16C.dat convention)&lt;br /&gt;
temps2x2C=data2x2C[:,0]&lt;br /&gt;
energies2x2C=data2x2C[:,1]&lt;br /&gt;
energysq2x2C=data2x2C[:,2]&lt;br /&gt;
mag2x2C=data2x2C[:,3]&lt;br /&gt;
magsq2x2C=data2x2C[:,4]&lt;br /&gt;
heatcap2x2C=data2x2C[:,5]&lt;br /&gt;
&lt;br /&gt;
#plotting C++ data against Python data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy per spin against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy per spin against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation per spin against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation per spin against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each lattice size.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script used to read the data and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial of degree 35&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 16&#039;&#039; below shows my heat capacity against temperature data for a 16x16 matrix together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree. It represents my data around the peak much more accurately, which makes it easier to determine the maximum value of the heat capacity. However, the fitted curve still does not match the peak perfectly because of the significant noise present there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039; from CG1417PolyfitScript.ipynbː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
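As an alternative to scanning linspace values for the maximum (as done later in this report), the peak of the fitted polynomial can be located analytically from the roots of its derivative. This sketch uses synthetic peak data in place of the real heat-capacity file:&lt;br /&gt;

```python
import numpy as np

# Hedged sketch: locate the maximum of a fitted polynomial analytically via the
# roots of its derivative. Synthetic peak data stand in for the real file here.
T = np.linspace(2.15, 2.55, 41)
C = -(T - 2.3)**2 + 1.8            # toy peak with a maximum at T = 2.3

fit = np.polyfit(T, C, 3)          # cubic fit, as in the script above
dfit = np.polyder(fit)             # derivative polynomial
roots = np.roots(dfit)
# keep real roots inside the fitted window, take the one with the largest C
cand = [r.real for r in roots if abs(r.imag) < 1e-9 and 2.15 <= r.real <= 2.55]
Tc = max(cand, key=lambda r: np.polyval(fit, r))
assert abs(Tc - 2.3) < 1e-5
```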
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the heat capacity is a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my extrapolation predicts. However, the difference between the two values is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and it indicates that the error in my estimates of the Curie temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller 2x2 and 4x4 lattices, where the longer-range interactions imposed by the periodic boundary conditions are more significant. These make the energies of the smaller lattices less accurate and give a larger error in the Curie temperature for those sizes, which in turn affects the accuracy of the line of best fit. To improve it, larger lattices such as 128x128 and 256x256 should be included in the fit and the smallest lattices excluded; this should allow a more accurate value of &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; to be determined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039; from CG1417PolyfitScript.py&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796604</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796604"/>
		<updated>2019-11-20T10:43:17Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an 8\times 8 lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because periodic boundary conditions are applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, so every interaction in the system is counted twice; this is why the total is halved in the energy expression. The sum becomesː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
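The &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt; result can be checked numerically for any dimension. This is a sketch, assuming spins stored as a NumPy array of ±1; rolling the lattice once along each axis counts every nearest-neighbour bond exactly once under periodic boundary conditions, provided each axis has length of at least 3.&lt;br /&gt;

```python
import numpy as np

def ising_energy(lattice, J=1.0):
    """Total Ising energy with periodic boundaries; one roll per axis counts
    each nearest-neighbour bond exactly once (axes must have length >= 3)."""
    E = 0.0
    for axis in range(lattice.ndim):
        E += -J * np.sum(lattice * np.roll(lattice, 1, axis=axis))
    return E

# the ground-state energy is -D*N*J for 1D, 2D and 3D lattices of all +1 spins
for shape in [(3,), (5, 5), (4, 4, 4)]:
    lat = np.ones(shape)
    D, N = len(shape), lat.size
    assert ising_energy(lat) == -D * N
```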
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, so for a lattice of &amp;lt;math&amp;gt;N = 100&amp;lt;/math&amp;gt; spins that are all up, &amp;lt;math&amp;gt;S = k_B ln(\frac{100!}{100! \ 0!}) = 0&amp;lt;/math&amp;gt;. (Strictly, the all-down configuration has the same energy, so the multiplicity of the lowest-energy state is 2 and &amp;lt;math&amp;gt;S = k_B ln(2)&amp;lt;/math&amp;gt;, which is negligible for large &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice, each lattice site has three unique interactions, with the neighbours to its left, top and front. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its six nearest neighbours&#039; spins becomes negative, raising the energy of each of those bonds from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;6 \times 2J = +12J&amp;lt;/math&amp;gt;, giving a new total energy of &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
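The energy and entropy changes for a single spin flip can be checked numerically. This is a hedged sketch using a roll-based energy sum that assumes periodic boundaries and reduced units with J = kB = 1.&lt;br /&gt;

```python
import numpy as np

def ising_energy(lattice, J=1.0):
    # periodic boundaries; one roll per axis counts each bond exactly once
    return -J * sum(np.sum(lattice * np.roll(lattice, 1, axis=a))
                    for a in range(lattice.ndim))

lat = np.ones((10, 10, 10))   # D = 3, N = 1000, all spins parallel
E0 = ising_energy(lat)        # lowest energy, -D*N*J = -3000
lat[0, 0, 0] = -1             # flip a single spin
dE = ising_energy(lat) - E0   # six bonds each change from -J to +J
assert E0 == -3000.0 and dE == 12.0

# entropy gain on moving from 1 configuration to 1000, in units of kB
dS = np.log(1000)
assert abs(dS - 6.91) < 0.005
```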
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. For the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; as well.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single configuration (&amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;), which means all spins must be parallel, giving a magnetisation of &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. For a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is therefore &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
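This expectation can be stated in a couple of lines of code (a sketch, with spins stored as ±1 in a NumPy array):&lt;br /&gt;

```python
import numpy as np

lat = np.ones((10, 10, 10))   # D = 3, N = 1000, all spins aligned at 0 K
M = lat.sum()                 # magnetisation is the sum of all spins
assert M == 1000              # M = +N (or -N if all spins point down)
```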
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100} \approx 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, evaluating a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt; will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt;, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
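&lt;br /&gt;
The arithmetic above can be reproduced directly (a small illustrative sketch; the age-of-universe figure of roughly &amp;lt;math&amp;gt;4.4\times 10^{17} \ s&amp;lt;/math&amp;gt; is an assumed round value):&lt;br /&gt;
&lt;br /&gt;
```python
n_configs = 2 ** 100       # configurations of 100 two-state spins
rate = 1e9                 # configurations analysed per second
seconds = n_configs / rate # time for one <M>_T evaluation
age_of_universe = 4.4e17   # seconds, approximate round value

print(f"{n_configs:.3e} configurations")      # ~1.268e+30
print(f"{seconds:.2e} s")                     # ~1.27e+21 s
print(f"{seconds / age_of_universe:.1e} universe ages")
```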
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
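&lt;br /&gt;
The acceptance logic inside montecarlostep() can be isolated into a small helper for clarity (a sketch only — the function name accept_flip and its arguments are illustrative, not part of the script above): a flip is always kept if it lowers the energy, and otherwise kept only with probability &amp;lt;math&amp;gt;exp(-\Delta E / k_B T)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
```python
from math import exp

def accept_flip(deltaE, T, random_number):
    # Metropolis criterion: always accept downhill moves; accept uphill
    # moves only when a uniform random number falls below exp(-deltaE/T)
    if deltaE <= 0:
        return True
    return random_number <= exp(-deltaE / T)

print(accept_flip(-4.0, 1.0, 0.99))  # True: energy decreases, always accepted
print(accept_flip(4.0, 1.0, 0.5))    # False: 0.5 > exp(-4) ~ 0.018
print(accept_flip(1.0, 1.0, 0.3))    # True: 0.3 < exp(-1) ~ 0.368
```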
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms that, as expected, spontaneous magnetisation occurs, and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \ s \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
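&lt;br /&gt;
The average and its uncertainty follow the usual mean and standard error of the mean; for example, with three hypothetical repeat timings (the values below are illustrative, not my actual measurements):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])  # illustrative repeat timings in seconds

mean = times.mean()
std = times.std(ddof=1)          # sample standard deviation
sem = std / np.sqrt(times.size)  # standard error of the mean

print(f"{mean:.1f} s +/- {sem:.2f} s")
```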
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left (roll wraps round the edges, giving periodic boundaries)&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with the spin below it&lt;br /&gt;
&lt;br /&gt;
		energy=-sum(left+top) #sums every nearest-neighbour spin product once and negates to give the total energy&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; it is therefore expected to run ILtimetrial.py faster than the original double-loop implementation.&lt;br /&gt;
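&lt;br /&gt;
That the vectorised energy reproduces the double loop can be checked on a random lattice (a standalone sketch; the helper names energy_loops and energy_vectorised are illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def energy_loops(lat):
    # original double-loop version: negative indices wrap round (periodic boundaries)
    E = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            E -= lat[i][j] * lat[i][j-1]  # left neighbour
            E -= lat[i][j] * lat[i-1][j]  # neighbour above
    return E

def energy_vectorised(lat):
    # np.roll shifts the lattice with wrap-around, so each element of the
    # product is a spin times one of its periodic nearest neighbours
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(42)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loops(lat) == energy_vectorised(lat))  # True
```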
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the roll, multiply and sum functions made the code much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \ s \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off to exclude from the average energies and magnetisations is the point where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off for the larger matrices will therefore be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off for the energy and magnetisation is 200 steps: this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off is 15000 steps, as by this point the energy and magnetisation have largely converged for T=1 and change little thereafter; the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 11&#039;&#039;, a cut-off of 50000 steps was chosen, as by this point the energy and magnetisation have largely converged, although not quite as fully as they would by 100000 steps. I chose this slightly lower value to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the average energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
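&lt;br /&gt;
An equivalent way to handle the equilibration period (a sketch of the idea only, not the code actually used — the helper name equilibrated_mean is illustrative) is to record every cycle and simply slice off the first N samples when averaging:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def equilibrated_mean(samples, n_skip):
    # discard the first n_skip equilibration samples, then average the rest
    return np.mean(np.asarray(samples)[n_skip:])

# hypothetical energy series: 5 equilibration cycles followed by 5 at equilibrium
E_series = [0.0, -8.0, -16.0, -24.0, -30.0, -32.0, -32.0, -32.0, -32.0, -32.0]
print(equilibrated_mean(E_series, 5))  # -32.0
```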
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, with the first 1000 steps at each temperature excluded from the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graph from CG1417IsingModelGraphs.ipynbː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to that used for the 8x8 graph in Figure 12, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-range interactions are present and more significant in the smaller lattices, where there are fewer of the stronger, short-range interactions. As a result, I expect long-range interactions to be important in square lattices up to a 4x4 size.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
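&lt;br /&gt;
The result can be checked numerically on a simple two-level system with energies 0 and 1 (an illustrative sketch in reduced units with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;, matching the simulations): the finite-difference derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; with respect to T should agree with &amp;lt;math&amp;gt;\frac{Var[E]}{T^2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

levels = np.array([0.0, 1.0])  # two-level system, reduced units (k_B = 1)

def boltzmann_stats(T):
    # Boltzmann-weighted mean and variance of the energy at temperature T
    weights = np.exp(-levels / T)
    p = weights / weights.sum()
    mean_E = np.sum(p * levels)
    var_E = np.sum(p * levels ** 2) - mean_E ** 2
    return mean_E, var_E

T = 2.0
h = 1e-6
C_fluct = boltzmann_stats(T)[1] / T ** 2  # Var[E] / (k_B T^2)
C_deriv = (boltzmann_stats(T + h)[0] - boltzmann_stats(T - h)[0]) / (2 * h)

print(abs(C_fluct - C_deriv) < 1e-6)  # True: the two expressions agree
```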
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend in the above graphs is that the peak shifts towards lower temperatures as the matrix size increases, meaning the Curie Temperature decreases with increasing matrix size. In addition, the noise around the peak grows with lattice size, which will reduce the accuracy with which the maximum heat capacity and Curie Temperature can be determined for the larger lattices.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
The curves produced from the C++ data are much smoother and less noisy than those from my Python code. This is likely because the C++ code performs more Monte Carlo steps per temperature, reducing the effect of random fluctuations on the averages, and uses a smaller temperature spacing, which makes the curve smoother as the points lie closer together.&lt;br /&gt;
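&lt;br /&gt;
The smoothing effect of more Monte Carlo steps per temperature can be illustrated directly: for uncorrelated samples the standard error of the mean falls as &amp;lt;math&amp;gt;\frac{1}{\sqrt{n}}&amp;lt;/math&amp;gt; (a sketch with synthetic Gaussian samples standing in for Monte Carlo measurements; the numbers are illustrative only):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(1)

def sem(samples):
    # standard error of the mean for uncorrelated samples
    return samples.std(ddof=1) / np.sqrt(samples.size)

short_run = rng.normal(0.0, 1.0, 1_000)   # few "Monte Carlo" samples
long_run = rng.normal(0.0, 1.0, 100_000)  # 100x more samples

print(sem(short_run) / sem(long_run))  # roughly 10: error shrinks as 1/sqrt(n)
```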
&lt;br /&gt;
Here is the source code used to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;data2x2C=np.loadtxt(&#039;2x2C.dat&#039;) #reads data from the C++ file (filename assumed by analogy with 16x16C.dat)&lt;br /&gt;
temps2x2C=data2x2C[:,0]&lt;br /&gt;
energies2x2C=data2x2C[:,1]&lt;br /&gt;
energysq2x2C=data2x2C[:,2]&lt;br /&gt;
mag2x2C=data2x2C[:,3]&lt;br /&gt;
magsq2x2C=data2x2C[:,4]&lt;br /&gt;
heatcap2x2C=data2x2C[:,5]&lt;br /&gt;
&lt;br /&gt;
#plots my Python data against the C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy per spin against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation per spin against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial of degree 35!&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 16&#039;&#039; shows my Heat Capacity against Temperature data for a 16x16 lattice together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and in particular fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 16&#039;&#039;, the new fit is significantly better even though it is only a 3rd degree polynomial. It represents my data around the peak of the graph much more accurately, which will make it easier to determine the maximum value of the Heat Capacity. However, the fitted curve still does not match the peak perfectly because of the significant amount of noise present there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt; used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the temperatures at which the Heat Capacity was a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice: spontaneous magnetisation would actually stop at a slightly lower temperature than my extrapolation predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and it implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where longer-range interactions are more significant. The longer-range interactions imposed by the periodic boundary conditions are significant for the smaller lattices, making their energies, and hence their estimated Curie Temperatures, less accurate. This affects the accuracy of the line of best fit; to improve it, larger lattice sizes of 128x128, 256x256 etc. should be included in the fit and the smaller lattices excluded, which should allow a more accurate value of &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; to be determined.&lt;br /&gt;
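The literature value above is Onsager&#039;s exact result, &amp;lt;math&amp;gt;T_{C,\infty} = \frac{2}{\ln(1+\sqrt{2})} \frac{J}{k_B}&amp;lt;/math&amp;gt;, so the comparison can be checked directly in the reduced units used here:&lt;br /&gt;

```python
import math

# Onsager's exact Curie temperature for the infinite 2D square lattice,
# in the reduced units (J = k_B = 1) used throughout this experiment
T_c_exact = 2.0 / math.log(1.0 + math.sqrt(2.0))

T_c_fit = 2.277              # extrapolated value reported above
error = T_c_fit - T_c_exact

print(round(T_c_exact, 3))                            # 2.269
print(round(100 * error / T_c_exact, 2), "percent")   # 0.34 percent overestimate
```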
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039; from CG1417PolyfitScript.py&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796603</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796603"/>
		<updated>2019-11-20T10:42:43Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of T...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact under the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
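This counting can be checked numerically with the same double-counted sum, using np.roll to supply the periodic boundary conditions (a small standalone sketch, not part of the scripts below):&lt;br /&gt;

```python
import numpy as np

J = 1.0
spins = np.array([1, 1, 1])   # the 1D N=3 lattice considered above

# sum over both neighbours of every site (each bond counted twice),
# with np.roll wrapping the ends to give periodic boundary conditions
double_counted = (np.sum(spins * np.roll(spins, 1))
                  + np.sum(spins * np.roll(spins, -1)))
E = -0.5 * J * double_counted

print(E)   # -3.0, i.e. -DNJ with D = 1, N = 3
```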
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins respectively.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt;, so in this case &amp;lt;math&amp;gt;S =  k_B \ln(\frac{100!}{100!}) = k_B \ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions, with its neighbours to its left, top and front. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign and becomes negative, which increases the total energy of the system. In 3D the flipped spin has 6 nearest neighbours, so 6 spin-spin interactions are reversed in sign; each bond&#039;s energy changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
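The figure of &amp;lt;math&amp;gt;6.91 k_B&amp;lt;/math&amp;gt; can be verified in a couple of lines:&lt;br /&gt;

```python
import math

omega_before = 1       # one fully aligned configuration
omega_after = 1000     # 1000 ways to place the single flipped spin

# entropy change in units of k_B
delta_S = math.log(omega_after) - math.log(omega_before)
print(round(delta_S, 2))   # 6.91
```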
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. To have zero entropy, all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. For all the spins to be parallel there is only one possible configuration. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system, which is longer than the age of the universe and is therefore not a practical approach.&lt;br /&gt;
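The arithmetic behind this estimate, with a conversion to years added here for scale:&lt;br /&gt;

```python
configs = 2 ** 100    # possible states of 100 two-state spins
rate = 1e9            # configurations analysed per second
seconds = configs / rate

seconds_per_year = 3.156e7
print(f"{seconds:.3g} s")                         # 1.27e+21 s
print(f"{seconds / seconds_per_year:.3g} years")  # ~4e13 years, vs ~1.4e10
                                                  # years for the universe
```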
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This demonstrates, as I expected, that spontaneous magnetisation occurs, and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2s&amp;lt;/math&amp;gt;&lt;br /&gt;
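The quoted average and error come from the repeat runs; a sketch of the calculation using the standard error of the mean (the three timings below are illustrative placeholders, not the actual values in Figure 5):&lt;br /&gt;

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])   # hypothetical repeat timings in seconds

mean = np.mean(times)
# standard error of the mean: sample standard deviation / sqrt(n)
sem = np.std(times, ddof=1) / np.sqrt(len(times))

print(f"{mean:.1f} s +/- {sem:.1f} s")
```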
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #NumPy sum collapses the whole array, giving the total of the left and top spin products&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #int_en is already a scalar, so this simply negates it to give the total energy&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; it is therefore expected to run ILtimetrial.py faster than the initial code.&lt;br /&gt;
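One way to gain confidence in the vectorised version is to check that it agrees with the original double loop on random lattices; the sketch below re-implements both as standalone functions (the function names and the 8x8 test size are illustrative, not part of the original script):&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    """Original double-loop energy: each spin interacts with its left and top
    neighbours, with Python's negative indexing giving the periodic wrap."""
    E = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            E -= lat[i][j] * lat[i][j - 1]   # left neighbour
            E -= lat[i][j] * lat[i - 1][j]   # top neighbour
    return E

def energy_vectorised(lat):
    """np.roll version: the same two unique bonds per site, no loops."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -int(np.sum(left + top))

rng = np.random.default_rng(0)
for _ in range(5):
    lat = rng.choice([-1, 1], size=(8, 8))
    assert energy_loops(lat) == energy_vectorised(lat)
print("vectorised energy matches the double loop")
```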
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after using the roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, likely because these temperatures are higher than the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will only be determined from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
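The chosen equilibration period can then be ignored inside statistics() by slicing off the first N recorded values before averaging; a minimal standalone sketch of the idea (the averages() helper and the toy data are illustrative, not the actual class code):&lt;br /&gt;

```python
import numpy as np

def averages(E, M, cutoff):
    """Mean energy and magnetisation, ignoring the first `cutoff` cycles."""
    return np.mean(E[cutoff:]), np.mean(M[cutoff:])

# toy data: 30 'equilibration' cycles followed by 70 equilibrated ones
E = [0.0] * 30 + [-2.0] * 70
M = [0.5] * 30 + [1.0] * 70

avg_E, avg_M = averages(E, M, cutoff=30)
print(avg_E, avg_M)   # -2.0 1.0 -- the equilibration bias is removed
```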
&lt;br /&gt;
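To illustrate why the T=3 and T=5 runs keep moving away from the lowest energy state, the Metropolis acceptance probability for an unfavourable flip, exp(-deltaE/T), can be evaluated directly. Below is a minimal sketch, not part of the original report, assuming reduced units with k_B = 1 and the maximum single-flip energy cost of 8J for a 2D lattice (the helper function name is hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Metropolis acceptance probability for an unfavourable spin flip with
# energy cost delta_E at temperature T (reduced units, k_B = 1).
# Hypothetical helper for illustration -- not part of the report's scripts.
def acceptance_probability(delta_E, T):
    return np.exp(-delta_E / T)

# Maximum single-flip energy cost in a 2D Ising lattice is 8J (J = 1 here).
probs = {T: acceptance_probability(8.0, T) for T in (1.0, 2.0, 5.0)}
```
At T=1 an unfavourable flip of cost 8J is accepted with probability of roughly 3x10^-4, while at T=5 this rises to about 0.2, which is why the higher-temperature runs do not settle into the ground state.&lt;br /&gt;
&lt;br /&gt;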
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps: by this point the energy and magnetisation have converged for T=1, and the initial large drop in energy for T=2 is over, even though a few small fluctuations remain after 200 steps. The result for T=3 has been included to show the large fluctuations at the higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: by this point the energy and magnetisation have largely converged for T=1 and change very little thereafter, and the same is true of the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: the energy and magnetisation have largely converged by this point, though not quite as fully as at 100000 steps. I chose the slightly lower value to keep the run times of the Monte Carlo simulations in future tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
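For reference, the averaging in statistics() then reduces to taking the mean of each stored list. Below is a minimal sketch of that step, not the report&#039;s actual method: the names E, E2, M and M2 follow the montecarlostep() snippet above, and the real method may differ in its return values:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Sketch of the averaging performed by statistics(): the lists already
# contain only values recorded after the cut-off, so plain means suffice.
def statistics(E, E2, M, M2, n_cycles):
    return np.mean(E), np.mean(E2), np.mean(M), np.mean(M2), n_cycles

avgE, avgE2, avgM, avgM2, n = statistics([-8.0, -8.0, -4.0], [64.0, 64.0, 16.0],
                                         [4.0, 4.0, 2.0], [16.0, 16.0, 4.0], 3)
```
&lt;br /&gt;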
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to that used for the 8x8 graph in Figure 12, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
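&lt;br /&gt;
The overlay for the different sizes only requires a loop over the saved .dat files, dividing each total quantity by the number of spins. Below is a minimal sketch, not from the original report, using a small in-memory stand-in with the same column layout (T, E, E^2, M, M^2) in place of np.loadtxt(&#039;8x8.dat&#039;) and the other files:&lt;br /&gt;
&lt;br /&gt;
```python
import io
import numpy as np

# Sketch of the per-size loop. The real script reads "2x2.dat", "4x4.dat",
# ... with np.loadtxt; here one in-memory stand-in file is used instead.
def fake_datafile():
    # columns: T, <E>, <E^2>, <M>, <M^2>
    return io.StringIO("0.5 -128.0 16384.0 64.0 4096.0\n"
                       "1.0 -120.0 14500.0 60.0 3700.0\n")

lattice_sizes = [8]           # the real script loops over [2, 4, 8, 16, 32]
energy_per_spin = {}
for L in lattice_sizes:
    data = np.loadtxt(fake_datafile())
    temps = data[:, 0]
    energy_per_spin[L] = data[:, 1] / L**2   # divide by number of spins L*L
    # plotting step in the real script:
    # enerax.plot(temps, energy_per_spin[L], label=str(L) + "x" + str(L))
```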
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-range fluctuations are relatively more significant in the smaller lattices, where there are fewer of the stronger, short-range interactions. As a result, I expect long-range effects to matter most in square lattices up to 4x4 in size, so a lattice of 8x8 or larger should be big enough to capture the long-range fluctuations.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, using the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
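This identity can also be checked numerically. Below is a minimal sketch, not part of the original report, using a hypothetical two-level system with energies 0 and 1 in reduced units (k_B = 1): the finite-difference derivative of the average energy with respect to T should agree with Var[E]/T^2.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Two-level system with energies 0 and eps, reduced units (k_B = 1).
def mean_energy(T, eps=1.0):
    p = np.exp(-np.array([0.0, eps]) / T)
    p /= p.sum()                      # Boltzmann probabilities
    return p[1] * eps                 # <E> = sum_i p_i * eps_i

def variance_energy(T, eps=1.0):
    p = np.exp(-np.array([0.0, eps]) / T)
    p /= p.sum()
    meanE = p[1] * eps
    meanE2 = p[1] * eps**2
    return meanE2 - meanE**2          # Var[E] = <E^2> - <E>^2

T, h = 1.5, 1e-5
C_finite_diff = (mean_energy(T + h) - mean_energy(T - h)) / (2 * h)  # d<E>/dT
C_fluctuation = variance_energy(T) / T**2                            # Var[E]/(k_B T^2)
```
The two estimates of C agree to within the finite-difference error, as the derivation above requires.&lt;br /&gt;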
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak shifts towards lower temperatures as the matrix size increases, meaning that the apparent Curie Temperature decreases with increasing matrix size. Also, as the lattice size increases the noise around the peak becomes larger, which will affect the accuracy of determining the maximum heat capacity and the Curie Temperature for the larger lattices.&lt;br /&gt;
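&lt;br /&gt;
A quick first estimate of the Curie Temperature can be read off directly as the temperature at which the heat capacity array is largest, before any polynomial fitting. Below is a minimal sketch with synthetic stand-in data in place of the real temps and heat capacity arrays:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Synthetic stand-in for a real C(T) array: a smooth peak centred at T = 2.3.
temps = np.linspace(0.5, 5.0, 226)                 # spacing of 0.02, as used above
heatcap = np.exp(-((temps - 2.3) ** 2) / 0.1)      # stand-in heat capacity curve

T_c_estimate = temps[np.argmax(heatcap)]           # temperature of the maximum
```
On real, noisy data this raw argmax is easily pulled off the true peak by a single noisy point, which is why the later sections fit a polynomial around the peak instead.&lt;br /&gt;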
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
The curves produced using the C++ data are much smoother and less noisy than those from my python code. This is likely because the C++ code uses more Monte Carlo steps per temperature, reducing the effect of random fluctuations on the averages, and a smaller temperature spacing, which makes the curve smoother as the points are closer together.&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;data2x2C=np.loadtxt(&#039;2x2C.dat&#039;) #reads data from C++ file&lt;br /&gt;
temps2x2C=data2x2C[:,0]&lt;br /&gt;
energies2x2C=data2x2C[:,1]&lt;br /&gt;
energysq2x2C=data2x2C[:,2]&lt;br /&gt;
mag2x2C=data2x2C[:,3]&lt;br /&gt;
magsq2x2C=data2x2C[:,4]&lt;br /&gt;
heatcap2x2C=data2x2C[:,5]&lt;br /&gt;
&lt;br /&gt;
#plotting the C++ data against my python data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial of degree 35!&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 16&#039;&#039; below shows the Heat Capacity against Temperature data for a 16x16 matrix together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit, despite being only 3rd degree, and represents my data around the peak much more accurately, which will make it easier to determine the maximum value of the Heat Capacity. However, the fitted curve still does not perfectly match the peak, due to the significant amount of noise present there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, obtained by finding the temperature at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually stop at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where longer-range interactions are more significant. The long-range correlations imposed by the boundary conditions are significant for the smaller sizes, making the energies of the smaller matrices less accurate and giving a larger error in the energy and in the Curie Temperature for those lattice sizes. This affects the accuracy of the line of best fit; to improve it, larger lattice sizes of 128x128, 256x256 etc. should be included in the fit and the smallest matrices excluded, which should allow a more accurate value of &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; to be determined.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;, from CG1417PolyfitScript.py:&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796600</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796600"/>
		<updated>2019-11-20T10:41:53Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of T...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically, the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because periodic boundary conditions are applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, so every interaction in the system is counted twice and the total must be halved. The sum therefore becomes: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt; lattice sites.&lt;br /&gt;
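This result can be checked with a short calculation; the sketch below (plain Python, independent of the report&#039;s code) evaluates the double sum directly for the three-site ring:&lt;br /&gt;

```python
# three-site 1D ring, all spins up; periodic boundaries via modular indexing
spins = [1, 1, 1]
J = 1.0

# double sum over each site i and its two neighbours (counts every pair twice)
pair_sum = sum(spins[i] * spins[(i + d) % 3] for i in range(3) for d in (-1, 1))

E = -0.5 * J * pair_sum  # E = -(1/2) J * 6 = -3J, matching -DNJ for D = 1, N = 3
print(E)
```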
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, and in the lowest-energy state all spins are parallel, so &amp;lt;math&amp;gt;S =  k_B ln(\frac{N!}{N!}) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;; for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses sign, which increases the total energy of the system. Although each site owns only three unique bonds in the energy bookkeeping, the flipped spin participates in six bonds (three of its own and three belonging to its neighbours), each of which changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
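These numbers can be verified with a short NumPy sketch (assuming a simple array representation with periodic boundaries via np.roll, not the report&#039;s own IsingLattice class):&lt;br /&gt;

```python
import numpy as np

def energy_3d(lat, J=1.0):
    # sum each unique bond once: pair every site with one neighbour per axis,
    # using np.roll to apply the periodic boundary conditions
    bonds = sum(np.sum(lat * np.roll(lat, 1, axis=a)) for a in range(3))
    return -J * bonds

lat = np.ones((10, 10, 10))   # ground state: all 1000 spins parallel
E0 = energy_3d(lat)           # -DNJ = -3000 for D = 3, N = 1000
lat[0, 0, 0] *= -1            # flip a single spin
dE = energy_3d(lat) - E0      # six bonds reverse sign: dE = 2 * 6 * J = +12J
dS = np.log(1000.0)           # entropy gain over k_B: ln(multiplicity) = ln(1000)
print(E0, dE, round(dS, 2))
```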
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently it is expected that the lattices will follow suit and have zero entropy at 0 K. For the entropy to be zero all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. For all the spins to be parallel, there is only one possible configuration. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system, which is longer than the age of the universe and therefore is not a practical approach.&lt;br /&gt;
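The arithmetic can be reproduced directly in Python (a throwaway sketch, not part of the experiment scripts):&lt;br /&gt;

```python
n_configs = 2 ** 100                    # two states for each of the 100 spins
rate = 1.0e9                            # configurations analysed per second
seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)  # convert to years for comparison

print(f"{seconds:.2e} s, i.e. about {years:.1e} years")
```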
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As expected, spontaneous magnetisation occurs, which indicates that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
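For reference, the mean and its error can be computed as follows; the sketch assumes three illustrative timings rather than the exact values in Figure 5:&lt;br /&gt;

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])              # illustrative repeat timings in s
mean = times.mean()
stderr = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
print(f"{mean:.1f} s +/- {stderr:.1f} s")
```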
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; it is therefore expected to run ILtimetrial.py faster than the initial code.&lt;br /&gt;
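As a sanity check, the two implementations can be compared on a random lattice; this sketch uses plain NumPy arrays rather than the IsingLattice class:&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # original approach: pair each spin with its left and top neighbour,
    # relying on negative indexing for the periodic boundaries
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j] * lat[i][j - 1] + lat[i][j] * lat[i - 1][j]
    return -total

def energy_numpy(lat):
    # vectorised: np.roll shifts the lattice so each bond is counted once per axis
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
assert energy_loops(lat) == energy_numpy(lat)
print(energy_numpy(lat))
```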
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;, roughly a thirty-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are higher than the Curie Temperature; spontaneous magnetisation then does not occur and the system does not settle into the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
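The effect of temperature on the acceptance of uphill moves can be quantified. For the 2x2 lattice, flipping one spin takes the system from the ground-state energy of -8J to 0, so the cheapest uphill move costs 8J; a sketch of the resulting Metropolis acceptance probabilities in reduced units:&lt;br /&gt;

```python
import numpy as np

dE = 8.0  # energy cost (in units where J = k_B = 1) of one flip from the 2x2 ground state
for T in [1, 2, 3, 5]:
    p = np.exp(-dE / T)  # Metropolis acceptance probability for the uphill move
    print(f"T = {T}: acceptance probability {p:.1e}")
```

The probability grows by roughly three orders of magnitude between T=1 and T=5, consistent with the loss of convergence seen at the higher temperatures.&lt;br /&gt;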
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1, and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have largely converged and will not change much beyond this point, and the same is true for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 11&#039;&#039; above, a cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not quite as fully as at 100000 steps. I chose the slightly lower value so that the run times of my Monte Carlo simulations in future tasks would not become excessive.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function determines the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation included.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
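The standard deviation used for the error bars also follows directly from the recorded averages, since the variance of the energy is the mean of the squared energy minus the square of the mean energy. A sketch with hypothetical numbers (not taken from the actual 8x8 data):&lt;br /&gt;

```python
import numpy as np

# hypothetical averages from a single temperature point of an 8x8 run (reduced units)
E_avg = -115.2    # recorded mean energy
E2_avg = 13350.0  # recorded mean squared energy

var_E = E2_avg - E_avg ** 2  # Var[E] = mean of E squared minus square of mean E
std_E = np.sqrt(var_E)       # standard deviation plotted as the error bar
print(round(float(std_E), 2))
```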
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to the one used for the 8x8 graph in Figure 12, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-range interactions are present and more significant in the smaller lattices, where there are fewer of the stronger, short-range interactions. As a result, I expect long-range interactions to be important in square lattices up to 4x4 in size.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Differentiating &amp;lt;math&amp;gt;\frac{1}{q}\frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt; using the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial \langle E \rangle}{\partial \beta} = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule, with &amp;lt;math&amp;gt;\frac{\partial \beta}{\partial T} = -\frac{1}{k_B T^2}&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
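&lt;br /&gt;
This relation can be checked numerically on a small system with known energy levels: in reduced units (k_B = 1), a finite-difference derivative of the mean energy should match Var[E]/T^2. The three energy levels below are arbitrary illustrative values.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def mean_and_var(levels, T):
    """Boltzmann average and variance of E at temperature T (k_B = 1)."""
    beta = 1.0 / T
    w = np.exp(-beta * levels)
    p = w / w.sum()                 # Boltzmann probabilities
    E = np.sum(p * levels)          # first moment of the energy
    E2 = np.sum(p * levels**2)      # second moment of the energy
    return E, E2 - E**2

levels = np.array([0.0, 1.0, 3.0])  # illustrative energy levels
T, dT = 2.0, 1e-5

# C from the definition dE/dT, by central finite difference
E_hi, _ = mean_and_var(levels, T + dT)
E_lo, _ = mean_and_var(levels, T - dT)
C_definition = (E_hi - E_lo) / (2 * dT)

# C from the fluctuation formula Var[E] / (k_B T^2)
_, varE = mean_and_var(levels, T)
C_fluctuation = varE / T**2

print(C_definition, C_fluctuation)
```
For the Ising simulation itself, this is exactly why the statistics() function records both the mean energy and the mean squared energy.&lt;br /&gt;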
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np&lt;br /&gt;
import matplotlib.pylab as pl&lt;br /&gt;
&lt;br /&gt;
def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #returns the heat capacity per spin at each temperature (reduced units, k_B = 1)&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #squares the mean energies&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #variance of the energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of squared temperatures&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2) #Var[E]/T^2, divided by the number of spins&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend in the graphs above is that the peak shifts towards lower temperatures as the matrix size increases, i.e. the estimated Curie Temperature decreases with increasing matrix size. In addition, the noise around the peak grows with lattice size, which reduces the accuracy with which the maximum heat capacity and the Curie Temperature can be determined for the larger lattices.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
The curves produced from the C++ data are much smoother and less noisy than those from my Python code. This is likely because the C++ simulations use many more Monte Carlo steps per temperature, reducing the effect of random fluctuations on the averages, and a smaller temperature step, which places the points closer together and makes the curve smoother.&lt;br /&gt;
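&lt;br /&gt;
The effect of step count on noise can be illustrated with synthetic data: the scatter of an average over n uncorrelated samples falls as 1/sqrt(n). The numbers below are mock values, not simulation output.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
# mock per-step energy samples: mean -1.8, standard deviation 0.3
samples = rng.normal(loc=-1.8, scale=0.3, size=1_000_000)

# scatter of the average over 10 independent blocks of n samples each
for n in (1_000, 10_000, 100_000):
    block_means = samples[:10 * n].reshape(10, n).mean(axis=1)
    print(n, block_means.std())
```
The scatter shrinks by roughly a factor of sqrt(10) for each tenfold increase in n, which is why the longer C++ runs give visibly smoother curves.&lt;br /&gt;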
&lt;br /&gt;
Here is the source code used to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np&lt;br /&gt;
import matplotlib.pylab as pl&lt;br /&gt;
&lt;br /&gt;
data2x2C = np.loadtxt(&amp;quot;2x2C.dat&amp;quot;) #reads the data from the C++ file&lt;br /&gt;
temps2x2C=data2x2C[:,0]&lt;br /&gt;
energies2x2C=data2x2C[:,1]&lt;br /&gt;
energysq2x2C=data2x2C[:,2]&lt;br /&gt;
mag2x2C=data2x2C[:,3]&lt;br /&gt;
magsq2x2C=data2x2C[:,4]&lt;br /&gt;
heatcap2x2C=data2x2C[:,5]&lt;br /&gt;
&lt;br /&gt;
#temps2x2, energies2x2 and mag2x2 are the Python simulation results from earlier cells; below, both datasets are plotted together&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy per spin against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation per spin against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script used to read the data and plot the fitted polynomial can be found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np&lt;br /&gt;
import matplotlib.pylab as pl&lt;br /&gt;
&lt;br /&gt;
data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial of degree 35!&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of the Heat Capacity against Temperature data for a 16x16 matrix (the C++ dataset), together with a fitted polynomial of degree 35. Even at such a high degree the polynomial fits the curve poorly, and in particular fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15 to 2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new fit is significantly better despite being only a 3rd-degree polynomial: it represents the data around the peak much more accurately, which makes it easier to determine the maximum value of the Heat Capacity. The fitted curve still does not pass perfectly through the peak, however, because of the significant amount of noise present there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np&lt;br /&gt;
import matplotlib.pylab as pl&lt;br /&gt;
&lt;br /&gt;
data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a plot of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (the reciprocal of the lattice side length), used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, i.e. the temperature at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of the Curie Temperature for each lattice size against 1/Lattice Size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate predicts. However, the difference between the two values is only 0.008, which is remarkably small, and the level of agreement is somewhat surprising; it indicates that the error in my estimate of the Curie Temperature for each lattice size is relatively small.&lt;br /&gt;
&lt;br /&gt;
The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smallest lattice sizes, 2x2 and 4x4, where the periodic boundary conditions impose artificial long-range correlations. These finite-size effects make the energies, and hence the Curie Temperatures, of the smallest matrices the least accurate, which in turn limits the accuracy of the line of best fit. To improve the extrapolation, larger lattice sizes (128x128, 256x256, etc.) should be included in the fit and the smallest matrices excluded; this should give a more accurate value of &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np&lt;br /&gt;
import matplotlib.pylab as pl&lt;br /&gt;
&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for the 64x64 matrix - already done for the other sizes&lt;br /&gt;
Tmax64x64 = peak_T_range64[np.argmax(fitted_C_values64)] #finds the temperature at which the fitted C is a maximum&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
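&lt;br /&gt;
The linear fit above gives only a point estimate of the intercept. One way to attach an uncertainty to the extrapolated Curie Temperature is the cov=True option of np.polyfit, sketched below with illustrative Curie Temperature values rather than my measured ones.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Illustrative per-lattice Curie temperature estimates (not the measured data)
L = np.array([2, 4, 8, 16, 32, 64])
Tc = np.array([2.45, 2.40, 2.34, 2.31, 2.29, 2.28])

# Fit T_C(L) = A/L + T_C(inf); cov=True also returns the parameter covariance
coeffs, cov = np.polyfit(1.0 / L, Tc, 1, cov=True)
A, Tc_inf = coeffs
Tc_inf_err = np.sqrt(cov[1, 1])  # standard error of the intercept

print(Tc_inf, Tc_inf_err)
```
The square root of the intercept variance gives a rough error bar on the extrapolated value, which would make the comparison with the Onsager result more quantitative.&lt;br /&gt;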
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796598</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796598"/>
		<updated>2019-11-20T10:39:19Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means every interaction in the system is counted twice; this double counting is the reason for the factor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; in the energy expression. The sum therefore becomesː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;. In the lowest energy state all N spins are parallel, so for a given spin orientation &amp;lt;math&amp;gt;\Omega = \frac{N!}{N!} = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S =  k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins becomes negative, which increases the total energy of the system. Although only three bonds per site are assigned in the unique-bond counting above, the flipped spin participates in &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; bonds in total, and each of these changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
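&lt;br /&gt;
The numerical value is simple to verify, using the SI value of the Boltzmann constantː&lt;br /&gt;
&lt;br /&gt;
```python
import math

k_B = 1.380649e-23        # Boltzmann constant in J/K
dS_in_kB = math.log(1000) # ln(1000), approximately 6.91
dS = k_B * dS_in_kB       # entropy gain in J/K
print(dS_in_kB, dS)
```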
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single configuration (&amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;), which means all spins must be parallel, giving a magnetisation of &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt; at absolute zero, the expected magnetisation is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #product with the spin to the left (index -1 wraps to the last column, giving periodic boundaries)&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #product with the spin above (index -1 wraps to the last row)&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top bond products&lt;br /&gt;
		energy=-sum(int_en) #sums all bond products and negates to give the total energy (J=1)&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
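&lt;br /&gt;
For reference only (this is not the submitted code), the same bond counting can be vectorised with np.roll, which also makes the periodic boundary conditions explicit. The sketch assumes the lattice is a NumPy array of +1/-1 spins.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def energy_vec(lattice, J=1.0):
    """Total Ising energy with periodic boundaries, counting each bond once."""
    lat = np.asarray(lattice)
    right = np.roll(lat, -1, axis=1)  # right-hand neighbour of every site
    down = np.roll(lat, -1, axis=0)   # neighbour below every site
    return -J * np.sum(lat * right + lat * down)

lat = np.ones((4, 4))
print(energy_vec(lat))  # lowest energy: -D*N*J = -2*16*1 = -32.0
```
Because np.roll shifts the whole array at once, this version avoids the explicit double loop and runs in a handful of NumPy operations.&lt;br /&gt;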
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. Since &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, a computer analysing &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second would take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; (about &amp;lt;math&amp;gt;4\times 10^{13}&amp;lt;/math&amp;gt; years, far longer than the age of the universe) to evaluate a single average, which is clearly not a practical approach.&lt;br /&gt;
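&lt;br /&gt;
The arithmetic can be checked in a couple of linesː&lt;br /&gt;
&lt;br /&gt;
```python
configs = 2**100                        # configurations of 100 two-state spins
seconds = configs / 1e9                 # at 1e9 configurations per second
years = seconds / (365.25 * 24 * 3600)  # convert seconds to years
print(seconds, years)
```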
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel; this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another, demonstrating, as I expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
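A minimal sketch of how such an average and its error can be computed (the three timings below are illustrative placeholder values chosen to be consistent with the reported result, not my actual measurements):&lt;br /&gt;

```python
import numpy as np

# Illustrative repeat timings in seconds (placeholder values, not the real data)
times = np.array([24.1, 24.3, 24.5])

mean = times.mean()                            # average run time
sem = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
print(f"{mean:.1f} s +/- {sem:.1f} s")
```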
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=left+top #array containing the sum of the left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(sum(int_en)) #sums over the array to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; it is therefore expected to run ILtimetrial.py faster than the initial code developed.&lt;br /&gt;
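As a sanity check (a sketch, not part of the submitted code), the vectorised energy can be compared against the original double-loop version on a random lattice; both count each nearest-neighbour bond exactly once under periodic boundary conditions:&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # Reference double-loop energy: each site multiplies its left and top
    # neighbours (negative indices wrap, giving periodic boundaries).
    rows, cols = lat.shape
    total = 0
    for i in range(rows):
        for j in range(cols):
            total += lat[i][j] * lat[i][j - 1]  # left neighbour
            total += lat[i][j] * lat[i - 1][j]  # top neighbour
    return -total

def energy_vector(lat):
    # Vectorised version using np.roll and np.multiply, as in the accelerated code.
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
assert energy_loops(lat) == energy_vector(lat)
```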
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
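From the two reported average times, the speedup factor is roughly:&lt;br /&gt;

```python
# Speedup of the vectorised code over the original, from the two reported times.
slow = 24.3   # s per 2000 steps, loop version
fast = 0.790  # s per 2000 steps, NumPy version
speedup = slow / fast
print(f"speedup = {speedup:.0f}x")  # roughly a 31x speedup
```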
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py for 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are above the Curie Temperature, in which case spontaneous magnetisation will not occur and the system will not converge to the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
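This temperature dependence can be illustrated by evaluating the Metropolis acceptance probability &amp;lt;math&amp;gt;e^{-\Delta E / T}&amp;lt;/math&amp;gt; for an unfavourable flip (a sketch in the reduced units used here, with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;; the choice &amp;lt;math&amp;gt;\Delta E = 8&amp;lt;/math&amp;gt;, the cost of flipping one spin against four aligned neighbours with J=1, is an assumption for illustration):&lt;br /&gt;

```python
import numpy as np

# Acceptance probability of an unfavourable spin flip at several temperatures.
# deltaE = 8 corresponds to flipping one spin against four aligned neighbours
# with J = 1 (an assumption for illustration).
deltaE = 8.0
probs = {T: float(np.exp(-deltaE / T)) for T in (1.0, 2.0, 3.0, 5.0)}
for T, p in probs.items():
    print(f"T={T}: p_accept = {p:.4f}")
```

Higher temperatures give acceptance probabilities closer to one, so unfavourable moves away from the ground state are accepted far more often, consistent with the non-converging T=3 and T=5 runs.&lt;br /&gt;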
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energy and magnetisations is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain after 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1, and the initial large drop in energy has also been overcome by this point for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have largely converged and change little thereafter, and the same is true for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; shows that a suitable cut-off is 50000 steps, as the energy and magnetisation have largely converged by this point, though not as fully as they would at 100000 steps. I chose a slightly lower value so that the run times of my Monte Carlo simulations in future tasks would not be excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is for the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps at each temperature, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph above in Figure 12, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-range interactions are present and more significant in the smaller lattices, where there are fewer of the stronger, short-range interactions. As a result, I expect long-range interactions to be important in square lattices up to a 4x4 size.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
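The result can be verified numerically on a simple two-level system (a sketch with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;; the energies 0 and &amp;lt;math&amp;gt;\epsilon = 1&amp;lt;/math&amp;gt; are arbitrary choices), comparing &amp;lt;math&amp;gt;\frac{Var[E]}{T^2}&amp;lt;/math&amp;gt; against a finite-difference derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

# Two-level system with energies 0 and eps (k_B = 1, eps chosen arbitrarily).
eps = 1.0
levels = np.array([0.0, eps])

def avg_E(T):
    # Boltzmann-weighted average energy <E>.
    p = np.exp(-levels / T)
    p /= p.sum()
    return p @ levels

def var_E(T):
    # Variance of the energy: <E^2> - <E>^2.
    p = np.exp(-levels / T)
    p /= p.sum()
    return p @ levels**2 - (p @ levels) ** 2

T, dT = 0.7, 1e-5
C_derivative = (avg_E(T + dT) - avg_E(T - dT)) / (2 * dT)  # C = d<E>/dT
C_variance = var_E(T) / T**2                               # C = Var[E]/T^2
assert abs(C_derivative - C_variance) < 1e-6
```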
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the graphs above is that the peak shifts towards lower temperatures as the matrix size increases, which means the apparent Curie Temperature decreases as matrix size increases. Also, as lattice size increases the noise around the peak becomes larger, which will affect the accuracy of determining the maximum heat capacity and Curie Temperature for the larger lattices.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
The curves produced using the C++ data are much smoother and have less noise than those from my Python code. This is likely because the C++ code uses more Monte Carlo steps per temperature, reducing the effect of random fluctuations on the averages, and a smaller temperature step, which makes the curve smoother as the points are closer together.&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;data2x2C=np.loadtxt(&#039;2x2C.dat&#039;) #reads data from the C++ file&lt;br /&gt;
temps2x2C=data2x2C[:,0]&lt;br /&gt;
energies2x2C=data2x2C[:,1]&lt;br /&gt;
energysq2x2C=data2x2C[:,2]&lt;br /&gt;
mag2x2C=data2x2C[:,3]&lt;br /&gt;
magsq2x2C=data2x2C[:,4]&lt;br /&gt;
heatcap2x2C=data2x2C[:,5]&lt;br /&gt;
&lt;br /&gt;
#plotting C++ data against my data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial of degree 35&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new fit is significantly better even though it is only a 3rd-degree polynomial: it represents the data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity. However, the fitted curve still does not pass exactly through the peak because of the significant noise present there.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model Lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between the two values is only 0.008, which is remarkably small; this level of agreement is somewhat surprising and suggests that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals and deviation from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller lattice sizes (2x2 and 4x4), where the longer-range interactions imposed by the periodic boundary conditions are most significant. These interactions are far less significant for the larger lattices, so the energies of the smaller matrices, and hence their Curie Temperatures, carry larger errors. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes (128x128, 256x256, etc.) should be included in the fit and the smaller matrices excluded.&lt;br /&gt;
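As a sanity check on the comparison above (a short sketch; the variable names are illustrative, and `tc_estimate` simply restates my fitted value), Onsager&#039;s exact result can be evaluated directly:&lt;br /&gt;

```python
import math

# Onsager's exact Curie temperature for the infinite 2D square lattice,
# in reduced units of J/k_B: T_C = 2 / ln(1 + sqrt(2))
tc_exact = 2 / math.log(1 + math.sqrt(2))
tc_estimate = 2.277  # value extrapolated from the linear fit above

print(round(tc_exact, 3))                # 2.269
print(round(tc_estimate - tc_exact, 3))  # 0.008
```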
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796596</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796596"/>
		<updated>2019-11-20T10:38:19Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an e...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice; this double counting is why the prefactor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; appears in the energy expression. The sum therefore becomes: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites, &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;; in the ground state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N!} = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
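The counting above can be checked numerically; a minimal sketch (the `multiplicity` helper is illustrative, not part of the experiment scripts):&lt;br /&gt;

```python
import math

def multiplicity(n_up, n_down):
    """Number of distinct arrangements of n_up up-spins and n_down down-spins."""
    n = n_up + n_down
    return math.factorial(n) // (math.factorial(n_up) * math.factorial(n_down))

# All-up ground state of the N = 3 example: a single arrangement
omega = multiplicity(3, 0)
entropy = math.log(omega)  # S in units of k_B
print(omega, entropy)      # 1 0.0
```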
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions, with its neighbours to its left, top and front. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign and becomes negative, which increases the total energy of the system. In 3D each spin interacts with &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours, so 6 spin-spin interactions change sign, each contributing a change of &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt;; the total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
&lt;br /&gt;
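The entropy gain can be reproduced in a few lines (a sketch assuming only the Python standard library; `math.comb` counts the configurations with one flipped spin):&lt;br /&gt;

```python
import math

N = 1000
omega_before = 1               # all spins parallel: a single configuration
omega_after = math.comb(N, 1)  # one flipped spin: N possible positions

delta_S = math.log(omega_after) - math.log(omega_before)  # in units of k_B
print(round(delta_S, 2))  # 6.91
```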
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. For the entropy to be zero, all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;; there is only one configuration in which all spins are parallel. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt; then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system. This is longer than the age of the universe, so the approach is not practical.&lt;br /&gt;
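This estimate is quick to verify (a sketch; the age of the universe in seconds is an assumed round figure):&lt;br /&gt;

```python
num_configs = 2 ** 100  # two states per spin, 100 spins
rate = 1e9              # configurations analysed per second
seconds = num_configs / rate

age_of_universe_s = 4.35e17  # assumed: ~13.8 billion years in seconds
print(f"{seconds:.2e} s")                  # 1.27e+21 s
print(round(seconds / age_of_universe_s))  # roughly 2914 universe lifetimes
```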
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which also shows that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; it is therefore expected to run ILtimetrial.py faster than the initial code.&lt;br /&gt;
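A quick consistency check (a sketch; the two function names are illustrative) confirms that the vectorised version reproduces the original double loop exactly, since `roll(lattice, 1, axis=1)[i][j]` is `lattice[i][j-1]` with periodic wrap-around:&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # original double-loop version: left and top neighbours,
    # negative indices wrap around (periodic boundary conditions)
    total = 0
    rows, cols = lat.shape
    for i in range(rows):
        for j in range(cols):
            total += lat[i][j] * lat[i][j - 1]  # left neighbour
            total += lat[i][j] * lat[i - 1][j]  # neighbour above
    return -total

def energy_vectorised(lat):
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

lat = np.random.choice([-1, 1], size=(8, 8))
assert energy_loops(lat) == energy_vectorised(lat)
```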
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, which uses the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py for 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not converge to the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
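One way to implement the cut-off (an illustrative sketch rather than my exact class code; here the recorded lists are passed in and the first `cutoff` samples are simply sliced off before averaging):&lt;br /&gt;

```python
import numpy as np

CUTOFF = 30  # cycles to discard for the 2x2 lattice, chosen from the graphs above

def statistics(E, E2, M, M2, n_cycles, cutoff=CUTOFF):
    """Average the recorded quantities, ignoring the first `cutoff` cycles."""
    e = np.mean(E[cutoff:])
    e2 = np.mean(E2[cutoff:])
    m = np.mean(M[cutoff:])
    m2 = np.mean(M2[cutoff:])
    return e, e2, m, m2, n_cycles
```

The same slicing works for the larger matrices by raising `cutoff` to whatever equilibration period their graphs indicate.&lt;br /&gt;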
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point is 200 steps: by this point the energy and magnetisation have converged for T=1, and the initial large drop in energy for T=2 is complete, even though a few small fluctuations remain after 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from the T=1 and T=2 graphs only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy for T=2 is also complete.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have largely converged and change little afterwards, and the same is true for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, although not quite as fully as at 100000 steps. I chose the slightly lower value so that the run times of the Monte Carlo simulations in later tasks would not become prohibitively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# performs a single Metropolis Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #accept an unfavourable flip only with Boltzmann probability&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature; the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
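The averaging can be sketched in isolation (a minimal illustration with synthetic data standing in for the real simulation output, so all numbers here are assumptions): the first cut-off steps are discarded, and the mean and standard deviation are taken over the remaining samples.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "energy per cycle" trace: an initial decay towards equilibrium
# followed by noisy fluctuations about a mean of -2.0 (all values assumed)
cycles = np.arange(5000)
trace = -2.0 + 1.5 * np.exp(-cycles / 200.0) + 0.05 * rng.standard_normal(5000)

cutoff = 1000  # equilibration period to discard
equilibrated = trace[cutoff:]

mean_E = np.mean(equilibrated)  # value reported for this temperature
std_E = np.std(equilibrated)    # size of the error bar
print(mean_E, std_E)
```

The standard deviation of the equilibrated samples is what the error bars in &#039;&#039;Figure 12&#039;&#039; represent.&lt;br /&gt;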
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph in &#039;&#039;Figure 12&#039;&#039;, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-range interactions are present, and are relatively more significant, in the smaller lattices, where there are fewer of the stronger short-range interactions. As a result, I expect long-range interactions to be important in square lattices up to 4x4 in size.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
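This identity is easy to check numerically (a quick illustration with arbitrary sample values, unrelated to the simulation data):&lt;br /&gt;

```python
import numpy as np

# Var[X] equals the mean of the square minus the square of the mean
x = np.array([1.0, 2.0, 2.0, 3.0, 5.0])

var_direct = np.var(x)                        # NumPy's population variance
var_identity = np.mean(x**2) - np.mean(x)**2  # same quantity via the identity

print(var_direct, var_identity)
```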
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, applying the chain rule with &amp;lt;math&amp;gt;\beta = \frac{1}{k_BT}&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
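As a sanity check, the result can be verified numerically for a toy two-level system (a hypothetical example in reduced units with k_B = 1): the heat capacity computed from the variance should match a finite-difference derivative of the average energy with respect to T.&lt;br /&gt;

```python
import numpy as np

def averages(energies, T):
    # Boltzmann averages of E and E**2 at temperature T (k_B = 1)
    w = np.exp(-energies / T)
    p = w / w.sum()
    return np.sum(p * energies), np.sum(p * energies**2)

levels = np.array([0.0, 1.0])  # toy two-level system
T = 1.2

E_avg, E2_avg = averages(levels, T)
C_var = (E2_avg - E_avg**2) / T**2  # C = Var[E] / (k_B * T**2)

# finite-difference estimate of the derivative of E_avg with respect to T
h = 1e-5
E_plus, _ = averages(levels, T + h)
E_minus, _ = averages(levels, T - h)
C_fd = (E_plus - E_minus) / (2 * h)

print(C_var, C_fd)  # the two estimates should agree closely
```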
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #heat capacity per spin for each temperature, C = Var[E]/(k_B T^2) with k_B = 1 in reduced units&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend in the above graphs is that the peak shifts towards lower temperatures as the matrix size increases, meaning the Curie Temperature decreases with increasing lattice size. Also, as the lattice size increases the noise around the peak becomes larger, which reduces the accuracy with which the maximum heat capacity and the Curie Temperature can be determined for the larger lattices.&lt;br /&gt;
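The peak position can be read off programmatically with np.argmax; here is a brief sketch on a synthetic heat-capacity curve (hypothetical data, not one of the saved .dat files):&lt;br /&gt;

```python
import numpy as np

# Synthetic heat-capacity curve peaked at T = 2.3 (hypothetical data)
T = np.linspace(0.5, 5.0, 226)              # spacing of 0.02, as in the simulations
C = 1.0 / (1.0 + 25.0 * (T - 2.3)**2)       # Lorentzian-shaped peak

T_peak = T[np.argmax(C)]  # temperature at which C is a maximum
print(T_peak)
```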
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
The curves produced using the C++ data are much smoother and have less noise than those from my Python code. This is likely because the C++ code uses more Monte Carlo steps per temperature, reducing the effect of random fluctuations on the averages, and a smaller temperature spacing, which makes the curve smoother as the points are closer together.&lt;br /&gt;
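The first point can be made quantitative: the standard error of a Monte Carlo average falls roughly as 1/sqrt(N) with the number of independent samples N. A small sketch with synthetic, uncorrelated fluctuations (an idealisation, since successive Monte Carlo steps are in fact correlated):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5  # spread of a single synthetic measurement (assumed value)

def standard_error_of_mean(n_samples, n_trials=1000):
    # Empirical spread of the sample mean over many repeated synthetic runs
    means = rng.standard_normal((n_trials, n_samples)).mean(axis=1) * sigma
    return means.std()

se_small = standard_error_of_mean(100)
se_large = standard_error_of_mean(2500)
print(se_small, se_large, se_small / se_large)
```

Increasing the number of samples by a factor of 25 shrinks the scatter of the average by a factor of about 5, which is why the longer C++ runs look so much smoother.&lt;br /&gt;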
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;data2x2C=np.loadtxt(&#039;2x2C.dat&#039;) #reads data from C++ file (filename assumed, by analogy with 16x16C.dat)&lt;br /&gt;
temps2x2C=data2x2C[:,0]&lt;br /&gt;
energies2x2C=data2x2C[:,1]&lt;br /&gt;
energysq2x2C=data2x2C[:,2]&lt;br /&gt;
mag2x2C=data2x2C[:,3]&lt;br /&gt;
magsq2x2C=data2x2C[:,4]&lt;br /&gt;
heatcap2x2C=data2x2C[:,5]&lt;br /&gt;
&lt;br /&gt;
#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy per spin against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation per spin against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial of degree 35!&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 16&#039;&#039; shows the Heat Capacity against Temperature data for a 16x16 matrix together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much narrower range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree; it represents the data around the peak much more accurately, which makes it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
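The improvement can be demonstrated on synthetic data (a hypothetical sharply peaked curve, not the real 16x16 data): a cubic fitted only to a window around the peak tracks the peak far better than a cubic fitted to the whole range.&lt;br /&gt;

```python
import numpy as np

# Synthetic sharply peaked "heat capacity" curve (hypothetical data)
T = np.linspace(0.5, 5.0, 500)
C = 1.0 / (1.0 + 60.0 * (T - 2.3)**2)

# Cubic fitted over the whole temperature range
full_fit = np.polyfit(T, C, 3)

# Cubic fitted only to a window of 41 points around the peak
peak = np.argmax(C)
sel = slice(peak - 20, peak + 21)
local_fit = np.polyfit(T[sel], C[sel], 3)

# Maximum residual of each fit within the peak window
err_full = np.max(np.abs(np.polyval(full_fit, T[sel]) - C[sel]))
err_local = np.max(np.abs(np.polyval(local_fit, T[sel]) - C[sel]))
print(err_full, err_local)
```

On this synthetic curve the windowed cubic has a far smaller maximum residual near the peak than the full-range cubic.&lt;br /&gt;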
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the raw data, i.e. the temperature at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice, meaning that spontaneous magnetisation would actually cease at a slightly lower temperature than my fit suggests. However, the difference between my value and the literature value is only 0.008, which is very small; this level of agreement is somewhat surprising, and implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smallest lattice sizes, 2x2 and 4x4, where finite-size effects imposed by the periodic boundary conditions are most significant. These effects are far less significant for the larger lattices; for the smaller matrices they make the energies, and hence the estimated Curie Temperatures, less accurate. This limits the accuracy of the line of best fit, and to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit while the smallest matrices are excluded.&lt;br /&gt;
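The fitting procedure itself can be sanity-checked on synthetic data generated from the assumed scaling form &amp;lt;math&amp;gt;T_{C,L} = T_{C,\infty} + \frac{A}{L}&amp;lt;/math&amp;gt; (hypothetical parameters, with the intercept chosen near the Onsager value): a first-degree polyfit in 1/L should recover the intercept.&lt;br /&gt;

```python
import numpy as np

# Synthetic Curie temperatures following T_C(L) = T_inf + A / L
# (hypothetical parameters; T_inf is set near the Onsager value)
T_inf_true = 2.269
A_true = 1.0
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
Tc = T_inf_true + A_true / L

# Linear fit of T_C against 1/L; the intercept estimates T_inf
slope, intercept = np.polyfit(1.0 / L, Tc, 1)
print(intercept)
```

With noiseless input the intercept recovers 2.269 to floating-point precision; with real, noisy peak positions the intercept inherits the scatter of the individual &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; estimates.&lt;br /&gt;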
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796592</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796592"/>
		<updated>2019-11-20T10:37:27Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: T, E, E^2, M, M^2, C (the final fi&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy configuration all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses sign, and each of those bond energies changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, a change of &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt; per bond. In 3D the flipped spin has &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours, so the total energy increases by &amp;lt;math&amp;gt;6 \times 2J = +12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
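The entropy change can be checked numerically. A short sketch, using math.lgamma to evaluate the factorials in the multiplicity and working in units where &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt; (the helper name is my own):

```python
from math import lgamma, log

def ln_multiplicity(n_total, n_up):
    # ln(Omega) = ln( n_total! / (n_up! (n_total - n_up)!) ) via log-gamma,
    # which avoids forming the huge factorials directly
    return lgamma(n_total + 1) - lgamma(n_up + 1) - lgamma(n_total - n_up + 1)

# Ground state: all 1000 spins up, Omega = 1, so S = 0
s_ground = ln_multiplicity(1000, 1000)
# One spin flipped: Omega = 1000, so S = ln(1000) in units of k_B
s_flipped = ln_multiplicity(1000, 999)

delta_s = s_flipped - s_ground
print(round(delta_s, 2))  # 6.91, matching k_B ln(1000)
```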
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the Third Law of thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires all of the spins to be parallel, such that the magnitude of the magnetisation &amp;lt;math&amp;gt;|M| = N&amp;lt;/math&amp;gt;, and for a given spin direction there is only one such configuration. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #combines the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system. This is roughly 3000 times the age of the universe, so exhaustive enumeration is not a practical approach.&lt;br /&gt;
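The arithmetic can be verified directly; the age-of-the-universe figure used for comparison (about 4.4e17 s) is an approximate value I am assuming here:

```python
n_configs = 2 ** 100              # configurations of 100 two-state spins
rate = 1e9                        # configurations analysed per second
seconds = n_configs / rate        # time for one exhaustive evaluation of <M>_T

age_of_universe = 4.4e17          # seconds, approximate (assumed value)
print(f"{seconds:.2e} s")         # prints 1.27e+21 s
print(f"{seconds / age_of_universe:.0f} universe lifetimes")
```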
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
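The acceptance rule inside montecarlostep() is the standard Metropolis criterion: the script reverts the flip when deltaE &gt; 0 and random_number &gt; exp(-deltaE/T), which is equivalent to accepting downhill moves always and uphill moves with probability exp(-ΔE/T). A self-contained sketch of just that rule, with helper names of my own choosing:

```python
import numpy as np

def metropolis_accept(delta_e, T, rng):
    """Return True if a trial flip with energy change delta_e is accepted."""
    if delta_e <= 0:
        return True  # moves that lower (or keep) the energy are always accepted
    return rng.random() < np.exp(-delta_e / T)  # uphill: Boltzmann probability

rng = np.random.default_rng(0)
# An uphill move of 8 (units of k_B) is almost never accepted at T=1...
low_t = sum(metropolis_accept(8.0, 1.0, rng) for _ in range(10000))
# ...but is accepted frequently at T=10
high_t = sum(metropolis_accept(8.0, 10.0, rng) for _ in range(10000))
print(low_t, high_t)
```

Writing the rule as an acceptance test rather than a rejection test changes nothing about the sampled distribution; it is the same condition negated.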
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms that, as I expected, spontaneous magnetisation occurs, and that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=left+top #array containing the sum of the left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #sums every element to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; because the vectorised operations run in compiled code rather than interpreted Python loops, ILtimetrial.py is expected to run much faster than with the initial code.&lt;br /&gt;
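The equivalence of the vectorised energy to the double loop can be checked on a small random lattice. This standalone sketch reproduces both versions outside the class; note it rolls the rows with shift +1 rather than the -1 used in the script, but under periodic boundary conditions the total is the same either way:

```python
import numpy as np

def energy_loops(lat):
    # Double loop: each site multiplied by its left and upper neighbour
    # (negative indices give the periodic wrap-around for free in Python)
    e = 0
    rows, cols = lat.shape
    for i in range(rows):
        for j in range(cols):
            e += lat[i, j] * lat[i, j - 1]   # left neighbour
            e += lat[i, j] * lat[i - 1, j]   # neighbour above
    return -e

def energy_vectorised(lat):
    # roll shifts the whole lattice, so each element is paired with
    # its periodic neighbour in one array operation
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(42)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loops(lat), energy_vectorised(lat))  # identical values
```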
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py for 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
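The role of the Boltzmann factor can be made quantitative. For a 2x2 lattice, flipping one spin out of the fully aligned state reverses four bond terms, costing ΔE = 8 in units where J = k_B = 1 (a worked assumption, not output from the script); the acceptance probability exp(-ΔE/T) for that move grows steeply with temperature:

```python
import numpy as np

delta_e = 8.0  # energy cost of one flip from the 2x2 ground state (J = k_B = 1)
for T in [1.0, 2.0, 3.0, 5.0]:
    # Probability that the Metropolis test accepts this uphill move at T
    print(T, np.exp(-delta_e / T))
```

At T=1 the move is essentially never accepted, while at T=5 it is accepted about one time in five, which is why the high-temperature runs keep wandering away from the ground state.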
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at the higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, by which the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have largely converged and change little thereafter, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 11&#039;&#039; above, a cut-off of 50000 steps was chosen, as by this point the energy and magnetisation have largely converged, though not as fully as they would by 100000 steps. I chose a slightly lower value deliberately, so that the run times of my Monte Carlo simulations in future tasks would not become excessive.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off contribute to the averages of the energy, energy squared, magnetisation and magnetisation squared returned by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph in Figure 12, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-range interactions are present and are more significant in the smaller lattices, where there are fewer strong short-range interactions. As a result, I expect long-range interactions to be important in square lattices up to 4x4 in size.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, applying the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
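The identity can be checked numerically on a toy two-level system (energies 0 and 1, with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt; as in our reduced units). This check is an added illustration, not part of the Ising code: the variance formula and a direct numerical derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; should agree.&lt;br /&gt;

```python
import numpy as np

# Numerical check of C = Var[E] / (k_B T^2) on a toy two-level system
# (energies 0 and 1, k_B = 1); an added illustration, not the Ising code.
eps = np.array([0.0, 1.0])

def averages(T):
    boltz = np.exp(-eps / T)
    p = boltz / boltz.sum()                     # Boltzmann probabilities p_i
    return (p * eps).sum(), (p * eps**2).sum()  # <E> and <E^2>

T = 1.5
E_avg, E2_avg = averages(T)
C_var = (E2_avg - E_avg**2) / T**2              # variance formula for C

dT = 1e-5                                       # direct derivative d<E>/dT
C_num = (averages(T + dT)[0] - averages(T - dT)[0]) / (2 * dT)
```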
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np&lt;br /&gt;
import pylab as pl&lt;br /&gt;
&lt;br /&gt;
def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #returns the heat capacity per spin at each temperature (k_B = 1 in reduced units)&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #array of squared average energies, &amp;lt;E&amp;gt;^2&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #variance of the energy, &amp;lt;E^2&amp;gt; - &amp;lt;E&amp;gt;^2&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of squared temperatures&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend across the graphs above is that the peak shifts towards lower temperature as the lattice size increases, meaning the estimated Curie Temperature decreases with increasing lattice size. In addition, the noise around the peak grows with lattice size, which reduces the accuracy with which the maximum heat capacity and the Curie Temperature can be determined for the larger lattices.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
The curves produced from the C++ data are much smoother and less noisy than those from my Python code. This is likely because the C++ code uses far more Monte Carlo steps per temperature, reducing the effect of random fluctuations on the averages, and also uses a smaller temperature step, so the points are closer together and the curve appears smoother.&lt;br /&gt;
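The first effect can be illustrated with a small synthetic example (assumed Gaussian noise, not simulation data): the spread of a mean of n samples falls as 1/sqrt(n), so averaging over 100 times more Monte Carlo steps reduces the noise in each point by roughly a factor of 10.&lt;br /&gt;

```python
import numpy as np

# Illustration of why longer runs give smoother curves (assumed Gaussian
# noise, not simulation output): the spread of a mean of n samples falls
# as 1/sqrt(n), so 100x more steps per average means ~10x less noise.
rng = np.random.default_rng(0)

def spread_of_means(n_samples, n_repeats=500):
    # standard deviation of the mean of n_samples values, over many repeats
    means = rng.normal(0.0, 1.0, size=(n_repeats, n_samples)).mean(axis=1)
    return means.std()

s_small = spread_of_means(100)     # few "steps" per average
s_large = spread_of_means(10000)   # 100x more "steps" per average
```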
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;data2x2C = np.loadtxt(&amp;quot;2x2C.dat&amp;quot;) #reads data from C++ file (filename assumed to follow the 16x16C.dat pattern)&lt;br /&gt;
temps2x2C=data2x2C[:,0]&lt;br /&gt;
energies2x2C=data2x2C[:,1]&lt;br /&gt;
energysq2x2C=data2x2C[:,2]&lt;br /&gt;
mag2x2C=data2x2C[:,3]&lt;br /&gt;
magsq2x2C=data2x2C[:,4]&lt;br /&gt;
heatcap2x2C=data2x2C[:,5]&lt;br /&gt;
&lt;br /&gt;
#plotting the C++ data against my Python data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy per spin against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy per spin against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation per spin against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation per spin against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new fit is significantly better even though the polynomial is only 3rd degree. It represents my data around the peak much more accurately, which makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (where L is the lattice side length), used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, obtained by finding the temperature at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with the literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is incredibly small; this level of agreement is somewhat surprising and indicates that the error in my estimates of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where long-range correlations imposed by the periodic boundary conditions are most significant. These finite-size effects are far less important for the larger lattices, so the energies of the smaller matrices carry a larger error, which propagates into the Curie Temperature for those lattice sizes. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
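The extrapolation procedure itself can be sketched with synthetic data. Here the constant A = 1.0 and the exact intercept 2.269 are illustrative inputs used to generate the data, not values fitted from my results: the point is that a straight-line fit in 1/L recovers the infinite-lattice limit as its intercept.&lt;br /&gt;

```python
import numpy as np

# Sketch of the finite-size scaling extrapolation T_C(L) = T_C,inf + A/L.
# A = 1.0 and T_C,inf = 2.269 generate synthetic data here; they are
# illustrative inputs, not results fitted from my simulations.
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
T_C = 2.269 + 1.0 / L                           # synthetic Curie temperatures

slope, intercept = np.polyfit(1.0 / L, T_C, 1)  # straight line in 1/L
# the intercept (the 1/L -> 0 limit) recovers the infinite-lattice T_C
```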
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796585</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796585"/>
		<updated>2019-11-20T10:34:16Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: T, E, E^2, M, M^2, C (the final...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied to the lattice.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
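The wrap-around interaction between sites 1 and 3 maps naturally onto Python's negative indexing, which is how the lattice code later in this report handles the periodic boundary without any special cases. A minimal illustration:&lt;br /&gt;

```python
# Negative list indices wrap to the end of the list, which implements the
# periodic boundary condition described above without special cases.
row = [+1, -1, +1]           # a 1D lattice with N = 3 spins
left_of_first = row[0 - 1]   # index -1 wraps to the last site, row[2]
```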
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, so for the lowest-energy state of a 100-spin lattice with all spins up, &amp;lt;math&amp;gt;S =  k_B ln(\frac{100!}{100!}) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses sign, which increases the total energy of the system. In a 3D lattice each spin has &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours, and each of those 6 bonds changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;6 \times 2J = +12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
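The quoted entropy gain can be checked with a one-line calculation (in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import math

# Check of the entropy gain quoted above, in units of k_B:
# the multiplicity goes from 1 (all spins aligned) to 1000 (one flip).
delta_S = math.log(1000) - math.log(1)   # S / k_B = ln(Omega)
```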
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single configuration (&amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;), which means all spins must be parallel and the magnetisation takes its maximum magnitude, &amp;lt;math&amp;gt;|M| = N&amp;lt;/math&amp;gt;. So for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt; at absolute zero, I would expect &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #stores a reference to the current lattice&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #stores a reference to the current lattice&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #combines the lists of spin products from the left and top bonds&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
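As a quick sanity check on the logic above, the same loops can be run as a standalone function (outside the class) on an all-spin-up 3x3 lattice, for which &amp;lt;math&amp;gt;E = -DNJ = -18&amp;lt;/math&amp;gt; (with D = 2, N = 9, J = 1) and &amp;lt;math&amp;gt;M = +9&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

# Standalone sanity check of the energy logic above (written as a plain
# function rather than a class method): an all-spin-up 3x3 lattice
# (D = 2, N = 9, J = 1) should give E = -DNJ = -18 and M = +9.
def energy(lat):
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            # left and top neighbour products; negative indices (j-1 = -1)
            # give the periodic boundary wrap-around for free
            total += lat[i][j] * lat[i][j - 1] + lat[i][j] * lat[i - 1][j]
    return -total

lat = np.ones((3, 3))   # all spins +1
E = energy(lat)         # -18
M = lat.sum()           # +9
```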
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py script was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt;possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system, which is longer than the age of the universe and therefore is not a practical approach.&lt;br /&gt;
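The arithmetic behind this estimate:&lt;br /&gt;

```python
# The arithmetic behind the estimate above.
configs = 2**100                        # configurations of 100 spins
rate = 1e9                              # configurations analysed per second
seconds = configs / rate                # about 1.27e21 seconds
years = seconds / (365.25 * 24 * 3600)  # about 4e13 years
```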
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected; otherwise the flip is kept&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
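One possible refinement (not required by the task, and purely a sketch): montecarlostep() above calls self.energy() on the whole lattice twice per step, but with nearest-neighbour coupling the energy change of a single flip depends only on the flipped spin and its four neighbours. The standalone function below illustrates this, assuming a plain NumPy array of ±1 spins with periodic boundaries and &amp;lt;math&amp;gt;J = k_B = 1&amp;lt;/math&amp;gt;ː&lt;br /&gt;

```python
import numpy as np

def delta_E(lattice, i, j):
    """Energy change (units of J = kB = 1) of flipping spin (i, j);
    with periodic boundaries only the four neighbours matter."""
    n_rows, n_cols = lattice.shape
    s = lattice[i, j]
    neighbours = (lattice[(i - 1) % n_rows, j] + lattice[(i + 1) % n_rows, j]
                  + lattice[i, (j - 1) % n_cols] + lattice[i, (j + 1) % n_cols])
    # the bonds of site (i, j) contribute -s*neighbours to E before the flip
    # and +s*neighbours after it, so the change is 2*s*neighbours
    return 2 * s * neighbours

lattice = np.ones((4, 4), dtype=int)   # fully aligned lattice
print(delta_E(lattice, 0, 0))          # flipping an aligned spin costs 8
```

A local &amp;lt;math&amp;gt;\Delta E&amp;lt;/math&amp;gt; like this avoids recomputing the full lattice energy at every step, although the vectorised version developed in Section 4 is already fast enough for this exercise.&lt;br /&gt;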
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, indicating that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with its horizontal neighbour (periodic boundaries)&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with its vertical neighbour&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(sum(left+top)) #sums every nearest-neighbour bond exactly once and negates to give the total energy&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions removes the need for explicit loops and makes the code significantly shorter; it is therefore expected to run ILtimetrial.py faster than the initial code developed.&lt;br /&gt;
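A quick sanity check is to confirm that the vectorised energy agrees with a naive double loop on random lattices; the two standalone functions below mirror, rather than reuse, the IsingLattice methodsː&lt;br /&gt;

```python
import numpy as np

def energy_loops(lattice):
    """Naive double-loop energy with periodic boundaries (J = 1)."""
    n_rows, n_cols = lattice.shape
    total = 0
    for i in range(n_rows):
        for j in range(n_cols):
            # count each bond once: right and down neighbours only
            total += lattice[i, j] * lattice[i, (j + 1) % n_cols]
            total += lattice[i, j] * lattice[(i + 1) % n_rows, j]
    return -total

def energy_vectorised(lattice):
    """Same quantity via np.roll/np.multiply, as in the accelerated code."""
    right = np.multiply(np.roll(lattice, 1, axis=1), lattice)
    down = np.multiply(np.roll(lattice, -1, axis=0), lattice)
    return -np.sum(right + down)

rng = np.random.default_rng(0)
for _ in range(5):
    lat = rng.choice([-1, 1], size=(8, 8))
    assert energy_loops(lat) == energy_vectorised(lat)
print("vectorised energy matches the double loop")
```

Both versions count each nearest-neighbour bond exactly once over the periodic lattice, which is why only two rolls are needed rather than four.&lt;br /&gt;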
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off to exclude from the average energies and magnetisations is the point at which the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off for the larger matrices will therefore be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
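The temperature dependence described above comes directly from the Metropolis acceptance factor &amp;lt;math&amp;gt;exp(-\Delta E / T)&amp;lt;/math&amp;gt;; the short sketch below evaluates it for an unfavourable move with &amp;lt;math&amp;gt;\Delta E = 4&amp;lt;/math&amp;gt; (in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;) at the temperatures used in &#039;&#039;Figure 7&#039;&#039;ː&lt;br /&gt;

```python
import numpy as np

# Metropolis acceptance probability exp(-deltaE/T) for an unfavourable
# move with deltaE = 4 (energies in units of kB, J = 1)
deltaE = 4
for T in (1, 2, 3, 5):
    p = np.exp(-deltaE / T)
    print(f"T = {T}: accept with probability {p:.3f}")
```

At T = 1 such a move is accepted roughly 2% of the time, while at T = 5 it is accepted almost half the time, which is why the high-temperature runs keep escaping the ordered state.&lt;br /&gt;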
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off for the energy and magnetisation is 200 steps, as this is after the point where both have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at the higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off is 15000 steps, as by this point the energy and magnetisation have largely converged for T=1 and will not change much; the same is true for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 11&#039;&#039;, a cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not quite as fully as they would by 100000 steps. I chose the slightly lower value to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected; otherwise the flip is kept&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
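For reference, the standard deviations added to the .dat file follow from the recorded moments, since the variance is the mean square minus the squared mean; a minimal sketch of that calculation (the function name is illustrative, not part of ILtemperaturerange.py)ː&lt;br /&gt;

```python
import numpy as np

def std_from_moments(mean_x, mean_x2):
    """Standard deviation from the mean and mean square of a quantity;
    tiny negative rounding errors are clipped before the square root."""
    var = np.clip(np.asarray(mean_x2) - np.asarray(mean_x) ** 2, 0, None)
    return np.sqrt(var)

samples = np.array([1.0, 3.0, 3.0, 5.0])
print(std_from_moments(samples.mean(), (samples ** 2).mean()))  # matches np.std(samples)
```

Note that this is the population standard deviation of the recorded samples; because successive Monte Carlo steps are correlated, it is best treated as an indicative error bar rather than a rigorous standard error.&lt;br /&gt;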
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to the one used for the 8x8 graph in Figure 12 above, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-range interactions are present and proportionally more significant in the smaller lattices, where there are fewer of the stronger short-range interactions. As a result, I expect long-range interactions to be important in square lattices up to a 4x4 size.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product ruleː &amp;lt;math&amp;gt;-\frac{\partial \langle E \rangle}{\partial \beta} = \frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = \langle E^2 \rangle - \langle E \rangle^2 = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
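The identity can be checked numerically for a simple two-level system with levels 0 and &amp;lt;math&amp;gt;\epsilon&amp;lt;/math&amp;gt; (taking &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;, as elsewhere in this report): the fluctuation formula and a direct numerical derivative of the average energy should agree. This is a verification sketch, not part of the lab scriptsː&lt;br /&gt;

```python
import numpy as np

def heat_capacity_two_level(eps, T, dT=1e-5):
    """Heat capacity of a two-level system (levels 0 and eps, kB = 1),
    computed two ways: Var[E]/T^2 and a numerical derivative of <E>."""
    def mean_E(T):
        q = 1 + np.exp(-eps / T)            # partition function
        return eps * np.exp(-eps / T) / q

    def mean_E2(T):
        q = 1 + np.exp(-eps / T)
        return eps ** 2 * np.exp(-eps / T) / q

    var_E = mean_E2(T) - mean_E(T) ** 2
    C_fluct = var_E / T ** 2                                # Var[E]/(kB T^2)
    C_deriv = (mean_E(T + dT) - mean_E(T - dT)) / (2 * dT)  # d<E>/dT
    return C_fluct, C_deriv

C_fluct, C_deriv = heat_capacity_two_level(eps=1.0, T=0.8)
print(C_fluct, C_deriv)
```

The two routes agree to within the accuracy of the finite difference, confirming &amp;lt;math&amp;gt;C = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt; for this system.&lt;br /&gt;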
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend in the graphs above is that the peak shifts towards lower temperatures as the matrix size increases, meaning the estimated Curie Temperature decreases with increasing matrix size. In addition, as the lattice size increases the noise around the peak grows, which will reduce the accuracy with which the maximum heat capacity and Curie Temperature can be determined for the larger lattices.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;data2x2C=np.loadtxt(&#039;2x2C.dat&#039;) #reads data from the C++ file (filename assumed to follow the 16x16C.dat convention)&lt;br /&gt;
temps2x2C=data2x2C[:,0]&lt;br /&gt;
energies2x2C=data2x2C[:,1]&lt;br /&gt;
energysq2x2C=data2x2C[:,2]&lt;br /&gt;
mag2x2C=data2x2C[:,3]&lt;br /&gt;
magsq2x2C=data2x2C[:,4]&lt;br /&gt;
heatcap2x2C=data2x2C[:,5]&lt;br /&gt;
&lt;br /&gt;
#plots the C++ data against my own data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 lattice, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly overall and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much narrower temperature range (T = 2.15-2.55) and of much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree: it represents the data around the peak far more accurately, which makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
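&lt;br /&gt;
The benefit of restricting the fit window can be sketched on synthetic data; the peaked curve, window and degree below are illustrative assumptions, not my actual 16x16 data.&lt;br /&gt;

```python
import numpy as np

# Synthetic heat-capacity-like curve with a sharp peak at T = 2.3
# (an illustrative stand-in for the real 16x16C.dat data)
T = np.linspace(2.1, 2.5, 81)          # narrow window around the peak
C = 1.0 / ((T - 2.3) ** 2 + 0.01)      # sharply peaked test function

# low-degree fit restricted to the peak region, as in the report
coeffs = np.polyfit(T, C, 3)

# locate the maximum of the fitted polynomial on a fine grid
T_fine = np.linspace(2.1, 2.5, 1000)
T_peak = T_fine[np.argmax(np.polyval(coeffs, T_fine))]
```

Even a cubic recovers the peak position accurately once the fit is confined to the peak region.&lt;br /&gt;
&lt;br /&gt;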
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, i.e. the temperature at which the Heat Capacity is a maximum for each lattice, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate predicts. However, the difference between the two values is only 0.008, which is remarkably small; this level of agreement is somewhat surprising and suggests that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the longer-range interactions imposed by the periodic boundary conditions are most significant. These interactions are far less significant for the larger lattices, so the energies, and hence the Curie Temperatures, of the smaller lattices carry a larger error. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes of 128x128, 256x256 etc. should be included in the fit and the smallest lattices excluded.&lt;br /&gt;
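&lt;br /&gt;
As a sanity check of the scaling analysis, the intercept extraction can be sketched with made-up &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; values that follow the scaling relation exactly; the numbers below are assumptions for illustration, not my measured data.&lt;br /&gt;

```python
import numpy as np

# hypothetical Curie temperatures obeying T_C(L) = T_C(inf) + A/L exactly,
# with T_C(inf) = 2.269 and A = 1.0 chosen purely for illustration
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
Tc = 2.269 + 1.0 / L

# degree-1 polyfit against 1/L: np.polyfit returns [slope, intercept],
# and the intercept is the infinite-lattice estimate
slope, intercept = np.polyfit(1.0 / L, Tc, 1)
```

On noise-free data the fit recovers the intercept exactly; with real data the smallest lattices pull the line away from the true value, as discussed above.&lt;br /&gt;
&lt;br /&gt;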
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796582</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796582"/>
		<updated>2019-11-20T10:32:51Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, \mathrm{V...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as:&lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtained: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
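&lt;br /&gt;
The double-counted sum can also be checked numerically; this is a minimal sketch of the three-site periodic example above.&lt;br /&gt;

```python
# evaluate E = -(J/2) * sum_i sum_{j in neighbours(i)} s_i s_j directly
# for the 1D periodic lattice [+1, +1, +1]
J = 1.0
spins = [+1, +1, +1]
N = len(spins)

total = 0
for i in range(N):
    # left and right neighbours, with periodic boundary conditions
    for j in ((i - 1) % N, (i + 1) % N):
        total += spins[i] * spins[j]

E = -0.5 * J * total  # each pair is counted twice, hence the factor of 1/2
```

This reproduces &amp;lt;math&amp;gt;E = -3J = -DNJ&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;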
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \, n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. For the lowest energy configuration all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \, 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site can be assigned three unique interactions with its neighbours to its left, top and front (so that no bond is counted twice). In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses sign and becomes negative, which increases the total energy of the system. The flipped spin has 6 nearest neighbours in 3D (2 per dimension), so 6 spin-spin interactions reverse in sign; each changes its energy contribution from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
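&lt;br /&gt;
These multiplicities and the entropy gain can be reproduced directly; a quick sketch, working in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import math

# multiplicity Omega = N! / (n_up! * n_down!) before and after one flip
# in the N = 1000 lattice
N = 1000
omega_before = math.factorial(N) // (math.factorial(N) * math.factorial(0))
omega_after = math.factorial(N) // (math.factorial(N - 1) * math.factorial(1))

# entropy change in units of k_B: dS = ln(Omega_after) - ln(Omega_before)
dS = math.log(omega_after) - math.log(omega_before)
```

This gives &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; rising from 1 to 1000 and &amp;lt;math&amp;gt;\Delta S \approx 6.91 k_B&amp;lt;/math&amp;gt;, matching the value above.&lt;br /&gt;
&lt;br /&gt;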
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; as well.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. For the entropy to be zero all spins must be parallel, so that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;; for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the expected magnetisation is therefore &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt; (all spins up or all spins down). With all spins parallel there is only one such configuration, so the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
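&lt;br /&gt;
A standalone version of the double-loop energy() can be checked against the analytic minimum &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt; (here &amp;lt;math&amp;gt;D = 2&amp;lt;/math&amp;gt;); the 4x4 lattice below is an assumed test case, not part of the experiment script.&lt;br /&gt;

```python
import numpy as np

def loop_energy(lat):
    # double-loop lattice energy with J = 1; Python negative indices wrap
    # around, which gives the periodic boundary conditions for free
    n_rows, n_cols = lat.shape
    total = 0
    for i in range(n_rows):
        for j in range(n_cols):
            total += lat[i][j] * lat[i][j - 1]   # spin to the left
            total += lat[i][j] * lat[i - 1][j]   # spin above
    return -total

all_up = np.ones((4, 4), dtype=int)
E_min = loop_energy(all_up)   # expect -DNJ = -2 * 16 * 1 = -32
```

Flipping a single spin in this 2D lattice raises the energy by &amp;lt;math&amp;gt;2 \times 2D \times J = +8J&amp;lt;/math&amp;gt;, which makes a useful extra check.&lt;br /&gt;
&lt;br /&gt;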
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt;possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system, which is longer than the age of the universe and therefore is not a practical approach.&lt;br /&gt;
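&lt;br /&gt;
The arithmetic behind this estimate can be sketched quickly; the &amp;lt;math&amp;gt;10^9&amp;lt;/math&amp;gt; configurations-per-second rate is the generous assumption from the task.&lt;br /&gt;

```python
# brute-force cost of averaging over every configuration of 100 spins
n_configs = 2 ** 100              # two states per spin
rate = 1e9                        # configurations per second (assumed)

seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)
# seconds ~ 1.27e21, i.e. ~4e13 years, vastly longer than the age
# of the universe (~1.4e10 years)
```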
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis test: uphill moves survive with probability exp(-deltaE/T)&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
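&lt;br /&gt;
The acceptance logic inside montecarlostep() can be isolated as a small helper; this is a sketch of the Metropolis rule, not the exact code above (which flips first and reverts on rejection).&lt;br /&gt;

```python
import numpy as np

def accept(deltaE, T, random_number):
    # Metropolis criterion: downhill moves are always accepted; uphill
    # moves are accepted with probability exp(-deltaE / T) (energy in k_B)
    if deltaE <= 0:
        return True
    return bool(random_number <= np.exp(-deltaE / T))

# exp(-4/1) ~ 0.018, so an uphill move of +4 at T = 1 is usually rejected
downhill = accept(-4.0, 1.0, 0.99)   # always accepted
unlucky = accept(+4.0, 1.0, 0.99)    # rejected: draw above exp(-4)
lucky = accept(+4.0, 1.0, 0.001)     # accepted: draw below exp(-4)
```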
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This demonstrates, as I expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
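&lt;br /&gt;
A quick consistency check (a sketch using an assumed random 8x8 lattice) confirms that the vectorised roll/multiply version agrees with the original double loop:&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # original-style double loop; negative indices wrap, giving PBC
    total = 0
    n_rows, n_cols = lat.shape
    for i in range(n_rows):
        for j in range(n_cols):
            total += lat[i, j] * lat[i, j - 1] + lat[i, j] * lat[i - 1, j]
    return -total

def energy_fast(lat):
    # vectorised: rolling by one along each axis pairs every spin with
    # one left neighbour and one upper neighbour exactly once
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(42)
lat = rng.choice(np.array([-1, 1]), size=(8, 8))
agree = energy_loops(lat) == energy_fast(lat)   # holds for any lattice
```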
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; it is therefore expected to run ILtimetrial.py faster than the initial code.&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;, a roughly 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point where the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature: spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
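&lt;br /&gt;
The growing significance of the Boltzmann factor with temperature can be made concrete; the uphill move of &amp;lt;math&amp;gt;\Delta E = +4&amp;lt;/math&amp;gt; (in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;) below is an illustrative choice, not a value from my simulations.&lt;br /&gt;

```python
import numpy as np

# Boltzmann factor exp(-deltaE / T) for a fixed uphill move at several T
deltaE = 4.0
temps = np.array([1.0, 2.0, 3.0, 5.0])
p_accept = np.exp(-deltaE / temps)
# roughly [0.018, 0.135, 0.264, 0.449]: uphill moves are accepted far
# more often at high T, so the system keeps leaving the ground state
```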
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation have largely converged and change little, for both the T=1 and the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not as fully as they would by 100000 steps. The slightly lower value was chosen deliberately so that the Monte Carlo simulations in later tasks would not become excessively time consuming.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off contribute to the averages of the energy, energy squared, magnetisation and magnetisation squared computed by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
    energy = self.energy() #energy of the current lattice&lt;br /&gt;
&lt;br /&gt;
    #select the coordinates of a random spin&lt;br /&gt;
    random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
    random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
    #random number in the range [0,1) for the Metropolis test&lt;br /&gt;
    random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
    self.lattice[random_i][random_j] *= -1 #flips the chosen spin&lt;br /&gt;
    energy2 = self.energy() #energy of the flipped lattice&lt;br /&gt;
    deltaE = energy2 - energy #change in energy caused by the flip&lt;br /&gt;
&lt;br /&gt;
    #Metropolis criterion: reject the flip if deltaE &amp;gt; 0 and the&lt;br /&gt;
    #random number exceeds the Boltzmann factor&lt;br /&gt;
    if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
        self.lattice[random_i][random_j] *= -1 #reverts the spin&lt;br /&gt;
&lt;br /&gt;
    if self.n_cycles &amp;gt; 50000: #only records E, E2, M and M2 above the cut-off&lt;br /&gt;
        self.E.append(self.energy())&lt;br /&gt;
        self.E2.append(self.energy()**2)&lt;br /&gt;
        self.M.append(self.magnetisation())&lt;br /&gt;
        self.M2.append(self.magnetisation()**2)&lt;br /&gt;
    self.n_cycles += 1&lt;br /&gt;
&lt;br /&gt;
    return (self.energy(), self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
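&lt;br /&gt;
The cut-off for each lattice size was read off the plots by eye. As a rough cross-check, the choice can also be automated; the sketch below is a minimal illustration (the function name, window and tolerance are chosen here, not taken from the lab scripts) that returns the first step after which a windowed mean of the energy trace stays close to its final value.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def estimate_cutoff(energies, window=100, tol=0.01):
    """Return the first step after which the mean of each `window`-step
    block of the energy trace stays within a fraction `tol` of the final mean."""
    energies = np.asarray(energies, dtype=float)
    final = energies[-window:].mean()
    for start in range(0, len(energies) - window, window):
        if abs(energies[start:start + window].mean() - final) <= tol * abs(final):
            return start
    return len(energies) - window

# synthetic trace: a decay towards -2 followed by a converged tail
trace = np.concatenate([np.linspace(0.0, -2.0, 500), np.full(2000, -2.0)])
cutoff = estimate_cutoff(trace)
```
Applied to the stored energy list of a real run, this gives a step count that can be compared against the cut-offs chosen from the figures above.&lt;br /&gt;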
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
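&lt;br /&gt;
The two extra standard-deviation columns follow from the identity &amp;lt;math&amp;gt;\sigma_E = \sqrt{\langle E^2 \rangle - \langle E \rangle^2}&amp;lt;/math&amp;gt;, using the averages already produced by the statistics() function. A minimal sketch of that calculation, with synthetic stand-in samples rather than the real post-cut-off energies:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=-110.0, scale=4.0, size=5000)  # stand-in for the recorded energies

avE = samples.mean()           # <E>
avE2 = (samples**2).mean()     # <E^2>
stdE = np.sqrt(avE2 - avE**2)  # sqrt(Var[E]), the value written to the extra column
```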
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to that used for the 8x8 graph in Figure 12 above, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Long-range interactions are present and relatively more significant in the smaller lattices, where there are fewer of the stronger, short-range interactions. As a result, I expect long-range interactions to be important in square lattices up to 4x4 in size.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product rule (and recalling that &amp;lt;math&amp;gt;\langle E \rangle = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;)ː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
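&lt;br /&gt;
This result can be checked numerically: for any small set of energy levels, the heat capacity from the variance formula should match a finite-difference derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; with respect to temperature. A minimal check on a two-level toy system in reduced units (&amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def thermal_stats(eps, T):
    """Boltzmann-weighted mean and variance of the level energies (k_B = 1)."""
    w = np.exp(-eps / T)
    p = w / w.sum()
    E_mean = np.sum(p * eps)
    E_var = np.sum(p * eps**2) - E_mean**2
    return E_mean, E_var

eps = np.array([0.0, 1.0])  # two-level toy system
T, dT = 0.7, 1e-5

_, varE = thermal_stats(eps, T)
C_var = varE / T**2  # C = Var[E] / (k_B T^2)

E_hi, _ = thermal_stats(eps, T + dT)
E_lo, _ = thermal_stats(eps, T - dT)
C_deriv = (E_hi - E_lo) / (2 * dT)  # C = d<E>/dT by central difference
```
The two estimates agree to within the finite-difference error, confirming the derivation.&lt;br /&gt;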
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak shifts towards lower temperature as the matrix size increases, meaning the estimated Curie Temperature decreases with increasing lattice size. The noise around the peak also grows with lattice size, which will affect the accuracy of determining the maximum heat capacity and hence the Curie Temperature for the larger lattices.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 16&#039;&#039; shows my Heat Capacity against Temperature data for a 16x16 matrix with a fitted polynomial of degree 35 plotted against it. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
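&lt;br /&gt;
Part of the difficulty is numerical rather than physical: np.polyfit on raw powers of T is severely ill-conditioned at degree 35 (NumPy typically raises a RankWarning). A sketch of a better-conditioned alternative, np.polynomial.Polynomial.fit, which rescales the fitting domain internally; the peak-shaped toy data below merely stands in for the real 16x16C.dat.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) / 0.6)**2)  # peak-shaped toy curve standing in for C(T)

# Polynomial.fit maps T onto [-1, 1] before fitting, which keeps high-degree
# least-squares fits far better conditioned than np.polyfit on raw T
poly = np.polynomial.Polynomial.fit(T, C, deg=20)
C_fit = poly(T)
```
With the real data, swapping np.polyfit for Polynomial.fit leaves the rest of the plotting unchanged, since the returned object is directly callable on the temperature array.&lt;br /&gt;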
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new fit is significantly better despite being only a 3rd degree polynomial; it represents the data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
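&lt;br /&gt;
As an alternative to scanning np.polyval over a fine linspace grid (the approach used in the next task), the peak of a low-degree fit can be located analytically from the roots of its derivative. A minimal sketch on toy peak data (the four points below are made up for illustration; with the real data, peak_T_values16 and peak_C_values16 would be used instead):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# toy data around a peak
T_peak = np.array([2.2, 2.3, 2.4, 2.5])
C_peak = np.array([1.0, 1.4, 1.3, 0.9])

coeffs = np.polyfit(T_peak, C_peak, 3)      # cubic fit, as in the script
roots = np.roots(np.polyder(coeffs))        # stationary points of the fitted cubic
real_roots = roots[np.isreal(roots)].real
inside = real_roots[(real_roots > T_peak[0]) & (real_roots < T_peak[-1])]
T_c = inside[np.argmax(np.polyval(coeffs, inside))]  # stationary point with largest C
```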
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (the inverse lattice side length), used to determine the Curie Temperature of the infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the raw data, obtained from the temperature at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice, so spontaneous magnetisation would actually cease at a slightly lower temperature than my extrapolation predicts. However, the difference between the two values is only 0.008, which is remarkably small; the level of agreement is somewhat surprising and implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smallest lattices, 2x2 and 4x4, where the finite-size effects imposed by the periodic boundary conditions are most significant. These effects are far less important for the larger lattices, and they make the energies, and hence the Curie Temperature estimates, of the smaller matrices less accurate. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796574</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796574"/>
		<updated>2019-11-20T10:30:18Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* Section 6 - The effect of system size */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact under the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction within the system is counted twice; this is why the factor of one half appears in the energy expression. Henceː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt; lattice sites.&lt;br /&gt;
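&lt;br /&gt;
The counting above can be verified numerically. The following standalone sketch (not part of the lab scripts) evaluates the double sum for the three-site chain with periodic boundaries and &amp;lt;math&amp;gt;J=1&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
J = 1.0
spins = [1, 1, 1]  # 1D lattice, N = 3, all spins up
N = len(spins)

total = 0
for i in range(N):
    # periodic boundary conditions: the neighbours of site i are (i-1) and (i+1) mod N
    for j in [(i - 1) % N, (i + 1) % N]:
        total += spins[i] * spins[j]

E = -0.5 * J * total  # each interaction is counted twice, hence the factor of 1/2
print(E)  # -3.0, i.e. -DNJ with D = 1, N = 3
```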
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}!\,N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N!\,0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, so in this case &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each site has six nearest neighbours, but each bond is shared between two sites, so counting only the three unique interactions per site (left, top and front) counts every bond once, giving &amp;lt;math&amp;gt;DN&amp;lt;/math&amp;gt; bonds in total. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours reverses sign, so each of those six bonds changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, a change of &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt; per bond. The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
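&lt;br /&gt;
This entropy change can be checked directly (a quick standalone calculation, with the result in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
from math import log, factorial

# multiplicity before the flip (all 1000 spins up) and after (999 up, 1 down)
omega_before = factorial(1000) // (factorial(1000) * factorial(0))
omega_after = factorial(1000) // (factorial(999) * factorial(1))

delta_S = log(omega_after) - log(omega_before)  # in units of kB
print(omega_after, round(delta_S, 2))  # 1000 6.91
```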
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; as well.&lt;br /&gt;
&lt;br /&gt;
According to the Third Law of thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattice is expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, which means all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate a single average, which is several thousand times the age of the universe, so this brute-force approach is not practical.&lt;br /&gt;
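&lt;br /&gt;
The arithmetic can be checked in a few lines (the age of the universe, roughly &amp;lt;math&amp;gt;4.35\times 10^{17}&amp;lt;/math&amp;gt; s, is an approximate figure included only for comparison):&lt;br /&gt;

```python
n_configs = 2 ** 100          # configurations of 100 two-state spins
rate = 1e9                    # configurations analysed per second
seconds = n_configs / rate

age_of_universe = 4.35e17     # ~13.8 billion years in seconds (approximate)
print(f"{seconds:.3e} s, about {seconds / age_of_universe:.0f} ages of the universe")
```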
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all spins parallel to one another. This confirms that spontaneous magnetisation occurs, as expected, and indicates that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
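&lt;br /&gt;
The quoted average and uncertainty can be computed with a short script. The three timings below are hypothetical placeholders, to be replaced by the values ILtimetrial.py actually prints:&lt;br /&gt;

```python
import statistics

times = [24.1, 24.3, 24.5]  # hypothetical example readings (seconds) from three runs

mean = statistics.mean(times)
# standard error of the mean = sample standard deviation / sqrt(n)
sem = statistics.stdev(times) / len(times) ** 0.5
print(f"{mean:.1f} s +/- {sem:.1f} s")
```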
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; the new version is therefore expected to run ILtimetrial.py faster than the initial code.&lt;br /&gt;
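&lt;br /&gt;
To gain confidence in the vectorised version, it can be checked against the original double-loop approach on a random lattice. The sketch below is standalone (it reimplements both versions as plain functions, assuming the same periodic boundary convention):&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # original approach: multiply each spin by its left and top neighbours
    e = 0
    rows, cols = lat.shape
    for i in range(rows):
        for j in range(cols):
            e -= lat[i][j] * lat[i][j - 1]   # left neighbour (index -1 wraps around)
            e -= lat[i][j] * lat[i - 1][j]   # top neighbour (index -1 wraps around)
    return e

def energy_vectorised(lat):
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
assert energy_loops(lat) == energy_vectorised(lat)
print(energy_vectorised(np.ones((8, 8))))  # -128.0, i.e. -DNJ for D = 2, N = 64, J = 1
```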
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, which uses the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;, roughly a thirty-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off is the point after which the energy and magnetisation per spin are essentially constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is closer to one, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off for the larger matrices will therefore be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
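&lt;br /&gt;
The temperature dependence can be made quantitative with the Boltzmann factor. In these reduced units, flipping a single spin out of the 2x2 ground state raises the energy from &amp;lt;math&amp;gt;-8J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;0&amp;lt;/math&amp;gt;, so the Metropolis acceptance probability for that uphill move at each simulated temperature is:&lt;br /&gt;

```python
import numpy as np

deltaE = 8.0  # energy cost (units of J = kB) of flipping one spin out of the 2x2 ground state
for T in [1, 2, 3, 5]:
    p = np.exp(-deltaE / T)  # probability of accepting this uphill move
    print(f"T = {T}: acceptance probability = {p:.4f}")
```

At T=1 such a move is accepted roughly 3 times in 10000 attempts, while at T=5 roughly 1 attempt in 5 succeeds, which is why the high-temperature runs never settle into the ordered state.&lt;br /&gt;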
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off for the energy and magnetisation averages is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting the choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, by which point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has passed for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: by then the energy and magnetisation have largely converged for T=1 and change little afterwards, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 11&#039;&#039;, a cut-off of 50000 steps was chosen for the 32x32 matrix: the energy and magnetisation have largely converged by this point, though not quite as fully as they would by 100000 steps. The slightly lower value was chosen to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off contribute to the averages of the energy, energy squared, magnetisation and magnetisation squared returned by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of 0.02, for 10000 Monte Carlo steps per temperature, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to the one used for the 8x8 graph in Figure 12, with the relevant file names and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
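&lt;br /&gt;
The per-size data files follow the same pattern, so the loading and normalisation step can be illustrated with a short round-trip sketch. The numbers below are made-up stand-ins (the real files are written by ILtemperaturerange.py), and the column order of T, average energy, average energy squared, average magnetisation and average magnetisation squared is assumed:&lt;br /&gt;

```python
import numpy as np

# made-up stand-in rows: T, <E>, <E^2>, <M>, <M^2> for an 8x8 lattice
demo = np.array([[1.0, -127.8, 16339.0, 63.9, 4085.0],
                 [5.0, -35.2, 1602.0, 0.4, 410.0]])
np.savetxt("demo8x8.dat", demo)          # same call used to save the real data

data = np.loadtxt("demo8x8.dat")         # loadtxt is the reverse of savetxt
temps = data[:, 0]
energy_per_spin = data[:, 1] / 64        # an 8x8 lattice has 64 spins
print(temps, energy_per_spin)
```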
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Near the transition temperature, fluctuations occur over increasingly long length scales, so the smallest lattices (2x2 and 4x4) cannot capture them and are dominated by finite-size effects. A lattice of at least 16x16, and ideally 32x32, is therefore expected to be big enough to capture the long-range fluctuations.&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum over all microstates of the probability of each microstate multiplied by its energy, defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, using the chain rule with &amp;lt;math&amp;gt;\frac{\partial \beta}{\partial T} = -\frac{1}{k_B T^2}&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
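&lt;br /&gt;
This identity is easy to check numerically. The sketch below (my own illustrative check, not part of the lab scripts) uses a toy two-level system with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt; in reduced units and compares &amp;lt;math&amp;gt;\frac{Var[E]}{T^2}&amp;lt;/math&amp;gt; against a finite-difference derivative of the average energyː&lt;br /&gt;

```python
import numpy as np

# Toy two-level system (energy levels 0 and 1, k_B = 1 in reduced units),
# used only to check the identity C = Var[E] / (k_B T^2) numerically.
eps = np.array([0.0, 1.0])

def avg_E(T):
    """Boltzmann average energy at temperature T."""
    w = np.exp(-eps / T)   # Boltzmann factors exp(-beta*eps_i)
    p = w / w.sum()        # probabilities p_i = exp(-beta*eps_i)/q
    return (p * eps).sum()

T = 0.7
w = np.exp(-eps / T)
p = w / w.sum()
varE = (p * eps**2).sum() - avg_E(T)**2   # mean of E squared, minus squared mean
C_from_var = varE / T**2                  # Var[E]/(k_B T^2) with k_B = 1

# Independent estimate: central finite difference of the average energy in T
h = 1e-5
C_fd = (avg_E(T + h) - avg_E(T - h)) / (2 * h)
```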
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np #imports assumed by this snippet&lt;br /&gt;
import matplotlib.pyplot as pl&lt;br /&gt;
&lt;br /&gt;
def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend across the graphs above is that the heat-capacity peak shifts towards lower temperatures as the lattice size increases, i.e. the estimated Curie Temperature decreases with increasing matrix size.&lt;br /&gt;
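&lt;br /&gt;
A rough estimate of each peak position can be read straight off the sampled arrays with np.argmax. The sketch below uses placeholder data in place of the temps2x2 and heatCap2x2 arrays computed aboveː&lt;br /&gt;

```python
import numpy as np

# Placeholder peaked curve standing in for heatCap2x2 and its temperature
# array; the real script would reuse the arrays computed above.
T = np.linspace(0.5, 5.0, 100)
C = np.exp(-(T - 2.3)**2 / 0.1)   # synthetic peak centred at T = 2.3

T_peak = T[np.argmax(C)]   # temperature at which the sampled C is largest
```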
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix size.&lt;br /&gt;
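&lt;br /&gt;
Assuming the six-column layout T, E, E², M, M², C described in the task, loadtxt can also unpack each column into its own array in one call. The sketch below writes and reads a small illustrative file (example.dat is a made-up name, standing in for files such as 16x16C.dat)ː&lt;br /&gt;

```python
import numpy as np

# Write a small illustrative file with the layout T, E, E^2, M, M^2, C
# (placeholder numbers, not real simulation output)
rows = "1.0 -2.0 4.0 1.0 1.0 0.1\n2.0 -1.5 2.5 0.8 0.7 0.3\n"
with open("example.dat", "w") as f:
    f.write(rows)

# unpack=True transposes the result so each column becomes its own array
T, E, E2, M, M2, C = np.loadtxt("example.dat", unpack=True)
```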
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script used to read the data and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and fails to reproduce the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
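&lt;br /&gt;
One likely reason the degree-35 fit behaves so poorly is numerical conditioning: fitting high-degree polynomials in the raw temperature variable is ill-conditioned. Below is a sketch (with placeholder peaked data, not the 16x16 file) of the newer numpy Polynomial.fit interface, which rescales the fitting domain internallyː&lt;br /&gt;

```python
import numpy as np
from numpy.polynomial import Polynomial

# Placeholder peaked data; Polynomial.fit maps the T values into [-1, 1]
# before fitting, which is better conditioned at high degree than np.polyfit
# applied to the raw temperatures.
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-(T - 2.3)**2 / 0.5)    # synthetic peak centred at T = 2.3

p = Polynomial.fit(T, C, deg=15)   # fit performed in the scaled domain
C_fit = p(T)                       # evaluated back in the original T domain
T_peak = T[np.argmax(C_fit)]
```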
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit even at 3rd degree: it represents the data around the peak much more accurately, which makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
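&lt;br /&gt;
Rather than sampling the fitted polynomial on a dense grid, its maximum can also be located analytically: differentiate the fit with np.polyder and take the stationary point inside the window. A sketch with placeholder peaked data standing in for the restricted C vs T regionː&lt;br /&gt;

```python
import numpy as np

# Placeholder peaked data standing in for the restricted C vs T window;
# the true maximum of this synthetic curve is at T = 2.32.
T = np.linspace(2.15, 2.55, 50)
C = 1.0 - (T - 2.32)**2

fit = np.polyfit(T, C, 3)     # cubic fit, as in the script above
dfit = np.polyder(fit)        # derivative polynomial
roots = np.roots(dfit)        # stationary points of the fitted cubic
real = roots[np.isreal(roots)].real

# keep the stationary point nearest the sampled maximum
T_peak = real[np.argmin(np.abs(real - T[np.argmax(C)]))]
```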
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, obtained from the temperature at which the Heat Capacity is a maximum for each lattice, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my extrapolation predicts. However, the difference between the two values is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and indicates that the errors in my estimates of the Curie Temperature for each lattice size are relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the longer-range interactions imposed by the periodic boundary conditions are most significant. These spurious interactions make the energies, and hence the Curie Temperature estimates, of the smaller lattices less accurate, which in turn limits the accuracy of the line of best fit. To improve the extrapolation, larger lattice sizes (128x128, 256x256, etc.) should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
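&lt;br /&gt;
Requesting the covariance matrix from np.polyfit would also attach an uncertainty to the extrapolated intercept. A sketch with synthetic data obeying the scaling relation (the numbers are illustrative, not my measured values)ː&lt;br /&gt;

```python
import numpy as np

# Synthetic Tc(L) data obeying Tc(L) = Tc_inf + A/L (illustrative numbers),
# used to show the cov=True option of np.polyfit.
inv_L = 1.0 / np.array([4.0, 8.0, 16.0, 32.0, 64.0])
Tc = 2.269 + 1.5 * inv_L

coeffs, cov = np.polyfit(inv_L, Tc, 1, cov=True)
Tc_inf = coeffs[1]                # intercept = extrapolated infinite-lattice Tc
Tc_inf_err = np.sqrt(cov[1, 1])   # one-sigma uncertainty on the intercept
```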
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796567</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796567"/>
		<updated>2019-11-20T10:23:23Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than a...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, so for the all-up lowest-energy configuration &amp;lt;math&amp;gt;S =  k_B ln\left(\frac{N!}{N! \ 0!}\right) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
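&lt;br /&gt;
The minimum-energy result &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt; can be brute-force checked for a small periodic 1D chain by enumerating all 2&lt;sup&gt;N&lt;/sup&gt; configurations (an illustrative check with J = 1)ː&lt;br /&gt;

```python
import itertools

N = 3     # sites in the periodic 1D chain
J = 1.0

def chain_energy(spins):
    """-J times the sum over the N unique neighbour bonds of the ring."""
    return -J * sum(spins[i] * spins[(i + 1) % len(spins)]
                    for i in range(len(spins)))

energies = [chain_energy(s) for s in itertools.product([1, -1], repeat=N)]
E_min = min(energies)   # expected to equal -D*N*J = -3 for D = 1, N = 3
```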
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign, which raises the total energy of the system. Although only three bonds per site are counted when summing the total energy, a flipped spin participates in all six bonds to its neighbours, so six spin-spin interactions change from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt; and the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
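&lt;br /&gt;
The single-flip energy change can be cross-checked numerically on a small all-up 3D lattice with periodic boundaries (an illustrative check with J = 1; the np.roll construction counts each bond exactly once)ː&lt;br /&gt;

```python
import numpy as np

# All-up 10x10x10 lattice, periodic boundaries, J = 1: one np.roll per axis
# pairs every site with one neighbour along that axis, so each unique bond
# is counted exactly once.
J = 1.0
lat = np.ones((10, 10, 10), dtype=int)

def energy(l):
    return -J * sum((l * np.roll(l, 1, axis=a)).sum() for a in range(3))

E0 = energy(lat)       # minimum energy -D*N*J = -3000
lat[0, 0, 0] *= -1     # flip a single spin
dE = energy(lat) - E0  # the flip reverses all six bonds at that site
```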
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, which means all spins must be parallel, giving a magnetisation of &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt; (all up, or equivalently all down). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;, then the multiplicity is &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy is &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. ILcheck.py was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. Since &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, a computer analysing &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second would take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate the sum, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
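&lt;br /&gt;
The arithmetic behind this estimate, converted to years (an illustrative one-liner)ː&lt;br /&gt;

```python
# 2^100 configurations at 10^9 configurations per second, in seconds and years
seconds = 2**100 / 1e9
years = seconds / (60 * 60 * 24 * 365)   # roughly 4e13 years
```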
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
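&lt;br /&gt;
The statistics could equivalently be accumulated online, without storing the full E, E2, M and M2 lists. The sketch below (energy only, for brevity) is illustrative and is not the version used in my IsingLattice.pyː&lt;br /&gt;

```python
# Running averages accumulated one Monte Carlo step at a time, so memory
# use stays constant no matter how many cycles are run.
class RunningStats:
    def __init__(self):
        self.n = 0
        self.sum_E = 0.0
        self.sum_E2 = 0.0

    def record(self, E):
        """Accumulate one sample of the energy and its square."""
        self.n += 1
        self.sum_E += E
        self.sum_E2 += E * E

    def means(self):
        """Return the running averages of E and E squared."""
        return self.sum_E / self.n, self.sum_E2 / self.n

rs = RunningStats()
for E in [1.0, 2.0, 3.0]:
    rs.record(E)
mean_E, mean_E2 = rs.means()
```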
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which confirms that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \ s \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
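The average and its uncertainty come from treating the repeat timings as independent measurements. A minimal sketch of the calculation, using three illustrative timings rather than my actual measured values:&lt;br /&gt;

```python
import numpy as np

# Three illustrative repeat timings in seconds (placeholders, not my measurements)
times = np.array([24.1, 24.3, 24.5])

mean_time = np.mean(times)  # average over the repeats
std_err = np.std(times, ddof=1) / np.sqrt(len(times))  # standard error of the mean

print(mean_time, std_err)
```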
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(left+top) #sums the spin products over every neighbour pair, each counted once (J=1)&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; it is therefore expected to run ILtimetrial.py faster than the initial code.&lt;br /&gt;
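As a consistency check, the vectorised energy can be compared with a direct double loop over the lattice. The sketch below assumes the same conventions as my energy() function (J = 1, periodic boundaries, each neighbour pair counted once via one horizontal and one vertical roll):&lt;br /&gt;

```python
import numpy as np

# random 8x8 lattice of +1/-1 spins
rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(8, 8))

# vectorised version: each neighbour pair is counted exactly once
left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
top = np.multiply(np.roll(lattice, -1, axis=0), lattice)
fast_energy = -np.sum(left + top)

# double-loop version: sum over right and down neighbours of every site
slow_energy = 0
n_rows, n_cols = lattice.shape
for i in range(n_rows):
    for j in range(n_cols):
        slow_energy -= lattice[i][j] * lattice[i][(j + 1) % n_cols]
        slow_energy -= lattice[i][j] * lattice[(i + 1) % n_rows][j]

print(fast_energy == slow_energy)
```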
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \ s \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are higher than the Curie Temperature, so spontaneous magnetisation does not occur and the system does not converge to the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, which supports my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1, and the initial large drop in energy has also been overcome by this point for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have largely converged and will not change much beyond this point, and the same is true for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen, as the energy and magnetisation have largely converged by this point, although not quite as fully as they would by 100000 steps. I chose the slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks were not excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to array of E,E2,M and M2 above the specific cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
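The standard deviations used for the error bars follow directly from the recorded averages, since &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2&amp;lt;/math&amp;gt;. A minimal sketch with made-up energy samples (not my simulation output):&lt;br /&gt;

```python
import numpy as np

# Illustrative energy samples standing in for the post-cut-off record self.E
E_samples = np.array([-120.0, -118.0, -122.0, -120.0])

mean_E = np.mean(E_samples)             # <E>
mean_E2 = np.mean(E_samples ** 2)       # <E^2>
std_E = np.sqrt(mean_E2 - mean_E ** 2)  # sqrt(Var[E]), the population standard deviation

# agrees with NumPy's population standard deviation
print(np.isclose(std_E, np.std(E_samples)))
```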
&lt;br /&gt;
Below is the source code for the script to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to that used for the 8x8 graph in &#039;&#039;Figure 12&#039;&#039; above, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
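This result can also be checked numerically: for any system whose Boltzmann averages can be computed exactly, the variance formula should match a direct numerical derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; with respect to T. A sketch using an arbitrary two-level system with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;, purely for illustration and not part of the Ising scripts:&lt;br /&gt;

```python
import numpy as np

# two-level system with energies 0 and 1 (an arbitrary illustrative example)
levels = np.array([0.0, 1.0])

def average_E(T, n=1):
    boltz = np.exp(-levels / T)
    p = boltz / boltz.sum()          # Boltzmann probabilities
    return np.sum(p * levels ** n)   # <E^n>

T = 1.5
var_E = average_E(T, 2) - average_E(T, 1) ** 2
C_from_variance = var_E / T ** 2     # C = Var[E] / (kB*T^2) with kB = 1

# central finite-difference derivative of <E> with respect to T
dT = 1e-6
C_from_derivative = (average_E(T + dT) - average_E(T - dT)) / (2 * dT)

print(np.isclose(C_from_variance, C_from_derivative))
```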
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the heat capacity curve shifts towards lower temperatures as the size of the matrix increases, which means the estimated Curie Temperature decreases as the matrix size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 16&#039;&#039; shows a plot of the Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and does not capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 16&#039;&#039;, the new fitted polynomial is a significantly better fit even though it is only of 3rd degree. It represents my data around the peak of the graph much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below plots &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{\mathrm{Lattice\ Size}}&amp;lt;/math&amp;gt; to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, the temperature at which the heat capacity is a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with the literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between the two values is only 0.008, which is surprisingly good agreement and indicates that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the longer-range interactions imposed by the periodic boundary conditions are most significant. These interactions are far less important for the larger lattices, so the energies, and hence the Curie Temperature estimates, of the smaller lattices carry a larger error. This reduces the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest lattices excluded.&lt;br /&gt;
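The exact value quoted above can be reproduced directly: Onsager&#039;s result for the infinite 2D square lattice is &amp;lt;math&amp;gt;T_C = \frac{2}{\ln(1+\sqrt{2})}&amp;lt;/math&amp;gt; in units of &amp;lt;math&amp;gt;\frac{J}{k_B}&amp;lt;/math&amp;gt;. A minimal check (the fitted value 2.277 is simply the intercept reported above):&lt;br /&gt;

```python
import math

# Onsager's exact Curie temperature for the infinite 2D square-lattice
# Ising model, in reduced units of J/k_B: T_C = 2 / ln(1 + sqrt(2))
T_c_exact = 2.0 / math.log(1.0 + math.sqrt(2.0))

T_c_fit = 2.277  # intercept obtained from the fit in Figure 18

print(round(T_c_exact, 3))            # 2.269
print(round(T_c_fit - T_c_exact, 3))  # over-estimate of 0.008
```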
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796563</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796563"/>
		<updated>2019-11-20T10:19:36Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functio...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
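The result &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt; can also be checked by brute force. The sketch below (an illustrative helper, not part of the IsingLattice script) enumerates all &amp;lt;math&amp;gt;2^3&amp;lt;/math&amp;gt; configurations of the periodic 1D, N=3 chain and confirms that the minimum energy is &amp;lt;math&amp;gt;-3J&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import itertools

def ising_energy_1d(spins, J=1.0):
    """Energy of a periodic 1D Ising chain, each bond counted once."""
    N = len(spins)
    return -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))

# Enumerate every configuration of three +/-1 spins
energies = [ising_energy_1d(s) for s in itertools.product([1, -1], repeat=3)]
print(min(energies))  # -3.0, i.e. -DNJ with D=1, N=3, J=1
```

Note that both the all-up and all-down configurations reach this minimum.&lt;br /&gt;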
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt;, so for a 100-spin lattice with all spins up, &amp;lt;math&amp;gt;S = k_B \ln\left(\frac{100!}{100! \ 0!}\right) = 0&amp;lt;/math&amp;gt;. (Strictly, the all-down configuration is degenerate with the all-up one, so the ground state multiplicity is 2 and &amp;lt;math&amp;gt;S = k_B \ln(2)&amp;lt;/math&amp;gt;, which is still negligible for a macroscopic system.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the products of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbouring spins reverse sign, so each of these six interaction energies changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, an increase of &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt; per interaction. The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;.&lt;br /&gt;
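The energy change on flipping one spin in the ground state can be checked numerically on a periodic 10x10x10 lattice (a standalone sketch using np.roll, not the IsingLattice class itself):&lt;br /&gt;

```python
import numpy as np

def energy_3d(lat, J=1.0):
    """Total energy of a periodic 3D Ising lattice, each bond counted once."""
    e = 0.0
    for axis in range(3):  # one roll per dimension counts each bond once
        e += np.sum(lat * np.roll(lat, 1, axis=axis))
    return -J * e

lat = np.ones((10, 10, 10))  # ground state: N = 1000, D = 3
E0 = energy_3d(lat)          # -DNJ = -3000
lat[0, 0, 0] *= -1           # flip a single spin
E1 = energy_3d(lat)
print(E0, E1, E1 - E0)       # -3000.0 -2988.0 12.0
```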
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
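The multiplicities and the entropy gain can be verified with a short calculation (working in reduced units where &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import math

# Multiplicities before and after the flip:
omega_before = math.comb(1000, 0)  # all 1000 spins up: 1000!/(1000! 0!) = 1
omega_after = math.comb(1000, 1)   # one spin down: 1000!/(999! 1!) = 1000

dS = math.log(omega_after) - math.log(omega_before)  # entropy change / k_B
print(omega_before, omega_after, round(dS, 2))  # 1 1000 6.91
```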
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, which means all spins must be parallel, such that the magnetisation is &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. So for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt; at absolute zero, &amp;lt;math&amp;gt;M = N = 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products and negates to give the total energy&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. Since &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, a computer that can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second would take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; to analyse the whole system, which is longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
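The arithmetic behind this estimate is easily reproduced:&lt;br /&gt;

```python
configs = 2 ** 100                 # two states per spin, 100 spins
rate = 1e9                         # configurations analysed per second
seconds = configs / rate
years = seconds / (365.25 * 24 * 3600)

print(f"{configs:.2e} configurations")  # 1.27e+30 configurations
print(f"{seconds:.2e} s")               # 1.27e+21 s
print(f"{years:.1e} years")             # 4.0e+13 years
```

For comparison, the age of the universe is of order &amp;lt;math&amp;gt;1.4\times 10^{10}&amp;lt;/math&amp;gt; years.&lt;br /&gt;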
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
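The acceptance rule at the heart of montecarlostep() can be isolated and checked on its own. The helper below is an illustrative standalone sketch (not part of the IsingLattice class): a trial flip is accepted outright if it lowers the energy, and otherwise with the Boltzmann probability &amp;lt;math&amp;gt;e^{-\Delta E / T}&amp;lt;/math&amp;gt;, which is equivalent to the reject-and-revert condition used above.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_accept(deltaE, T):
    """Accept a trial spin flip with probability min(1, exp(-deltaE / T))."""
    if deltaE <= 0:
        return True  # moves that lower (or keep) the energy are always accepted
    return rng.random() < np.exp(-deltaE / T)

print(metropolis_accept(-4.0, 1.0))  # True
# An uphill move with deltaE = 4 at T = 1 is accepted with probability
# exp(-4), i.e. roughly 1.8% of the time:
accepts = sum(metropolis_accept(4.0, 1.0) for _ in range(100000))
print(0.01 < accepts / 100000 < 0.03)  # True
```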
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another, showing, as I expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \ s \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The use of the NumPy functions reduces the amount of code required and removes the need for explicit loops, making the code significantly shorter; it is therefore expected to run ILtimetrial.py faster than the original code.&lt;br /&gt;
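As a consistency check (a standalone sketch with J = 1 and periodic boundaries, as assumed throughout), the vectorised energy can be compared against the original double-loop version on a random lattice; the two should agree exactly, since Python&#039;s negative indexing in the loop version implements the same periodic boundary conditions as np.roll:&lt;br /&gt;

```python
import numpy as np

def energy_loop(lat):
    """Double-loop energy: negative indexing gives periodic boundaries."""
    e = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            e += lat[i][j] * lat[i][j - 1]   # product with left neighbour
            e += lat[i][j] * lat[i - 1][j]   # product with top neighbour
    return -e

def energy_vec(lat):
    """Vectorised energy using np.roll and np.multiply."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(1)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loop(lat) == energy_vec(lat))  # True
```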
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster upon using the roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 \ s \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
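The speed-up can be reproduced in miniature with the standard timeit module. The sketch below (illustrative function names, an 8x8 random lattice) times a double-loop magnetisation against a single np.sum call:&lt;br /&gt;

```python
import timeit
import numpy as np

lat = np.random.default_rng(0).choice([-1, 1], size=(8, 8))

def mag_loop():
    """Magnetisation via the original double loop."""
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j]
    return total

def mag_numpy():
    """Magnetisation via a single call to np.sum."""
    return np.sum(lat)

print(mag_loop() == mag_numpy())  # True: both give the same magnetisation
t_loop = timeit.timeit(mag_loop, number=2000)
t_numpy = timeit.timeit(mag_numpy, number=2000)
print(f"loop: {t_loop:.4f} s, numpy: {t_numpy:.4f} s")
```

The absolute times depend on the machine, but the vectorised version avoids the Python-level loop entirely.&lt;br /&gt;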
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py for 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not converge to the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
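The temperature dependence of these fluctuations follows directly from the Boltzmann factor. For a representative uphill move of &amp;lt;math&amp;gt;\Delta E = 4&amp;lt;/math&amp;gt; (in the reduced units used here, an illustrative value), the Metropolis acceptance probability rises steeply with temperature:&lt;br /&gt;

```python
import numpy as np

deltaE = 4.0  # a representative uphill energy change, in reduced units
probs = {}
for T in (1.0, 2.0, 3.0, 5.0):
    probs[T] = float(np.exp(-deltaE / T))  # Metropolis acceptance probability
    print(T, round(probs[T], 3))
# 1.0 0.018 / 2.0 0.135 / 3.0 0.264 / 5.0 0.449
```

At T=5 an uphill move of this size is accepted almost half the time, roughly 25 times more often than at T=1, which is why the T=3 and T=5 traces never settle.&lt;br /&gt;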
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, by which the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps, as for T=1 the energy and magnetisation have largely converged by this point and will not change much further, and likewise for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: the energy and magnetisation have largely converged by this point, though not as fully as at 100000 steps. I chose the slightly lower value so that the run times of my Monte Carlo simulations in later tasks would not become excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the arrays of E, E2, M and M2 if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, ILtemperaturerange.py was run on an 8x8 lattice between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature; the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to the one used for the 8x8 graph in Figure 12, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
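This result can also be checked numerically. The sketch below (not part of the report's scripts; the two-level system and all names are illustrative) compares the variance route with a finite-difference derivative of the mean energy for a system with levels 0 and eps, taking k_B = 1:&lt;br /&gt;

```python
import numpy as np

def avg_E(T, eps=1.0):
    """Mean energy of a two-level system with levels 0 and eps (k_B = 1)."""
    w = np.exp(-eps / T)  # Boltzmann factor of the upper level
    q = 1.0 + w           # partition function
    return eps * w / q

def C_from_variance(T, eps=1.0):
    """Heat capacity via C = Var[E] / T^2 (k_B = 1)."""
    w = np.exp(-eps / T)
    q = 1.0 + w
    E = eps * w / q        # <E>
    E2 = eps**2 * w / q    # <E^2>
    return (E2 - E**2) / T**2

T, h = 1.5, 1e-5
C_direct = (avg_E(T + h) - avg_E(T - h)) / (2 * h)  # C = d<E>/dT by central difference
```

The two routes agree to numerical precision, as the derivation requires.&lt;br /&gt;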
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of squared average energies&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #variance of the energy: mean of the squared energy minus the squared mean energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend in the graphs above is that the peak shifts towards lower temperatures as the lattice size increases, i.e. the estimated Curie Temperature decreases with increasing lattice size.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each lattice size.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 16&#039;&#039; shows a plot of Heat Capacity against Temperature for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and does not capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree; it represents the data around the peak much more accurately, which will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, obtained by finding the temperature at which the Heat Capacity is a maximum for each lattice, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the long-range interactions imposed by the periodic boundary conditions are most significant. These boundary effects are far less important for the larger lattices; they make the energies of the smaller matrices less accurate and give them a larger associated error, which propagates into the Curie Temperature estimates for those sizes. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
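A slightly more robust way to read off the peak than the equality comparison used for Tmax64x64 above is np.argmax, which returns the index of the maximum directly. A small illustrative sketch (the quadratic stand-in curve is made up for the example):&lt;br /&gt;

```python
import numpy as np

T_range = np.linspace(2.15, 2.55, 1000)  # temperatures over the fitted window
C_fit = 1.0 - (T_range - 2.3)**2         # stand-in fitted curve peaking at T = 2.3

i_peak = np.argmax(C_fit)                # index of the maximum fitted heat capacity
T_curie = T_range[i_peak]                # corresponding Curie temperature estimate
```

Unlike an equality test against the maximum value, argmax cannot miss due to floating-point round-off and always returns a single index.&lt;br /&gt;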
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796558</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796558"/>
		<updated>2019-11-20T10:16:19Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of T...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
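The N=3 result can be checked with a few lines of code. The sketch below (the function name and the np.roll approach are my own, not the report's code) evaluates the interaction-energy formula for a 1D ring with periodic boundaries:&lt;br /&gt;

```python
import numpy as np

def ising_energy_1d(spins, J=1.0):
    """E = -(1/2) J * sum_i sum_{j in neighbours(i)} s_i s_j for a 1D
    ring with periodic boundaries; the 1/2 corrects the double counting."""
    spins = np.asarray(spins)
    right = spins * np.roll(spins, -1)  # bond with the right neighbour
    left = spins * np.roll(spins, 1)    # bond with the left neighbour
    return -0.5 * J * (right.sum() + left.sum())
```

For the [+1][+1][+1] configuration with J = 1 this gives -3, matching E = -DNJ for D = 1, N = 3.&lt;br /&gt;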
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \, n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, so for the lowest-energy state of a lattice of, say, &amp;lt;math&amp;gt;N=100&amp;lt;/math&amp;gt; spins (all spins up) &amp;lt;math&amp;gt;S =  k_B ln\left(\frac{100!}{100! \, 0!}\right) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign and becomes negative, which increases the total energy of the system. The flipped spin participates in &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; bonds, each of which changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \, 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999! \, 1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
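As a quick arithmetic check of the figure above (nothing here beyond the formula itself):&lt;br /&gt;

```python
import math

# Delta S / k_B = ln(Omega_after) - ln(Omega_before) = ln(1000) - ln(1)
delta_S_over_kB = math.log(1000) - math.log(1)
print(round(delta_S_over_kB, 2))  # 6.91
```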
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. For the entropy to be zero there must be only one possible configuration, which requires all spins to be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #local reference to the current lattice&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #local reference to the current lattice&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the left and top spin-product lists&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system, which is several thousand times the age of the universe and therefore not a practical approach.&lt;br /&gt;
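The arithmetic above can be sketched directly (the &amp;lt;math&amp;gt;10^9&amp;lt;/math&amp;gt; configurations-per-second rate is the assumed one from the task):&lt;br /&gt;

```python
# Sketch of the time estimate above, assuming a rate of 1e9 configurations/s
configs = 2**100                  # number of spin configurations for 100 spins
rate = 1e9                        # configurations analysed per second (assumed)
seconds = configs / rate          # total time required
years = seconds / (365.25 * 24 * 3600)
print(f"{configs:.3e} configurations, {seconds:.3e} s = {years:.3e} years")
```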
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. Spontaneous magnetisation therefore occurs, as expected, which also shows that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #adds the rows of the combined spin-product array, giving per-column totals&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #sums the remaining totals and negates to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
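As a sketch (not part of the marked code), the vectorised energy can be checked against the original double loop on a random lattice; the helper names and the 8x8 test lattice below are illustrative only:&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # original double-loop version: left and top neighbours, with
    # periodic boundaries handled by Python's negative indexing
    e = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            e += lat[i][j] * lat[i][j - 1]
            e += lat[i][j] * lat[i - 1][j]
    return -e

def energy_roll(lat):
    # vectorised version using np.roll/np.multiply, as in the report
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loops(lat), energy_roll(lat))  # both count each bond exactly once
```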
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, which uses the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py for 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor &amp;lt;math&amp;gt;exp(-\Delta E/T)&amp;lt;/math&amp;gt; is closer to 1, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
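A minimal sketch of this point, using an illustrative energy cost of &amp;lt;math&amp;gt;\Delta E = 8&amp;lt;/math&amp;gt; in reduced units (an assumption for demonstration, not a value from my simulations), shows how quickly the Boltzmann factor grows with temperature:&lt;br /&gt;

```python
import numpy as np

# acceptance probability exp(-deltaE/T) for an assumed uphill move of deltaE = 8
# (reduced units J = k_B = 1, as used in this experiment)
deltaE = 8.0
for T in (1.0, 2.0, 3.0, 5.0):
    print(T, np.exp(-deltaE / T))
```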
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, which supports my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as by this point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have largely converged by this point and change little afterwards, and the same is true for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not as fully as they would at 100000 steps. I chose the slightly lower value so that the run times of my Monte Carlo simulations in later tasks would not be excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only appends values to the E, E2, M and M2 arrays once the cycle count is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
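The acceptance rule buried inside the step above can be summarised as a small standalone sketch (the function name and the explicit random_number argument are illustrative, not part of the IsingLattice class):&lt;br /&gt;

```python
from math import exp

def metropolis_accept(deltaE, T, random_number):
    """Sketch of the acceptance rule used in montecarlostep():
    a move that lowers the energy is always kept; an uphill move is kept
    only with probability exp(-deltaE/T) (reduced units, k_B = 1)."""
    if deltaE <= 0:
        return True
    return random_number <= exp(-deltaE / T)

# downhill moves are always accepted, whatever random number is drawn
print(metropolis_accept(-4.0, 1.0, 0.999))
# an uphill move of deltaE = 4 at T = 1: exp(-4) ~ 0.018, so a draw of 0.5 rejects
print(metropolis_accept(4.0, 1.0, 0.5))
```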
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the ILtemperaturerange.py file was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to the one used for the 8x8 graph in Figure 12 above, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum, over all microstates, of the probability of each microstate multiplied by its energy, which is defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the chain rule, and noting that &amp;lt;math&amp;gt;\frac{\partial \beta}{\partial T} = -\frac{1}{k_B T^2}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
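This result can be verified numerically on a hypothetical two-level system (energies 0 and &amp;lt;math&amp;gt;\epsilon&amp;lt;/math&amp;gt;, reduced units with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;; the system and the temperature chosen are assumptions for illustration), comparing the variance formula against a direct numerical derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

# Hypothetical two-level system with energies 0 and eps, reduced units (k_B = 1)
eps, T = 1.0, 0.7

def thermal_averages(temp):
    beta = 1.0 / temp
    q = 1.0 + np.exp(-beta * eps)        # partition function
    p1 = np.exp(-beta * eps) / q         # probability of the excited state
    return p1 * eps, p1 * eps**2         # <E>, <E^2>

E_avg, E2_avg = thermal_averages(T)
C_from_var = (E2_avg - E_avg**2) / T**2  # C = Var[E] / (k_B T^2)

# central-difference estimate of C = d<E>/dT for comparison
dT = 1e-6
C_from_deriv = (thermal_averages(T + dT)[0] - thermal_averages(T - dT)[0]) / (2 * dT)

print(C_from_var, C_from_deriv)
```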
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend in the above graphs is that the heat-capacity peak shifts towards lower temperatures as the lattice size increases, which suggests that the estimated Curie temperature decreases towards its infinite-lattice value as the matrix size increases.&lt;br /&gt;
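As a sketch of how a peak position can be read off a &amp;lt;math&amp;gt;C(T)&amp;lt;/math&amp;gt; curve with np.argmax, the Gaussian below is a stand-in for real heat-capacity data, not my simulation output:&lt;br /&gt;

```python
import numpy as np

# Illustrative only: a fake C(T) curve peaked at T = 2.5 on the same
# 0.02-spaced temperature grid used in this report
temps = np.linspace(0.5, 5.0, 226)
C = np.exp(-((temps - 2.5) ** 2) / 0.1)

T_peak = temps[np.argmax(C)]   # temperature at which C is largest
print(T_peak)
```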
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix size.&lt;br /&gt;
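Rather than editing the variables by hand, the per-size loading could be wrapped in a helper; below is a hypothetical sketch (the file-naming pattern NxNC.dat follows the files described above, but the helper itself is not part of the report):&lt;br /&gt;

```python
import numpy as np

# Hypothetical helper: load one C++ data file (columns T, E, E^2, M, M^2, C)
# and return the columns needed for the comparison plots.
def load_cpp_data(filename):
    data = np.loadtxt(filename)
    return data[:, 0], data[:, 1], data[:, 3]  # T, E per spin, M per spin

# Tiny self-contained demonstration with a fake two-row data file.
np.savetxt("demo.dat", [[1.0, -2.0, 4.0, 1.0, 1.0, 0.1],
                        [2.0, -1.5, 2.5, 0.8, 0.7, 0.3]])
T, E, M = load_cpp_data("demo.dat")
```

Calling load_cpp_data for each lattice size in a loop would avoid repeated manual edits of the plotting script.&lt;br /&gt;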
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 16&#039;&#039; shows a plot of my Heat Capacity against Temperature data for a 16x16 matrix together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and in particular fails to reproduce the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller temperature range (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit even at only 3rd degree: it represents my data around the peak of the graph much more accurately, which will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, i.e. the temperature at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my extrapolation predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; the level of agreement is somewhat surprising, and it implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the longer range interactions imposed by the periodic boundary conditions are most significant. These spurious interactions are far less significant for the larger lattices, so the energies of the smaller matrices, and hence their estimated Curie Temperatures, carry the largest errors. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
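One further check on the extrapolation (not performed in the report) would be the statistical uncertainty on the intercept, available from the covariance matrix that np.polyfit can return. The Curie temperatures below are made-up illustrative values with roughly the right trend, not my fitted results:&lt;br /&gt;

```python
import numpy as np

# Sketch: estimate the one-sigma error on the extrapolated intercept,
# T_C(infinity), from the scaled covariance matrix of the linear fit.
# The Tc values are illustrative placeholders, not the report's data.
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
Tc = np.array([2.51, 2.44, 2.36, 2.31, 2.29, 2.28])

coeffs, cov = np.polyfit(1.0 / L, Tc, 1, cov=True)
Tc_inf = coeffs[1]               # intercept, i.e. T_C as 1/L -> 0
Tc_inf_err = np.sqrt(cov[1, 1])  # one-sigma uncertainty on the intercept
```

Quoting the intercept together with its uncertainty would make the comparison with the Onsager value more quantitative.&lt;br /&gt;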
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796556</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796556"/>
		<updated>2019-11-20T10:15:42Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact under the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites. In the lowest energy state all spins point the same way, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
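As a quick numerical check of the expressions above (a sketch using the Python standard library; N = 100 is just an example size):&lt;br /&gt;

```python
import math

# Check of the ground-state multiplicity and entropy (in units of k_B).
# Omega = N! / (N_up! * N_down!); with all N spins up, N_up = N, N_down = 0.
N = 100
omega = math.factorial(N) // (math.factorial(N) * math.factorial(0))
entropy_over_kB = math.log(omega)  # S/k_B = ln(Omega) = ln(1) = 0
```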
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins becomes negative, which increases the total energy of the system. The flipped spin participates in &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; interactions, not just the 3 it contributes to the sum, because its neighbours&#039; interactions with it also reverse. Each reversed interaction raises the energy by &amp;lt;math&amp;gt;2J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
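These multiplicity and entropy values can be verified directly; a minimal sketch using the standard library:&lt;br /&gt;

```python
import math

# After one flip in an N = 1000 lattice: Omega = 1000!/(999! * 1!) = 1000,
# so the entropy gain is Delta S = k_B ln(1000) - k_B ln(1) = 6.91 k_B.
N = 1000
omega_after = math.factorial(N) // (math.factorial(N - 1) * math.factorial(1))
dS_over_kB = math.log(omega_after) - math.log(1)
```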
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, so all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. ILcheck.py was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. Since &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, a computer analysing &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second would take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate the whole system, which is longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
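The arithmetic can be reproduced in a couple of lines (the analysis rate of one billion configurations per second is the generous figure assumed above):&lt;br /&gt;

```python
# Runtime estimate for enumerating every configuration of 100 spins.
n_configs = 2 ** 100          # two states per spin, 100 spins
rate = 1e9                    # configurations analysed per second (assumed)
seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)  # roughly 4e13 years
```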
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
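The acceptance rule inside montecarlostep() can be sketched in isolation. This stand-alone helper is not part of the class and its names are hypothetical; it simply makes the Metropolis criterion explicit:&lt;br /&gt;

```python
import math

# Metropolis acceptance rule in reduced units (J = k_B = 1): a trial flip
# that lowers the energy is always kept; otherwise it is kept only when a
# uniform random number falls below the Boltzmann factor exp(-deltaE/T).
def accept(deltaE, T, random_number):
    if deltaE <= 0:
        return True  # energy-lowering (or neutral) flips always accepted
    return random_number < math.exp(-deltaE / T)
```

This is equivalent to the rejection condition used above (revert the spin when deltaE is positive and the random number exceeds the Boltzmann factor).&lt;br /&gt;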
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This shows, as I expected, that spontaneous magnetisation occurs, and also that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2 s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with its vertical neighbour (each bond counted once)&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the array of spin products&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #negates the remaining sum to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
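A quick consistency check (a sketch, not part of the report&#039;s scripts) confirms that the vectorised roll/multiply energy reproduces the original double-loop energy on a random lattice:&lt;br /&gt;

```python
import numpy as np

# Compare the np.roll/np.multiply energy with the explicit double loop.
# Negative Python indices give the same periodic wrapping as np.roll.
rng = np.random.default_rng(42)
lat = rng.choice(np.array([-1, 1]), size=(8, 8))

def energy_loop(lattice):
    e = 0
    rows, cols = lattice.shape
    for i in range(rows):
        for j in range(cols):
            e += lattice[i, j] * lattice[i, j - 1]  # left neighbour
            e += lattice[i, j] * lattice[i - 1, j]  # neighbour above
    return -e

def energy_fast(lattice):
    left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
    top = np.multiply(np.roll(lattice, 1, axis=0), lattice)
    return -int(np.sum(left + top))
```

For the all-up ground state of an 8x8 lattice, both give the expected minimum energy of -2NJ = -128 (in units of J).&lt;br /&gt;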
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt;&lt;br /&gt;
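The repeat-and-average procedure used for these timings can be sketched generically with the standard timeit module; the workload timed here is a cheap stand-in for the full 2000-step ILtimetrial.py run, not the script itself:&lt;br /&gt;

```python
import timeit
import numpy as np

# Time a stand-in workload several times and report mean +/- standard error,
# mirroring the repeat-and-average procedure used for the figures above.
lat = np.where(np.random.random((32, 32)) < 0.5, -1, 1)

def energy_fast():
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

times = timeit.repeat(energy_fast, number=2000, repeat=3)  # 3 repeat runs
mean_t = np.mean(times)
err_t = np.std(times) / np.sqrt(len(times))  # standard error of the mean
```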
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures, the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
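The effect of the Boltzmann factor can be sketched directly: in the Metropolis scheme an energetically unfavourable flip is accepted with probability exp(-deltaE/T) (with k_B = 1 in reduced units). The value of deltaE below is an illustrative choice, not taken from a particular run:&lt;br /&gt;

```python
import numpy as np

def acceptance_probability(deltaE, T):
    """Metropolis acceptance probability for a proposed spin flip (k_B = 1)."""
    if deltaE <= 0:
        return 1.0  # energetically favourable flips are always accepted
    return float(np.exp(-deltaE / T))

# deltaE = 8 corresponds to flipping one spin in a fully aligned
# 2D lattice with J = 1 (illustrative choice)
for T in (1.0, 2.0, 3.0, 5.0):
    print(T, acceptance_probability(8.0, T))
# the probability rises from ~3e-4 at T=1 to ~0.2 at T=5, which is
# why high-temperature runs drift away from the ground state so easily
```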
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps: by this point the energy and magnetisation have converged for T=1, and the initial large drop in energy for T=2 is complete, even though a few small fluctuations remain after 200 steps. The T=3 result is included to show the large fluctuations at the higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has also been completed for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation have largely converged for T=1 and change little afterwards; the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, although not quite as fully as at 100000 steps. The slightly lower value was chosen to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values from cycles beyond the pre-determined cut-off are appended to the lists of energy, energy squared, magnetisation and magnetisation squared, from which the averages are calculated. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only appends values to the E, E2, M and M2 lists if the cycle count is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
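A minimal sketch of how the standard deviation for the error bars can be recovered from the recorded averages of E and E squared (my own toy data, not the lab script itself):&lt;br /&gt;

```python
import numpy as np

def std_from_moments(mean_E, mean_E2):
    # Var[E] = <E^2> - <E>^2; clip tiny negative values from rounding
    variance = np.asarray(mean_E2) - np.asarray(mean_E) ** 2
    return np.sqrt(np.clip(variance, 0.0, None))

# quick check against NumPy's population standard deviation
samples = np.array([-128.0, -124.0, -120.0, -116.0])  # synthetic energies
sigma = std_from_moments(samples.mean(), (samples ** 2).mean())
print(sigma, samples.std())  # both give sqrt(20)
```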
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph in &#039;&#039;Figure 12&#039;&#039; above, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Differentiating the expression for &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;\beta&amp;lt;/math&amp;gt; using the product ruleː &amp;lt;math&amp;gt;-\frac{\partial \langle E \rangle}{\partial \beta} = \frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = \langle E^2 \rangle - \langle E \rangle^2 = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, applying the chain rule with &amp;lt;math&amp;gt;\frac{\partial \beta}{\partial T} = -\frac{1}{k_B T^2}&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
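The result can be checked numerically on a toy system: for a two-level system with energies 0 and eps (with k_B = 1 in reduced units), the numerical derivative of the average energy with respect to T should match Var[E]/T^2. This is my own illustrative check, not part of the lab scripts:&lt;br /&gt;

```python
import numpy as np

eps = 1.0  # energy gap of the two-level system (k_B = 1)

def average_E(T):
    # <E> = eps * exp(-eps/T) / (1 + exp(-eps/T))
    b = np.exp(-eps / T)
    return eps * b / (1.0 + b)

def var_E(T):
    # Var[E] = <E^2> - <E>^2, with <E^2> = eps^2 * p(excited)
    p = np.exp(-eps / T) / (1.0 + np.exp(-eps / T))
    return eps ** 2 * p - (eps * p) ** 2

T, h = 2.0, 1e-5
C_derivative = (average_E(T + h) - average_E(T - h)) / (2 * h)  # dE/dT
C_variance = var_E(T) / T ** 2                                  # Var[E]/T^2
print(C_derivative, C_variance)  # the two values agree
```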
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend in the graphs above is that the peak shifts towards lower temperatures as the matrix size increases, meaning the estimated Curie Temperature decreases with increasing matrix size.&lt;br /&gt;
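A crude way to read off the peak position is to take the temperature of the maximum sampled heat capacity; a minimal sketch on synthetic data (the polynomial fit used in the next section is less sensitive to noise):&lt;br /&gt;

```python
import numpy as np

# synthetic heat-capacity curve with a peak near T = 2.3 (illustrative only)
temps = np.linspace(0.5, 5.0, 226)  # step of 0.02, as in the simulations
heat_cap = np.exp(-((temps - 2.3) ** 2) / 0.05)

# estimate of the Curie Temperature: the temperature of the maximum in C
T_peak = temps[np.argmax(heat_cap)]
print(T_peak)
```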
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code used to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of the heat capacity against temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and in particular fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15 to 2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new fit is significantly better even at only third degree: it represents the data around the peak much more accurately, which will make it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt;, where L is the lattice side length, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, obtained by finding the temperature at which the heat capacity is a maximum for each lattice, and the red line is a linear fit whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of Curie Temperature against 1/Lattice Size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117-149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually stop at a slightly lower temperature than my fit predicts. However, the difference is only 0.008, which is remarkably good agreement and suggests that the error in my estimates of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smallest lattices, 2x2 and 4x4, where the long-range interactions imposed by the periodic boundary conditions are most significant; these interactions are far less significant for the larger sizes. This makes the energies, and hence the estimated Curie Temperatures, of the smaller matrices less accurate. To improve the accuracy of the line of best fit, larger lattices such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;ː&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796555</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796555"/>
		<updated>2019-11-20T10:15:17Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an e...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt; lattice sites with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically, the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be expanded as the sum of the individual interaction energies between pairs of spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because periodic boundary conditions are applied, so the ends of the chain wrap around to neighbour each other.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice; this double counting is why the total is halved by the prefactor of &amp;lt;math&amp;gt;-\frac{1}{2}&amp;lt;/math&amp;gt;. The sum therefore reduces to: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2} \times 6J = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt; lattice sites.&lt;br /&gt;
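This result can also be checked by brute force for the three-site periodic chain (a minimal sketch, taking J = 1):&lt;br /&gt;

```python
# Brute-force check of E = -(1/2) J * sum_i sum_{j in neighbours(i)} s_i s_j
# for the periodic 1D chain [+1, +1, +1] with J = 1.
s = [1, 1, 1]
N = len(s)
# each site i has neighbours (i-1) and (i+1), with wrap-around
E = -0.5 * sum(s[i] * s[(i - 1) % N] + s[i] * s[(i + 1) % N] for i in range(N))
print(E)   # -3.0, i.e. -D*N*J with D = 1, N = 3
```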
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy configuration all &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt;, so in this case &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
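A minimal sketch of the multiplicity and entropy calculation, in units of k_B (N = 1000 is used here to match the next task):&lt;br /&gt;

```python
from math import factorial, log

def entropy(n_up, n_down):
    # S / k_B = ln(Omega), with Omega = (n_up + n_down)! / (n_up! * n_down!)
    omega = factorial(n_up + n_down) // (factorial(n_up) * factorial(n_down))
    return log(omega)

print(entropy(1000, 0))   # all spins parallel: Omega = 1, so S = 0.0
```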
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice, each site can be assigned three unique interactions with the neighbours to its left, top and front, so that no bond is counted twice. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;; for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is therefore &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; nearest neighbours reverses sign, so each of these interactions changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;\Delta E = 2 \times 6 \times J = +12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
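Both the energy change and the entropy gain can be verified numerically on a 10x10x10 spin array (a minimal sketch, assuming J = 1 and the same periodic boundary conditions as above):&lt;br /&gt;

```python
import numpy as np

def energy(lat):
    # one roll per axis counts each nearest-neighbour bond exactly once
    return -sum(np.sum(lat * np.roll(lat, 1, axis=a)) for a in range(lat.ndim))

lat = np.ones((10, 10, 10), dtype=int)   # D = 3, N = 1000, all spins up
E0 = energy(lat)                          # -D*N*J = -3000
lat[0, 0, 0] = -1                         # flip a single spin
dE = energy(lat) - E0                     # six bonds change from -J to +J
dS = np.log(1000)                         # S/k_B gained: ln(1000) - ln(1)
print(E0, dE, round(dS, 2))               # -3000 12 6.91
```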
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; as well.&lt;br /&gt;
&lt;br /&gt;
According to the Third Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, which means all spins must be parallel, so the magnetisation is &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. Thus, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt; at absolute zero, &amp;lt;math&amp;gt;M = N = 1000&amp;lt;/math&amp;gt;, the multiplicity is &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy is &amp;lt;math&amp;gt;S =k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. ILcheck.py was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; to analyse the whole system, which is longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
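The arithmetic behind this estimate can be sketched as follows (the figure of roughly 4.4e17 s for the age of the universe is my assumption, not stated above):&lt;br /&gt;

```python
n_configs = 2**100                  # two states for each of 100 spins
rate = 1e9                          # configurations analysed per second
seconds = n_configs / rate          # ~1.27e21 s to visit every configuration
age_of_universe = 4.4e17            # s, roughly 13.8 billion years (assumed)
print(seconds / age_of_universe)    # thousands of universe lifetimes
```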
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend towards its lowest energy state, in which all of the spins are parallel; this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms that, as expected, spontaneous magnetisation occurs, and indicates that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
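The reported average and error come from repeat timings; a sketch of a mean and standard error calculation (the timing values below are hypothetical, not my measurements):&lt;br /&gt;

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])           # hypothetical repeat timings / s
mean = times.mean()
sem = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
print(f"{mean:.1f} +/- {sem:.1f} s")
```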
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the left and top spin products over the lattice&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #negates the total to give the energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
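To confirm that the vectorised version reproduces the original double loop, the two energy implementations can be compared on a random lattice (a sketch using standalone functions rather than the class methods):&lt;br /&gt;

```python
import numpy as np

def energy_loop(lat):
    # original double loop; negative indices give the periodic boundaries
    total = 0
    n_rows, n_cols = lat.shape
    for i in range(n_rows):
        for j in range(n_cols):
            total += lat[i][j] * lat[i][j - 1]   # spin to the left
            total += lat[i][j] * lat[i - 1][j]   # spin above
    return -total

def energy_roll(lat):
    # vectorised: one roll per direction counts each bond exactly once
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loop(lat) == energy_roll(lat))   # True
```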
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the results of running the ILtimetrial.py file on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, which uses the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
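Taken together, the two averages imply roughly a thirty-fold speed-up:&lt;br /&gt;

```python
slow = 24.3             # average time of the double-loop version / s
fast = 0.790            # average time of the vectorised version / s
speedup = slow / fast
print(round(speedup))   # ~31x faster after vectorisation
```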
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, for the larger matrices a suitable cut-off point will be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps, as for T=1 the energy and magnetisation have largely converged and change little afterwards, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 11&#039;&#039;, a cut-off of 50000 steps was chosen, as the energy and magnetisation have largely converged by this point, though not quite as fully as they have by 100000 steps. I chose this slightly lower value to keep the run times of my Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is for the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
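The effect of discarding the equilibration period on the averages can be illustrated with a synthetic energy trace (a sketch; the drift towards -2.0 per spin and the noise level are hypothetical):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
n_cutoff = 100
# synthetic energy-per-spin trace: a drift towards -2.0 (equilibration)
# followed by small fluctuations about -2.0 (equilibrium)
drift = np.linspace(-1.0, -2.0, n_cutoff)
equil = -2.0 + 0.01 * rng.standard_normal(900)
trace = np.concatenate([drift, equil])

biased = trace.mean()               # includes the equilibration drift
unbiased = trace[n_cutoff:].mean()  # drops the first n_cutoff samples
print(abs(biased + 2.0) > abs(unbiased + 2.0))   # True: cut-off reduces bias
```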
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the ILtemperaturerange.py file was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
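For reference, the standard deviation columns added to ILtemperaturerange.py follow directly from the recorded averages, since the variance is the mean of the squares minus the square of the mean (a sketch with hypothetical sample values):&lt;br /&gt;

```python
import numpy as np

# standard deviation from the recorded averages: Var[E] = mean(E^2) - mean(E)^2
E = np.array([-118.0, -120.0, -122.0])   # hypothetical post-cutoff energies
E_avg = E.mean()
E2_avg = (E**2).mean()
std_E = np.sqrt(E2_avg - E_avg**2)
print(np.isclose(std_E, E.std()))   # True: matches NumPy's population std
```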
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that you produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph in Figure 12, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the quotient ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
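This result can be checked numerically (a quick sketch, not part of the experiment scripts): for a two-level system, the fluctuation formula should agree with a centred finite-difference derivative of the average energy with respect to temperature.&lt;br /&gt;

```python
import numpy as np

# Two-level system with energies 0 and eps (reduced units, kB = 1 assumed):
# the fluctuation formula Var[E]/(kB*T^2) should match dE/dT computed numerically.
def moments(T, eps=1.0, kB=1.0):
    levels = np.array([0.0, eps])
    p = np.exp(-levels / (kB * T))
    p /= p.sum()                      # Boltzmann probabilities
    return (p * levels).sum(), (p * levels**2).sum()

T, kB, h = 1.0, 1.0, 1e-5
E_avg, E2_avg = moments(T)
C_fluct = (E2_avg - E_avg**2) / (kB * T**2)                  # fluctuation formula
C_diff = (moments(T + h)[0] - moments(T - h)[0]) / (2 * h)   # numerical dE/dT
```

The two values agree to well within the finite-difference error.&lt;br /&gt;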
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
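A quick way to sanity-check this function (a sketch with made-up numbers, repeating heatCap so the snippet is self-contained): the quantity energysq minus energies squared must equal the variance NumPy computes directly from the underlying samples.&lt;br /&gt;

```python
import numpy as np

def heatCap(energies, energysq, T, latsize):
    # Repeats the function above so this check is self-contained
    energiesq = np.multiply(energies, energies)
    varE = np.subtract(energysq, energiesq)
    tempsq = np.multiply(T, T)
    return np.array(np.divide(varE, tempsq)) / (latsize**2)

# Made-up energy samples at a single temperature (not real simulation output)
rng = np.random.default_rng(0)
samples = rng.normal(loc=-1.8, scale=0.1, size=10000)
E_avg = np.array([samples.mean()])
E2_avg = np.array([(samples**2).mean()])
T = np.array([2.0])
C = heatCap(E_avg, E2_avg, T, 8)
# <E^2> - <E>^2 is exactly the population variance that np.var computes
expected = samples.var() / T**2 / 8**2
```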
&lt;br /&gt;
A general trend in the graphs above is that the peak shifts towards lower temperature as the lattice size increases, i.e. the estimated Curie Temperature decreases with increasing lattice size.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix size.&lt;br /&gt;
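For reference, here is a minimal sketch of reading one of the six-column files. A tiny fake 16x16C.dat is written first so the snippet runs on its own; in the notebook the real file produced by the C++ program is read instead.&lt;br /&gt;

```python
import numpy as np

# Each C++ file has six columns: T, E, E^2, M, M^2, C (the last five per spin).
# Write a tiny fake "16x16C.dat" so this sketch is self-contained.
fake = np.column_stack([np.linspace(0.5, 5.0, 10)] + [np.zeros(10)] * 5)
np.savetxt("16x16C.dat", fake)

data = np.loadtxt("16x16C.dat")
T, E, E2, M, M2, C = data.T  # transpose so the six columns unpack one per variable
```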
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and in particular fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix, with a polynomial of significantly lower degree fitted over a much more restricted range of temperatures]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 16&#039;&#039;, the new fit is significantly better even at 3rd degree: it represents my data around the peak much more accurately, which will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, i.e. the temperature at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of Curie Temperature against 1/Lattice Size for each lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference is only 0.008, which is incredibly small, and such close agreement is somewhat surprising; it implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 18&#039;&#039; correspond to the smallest lattices, 2x2 and 4x4, where the effects of the periodic boundary conditions are most significant: the spurious long-range correlations they introduce distort the energies of the small lattices, giving a larger error in their energies and hence in their estimated Curie Temperatures. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes of 128x128, 256x256 etc. should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 18&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796554</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796554"/>
		<updated>2019-11-20T10:15:00Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: T, E, E^2, M, M^2, C (the final...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \, N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, and the all-parallel ground state of a 100-spin lattice has only one configuration, so &amp;lt;math&amp;gt;S =  k_B ln(\frac{100!}{100! \, 0!}) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing soʔ===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions, with its neighbours to the left, top and front. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its six neighbours&#039; spins changes sign, so six pair energies go from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, an increase of &amp;lt;math&amp;gt;2J&amp;lt;/math&amp;gt; each. The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \, 0!} = 1&amp;lt;/math&amp;gt;, and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
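The multiplicities and the entropy change above can be verified with a few lines of Python (a sketch, working in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt; so that &amp;lt;math&amp;gt;S = ln(\Omega)&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import math

# Check of the numbers above (in units of kB, S = ln(Omega))
omega_before = math.comb(1000, 0)  # all 1000 spins up: one configuration
omega_after = math.comb(1000, 1)   # exactly one spin flipped: 1000 configurations
delta_S = math.log(omega_after) - math.log(omega_before)  # ~ 6.91 kB
```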
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single configuration, i.e. multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;, which means all spins must be parallel and the magnetisation has its maximum magnitude, &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
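As an aside (a sketch, not the submitted function): the double loop above simply sums every spin, so once the lattice is stored as a NumPy array the same magnetisation is a single call.&lt;br /&gt;

```python
import numpy as np

# Vectorised equivalent of magnetisation(): sum every spin in one call.
lattice = np.random.default_rng(1).choice([-1, 1], size=(8, 8))  # example lattice
M = int(np.sum(lattice))
```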
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
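Efficiency is addressed later in the experiment, but for reference here is a sketch of a vectorised equivalent (assuming &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; as in the task): np.roll shifts the array with periodic wrap-around, reproducing the left/top neighbour pairing that the loops achieve via negative indexing.&lt;br /&gt;

```python
import numpy as np

# Vectorised energy sketch: lat * np.roll(lat, 1, axis=1) pairs each spin with its
# left neighbour (with periodic wrap), exactly as lat[i][j]*lat[i][j-1] does above.
def energy_fast(lat):
    left = lat * np.roll(lat, 1, axis=1)  # spin times its left neighbour
    top = lat * np.roll(lat, 1, axis=0)   # spin times the spin above it
    return -(left.sum() + top.sum())

lattice = np.random.default_rng(2).choice([-1, 1], size=(6, 6))  # example lattice
E = int(energy_fast(lattice))
```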
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations of the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; (roughly &amp;lt;math&amp;gt;4\times 10^{13}&amp;lt;/math&amp;gt; years, about 3000 times the age of the universe) to analyse the whole system, which is therefore not a practical approach.&lt;br /&gt;
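The arithmetic behind this estimate (a rough, order-of-magnitude sketch):&lt;br /&gt;

```python
# Order-of-magnitude arithmetic behind the estimate above
n_configs = 2**100                          # configurations of 100 two-state spins
seconds = n_configs / 1e9                   # at 10^9 configurations per second
years = seconds / (60 * 60 * 24 * 365.25)   # convert seconds to years
```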
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# calculates the averages of E, E*E (E2), M, M*M (M2) and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As expected, spontaneous magnetisation occurs, which confirms that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
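The quoted average and error were obtained as the mean and standard error of the repeated timings; a minimal sketch (the timing values here are illustrative placeholders, not my measured runs):&lt;br /&gt;

```python
import numpy as np

# Mean and standard error of repeated timings (illustrative values,
# not the actual run times shown in Figure 5).
times = np.array([24.1, 24.3, 24.5])               # seconds, hypothetical
mean = times.mean()
std_err = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
print(f"{mean:.1f} s +/- {std_err:.1f} s")
```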
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		energy = -np.sum(left + top) #sums every nearest-neighbour pair product (each pair counted once) and negates&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return np.sum(self.lattice) #adds up all spins in the lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
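A quick way to check the vectorised energy() is against configurations whose energy is known analytically: an all-up periodic 2D lattice should give &amp;lt;math&amp;gt;E = -2N&amp;lt;/math&amp;gt; (with J = 1), and a checkerboard should give &amp;lt;math&amp;gt;E = +2N&amp;lt;/math&amp;gt;. A minimal sketch of the same roll/multiply approach, written as a standalone function:&lt;br /&gt;

```python
import numpy as np

def lattice_energy(lattice):
    """Total energy of a periodic 2D Ising lattice (J = 1).

    The two rolls count each nearest-neighbour pair exactly once.
    """
    left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
    top = np.multiply(np.roll(lattice, -1, axis=0), lattice)
    return -np.sum(left + top)

all_up = np.ones((8, 8))                                     # minimum-energy state
checkerboard = 1 - 2 * (np.indices((8, 8)).sum(axis=0) % 2)  # maximum-energy state
print(lattice_energy(all_up), lattice_energy(checkerboard))  # -2N and +2N, N = 64
```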
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, probably because these temperatures are above the Curie Temperature: spontaneous magnetisation does not occur and the system does not converge to the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point for the larger matrices will therefore be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps: this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has been overcome for T=2 as well.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: for both the T=1 and T=2 frames the energy and magnetisation have largely converged by this point and change little afterwards.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
As &#039;&#039;Figure 11&#039;&#039; shows, a suitable cut-off is 50000 steps: by this point the energy and magnetisation have largely converged, although not quite as fully as at 100000 steps. I chose the slightly lower value to keep the run times of my Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that values are only appended to the E, E2, M and M2 lists once the pre-determined cut-off has been passed; the averages returned by the statistics() function then automatically exclude the equilibration period, so statistics() itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# performs a single Monte Carlo step using the Metropolis acceptance rule&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only appends values to the E, E2, M and M2 lists if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of 0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
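The standard-deviation columns follow from quantities the simulation already records, since &amp;lt;math&amp;gt;\sigma_E = \sqrt{\langle E^2 \rangle - \langle E \rangle^2}&amp;lt;/math&amp;gt;. A minimal sketch with illustrative numbers (not my simulation output):&lt;br /&gt;

```python
import numpy as np

# Standard deviation of the energy from the recorded averages of E and E^2
# (the sample data below are illustrative placeholders, not simulation output).
E_samples = np.array([-120.0, -118.0, -122.0, -119.0])
e_avg = np.mean(E_samples)           # average of E
e2_avg = np.mean(E_samples**2)       # average of E^2
std_e = np.sqrt(e2_avg - e_avg**2)   # sqrt(Var[E])
print(std_e)
```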
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph in &#039;&#039;Figure 12&#039;&#039; above, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the chain ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
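The identity can be checked numerically on a simple two-level system, where both &amp;lt;math&amp;gt;\frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt; and a direct numerical derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; are easy to evaluate (a minimal sketch, working in units where &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import numpy as np

def averages(T, eps=np.array([0.0, 1.0])):
    """Average E and average E^2 for a two-level system (energies in units of k_B)."""
    boltz = np.exp(-eps / T)
    p = boltz / boltz.sum()          # Boltzmann probabilities of the two levels
    return np.sum(p * eps), np.sum(p * eps**2)

T = 1.5
e_avg, e2_avg = averages(T)
C_var = (e2_avg - e_avg**2) / T**2   # heat capacity from the variance formula

dT = 1e-5                            # heat capacity from a central-difference dE/dT
C_num = (averages(T + dT)[0] - averages(T - dT)[0]) / (2 * dT)
print(C_var, C_num)                  # the two values agree
```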
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code used to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #variance of the energy, Var[E]&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak shifts towards lower temperatures as the matrix size increases, meaning that the estimated Curie Temperature decreases with increasing matrix size.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code used to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for the fit shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and of much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix, with a polynomial fitted over a much more restricted range of temperatures and of a significantly lower degree]]&lt;br /&gt;
&lt;br /&gt;
Compared with the earlier unrestricted fit, the new polynomial is a significantly better fit even at 3rd degree; it represents my data around the peak much more accurately and will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the raw data, i.e. the temperature at which the Heat Capacity was a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value for &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is very small, and the level of agreement is somewhat surprising; it implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the long-range interactions introduced by the periodic boundary conditions are most significant. These interactions are far less significant for the larger sizes, so the energies of the smaller matrices, and hence their estimated Curie Temperatures, carry a larger error. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
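One fragile detail in the snippet above is locating Tmax with an exact floating-point equality test (fitted_C_values64 == Cmax64x64); np.argmax finds the same point more robustly. Below is a minimal self-contained sketch of this alternative, using an illustrative parabolic curve in place of the real np.polyval output:

```python
import numpy as np

# illustrative fitted curve; in the report these values come from np.polyval
peak_T_range = np.linspace(2.1, 2.5, 1000)
fitted_C = -(peak_T_range - 2.3) ** 2 + 1.8      # parabola peaking at T = 2.3

# argmax returns a single index, so no boolean mask / equality test is needed
T_at_Cmax = peak_T_range[np.argmax(fitted_C)]
```

This avoids any risk of the mask being all-False due to rounding in the stored maximum.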
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796551</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796551"/>
		<updated>2019-11-20T10:14:29Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, \mathrm{V...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
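The E = -DNJ result can also be checked numerically. The following is a small sketch of my own (not part of the lab scripts) that evaluates the double-counting formula for an all-up 1D ring with J = 1, using np.roll to apply the periodic boundary conditions:

```python
import numpy as np

def lowest_energy_1d(n_sites, J=1.0):
    """Energy of an all-up 1D Ising ring via the double-counting formula."""
    spins = np.ones(n_sites)                      # [+1, +1, +1, ...]
    # each site interacts with its left and right neighbour (periodic BCs),
    # so summing s_i * s_j in both directions counts every pair twice
    pair_sum = np.sum(spins * np.roll(spins, 1)) + np.sum(spins * np.roll(spins, -1))
    return -0.5 * J * pair_sum

E = lowest_energy_1d(3)   # D=1, N=3  ->  E = -DNJ = -3J
```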
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. For the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{3!}{3! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B \ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing soʔ===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbouring spins reverses sign, which increases the total energy of the system. In a 3D lattice the flipped spin participates in 6 spin-spin interactions, each of which changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
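These figures are easy to verify numerically. The sketch below (my own check, not part of the marked script) evaluates the multiplicities and the entropy change directly, working in units of k_B:

```python
from math import factorial, log

# multiplicity before the flip: all 1000 spins up
omega_before = factorial(1000) // factorial(1000)                  # = 1
# multiplicity after the flip: 999 up, 1 down
omega_after = factorial(1000) // (factorial(999) * factorial(1))   # = 1000

delta_S = log(omega_after) - log(omega_before)   # entropy change in units of k_B
```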
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently it is expected that the lattices will follow suit and have zero entropy at 0 K. To have zero entropy there must be only one possible configuration, so all spins must be parallel and the magnetisation is &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S =k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
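The magnetisation sums can be sketched in NumPy. The lattices below are illustrative stand-ins with the same net spin as those in Figure 1 (3 up / 2 down in 1D, 13 up / 12 down in 2D); the exact spin arrangements in the figure differ, but only the sum matters for M:

```python
import numpy as np

# illustrative lattices with the same net spin as those in Figure 1
lattice_1d = np.array([+1, -1, +1, -1, +1])                         # N = 5
lattice_2d = np.where(np.arange(25).reshape(5, 5) % 2 == 0, 1, -1)  # 13 up, 12 down

M_1d = lattice_1d.sum()   # M = +1
M_2d = lattice_2d.sum()   # M = +1
```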
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the left and top spin product lists&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; to analyse the whole system. This is far longer than the age of the universe, so this approach is not practical.&lt;br /&gt;
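The arithmetic can be reproduced directly in Python, where integers are arbitrary precision:

```python
configs = 2 ** 100                      # configurations of 100 two-state spins
rate = 1e9                              # configurations analysed per second
seconds = configs / rate                # ~1.27e21 s
years = seconds / (60 * 60 * 24 * 365)  # ~4e13 years, vs a ~1.4e10 year old universe
```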
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms, as I expected, that spontaneous magnetisation occurs, and shows that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
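The quoted error is the spread of the repeat timings. As a sketch of the calculation, using hypothetical repeat times chosen only for illustration (the real values are those shown in Figure 5):

```python
import numpy as np

# hypothetical repeat timings in seconds, for illustration only
times = np.array([24.1, 24.3, 24.5])

mean = times.mean()
sem = times.std(ddof=1) / np.sqrt(len(times))   # standard error of the mean
```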
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with the spin below it (the total is the same as summing over the spin above)&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
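To gain confidence that the vectorised version reproduces the original double loop, the two can be compared on a random lattice. The sketch below adapts my methods into standalone functions (an assumption for self-containment, not the class code itself):

```python
import numpy as np

def energy_loops(lat):
    """Original double-loop energy; negative indices give periodic BCs."""
    total = 0
    rows, cols = lat.shape
    for i in range(rows):
        for j in range(cols):
            total += lat[i][j] * lat[i][j - 1]   # left neighbour
            total += lat[i][j] * lat[i - 1][j]   # top neighbour
    return -total

def energy_rolls(lat):
    """Vectorised energy using np.roll and np.multiply."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)   # lat[i][j-1] * lat[i][j]
    top = np.multiply(np.roll(lat, 1, axis=0), lat)    # lat[i-1][j] * lat[i][j]
    return -int(np.sum(left + top))

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))   # random spin configuration to compare on
```

Both functions sum each bond exactly once, so they agree on any lattice, and on an all-up 2D lattice both give E = -DNJ.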
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the results of running ILtimetrial.py on my new accelerated code three times.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Upon using the NumPy roll, multiply and sum functions, the accelerated code is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;, roughly a 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py for 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so that spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point for the larger matrices will be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
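The temperature dependence of the Boltzmann factor can be made concrete. In a fully aligned 2D lattice, flipping one spin reverses four bonds, costing ΔE = 8 in reduced units (J = k_B = 1), and the acceptance probability exp(-ΔE/T) for that uphill move rises sharply with temperature:

```python
from math import exp

delta_E = 8.0   # cost of one flip in an aligned 2D lattice, reduced units (J = k_B = 1)

p_accept = {T: exp(-delta_E / T) for T in (1.0, 2.0, 3.0, 5.0)}
# at T=1 an uphill flip is accepted ~0.03% of the time; at T=5, ~20% of the time
```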
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at the higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1, and for T=2 the initial large drop in energy has also been overcome.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have converged significantly and will not change much beyond this point, and the same is true for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
As &#039;&#039;Figure 11&#039;&#039; shows, a suitable cut-off of 50000 steps was chosen, as by this point the energy and magnetisation have converged significantly, although not as completely as they would by 100000 steps. I chose the slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks were not excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when determining the averages of the energy, energy squared, magnetisation and magnetisation squared returned by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #reject the flip with probability 1 - exp(-deltaE/T)&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only add E, E2, M and M2 values to their lists if the step count is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
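The acceptance rule inside montecarlostep() can also be isolated and checked on its own. The function below is an illustrative helper rather than part of the lab template; it expresses the same Metropolis criterion in the reduced units used here (k_B = 1)ː&lt;br /&gt;

```python
from math import exp

def accept_flip(deltaE, T, random_number):
    """Metropolis criterion: a flip that lowers the energy is always accepted;
    otherwise it is accepted only when the supplied random number does not
    exceed the Boltzmann factor exp(-deltaE/T) (reduced units, kB = 1)."""
    if deltaE <= 0:
        return True
    return random_number <= exp(-deltaE / T)

# Downhill moves are always accepted; uphill moves are accepted more often
# at high temperature than at low temperature.
```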
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature; the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph above in Figure 12, with the relevant files and variable names changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, applying the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
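This identity can be verified numerically on a small example. The sketch below uses a hypothetical two-level system (not one of the lattices above) and compares &amp;lt;math&amp;gt;\frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt; with a central-difference estimate of &amp;lt;math&amp;gt;\frac{\partial \langle E \rangle}{\partial T}&amp;lt;/math&amp;gt;ː&lt;br /&gt;

```python
import numpy as np

def exact_E_and_C(levels, T, kB=1.0):
    """Exact <E> and C for a set of discrete energy levels at temperature T,
    with C computed as Var[E] / (kB * T**2)."""
    levels = np.asarray(levels, dtype=float)
    beta = 1.0 / (kB * T)
    weights = np.exp(-beta * levels)
    p = weights / weights.sum()            # Boltzmann probabilities
    E_avg = np.sum(p * levels)             # <E>
    E2_avg = np.sum(p * levels**2)         # <E^2>
    C = (E2_avg - E_avg**2) / (kB * T**2)  # Var[E] / (kB T^2)
    return E_avg, C

# Hypothetical two-level system: compare Var[E]/(kB T^2) with a
# central-difference estimate of d<E>/dT
levels = [0.0, 1.0]
T, dT = 1.5, 1e-5
_, C_var = exact_E_and_C(levels, T)
E_lo, _ = exact_E_and_C(levels, T - dT)
E_hi, _ = exact_E_and_C(levels, T + dT)
C_fd = (E_hi - E_lo) / (2 * dT)
```

The two estimates agree to well within the finite-difference error, as the derivation requires.&lt;br /&gt;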
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend across the above graphs is that the peak shifts towards lower temperatures as the matrix size increases; that is, the estimated Curie Temperature decreases with increasing lattice size.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed accordingly for each matrix.&lt;br /&gt;
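Alternatively, the per-matrix copy-and-edit can be replaced by a loop over the lattice sizes. The sketch below writes small synthetic stand-in files so that it runs on its own; in practice the real NxN.dat files saved by ILtemperaturerange.py would be loaded, and each array passed to plot() as beforeː&lt;br /&gt;

```python
import numpy as np

sizes = [2, 4, 8]  # lattice side lengths (extend with 16, 32 as needed)

# Synthetic stand-in files so this sketch runs on its own; in the report the
# real NxN.dat files saved by ILtemperaturerange.py would be loaded instead.
T = np.arange(0.5, 5.0, 0.5)
for L in sizes:
    E = -2.0 * L**2 * np.tanh(1.0 / T)  # placeholder curve, not real data
    np.savetxt(f"{L}x{L}.dat", np.column_stack([T, E]))

# One loop replaces the per-matrix copy-and-edit of the plotting script:
energy_per_spin = {}
for L in sizes:
    data = np.loadtxt(f"{L}x{L}.dat")        # columns: T, E, ...
    energy_per_spin[L] = data[:, 1] / L**2   # divide by N = L^2 spins
# each energy_per_spin[L] can then be passed to enerax.plot() as before
```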
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 15&#039;&#039; shows a plot of my Heat Capacity against Temperature data for a 16x16 matrix with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree: it represents the data around the peak much more accurately, which makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
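Rather than sampling the fitted polynomial on a fine grid of temperatures, its peak can also be located analytically using np.polyder and np.roots. The cubic coefficients below are made up for illustration and are not the actual fit16 valuesː&lt;br /&gt;

```python
import numpy as np

# Illustrative cubic with its maximum inside the fitting window; these are
# made-up coefficients, not the actual fit16 values (highest degree first).
fit = np.array([-2.0, 10.05, -14.1, 5.0])
Tmin, Tmax = 2.15, 2.55

dfit = np.polyder(fit)   # derivative polynomial (a quadratic)
roots = np.roots(dfit)   # stationary points of the cubic
# keep real stationary points that lie inside the fitting window
candidates = [r.real for r in roots if abs(r.imag) < 1e-9 and Tmin < r.real < Tmax]
Tc = max(candidates, key=lambda t: np.polyval(fit, t))  # the point of largest C
```

Applied to fit16 itself, this gives the peak temperature directly instead of reading it off a 1000-point linspace.&lt;br /&gt;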
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to these data; its y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate predicts. However, the difference between the two values is only 0.008, and such close agreement is somewhat surprising; it implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smallest lattice sizes, 2x2 and 4x4, where the long-range effects imposed by the periodic boundary conditions are most significant. These effects are far less significant for the larger lattices, so the energies of the smaller matrices carry a larger error, and with them the Curie Temperature estimates for those sizes. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;ː&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796549</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796549"/>
		<updated>2019-11-20T10:13:39Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* Section 6 - The effect of system size */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;; in the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;S =  k_B ln(\frac{N!}{N!}) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses sign. Although only three bonds are counted as belonging to each site in the bookkeeping above, a flipped spin takes part in &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; interactions, each of which changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
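The multiplicities and the entropy change (in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;) can be checked directly with Python's standard library:

```python
from math import comb, log

omega_before = comb(1000, 1000)  # 1000!/(1000! 0!) = 1, all spins up
omega_after = comb(1000, 999)    # 1000!/(999! 1!) = 1000, one spin down
dS = log(omega_after) - log(omega_before)  # ~6.91 in units of kB
```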
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; as well.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single configuration, which means all spins must be parallel, such that the magnetisation is &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
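The expected magnetisation at 0 K follows directly from the definition &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt; (a minimal sketch, assuming an all-spins-up 10x10x10 lattice):

```python
import numpy as np

lat = np.ones((10, 10, 10))  # D = 3, N = 1000, all spins +1 at 0 K
M = np.sum(lat)              # M = N = 1000 (or -1000 if all spins are -1)
```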
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #stores a reference to the lattice&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #stores a reference to the lattice&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt; possible configurations. If the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; to evaluate a single average over the whole ensemble, which is far longer than the age of the universe and is therefore not a practical approach.&lt;br /&gt;
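The arithmetic behind this estimate:

```python
n_configs = 2**100                      # configurations of 100 Ising spins
rate = 1e9                              # configurations analysed per second
seconds = n_configs / rate              # ~1.27e21 s
years = seconds / (365.25 * 24 * 3600)  # ~4e13 years, far beyond the
                                        # ~1.4e10-year age of the universe
```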
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected, else keeps the flip&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to the running step total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As expected, spontaneous magnetisation occurs, confirming that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the left and top spin products, counting each bond once&lt;br /&gt;
&lt;br /&gt;
		energy = -int_en #negates to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
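As a sanity check, the vectorised energy can be compared against the original double-loop version on a random lattice (a standalone sketch outside the class, with both functions mirroring the report's code): the negative indices of the loop and np.roll supply the same periodic boundaries, and both count each bond exactly once.

```python
import numpy as np

def energy_loop(lat):
    # Original approach: pair each spin with its left and top neighbours;
    # negative indices wrap around, giving periodic boundaries for free.
    E = 0
    n_rows, n_cols = lat.shape
    for i in range(n_rows):
        for j in range(n_cols):
            E -= lat[i][j] * lat[i][j - 1] + lat[i][j] * lat[i - 1][j]
    return E

def energy_fast(lat):
    # Vectorised approach: roll shifts the lattice so every element meets
    # a neighbour, and each bond appears in exactly one of the two products.
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
# energy_loop(lat) and energy_fast(lat) agree for any lattice
```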
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after replacing the double loop with the NumPy roll, multiply and sum functions, giving a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, suitable cut-off points for the larger matrices will therefore be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energy and magnetisation averages is 200 steps: this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting the choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, by which point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has passed for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: for both T=1 and T=2 the energy and magnetisation have largely converged by this point and change little afterwards.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
For the 32x32 matrix, a cut-off of 50000 steps was chosen: the energy and magnetisation have largely converged by this point, though not quite as fully as they would at 100000 steps. A slightly lower value was chosen to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only records E, E2, M and M2 once the step count exceeds the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of 0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph in Figure 12, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Differentiating &amp;lt;math&amp;gt;-\langle E \rangle = \frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt; with respect to &amp;lt;math&amp;gt;\beta&amp;lt;/math&amp;gt; using the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial \langle E \rangle}{\partial \beta} = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
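This result can be verified numerically for a simple two-level system (levels 0 and ε, chosen purely as an illustration, in reduced units with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;): the finite-difference derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; with respect to T should match &amp;lt;math&amp;gt;Var[E]/T^2&amp;lt;/math&amp;gt;.

```python
import numpy as np

eps = 1.0  # energy gap of a two-level system (levels 0 and eps)

def mean_E(T):
    boltz = np.exp(-eps / T)
    return eps * boltz / (1.0 + boltz)        # <E>

def var_E(T):
    boltz = np.exp(-eps / T)
    mean_E2 = eps**2 * boltz / (1.0 + boltz)  # <E^2>
    return mean_E2 - mean_E(T)**2             # Var[E] = <E^2> - <E>^2

T, h = 1.5, 1e-6
C_direct = (mean_E(T + h) - mean_E(T - h)) / (2 * h)  # C = d<E>/dT
C_var = var_E(T) / T**2                               # C = Var[E]/(kB T^2)
# C_direct and C_var agree to numerical precision
```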
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #returns the heat capacity per spin at each temperature, C = Var[E]/T^2 in reduced units (kB = 1)&lt;br /&gt;
    meansq=np.multiply(energies,energies) #squared mean energies, &amp;lt;E&amp;gt;^2&lt;br /&gt;
    varE=np.subtract(energysq,meansq) #variance Var[E] = &amp;lt;E^2&amp;gt; - &amp;lt;E&amp;gt;^2&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of squared temperatures&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2) #divides by N = latsize^2 to give heat capacity per spin&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend across the graphs is that the peak of the heat capacity curve shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
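The variance identity used in heatCap() can be sanity-checked directly; this is a minimal standalone sketch with synthetic energy samples, not data from the notebook:&lt;br /&gt;

```python
import numpy as np

# synthetic "energy" samples standing in for the recorded average energies
rng = np.random.default_rng(0)
E = rng.normal(loc=-1.8, scale=0.1, size=100000)

# Var[E] computed two ways: directly, and via Var[X] = mean(X^2) - mean(X)^2
var_direct = np.var(E)
var_identity = np.mean(E**2) - np.mean(E)**2

print(var_direct, var_identity)  # the two agree to floating-point accuracy
```

This identity is what allows the heat capacity to be computed from the running averages &amp;lt;math&amp;gt;\left\langle E\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle E^2\right\rangle&amp;lt;/math&amp;gt; alone.&lt;br /&gt;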
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own Python data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy per spin against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy per spin against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation per spin against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation per spin against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each lattice size.&lt;br /&gt;
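Reading one of the six-column .dat files follows the same pattern for every lattice size. A hedged sketch (the small file here is generated on the fly so the snippet is self-contained; the real files are the ones supplied with the experiment):&lt;br /&gt;

```python
import numpy as np

# write a tiny stand-in for one of the supplied .dat files;
# columns are T, E, E^2, M, M^2, C (the last five per spin)
rows = np.array([[1.0, -2.0, 4.0, 1.0, 1.0, 0.01],
                 [2.0, -1.5, 2.4, 0.9, 0.85, 0.80],
                 [3.0, -0.9, 1.0, 0.1, 0.05, 0.30]])
np.savetxt("example.dat", rows)

# load it back and unpack the columns, as done for each lattice size
data = np.loadtxt("example.dat")
temps, energies, heatcaps = data[:, 0], data[:, 1], data[:, 5]
print(temps, heatcaps)
```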
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 15&#039;&#039; shows my Heat Capacity against Temperature data for a 16x16 matrix together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and in particular fails to follow the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller temperature range (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix, with a polynomial of much lower degree fitted over a much more restricted temperature range]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new polynomial is a significantly better fit even at 3rd degree. It represents the data around the peak much more accurately, which will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
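Once a low-degree polynomial has been fitted around the peak, its maximum can also be located analytically from the derivative rather than by scanning a linspace. A standalone sketch with synthetic peaked data (the variable names are illustrative, not from the notebook):&lt;br /&gt;

```python
import numpy as np

# synthetic peaked data around T = 2.3, standing in for the restricted C(T) points
T = np.linspace(2.15, 2.55, 41)
C = 1.5 - 8.0*(T - 2.3)**2 + 0.5*(T - 2.3)**3

fit = np.polyfit(T, C, 3)           # cubic fit, as in the report
dfit = np.polyder(fit)              # coefficients of the derivative
crit = np.roots(dfit)               # critical points of the fitted cubic
crit = crit[np.isreal(crit)].real   # keep only the real roots

# keep critical points inside the fitted window, then take the one with largest C
inside = crit[(crit >= T.min()) * (T.max() >= crit)]
T_peak = inside[np.argmax(np.polyval(fit, inside))]
print(T_peak)
```

np.polyder and np.roots are standard NumPy routines, so this avoids the 1000-point grid used above at no extra cost.&lt;br /&gt;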
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{\text{Lattice Size}}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity was a maximum for each lattice size, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with the literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between the two values is only 0.008, and such close agreement is somewhat surprising; it implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smallest lattices (2x2 and 4x4), where the periodic boundary conditions make finite-size effects most significant. These effects are far less important for the larger lattices, so the energies, and hence the estimated Curie Temperatures, of the smallest matrices carry the largest errors. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes (128x128, 256x256, etc.) should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
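The literature value quoted above is Onsager&#039;s exact result for the infinite square lattice, &amp;lt;math&amp;gt;T_{C,\infty} = \frac{2}{\ln(1+\sqrt{2})} \frac{J}{k_B}&amp;lt;/math&amp;gt;, which can be evaluated directly:&lt;br /&gt;

```python
import math

# Onsager's exact Curie temperature for the infinite 2D square Ising lattice,
# in reduced units of J/k_B
T_c_exact = 2.0 / math.log(1.0 + math.sqrt(2.0))

# difference from the fitted estimate of 2.277 quoted in the report
print(round(T_c_exact, 3), round(2.277 - T_c_exact, 3))
```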
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file as two rows (lattice sizes, then Tmax values)&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796548</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796548"/>
		<updated>2019-11-20T10:13:12Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* Section 5 - The effect of temperature */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice; this is why the total energy is halved. The sum therefore reduces to: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
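This lowest-energy result can also be verified by brute force for the three-site ring, enumerating all &amp;lt;math&amp;gt;2^3&amp;lt;/math&amp;gt; spin configurations (a standalone check, not part of the lab scripts):&lt;br /&gt;

```python
import itertools

J = 1.0
N = 3  # sites on a 1D periodic ring

def ring_energy(spins):
    # -(1/2) J * double sum over each site and its two neighbours,
    # so every bond is counted twice and then halved
    total = 0
    for i in range(N):
        total += spins[i] * spins[(i - 1) % N]
        total += spins[i] * spins[(i + 1) % N]
    return -0.5 * J * total

energies = [ring_energy(s) for s in itertools.product((-1, 1), repeat=N)]
print(min(energies))  # minimum over all configurations is -DNJ = -3J
```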
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \, N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest-energy state all &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; spins point the same way, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \, 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt;, and so in this case &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice, each site has three unique interactions, with its neighbours to the left, top and front. In the lowest energy configuration all spins are parallel and the energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for a system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its six neighbours&#039; spins reverses sign, so each of those interactions changes energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt; (six bonds, each changing by &amp;lt;math&amp;gt;2J&amp;lt;/math&amp;gt;), meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
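The multiplicity and entropy change for the single flip can be checked numerically, working in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt; (a standalone sketch):&lt;br /&gt;

```python
import math

N = 1000

# multiplicity before the flip (all spins up) and after (exactly one spin down)
omega_before = math.comb(N, 0)   # = 1: only one fully aligned state
omega_after = math.comb(N, 1)    # = 1000 choices for the flipped spin

# entropy change in units of k_B
dS = math.log(omega_after) - math.log(omega_before)
print(omega_after, round(dS, 2))
```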
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. For the entropy to be zero there must be only one accessible configuration, which requires all spins to be parallel, giving a magnetisation of &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the expected magnetisation is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the left and top spin products into one list&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. Since &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, a computer analysing &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second would take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; to evaluate the whole sum, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
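The arithmetic behind this estimate, with the conversion to years made explicit (a standalone sketch):&lt;br /&gt;

```python
configs = 2**100                  # configurations of 100 two-state spins
rate = 1e9                        # generous: 10^9 configurations per second
seconds = configs / rate
years = seconds / (365.25 * 24 * 3600)
print("%.2e s, %.1e years" % (seconds, years))
```

At around &amp;lt;math&amp;gt;4\times 10^{13}&amp;lt;/math&amp;gt; years, this is roughly 3000 times the age of the universe.&lt;br /&gt;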
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis test; np.exp avoids relying on a separate import of e&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts the spin if the move is rejected, otherwise leaves it flipped&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to the cycle count&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
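The accept/reject rule inside montecarlostep() can be exercised on its own: an uphill move with energy cost &amp;lt;math&amp;gt;\Delta E&amp;lt;/math&amp;gt; should survive with probability &amp;lt;math&amp;gt;e^{-\Delta E/T}&amp;lt;/math&amp;gt;. A minimal standalone sketch of the same test, not the class method itself:&lt;br /&gt;

```python
import numpy as np

def metropolis_accept(deltaE, T, rng):
    # always accept downhill moves; accept uphill moves with Boltzmann probability
    if 0 >= deltaE:
        return True
    return np.exp(-deltaE / T) > rng.random()

rng = np.random.default_rng(1)
deltaE, T = 2.0, 1.0
trials = 200000
accepted = sum(metropolis_accept(deltaE, T, rng) for _ in range(trials))
frequency = accepted / trials
print(frequency, np.exp(-deltaE / T))  # empirical rate tracks exp(-deltaE/T)
```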
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This shows, as expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;(24.3 \pm 0.2) \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
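The repeat-and-average procedure can be sketched generically with perf_counter; the timed function below is a stand-in workload, not ILtimetrial.py itself:&lt;br /&gt;

```python
import time
import numpy as np

def work():
    # stand-in workload in place of the 2000 Monte Carlo steps
    a = np.random.random((200, 200))
    for _ in range(20):
        a = a @ a.T / 200.0

times = []
for _ in range(3):  # three repeats, as in the report
    start = time.perf_counter()
    work()
    times.append(time.perf_counter() - start)

mean = np.mean(times)
# standard error of the mean from the repeats
err = np.std(times, ddof=1) / np.sqrt(len(times))
print("%.3f s +/- %.3f s" % (mean, err))
```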
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;from numpy import sum, roll, multiply #assumed import; these names are used unqualified below&lt;br /&gt;
&lt;br /&gt;
def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the combined left and top spin products&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #negates the remaining sum to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
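As a quick sanity check (a sketch, separate from the class code above), the vectorised energy can be compared against a plain double loop with periodic boundaries on a small random lattice:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(4, 4))  # small random test lattice

# Vectorised energy: np.roll pairs each spin once with its left and top
# neighbour, so every bond is counted exactly once (J = 1).
left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
top = np.multiply(np.roll(lattice, -1, axis=0), lattice)
fast_energy = -np.sum(left + top)

# Reference double loop with periodic boundaries: each neighbour pair is
# counted twice, hence the factor of -1/2 from the original definition.
n_rows, n_cols = lattice.shape
slow_energy = 0.0
for i in range(n_rows):
    for j in range(n_cols):
        neighbours = (lattice[(i - 1) % n_rows, j] + lattice[(i + 1) % n_rows, j]
                      + lattice[i, (j - 1) % n_cols] + lattice[i, (j + 1) % n_cols])
        slow_energy += -0.5 * lattice[i, j] * neighbours

print(fast_energy, slow_energy)
```

The two methods give identical energies, confirming that counting each bond once with no factor of 1/2 matches the double-counted definition with the factor of 1/2.&lt;br /&gt;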
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the results of running the ILtimetrial.py file on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, which uses the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \ s \pm 0.005 \ s&amp;lt;/math&amp;gt;, a roughly 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature: spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
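The growing significance of the Boltzmann factor can be made concrete by evaluating the Metropolis acceptance probability for an uphill move at each temperature. The sketch below uses reduced units (&amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;) and &amp;lt;math&amp;gt;\Delta E = 8J&amp;lt;/math&amp;gt;, the cost of flipping one spin in a fully aligned 2D lattice:&lt;br /&gt;

```python
import numpy as np

# Flipping one spin out of a fully aligned 2D lattice costs delta E = 8J
# (4 bonds each go from -J to +J); J = 1, k_B = 1 in reduced units.
delta_E = 8.0
temperatures = [1.0, 2.0, 3.0, 5.0]

# Metropolis acceptance probability for this uphill move at each temperature
p_accept = [np.exp(-delta_E / T) for T in temperatures]

for T, p in zip(temperatures, p_accept):
    print(f"T = {T}: P(accept) = {p:.4f}")
```

The acceptance probability rises steeply with temperature, which is why the T=3 and T=5 runs keep escaping the aligned state.&lt;br /&gt;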
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps: this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from the T=1 and T=2 graphs only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has been overcome for T=2 as well.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: by then the energy and magnetisation have largely converged for T=1 and will not change much further, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not as fully as they would have by 100000 steps. I chose the slightly lower value so that the run times of my Monte Carlo simulations in later tasks would not become excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the arrays of E, E2, M and M2 if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
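The cut-off logic can equivalently be applied after the run by storing every sample and discarding the first N when averaging. A minimal sketch with a short hypothetical energy trace, using a cut-off of 5 as a stand-in for the 50000 used above:&lt;br /&gt;

```python
import numpy as np

cutoff = 5  # equilibration period to discard (50000 in the 32x32 runs above)

# Hypothetical energy trace: large at the start, equilibrated afterwards
energy_trace = np.array([0.0, -10.0, -20.0, -28.0, -31.0,
                         -32.0, -32.0, -31.0, -32.0, -32.0])

equilibrated = energy_trace[cutoff:]  # drop the first `cutoff` cycles
mean_E = np.mean(equilibrated)        # average over equilibrated samples only

print(mean_E)
```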
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to that used for the 8x8 graph above (&#039;&#039;Figure 12&#039;&#039;), with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product rule to &amp;lt;math&amp;gt;-\langle E \rangle = \frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt; givesː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, using the chain rule with &amp;lt;math&amp;gt;\frac{\partial \beta}{\partial T} = -\frac{1}{k_B T^2}&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
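This identity can be checked numerically on a simple system. The sketch below (a two-level system in reduced units with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;, separate from the Ising code) compares a finite-difference derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; with the fluctuation formula:&lt;br /&gt;

```python
import numpy as np

# Two-level system with energies 0 and eps, reduced units (k_B = 1).
eps = 1.0

def avg_E(T):
    """Canonical average energy <E> at temperature T."""
    w = np.exp(-eps / T)  # Boltzmann weight of the upper level
    return eps * w / (1.0 + w)

def var_E(T):
    """Energy variance <E^2> - <E>^2 at temperature T."""
    w = np.exp(-eps / T)
    avg_E2 = eps ** 2 * w / (1.0 + w)
    return avg_E2 - avg_E(T) ** 2

T, dT = 1.5, 1e-6
C_finite_diff = (avg_E(T + dT) - avg_E(T - dT)) / (2 * dT)  # C = d<E>/dT
C_fluctuation = var_E(T) / T ** 2                           # C = Var[E]/(k_B T^2)

print(C_finite_diff, C_fluctuation)
```

The two expressions agree to within the finite-difference error, as the derivation requires.&lt;br /&gt;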
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak in the heat capacity shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new fitted polynomial is a significantly better fit, even at only 3rd degree: it represents the data around the peak far more accurately, which will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
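An alternative to reading the maximum off a dense grid of np.polyval values is to differentiate the fitted polynomial and solve for its stationary points. A sketch with hypothetical fit coefficients (a parabola peaking at T = 2.3, standing in for a real fit object from np.polyfit):&lt;br /&gt;

```python
import numpy as np

# Hypothetical fit coefficients for C(T) = -(T - 2.3)^2 + 2.0,
# i.e. -T^2 + 4.6T - 3.29, peaking at T = 2.3
fit = np.array([-1.0, 4.6, -3.29])

deriv = np.polyder(fit)  # differentiate the fitted polynomial
roots = np.roots(deriv)  # stationary points of the fit
real_roots = roots[np.isreal(roots)].real

# Keep the stationary point with the largest fitted C value
T_peak = real_roots[np.argmax(np.polyval(fit, real_roots))]
C_peak = np.polyval(fit, T_peak)

print(T_peak, C_peak)
```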
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data (the temperature at which the Heat Capacity is a maximum for each lattice size), and the red line is a linear fit whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature: for an infinite lattice, spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between the two values is only 0.008, which is remarkably small; this level of agreement is somewhat surprising and suggests that the error in my estimates of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the long range interactions imposed by the periodic boundary conditions are most significant; these make the energies, and hence the Curie Temperatures, of the smaller matrices less accurate. To improve the accuracy of the line of best fit, larger lattice sizes such as 128x128 and 256x256 should be included and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
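The scaling fit itself can be illustrated on synthetic data obeying &amp;lt;math&amp;gt;T_{C,L} = T_{C,\infty} + \frac{A}{L}&amp;lt;/math&amp;gt;; the values below are assumed for illustration only, not my measured data:&lt;br /&gt;

```python
import numpy as np

# Synthetic illustration: T_C(L) = T_C,inf + A/L with assumed values
Tc_inf_true, A = 2.269, 1.0
L = np.array([2, 4, 8, 16, 32, 64])
Tc_L = Tc_inf_true + A / L

# Linear fit of T_C(L) against 1/L; the intercept estimates T_C,inf
slope, intercept = np.polyfit(1.0 / L, Tc_L, 1)

print(intercept)
```

On exact synthetic data the intercept recovers the infinite-lattice Curie Temperature; with real data the intercept carries the fitting error discussed above.&lt;br /&gt;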
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796546</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796546"/>
		<updated>2019-11-20T10:11:46Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in y...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically, the interaction energy is defined as:&lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \in \ \text{neighbours}(i)} s_{i} s_{j} &amp;lt;/math&amp;gt;, where J is a coupling constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product of two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \in \ \text{neighbours}(i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be expanded as the sum of the individual spin-spin termsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \in \ \text{neighbours}(i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied to the lattice.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice; this is why the total energy carries a factor of one half. The sum can therefore be written asː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \in \ \text{neighbours}(i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \in \ \text{neighbours}(i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites, &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt;. For the lowest-energy state all spins are aligned, so &amp;lt;math&amp;gt;S =  k_B \ln\left(\frac{N!}{N! \ 0!}\right) = k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice, each site can be assigned three unique interactions, with the neighbours to its left, top and front, so that every pair is counted once. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;; for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; nearest neighbours&#039; spins reverses sign, which increases the total energy of the system. Each of these six interactions changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, a change of &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt; per interaction, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
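The multiplicity and entropy change quoted above can be checked numerically. A quick sketch in Python, working in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import math

# Ground state: Omega = 1000!/(1000! 0!) = 1, so S = 0.
omega_before = 1
# One flipped spin: Omega = 1000!/(999! 1!), i.e. choosing which spin flips.
omega_after = math.comb(1000, 1)

delta_S = math.log(omega_after) - math.log(omega_before)  # in units of k_B
print(f"Omega after flip = {omega_after}, delta S = {delta_S:.2f} k_B")
```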
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the Third Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, which means all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;; with all spins parallel the multiplicity is &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy is &amp;lt;math&amp;gt;S =k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #local reference to the lattice array&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #local reference to the lattice array&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
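As a sanity check on the loop above, a standalone version of the same left/top neighbour sum (a sketch, not the class method itself) should reproduce &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;D=2&amp;lt;/math&amp;gt; for a fully aligned lattice:&lt;br /&gt;

```python
import numpy as np

def lattice_energy(lat, J=1.0):
    """Energy of a 2D spin lattice with periodic boundary conditions.

    Standalone version of the left/top neighbour loop above: each
    interaction pair is counted exactly once, so no factor of 1/2 is needed.
    """
    rows, cols = lat.shape
    total = 0
    for i in range(rows):
        for j in range(cols):
            total += lat[i][j] * lat[i][j - 1]  # left neighbour (index -1 wraps)
            total += lat[i][j] * lat[i - 1][j]  # top neighbour (index -1 wraps)
    return -J * total

# Sanity check: an all-up N x N lattice should give E = -DNJ with D = 2.
N = 4
all_up = np.ones((N, N), dtype=int)
print(lattice_energy(all_up))  # -2 * N * N * J = -32.0
```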
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations of the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate every configuration, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
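The arithmetic behind this estimate can be reproduced directly (taking the age of the universe as roughly 1.38e10 years):&lt;br /&gt;

```python
# Brute-force enumeration estimate for a 100-spin lattice.
configs = 2 ** 100
rate = 1e9  # configurations analysed per second (generous assumption)

seconds = configs / rate
years = seconds / (60 * 60 * 24 * 365.25)
universe_age = 1.38e10  # years, approximate

print(f"{seconds:.2e} s, i.e. {years:.2e} years")
print(f"about {years / universe_age:.0f} times the age of the universe")
```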
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
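The acceptance rule in montecarlostep() can be checked in isolation: for a positive &amp;lt;math&amp;gt;\Delta E&amp;lt;/math&amp;gt;, flips should be accepted with probability &amp;lt;math&amp;gt;e^{-\Delta E / T}&amp;lt;/math&amp;gt;. Below is a standalone sketch (not part of IsingLattice.py) that estimates the acceptance rate by repeated sampling:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(42)
deltaE, T = 4.0, 2.0  # illustrative values in reduced units
trials = 200000

# Same rejection test as in montecarlostep(): reject when the random
# number exceeds the Boltzmann factor.
rejected = np.sum(rng.random(trials) > np.exp(-deltaE / T))
acceptance = 1.0 - rejected / trials

print(f"measured acceptance {acceptance:.3f}, expected {np.exp(-deltaE / T):.3f}")
```

The measured fraction should converge on the Boltzmann factor as the number of trials grows.&lt;br /&gt;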
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms that, as I expected, spontaneous magnetisation occurs, and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with its left neighbour&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with its vertical neighbour, counted once&lt;br /&gt;
&lt;br /&gt;
		int_en=left+top #array holding the sum of the left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(sum(int_en)) #sums over the whole array and negates to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
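To confirm the vectorised routine is equivalent to the original double loop, both can be run as standalone functions on the same random lattice (an illustrative check, not part of the submitted script):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))  # random test lattice

def energy_loop(lat):
    # Explicit double loop: left and top neighbours, periodic via index -1.
    total = 0
    rows, cols = lat.shape
    for i in range(rows):
        for j in range(cols):
            total += lat[i][j] * lat[i][j - 1] + lat[i][j] * lat[i - 1][j]
    return -total

def energy_fast(lat):
    # Vectorised version: each roll pairs every spin with one neighbour,
    # so every interaction is counted exactly once.
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

print(energy_loop(lat), energy_fast(lat))  # the two values should match
```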
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the results of running the ILtimetrial.py file on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster: using the roll, multiply and sum functions gives a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;, a roughly 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off to exclude from the average energies and magnetisations is the point at which the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not converge to the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off will be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off for the energy and magnetisation averages is 200 steps: by this point the T=1 run has converged, and the initial large drop in energy for T=2 is complete, even though a few small fluctuations remain after 200 steps. The T=3 result has been included to show the large fluctuations at the higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy for T=2 is also complete.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off is 15000 steps, as by this point the energy and magnetisation for T=1 have largely converged and change little thereafter; the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, although not as completely as they would have by 100000 steps. I chose the slightly lower value to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
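The cut-offs above were all chosen by inspecting the plots. One possible way to automate the choice (a rough heuristic sketched below, not the method used in this report) is to take the first step after which the energy trace stays within a tolerance of its final mean; the trace used here is a hypothetical exponential decay, not simulation output:&lt;br /&gt;

```python
import numpy as np

def equilibration_step(energies, tail_fraction=0.5, tol=0.05):
    """Return the first index after which the trace stays near its final mean.

    The 'final mean' is taken over the last tail_fraction of the trace, and
    'near' means within tol times the magnitude of that mean.
    """
    energies = np.asarray(energies, dtype=float)
    n = len(energies)
    tail_mean = energies[int(n * (1 - tail_fraction)):].mean()
    scale = max(abs(tail_mean), 1.0)
    outside = np.abs(energies - tail_mean) > tol * scale
    if not outside.any():
        return 0
    return int(np.flatnonzero(outside)[-1]) + 1  # index after the last excursion

# Hypothetical trace: exponential decay towards -2 per spin.
steps = np.arange(2000)
trace = -2.0 + 1.5 * np.exp(-steps / 150.0)
print(equilibration_step(trace))
```

A heuristic like this could make the cut-off choice reproducible across lattice sizes, at the cost of having to tune the tolerance.&lt;br /&gt;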
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if the step count is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the ILtemperaturerange.py file was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to the one used for the 8x8 graph in Figure 11, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a temperature step of 0.02.&lt;br /&gt;
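Rather than editing the script by hand for each lattice, the repeated file names and the per-spin normalisation can be derived from the side length. This is a sketch only; the file names (&#039;2x2.dat&#039; etc.) are assumed to follow the naming pattern used above.&lt;br /&gt;

```python
import numpy as np

# Sketch: derive file names and the per-spin normalisation from the side
# length, so one loop can process every lattice instead of editing variables.
sizes = [2, 4, 8, 16, 32]
filenames = ['%dx%d.dat' % (n, n) for n in sizes]

def per_spin(totals, side):
    """Convert total E or M values to per-spin values for a side x side lattice."""
    return np.asarray(totals, dtype=float) / side**2

# e.g. the 8x8 lattice has 64 spins, so a total energy of -128 is -2 per spin
```

Each filename can then be passed to np.loadtxt and plotted exactly as in the 8x8 script above.&lt;br /&gt;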
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, applying the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
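As a quick sanity check of this result, the fluctuation formula can be compared against a direct numerical derivative of the average energy for a simple two-level system (a hypothetical example, not part of the Ising scripts; reduced units with k_B = 1 are assumed)ː&lt;br /&gt;

```python
import numpy as np

# Numerical check that Var[E]/(kB T^2) equals the temperature derivative of
# the average energy, for a two-level system with energies 0 and 1 (kB = 1).
levels = np.array([0.0, 1.0])

def thermal_averages(T):
    boltz = np.exp(-levels / T)       # Boltzmann factors exp(-E/T)
    q = boltz.sum()                   # partition function
    p = boltz / q                     # state probabilities
    return (p * levels).sum(), (p * levels**2).sum()   # mean E, mean E squared

T = 2.0
dT = 1e-6
e_avg, e2_avg = thermal_averages(T)
C_fluctuation = (e2_avg - e_avg**2) / T**2   # Var[E] / (kB T^2)
C_numerical = (thermal_averages(T + dT)[0] - thermal_averages(T - dT)[0]) / (2 * dT)
# the two estimates agree to within numerical precision
```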
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend across the above graphs is that the peak of the heat capacity curve shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix size.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new fitted polynomial is a significantly better fit despite being only 3rd degree. It represents my data around the peak much more accurately, which makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
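Once a low-degree polynomial is fitted around the peak, its maximum can be located analytically from the roots of its derivative rather than by scanning a grid. The sketch below uses hypothetical heat capacity values peaked near T = 2.3; with the real data, coeffs would be the array returned by np.polyfit.&lt;br /&gt;

```python
import numpy as np

# Sketch: locate the maximum of a fitted cubic via its derivative.
# The five (T, C) points below are hypothetical, peaked near T = 2.3;
# replace coeffs with the array returned by np.polyfit on real data.
coeffs = np.polyfit(np.array([2.15, 2.25, 2.35, 2.45, 2.55]),
                    np.array([1.0, 1.8, 2.0, 1.7, 0.9]), 3)

deriv = np.polyder(coeffs)            # coefficients of dC/dT
roots = np.roots(deriv)               # stationary points of the cubic
real_roots = roots[np.isreal(roots)].real
# keep only stationary points inside the fitting window
inside = np.logical_and(np.greater(real_roots, 2.15), np.less(real_roots, 2.55))
candidates = real_roots[inside]
T_peak = candidates[np.argmax(np.polyval(coeffs, candidates))]
C_peak = np.polyval(coeffs, T_peak)
```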
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity was a maximum for each lattice size, and the red line is a linear fit whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and indicates that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the long-range correlations imposed by the periodic boundary conditions are most significant; these are far less important for the larger lattices. This makes the energies, and hence the Curie Temperatures, of the smaller matrices less accurate, which in turn limits the accuracy of the line of best fit. To improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
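For reference, Onsager&#039;s exact result for the infinite square lattice, T_C = 2/ln(1+sqrt(2)) in units of J/k_B, can be evaluated directly to quantify the discrepancy; the fitted value of 2.277 quoted above is used for comparison.&lt;br /&gt;

```python
import numpy as np

# Onsager's exact Curie temperature for the infinite 2D square lattice,
# T_C = 2 / ln(1 + sqrt(2)), in reduced units of J/kB.
T_c_exact = 2.0 / np.log(1.0 + np.sqrt(2.0))

T_c_fit = 2.277    # extrapolated estimate from the graph above
percent_error = 100.0 * abs(T_c_fit - T_c_exact) / T_c_exact
# the over-estimate amounts to only a few tenths of a percent
```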
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796544</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796544"/>
		<updated>2019-11-20T10:11:20Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
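The hand calculation above can be verified with a short brute-force sum over each site and its periodic neighbours (a sketch assuming the same 1D, N=3, all-spin-up configuration)ː&lt;br /&gt;

```python
import numpy as np

# Brute-force check of the hand calculation: the full double sum over each
# site's two periodic neighbours for the 1D lattice [+1, +1, +1].
J = 1.0
spins = np.array([1, 1, 1])
N = len(spins)

total = 0.0
for i in range(N):
    for j in [(i - 1) % N, (i + 1) % N]:   # periodic left and right neighbours
        total += spins[i] * spins[j]

E = -0.5 * J * total    # halve to correct for double counting
# E equals -3, i.e. -DNJ with D = 1, N = 3
```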
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses and becomes negative, which increases the total energy of the system. A spin in a 3D lattice has &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours, so 6 spin-spin interactions are reversed in sign, each raising the energy by &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt;; the total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
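The same number falls straight out of a one-line calculation (in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;)ː&lt;br /&gt;

```python
import math

# Entropy gain when the multiplicity goes from 1 (all spins parallel) to
# 1000 (one flipped spin among N = 1000 sites), in units of kB.
dS_over_kB = math.log(1000) - math.log(1)
# approximately 6.91 kB, matching the value above
```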
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and the lattices are consequently expected to follow suit and have zero entropy at 0 K. Zero entropy requires that all spins be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;; with all spins parallel there is only one possible configuration. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #local reference to the lattice&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #local reference to the lattice&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the left and top spin product lists&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
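As an aside, the same energy can be computed without explicit loops using np.roll, which shifts the lattice with periodic wrap-around. Efficiency is addressed later in the experiment, so this is only a sketch of the idea (the function name energy_vectorised is hypothetical, not part of IsingLattice.py)ː&lt;br /&gt;

```python
import numpy as np

# Sketch of a vectorised energy calculation; np.roll shifts the lattice by
# one site with periodic wrap-around, so each spin meets its left and top
# neighbour exactly once and no interaction is double-counted.
def energy_vectorised(lattice, J=1.0):
    """Total energy of a 2D spin lattice with periodic boundary conditions."""
    lattice = np.asarray(lattice)
    left = lattice * np.roll(lattice, 1, axis=1)   # spin times left neighbour
    top = lattice * np.roll(lattice, 1, axis=0)    # spin times neighbour above
    return -J * (left.sum() + top.sum())

# an all-up 4x4 lattice gives E = -DNJ = -2 * 16 * 1 = -32
```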
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21}\ s&amp;lt;/math&amp;gt; to analyse the whole system, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
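&lt;br /&gt;
This arithmetic can be checked directly in a few lines (a quick sketch; the &amp;lt;math&amp;gt;10^9&amp;lt;/math&amp;gt; analysis rate is the figure assumed in the task)ː&lt;br /&gt;

```python
# Time to enumerate every configuration of a 100-spin Ising lattice,
# assuming (generously) 1e9 configurations analysed per second.
n_configs = 2 ** 100          # two states per spin, 100 spins
rate = 1e9                    # configurations per second (assumed)
seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.3g} s, i.e. about {years:.3g} years")
```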
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
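&lt;br /&gt;
The acceptance rule at the heart of montecarlostep() can be isolated as a small helper (a sketch of the same logic; the accept_flip() name is hypothetical)ː&lt;br /&gt;

```python
import numpy as np

def accept_flip(delta_E, T, rng):
    """Metropolis criterion: a flip that lowers the energy is always
    accepted; an uphill flip is accepted with probability exp(-delta_E/T)
    (energies in units of k_B, as assumed for the energy() function)."""
    if delta_E <= 0:
        return True
    return rng.random() < np.exp(-delta_E / T)

rng = np.random.default_rng(0)
accept_flip(-4.0, 1.0, rng)   # downhill: always accepted
accept_flip(4.0, 1.0, rng)    # uphill: accepted only occasionally
```

A rejected flip is then simply reverted, exactly as the code above does by multiplying the chosen spin by -1 a second time.&lt;br /&gt;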
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie temperature &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, spontaneous magnetisation can occur and the system will tend to its lowest-energy state, in which all of the spins are parallel - this is a characteristic property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum-energy state with all of the spins parallel to one another. Spontaneous magnetisation therefore occurs, as expected, which indicates that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2\ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
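&lt;br /&gt;
The quoted mean and error can be reproduced with NumPy (a sketch; the three timings below are placeholders standing in for the runs in &#039;&#039;Figure 5&#039;&#039;)ː&lt;br /&gt;

```python
import numpy as np

# Placeholder repeat timings in seconds (hypothetical values).
times = np.array([24.1, 24.3, 24.5])
mean = times.mean()
# Standard error of the mean: sample standard deviation / sqrt(N).
sem = times.std(ddof=1) / np.sqrt(len(times))
print(f"{mean:.1f} s +/- {sem:.1f} s")
```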
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the array of left and top spin products over its first axis&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #sums the remaining row and negates to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
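&lt;br /&gt;
That the vectorised energy matches the original double loop can be verified on a small random lattice (a standalone sketch; with periodic boundaries each neighbour pair is counted once whichever direction roll shifts, so the totals agree)ː&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # Original double-loop version: Python's negative indexing makes
    # lat[i][-1] wrap to the last column, giving periodic boundaries.
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j] * lat[i][j - 1]   # spin to the left
            total += lat[i][j] * lat[i - 1][j]   # spin above
    return -total

def energy_vec(lat):
    # Vectorised version using np.roll and np.multiply.
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(1)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loops(lat) == energy_vec(lat))  # True
```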
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after switching to the NumPy roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005\ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
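&lt;br /&gt;
Dividing the two average times gives the speedup from vectorisationː&lt;br /&gt;

```python
slow, fast = 24.3, 0.790   # average times (s) reported above
speedup = slow / fast
print(f"speedup = {speedup:.0f}x")  # roughly a 31-fold speedup
```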
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are above the Curie temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest-energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest-energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from only the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 stepsː this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1 and the initial large drop in energy has been overcome for T=2 as well.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 stepsː for T=1 the energy and magnetisation have largely converged and change little beyond this point, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A cut-off of 50000 steps was chosen, as the energy and magnetisation have largely converged by this point, though not quite as fully as they would at 100000 steps. I chose this slightly lower value so that the run times of my Monte Carlo simulations in later tasks would not become excessive.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when calculating the averages of the energy, energy squared, magnetisation and magnetisation squared in the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps; the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
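&lt;br /&gt;
The extra standard-deviation columns can also be reconstructed from the quantities statistics() already returns, since &amp;lt;math&amp;gt;Var[X] = \langle X^2 \rangle - \langle X \rangle^2&amp;lt;/math&amp;gt; (a sketch with placeholder values)ː&lt;br /&gt;

```python
import numpy as np

# Hypothetical mean energy and mean squared energy at two temperatures.
avg_E = np.array([-120.0, -100.0])
avg_E2 = np.array([14500.0, 10100.0])
# Standard deviation of the energy: sqrt(mean of square minus squared mean).
std_E = np.sqrt(avg_E2 - avg_E ** 2)
print(std_E)  # [10. 10.]
```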
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to that for the 8x8 graph above in Figure 11, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, using the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
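&lt;br /&gt;
The identity can be sanity-checked numerically on a two-level system with energies 0 and &amp;lt;math&amp;gt;\epsilon&amp;lt;/math&amp;gt; (a sketch, with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt; as in the rest of the report)ː&lt;br /&gt;

```python
import numpy as np

eps = 1.0  # energy gap of the two-level system

def avg_E(T):
    # Boltzmann-weighted mean energy: eps * exp(-eps/T) / (1 + exp(-eps/T))
    w = np.exp(-eps / T)
    return eps * w / (1.0 + w)

def var_E(T):
    # Var[E] = mean of the squared energy minus the squared mean energy
    w = np.exp(-eps / T)
    p = w / (1.0 + w)
    return eps ** 2 * p - (eps * p) ** 2

T, h = 0.7, 1e-5
# Numerical derivative of the mean energy with respect to T...
C_deriv = (avg_E(T + h) - avg_E(T - h)) / (2 * h)
# ...agrees with the variance formula C = Var[E] / T**2.
C_var = var_E(T) / T ** 2
print(abs(C_deriv - C_var))  # essentially zero
```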
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #variance of the energy: mean of square minus squared mean&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the graph shifts towards lower temperatures as the size of the matrix used increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new polynomial is a significantly better fit even at 3rd degree: it represents my data around the peak far more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to these data; its y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of Curie Temperature against 1/Lattice Size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with the literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising and suggests that the error in my estimates of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals and deviation from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the longer-range interactions imposed by the periodic boundary conditions are most significant. These interactions are far less significant for the larger lattices, so the energies of the smaller matrices, and hence their estimated Curie Temperatures, carry a larger error. This affects the accuracy of the line of best fit; to improve it, larger lattice sizes of 128x128, 256x256 etc. should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
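The literature value quoted above is the exact Onsager result, &amp;lt;math&amp;gt;T_{C,\infty} = \frac{2}{ln(1+\sqrt{2})}&amp;lt;/math&amp;gt; in units of &amp;lt;math&amp;gt;\frac{J}{k_B}&amp;lt;/math&amp;gt;; a minimal sketch evaluating it for comparison with my fitted intercept:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Onsager's exact Curie temperature for the infinite 2D square Ising
# lattice, in reduced units of J/k_B: T_C = 2 / ln(1 + sqrt(2))
T_c_exact = 2.0 / math.log(1.0 + math.sqrt(2.0))

print(round(T_c_exact, 3))  # 2.269
```
&lt;br /&gt;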
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796541</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796541"/>
		<updated>2019-11-20T10:10:45Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: If T , do you expect a spontaneous magnetisation (i.e. do you expect \left\langle M\right\rangle \neq 0)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the ou...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
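This result can be checked numerically; below is a minimal sketch (assuming &amp;lt;math&amp;gt;J = 1&amp;lt;/math&amp;gt;) that evaluates the double sum directly for the three-site periodic ring:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

J = 1.0
spins = np.array([1, 1, 1])  # 1D ring of N = 3 sites, all spin up
N = len(spins)

# double sum over every site i and its two periodic neighbours;
# each pair is counted twice, hence the factor of 1/2 in front
total = 0
for i in range(N):
    for j in ((i - 1) % N, (i + 1) % N):  # left and right neighbours, with wrap-around
        total += spins[i] * spins[j]

E = -0.5 * J * total
print(E)  # -3.0, i.e. E = -DNJ with D = 1, N = 3
```
&lt;br /&gt;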
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \ N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of spin-up and spin-down sites.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, so for the lowest-energy state (here illustrated for &amp;lt;math&amp;gt;N=100&amp;lt;/math&amp;gt; sites, all spin up) &amp;lt;math&amp;gt;S = k_B ln(\frac{100!}{100! \ 0!}) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice each site has three unique interactions, with the neighbours to its left, top and front. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for a system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its six neighbours&#039; spins reverses sign, which increases the total energy of the system. Each of these six pair energies changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, a change of &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt; per pair, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
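The multiplicities and the entropy change can be verified directly; a short sketch in reduced units (&amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
```python
import math

k_B = 1.0  # reduced units: entropies come out in units of k_B
N = 1000

# multiplicity before the flip: all spins parallel
omega_before = math.factorial(N) // (math.factorial(N) * math.factorial(0))  # = 1
# after the flip: choose which one of the N spins points down
omega_after = math.factorial(N) // (math.factorial(N - 1) * math.factorial(1))  # = 1000

dS = k_B * (math.log(omega_after) - math.log(omega_before))
print(round(dS, 2))  # 6.91, in units of k_B
```
&lt;br /&gt;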
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. For the entropy to be zero, all spins must be parallel, so that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;; there is only one such configuration. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt; then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;; the expected magnetisation at absolute zero is therefore &amp;lt;math&amp;gt;M = 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
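A detail worth noting in the double loop above: in the first column (or row) the index j-1 (or i-1) evaluates to -1, and Python interprets a -1 index as the last element, which supplies the periodic wrap-around for free. A minimal sketch:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

row = np.array([+1, -1, +1, -1])  # one row of a lattice

# at j = 0 the "left neighbour" index j - 1 evaluates to -1, and a
# -1 index selects the LAST element -- the periodic wrap-around for free
j = 0
left_neighbour = row[j - 1]
print(left_neighbour)  # -1, the last element of the row
```
&lt;br /&gt;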
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt;possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system, which is longer than the age of the universe and therefore is not a practical approach.&lt;br /&gt;
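A quick sketch of the arithmetic, assuming the generous rate of &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second:&lt;br /&gt;
&lt;br /&gt;
```python
# brute-force enumeration cost for a 100-spin Ising system,
# assuming 1e9 configurations analysed per second
n_configs = 2 ** 100                    # about 1.27e30 configurations
seconds = n_configs / 1e9
years = seconds / (60 * 60 * 24 * 365)  # about 4e13 years

print(f"{seconds:.2e} s = {years:.2e} years")
```
&lt;br /&gt;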
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the lattice holds the trial (flipped) configuration and its energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts the spin if the move is rejected; otherwise the flip is kept&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
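The acceptance rule inside montecarlostep() is the Metropolis criterion: moves that lower the energy are always kept, while uphill moves are kept with probability &amp;lt;math&amp;gt;e^{-\Delta E / T}&amp;lt;/math&amp;gt;. A self-contained sketch of just that rule (the function name is illustrative, not part of the script):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def metropolis_accept(delta_E, T, rng=np.random.default_rng()):
    """Return True if a trial spin flip with energy change delta_E
    is accepted at reduced temperature T (J = k_B = 1)."""
    if delta_E <= 0:
        return True  # downhill (or neutral) moves are always accepted
    # uphill moves are accepted with the Boltzmann probability e^(-dE/T)
    return rng.random() < np.exp(-delta_E / T)

print(metropolis_accept(-4.0, 1.0))  # True: downhill moves always pass
```
&lt;br /&gt;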
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms that, as I expected, spontaneous magnetisation occurs, and also shows that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;(24.3 \pm 0.2) \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
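The repeat-and-average procedure can be sketched with the standard library&#039;s timeit; the workload function here is only a stand-in for the 2000 Monte Carlo steps actually being timed:&lt;br /&gt;
&lt;br /&gt;
```python
import math
import timeit

def workload():
    # stand-in for the 2000 Monte Carlo steps being timed
    return sum(i * i for i in range(10000))

times = timeit.repeat(workload, number=10, repeat=5)  # five repeat timings
mean = sum(times) / len(times)
# standard error of the mean over the repeats
sem = math.sqrt(sum((t - mean) ** 2 for t in times) / (len(times) - 1)) / math.sqrt(len(times))

print(f"{mean:.4f} s +/- {sem:.4f} s")
```
&lt;br /&gt;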
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=left+top #array of the summed left and top spin products for each site&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #sums over the whole array to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
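A quick consistency check confirms that the vectorised version reproduces the double-loop energy; this sketch compares both on a random 8x8 lattice (a shift of +1 is used on both axes here, which gives the same total as the mixed shifts above by the periodic symmetry):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))  # random 8x8 spin lattice

# naive double loop: each site interacts with its left and top
# neighbours (negative indices give the periodic wrap-around)
E_loop = 0
for i in range(lat.shape[0]):
    for j in range(lat.shape[1]):
        E_loop -= lat[i, j] * lat[i, j - 1] + lat[i, j] * lat[i - 1, j]

# vectorised version with roll and multiply
left = np.multiply(np.roll(lat, 1, axis=1), lat)
top = np.multiply(np.roll(lat, 1, axis=0), lat)
E_vec = -np.sum(left + top)

print(E_loop == E_vec)  # True
```
&lt;br /&gt;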
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, which uses the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;(0.790 \pm 0.005) \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
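In practice, ignoring the first N cycles just means slicing the recorded arrays before averaging; a toy sketch with a synthetic energy trace and the 30-step cut-off chosen above for the 2x2 lattice:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

n_cutoff = 30  # equilibration steps to discard (2x2 lattice choice above)

# toy energy trace: rapid initial decay towards -8 plus small noise,
# standing in for the recorded per-step energies of the simulation
rng = np.random.default_rng(1)
steps = np.arange(200)
E_trace = -8.0 + 8.0 * np.exp(-steps / 5.0) + 0.01 * rng.standard_normal(200)

E_mean = np.mean(E_trace[n_cutoff:])  # average over the equilibrated portion only
print(round(E_mean, 1))  # close to -8.0
```
&lt;br /&gt;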
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps: the energy and magnetisation have clearly converged by this point at T=1, and the initial large drop in energy at T=2 is also complete.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have largely converged and change little afterwards, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: the energy and magnetisation have largely converged by this point, although not as fully as they would at 100000 steps. The slightly lower value was chosen to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off contribute to the averages of the energy, energy squared, magnetisation and magnetisation squared computed by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is for the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to arrays of E, E2, M and M2 if above the specified cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
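For reference, the same accept/reject logic can be written as a small self-contained sketch outside the IsingLattice class (the names here are illustrative; the energy is recomputed for the whole lattice each time, whereas a faster version would use only the four neighbours of the flipped spin):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def lattice_energy(lattice, J=1.0):
    """Total energy of a periodic 2D Ising lattice (each bond counted once)."""
    # np.roll wraps the last row/column around, giving periodic boundaries;
    # summing only the "right" and "down" bonds counts each pair exactly once
    right = np.roll(lattice, -1, axis=1)
    down = np.roll(lattice, -1, axis=0)
    return -J * np.sum(lattice * (right + down))

def metropolis_step(lattice, T, rng):
    """Attempt one random spin flip and accept/reject it; return the energy."""
    old_energy = lattice_energy(lattice)
    i = rng.integers(lattice.shape[0])
    j = rng.integers(lattice.shape[1])
    lattice[i, j] *= -1                                  # trial flip
    deltaE = lattice_energy(lattice) - old_energy
    if deltaE > 0 and rng.random() > np.exp(-deltaE / T):
        lattice[i, j] *= -1                              # reject: revert the flip
    return lattice_energy(lattice)

rng = np.random.default_rng(0)
lat = np.ones((4, 4), dtype=int)   # fully aligned 4x4 lattice, E = -2*16*J = -32
for _ in range(100):
    E = metropolis_step(lat, T=1.0, rng=rng)
```
&lt;br /&gt;
At T=1 the lattice typically remains at or near its ground-state energy of -32, consistent with the T=1 frames above.&lt;br /&gt;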
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
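As an alternative to modifying the script to record the standard deviations directly, they can be recovered from the average and average-squared columns that are already saved, since Var[X] = ⟨X²⟩ − ⟨X⟩² (a sketch; the clip guards against rounding making the variance marginally negative):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def std_from_moments(mean_x, mean_x2):
    """Standard deviation from <X> and <X^2>, i.e. sqrt(<X^2> - <X>^2)."""
    var = np.clip(np.asarray(mean_x2, dtype=float)
                  - np.asarray(mean_x, dtype=float) ** 2, 0.0, None)
    return np.sqrt(var)

samples = np.array([1.0, 2.0, 3.0, 4.0])
# agrees with the population standard deviation samples.std()
print(std_from_moments(samples.mean(), (samples ** 2).mean()))
```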
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np&lt;br /&gt;
import matplotlib.pylab as pl&lt;br /&gt;
&lt;br /&gt;
data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that for the 8x8 graph above (&#039;&#039;Figure 11&#039;&#039;), with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta = \frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, applying the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
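The fluctuation result can be checked numerically on a simple two-level system (levels 0 and ε, with k_B = 1; the function names here are illustrative), comparing a finite-difference derivative of ⟨E⟩ with Var[E]/T²:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def avg_E(T, eps=1.0):
    """Boltzmann-average energy of a two-level system (levels 0 and eps, k_B = 1)."""
    b = np.exp(-eps / T)
    return eps * b / (1.0 + b)

def var_E(T, eps=1.0):
    """Variance of the energy, <E^2> - <E>^2, for the same system."""
    b = np.exp(-eps / T)
    mean = eps * b / (1.0 + b)
    mean_sq = eps ** 2 * b / (1.0 + b)
    return mean_sq - mean ** 2

T = 1.5
dT = 1e-6
C_derivative = (avg_E(T + dT) - avg_E(T - dT)) / (2 * dT)  # C = d<E>/dT
C_fluctuation = var_E(T) / T ** 2                          # C = Var[E]/(k_B T^2)
# the two values agree to within the finite-difference error
print(C_derivative, C_fluctuation)
```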
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend across the graphs above is that the peak in the heat capacity shifts towards lower temperatures as the matrix size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code used to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read the data and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and in particular fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
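Part of the difficulty is numerical as well as physical: np.polyfit works with raw powers of T, which becomes badly conditioned at degree 35. If a global fit is wanted, numpy&#039;s Polynomial.fit (which rescales the temperature axis onto [-1, 1] internally) behaves much better; a sketch on synthetic peaked data standing in for a C(T) curve:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np
from numpy.polynomial import Polynomial

# synthetic smooth peak standing in for a heat-capacity curve
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.5)

# Polynomial.fit maps T onto [-1, 1] before fitting, so even fairly
# high degrees stay well conditioned
p = Polynomial.fit(T, C, deg=15)
residual = np.max(np.abs(p(T) - C))
print(residual)
```
&lt;br /&gt;
For genuinely sharp peaks, however, restricting the fit to the critical region (as in the next task) remains the more robust approach.&lt;br /&gt;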
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree, and represents the data around the peak much more accurately, making it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
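Once a low-degree polynomial is fitted around the peak, its maximum can also be located analytically from the roots of its derivative rather than by scanning a dense grid (a sketch on synthetic data; with real data, peak_T_values16 and peak_C_values16 would take the place of T and C):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# synthetic heat-capacity-like peak over the restricted window
T = np.linspace(2.15, 2.55, 41)
C = 1.0 - (T - 2.32) ** 2          # quadratic with its maximum at T = 2.32

fit = np.polyfit(T, C, 3)          # cubic fit, as in the script above
dfit = np.polyder(fit)             # coefficients of the derivative
stationary = np.roots(dfit)        # stationary points of the cubic
# keep only real stationary points lying inside the fitted window
real = stationary[np.abs(stationary.imag) < 1e-8].real
inside = real[(real > T.min()) & (real < T.max())]
T_peak = inside[np.argmax(np.polyval(fit, inside))]
print(T_peak)
```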
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (the reciprocal of the lattice size), used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the temperatures at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my extrapolation predicts. However, the difference between the two values is only 0.008, which is remarkably small, and the level of agreement is somewhat surprising; it suggests that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smallest lattice sizes, 2x2 and 4x4, where the periodic boundary conditions introduce artificial long-range correlations that are far less significant for the larger sizes. This makes the energies, and hence the estimated Curie Temperatures, of the smallest matrices less accurate, which affects the accuracy of the line of best fit. To improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already&lt;br /&gt;
Tmax64x64 = peak_T_range64[np.argmax(fitted_C_values64)] #finds the (scalar) Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
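The extrapolation itself reduces to a straight-line fit of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt;, whose intercept is the estimate of &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;; a sketch on noise-free synthetic data obeying the scaling relation (the value of A and the lattice sizes are illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# synthetic finite-size Curie temperatures obeying T_C(L) = T_Cinf + A/L
T_C_inf_true, A = 2.269, 1.0
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
T_C = T_C_inf_true + A / L

# straight-line fit of T_C against 1/L; the intercept estimates T_C(inf)
slope, intercept = np.polyfit(1.0 / L, T_C, 1)
print(intercept)   # recovers 2.269 for this noise-free data
```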
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796539</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796539"/>
		<updated>2019-11-20T10:09:55Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy()...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent within the row, the periodic boundary conditions wrap the row around on itself, so they are treated as neighbours.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \in \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites, &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
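The result above can be checked numerically. The following sketch (not part of the original report) brute-forces the double sum for the N=3 ring of parallel spins:&lt;br /&gt;

```python
# Brute-force check of E = -DNJ for the N=3 ring of parallel spins,
# using the double-counting sum and the 1/2 prefactor described above.
J = 1.0
spins = [1, 1, 1]
N = len(spins)
total = 0
for i in range(N):
    for j in (i - 1, i + 1):              # left and right neighbours
        total += spins[i] * spins[j % N]  # % N applies the periodic boundaries
E = -0.5 * J * total
print(E)  # -3.0, i.e. -DNJ with D=1, N=3, J=1
```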
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}!\,N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt;, and for the lowest-energy state, in which every spin points the same way, &amp;lt;math&amp;gt;S =  k_B \ln\left(\frac{N!}{N!\,0!}\right) = k_B \ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice, each site can be assigned three unique bonds (to its left, top and front neighbours) so that no interaction is counted twice. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for a system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours' spins reverses sign, which increases the total energy of the system. Each reversed interaction raises the energy by &amp;lt;math&amp;gt;2J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!\,0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!\,1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
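As a quick numerical check (a sketch, not part of the report), the multiplicities and the entropy change can be evaluated directly in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
from math import factorial, log

# Multiplicity before and after flipping one spin in a 1000-spin lattice
omega_before = factorial(1000) // (factorial(1000) * factorial(0))  # all spins up
omega_after = factorial(1000) // (factorial(999) * factorial(1))    # one spin down
dS = log(omega_after) - log(omega_before)  # entropy change in units of k_B
print(omega_before, omega_after, round(dS, 2))  # 1 1000 6.91
```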
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25, M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, so the lattice is expected to follow suit and have zero entropy at 0 K. To have zero entropy, all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;; with all spins parallel there is only one possible configuration of each sign. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S =k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies each spin by the spin to its left (index -1 wraps around: periodic boundaries)&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies each spin by the spin above it&lt;br /&gt;
		int_en=left+top #concatenates the two lists of spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products and negates to give the total energy (J=1)&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
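To sanity-check the double loop, the same logic can be run as a standalone function on a plain list-of-lists lattice (a sketch; the full IsingLattice class is not reproduced here). Python's negative indices conveniently implement the periodic boundaries:&lt;br /&gt;

```python
def loop_energy(lat, J=1.0):
    # Same double loop as energy() above, on a plain list-of-lists lattice.
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j] * lat[i][j - 1]  # bond to the left (wraps around)
            total += lat[i][j] * lat[i - 1][j]  # bond above (wraps around)
    return -J * total

all_up = [[1] * 4 for _ in range(4)]  # 4x4 lattice of parallel spins
print(loop_energy(all_up))  # -32.0, matching E = -DNJ with D=2, N=16
```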
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run with my IsingLattice.py file. ILcheck.py was run several times to confirm that the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21}\ s&amp;lt;/math&amp;gt; to enumerate them all, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
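A sketch of the arithmetic (not in the original report):&lt;br /&gt;

```python
# Order-of-magnitude estimate of the brute-force enumeration time.
configs = 2 ** 100                    # configurations of 100 spins
rate = 1e9                            # configurations analysed per second
seconds = configs / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.2e} s = {years:.2e} years")
```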
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected, else keeps the flip&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
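The same algorithm can be sketched as a standalone function on a plain NumPy array (an illustration using reduced units with &amp;lt;math&amp;gt;J = k_B = 1&amp;lt;/math&amp;gt;; the class-based version above is what the experiment actually uses):&lt;br /&gt;

```python
import numpy as np

def lattice_energy(lat):
    # each bond counted exactly once via two rolled products (periodic boundaries)
    return -np.sum(lat * np.roll(lat, 1, axis=1) + lat * np.roll(lat, 1, axis=0))

def metropolis_step(lat, T, rng):
    i = rng.integers(lat.shape[0])        # pick a random site
    j = rng.integers(lat.shape[1])
    e_old = lattice_energy(lat)
    lat[i, j] *= -1                       # trial flip
    dE = lattice_energy(lat) - e_old
    if dE > 0 and rng.random() > np.exp(-dE / T):
        lat[i, j] *= -1                   # reject the move: revert the flip

rng = np.random.default_rng(0)
lat = np.ones((4, 4), dtype=int)          # start in the ground state
for _ in range(100):
    metropolis_step(lat, T=0.5, rng=rng)
print(lattice_energy(lat))
```

At this low temperature an unfavourable flip costs &amp;lt;math&amp;gt;\Delta E = 8&amp;lt;/math&amp;gt;, so the Boltzmann factor is tiny and the lattice should stay in, or extremely near, the ground state.&lt;br /&gt;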
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results of a single run of the montecarlostep() function and the lattice the function operated upon.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417MonteCarloStep_run.png|thumb|left|Figure 3 - Results from a single montecarlostep() function and the resulting lattice produced along with the correct return from the statistics() function]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel; this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As expected, spontaneous magnetisation occurs, which confirms that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;(24.3 \pm 0.2)\ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
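The mean and its error can be computed as below (a sketch; the three timings here are illustrative values consistent with the quoted average, not the exact readings from Figure 4):&lt;br /&gt;

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])           # seconds; illustrative values
mean = times.mean()
sem = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
print(round(float(mean), 1), round(float(sem), 2))
```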
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left (roll, multiply and sum are the NumPy functions)&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with its vertical neighbour; roll implements the periodic boundaries&lt;br /&gt;
&lt;br /&gt;
		int_en=left+top #array holding, for each spin, the sum of its two bond products&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #NumPy sum adds every element of the array; negating gives the total energy (J=1)&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
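The vectorised version can be checked against the original double loop on a random lattice (a standalone sketch using explicit np. prefixes):&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # Original double-loop energy; negative indices give the periodic boundaries.
    total = 0
    n_rows, n_cols = lat.shape
    for i in range(n_rows):
        for j in range(n_cols):
            total += lat[i, j] * lat[i, j - 1] + lat[i, j] * lat[i - 1, j]
    return -total

def energy_vectorised(lat):
    # roll/multiply version: each bond is counted exactly once.
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(42)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loops(lat) == energy_vectorised(lat))  # True for any lattice
```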
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running ILtimetrial.py three times on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Upon using the NumPy roll, multiply and sum functions, the accelerated code is much faster, with a new average time of &amp;lt;math&amp;gt;(0.790 \pm 0.005)\ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, probably because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
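This temperature dependence is visible directly in the Boltzmann factor. The sketch below (not from the report) evaluates the acceptance probability &amp;lt;math&amp;gt;e^{-\Delta E/T}&amp;lt;/math&amp;gt; for a representative single-flip cost of &amp;lt;math&amp;gt;\Delta E = +4&amp;lt;/math&amp;gt; in reduced units:&lt;br /&gt;

```python
import numpy as np

dE = 4.0  # representative energy cost of an unfavourable single-spin flip
for T in (1.0, 2.0, 3.0, 5.0):
    # probability of accepting the unfavourable move at temperature T
    print(T, round(float(np.exp(-dE / T)), 3))
```

Unfavourable moves are accepted roughly 25 times more often at T=5 than at T=1, which is why the high-temperature runs keep fluctuating away from the ordered state.&lt;br /&gt;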
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, which supports my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, by which the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation have essentially converged for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A cut-off of 50000 steps was chosen, as by then the energy and magnetisation have largely converged, though not quite as fully as at 100000 steps. I chose this slightly lower value to keep the run times of the Monte Carlo simulations in future tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of 0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np&lt;br /&gt;
import pylab as pl&lt;br /&gt;
&lt;br /&gt;
data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to the one used for the 8x8 graph above in Figure 11, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
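A sketch of the kind of overlay script used here (the file names follow the NxN.dat convention from savetxt, and the column order matches the 8x8 script above; treat the details as assumptions rather than the exact script):&lt;br /&gt;

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as pl

def plot_energy_per_spin(sizes, outfile="energy_all_sizes.png"):
    # Overlay energy per spin vs temperature for each saved lattice size.
    fig, ax = pl.subplots()
    for n in sizes:
        data = np.loadtxt(f"{n}x{n}.dat")  # columns: T, <E>, <E^2>, <M>, <M^2>, ...
        ax.plot(data[:, 0], data[:, 1] / n**2, label=f"{n}x{n}")
    ax.set_xlabel("Temperature")
    ax.set_ylabel("Energy per spin")
    ax.legend()
    fig.savefig(outfile, bbox_inches="tight")

# e.g. plot_energy_per_spin([2, 4, 8, 16, 32])
```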
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum, across all microstates, of the probability of each microstate multiplied by its energy, defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt; and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt;, can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
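As a sanity check, this fluctuation relation can be verified numerically for a small hypothetical system (the three energy levels below are arbitrary, and reduced units with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt; are assumed):&lt;br /&gt;

```python
import numpy as np

# Hypothetical three-level system used only to check C = Var[E] / (k_B T^2),
# with k_B = 1 in reduced units; the level energies are arbitrary
levels = np.array([0.0, 1.0, 2.5])

def moments(beta):
    """Return the Boltzmann averages of E and E^2 for the toy level scheme."""
    weights = np.exp(-beta * levels)
    p = weights / weights.sum()
    return np.dot(p, levels), np.dot(p, levels**2)

T = 1.3
E_avg, E2_avg = moments(1.0 / T)
C_fluct = (E2_avg - E_avg**2) / T**2  # heat capacity from the fluctuation formula

# Compare with a central finite difference of the mean energy with respect to T
dT = 1e-5
E_plus, _ = moments(1.0 / (T + dT))
E_minus, _ = moments(1.0 / (T - dT))
C_diff = (E_plus - E_minus) / (2 * dT)

print(C_fluct, C_diff)  # the two estimates agree to within the finite-difference error
```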
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #heat capacity per spin at each temperature: C = Var[E]/(k_B T^2), with k_B = 1 in reduced units&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energy) squared at each T&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #variance of the energy at each T&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.divide(varE,tempsq)/(latsize**2) #divide by number of spins to give heat capacity per spin&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
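The script relies on the identity &amp;lt;math&amp;gt;Var[X] = \langle X^2 \rangle - \langle X \rangle^2&amp;lt;/math&amp;gt;, which is easy to confirm on an arbitrary random sample:&lt;br /&gt;

```python
import numpy as np

# Check that the variance equals the mean square minus the squared mean
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=100_000)  # any sample will do

lhs = np.var(x)                      # population variance (ddof=0)
rhs = np.mean(x**2) - np.mean(x)**2  # mean square minus squared mean
print(lhs, rhs)  # identical up to floating-point rounding
```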
&lt;br /&gt;
A general trend from the above graphs is that the peak in the heat capacity becomes sharper and shifts towards lower temperatures as the size of the matrix increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy per spin against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation per spin against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix size.&lt;br /&gt;
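For reference, reading one of the C++ files only involves slicing columns out of np.loadtxt. A minimal sketch, using a tiny synthetic stand-in file since the real .dat files are not reproduced here:&lt;br /&gt;

```python
import numpy as np

# Tiny synthetic stand-in for a C++ output file: six columns per row,
# T, E, E^2, M, M^2, C (the last five per spin); the values are illustrative
rows = [[1.00, -1.99, 3.97, 1.0, 1.00, 0.01],
        [2.27, -1.42, 2.10, 0.6, 0.40, 1.80]]
np.savetxt("example.dat", rows)

data = np.loadtxt("example.dat")
T = data[:, 0]  # temperatures
C = data[:, 5]  # heat capacities per spin
print(T, C)
```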
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read the data and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a polynomial of degree 35 fitted to it. Even with a polynomial of such a high degree, the fit is poor across the curve and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new fitted polynomial is a significantly better fit, even though it is only of 3rd degree. It is a much more accurate representation of my data around the peak of the graph, and it makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
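Once a low-degree polynomial describes the peak region well, the temperature of the maximum follows directly from the fitted curve. A sketch, with a synthetic peaked curve standing in for the real heat capacity data:&lt;br /&gt;

```python
import numpy as np

# Synthetic peaked "heat capacity" with its maximum at T = 2.3 (illustrative)
T = np.linspace(2.15, 2.55, 50)
C = 2.0 - (T - 2.3)**2

fit = np.polyfit(T, C, 3)                # restricted low-degree fit
T_fine = np.linspace(2.15, 2.55, 1000)   # dense grid over the fitted range
C_fit = np.polyval(fit, T_fine)
T_c_estimate = T_fine[np.argmax(C_fit)]  # temperature of the fitted maximum
print(round(T_c_estimate, 2))  # 2.3
```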
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;:&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model Lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the raw data, i.e. the temperature at which the Heat Capacity is a maximum for each lattice, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with the literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually stop at a slightly lower temperature than my estimate predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the longer range interactions imposed by the periodic boundary conditions are most significant. These spurious interactions are far less important for the larger lattices; for the small matrices they make the energy, and hence the estimated Curie Temperature, less accurate. This reduces the accuracy of the line of best fit: to improve it, larger lattice sizes of 128x128, 256x256 etc. should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
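The extrapolation itself reduces to a linear fit in &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt;. A minimal sketch (the Curie Temperature values below are made-up illustrative numbers, not my measured ones):&lt;br /&gt;

```python
import numpy as np

# Finite-size scaling: T_C(L) = A/L + T_C(infinity). The T_C values below
# are hypothetical placeholders chosen only to illustrate the extrapolation.
L = np.array([2, 4, 8, 16, 32, 64])
Tc_L = np.array([2.51, 2.42, 2.35, 2.31, 2.29, 2.28])

slope, intercept = np.polyfit(1.0 / L, Tc_L, 1)  # straight line in 1/L
print(round(intercept, 2))  # the y-intercept estimates T_C for the infinite lattice
```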
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file (as two rows: lattice sizes, then Curie temperatures)&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=File:Cg1417MonteCarloStep_run.png&amp;diff=796533</id>
		<title>File:Cg1417MonteCarloStep run.png</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=File:Cg1417MonteCarloStep_run.png&amp;diff=796533"/>
		<updated>2019-11-20T10:05:03Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796516</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796516"/>
		<updated>2019-11-20T10:00:13Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* Section 1 - Introduction to the Ising Model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction within the system is counted twice (this is why the prefactor of &amp;lt;math&amp;gt;-\frac{1}{2}&amp;lt;/math&amp;gt; is needed), giving: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2} \ J \times 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites, &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
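This counting can be checked directly in Python. A minimal sketch, with np.roll supplying the periodic boundary condition:&lt;br /&gt;

```python
import numpy as np

# The [+1][+1][+1] configuration, N = 3, with J = 1 in reduced units
spins = np.array([1, 1, 1])
J = 1.0

# Sum s_i * s_j over both neighbours of every site (each bond counted twice),
# with np.roll wrapping the lattice periodically
neighbour_sum = np.sum(spins * np.roll(spins, 1)) + np.sum(spins * np.roll(spins, -1))
E = -0.5 * J * neighbour_sum
print(E)  # -3.0, i.e. E = -DNJ with D = 1, N = 3
```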
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{N_{\uparrow}! \, N_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of lattice sites and &amp;lt;math&amp;gt;N_{\uparrow}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;N_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \, 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses sign and becomes negative, and this increases the total energy of the system. In 3D each spin has &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours, so 6 spin-spin interactions change sign, each bond energy going from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;\Delta E = +12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
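The multiplicity and entropy change can be confirmed numerically (entropy in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import math

N = 1000
omega_before = 1               # ground state: all spins aligned, one configuration
omega_after = math.comb(N, 1)  # one flipped spin: N ways to choose which one
delta_S = math.log(omega_after) - math.log(omega_before)  # in units of k_B
print(round(delta_S, 2))  # 6.91
```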
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and consequently it is expected that the lattices will follow suit and have zero entropy at 0 K. To have zero entropy there must be only one possible configuration, which requires all spins to be parallel, so that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt; then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;. The expected magnetisation at absolute zero is therefore &amp;lt;math&amp;gt;M = 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
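One detail worth noting in the double loop above: when &amp;lt;math&amp;gt;j = 0&amp;lt;/math&amp;gt;, the index j-1 evaluates to -1, and negative indexing in Python wraps around to the last element, so the periodic boundary conditions are applied automatically:&lt;br /&gt;

```python
import numpy as np

row = np.array([1, -1, 1])
# For the site at j = 0, the "left neighbour" index is j - 1 = -1,
# which Python wraps around to the last element of the row
print(row[0] * row[-1])  # 1 * 1 = 1: the periodic bond across the edge
```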
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse every configuration. This is roughly 3000 times the age of the universe (&amp;lt;math&amp;gt;\sim 4\times 10^{17} s&amp;lt;/math&amp;gt;), so direct enumeration is not a practical approach.&lt;br /&gt;
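The arithmetic behind this estimate can be sketched directly:&lt;br /&gt;

```python
# Brute-force enumeration estimate for a 100-spin lattice
n_configs = 2 ** 100  # two states per spin
rate = 1e9            # assumed configurations analysed per second
seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{n_configs:.2e} configurations, {seconds:.2e} s, {years:.2e} years")
```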
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin if move rejected, else new config kept&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
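The if-statement above implements the Metropolis acceptance rule: a trial flip that lowers the energy is always accepted, and otherwise it is accepted with probability &amp;lt;math&amp;gt;exp(-\frac{\Delta E}{k_B T})&amp;lt;/math&amp;gt;. In isolation (reduced units, &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import numpy as np

def acceptance_probability(deltaE, T):
    """Metropolis acceptance probability in reduced units (k_B = 1)."""
    return min(1.0, np.exp(-deltaE / T))

# A downhill move is always accepted; an uphill one only sometimes
print(acceptance_probability(-4.0, 1.0))  # 1.0
print(acceptance_probability(4.0, 2.0))   # exp(-2), roughly 0.135
```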
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As expected, spontaneous magnetisation occurs, confirming that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;(24.3 \pm 0.2)\ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
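The mean and its standard error can be computed directly with NumPy. A minimal sketch, where the three run times are illustrative placeholders rather than the actual measured values:

```python
import numpy as np

# Illustrative run times in seconds (placeholders, not the measured values)
times = np.array([24.1, 24.4, 24.5])

mean_t = np.mean(times)
# Standard error of the mean: sample standard deviation (ddof=1) over sqrt(repeats)
sem_t = np.std(times, ddof=1) / np.sqrt(len(times))

print(f"{mean_t:.1f} s +/- {sem_t:.1f} s")
```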
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left (periodic boundaries)&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with the spin above it (periodic boundaries)&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the neighbour-pair products; the two rolls count each pair exactly once&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #total energy of the system, with J=1 in reduced units&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster: using the NumPy roll, multiply and sum functions gives a new average time of &amp;lt;math&amp;gt;(0.790 \pm 0.005)\ s&amp;lt;/math&amp;gt;, roughly a thirty-fold speed-up.&lt;br /&gt;
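The speed-up comes from replacing the Python double loop with whole-array NumPy operations. As a sanity check, here is a minimal standalone sketch (independent of the IsingLattice class, using an assumed random 8x8 lattice with periodic boundaries and J=1) confirming that the roll/multiply version gives the same energy as the double loop:

```python
import numpy as np

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(8, 8))

# Double-loop version: each spin interacts with its 4 neighbours (periodic),
# so every pair is counted twice and the sum is halved
def energy_loop(lat):
    n_rows, n_cols = lat.shape
    total = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            s = lat[i, j]
            total += s * (lat[(i + 1) % n_rows, j] + lat[(i - 1) % n_rows, j]
                          + lat[i, (j + 1) % n_cols] + lat[i, (j - 1) % n_cols])
    return -total / 2

# Vectorised version: the two rolls count each neighbour pair exactly once
def energy_fast(lat):
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

assert energy_loop(lattice) == energy_fast(lattice)
```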
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point, below which steps are excluded from the average energies and magnetisations, is 30 steps, as this is where the energy and magnetisation per spin become constant. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point for the larger matrices will therefore be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
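One way to make this choice of cut-off less subjective is to find the first step after which the energy trace stays within a tolerance of its final value. The helper below is hypothetical (it is not part of the provided scripts) and is demonstrated on a synthetic trace rather than real simulation output:

```python
import numpy as np

def equilibration_step(energies, tol=0.05):
    """Return the first index after which the trace stays within
    tol of its final value (0 if it is settled from the start)."""
    settled = np.abs(np.asarray(energies) - energies[-1]) <= tol
    unsettled = np.where(~settled)[0]
    # equilibration starts just after the last unsettled point
    return 0 if len(unsettled) == 0 else int(unsettled[-1]) + 1

# Synthetic trace: exponential decay towards an energy of -2
steps = np.arange(200)
trace = -2 + np.exp(-steps / 20)
print(equilibration_step(trace, tol=0.05))  # -> 60
```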
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by this point the energy and magnetisation have largely converged for T=1 and change little afterwards; the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039; above, a cut-off of 50000 steps was chosen, as by this point the energy and magnetisation have largely converged, though not as fully as at 100000 steps. I chose the slightly lower value so that the run times of my Monte Carlo simulations in later tasks would not become excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function determines the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only records E, E2, M and M2 once past the 50000-step cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to the one used for the 8x8 graph in &#039;&#039;Figure 11&#039;&#039; above, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, using the chain rule with &amp;lt;math&amp;gt;\frac{\partial \beta}{\partial T} = -\frac{1}{k_B T^2}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
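This result can be checked numerically on a simple two-level system (in reduced units with k_B = 1, matching the rest of this report), by comparing Var[E]/T^2 computed from the Boltzmann probabilities with a finite-difference derivative of the average energy:

```python
import numpy as np

# Two-level system with energies 0 and 1, k_B = 1 (reduced units)
eps = np.array([0.0, 1.0])

def avg_E(T):
    p = np.exp(-eps / T)
    p /= p.sum()                      # Boltzmann probabilities
    return np.sum(p * eps)

def heat_capacity_var(T):
    p = np.exp(-eps / T)
    p /= p.sum()
    E = np.sum(p * eps)
    E2 = np.sum(p * eps**2)
    return (E2 - E**2) / T**2         # C = Var[E] / (k_B T^2)

T, h = 1.5, 1e-5
C_direct = (avg_E(T + h) - avg_E(T - h)) / (2 * h)   # C = dE/dT
assert abs(C_direct - heat_capacity_var(T)) < 1e-8
```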
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend across the above graphs is that the heat capacity peak shifts towards lower temperatures as the matrix size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C energy against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and in particular fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new fit is significantly better even though it is only a 3rd degree polynomial; it represents my data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a plot of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the heat capacity was a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept (the limit as &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt; tends to zero) gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the fit is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with the literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature: for an infinite lattice, spontaneous magnetisation would actually cease at a slightly lower temperature than my extrapolation predicts. However, the difference is only 0.008, and such close agreement is somewhat surprising; it implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals in &#039;&#039;Figure 17&#039;&#039; correspond to the smallest lattices (2x2 and 4x4), where longer range interactions introduced by the periodic boundary conditions are most significant: each spin effectively interacts with periodic images after only a few sites, which distorts the energy, and hence the estimated Curie Temperature, for those sizes. These finite-size effects are far less significant for the larger lattices. The accuracy of the extrapolation could therefore be improved by including larger lattice sizes (128x128, 256x256, etc.) in the line of best fit and excluding the smallest matrices.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796491</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796491"/>
		<updated>2019-11-20T09:44:32Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of T...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is the coupling constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product of the spins on two adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, so every interaction in the system is counted twice; this is why the total is halved by the prefactor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt;. The sum becomes: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
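This bookkeeping can be checked numerically. Below is a minimal sketch (the function name is illustrative and not part of the scripts used in this experiment) that evaluates the halved double sum for a 1D periodic chain with J = 1:

```python
import numpy as np

def ising_energy_1d(spins, J=1.0):
    """Energy of a 1D Ising ring via the double-counted neighbour sum.

    Each spin interacts with its left and right neighbour (periodic
    boundary conditions), so every bond appears twice and the total
    is halved, matching E = -(1/2) J sum_i sum_{j in nbr(i)} s_i s_j.
    """
    spins = np.asarray(spins)
    right = spins * np.roll(spins, -1)  # bond with the right neighbour
    left = spins * np.roll(spins, 1)    # bond with the left neighbour (duplicate count)
    return -0.5 * J * np.sum(right + left)

print(ising_energy_1d([+1, +1, +1]))  # -3.0, i.e. -DNJ with D=1, N=3
```

Running this on the [+1][+1][+1] configuration reproduces the -3J result above.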
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. For the lowest energy configuration of, for example, &amp;lt;math&amp;gt;N=100&amp;lt;/math&amp;gt; spins all pointing up, &amp;lt;math&amp;gt;\Omega = \frac{100!}{100! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each site has six nearest neighbours (left/right, above/below, front/back), three of which (say left, above and front) are unique to it when each bond is counted once. In the lowest energy configuration, all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its six neighbours&#039; spins reverses sign, so each of those six bonds goes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;\Delta E = 2 \times 2D \times J = +12J&amp;lt;/math&amp;gt;, giving a new total energy of &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
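The entropy arithmetic can be confirmed directly from the multiplicity formula; a small sketch in reduced units (k_B = 1), where entropy() is an illustrative helper rather than part of the IsingLattice class:

```python
import math

def entropy(n_up, n_down, k_B=1.0):
    """S = k_B ln(N! / (n_up! n_down!)) for N = n_up + n_down spins."""
    omega = math.comb(n_up + n_down, n_up)  # multiplicity of the macrostate
    return k_B * math.log(omega)

delta_S = entropy(999, 1) - entropy(1000, 0)  # one spin flipped from the ground state
print(delta_S)  # ~6.91 in units of k_B
```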
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; as well.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;, which means all spins must be parallel. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm N = \pm 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top bond products&lt;br /&gt;
		energy=-sum(int_en) #sums every bond product and negates to give the total energy&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py script was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate a single thermal average, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
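This estimate is easy to reproduce as a back-of-the-envelope calculation:

```python
configs = 2 ** 100            # number of states of 100 two-level spins
rate = 1e9                    # generous assumption: 1e9 configurations per second
seconds = configs / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.3g} s, i.e. about {years:.3g} years")
```

The result, roughly 4 x 10^13 years, is on the order of a thousand times the age of the universe (about 1.4 x 10^10 years).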
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
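The core of the step above is the Metropolis acceptance rule. Isolating it as a standalone helper (a sketch; metropolis_accept is not part of the supplied scaffold) makes the logic easier to test:

```python
import numpy as np

def metropolis_accept(delta_E, T, random_number):
    """Accept the flip if it lowers the energy, otherwise accept it
    with probability exp(-delta_E / T) (reduced units, k_B = 1)."""
    if delta_E <= 0:
        return True
    return random_number <= np.exp(-delta_E / T)

print(metropolis_accept(-4.0, 1.0, 0.99))  # True: downhill moves are always kept
print(metropolis_accept(4.0, 1.0, 0.5))    # False: exp(-4) is only ~0.018
```

This matches the rejection test in montecarlostep(): a flip is reverted only when deltaE is positive and the random number exceeds the Boltzmann factor.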
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms, as I expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2 s&amp;lt;/math&amp;gt;&lt;br /&gt;
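The quoted uncertainty can be obtained as the standard error of the mean of the repeat timings; a sketch with illustrative numbers (not my actual run times):

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])  # illustrative repeat timings / s
mean = times.mean()
sem = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
print(f"{mean:.1f} s +/- {sem:.1f} s")
```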
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with its horizontal neighbour (each horizontal bond counted once)&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with its vertical neighbour (each vertical bond counted once)&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #partial sum of the horizontal and vertical bond products&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #completes the sum and negates to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
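A quick way to gain confidence in the vectorised version is to check it against the original double loop on a random lattice. A self-contained sketch (here the vertical bonds are rolled with a shift of +1; the shift of -1 used above gives the same total, since every vertical bond is still counted exactly once):

```python
import numpy as np

def energy_loops(lat):
    """Double-loop energy; negative indices give the periodic wrap-around."""
    E = 0
    rows, cols = lat.shape
    for i in range(rows):
        for j in range(cols):
            E -= lat[i][j] * lat[i][j - 1]  # bond with the left neighbour
            E -= lat[i][j] * lat[i - 1][j]  # bond with the neighbour above
    return E

def energy_vectorised(lat):
    """Same quantity with np.roll: every bond counted exactly once."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loops(lat), energy_vectorised(lat))  # identical values
```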
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running ILtimetrial.py on my new accelerated code three times.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the NumPy roll, multiply and sum functions, the accelerated code is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt;, roughly a 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off to exclude from the average energies and magnetisations is the point at which the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures, thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
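The temperature dependence of the Boltzmann factor can be made concrete. For an illustrative uphill move costing 4 energy units (an assumed cost, not taken from the simulations), the Metropolis acceptance probability at each temperature is:

```python
import numpy as np

delta_E = 4.0  # illustrative energy cost of an unfavourable flip (reduced units)
for T in (1.0, 2.0, 3.0, 5.0):
    p = np.exp(-delta_E / T)  # Metropolis acceptance probability for the uphill move
    print(f"T = {T}: accepted with probability {p:.3f}")
```

The probability rises from about 2% at T=1 to about 45% at T=5, which is why the higher-temperature runs keep fluctuating instead of settling.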
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off for the energy and magnetisation is 200 steps: this is after the point where both have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off is 1000 steps, as the energy and magnetisation have comfortably converged by this point for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off is 15000 steps: by this point the energy and magnetisation for T=1 have converged and change very little, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: the energy and magnetisation have largely converged by this point, though not as fully as they would by 100000 steps. I chose the slightly lower value to keep the run times of my Monte Carlo simulations in future tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off contribute to the averages of energy, energy squared, magnetisation and magnetisation squared returned by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only appends values to E, E2, M and M2 if the step count is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of 0.02, for 10000 Monte Carlo steps at each temperature; the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
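For reference, the standard deviations used for the error bars can be recovered from the averages that statistics() already returns, since the variance is the mean of the squares minus the square of the mean. A sketch with illustrative data (std_from_moments is a hypothetical helper):

```python
import numpy as np

def std_from_moments(mean_x, mean_x2):
    """Standard deviation from running averages: sqrt(<x^2> - <x>^2)."""
    return np.sqrt(mean_x2 - mean_x ** 2)

E = np.array([-60.0, -64.0, -62.0, -58.0])  # illustrative post-cut-off energies
sigma = std_from_moments(E.mean(), (E ** 2).mean())
print(sigma)  # matches np.std(E)
```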
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to that used for the 8x8 graph in Figure 11, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the chain ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
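The result can be checked numerically on a two-level system with energies 0 and ε, where both sides of the identity are computable directly (a minimal sketch in reduced units with k_B = 1; all names are illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def avg_E(T, eps=1.0):
    # Boltzmann average energy of a two-level system with levels 0 and eps
    b = math.exp(-eps / T)
    return eps * b / (1.0 + b)

def var_E(T, eps=1.0):
    # Var[E] = <E^2> - <E>^2 for the same system
    b = math.exp(-eps / T)
    q = 1.0 + b
    return eps**2 * b / q - (eps * b / q) ** 2

T = 1.5
h = 1e-5
C_derivative = (avg_E(T + h) - avg_E(T - h)) / (2 * h)  # C = d<E>/dT
C_variance = var_E(T) / T**2                            # C = Var[E]/(k_B T^2)
```
The two values agree to within the error of the finite-difference derivative.&lt;br /&gt;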
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend across the above graphs is that the peak of the heat capacity curve shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#plotting C++ data against Python data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;ː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new polynomial is a significantly better fit even at 3rd degree: it represents my data around the peak much more accurately, which will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data (the temperature at which the Heat Capacity is a maximum for each lattice size), and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and indicates that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the long-range interactions imposed by the periodic boundary conditions are most significant. These interactions are far less significant for the larger lattices, so the energies of the smaller matrices carry a larger error, and thus so do their estimated Curie Temperatures. This affects the accuracy of the line of best fit; to improve it, larger lattice sizes of 128x128, 256x256 etc. should be included in the fit and the smaller matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code used to generate &#039;&#039;Figure 17&#039;&#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Cmax64x64 = np.max(fitted_C_values64) #finds Cmax for 64x64 matrix - done for others already &lt;br /&gt;
Tmax64x64 = peak_T_range64[fitted_C_values64 == Cmax64x64] #finds Tmax corresponding to Cmax&lt;br /&gt;
&lt;br /&gt;
LatSize=[2,4,8,16,32,64] #stores lattice sizes&lt;br /&gt;
Tmax=[Tmax2x2,Tmax4x4,Tmax8x8,Tmax16x16,Tmax32x32,Tmax64x64] #stores corresponding Tmax data&lt;br /&gt;
np.savetxt(&#039;CmaxVSTmax.txt&#039;, (LatSize,Tmax)) #writes data to txt file&lt;br /&gt;
&lt;br /&gt;
ScalData=np.loadtxt(&#039;CmaxVSTmax.txt&#039;) #loads data&lt;br /&gt;
LatticeSize=ScalData[0] #gets lattice sizes&lt;br /&gt;
TempMax=ScalData[1] #gets max temp or curie temp for each lattice&lt;br /&gt;
&lt;br /&gt;
Lmin1min = np.min(np.divide(1,LatticeSize)) #minimum of 1/LatticeSize values&lt;br /&gt;
Lmin1max = np.max(np.divide(1,LatticeSize)) #maximum of 1/LatticeSize values&lt;br /&gt;
&lt;br /&gt;
fitTcl = np.polyfit(np.divide(1,LatticeSize),TempMax, 1) #creates fit object&lt;br /&gt;
&lt;br /&gt;
Lmin1values = np.linspace(Lmin1min, Lmin1max, 1000) #finds 1000 values between min and max x-axis value of 1/LatticeSize&lt;br /&gt;
fitted_Tcl_values = np.polyval(fitTcl, Lmin1values) #creates corresponding Curie Temp values for each value in Lmin1values&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
scalrelax = fig.add_subplot(1,1,1)&lt;br /&gt;
scalrelax.set_xlabel(&#039;1/Lattice Size&#039;)&lt;br /&gt;
scalrelax.set_ylabel(&#039;Curie Temperature/ J/k_B&#039;)&lt;br /&gt;
scalrelax.plot(np.divide(1,LatticeSize),TempMax,color=&#039;black&#039;,marker=&#039;.&#039;,linestyle=&#039;&#039;) #plots Curie Temp against 1/LatticeSize&lt;br /&gt;
scalrelax.plot(Lmin1values,fitted_Tcl_values,color=&#039;red&#039;,marker=&#039;&#039;,linestyle=&#039;-&#039;) #plots line of best fit for data above&lt;br /&gt;
pl.savefig(&#039;CurieTemp.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796486</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796486"/>
		<updated>2019-11-20T09:36:27Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
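&lt;br /&gt;
This count can also be verified directly in code (a minimal sketch of the double-counted neighbour sum, with J = 1 in reduced units):&lt;br /&gt;
&lt;br /&gt;
```python
# 1D lattice of three up-spins with periodic boundary conditions
spins = [1, 1, 1]
N = len(spins)
J = 1.0

# each spin i is paired with both of its neighbours, so every bond is counted twice
total = sum(spins[i] * spins[(i + d) % N] for i in range(N) for d in (-1, 1))
E = -0.5 * J * total
print(E)  # -3.0, i.e. E = -DNJ with D = 1, N = 3
```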
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its 6 neighbours&#039; spins reverses sign, which increases the total energy of the system. Each reversed interaction raises the energy by &amp;lt;math&amp;gt;2J&amp;lt;/math&amp;gt; (from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;), so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000!}&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
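&lt;br /&gt;
As a quick numerical check of the multiplicity and entropy (a sketch using log-gamma to avoid evaluating the large factorials directly):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Omega = 1000! / (999! * 1!) = 1000, computed via ln(n!) = lgamma(n + 1)
log_omega = math.lgamma(1001) - math.lgamma(1000) - math.lgamma(2)

# the ground state has S = 0, so Delta S = k_B * ln(Omega); dS below is in units of k_B
dS = log_omega
print(round(dS, 2))  # 6.91
```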
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so it is expected that the lattices will follow suit and have zero entropy at 0 K. For the entropy to be zero there must be only one possible configuration, which requires all spins to be parallel, giving a magnetisation of &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt; (or &amp;lt;math&amp;gt;M = -N&amp;lt;/math&amp;gt; if all spins point down). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = N = 1000&amp;lt;/math&amp;gt;, the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100} \approx 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; to analyse the whole system, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
&lt;br /&gt;
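The arithmetic behind this estimate can be reproduced directly (the &amp;lt;math&amp;gt;10^9&amp;lt;/math&amp;gt; configurations per second rate is the figure assumed in the task):&lt;br /&gt;

```python
configs = 2 ** 100                       # configurations of 100 two-state spins
rate = 1e9                               # assumed configurations analysed per second
seconds = configs / rate
years = seconds / (365.25 * 24 * 3600)   # convert seconds to years
print(f"{seconds:.3g} s, i.e. about {years:.3g} years")
```
&lt;br /&gt;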
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another, confirming that, as expected, spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \ s \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
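The average and its uncertainty can be computed along these lines (the three timings below are illustrative placeholders, not the exact readings in Figure 4):&lt;br /&gt;

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])  # illustrative repeat timings in seconds

mean = times.mean()
# sample standard deviation (ddof=1), then standard error of the mean
sem = times.std(ddof=1) / np.sqrt(len(times))
print(f"average time: {mean:.1f} s +/- {sem:.1f} s")
```
&lt;br /&gt;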
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(left+top) #a single NumPy sum over all left and top spin products gives the total energy&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
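One way to gain confidence in the vectorised functions is to check them against the original double loop on random lattices (a standalone sketch, with the class methods above inlined as plain functions):&lt;br /&gt;

```python
import numpy as np

def energy_loop(lat):
    # original double loop: pair each spin with its left and top neighbours
    # (Python's negative indexing supplies the periodic wrap-around for free)
    total = 0
    rows, cols = lat.shape
    for i in range(rows):
        for j in range(cols):
            total += lat[i, j] * lat[i, j - 1] + lat[i, j] * lat[i - 1, j]
    return -total

def energy_vec(lat):
    # vectorised version: roll pairs every spin with one neighbour per axis
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(42)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loop(lat) == energy_vec(lat))  # -> True
```

Both versions count every horizontal and vertical bond exactly once, so they agree exactly for integer spins.&lt;br /&gt;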
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my new accelerated code three times.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after replacing the double loop with the NumPy roll, multiply and sum functions, giving a new average time of &amp;lt;math&amp;gt;0.790 \ s \pm 0.005 \ s&amp;lt;/math&amp;gt;, roughly a thirty-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results of running the ILfinalframe.py file for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are above the Curie Temperature, in which case spontaneous magnetisation does not occur and the system does not converge to the lowest energy state. At the higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
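The effect of temperature on the Boltzmann factor can be made quantitative. In a fully aligned 2D lattice, flipping one interior spin costs &amp;lt;math&amp;gt;\Delta E = 8J&amp;lt;/math&amp;gt; (four bonds, each changing by &amp;lt;math&amp;gt;2J&amp;lt;/math&amp;gt;), so in reduced units the Metropolis acceptance probability &amp;lt;math&amp;gt;e^{-\Delta E/T}&amp;lt;/math&amp;gt; grows rapidly with temperature:&lt;br /&gt;

```python
import math

delta_E = 8.0  # cost of flipping one interior spin in a fully aligned 2D lattice (J = k_B = 1)
for T in (1, 2, 3, 5):
    p = math.exp(-delta_E / T)
    print(f"T = {T}: acceptance probability = {p:.3g}")
```

At T=1 such a flip is accepted only a few times in every 10000 attempts, while at T=5 roughly one attempt in five succeeds, which is why the high-temperature runs never settle into the fully aligned state.&lt;br /&gt;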
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain after 200 steps. The T=3 result is included to show the large fluctuations at the higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, by which point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation have largely converged and change little thereafter for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039; above, a cut-off of 50000 steps was chosen for the 32x32 matrix: by this point the energy and magnetisation have largely converged, though not as completely as they would by 100000 steps. The slightly lower value keeps the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off contribute to the averages of the energy, energy squared, magnetisation and magnetisation squared returned by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix simulation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if past the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the ILtemperaturerange.py file was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps; the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section is identical to the one used for the 8x8 graph above in Figure 11, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition, the variance is &amp;lt;math&amp;gt;Var[E] = (\Delta E)^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, using the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
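The relation can be verified numerically for a simple two-level system (energies 0 and 1 in units where &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;) by comparing the variance formula with a finite-difference derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; (a sketch, separate from the lab scripts):&lt;br /&gt;

```python
import numpy as np

eps = np.array([0.0, 1.0])  # two-level system, k_B = 1

def avg_E(T):
    w = np.exp(-eps / T)   # Boltzmann weights
    p = w / w.sum()        # normalised probabilities
    return np.sum(p * eps)

def var_E(T):
    w = np.exp(-eps / T)
    p = w / w.sum()
    return np.sum(p * eps ** 2) - avg_E(T) ** 2

T, dT = 1.5, 1e-5
C_fd = (avg_E(T + dT) - avg_E(T - dT)) / (2 * dT)  # numerical d<E>/dT
C_var = var_E(T) / T ** 2                          # Var[E] / (k_B T^2)
print(abs(C_fd - C_var) < 1e-6)  # -> True
```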
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend across the graphs above is that the heat capacity peak shifts towards lower temperatures as the matrix size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code used to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #Python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each lattice size.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows my heat capacity against temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree. It represents the data around the peak much more accurately, which makes it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 16&#039;&#039;ː&lt;br /&gt;
&amp;lt;pre&amp;gt;data16 = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;) #loads data to variable&lt;br /&gt;
&lt;br /&gt;
T16 = data16[:,0] #gets temps&lt;br /&gt;
C16 = data16[:,5] # gets heat capacities&lt;br /&gt;
&lt;br /&gt;
Tmin16 = 2.15 #chosen min temp&lt;br /&gt;
Tmax16 = 2.55 #chosen max temp&lt;br /&gt;
&lt;br /&gt;
selection16 = np.logical_and(T16 &amp;gt; Tmin16, T16 &amp;lt; Tmax16) #choose only those rows where both conditions are true&lt;br /&gt;
peak_T_values16 = T16[selection16] #choose temp values in range chosen above&lt;br /&gt;
peak_C_values16 = C16[selection16] #choose heat cap values in range of t above&lt;br /&gt;
&lt;br /&gt;
fit16 = np.polyfit(peak_T_values16,peak_C_values16,3) #fit 3rd order polynomial&lt;br /&gt;
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000) #defines 1000 temps within data range&lt;br /&gt;
fitted_C_values16 = np.polyval(fit16, peak_T_range16) #use the fit object to get corresponding values of heat cap&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T16,C16,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat cap against temp&lt;br /&gt;
heatcapax.plot(peak_T_range16,fitted_C_values16,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for small range&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_16x16C_3.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
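Once the restricted polynomial is in hand, the temperature at which the heat capacity is a maximum can be read off by evaluating the fit on a fine grid and taking the position of the largest value. The sketch below is a self-contained illustration of that step, using an invented bell-shaped curve in place of the real 16x16C.dat columns (the variable names mirror the script above, but the data are synthetic):&lt;br /&gt;

```python
import numpy as np

# Synthetic heat-capacity curve with a peak near T = 2.3, standing in for the
# real columns of 16x16C.dat (invented data, for illustration only).
T16 = np.linspace(0.5, 5.0, 451)
C16 = np.exp(-((T16 - 2.3) ** 2) / 0.05)

# Restrict the fit to the region around the peak, as in the script above.
Tmin16, Tmax16 = 2.15, 2.55
lo, hi = np.searchsorted(T16, [Tmin16, Tmax16])
fit16 = np.polyfit(T16[lo:hi], C16[lo:hi], 3)

# Evaluate the fitted polynomial on a fine grid and read off the peak position.
peak_T_range16 = np.linspace(Tmin16, Tmax16, 1000)
fitted_C16 = np.polyval(fit16, peak_T_range16)
T_at_Cmax = peak_T_range16[np.argmax(fitted_C16)]
```

With the real data, T_at_Cmax would be the estimate of the Curie temperature for that lattice size.&lt;br /&gt;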
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt;, where L is the lattice side length, used to determine the Curie temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the heat capacity is a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with the literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my extrapolation predicts. However, the difference between the two values is only 0.008, which is remarkably small; the closeness of the agreement is somewhat surprising and suggests that the error in my estimate of the Curie temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the periodic boundary conditions make longer-range correlations artificially significant, so the energies, and hence the estimated Curie temperatures, are less accurate for these sizes. These finite-size effects are far less significant for the larger lattices. To improve the accuracy of the line of best fit, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
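The extrapolation in &#039;&#039;Figure 17&#039;&#039; amounts to a straight-line fit of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against 1/L, whose y-intercept is the estimate of &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. A minimal sketch of that calculation, with invented peak temperatures standing in for the measured heat-capacity maxima:&lt;br /&gt;

```python
import numpy as np

# Lattice side lengths and illustrative peak temperatures T_{C,L}.  These
# numbers are invented to follow the scaling relation
# T_{C,L} = T_{C,inf} + A/L exactly; the real values come from the fitted
# heat-capacity maxima.
L_sizes = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
T_C_L = 2.269 + 1.0 / L_sizes

# Straight-line fit of T_{C,L} against 1/L: the slope is A and the
# y-intercept is the infinite-lattice estimate T_{C,inf}.
A, T_C_inf = np.polyfit(1.0 / L_sizes, T_C_L, 1)
```

With the real data the points for the smallest lattices deviate most from the line, consistent with the discussion above.&lt;br /&gt;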
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796480</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796480"/>
		<updated>2019-11-20T09:30:53Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an e...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins respectively.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt;, so for a lattice of 100 spins in its lowest energy state (all spins up), &amp;lt;math&amp;gt;S =  k_B \ln\left(\frac{100!}{100! \ 0!}\right) = k_B \ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice, counting each bond only once, every lattice site contributes three unique interactions (with its neighbours to the left, above and in front). In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with the spin of each of its six nearest neighbours reverses sign, which increases the total energy of the system. Each of these 6 bonds changes its energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt;\Omega = \frac{1000!}{999! \ 1!} = 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
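These figures are easily verified numerically. A minimal check, in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;, using the binomial form of the multiplicity:&lt;br /&gt;

```python
import math

# Multiplicity before the flip (all 1000 spins up) and after (one spin down),
# using Omega = N! / (n_up! * n_down!), i.e. a binomial coefficient.
omega_before = math.comb(1000, 0)   # 1000!/(1000! 0!) = 1
omega_after = math.comb(1000, 1)    # 1000!/(999! 1!) = 1000

# Entropy change in units of k_B.
delta_S = math.log(omega_after) - math.log(omega_before)
```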
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattice is expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single configuration (multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt;, so that &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;), which means all spins must be parallel, giving a magnetisation of &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top bond products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations of the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate every configuration, which is longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
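The arithmetic behind this estimate can be checked in a few lines:&lt;br /&gt;

```python
# Arithmetic behind the estimate: 2**100 configurations analysed at 1e9 per
# second, converted to years for comparison with the age of the universe.
n_configs = 2 ** 100
seconds = n_configs / 1e9
years = seconds / (365.25 * 24 * 3600)
```

This works out to roughly &amp;lt;math&amp;gt;4\times 10^{13}&amp;lt;/math&amp;gt; years, several thousand times the age of the universe.&lt;br /&gt;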
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state, with all of the spins parallel to one another. This confirms, as I expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2s&amp;lt;/math&amp;gt;&lt;br /&gt;
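The quoted mean and error come from the three ILtimetrial.py runs; the same bookkeeping can be automated with the standard-library timeit and statistics modules. A sketch, with a trivial stand-in workload in place of the real 2000 Monte Carlo steps:&lt;br /&gt;

```python
import timeit
import statistics

def run_trial():
    # Stand-in for one ILtimetrial.py run (2000 Monte Carlo steps); any
    # deterministic workload will do for illustrating the bookkeeping.
    return sum(i * i for i in range(10000))

# Three repeats, as in Figure 4, then a mean and a spread to quote.
times = timeit.repeat(run_trial, number=1, repeat=3)
mean_time = statistics.mean(times)
spread = statistics.stdev(times)
```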
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
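A quick way to gain confidence in the vectorised rewrite is to check that it reproduces the original double loop exactly on random lattices. The sketch below uses standalone functions rather than the IsingLattice methods so the comparison can be run in isolation (the top-neighbour shift direction differs from the snippet above, but either direction counts each bond exactly once):&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # Original style: explicit double loop.  Negative indices wrap around,
    # giving the periodic boundary conditions for free.
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j] * lat[i][j - 1]   # bond with the spin to the left
            total += lat[i][j] * lat[i - 1][j]   # bond with the spin above
    return -total

def energy_vectorised(lat):
    # Accelerated style: np.roll shifts the lattice so each element sits next
    # to its left/top neighbour, and np.multiply forms the bond products.
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -int(np.sum(left) + np.sum(top))

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(8, 8))
agree = energy_loops(lattice) == energy_vectorised(lattice)
```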
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after replacing the double loops with the roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt;, roughly a 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie temperature: spontaneous magnetisation does not occur, so the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, by which the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by this point the energy and magnetisation have largely converged for T=1 and will not change much further; the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, although not quite as fully as they would have by 100000 steps. I chose the slightly lower value to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics function determines the average energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the arrays of E, E2, M and M2 if above the specified cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
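Since the post-cut-off samples are simply appended to lists, the averages returned by the unmodified statistics() function reduce to plain means over those lists. Below is a standalone sketch of that averaging (the list names mirror self.E, self.E2, self.M and self.M2 from the listing above; the sample values are invented, not simulation output):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Standalone sketch of the averaging assumed to happen in statistics():
# plain means over the post-cut-off samples recorded by montecarlostep().
def averages(E, E2, M, M2):
    # E, E2, M, M2 mirror self.E, self.E2, self.M, self.M2 above
    return np.mean(E), np.mean(E2), np.mean(M), np.mean(M2)

# three invented post-cut-off samples from a small lattice
avgE, avgE2, avgM, avgM2 = averages([-8.0, -8.0, -4.0],
                                    [64.0, 64.0, 16.0],
                                    [4.0, 4.0, 2.0],
                                    [16.0, 16.0, 4.0])
```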
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
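The extra standard-deviation columns mentioned in the comments above can be obtained from quantities the script already records, since the variance follows from the mean of the square and the squared mean. A minimal sketch with invented numbers standing in for two temperature points (the real values come from the edited ILtemperaturerange.py):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# The standard deviation follows from the recorded averages via
# Var[E] = <E^2> - <E>^2 (and likewise for M). The numbers below are
# invented, standing in for two temperature points of an 8x8 run.
avgE = np.array([-127.8, -120.5])     # <E> at two temperatures
avgE2 = np.array([16360.0, 14700.0])  # <E^2> at the same temperatures
stdE = np.sqrt(avgE2 - avgE**2)       # standard deviation of the energy
```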
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to that for the 8x8 graph above in Figure 11, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
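The per-size plots can be produced in a single loop over the saved files; a sketch is given below. The &amp;quot;LxL.dat&amp;quot; naming follows the task description, the plotting lines are indicative only (and so commented out), and the helper simply rescales a whole-lattice column to a per-spin quantity:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def per_spin(column, L):
    """Convert a whole-lattice quantity to a per-spin quantity for an LxL lattice."""
    return np.asarray(column) / L**2

# indicative loop (commented out so the sketch stays self-contained):
# for L in (2, 4, 8, 16, 32):
#     data = np.loadtxt(f"{L}x{L}.dat")
#     enerax.plot(data[:, 0], per_spin(data[:, 1], L), label=f"{L}x{L}")
# enerax.legend()

# quick check with an invented 8x8 energy column
e = per_spin([-128.0, -64.0], 8)
```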
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And applying the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
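The result can be checked numerically, since for any small system the variance formula must agree with a finite-difference derivative of the average energy. A sketch for a two-level system in reduced units (the two-level system and its energies are illustrative choices, not part of the report&#039;s code):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Check C = Var[E]/(kB*T^2) against a finite-difference d<E>/dT for a
# two-level system with energies 0 and 1 (reduced units, kB = 1).
def moments(T, eps=np.array([0.0, 1.0])):
    w = np.exp(-eps / T)          # Boltzmann weights exp(-eps_i / T)
    p = w / w.sum()               # probabilities p_i
    return p @ eps, p @ eps**2    # <E>, <E^2>

T = 1.5
E, E2 = moments(T)
C_var = (E2 - E**2) / T**2        # heat capacity from the variance

h = 1e-5                          # central difference for d<E>/dT
C_num = (moments(T + h)[0] - moments(T - h)[0]) / (2 * h)
```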
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the heat capacity curve shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each matrix.&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Here is the source code for &#039;&#039;Figure 15&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data_test = np.loadtxt(&amp;quot;16x16C.dat&amp;quot;)&lt;br /&gt;
T_test = data_test[:,0] #gets temperatures&lt;br /&gt;
C_test = data_test[:,5] #gets heat capacity data&lt;br /&gt;
&lt;br /&gt;
#first we fit the polynomial to the data&lt;br /&gt;
fit_test = np.polyfit(T_test, C_test, 35) # fit a polynomial&lt;br /&gt;
&lt;br /&gt;
#now we generate interpolated values of the fitted polynomial over the range of our function&lt;br /&gt;
T_min_test = 0.5 #np.min(T_test)&lt;br /&gt;
T_max_test = 5 #np.max(T_test)&lt;br /&gt;
&lt;br /&gt;
T_range_test = np.linspace(T_min_test, T_max_test, 1000) #generate 1000 evenly spaced points between T_min and T_max&lt;br /&gt;
fitted_C_values_test = np.polyval(fit_test, T_range_test)# use the fit object to generate the corresponding values of C&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(T_test,C_test,color=&#039;orange&#039;,label=&#039;C++ Data&#039;) #plots C data of heat capacity against temp&lt;br /&gt;
heatcapax.plot(T_range_test,fitted_C_values_test,label=&#039;Fitted Polynomial&#039;) #plots fitted polynomial for whole range of temp&lt;br /&gt;
heatcapax.legend()&lt;br /&gt;
pl.savefig(&#039;FIT_TEST16x16_35.png&#039;, bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and does not capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only in a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new fit is significantly better even though it is only a 3rd degree polynomial; it represents my data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
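A minimal sketch of the restricted fit itself is given below. The window T = 2.15-2.55 and the degree of 3 are the values quoted above; the peaked data here is synthetic, standing in for the C++ heat capacity file:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Fit only the points inside the quoted window around the peak.
T = np.linspace(0.5, 5.0, 500)
C = np.exp(-((T - 2.3) ** 2) / 0.05)     # synthetic stand-in for noisy C(T)

sel = (T >= 2.15) & (T <= 2.55)          # boolean mask for the peak region
fit = np.polyfit(T[sel], C[sel], 3)      # low-degree fit to the window only
T_fit = np.linspace(2.15, 2.55, 200)
C_fit = np.polyval(fit, T_fit)
T_peak = T_fit[np.argmax(C_fit)]         # estimate of the peak position
```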
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the raw data obtained from the temperature at which the Heat Capacity was a maximum for each lattice, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
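The scaling fit amounts to a straight line in 1/L whose intercept is the infinite-lattice Curie Temperature. A sketch with fabricated (L, T_C) pairs chosen to obey the relation exactly (the real pairs come from the peak positions found from the datafiles):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# T_C(L) = A/L + T_C(inf): plotting T_C against 1/L gives a straight
# line whose intercept is T_C(inf). The pairs below are fabricated so
# that the fit can be checked against known values.
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
Tc = 2.269 + 1.0 / L                     # invented data obeying the relation

A, Tc_inf = np.polyfit(1.0 / L, Tc, 1)   # slope A, intercept T_C(inf)
```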
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and suggests that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smallest lattice sizes, 2x2 and 4x4, where long-range fluctuations cannot be captured and the periodic boundary conditions have the greatest effect. These finite-size effects are far less significant for the larger lattices; they make the energies of the smaller matrices less accurate and give a larger error in the Curie Temperature estimated for those lattice sizes. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796478</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796478"/>
		<updated>2019-11-20T09:28:23Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: T, E, E^2, M, M^2, C (the final...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so for &amp;lt;math&amp;gt;N=100&amp;lt;/math&amp;gt; the multiplicity is &amp;lt;math&amp;gt;\Omega = \frac{100!}{100! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has six nearest neighbours, but only three unique interactions (say with the neighbours to its left, top and front) once double counting is removed. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign, which increases the total energy of the system. All six bonds involving the flipped spin change in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999! \ 1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
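The multiplicities and the entropy change quoted above can be verified directly (entropy in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;); this check is not part of the report&#039;s simulation code:&lt;br /&gt;
&lt;br /&gt;
```python
from math import comb, log

# Multiplicity before the flip (all 1000 spins up) and after (one down).
omega_before = comb(1000, 0)   # 1000!/(1000! 0!) = 1
omega_after = comb(1000, 1)    # 1000!/(999! 1!) = 1000

dS = log(omega_after) - log(omega_before)   # entropy change / kB
```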
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently it is expected that the lattices will follow suit and have zero entropy at 0 K. To have zero entropy there must be only one possible configuration, which requires all spins to be parallel, so that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation is &amp;lt;math&amp;gt;M = N = 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top bond products&lt;br /&gt;
		energy=-sum(int_en) #minus the sum of all bond products gives the total energy&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. ILcheck.py was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21}\ s&amp;lt;/math&amp;gt; to evaluate a single average, which is far longer than the age of the universe (&amp;lt;math&amp;gt;\sim 4\times 10^{17}\ s&amp;lt;/math&amp;gt;) and therefore not a practical approach.&lt;br /&gt;
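As a quick sanity check, the arithmetic above can be reproduced directly (a sketch; the 10^9 configurations per second rate is the generous assumption from the task):

```python
# Cost of enumerating every configuration of a 100-spin Ising lattice.
n_configs = 2 ** 100                      # two states per spin
rate = 1e9                                # assumed configurations analysed per second
seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{n_configs:.3g} configurations -> {seconds:.3g} s (~{years:.3g} years)")
```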
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
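The acceptance rule can also be written without recomputing the full lattice energy twice: flipping spin (i, j) only changes the four bonds it participates in, so dE follows from its nearest neighbours alone. A minimal self-contained sketch of this variant (not the marked code above; assumes J = k_B = 1 and periodic boundaries):

```python
import numpy as np

def metropolis_step(lattice, T, rng):
    """One Metropolis step on a periodic lattice of +/-1 spins (J = k_B = 1)."""
    n, m = lattice.shape
    i, j = rng.integers(n), rng.integers(m)
    # Flipping s_ij changes only its four bonds: dE = 2 * s_ij * (sum of neighbours)
    neighbours = (lattice[(i + 1) % n, j] + lattice[i - 1, j]
                  + lattice[i, (j + 1) % m] + lattice[i, j - 1])
    dE = 2 * lattice[i, j] * neighbours
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        lattice[i, j] *= -1  # accept the flip; otherwise leave the lattice unchanged

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
for _ in range(1000):
    metropolis_step(lat, 1.0, rng)
```

Because each step is O(1) rather than O(N), this also removes the two full energy() calls per cycle.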
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation occurs and the system tends to its lowest energy state, in which all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This shows that, as expected, spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
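For reference, the average and its error can be computed as follows (the timings below are placeholders for illustration, not my actual runs):

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])  # placeholder timings in seconds
mean = times.mean()
sem = times.std(ddof=1) / np.sqrt(times.size)  # standard error of the mean
print(f"{mean:.1f} s +/- {sem:.1f} s")
```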
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with its vertical neighbour&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #first sum over the array of bond products (each bond counted once)&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #second sum and negation gives the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
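A quick consistency check (a standalone sketch, not the marked code) confirms that the np.roll/np.multiply version gives the same total energy as the original double loop, since each horizontal and vertical bond is counted exactly once:

```python
import numpy as np

def energy_loops(lat):
    """Reference double-loop energy: left and top bonds per site, periodic via negative indexing."""
    n, m = lat.shape
    e = 0
    for i in range(n):
        for j in range(m):
            e -= lat[i, j] * (lat[i, j - 1] + lat[i - 1, j])
    return e

def energy_vectorised(lat):
    """np.roll shifts the lattice so every site meets its left/top neighbour at once."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
assert energy_loops(lat) == energy_vectorised(lat)
```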
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running ILtimetrial.py three times on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt; - a speed-up of roughly 30 times.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor &amp;lt;math&amp;gt;exp(-\Delta E/k_B T)&amp;lt;/math&amp;gt; is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore only be determined from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
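The temperature dependence of the Boltzmann factor can be made concrete. For a fully aligned 2D lattice, flipping one spin costs dE = 8 (in units of J), and the probability of accepting that flip grows rapidly with T (a small illustrative calculation):

```python
import numpy as np

dE = 8.0  # cost of flipping one spin in a fully aligned 2D lattice (J = 1)
for T in (1.0, 2.0, 3.0, 5.0):
    print(f"T = {T}: acceptance probability exp(-dE/T) = {np.exp(-dE / T):.3g}")
```

At T = 1 fewer than 1 in 2000 such flips are accepted, while at T = 5 roughly 1 in 5 are, which is why the high-temperature runs never settle.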
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, as by this point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2 as well.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps: by then the energy and magnetisation have largely converged for T=1 and will not change much further, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a cut-off of 50000 steps was chosen, by which point the energy and magnetisation have largely converged, although not quite as fully as at 100000 steps. I chose the slightly lower value to keep the run times of the Monte Carlo simulations in future tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values from cycles beyond the pre-determined cut-off are recorded, and therefore only those values are included when the statistics() function averages the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays once past the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of 0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to that for the 8x8 graph above in &#039;&#039;Figure 11&#039;&#039;, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, using the chain rule together with &amp;lt;math&amp;gt;\frac{\partial \beta}{\partial T} = -\frac{1}{k_B T^2}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
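This identity can be checked numerically for a simple two-level system with energies 0 and &amp;lt;math&amp;gt;\epsilon&amp;lt;/math&amp;gt; (a sketch in reduced units with k_B = 1, not part of the Ising code): the finite-difference derivative of the average energy with respect to T should equal Var[E]/T^2.

```python
import numpy as np

eps = 1.0  # two-level system with energies 0 and eps, k_B = 1

def upper_population(T):
    """Boltzmann population of the upper level."""
    return np.exp(-eps / T) / (1.0 + np.exp(-eps / T))

def avg_E(T):
    """<E> from the Boltzmann distribution over the two levels."""
    return eps * upper_population(T)

def var_E(T):
    """Var[E] = <E^2> - <E>^2."""
    p1 = upper_population(T)
    return eps**2 * p1 - (eps * p1) ** 2

T, dT = 0.7, 1e-6
C_direct = (avg_E(T + dT) - avg_E(T - dT)) / (2 * dT)  # C = d<E>/dT
C_fluct = var_E(T) / T**2                              # C = Var[E]/(k_B T^2)
print(C_direct, C_fluct)
```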
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code used to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity per spin for each temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #array of (average energy) squared, &amp;lt;E&amp;gt;^2&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #variance of the energy: Var[E] = &amp;lt;E^2&amp;gt; - &amp;lt;E&amp;gt;^2&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2) #C per spin = Var[E]/(k_B T^2), with k_B = 1 in reduced units&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the heat capacity curve shifts towards lower temperatures as the size of the matrix increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here is the source code used to produce the figures:&lt;br /&gt;
&amp;lt;pre&amp;gt;#fitting C++ data&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.plot(temps2x2, np.array(energies2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python energy against T&lt;br /&gt;
enerax.plot(temps2x2C, energies2x2C, color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ energy against T&lt;br /&gt;
magax.plot(temps2x2, np.array(mag2x2)/4,color=&#039;black&#039;,alpha=0.7,label=&#039;Python Data&#039;) #python magnetisation against T&lt;br /&gt;
magax.plot(temps2x2C, mag2x2C,color=&#039;red&#039;,label=&#039;C++ Data&#039;) #C++ magnetisation against T&lt;br /&gt;
enerax.legend() #shows legend on energy graph&lt;br /&gt;
magax.legend() #shows legend on magnetisation graph&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relevant variables and .dat files were changed for each lattice size.&lt;br /&gt;
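The C++ data files were read with the NumPy loadtxt function. Below is a minimal, self-contained sketch of that round trip; the file name and numerical values are illustrative placeholders, not the real C++ output.&lt;br /&gt;

```python
import numpy as np

# Write a tiny six-column file in the same layout as the C++ output
# (T, E, E^2, M, M^2, C, with the final five quantities per spin).
# The file name and the numbers are placeholders.
demo = np.array([[1.0, -2.0, 4.0, 1.0, 1.0, 0.01],
                 [2.0, -1.5, 2.4, 0.8, 0.7, 0.90]])
np.savetxt("demo.dat", demo)

# Read it back and unpack one 1D array per column, as done for each lattice size.
data = np.loadtxt("demo.dat")
temps, energies, energysq, mags, magsq, heatcap = data.T
```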
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script used to read the data and plot the fitted polynomial can be found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 15&#039;&#039; shows my heat capacity against temperature data for a 16x16 matrix together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
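The fitting itself uses the NumPy polyfit and polyval functions. The sketch below reproduces the approach on synthetic data (an artificial peak standing in for the real 16x16 heat capacity curve); the report used degree 35, while a more modest degree is used here purely for illustration.&lt;br /&gt;

```python
import numpy as np

# Synthetic stand-in for a sharp heat-capacity peak (not the real 16x16 data).
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.05)

# Fit a single polynomial across the whole temperature range and evaluate it
# back on the same grid for plotting.
degree = 10
coeffs = np.polyfit(T, C, degree)
C_fit = np.polyval(coeffs, T)

# Maximum absolute misfit of the global polynomial; a single polynomial
# struggles to track a peak this sharp over the full range.
residual = np.max(np.abs(C - C_fit))
```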
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller temperature range (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix, along with a polynomial fitted over a much more restricted range of temperatures with a significantly lower degree]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new fitted polynomial is a significantly better fit, even at only degree 3. It represents my data around the peak of the graph much more accurately and will make it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
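A sketch of the restricted fit, again on synthetic data rather than the real 16x16 results: a boolean mask selects only the points near the peak, and a low-degree polynomial is fitted to that window alone.&lt;br /&gt;

```python
import numpy as np

# Synthetic stand-in for the heat-capacity curve (not the real 16x16 data).
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.05)

# Select only the peak region, T = 2.15 to 2.55, as used in the report.
peak = np.logical_and(T >= 2.15, T <= 2.55)

# Degree-3 fit to the windowed data only.
coeffs = np.polyfit(T[peak], C[peak], 3)
C_fit = np.polyval(coeffs, T[peak])

# The restricted low-degree fit tracks the peak region closely.
max_residual = np.max(np.abs(C[peak] - C_fit))
```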
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie temperature of an infinite 2D Ising model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the heat capacity was a maximum for each lattice size, and the red line is a linear fit to the data whose y-intercept gives the Curie temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of the Curie temperature against 1/Lattice Size for each lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value for &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie temperature: for an infinite lattice, spontaneous magnetisation would actually cease at a slightly lower temperature than my extrapolation predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small, and the level of agreement is somewhat surprising; it suggests that the error in my estimate of the Curie temperature for each lattice size is relatively small. The points with the largest residuals and deviation from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller 2x2 and 4x4 lattices, where longer-range interactions introduced by the periodic boundary conditions are more significant. These interactions are far less important for the larger lattices, so the energies, and hence the Curie temperature estimates, of the smaller lattices carry a larger error. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes (128x128, 256x256, etc.) should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
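The extrapolation itself is a straight-line fit in 1/L. The sketch below uses illustrative placeholder values of the per-lattice Curie temperatures, not the values measured in this report; the intercept of the fit is the estimate of the infinite-lattice Curie temperature.&lt;br /&gt;

```python
import numpy as np

# Illustrative placeholder estimates of T_C for each lattice side length
# (these are NOT the values measured in the report).
L  = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
Tc = np.array([2.51, 2.44, 2.34, 2.31, 2.29, 2.28])

# Scaling relation: T_C,L = A/L + T_C,inf, i.e. a straight line in 1/L.
slope, intercept = np.polyfit(1.0 / L, Tc, 1)
Tc_inf = intercept  # extrapolated Curie temperature of the infinite lattice
```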
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796477</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796477"/>
		<updated>2019-11-20T09:24:53Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, \mathrm{V...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{up}! \ n_{down}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{up}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{down}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, so for a lattice of 100 spins in its lowest energy state &amp;lt;math&amp;gt;S =  k_B ln(\frac{100!}{100! \ 0!}) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its six nearest neighbours&#039; spins becomes negative, which increases the total energy of the system. Each of these six pair interactions changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip, the multiplicity becomes &amp;lt;math&amp;gt;\Omega = \frac{1000!}{999! \ 1!} = 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
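The multiplicity and entropy change above can be checked numerically (entropy here in units of k_B):&lt;br /&gt;

```python
import math

# Multiplicity Omega = N! / (n_up! n_down!) before and after a single flip
# in the N = 1000 ground-state lattice.
omega_before = math.factorial(1000) // (math.factorial(1000) * math.factorial(0))
omega_after  = math.factorial(1000) // (math.factorial(999) * math.factorial(1))

# Entropy change in units of k_B: Delta S = ln(Omega_after) - ln(Omega_before).
delta_S = math.log(omega_after) - math.log(omega_before)
```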
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. For zero entropy all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. With all spins parallel there is only one possible configuration, so for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt;, the entropy &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;, and the expected magnetisation is &amp;lt;math&amp;gt;M = 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
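As a quick numerical check of the expected ground-state magnetisation, a fully aligned 10x10x10 lattice (D = 3, N = 1000) gives M = N:&lt;br /&gt;

```python
import numpy as np

# A fully aligned 3D lattice of 1000 spins: every spin is +1,
# so the magnetisation M = sum of all spins = N = 1000.
lattice = np.ones((10, 10, 10), dtype=int)
M = int(np.sum(lattice))
```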
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #negative sum of all spin products gives the total energy&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse every configuration, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
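The arithmetic behind this estimate:&lt;br /&gt;

```python
# 2^100 configurations analysed at 10^9 configurations per second.
n_configs = 2 ** 100
seconds = n_configs / 1e9
years = seconds / (365.25 * 24 * 3600)  # tens of trillions of years
```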
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt; is expected: the system tends to its lowest energy state, where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which confirms that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
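The quoted mean and error can be obtained from the repeat timings in the usual way; the numbers below are illustrative stand-ins for the three ILtimetrial.py runs, not the exact recorded values.&lt;br /&gt;

```python
import numpy as np

# Hypothetical repeat timings in seconds (placeholders for the three runs).
times = np.array([24.1, 24.3, 24.5])

mean_time = np.mean(times)
# Standard error of the mean: sample standard deviation / sqrt(number of runs).
std_error = np.std(times, ddof=1) / np.sqrt(len(times))
```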
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
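A quick sanity check of the vectorised energy: for a fully aligned 4x4 lattice with periodic boundaries, each spin contributes one left and one top pair, so the ground-state energy is E = -DNJ = -2N = -32 for N = 16 (with J = 1).&lt;br /&gt;

```python
import numpy as np

# Fully aligned 4x4 lattice: all spins +1.
lattice = np.ones((4, 4), dtype=int)

# Product of each spin with its left neighbour and with its top neighbour;
# np.roll applies the periodic boundary conditions automatically.
left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
top  = np.multiply(np.roll(lattice, 1, axis=0), lattice)

energy = -int(np.sum(left + top))  # -2N for the ground state
```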
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after switching to the NumPy roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt; - roughly a 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
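The role of the Boltzmann factor can be illustrated directly: the Metropolis acceptance probability exp(-deltaE/T) for an energetically unfavourable flip grows rapidly with temperature. The value deltaE = 4 below is an arbitrary illustrative choice in reduced units.&lt;br /&gt;

```python
import numpy as np

# Acceptance probability exp(-deltaE/T) for an unfavourable flip with
# deltaE = 4 (arbitrary illustrative value) at the temperatures studied.
deltaE = 4.0
p_accept = {T: float(np.exp(-deltaE / T)) for T in (1.0, 2.0, 3.0, 5.0)}
```

At T=1 such a flip is accepted about 2% of the time, while at T=5 it is accepted about 45% of the time, which is why the high-temperature runs fluctuate instead of settling into the ground state.&lt;br /&gt;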
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point is 200 steps: this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation for T=1 have largely converged and will not change much further; the same applies to the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not quite as fully as at 100000 steps. I chose the slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks would not be excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
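The idea of discarding equilibration steps before averaging can be sketched independently of the class (a minimal illustration with a made-up time series, not the report&#039;s actual data):&lt;br /&gt;

```python
import numpy as np

def average_after_cutoff(samples, cutoff):
    """Mean of a recorded time series, discarding the first `cutoff` equilibration steps."""
    samples = np.asarray(samples, dtype=float)
    return samples[cutoff:].mean()

# a hypothetical energy trace: a decaying transient, then equilibrium at -1.8
trace = np.concatenate([np.linspace(0.0, -1.8, 50), np.full(950, -1.8)])
print(average_after_cutoff(trace, 50))                  # -1.8, the equilibrium value
print(trace.mean() > average_after_cutoff(trace, 50))   # True: including the transient biases the mean upwards
```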
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, and the first 1000 steps of each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars showing one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to the one used for the 8x8 graph above in Figure 11, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
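A sketch of the comparison script: loop over the saved datafiles, load each with np.loadtxt, and plot energy per spin on shared axes. The filenames and column layout are assumed to match the 8x8 example above; synthetic stand-in files are written here so the sketch is self-contained.&lt;br /&gt;

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")   # render off-screen
import matplotlib.pyplot as pl

sizes = [2, 4, 8, 16, 32]
fig = pl.figure()
enerax = fig.add_subplot(1, 1, 1)
enerax.set_xlabel("Temperature")
enerax.set_ylabel("Energy per spin")

for L in sizes:
    # write a synthetic two-column stand-in (T, total E) to show the savetxt/loadtxt round trip
    temps = np.linspace(0.5, 5.0, 10)
    energies = -2.0 * L * L * np.ones_like(temps)   # placeholder total energies
    np.savetxt(f"{L}x{L}.dat", np.column_stack([temps, energies]))
    data = np.loadtxt(f"{L}x{L}.dat")
    enerax.plot(data[:, 0], data[:, 1] / L**2, label=f"{L}x{L}")  # divide by N = L*L for per-spin values

enerax.legend()
fig.savefig("allsizes.png", bbox_inches="tight")
print("saved")
```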
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
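The result can be checked numerically on a toy system: for any discrete set of levels, the heat capacity from the fluctuation formula Var[E]/(k_B T^2) should match a finite-difference derivative of the average energy with respect to T. A sketch in reduced units with k_B = 1; the two-level spectrum is purely illustrative.&lt;br /&gt;

```python
import numpy as np

def boltzmann_stats(T, levels):
    """Return the average energy and Var[E] for discrete levels at temperature T (k_B = 1)."""
    beta = 1.0 / T
    p = np.exp(-beta * levels)
    p = p / p.sum()                            # Boltzmann probabilities p_i = exp(-beta e_i) / q
    mean_E = np.sum(p * levels)
    var_E = np.sum(p * levels**2) - mean_E**2  # Var[E] = mean of E^2 minus squared mean
    return mean_E, var_E

levels = np.array([0.0, 1.0])                  # illustrative two-level system
T = 1.5
mean_E, var_E = boltzmann_stats(T, levels)
C_fluctuation = var_E / T**2                   # C = Var[E] / (k_B T^2)

h = 1e-5                                       # central finite difference for C = dE/dT
E_plus = boltzmann_stats(T + h, levels)[0]
E_minus = boltzmann_stats(T - h, levels)[0]
C_derivative = (E_plus - E_minus) / (2 * h)
print(abs(C_fluctuation - C_derivative))       # essentially zero
```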
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Here is the source code to produce the figuresː&lt;br /&gt;
&amp;lt;pre&amp;gt; def heatCap(energies,energysq,T,latsize):&lt;br /&gt;
    #defines the heat capacity for a given temperature&lt;br /&gt;
    energiesq=np.multiply(energies,energies) #creates array of (average energies) squared&lt;br /&gt;
    varE=np.subtract(energysq,energiesq) #defines variance of average energy&lt;br /&gt;
    tempsq=np.multiply(T,T) #array of temperature squared&lt;br /&gt;
    return np.array(np.divide(varE,tempsq))/(latsize**2)&lt;br /&gt;
&lt;br /&gt;
heatCap2x2=heatCap(energies2x2,energysq2x2,temps2x2,2) #creates array of heat capacity for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
heatcapax = fig.add_subplot(1,1,1)&lt;br /&gt;
heatcapax.set_xlabel(&#039;Temperature&#039;)&lt;br /&gt;
heatcapax.set_ylabel(&#039;Heat Capacity&#039;)&lt;br /&gt;
heatcapax.plot(temps2x2,heatCap2x2,color=&#039;orange&#039;) #plots heat capacity for each T&lt;br /&gt;
pl.savefig(&#039;cg14172x2heatcap.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak in the heat capacity shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly, particularly around the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only in a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a newly fitted polynomial over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower polynomial degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new polynomial is a significantly better fit even at 3rd degree: it represents the data around the peak much more accurately, which makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
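The restricted fit can be sketched as follows. Synthetic peaked data stands in for the real heat capacity arrays, and the window and degree 3 mirror the values quoted above:&lt;br /&gt;

```python
import numpy as np

# synthetic heat-capacity-like data peaked near T = 2.35, purely for illustration
temps = np.linspace(0.5, 5.0, 451)
heatcaps = 1.0 / (0.05 + (temps - 2.35)**2)

# fit the polynomial only inside a window around the peak (roughly T = 2.15 to 2.55)
mask = np.logical_and(temps >= 2.145, 2.555 >= temps)
coeffs = np.polyfit(temps[mask], heatcaps[mask], 3)   # low-degree fit to the peak region only

# evaluate the fit on a fine grid inside the window and locate its maximum
T_fine = np.linspace(2.15, 2.55, 2001)
C_fit = np.polyval(coeffs, T_fine)
T_peak = T_fine[np.argmax(C_fit)]
print(T_peak)   # recovers the position of the synthetic peak, 2.35
```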
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, obtained from the temperature at which the Heat Capacity is a maximum for each lattice, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice: spontaneous magnetisation would actually stop at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals and deviation from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the long-range interactions introduced by the periodic boundary conditions are most significant. These interactions are far less significant for the larger lattices, so the energies of the smaller matrices carry a larger error, and hence so do their estimated Curie Temperatures. This affects the accuracy of the line of best fit; to improve it, larger lattice sizes of 128x128, 256x256 etc. should be included in the fit and the smaller matrices excluded.&lt;br /&gt;
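The finite-size scaling fit can be sketched as follows, assuming the scaling relation T_C(L) = A/L + T_C(infinity) and using illustrative (side length, peak temperature) pairs rather than the actual fitted values:&lt;br /&gt;

```python
import numpy as np

# hypothetical peak temperatures for each lattice side length (illustrative numbers only)
L_vals = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
T_peak = np.array([2.54, 2.44, 2.35, 2.31, 2.29, 2.28])

# scaling relation T_C(L) = A/L + T_C(inf): a straight line in 1/L
# whose intercept is the infinite-lattice Curie temperature
slope, intercept = np.polyfit(1.0 / L_vals, T_peak, 1)
print(round(intercept, 2))   # about 2.28 with these numbers; Onsager gives 2/ln(1+sqrt(2)) = 2.269...
```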
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796475</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796475"/>
		<updated>2019-11-20T09:21:14Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot show...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction within the system is counted twice - hence the factor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; in the formula. The sum becomesː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
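This worked example can be verified with a few lines of Python (a sketch; np.roll supplies the periodic neighbours):&lt;br /&gt;

```python
import numpy as np

def ising_energy_1d(spins, J=1.0):
    """Interaction energy -(J/2) * sum_i sum_{j in neighbours(i)} s_i s_j for a periodic 1D chain."""
    spins = np.asarray(spins, dtype=float)
    neighbours = np.roll(spins, 1) + np.roll(spins, -1)   # left and right neighbours, with wrap-around
    return -0.5 * J * np.sum(spins * neighbours)

print(ising_energy_1d([1, 1, 1]))   # -3.0, matching -DNJ with D=1, N=3
```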
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of spin-up and spin-down sites.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, and so in this case, with all 100 spins parallel, &amp;lt;math&amp;gt;S =  k_B ln\left(\frac{100!}{100! \ 0!}\right) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing soʔ===&lt;br /&gt;
&lt;br /&gt;
In a 3D cubic lattice, each lattice site has three unique interactions, with its neighbours to its left, top and front (the remaining three neighbours are counted by the neighbouring sites). In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign and becomes negative, which increases the total energy of the system. The flipped spin takes part in 6 spin-spin interactions (one with each of its 6 neighbours), and each reversed interaction raises the energy by &amp;lt;math&amp;gt;2J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999! \ 1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
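The numerical value of the entropy change can be confirmed directly (with the factor of k_B divided out):&lt;br /&gt;

```python
import math

# multiplicities before and after the single spin flip (N = 1000)
omega_before = 1                      # all spins parallel: one configuration
omega_after = math.comb(1000, 1)      # choose which of the 1000 spins is flipped: 1000
delta_S_over_kB = math.log(omega_after) - math.log(omega_before)
print(round(delta_S_over_kB, 2))      # 6.91
```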
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. To have zero entropy all spins must be parallel, so that the magnetisation is &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt; (or &amp;lt;math&amp;gt;-N&amp;lt;/math&amp;gt; for the all spin-down case). With all spins parallel there is only one possible configuration. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt; then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
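The double loop above relies on Python&#039;s negative indexing (lat[i][-1] is the last element of the row) to give the periodic boundary for free. Its result can be cross-checked against a vectorised np.roll version (a sketch using a random lattice; the loop function reproduces the report&#039;s logic):&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    """Double-loop energy as in the report: each site times its left and top neighbours."""
    E = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            E -= lat[i][j] * lat[i][j-1]   # left neighbour (j-1 wraps via negative indexing)
            E -= lat[i][j] * lat[i-1][j]   # top neighbour (i-1 wraps via negative indexing)
    return E

def energy_roll(lat):
    """Equivalent vectorised form: shift the lattice once per axis and sum the products."""
    return -np.sum(lat * np.roll(lat, 1, axis=0)) - np.sum(lat * np.roll(lat, 1, axis=1))

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
print(energy_loops(lat) == energy_roll(lat))   # True for any lattice
```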
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate the sum, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
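The arithmetic in this estimate is quickly reproduced (the rate of 10^9 configurations per second is the generous assumption from the task):&lt;br /&gt;

```python
n_configs = 2**100                      # configurations of 100 two-state spins
rate = 1e9                              # configurations analysed per second (assumed)
seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.3g} s, i.e. about {years:.3g} years")   # 1.27e+21 s, about 4.02e+13 years
```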
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which confirms that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \ s \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
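This average could be computed with a short script; the following is a minimal sketch assuming the three timings are typed in by hand (the values below are illustrative, not my measured ones):&lt;br /&gt;

```python
import numpy as np

# Illustrative timings (s) from three repeats of ILtimetrial.py -
# substitute the actual measured values.
times = np.array([24.1, 24.3, 24.5])

mean_time = np.mean(times)
# standard error of the mean = sample standard deviation / sqrt(n)
std_err = np.std(times, ddof=1) / np.sqrt(len(times))
print(f"{mean_time:.1f} s +/- {std_err:.1f} s")
```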
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with the spin above it&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(left+top) #np.sum adds every element of the array; each bond is counted exactly once, so no factor of 1/2 is needed&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(self.lattice) #np.sum adds up all the spins in the lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
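A quick sanity check of the vectorised energy() is to evaluate the same roll/multiply scheme on an all-parallel lattice, where the result should equal the minimum energy &amp;lt;math&amp;gt;E = -DNJ = -2N&amp;lt;/math&amp;gt; for a 2D lattice with J=1. A self-contained sketch:&lt;br /&gt;

```python
import numpy as np

def lattice_energy(lattice):
    # same roll/multiply scheme as energy() above, with J = 1
    left = np.multiply(np.roll(lattice, 1, axis=1), lattice)  # horizontal bonds
    top = np.multiply(np.roll(lattice, -1, axis=0), lattice)  # vertical bonds
    return -np.sum(left + top)

lattice = np.ones((4, 4))       # 16 parallel spins, D = 2
print(lattice_energy(lattice))  # expected: -2 * 16 = -32.0
```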
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running ILtimetrial.py three times on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster: using the NumPy roll, multiply and sum functions gives a new average time of &amp;lt;math&amp;gt;0.790 \ s \pm 0.005 \ s&amp;lt;/math&amp;gt;, roughly a thirty-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature: spontaneous magnetisation does not occur and the system does not converge to the lowest energy state. At the higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at the higher temperatures, which supports my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, as by this point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2 too.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have largely converged by this point and will not change much afterwards, and likewise for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results for a 32x32 matrix. A cut-off of 50000 steps was chosen, as the energy and magnetisation have largely converged by this point, though not as fully as they would at 100000 steps; I chose the slightly lower value to keep the run times of my Monte Carlo simulations in future tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off contribute to the averages of the energy, energy squared, magnetisation and magnetisation squared returned by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the arrays of E, E2, M and M2 if above the specified cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
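The error bars rely on the relation &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2&amp;lt;/math&amp;gt;, so the standard deviation can be recovered directly from the recorded averages. A minimal sketch with illustrative numbers in place of my data:&lt;br /&gt;

```python
import numpy as np

# Illustrative averages at two temperatures (not my measured data):
E_avg = np.array([-128.0, -120.0])     # <E>
E2_avg = np.array([16400.0, 14500.0])  # <E^2>

# std = sqrt(Var[E]) = sqrt(<E^2> - <E>^2), used as the error bar size
std_E = np.sqrt(E2_avg - E_avg ** 2)
print(std_E)
```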
&lt;br /&gt;
Below is the source code for the script to produce the graphː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section is identical to the one used for the 8x8 graph above in Figure 11, with the relevant files and variables changed accordingly.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the chain ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
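The essential step in that notebook is evaluating &amp;lt;math&amp;gt;C = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt; from the saved columns. A minimal sketch in reduced units (&amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;), using illustrative numbers in place of the real .dat files:&lt;br /&gt;

```python
import numpy as np

# Illustrative whole-lattice averages (the real script loads them from
# the .dat file columns); n_spins converts C to a per-spin quantity.
n_spins = 64  # e.g. an 8x8 lattice
temps = np.array([1.0, 2.0])
energies = np.array([-128.0, -100.0])    # <E>
energysq = np.array([16388.0, 10100.0])  # <E^2>

var_E = var = energysq - energies ** 2    # Var[E] = <E^2> - <E>^2
heat_cap = var_E / (n_spins * temps ** 2) # C per spin = Var[E] / (k_B T^2 N)
print(heat_cap)
```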
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak in the heat capacity shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
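The fit itself only needs the NumPy polyfit and polyval functions; a minimal sketch of the approach, using a stand-in curve rather than the real 16x16 data:&lt;br /&gt;

```python
import numpy as np

# Stand-in for the heat-capacity data (the real script loads a .dat file)
T = np.linspace(0.5, 5.0, 100)
C = np.exp(-(T - 2.3) ** 2 / 0.1)

coeffs = np.polyfit(T, C, 35)  # high-degree fit over the full range
C_fit = np.polyval(coeffs, T)  # fitted curve, plotted alongside the data
```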
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even at such a high degree the polynomial fits the curve poorly and does not reproduce the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified such that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new fit is significantly better even though it is only a 3rd-degree polynomial; it represents my data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
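Restricting the fit only requires selecting the data points inside the chosen temperature window before calling polyfit; a minimal sketch with a stand-in curve in place of the real data:&lt;br /&gt;

```python
import numpy as np

# Stand-in for the 16x16 heat-capacity data, peaked at T = 2.35
T = np.linspace(0.5, 5.0, 226)
C = np.exp(-(T - 2.35) ** 2 / 0.05)

# fit only within the window around the peak (T = 2.15-2.55)
mask = np.logical_and(T >= 2.15, T <= 2.55)
coeffs = np.polyfit(T[mask], C[mask], 3)  # low-degree fit near the peak

# locate the maximum of the fitted polynomial on a fine grid
T_fine = np.linspace(2.15, 2.55, 1000)
T_peak = T_fine[np.argmax(np.polyval(coeffs, T_fine))]
print(T_peak)  # lies near the stand-in peak at T = 2.35
```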
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice \ Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value of &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice, meaning that spontaneous magnetisation would actually stop at a slightly lower temperature than my estimate predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; the level of agreement is somewhat surprising and indicates that the error in my estimates of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where long-range interactions imposed by the periodic boundary conditions are most significant. These interactions are far less significant for the larger sizes, so the energies of the smaller matrices are less accurate and carry a larger error, which propagates into the Curie Temperature for that lattice size. This affects the accuracy of the line of best fit; to improve it, larger lattice sizes of 128x128, 256x256 etc. should be included in the fit and the smaller matrices ignored.&lt;br /&gt;
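The extrapolation itself is a straight-line fit of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt;, whose intercept at &amp;lt;math&amp;gt;\frac{1}{L} = 0&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;; a minimal sketch with illustrative peak temperatures (not my fitted values):&lt;br /&gt;

```python
import numpy as np

# Illustrative T_C(L) estimates from the heat-capacity peaks -
# substitute the values read from each fitted datafile.
L = np.array([2, 4, 8, 16, 32, 64])
T_C = np.array([2.5, 2.45, 2.35, 2.31, 2.29, 2.28])

# scaling relation T_C(L) = A/L + T_C(inf): degree-1 fit in 1/L,
# the intercept is the infinite-lattice Curie Temperature
slope, T_C_inf = np.polyfit(1.0 / L, T_C, 1)
print(round(T_C_inf, 3))
```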
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796474</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796474"/>
		<updated>2019-11-20T09:19:31Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an 8\times 8 lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
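These values are easy to check numerically. A minimal sketch (the helper name is my own; the lattice of &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; spins from the next task is used for the numbers):&lt;br /&gt;

```python
from math import comb, log

def entropy(N, n_up):
    """Entropy S = k_B ln(Omega), in units of k_B (k_B = 1)."""
    omega = comb(N, n_up)  # multiplicity Omega = N! / (n_up! * n_down!)
    return log(omega)

# ground state: all spins up, a single configuration, zero entropy
assert entropy(1000, 1000) == 0.0
# one flipped spin: Omega = 1000, S = ln(1000) ~ 6.91 k_B
print(round(entropy(1000, 999), 2))
```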
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site can be assigned three unique interactions, with its neighbours to the left, above and in front, so that every bond is counted exactly once. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours&#039; spins reverses sign, which increases the total energy of the system. Each of these 6 bonds changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999! \ 1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy is &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  k_B ln(1) =  6.91 k_B&amp;lt;/math&amp;gt;. An increase in entropy is expected, as the number of possible configurations of the system has increased.&lt;br /&gt;
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; as well.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, which means all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #local reference to the lattice array&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #local reference to the lattice array&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations of the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; to analyse every configuration, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
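The arithmetic behind this estimate can be verified directly (a quick sketch; the conversion to years is my own addition):&lt;br /&gt;

```python
configs = 2 ** 100      # configurations of 100 two-state spins
rate = 1e9              # configurations analysed per second (very generous!)
seconds = configs / rate
years = seconds / (365.25 * 24 * 3600)
# ~1.27e21 s, i.e. tens of trillions of years, versus a ~1.4e10-year-old universe
print(f"{seconds:.2e} s  ~  {years:.1e} years")
```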
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #np.exp avoids relying on a separate import of e&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
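The acceptance rule buried inside montecarlostep() can be written as a standalone helper to make the logic explicit (a sketch in reduced units with &amp;lt;math&amp;gt;J = k_B = 1&amp;lt;/math&amp;gt;; the function name is my own and is not part of the submitted code):&lt;br /&gt;

```python
import numpy as np

def accept_flip(deltaE, T, rng=None):
    """Metropolis criterion: always accept a move that lowers the energy;
    accept an uphill move with probability exp(-deltaE / T)."""
    if rng is None:
        rng = np.random.default_rng()
    if deltaE <= 0:
        return True
    return rng.random() < np.exp(-deltaE / T)

# downhill moves are always accepted
assert accept_flip(-4.0, 1.0)
```

At low temperature &amp;lt;math&amp;gt;exp(-\Delta E / T)&amp;lt;/math&amp;gt; is vanishingly small, so uphill moves are almost never accepted and the lattice freezes into the all-parallel state.&lt;br /&gt;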
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel - this is a characteristic property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which indicates that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
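To confirm that the vectorised version reproduces the original double loop, the two can be compared on a random lattice (a standalone sketch; the function names are mine):&lt;br /&gt;

```python
import numpy as np

def energy_loop(lat):
    """Original double-loop energy; Python's negative indexing at the
    array edges supplies the periodic boundary condition."""
    E = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            E -= lat[i][j] * lat[i][j - 1]  # bond with the left neighbour
            E -= lat[i][j] * lat[i - 1][j]  # bond with the vertical neighbour
    return E

def energy_roll(lat):
    """Vectorised energy: np.roll pairs every spin with one horizontal and
    one vertical neighbour, so each bond is counted exactly once."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    vert = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -int(np.sum(left + vert))

lat = np.random.default_rng(42).choice([-1, 1], size=(8, 8))
assert energy_loop(lat) == energy_roll(lat)
print(energy_roll(np.ones((4, 4), dtype=int)))  # all-parallel 4x4: -DNJ = -32
```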
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster: using the NumPy roll, multiply and sum functions gives a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
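From the two reported averages the speed-up factor follows directly (a sketch using the quoted figures; the raw per-run times shown in Figures 4 and 5 are not reproduced here):&lt;br /&gt;

```python
slow = 24.3    # s, average time of the double-loop version
fast = 0.790   # s, average time of the vectorised version
speedup = slow / fast
print(f"speed-up: {speedup:.1f}x")  # roughly a 31-fold improvement
```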
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point for excluding steps from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps: this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, which supports my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged, and for T=2 the initial large drop in energy has also been overcome.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation have largely converged and change little thereafter for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, although not quite as fully as at 100000 steps. I chose the slightly lower value so that the run times of my Monte Carlo simulations in future tasks would not become excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the average energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #np.exp avoids relying on a separate import of e&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only appends to the E, E2, M and M2 arrays if the cycle count is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of 0.02, for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is the source code for the script used to produce the graph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;import numpy as np&lt;br /&gt;
import pylab as pl&lt;br /&gt;
&lt;br /&gt;
data8x8=np.loadtxt(&#039;8x8.dat&#039;) #loads data&lt;br /&gt;
temps8x8=data8x8[:,0] #stores temperatures&lt;br /&gt;
energies8x8=data8x8[:,1] #stores average energy for each T&lt;br /&gt;
energysq8x8=data8x8[:,2] #stores average energy squared for each T&lt;br /&gt;
mag8x8=data8x8[:,3] #stores magnetisation for each T&lt;br /&gt;
magsq8x8=data8x8[:,4] #stores magnetisation squared for each T&lt;br /&gt;
stde8x8=data8x8[:,5] #edited ILtemperaturerange.py to record the standard deviation of the energy for each T&lt;br /&gt;
stdm8x8=data8x8[:,6] #edited ILtemperaturerange.py to record the standard deviation of the magnetisation for each T&lt;br /&gt;
&lt;br /&gt;
fig = pl.figure()&lt;br /&gt;
enerax = fig.add_subplot(2,1,1)&lt;br /&gt;
enerax.set_ylabel(&amp;quot;Energy per spin&amp;quot;)&lt;br /&gt;
enerax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
enerax.set_ylim([-2.5, 0.5])&lt;br /&gt;
enerax.set_xlim([0.5,5.1])&lt;br /&gt;
magax = fig.add_subplot(2,1,2)&lt;br /&gt;
magax.set_ylabel(&amp;quot;Magnetisation per spin&amp;quot;)&lt;br /&gt;
magax.set_xlabel(&amp;quot;Temperature&amp;quot;)&lt;br /&gt;
magax.set_ylim([-2, 2])&lt;br /&gt;
magax.set_xlim([0.5,5.1])&lt;br /&gt;
enerax.errorbar(temps8x8, np.array(energies8x8)/64,yerr=np.divide(stde8x8,64),color=&#039;black&#039;,ecolor=&#039;teal&#039;,alpha=0.8) #plots energy per spin against T&lt;br /&gt;
magax.errorbar(temps8x8, np.array(mag8x8)/64,yerr=np.divide(stdm8x8,64),alpha=0.8,ecolor=&#039;salmon&#039;,color=&#039;black&#039;) #plots magnetisation per spin against T on separate graph&lt;br /&gt;
pl.savefig(&#039;8x8error.png&#039;,bbox_inches=&#039;tight&#039;) #saves figure&lt;br /&gt;
pl.show()&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
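The standard deviations stored in columns 5 and 6 were added by editing ILtemperaturerange.py; the exact edit is not shown above, so the following helper is my own sketch of the standard relation between the recorded moments and the error bars:&lt;br /&gt;

```python
import numpy as np

def std_from_moments(mean_x, mean_x2):
    """Standard deviation from the first two moments:
    sigma = sqrt(<x^2> - <x>^2). The variance can come out marginally
    negative through floating-point rounding, so it is clipped at zero."""
    var = np.maximum(mean_x2 - np.square(mean_x), 0.0)
    return np.sqrt(var)

E = np.array([-128.0, -126.0, -128.0, -124.0])  # example energy samples
sigma = std_from_moments(E.mean(), (E ** 2).mean())
assert abs(sigma - E.std()) < 1e-9
```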
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend across the graphs above is that the heat capacity peak shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
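The reading step can be sketched as follows (the six-column order T, E, E2, M, M2, C follows the task description; the inline sample here is an illustrative stand-in for one of the provided files such as 16x16.dat):&lt;br /&gt;

```python
import io
import numpy as np

# stand-in for one of the C++ data files (columns: T, E, E^2, M, M^2, C per spin)
sample = io.StringIO(
    "1.0 -1.99 3.97 0.99 0.99 0.02\n"
    "2.0 -1.74 3.10 0.91 0.85 0.55\n"
    "3.0 -1.10 1.35 0.05 0.10 0.30\n"
)
T, E, E2, M, M2, C = np.loadtxt(sample, unpack=True)  # one array per column
```

The comparison plot is then a matter of calling ax.plot(T, C, label=...) for each dataset and ax.legend() on the axis object.&lt;br /&gt;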
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
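The unrestricted fit can be reproduced along these lines (a sketch: the sharply peaked C array is an illustrative stand-in for the loaded 16x16 data, and degree 10 is used here because degrees as high as 35 are numerically ill-conditioned):&lt;br /&gt;

```python
import numpy as np

# illustrative stand-in for the heat capacity curve: a narrow peak near T = 2.3
T = np.linspace(0.5, 5.0, 50)
C = np.exp(-((T - 2.3) ** 2) / 0.05)

coeffs = np.polyfit(T, C, 10)          # one polynomial across the whole range
C_fit = np.polyval(coeffs, T)
residual = np.max(np.abs(C_fit - C))   # large: the fit cannot follow the peak
```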
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new fitted polynomial is a significantly better fit despite being only 3rd degree. It represents my data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
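The restriction amounts to fitting on a boolean mask over the peak window (a sketch with stand-in data; the window T = 2.15-2.55 and degree 3 match the values used in the report):&lt;br /&gt;

```python
import numpy as np

T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.35) ** 2) / 0.05)       # stand-in peaked data

sel = (T >= 2.15) & (T <= 2.55)             # select only the peak region
coeffs = np.polyfit(T[sel], C[sel], 3)      # a low-degree fit suffices here
C_fit = np.polyval(coeffs, T[sel])
T_max = T[sel][np.argmax(C_fit)]            # temperature of the fitted maximum
```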
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
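The extrapolation itself is a straight-line fit of the per-lattice Curie temperatures against 1/L whose intercept is the infinite-lattice value (a sketch; the T_C values below are illustrative numbers constructed to obey the scaling relation exactly, purely to show the method):&lt;br /&gt;

```python
import numpy as np

# illustrative (L, T_C,L) values obeying T_C,L = T_C,inf + A/L exactly
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
Tc = 2.269 + 1.0 / L

A, Tc_inf = np.polyfit(1.0 / L, Tc, 1)  # slope A and intercept T_C,inf
```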
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of Curie Temperature against 1/Lattice Size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate suggests. However, the difference between the two values is only 0.008, which is remarkably small; this level of agreement is somewhat surprising and implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smallest lattices, 2x2 and 4x4, where the periodic boundary conditions introduce artificial long-range correlations that are far more significant than in the larger lattices. This makes the energies, and hence the Curie Temperature estimates, for the smaller matrices less accurate, which in turn affects the accuracy of the line of best fit. To improve the extrapolation, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796458</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796458"/>
		<updated>2019-11-20T08:50:33Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \in neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product of the spins on two adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \in neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \in neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice; this double counting is corrected by the prefactor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt;. The sum becomes: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \in neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore:  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \in neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites, &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice each site contributes three unique interactions (with the neighbours to its left, above and in front), giving &amp;lt;math&amp;gt;3N&amp;lt;/math&amp;gt; bonds in total. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins becomes negative, which increases the total energy of the system. The flipped spin takes part in &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; bonds, and each bond energy changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt; (a change of &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt; per bond), so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
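This arithmetic can be verified directly (reduced units with k_B = 1):&lt;br /&gt;

```python
import math

k_B = 1.0               # reduced units
omega_before = 1        # all 1000 spins parallel: a single configuration
omega_after = 1000      # 1000! / (999! 1!) positions for the flipped spin

delta_S = k_B * (math.log(omega_after) - math.log(omega_before))  # ~6.91 k_B
```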
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. To have zero entropy all spins must be parallel, giving a magnetisation of &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. Each fully aligned configuration is unique, so for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the multiplicity is &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;; the expected magnetisation at absolute zero is therefore &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #local reference to the lattice array&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #local reference to the lattice array&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top bond products&lt;br /&gt;
		energy=-sum(int_en) #sums every bond product and negates to give the total energy &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; (about &amp;lt;math&amp;gt;4\times 10^{13}&amp;lt;/math&amp;gt; years) to analyse the whole system, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
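The estimate is quickly confirmed (exact integer arithmetic for the configuration count):&lt;br /&gt;

```python
n_configs = 2 ** 100                     # every site independently up or down
seconds = n_configs / 1e9                # at 10^9 configurations per second
years = seconds / (365.25 * 24 * 3600)   # ~4e13 years, far beyond the age of the universe
```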
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis test: uphill move rejected with probability 1 - exp(-deltaE/T)&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
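The acceptance test inside montecarlostep() can be isolated as a small helper, which makes the Metropolis logic easy to verify on its own (a sketch; the report keeps the equivalent condition inline, with the random number drawn from [0, 1)):&lt;br /&gt;

```python
import math

def accept_flip(delta_E, T, random_number):
    """Metropolis criterion: downhill moves (delta_E <= 0) are always accepted;
    uphill moves are accepted only when the random number falls below the
    Boltzmann factor exp(-delta_E / T) (energies in units of k_B)."""
    if delta_E <= 0:
        return True
    return random_number < math.exp(-delta_E / T)
```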
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which indicates that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
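The average and its error can be computed from the repeat timings as follows (a sketch; the three timings here are illustrative values consistent with the reported average):&lt;br /&gt;

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])               # illustrative repeat timings / s
mean = times.mean()
std_err = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
```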
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the left and top bond products over the whole lattice&lt;br /&gt;
&lt;br /&gt;
		energy = -int_en #negates the sum to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
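The vectorised version can be checked against the original double loop (a sketch; rolling by +1 along an axis pairs each spin with the neighbour at index i-1, reproducing the periodic boundary that negative indexing gave in the loop version):&lt;br /&gt;

```python
import numpy as np

def energy_loop(lat):
    """Double-loop energy; negative indices give the periodic boundary."""
    E = 0
    for i in range(lat.shape[0]):
        for j in range(lat.shape[1]):
            E -= lat[i, j] * lat[i, j - 1]   # bond with the left neighbour
            E -= lat[i, j] * lat[i - 1, j]   # bond with the neighbour above
    return E

def energy_fast(lat):
    """Vectorised energy using roll and multiply, as in the accelerated code."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)
```

Rolling by -1 instead (as in the report's energy()) pairs each spin with the opposite neighbour; the total is identical because every bond is still counted exactly once.&lt;br /&gt;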
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after replacing the loops with the roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;, roughly a thirty-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is closer to 1, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
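The effect of temperature on the Boltzmann factor is easy to quantify (in reduced units, for an illustrative uphill move of delta E = 4):&lt;br /&gt;

```python
import math

# acceptance probability exp(-delta_E / T) of an uphill move with delta_E = 4
probs = {T: math.exp(-4.0 / T) for T in (1, 2, 3, 5)}
# rises from ~0.02 at T = 1 to ~0.45 at T = 5, so high-temperature runs
# readily leave the fully aligned state
```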
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, which supports my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, by which point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation have essentially converged and change little for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not as completely as they would by 100000 steps. I chose the slightly lower value to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified so that only values recorded after the pre-determined cut-off are added to the arrays from which the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
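Equivalently, the cut-off could have been applied at averaging time by slicing the recorded series, which leaves montecarlostep() untouched (a sketch of this alternative, not the approach used in the report):&lt;br /&gt;

```python
import numpy as np

def average_after_cutoff(samples, cutoff):
    """Mean of a recorded series with the first `cutoff` equilibration
    steps discarded before averaging."""
    samples = np.asarray(samples, dtype=float)
    return samples[cutoff:].mean()
```

For the 32x32 lattice this would be average_after_cutoff(self.E, 50000), matching the cut-off chosen above.&lt;br /&gt;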
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the arrays of E, E2, M and M2 if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded from the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with standard-deviation error bars.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
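The standard deviation used for the error bars comes directly from the recorded averages, since &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2&amp;lt;/math&amp;gt;. A minimal sketch (the numbers below are placeholders, not my simulation output):&lt;br /&gt;

```python
import numpy as np

# Error-bar sketch: the statistics() averages give Var[E] = <E^2> - <E>^2,
# and the error bar is the standard deviation. Placeholder values only.
E_avg = np.array([-1.95, -1.80, -1.20])   # <E> per spin at three temperatures
E2_avg = np.array([3.84, 3.30, 1.60])     # <E^2> per spin at the same points

var_E = E2_avg - E_avg**2                 # variance at each temperature
std_E = np.sqrt(var_E)                    # passed to ax.errorbar(..., yerr=std_E)
```

The resulting std_E array is what ax.errorbar expects for its yerr argument.&lt;br /&gt;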
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
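The script relies on the savetxt/loadtxt round trip described in the task. A self-contained sketch (a StringIO stands in for the real 8x8.dat file, and the energy values are placeholders):&lt;br /&gt;

```python
import io
import numpy as np

# Round-trip sketch: savetxt writes the temperature scan, loadtxt reads it
# back for plotting. A StringIO replaces the on-disk file for illustration.
T = np.arange(0.5, 5.0, 0.5)
E = -2.0 + 0.3 * T                      # placeholder energy-per-spin values

buf = io.StringIO()
np.savetxt(buf, np.column_stack([T, E]))
buf.seek(0)
data = np.loadtxt(buf)                  # the same call works on "8x8.dat" etc.
```

In the real script this is repeated for each of the five data files, with label=&amp;quot;...&amp;quot; passed to ax.plot for each lattice size before calling ax.legend().&lt;br /&gt;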
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the chain ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
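The result can be checked numerically on a simple two-level system (levels 0 and &amp;lt;math&amp;gt;\epsilon&amp;lt;/math&amp;gt;, reduced units with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;; this system is purely illustrative and not part of the Ising scripts):&lt;br /&gt;

```python
import numpy as np

# Check C = Var[E] / (kB T^2) against a direct derivative of <E> for a
# two-level system with levels 0 and eps (reduced units, kB = 1).
eps = 1.0

def averages(T):
    beta = 1.0 / T
    p1 = np.exp(-beta * eps) / (1.0 + np.exp(-beta * eps))  # Boltzmann weight
    return eps * p1, eps**2 * p1                            # <E>, <E^2>

T = 1.5
E_avg, E2_avg = averages(T)
C_var = (E2_avg - E_avg**2) / T**2          # variance formula

dT = 1e-5                                   # centred finite difference of <E>
C_num = (averages(T + dT)[0] - averages(T - dT)[0]) / (2 * dT)
```

The two estimates of the heat capacity agree to within the finite-difference error.&lt;br /&gt;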
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the graph shifts towards lower temperatures as the size of the matrix used increases.&lt;br /&gt;
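The heat capacity at each temperature point follows directly from the recorded averages. A sketch with synthetic placeholder data (the Gaussian bump standing in for the critical peak, and its position, are made up):&lt;br /&gt;

```python
import numpy as np

# Sketch: heat capacity per temperature point from <E> and <E^2>, then the
# location of the peak. Synthetic data in reduced units with kB = 1.
T = np.linspace(0.5, 5.0, 10)
E = -2.0 * np.tanh(1.0 / T)                    # placeholder <E> curve
E2 = E**2 + np.exp(-((T - 2.3) / 0.3)**2)      # placeholder <E^2> with a bump

C = (E2 - E**2) / T**2                         # C = Var[E] / (kB T^2)
T_peak = T[np.argmax(C)]                       # temperature of maximum C
```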
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my heat capacity against temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
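A sketch of the full-range fit using np.polyfit and np.polyval (the heat-capacity curve here is a synthetic placeholder; at such a high degree polyfit typically warns that the fit is poorly conditioned, which is consistent with the poor result in &#039;&#039;Figure 15&#039;&#039;):&lt;br /&gt;

```python
import numpy as np

# Full-range polynomial fit sketch. The Lorentzian-like curve is a
# synthetic stand-in for the measured heat capacity.
T = np.linspace(0.5, 5.0, 200)
C = 1.0 / (0.05 + (T - 2.3)**2)     # sharply peaked placeholder curve

degree = 35
coeffs = np.polyfit(T, C, degree)   # may warn: poorly conditioned fit
C_fit = np.polyval(coeffs, T)       # fitted values over the whole range
```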
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new polynomial is a significantly better fit even at 3rd degree; it represents the data around the peak much more accurately and makes it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
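A sketch of the restricted fit (synthetic data again; the range T = 2.15-2.55 and degree 3 match those used for the fit shown above):&lt;br /&gt;

```python
import numpy as np

# Restrict the polynomial fit to points near the peak, then locate the
# maximum of the fitted curve. Synthetic placeholder data.
T = np.linspace(0.5, 5.0, 200)
C = 1.0 / (0.05 + (T - 2.3)**2)

mask = (T >= 2.15) & (T <= 2.55)          # keep only points near the peak
coeffs = np.polyfit(T[mask], C[mask], 3)  # low-degree fit to the peak region
T_max = T[mask][np.argmax(np.polyval(coeffs, T[mask]))]
```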
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, i.e. the temperature at which the heat capacity is a maximum for each lattice, and the red line is a linear fit to the data whose y-intercept gives the Curie temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value for &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; &amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie temperature: spontaneous magnetisation in the infinite lattice would actually cease at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising and suggests that the error in my estimate of the Curie temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller 2x2 and 4x4 lattices, where the artificial long-range correlations imposed by the periodic boundary conditions are most significant; these correlations are far less important for the larger sizes. They make the energies of the smaller matrices less accurate, giving a larger error in the Curie temperature for those lattice sizes and limiting the accuracy of the line of best fit. To improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
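The extrapolation amounts to a straight-line fit in &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; whose intercept estimates &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. A sketch with made-up &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; values that obey an assumed form &amp;lt;math&amp;gt;T_{C,L} = T_{C,\infty} + \frac{A}{L}&amp;lt;/math&amp;gt; exactly (these are not my measured values):&lt;br /&gt;

```python
import numpy as np

# Finite-size scaling sketch: linear fit of T_C(L) against 1/L; the
# intercept estimates T_C for the infinite lattice. Synthetic T_C values.
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
Tc = 2.269 + 1.0 / L                  # assumed form T_C(L) = T_C,inf + A/L

slope, intercept = np.polyfit(1.0 / L, Tc, 1)  # intercept estimates T_C,inf
```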
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796457</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796457"/>
		<updated>2019-11-20T08:50:13Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of T...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
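The worked example above can be checked by brute force, summing the spin products over both periodic neighbours of each site and halving the double counting:&lt;br /&gt;

```python
import numpy as np

# Brute-force check of the N = 3 example: sum s_i s_j over both periodic
# neighbours of each site, halve the double counting, compare with -DNJ.
J = 1.0
spins = np.array([1, 1, 1])
N = len(spins)

total = sum(spins[i] * spins[(i + d) % N] for i in range(N) for d in (-1, 1))
E = -0.5 * J * total                  # should equal -DNJ with D = 1
```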
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, and for the lowest-energy state of a 100-spin system (all spins parallel) &amp;lt;math&amp;gt;S = k_B ln(\frac{100!}{100! \ 0!}) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin becomes negative, which increases the total energy of the system. In 3D each lattice site has &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours, so 6 unique spin-spin interactions are reversed in sign, each raising the energy by &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt; (from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;). The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
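The flip energy can be verified numerically by building an all-up 3D lattice with periodic boundaries (numpy.roll), flipping one spin and measuring the change:&lt;br /&gt;

```python
import numpy as np

# Energy of a 3D periodic lattice: sum s_i s_j over the three unique
# neighbour directions (roll handles the periodic boundaries).
def energy(lat, J=1.0):
    total = sum(np.sum(lat * np.roll(lat, 1, axis=ax)) for ax in range(3))
    return -J * total

lat = np.ones((10, 10, 10), dtype=int)
E_min = energy(lat)                 # lowest energy, -DNJ for D=3, N=1000
lat[0, 0, 0] = -1                   # flip a single spin
delta_E = energy(lat) - E_min       # energy change caused by the flip
```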
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt;\Omega = \frac{1000!}{999! \ 1!} = 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
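The numbers above can be confirmed directly from the factorial expressions:&lt;br /&gt;

```python
import math

# Check of the multiplicities and the entropy change (in units of kB).
omega_before = math.factorial(1000) // math.factorial(1000)                      # all spins up
omega_after = math.factorial(1000) // (math.factorial(999) * math.factorial(1))  # one spin down

delta_S = math.log(omega_after) - math.log(omega_before)   # Delta S / kB
```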
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single configuration, i.e. multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;, which means all spins must be parallel. So for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt; at absolute zero, the expected magnetisation is &amp;lt;math&amp;gt;M = N = 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. Since &amp;lt;math&amp;gt;2^{100} = 1.27\times 10^{30}&amp;lt;/math&amp;gt;, a computer analysing &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second would take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} \ s&amp;lt;/math&amp;gt; to evaluate them all, which is longer than the age of the universe; this approach is therefore not practical.&lt;br /&gt;
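The arithmetic behind this estimate:&lt;br /&gt;

```python
# Time to enumerate all configurations of a 100-spin lattice at 1e9
# configurations per second (365.25-day years assumed for the conversion).
n_configs = 2**100                         # about 1.27e30 configurations
seconds = n_configs / 1e9                  # about 1.27e21 seconds
years = seconds / (60 * 60 * 24 * 365.25)  # vastly exceeds the age of the universe
```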
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum-energy state with all spins parallel. As I expected, spontaneous magnetisation occurs, which also shows that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
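The mean and its error can be computed from repeat timings along these lines (a sketch; the `time_repeats` helper and the placeholder workload are hypothetical stand-ins for the ILtimetrial.py run):&lt;br /&gt;

```python
import time
import numpy as np

def time_repeats(func, n_repeats=3):
    """Time func() n_repeats times; return (mean, standard error) in seconds."""
    times = []
    for _ in range(n_repeats):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)
    times = np.array(times)
    # standard error of the mean = sample standard deviation / sqrt(n)
    return times.mean(), times.std(ddof=1) / np.sqrt(len(times))

# placeholder workload standing in for 2000 Monte Carlo steps
mean_t, err_t = time_repeats(lambda: sum(range(100_000)))
```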
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=np.multiply(np.roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left&lt;br /&gt;
		top=np.multiply(np.roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with the spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=np.sum(left+top) #total of the left and top pair interactions; each neighbouring pair is counted exactly once&lt;br /&gt;
&lt;br /&gt;
		energy=-int_en #total energy of the system (with J=1)&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return np.sum(self.lattice) #adds up all spins in the lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
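The vectorised version can be checked against the original double loop on a small random lattice. The sketch below assumes only NumPy; `energy_loops` is a hypothetical reference implementation of the nested-loop energy:&lt;br /&gt;

```python
import numpy as np

def energy_loops(lattice):
    """Reference double-loop energy with periodic boundaries (J = 1)."""
    n_rows, n_cols = lattice.shape
    total = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            # each site interacts with all four neighbours; halve at the
            # end to correct for double counting
            total += lattice[i, j] * (
                lattice[(i - 1) % n_rows, j] + lattice[(i + 1) % n_rows, j]
                + lattice[i, (j - 1) % n_cols] + lattice[i, (j + 1) % n_cols])
    return -total / 2

def energy_rolled(lattice):
    """Vectorised energy: one roll per axis counts each pair exactly once."""
    left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
    top = np.multiply(np.roll(lattice, -1, axis=0), lattice)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(8, 8))
assert energy_rolled(lattice) == energy_loops(lattice)
```

For an all-up 4x4 lattice both give &amp;lt;math&amp;gt;-DNJ = -32&amp;lt;/math&amp;gt;, as expected from Section 1.&lt;br /&gt;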
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running ILtimetrial.py on my new accelerated code three times.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, which uses the NumPy roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt; - roughly 30 times faster than the double-loop version.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not converge to the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point for the larger matrices will therefore be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
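The temperature dependence of the Metropolis acceptance step can be illustrated directly (a sketch in reduced units with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;; the `acceptance` helper is hypothetical):&lt;br /&gt;

```python
from math import exp

def acceptance(delta_E, T):
    """Metropolis acceptance probability for a proposed spin flip."""
    return min(1.0, exp(-delta_E / T))

# the same energy cost is accepted far more often at higher temperature,
# which is why the high-T runs keep moving away from the ground state
p_low = acceptance(4.0, T=1.0)    # exp(-4), roughly 0.018
p_high = acceptance(4.0, T=5.0)   # exp(-0.8), roughly 0.449
```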
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps: this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, which supports my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by this point the energy and magnetisation have largely converged and change little thereafter for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, although not quite as fully as they would by 100000 steps. I chose the slightly lower value to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the arrays of E, E2, M and M2 if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
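An equivalent way to implement the cut-off is to record every cycle and discard the first N samples inside statistics() by array slicing. A minimal sketch of that idea, using a hypothetical standalone `statistics_with_cutoff` and toy data:&lt;br /&gt;

```python
import numpy as np

def statistics_with_cutoff(E, E2, M, M2, cutoff):
    """Average the recorded series, ignoring the first `cutoff` cycles."""
    e = np.mean(E[cutoff:])
    e2 = np.mean(E2[cutoff:])
    m = np.mean(M[cutoff:])
    m2 = np.mean(M2[cutoff:])
    return e, e2, m, m2

# toy series: the first two samples are far from equilibrium
E = np.array([-10.0, -50.0, -100.0, -100.0, -100.0])
e, e2, m, m2 = statistics_with_cutoff(E, E**2, E, E**2, cutoff=2)
```

Slicing in statistics() keeps montecarlostep() unchanged, at the cost of storing the discarded samples.&lt;br /&gt;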
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars showing one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
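The savetxt()/loadtxt() round trip mentioned in the task can be sketched as follows (dummy data; the two-column T, ⟨E⟩ layout here is an assumption, not the exact ILtemperaturerange.py output):&lt;br /&gt;

```python
import os
import tempfile
import numpy as np

# hypothetical results array: one row per temperature
temps = np.arange(0.5, 5.0, 0.5)
data = np.column_stack([temps, -2.0 * np.ones_like(temps)])  # e.g. T, <E> per spin

path = os.path.join(tempfile.mkdtemp(), "8x8.dat")
np.savetxt(path, data)        # store the results on disk for later sections
loaded = np.loadtxt(path)     # loadtxt is the inverse of savetxt
```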
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
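Reading the per-size datafiles back and normalising to energy per spin can be done with a loop along these lines (a sketch with placeholder data; the LxL.dat naming follows the task, but the two-column layout is an assumption):&lt;br /&gt;

```python
import os
import tempfile
import numpy as np

sizes = [2, 4, 8, 16, 32]
workdir = tempfile.mkdtemp()

# write dummy files in place of the real simulation output
for L in sizes:
    T = np.arange(0.5, 5.0, 0.5)
    E = -2.0 * L * L * np.ones_like(T)   # placeholder total energies
    np.savetxt(os.path.join(workdir, f"{L}x{L}.dat"), np.column_stack([T, E]))

# read each file back and normalise to energy per spin for plotting
per_spin = {}
for L in sizes:
    T, E = np.loadtxt(os.path.join(workdir, f"{L}x{L}.dat"), unpack=True)
    per_spin[L] = E / (L * L)            # divide by the number of spins
```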
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, applying the chain rule with &amp;lt;math&amp;gt;\frac{\partial \beta}{\partial T} = -\frac{1}{k_B T^2}&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
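This result translates directly into code: the heat capacity follows from the recorded ⟨E⟩ and ⟨E²⟩ alone. A minimal sketch in reduced units (&amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;), with toy samples of known variance:&lt;br /&gt;

```python
import numpy as np

def heat_capacity(E, E2, T, k_B=1.0):
    """C = Var[E] / (k_B T^2), with Var[E] = <E^2> - <E>^2."""
    var_E = np.mean(E2) - np.mean(E) ** 2
    return var_E / (k_B * T ** 2)

# toy check: <E> = -3, <E^2> = 10, so Var[E] = 1 and C = 1 / T^2
E = np.array([-4.0, -2.0, -4.0, -2.0])
C = heat_capacity(E, E**2, T=2.0)    # 1 / 4 = 0.25
```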
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend across the above graphs is that the peak in the heat capacity shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script used to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and in particular fails to follow the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new fitted polynomial is a significantly better fit despite being only 3rd degree. It represents my data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
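The restricted fit and the subsequent peak search can be sketched as follows (synthetic data standing in for a real datafile; the window T = 2.15-2.55 and degree 3 follow the text):&lt;br /&gt;

```python
import numpy as np

# synthetic C(T) data standing in for one of the provided datafiles
T = np.linspace(0.5, 5.0, 400)
C = np.exp(-((T - 2.3) ** 2) / 0.05)          # sharp peak near T = 2.3

# fit only inside a window around the peak, as in the text
window = (T >= 2.15) & (T <= 2.55)
coeffs = np.polyfit(T[window], C[window], deg=3)

# locate the maximum of the fitted cubic on a fine grid within the window
T_fine = np.linspace(2.15, 2.55, 1000)
T_c = T_fine[np.argmax(np.polyval(coeffs, T_fine))]
```

Restricting the grid search to the fitted window avoids picking up spurious extrema of the polynomial outside its region of validity.&lt;br /&gt;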
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (the reciprocal of the lattice side length), used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data - the temperatures at which the Heat Capacity is a maximum for each lattice size - and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of the Curie Temperature for each lattice size against 1/Lattice Size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with the literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt;&amp;lt;ref&amp;gt;L. Onsager, Phys. Rev., 1944, 65, 117--149.&amp;lt;/ref&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually stop at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and indicates that the error in my estimates of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smallest lattice sizes, 2x2 and 4x4, where the interactions imposed by the periodic boundary conditions are most significant. These boundary effects are far less important for the larger lattices, but they make the energies, and therefore the estimated Curie Temperatures, of the smallest lattices less accurate. This reduces the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;br /&gt;
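The scaling fit itself is a single straight-line fit of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt;, with the intercept giving &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. A sketch with hypothetical (L, T_C) data obeying the scaling relation exactly:&lt;br /&gt;

```python
import numpy as np

# hypothetical (L, T_C) pairs of the kind stored in the two-column text file
L = np.array([2, 4, 8, 16, 32, 64])
A_true, Tc_inf_true = 1.2, 2.269
Tc_L = Tc_inf_true + A_true / L      # scaling relation: T_C,L = T_C,inf + A/L

# straight-line fit of T_C,L against 1/L; the intercept estimates T_C,inf
slope, intercept = np.polyfit(1.0 / L, Tc_L, deg=1)
```

With real data the points scatter about the line, and the intercept carries the fit uncertainty.&lt;br /&gt;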
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*L. Onsager, Phys. Rev., 1944, 65, 117--149.&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796453</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796453"/>
		<updated>2019-11-20T08:39:52Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of T...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt;, and so in this case &amp;lt;math&amp;gt;S = k_B \ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign and becomes negative, which increases the total energy of the system. In a 3D lattice each spin has 6 nearest neighbours, so 6 spin-spin interactions change sign, each raising the energy by &amp;lt;math&amp;gt;2J&amp;lt;/math&amp;gt; (from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;). The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, giving a new total energy of &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999! \ 1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
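Both numbers can be verified numerically: the energy change from a single flip (using one np.roll per axis to count each nearest-neighbour pair once) and the entropy gain from the multiplicity. A short sketch, assuming &amp;lt;math&amp;gt;J = 1&amp;lt;/math&amp;gt; and reduced units:&lt;br /&gt;

```python
from math import comb, log

import numpy as np

def energy_3d(lattice):
    """Total Ising energy (J = 1) of a 3D lattice with periodic boundaries."""
    return -sum(np.sum(lattice * np.roll(lattice, 1, axis=a)) for a in range(3))

lattice = np.ones((10, 10, 10), dtype=int)   # ground state: N = 1000, D = 3
E0 = energy_3d(lattice)                      # -DNJ = -3000

lattice[0, 0, 0] = -1                        # flip a single spin
dE = energy_3d(lattice) - E0                 # 6 bonds change sign: +12J

# entropy gain in units of k_B: the flipped spin can be any of the 1000 sites
delta_S = log(comb(1000, 1)) - log(comb(1000, 0))   # ln(1000), about 6.91
```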
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; as well.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, which means all spins must be parallel, so that the magnetisation &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt; (all spins up or all spins down). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S =k_B \ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #reference to the stored lattice&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #reference to the stored lattice&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by the spin to its left (index -1 wraps, giving periodic boundaries)&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by the spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to enumerate every configuration, which is longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
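This estimate is easy to verify with a few lines of Python (a sketch; the &amp;lt;math&amp;gt;10^9&amp;lt;/math&amp;gt; configurations per second figure is the generous assumption from the task):&lt;br /&gt;

```python
n_spins = 100
n_configs = 2 ** n_spins              # each spin is either +1 or -1
rate = 1e9                            # assumed: configurations analysed per second
seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{n_configs:.3e} configurations take {seconds:.3e} s (about {years:.1e} years)")
```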
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T): #Metropolis criterion: unfavourable moves are only accepted with probability e**(-deltaE/T)&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts the flip if the move is rejected; otherwise the flip stands&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie temperature &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which indicates that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2 s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the per-site left and top spin products along the first axis&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #second sum collapses the remaining axis, giving the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
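As a quick sanity check (my own illustration, not part of the experiment scripts), the vectorised energy can be compared with the original double loop on a random lattice. The two should agree exactly, because np.roll implements the same periodic boundary conditions that Python's negative indexing gave the loop version:&lt;br /&gt;

```python
import numpy as np

def energy_loop(lat):
    # original double-loop approach; negative indices wrap, giving periodic boundaries
    total = 0
    rows, cols = lat.shape
    for i in range(rows):
        for j in range(cols):
            total += lat[i][j] * lat[i][j - 1]   # spin times its left neighbour
            total += lat[i][j] * lat[i - 1][j]   # spin times its top neighbour
    return -total

def energy_vec(lat):
    # vectorised version: the same two unique interactions per site
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
assert energy_loop(lat) == energy_vec(lat)
print(energy_vec(lat))
```

For an all-up lattice this reproduces the expected minimum energy &amp;lt;math&amp;gt;E=-DNJ&amp;lt;/math&amp;gt;.&lt;br /&gt;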
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after replacing the double loop with the NumPy roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt; - roughly a 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are above the Curie temperature, in which case spontaneous magnetisation will not occur and the system will not converge to the lowest energy state. At the higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will only be determined from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
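The temperature dependence of the Boltzmann factor can be made concrete with a small calculation (my own illustration, in reduced units, with an assumed example of &amp;lt;math&amp;gt;\Delta E = 8J&amp;lt;/math&amp;gt; - the cost of flipping one fully aligned spin in a 2D lattice, where all four bonds change from -J to +J):&lt;br /&gt;

```python
from math import exp

delta_E = 8.0  # assumed example: flipping one fully aligned spin in 2D costs 8J
for T in (1.0, 2.0, 3.0, 5.0):
    # Metropolis acceptance probability for an uphill move of delta_E at temperature T
    p_accept = exp(-delta_E / T)
    print(f"T = {T}: P(accept) = {p_accept:.4f}")
```

At T=1 such a move is accepted roughly 3 times in 10000 attempts, while at T=5 it is accepted about a fifth of the time, consistent with the large fluctuations seen at the higher temperatures.&lt;br /&gt;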
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point, and for T=2 the initial large drop in energy has also been overcome.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation for T=1 have largely converged and change little afterwards, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
As &#039;&#039;Figure 10&#039;&#039; above shows, a cut-off of 50000 steps is suitable for the 32x32 matrix: the energy and magnetisation have largely converged by this point, though not quite as fully as they would at 100000 steps. I chose the slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks were not excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if the step count is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps per temperature, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars showing the standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
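A minimal version of my plotting script looks like the following sketch. The real script reads the .dat files saved by ILtemperaturerange.py; here two tiny placeholder files are written first so the example is self-contained, and the column layout (temperature first, energy per spin second) is an assumption for illustration:&lt;br /&gt;

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

sizes = ["2x2", "4x4"]  # extend with "8x8", "16x16", "32x32" for the real data

# write small placeholder files; with real data this step is skipped
for s in sizes:
    T = np.linspace(0.5, 5.0, 10)
    np.savetxt(s + ".dat", np.column_stack([T, -2.0 + 0.3 * T]))

fig, ax = plt.subplots()
for s in sizes:
    data = np.loadtxt(s + ".dat")   # loadtxt is the reverse of savetxt, as the hint says
    ax.plot(data[:, 0], data[:, 1], label=s)
ax.set_xlabel("Temperature")
ax.set_ylabel("Energy per spin")
ax.legend()
fig.savefig("energy_per_spin.png")
```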
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum over all microstates of the probability of each microstate multiplied by its energy, defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
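This result translates directly into code. A minimal sketch in reduced units (&amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;), using the &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; averages that the statistics() function returns (the toy samples below are illustrative, not simulation data):&lt;br /&gt;

```python
import numpy as np

def heat_capacity(E_avg, E2_avg, T):
    # C = Var[E] / (k_B T^2), with k_B = 1 in reduced units
    var_E = E2_avg - E_avg ** 2       # Var[E] = <E^2> - <E>^2
    return var_E / T ** 2

# illustrative toy data: energies alternating between 0 and 1, at T = 1
E_samples = np.array([0.0, 1.0, 0.0, 1.0])
C = heat_capacity(E_samples.mean(), (E_samples ** 2).mean(), T=1.0)
print(C)  # variance of equally weighted {0, 1} is 0.25, so C = 0.25
```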
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the heat capacity peak shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read the data and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 15&#039;&#039; shows a plot of my heat capacity against temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new fit is significantly better even though the polynomial is only of degree 3. It represents my data around the peak much more accurately, and will make it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
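The restricted fit was produced along the following lines (a sketch of the approach; the heat capacity data here is synthetic so that the example runs standalone, while the fitting range 2.15-2.55 and degree 3 are the values quoted above):&lt;br /&gt;

```python
import numpy as np

# synthetic stand-in for the 16x16 heat capacity data: a noisy peak near T = 2.3
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.05) + 0.02 * np.sin(40 * T)

# fit a low-degree polynomial only to the points around the peak
mask = (T >= 2.15) & (T <= 2.55)
coeffs = np.polyfit(T[mask], C[mask], 3)
fit = np.polyval(coeffs, T[mask])

# the Curie temperature estimate is where the fitted curve is a maximum
T_peak = T[mask][np.argmax(fit)]
print(round(T_peak, 2))
```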
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
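A sketch of the restricted fit, using the same window (T = 2.15-2.55) and degree (3) as quoted above, again with synthetic stand-in data:&lt;br /&gt;

```python
import numpy as np

def fit_peak(T, C, t_min=2.15, t_max=2.55, degree=3):
    """Fit a low-degree polynomial only to the points inside [t_min, t_max]."""
    mask = (T >= t_min) & (T <= t_max)   # select only the peak region
    coeffs = np.polyfit(T[mask], C[mask], degree)
    return np.poly1d(coeffs)

# synthetic stand-in data with a peak near T = 2.3
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.05)

peak_fit = fit_peak(T, C)
# evaluate on a fine grid inside the window to locate the maximum
fine_T = np.linspace(2.15, 2.55, 1001)
T_max = fine_T[np.argmax(peak_fit(fine_T))]
```

The maximum of the fitted polynomial then gives the estimate of the peak temperature used in the next task.&lt;br /&gt;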
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (where L is the lattice side length), used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
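The extrapolation follows the scaling relation &amp;lt;math&amp;gt;T_{C,L} = \frac{A}{L} + T_{C,\infty}&amp;lt;/math&amp;gt;, a straight line in 1/L whose intercept is the infinite-lattice value. A sketch of the linear fit, with illustrative placeholder values rather than my measured temperatures:&lt;br /&gt;

```python
import numpy as np

# lattice side lengths with illustrative placeholder Curie temperatures
L = np.array([2, 4, 8, 16, 32])
T_C = np.array([2.50, 2.42, 2.35, 2.31, 2.29])

# T_C,L = A/L + T_C,inf is a straight line in 1/L; the intercept is T_C,inf
slope, T_C_inf = np.polyfit(1.0 / L, T_C, 1)
```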
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my extrapolation predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising and implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. The points with the largest residuals from the line of best fit in &#039;&#039;Figure 17&#039;&#039; correspond to the smaller lattice sizes of 2x2 and 4x4, where the periodic boundary conditions make longer-range interactions artificially significant. These spurious interactions are far less important for the larger lattices, so the energies of the smaller matrices, and hence their estimated Curie Temperatures, carry a larger error. This limits the accuracy of the line of best fit; to improve it, larger lattice sizes such as 128x128 and 256x256 should be included in the fit and the smallest matrices excluded.&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796448</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796448"/>
		<updated>2019-11-20T08:23:44Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in y...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites with N=3 and spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \in \ neighbours(i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product of the spins on two adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \in \ neighbours(i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \in \ neighbours(i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact under the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice; hence the total energy must be halved. The sum becomes: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \in \ neighbours(i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \in \ neighbours(i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt; lattice sites.&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign and becomes negative, which increases the total energy of the system. In 3D the flipped spin has 6 neighbours, so 6 spin-spin interactions reverse in sign; each bond&#039;s energy changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
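The arithmetic above can be checked directly; a quick sketch:&lt;br /&gt;

```python
import math

# multiplicity after a single flip in an N = 1000 lattice:
# choose which one of the 1000 spins is flipped
omega = math.comb(1000, 1)        # = 1000
delta_S_in_kB = math.log(omega)   # S/k_B = ln(Omega); the ground state had ln(1) = 0
```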
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. For the entropy to be zero there must be only one possible configuration, which requires all spins to be parallel, giving a magnetisation of &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products and negates to give the total energy &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py script was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate a single average, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
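A quick back-of-envelope sketch of the same estimate:&lt;br /&gt;

```python
# enumeration time for every configuration of a 100-spin system
n_configs = 2 ** 100                    # each spin is either up or down
rate = 1e9                              # configurations analysed per second
seconds = n_configs / rate              # ~1.27e21 s
years = seconds / (365.25 * 24 * 3600)  # ~4e13 years, roughly 3000x the age of the universe
```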
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all spins parallel. As expected, spontaneous magnetisation occurs, which indicates that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2 s&amp;lt;/math&amp;gt;.&lt;br /&gt;
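The mean and its error were obtained from the repeats in the usual way; a sketch, with illustrative times standing in for the actual readings in &#039;&#039;Figure 4&#039;&#039;:&lt;br /&gt;

```python
import statistics

# illustrative values only; the real timings are those shown in Figure 4
times = [24.1, 24.3, 24.5]

mean_t = statistics.mean(times)
# standard error of the mean: sample standard deviation / sqrt(number of repeats)
sem = statistics.stdev(times) / len(times) ** 0.5
```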
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with its horizontal neighbour&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with its vertical neighbour&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the horizontal and vertical spin products; each bond is counted once&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #negates to give the total energy of the system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
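One quick sanity check on the vectorised energy is that an all-up (or all-down) NxN lattice must give the minimum energy &amp;lt;math&amp;gt;-2N^{2}J&amp;lt;/math&amp;gt;, and that a single flipped spin raises the energy by exactly 8J. A standalone sketch using plain NumPy arrays rather than the IsingLattice class:&lt;br /&gt;

```python
import numpy as np

def lattice_energy(lattice):
    """Vectorised 2D Ising energy with periodic boundaries and J = 1:
    each bond is counted exactly once via one horizontal and one vertical roll."""
    left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
    top = np.multiply(np.roll(lattice, -1, axis=0), lattice)
    return -np.sum(left + top)

all_up = np.ones((8, 8), dtype=int)
e_min = lattice_energy(all_up)   # expect -2 * 8 * 8 = -128

flipped = all_up.copy()
flipped[0, 0] = -1               # one flipped spin reverses 4 bonds: energy rises by 8
```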
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the NumPy roll, multiply and sum functions makes the code much faster, with a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, therefore, a suitable cut-off point for the larger matrices will be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point is 200 steps: this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, by which the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps: by this point the energy and magnetisation for T=1 have converged and change very little, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, although not as fully as they would by 100000 steps. I chose the slightly lower value to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only appends to E, E2, M and M2 if the step count is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
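The burn-in logic can be illustrated independently of the class; in this sketch `cutoff` plays the role of the pre-determined number of ignored cycles:&lt;br /&gt;

```python
import numpy as np

def averages_after_cutoff(E_series, cutoff):
    """Mean and mean-square of an energy time series, ignoring the first
    `cutoff` equilibration steps."""
    tail = np.asarray(E_series, dtype=float)[cutoff:]
    return tail.mean(), (tail ** 2).mean()

# toy series: 64 steps of equilibration drift, then a steady value of -128
E_series = list(range(0, -128, -2)) + [-128] * 1000
avg_E, avg_E2 = averages_after_cutoff(E_series, cutoff=64)
```

Without the cut-off, the drifting equilibration values would bias both averages; with it, only the equilibrated tail contributes.&lt;br /&gt;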
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars given by the standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that you produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
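A minimal sketch of the loadtxt loop from CG1417IsingModelGraphs.ipynb, assuming each datafile stores temperature in column 0 and energy per spin in column 1 (the column layout here is an assumption). An in-memory savetxt/loadtxt round trip stands in for the real "8x8.dat"-style files so the sketch is self-contained:&lt;br /&gt;

```python
import io
import numpy as np

sizes = [2, 4, 8, 16, 32]
T = np.linspace(0.5, 5.0, 10)
curves = {}
for n in sizes:
    # stand-in for the real "{n}x{n}.dat" file: write then read the same columns
    buf = io.StringIO()
    np.savetxt(buf, np.column_stack([T, -2 * np.ones_like(T)]))
    buf.seek(0)
    data = np.loadtxt(buf)            # np.loadtxt is the reverse of np.savetxt
    curves[n] = (data[:, 0], data[:, 1])
    # each curve would then be drawn with plt.plot(data[:, 0], data[:, 1], label=f"{n}x{n}")
```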
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
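The identity can be checked numerically on a toy spectrum (a sketch in reduced units with k_B = 1; the two-level energies are arbitrary, and the derivative is taken by central finite difference):&lt;br /&gt;

```python
import numpy as np

levels = np.array([0.0, 1.0])      # arbitrary two-level spectrum, k_B = 1

def avg_E(T):
    w = np.exp(-levels / T)        # Boltzmann weights
    return np.sum(levels * w) / np.sum(w)

def var_E(T):
    w = np.exp(-levels / T)
    p = w / np.sum(w)              # normalised probabilities
    return np.sum(p * levels**2) - avg_E(T)**2

T, h = 1.3, 1e-5
C_derivative = (avg_E(T + h) - avg_E(T - h)) / (2 * h)   # C = d<E>/dT
C_variance = var_E(T) / T**2                             # C = Var[E] / (k_B T^2)
print(C_derivative, C_variance)
```

The two routes to C agree to within the finite-difference error, as the derivation requires.&lt;br /&gt;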
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend across the graphs is that the heat-capacity peak shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 15&#039;&#039; shows my heat capacity against temperature data for a 16x16 matrix together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new polynomial is a significantly better fit despite being only of 3rd degree. It represents my data around the peak much more accurately and makes it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
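The restricted fit can be sketched as follows (a sketch only: a synthetic symmetric peak stands in for the real heat-capacity file, and the fitting window is illustrative):&lt;br /&gt;

```python
import numpy as np

T = np.arange(0.5, 5.0, 0.01)
C = 1.0 / ((T - 2.3)**2 + 0.04)            # synthetic peak standing in for real C(T)

mask = (T >= 2.1) & (T <= 2.5)             # fit only in a window around the peak
coeffs = np.polyfit(T[mask], C[mask], 3)   # a low-degree polynomial now suffices

T_fine = np.linspace(2.1, 2.5, 1000)
C_fit = np.polyval(coeffs, T_fine)
T_peak = T_fine[np.argmax(C_fit)]          # estimate of the peak position
print(T_peak)
```

Restricting the fit window is what lets the degree drop from 35 to 3: the polynomial only needs to describe the peak, not the whole curve.&lt;br /&gt;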
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (the reciprocal of the lattice side length), used to determine the Curie temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the heat capacity is a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie temperature of the infinite lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
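The extrapolation amounts to a straight-line fit of the per-size Curie temperatures against 1/L (a sketch only: the data below are generated from the scaling relation with a hypothetical amplitude A, purely to illustrate that the fit intercept recovers the infinite-lattice value):&lt;br /&gt;

```python
import numpy as np

L = np.array([2, 4, 8, 16, 32])
# illustrative data generated from the scaling relation T_C(L) = T_C,inf + A/L
Tc_inf_true, A = 2.269, 1.0            # A is a hypothetical scaling amplitude
Tc_L = Tc_inf_true + A / L

slope, intercept = np.polyfit(1.0 / L, Tc_L, 1)   # intercept estimates T_C,inf
print(intercept)
```

With real simulation data the points scatter about the line, and the intercept carries the corresponding uncertainty.&lt;br /&gt;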
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie temperature: for an infinite lattice, spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; the closeness of the agreement is somewhat surprising, and suggests that the error in my estimates of the Curie temperature for each lattice size is relatively small. A potential source of error in the values of the Curie temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796446</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796446"/>
		<updated>2019-11-20T08:22:25Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* Section 3 - Introduction to Monte Carlo Simulation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice (hence the factor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; in the energy expression), givingː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;; for the lowest-energy state of a 100-spin lattice, all spins are parallel, so &amp;lt;math&amp;gt;S =  k_B ln\left(\frac{100!}{100! \ 0!}\right) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice, each site has six nearest neighbours, and each unique bond can be counted once via the neighbours to the left of, above, and in front of each site, giving &amp;lt;math&amp;gt;3N&amp;lt;/math&amp;gt; unique interactions. In the lowest energy configuration, all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its six neighbouring spins reverses sign, which increases the total energy of the system. Each of the &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; affected bonds changes its energy contribution from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999! \ 1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
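The multiplicities and the entropy change above can be verified directly (a quick check; entropies are in units of k_B):&lt;br /&gt;

```python
from math import comb, log

omega_before = comb(1000, 0)   # all spins up: 1000!/(1000! 0!) = 1
omega_after = comb(1000, 1)    # one flipped spin: 1000!/(999! 1!) = 1000

delta_S = log(omega_after) - log(omega_before)   # in units of k_B
print(omega_after, round(delta_S, 2))            # 1000, 6.91
```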
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the Third Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration (&amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;), which means all spins must be parallel, giving a magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. So for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
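The arithmetic is quick to confirm (the age-of-the-universe figure of roughly 4.4e17 s is an outside estimate, not from the lab script):&lt;br /&gt;

```python
configs = 2**100                 # two states per spin, 100 spins
rate = 1e9                       # configurations analysed per second
seconds = configs / rate
print(f"{configs:.3e} configurations, {seconds:.3e} s")

universe_age_s = 4.4e17          # ~13.8 billion years, rough outside figure
print(seconds / universe_age_s)  # how many universe ages the sum would take
```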
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance rule&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin flip if move rejected, otherwise keeps it&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This shows, as I expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2 s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left&lt;br /&gt;
		top=multiply(roll(self.lattice,1,axis=0),self.lattice) #product of each spin with the spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=left+top #array of the left and top bond products for each spin&lt;br /&gt;
		energy=-sum(sum(int_en)) #each bond is counted once, so no factor of 1/2 is needed&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
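A quick standalone check (a sketch with a random lattice; both functions here are illustrative re-implementations, not the report code) confirms that the roll/multiply version reproduces the double-loop energy exactly:&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    # double loop: pair each spin with its left and top neighbours
    # (Python negative indexing gives the periodic wrap-around for free)
    n, m = lat.shape
    total = 0
    for i in range(n):
        for j in range(m):
            total += lat[i, j] * lat[i, j - 1] + lat[i, j] * lat[i - 1, j]
    return -total

def energy_vectorised(lat):
    # same pairing expressed with roll/multiply: each bond counted once
    return -(np.sum(np.multiply(np.roll(lat, 1, axis=1), lat)) +
             np.sum(np.multiply(np.roll(lat, 1, axis=0), lat)))

rng = np.random.default_rng(42)
lat = rng.choice(np.array([-1, 1]), size=(8, 8))
print(energy_loops(lat), energy_vectorised(lat))
```

Both count every bond exactly once, which is why no factor of 1/2 appears in either version.&lt;br /&gt;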
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster: using the roll, multiply and sum functions gives a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are above the Curie temperature, in which case spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point is 200 steps: by this point the energy and magnetisation have converged for T=1, and the initial large drop in energy for T=2 is complete, even though a few small fluctuations remain after 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting the choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has also passed for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by this point the energy and magnetisation have largely converged for T=1 and change little thereafter; the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: the energy and magnetisation have largely converged by this point, though not quite as fully as at 100000 steps. The slightly lower value was chosen to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only cycles beyond the pre-determined cut-off contribute to the stored values of the energy, energy squared, magnetisation and magnetisation squared used by the statistics() function when determining the averages. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# performs a single Monte Carlo step, recording statistics only beyond the cut-off&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #energy of the current lattice&lt;br /&gt;
&lt;br /&gt;
		#the following two lines select the coordinates of the random spin&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line chooses a random number in the range [0,1)&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin if the flip is rejected&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of 0.02, for 10000 Monte Carlo steps per temperature; the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
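The standard deviations used for the error bars follow from the recorded averages via Var[X] = mean(X**2) - mean(X)**2; a minimal sketch (the function and variable names are illustrative, not those of ILtemperaturerange.py):

```python
import numpy as np

def standard_deviations(E_mean, E2_mean, M_mean, M2_mean):
    # standard deviations of E and M from the running averages,
    # using Var[X] = mean(X**2) - mean(X)**2
    E_std = np.sqrt(E2_mean - E_mean**2)
    M_std = np.sqrt(M2_mean - M_mean**2)
    return E_std, M_std

# illustrative averages for a single temperature point
E_std, M_std = standard_deviations(-1.95, 3.81, 0.98, 0.97)
print(E_std, M_std)
```

These values can then be passed to matplotlib's errorbar plotting via its yerr keyword.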
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that you produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
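The savetxt/loadtxt round trip used to store and reload each lattice's results can be sketched as follows (the filename follows the 8x8.dat convention from the task; the column layout, temperature then energy per spin, is an assumption, and the energies here are placeholders):

```python
import numpy as np

# write a small synthetic results file in the same shape as the real ones
T = np.arange(0.5, 5.0, 0.5)
E = -2.0 + 0.2 * T                       # placeholder energy-per-spin column
np.savetxt("8x8.dat", np.column_stack((T, E)))

data = np.loadtxt("8x8.dat")             # loadtxt is the reverse of savetxt
print(data.shape)                        # prints (9, 2)
```

The plotting script then loops over the lattice sizes, loads each file in turn, and plots data[:, 1] against data[:, 0] with a label for the legend.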
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the graphs above is that the heat capacity peak shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script used to read the data and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my heat capacity against temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and does not capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new fitted polynomial is a significantly better fit, even at only 3rd degree, and is a much more accurate representation of the data around the peak of the graph; this will make it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
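The restricted fit can be sketched as follows, using the same window (T = 2.15-2.55) and degree (3); the heat-capacity data here is a synthetic stand-in for the values loaded from file. Because the temperature grid is sorted, searchsorted gives the slice limits directly:

```python
import numpy as np

# synthetic stand-in for heat-capacity data: a smooth peak near T = 2.3
T = np.arange(0.5, 5.0, 0.02)
C = 1.0 / (1.0 + 25.0 * (T - 2.3)**2)

# restrict the fit to the peak region only
lo = np.searchsorted(T, 2.15)
hi = np.searchsorted(T, 2.55)
coeffs = np.polyfit(T[lo:hi], C[lo:hi], 3)   # degree-3 polynomial fit
C_fit = np.polyval(coeffs, T[lo:hi])         # evaluate the fit over the peak

print(round(float(np.max(C_fit)), 3))
```

Plotting C against T over the full range and C_fit over T[lo:hi] reproduces the layout of Figure 16.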
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two columns: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (the inverse lattice side length), used to determine the Curie temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data - the temperature at which the heat capacity is a maximum for each lattice size - and the red line is a linear fit to the data, whose y-intercept gives the Curie temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
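The scaling fit can be sketched as below; it assumes the relation T_{C,L} = A/L + T_{C,infinity}, so a straight-line fit of T_{C,L} against 1/L gives T_{C,infinity} as the intercept. The finite-lattice Curie temperatures here are illustrative stand-ins for the values read from the text file:

```python
import numpy as np

# illustrative finite-lattice Curie temperature estimates (side length, Tc)
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
Tc = np.array([2.50, 2.44, 2.33, 2.30, 2.29])

# straight-line fit of Tc against 1/L: Tc(L) = A*(1/L) + Tc_infinity
slope, intercept = np.polyfit(1.0 / L, Tc, 1)
print(round(float(intercept), 3))  # estimate of Tc for the infinite lattice
```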
The value of &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and implies that the error in my estimate of the Curie temperature for each lattice size is relatively small. A potential source of error in the values of the Curie temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796442</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796442"/>
		<updated>2019-11-20T08:21:55Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with D = 3,\ N=1000 at absolute zero? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy configuration all spins are parallel, so &amp;lt;math&amp;gt;\Omega = \frac{N!}{N! \ 0!} = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing soʔ===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its six neighbouring spins reverses sign, which increases the total energy of the system. Each of these six bonds changes its energy contribution from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, a change of &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt; per bond, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip, the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999! \ 1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
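As a quick numerical check of the entropy gain (in units of k_B, so the Boltzmann constant is taken as 1):

```python
import math

# multiplicity before (all 1000 spins up) and after the single spin flip
omega_before = math.factorial(1000) // math.factorial(1000)                       # = 1
omega_after = math.factorial(1000) // (math.factorial(999) * math.factorial(1))   # = 1000

delta_S = math.log(omega_after) - math.log(omega_before)  # in units of k_B
print(round(delta_S, 2))  # 6.91
```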
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. For the entropy to be zero there must be only one accessible configuration, which requires all spins to be parallel, so that the magnetisation &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = N&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #local reference to the lattice array&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #local reference to the lattice array&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of the lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #product with the spin to the left (index -1 wraps round, giving the periodic boundary conditions)&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #product with the spin above (also wraps round)&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products; each bond is counted once, so no factor of 1/2 is needed&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
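For reference, both quantities can also be computed in a vectorised way with NumPy (a sketch, behaviour-equivalent to the loops above; the function names are illustrative, and np.roll implements the periodic boundary conditions):

```python
import numpy as np

def magnetisation_fast(lattice):
    # sum of all spins in the lattice
    return np.sum(lattice)

def energy_fast(lattice, J=1.0):
    # product of each spin with its left and upper neighbours; np.roll
    # wraps the array round, giving the periodic boundary conditions
    left = lattice * np.roll(lattice, 1, axis=1)
    top = lattice * np.roll(lattice, 1, axis=0)
    return -J * (np.sum(left) + np.sum(top))

lat = np.ones((4, 4))           # fully aligned 4x4 lattice
print(magnetisation_fast(lat))  # 16.0
print(energy_fast(lat))         # -32.0, equal to -DNJ with D=2, N=16
```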
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run with my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system, which is far longer than the age of the universe (about &amp;lt;math&amp;gt;4\times 10^{17} s&amp;lt;/math&amp;gt;) and therefore is not a practical approach.&lt;br /&gt;
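The arithmetic can be checked directly (a quick sketch; the age of the universe is taken as roughly 13.8 billion years):

```python
n_configs = 2**100                 # configurations of 100 spins
rate = 1e9                         # configurations analysed per second
seconds = n_configs / rate

age_of_universe = 13.8e9 * 365.25 * 24 * 3600   # roughly 4.4e17 s
print("%.2e" % seconds)                          # about 1.27e21 s
print(round(seconds / age_of_universe))          # ages of the universe needed
```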
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
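The acceptance rule used in montecarlostep() can be isolated as a small standalone helper. This is only an illustrative sketch: accept_flip is a hypothetical function name, not part of the IsingLattice class, and energies are assumed to be in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt; as above.&lt;br /&gt;

```python
import numpy as np

def accept_flip(delta_E, T, random_number):
    """Metropolis criterion used in montecarlostep(): downhill moves are
    always accepted; uphill moves are accepted with probability
    exp(-delta_E / T), with the energy in units of k_B."""
    if delta_E > 0:
        return bool(np.less(random_number, np.exp(-delta_E / T)))
    return True

print(accept_flip(-4.0, 1.0, 0.99))  # downhill move: always True
```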
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system tends towards its lowest energy state, in which all of the spins are parallel; this is a characteristic property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state, with all of the spins parallel to one another. Spontaneous magnetisation therefore occurs, as expected, confirming that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
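The quoted average and its error follow from the usual standard-error formula; a sketch with placeholder timings (the real values are those shown in Figure 4):&lt;br /&gt;

```python
import numpy as np

# Mean and standard error of repeat timings. The three values here are
# hypothetical placeholders consistent with the quoted result; the real
# timings are shown in Figure 4.
times = np.array([24.1, 24.3, 24.5])
mean = times.mean()
sem = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
print(f"{mean:.1f} s +/- {sem:.1f} s")
```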
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(left+top) #numpy sum over all elements; each neighbour pair is counted once, so no factor of 1/2 is needed&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(self.lattice) #numpy sum adds up every spin in the lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
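A useful sanity check is that the vectorised energy agrees with the original double loop on a random lattice. The sketch below assumes standalone functions taking the lattice as an argument, rather than the class methods used in this report:&lt;br /&gt;

```python
import numpy as np

def energy_loops(lattice):
    """Energy with J = 1: each spin interacts with its right and bottom
    neighbours, with periodic (wrap-around) boundary conditions, so every
    neighbour pair is counted exactly once."""
    rows, cols = lattice.shape
    total = 0
    for i in range(rows):
        for j in range(cols):
            total += lattice[i, j] * lattice[(i + 1) % rows, j]
            total += lattice[i, j] * lattice[i, (j + 1) % cols]
    return -total

def energy_vectorised(lattice):
    """The same energy via np.roll and np.multiply, as in the code above."""
    left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
    top = np.multiply(np.roll(lattice, -1, axis=0), lattice)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(8, 8))
print(energy_loops(lattice), energy_vectorised(lattice))  # the two agree
```

For the all-spins-up ground state of a 4x4 lattice, both versions return -32, in agreement with &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt; for D=2, N=16.&lt;br /&gt;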
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after replacing the double loop with the roll, multiply and sum functions, giving a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
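The two quoted averages correspond to roughly a 30-fold speedup:&lt;br /&gt;

```python
# Speedup implied by the two quoted average times.
slow, fast = 24.3, 0.790   # seconds, from the two timing tasks above
speedup = slow / fast
print(f"roughly {speedup:.0f}x faster")
```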
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py for 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point at which the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, probably because these temperatures are above the Curie temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energy and magnetisation averages is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1, and the initial large drop in energy is complete for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps: for both T=1 and T=2 the energy and magnetisation have largely converged by this point and change little afterwards.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039; above, a cut-off of 50000 steps was chosen for the 32x32 matrix, as the energy and magnetisation have largely converged by this point, although not quite as fully as they would by 100000 steps. I chose the slightly lower value to keep the run times of the Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only cycles beyond the pre-determined cut-off contribute to the lists used by the statistics() function to average the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only records E, E2, M and M2 once the cycle number is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
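The round trip through savetxt and loadtxt, and the conversion from total energy to energy per spin, can be sketched as follows. The column layout (T, E, E2, M, M2) is an assumption here and should be checked against the savetxt call actually used:&lt;br /&gt;

```python
import numpy as np

# Round trip through savetxt/loadtxt and conversion to energy per spin.
# The column layout (T, E, E2, M, M2) is an assumption -- check it against
# the savetxt call in ILtemperaturerange.py before relying on it.
data = np.array([[0.5, -128.0, 16384.0, 64.0, 4096.0],
                 [1.0, -127.2, 16180.0, 63.8, 4070.0]])  # fake 8x8 output
np.savetxt("8x8.dat", data)

loaded = np.loadtxt("8x8.dat")
n_spins = 8 * 8
temperature = loaded[:, 0]
energy_per_spin = loaded[:, 1] / n_spins
print(temperature, energy_per_spin)
```

The per-spin curve for each lattice size can then be drawn with ax.plot(temperature, energy_per_spin, label="8x8") followed by ax.legend().&lt;br /&gt;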
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be written in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, by the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
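The result can be verified numerically: for any system whose Boltzmann probabilities can be written down directly, &amp;lt;math&amp;gt;\frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt; should match a numerical derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; with respect to T. A sketch for a hypothetical two-level system (energies in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;, so &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import numpy as np

# Numerical check of C = Var[E] / (k_B T^2) for a toy two-level system
# (energies 0 and 1 in units of k_B, so k_B = 1 throughout).
levels = np.array([0.0, 1.0])

def average_energy(T):
    boltzmann = np.exp(-levels / T)
    p = boltzmann / boltzmann.sum()          # Boltzmann probabilities
    return np.sum(p * levels)

def heat_capacity(T):
    boltzmann = np.exp(-levels / T)
    p = boltzmann / boltzmann.sum()
    E_avg = np.sum(p * levels)
    E2_avg = np.sum(p * levels ** 2)
    return (E2_avg - E_avg ** 2) / T ** 2    # Var[E] / (k_B T^2)

T = 1.0
h = 1e-5
numerical_C = (average_energy(T + h) - average_energy(T - h)) / (2 * h)
print(heat_capacity(T), numerical_C)         # the two should agree
```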
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend in the above graphs is that the peak in the heat capacity shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my heat capacity against temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new fit is significantly better despite being only a 3rd-degree polynomial; it represents my data around the peak much more accurately and makes it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
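The restricted fit can be sketched as below. The synthetic Gaussian peak stands in for the real 16x16 heat-capacity data, and the fitting range T = 2.15-2.55 matches the one used above:&lt;br /&gt;

```python
import numpy as np

# Fit a low-order polynomial only around the heat-capacity peak, then read
# off the temperature of the maximum. A synthetic Gaussian peak stands in
# for the real 16x16 data; the range T = 2.15-2.55 matches the text.
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.35) ** 2) / 0.02)            # fake peak near T = 2.35

peak_region = np.logical_and(T >= 2.15, 2.55 >= T)
fit = np.polyfit(T[peak_region], C[peak_region], 3)   # 3rd-degree fit

T_fine = np.linspace(2.15, 2.55, 1000)
T_max = T_fine[np.argmax(np.polyval(fit, T_fine))]
print(f"peak of the fit at T = {T_max:.2f}")
```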
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (the reciprocal of the lattice side length), used to determine the Curie temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the heat capacity is a maximum for each lattice size, and the red line is a linear fit whose y-intercept gives the Curie temperature of the infinite 2D lattice.&lt;br /&gt;
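The scaling-relation fit itself is a one-line np.polyfit of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against 1/L. The &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; values below are illustrative placeholders, not my measured ones:&lt;br /&gt;

```python
import numpy as np

# Linear fit of T_C(L) against 1/L; the y-intercept estimates T_C for the
# infinite lattice. The T_C values are hypothetical placeholders -- the
# real ones are the peak temperatures found for each datafile.
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
T_C = np.array([2.50, 2.44, 2.36, 2.31, 2.29])

slope, intercept = np.polyfit(1.0 / L, T_C, 1)
print(f"T_C(infinite lattice) estimate: {intercept:.3f} J/k_B")
```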
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie temperature of the infinite lattice, meaning spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising and indicates that the error in my estimates of the Curie temperature for each lattice size is relatively small. A potential source of error in the values of the Curie temperature for each lattice size could come from the&lt;/div&gt;
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796441</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796441"/>
		<updated>2019-11-20T08:21:09Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* Section 1 - Introduction to the Ising Model */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically, the interaction energy is defined as:&lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{N!}{n_{\uparrow}! \ n_{\downarrow}!}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;n_{\uparrow}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n_{\downarrow}&amp;lt;/math&amp;gt; are the numbers of up and down spins. In the lowest energy state all spins are parallel, so the multiplicity is 1.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt;, so for a 100-spin lattice in its ground state &amp;lt;math&amp;gt;S =  k_B ln\left(\frac{100!}{100! \ 0!}\right) = k_B ln(1) = 0&amp;lt;/math&amp;gt;&lt;br /&gt;
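The multiplicity and entropy can be evaluated directly; a short sketch (entropy_in_kB is a hypothetical helper, returning S in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import math

# Multiplicity Omega = N! / (n_up! n_down!) and entropy S = k_B ln(Omega),
# reported here in units of k_B. entropy_in_kB is a hypothetical helper,
# not part of the experiment's scripts.
def entropy_in_kB(N, n_up):
    n_down = N - n_up
    omega = math.factorial(N) // (math.factorial(n_up) * math.factorial(n_down))
    return math.log(omega)

print(entropy_in_kB(100, 100))  # all-spins-up ground state: prints 0.0
```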
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site can be assigned three unique interactions (with its neighbours to its left, top and front) so that every nearest-neighbour bond is counted exactly once. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; nearest neighbours reverses sign, so each of those bonds changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, increasing the total energy of the system. Since 6 spin-spin interactions each increase by &amp;lt;math&amp;gt;2J&amp;lt;/math&amp;gt;, the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
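The size of this energy change can be verified numerically. A minimal sketch, assuming NumPy; the 10x10x10 lattice and the energy helper below are illustrative, not part of the lab scripts:&lt;br /&gt;

```python
import numpy as np

J = 1.0
s = np.ones((10, 10, 10))  # D=3 lattice, N=1000, ground state (all spins up)

def energy(lat):
    # one periodic shift per axis counts each nearest-neighbour bond exactly once
    return -J * sum(np.sum(lat * np.roll(lat, 1, axis=ax)) for ax in range(lat.ndim))

E0 = energy(s)              # ground-state energy, -DNJ
s[0, 0, 0] = -1             # flip a single spin
deltaE = energy(s) - E0     # energy cost of the flip
```

The flipped spin sits on 2 bonds per axis, so 6 bonds change sign and the energy rises by 12J.&lt;br /&gt;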
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = \frac{1000!}{1000! \ 0!} = 1&amp;lt;/math&amp;gt;, and after the flip the multiplicity becomes &amp;lt;math&amp;gt; \Omega = \frac{1000!}{999!1!}=1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1000) -  0 =  6.91 k_B&amp;lt;/math&amp;gt;, which is an expected increase in entropy as the number of possible configurations of the system increases.&lt;br /&gt;
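The entropy change can be checked with a few lines of Python (standard library only; the numbers follow directly from the formulas above, with entropy expressed in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;):&lt;br /&gt;

```python
import math

omega_before = 1                    # all spins parallel: one configuration
omega_after = math.comb(1000, 1)    # choose which of the 1000 spins flipped

# Delta S = k_B ln(Omega_after) - k_B ln(Omega_before), here in units of k_B
delta_S = math.log(omega_after) - math.log(omega_before)
```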
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, so the lattices are expected to follow suit and have zero entropy at 0 K. Zero entropy requires a single possible configuration, which means all spins must be parallel, such that the magnetisation &amp;lt;math&amp;gt;M = \pm N&amp;lt;/math&amp;gt;. So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation at absolute zero is &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt;, with multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; (for a given sign) and entropy &amp;lt;math&amp;gt;S =k_B ln(1) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
			left+=[lat[i][j]*lat[i][j-1]] #product with left neighbour; index -1 wraps around, giving periodic boundaries&lt;br /&gt;
			top+=[lat[i][j]*lat[i-1][j]] #product with the neighbour above (also wraps)&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top bond products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. ILcheck.py was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate the sum, roughly 3000 times the age of the universe (&amp;lt;math&amp;gt;\sim 4\times 10^{17} s&amp;lt;/math&amp;gt;), so this brute-force approach is not practical.&lt;br /&gt;
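The arithmetic can be reproduced directly (the value used for the age of the universe is approximate):&lt;br /&gt;

```python
n_configs = 2 ** 100        # two states per spin, 100 spins
rate = 1e9                  # configurations analysed per second
seconds = n_configs / rate  # total time to enumerate every configuration

universe_age = 4.35e17      # age of the universe in seconds (approximate)
ratio = seconds / universe_age
```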
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
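The accept/reject logic at the heart of montecarlostep() can be isolated into a small helper and tested on its own. This is a sketch; the function name metropolis_accept is mine, not from the lab scripts (the lab code expresses the same rule as a rejection condition):&lt;br /&gt;

```python
import math

def metropolis_accept(delta_E, T, r):
    """Metropolis criterion (k_B = 1): always accept a downhill move;
    accept an uphill move only if a uniform random number r < exp(-delta_E/T)."""
    if delta_E <= 0:
        return True
    return r < math.exp(-delta_E / T)
```

At low temperature uphill moves are almost always rejected, while at high temperature the Boltzmann factor approaches 1 and most moves are accepted, matching the behaviour discussed later for T above the Curie temperature.&lt;br /&gt;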
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system tends to its lowest energy state, where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms that, as expected, spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
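The vectorised energy can be checked against the original double loop on a random lattice. A minimal sketch, assuming NumPy; the helper names are mine (both versions count each bond once, and the loop version gets its periodic boundary for free from Python's negative indexing):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))   # random spin lattice

def energy_loops(lat):
    # original double-loop version: indices -1 wrap around (periodic boundary)
    E = 0
    for i in range(lat.shape[0]):
        for j in range(lat.shape[1]):
            E += lat[i][j] * lat[i][j - 1] + lat[i][j] * lat[i - 1][j]
    return -E

def energy_vec(lat):
    # accelerated version: one roll/multiply per direction, as in the report
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

e1, e2 = energy_loops(lat), energy_vec(lat)
```

The vertical shifts differ in sign between the two versions, but both sum the same set of bonds, so the totals agree exactly.&lt;br /&gt;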
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster: using the NumPy roll, multiply and sum functions gives a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;, roughly a thirty-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore only be determined from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, by which the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation have largely converged and change little thereafter for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A cut-off of 50000 steps was chosen, by which point the energy and magnetisation have largely converged, though not quite as fully as at 100000 steps. I chose the slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks were not excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off contribute to the averages of the energy, energy squared, magnetisation and magnetisation squared returned by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
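The effect of discarding the first N cycles can be illustrated on a toy signal with an artificial equilibration transient (the values and cut-off below are purely illustrative):&lt;br /&gt;

```python
import numpy as np

cutoff = 5   # illustrative; the report uses 30-50000 depending on lattice size

# a decaying "equilibration" transient followed by a flat equilibrium region
samples = np.array([9.0, 7.0, 5.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])

raw_mean = np.mean(samples)          # biased upwards by the transient
eq_mean = np.mean(samples[cutoff:])  # average over equilibrated samples only
```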
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only records E, E2, M and M2 once past the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the ILtemperaturerange.py file was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars showing one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be written in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the quotient rule to differentiate &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
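The result &amp;lt;math&amp;gt;C = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt; can be verified numerically on a system with a known partition function, e.g. a two-level system. This check is mine, not part of the lab (energies 0 and eps, &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt; in reduced units):&lt;br /&gt;

```python
import math

eps = 1.0  # two-level system with energies 0 and eps (k_B = 1)

def mean_E(T):
    # <E> from the partition function q = 1 + exp(-eps/T)
    return eps / (math.exp(eps / T) + 1)

def mean_E2(T):
    # <E^2>: only the upper level contributes
    return eps ** 2 / (math.exp(eps / T) + 1)

T = 0.8
C_fluct = (mean_E2(T) - mean_E(T) ** 2) / T ** 2     # fluctuation formula
h = 1e-5
C_deriv = (mean_E(T + h) - mean_E(T - h)) / (2 * h)  # direct derivative d<E>/dT
```

The two routes to the heat capacity agree to numerical precision, as the derivation requires.&lt;br /&gt;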
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
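A hedged sketch of the core calculation in that notebook, with small illustrative stand-ins for the &amp;lt;math&amp;gt;T, \langle E \rangle, \langle E^2 \rangle&amp;lt;/math&amp;gt; columns that would be loaded from the saved .dat files (the array values below are invented for demonstration):&lt;br /&gt;

```python
import numpy as np

# illustrative stand-ins for columns loaded with np.loadtxt("8x8.dat", ...)
T = np.array([1.0, 2.0, 3.0])
avg_E = np.array([-62.0, -50.0, -30.0])
avg_E2 = np.array([3860.0, 2540.0, 980.0])

var_E = avg_E2 - avg_E ** 2   # Var[E] = <E^2> - <E>^2
C = var_E / T ** 2            # heat capacity in reduced units (k_B = 1)
```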
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the graph shifts towards lower temperatures as the size of the matrix used increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and in particular fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
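This fitting step can be sketched as follows (a minimal sketch: synthetic heat-capacity data stand in for the real file, whose columns would be read with np.loadtxt as described above):&lt;br /&gt;

```python
import numpy as np

# Synthetic C(T) data with a sharp peak, standing in for one of the C++
# data files (in the real script T and C come from the loadtxt columns)
T = np.linspace(0.5, 5.0, 200)
C = 1.0 / (1.0 + 25.0 * (T - 2.3) ** 2)

degree = 35                        # even this high a degree fits poorly
coeffs = np.polyfit(T, C, degree)  # least-squares polynomial fit
C_fit = np.polyval(coeffs, T)      # evaluate the fit on the same grid
```

NumPy warns about the poor conditioning of such a high-degree fit, which is itself a hint that a full-range polynomial is the wrong model here.&lt;br /&gt;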
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new fit is significantly better even though it is only a 3rd-degree polynomial. It represents the data around the peak much more accurately, which makes it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
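The restricted fit can be sketched as below (again with synthetic peak data; the temperature window T = 2.15-2.55 is the one quoted above):&lt;br /&gt;

```python
import numpy as np

# Synthetic C(T) data with a peak near T ~ 2.3 (stands in for 16x16 data)
T = np.linspace(0.5, 5.0, 200)
C = 1.0 / (1.0 + 25.0 * (T - 2.3) ** 2)

# Fit only inside a window around the peak; a low degree now suffices
Tmin, Tmax = 2.15, 2.55
peak = (T >= Tmin) & (T <= Tmax)          # boolean mask for the window
coeffs = np.polyfit(T[peak], C[peak], 3)

# Locate the maximum of the fitted polynomial on a fine grid
T_fine = np.linspace(Tmin, Tmax, 1000)
T_C = T_fine[np.argmax(np.polyval(coeffs, T_fine))]
```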
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against the reciprocal of the lattice side length, used to determine the Curie Temperature of an infinite 2D Ising model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the heat capacity is a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
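The extrapolation amounts to a straight-line fit in 1/L. A minimal sketch, using fabricated peak temperatures that follow the scaling relation exactly (these are illustrative, not my measured values):&lt;br /&gt;

```python
import numpy as np

# Fabricated peak temperatures following T_C,L = A/L + T_C,inf exactly
# (illustrative only; the real values come from the fitted C(T) maxima)
L = np.array([2, 4, 8, 16, 32, 64])
Tc = 2.269 + 1.0 / L

# Straight-line fit in 1/L; the y-intercept estimates the infinite lattice
slope, T_C_inf = np.polyfit(1.0 / L, Tc, 1)
```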
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my fit predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; the level of agreement is somewhat surprising and suggests that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error in the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796436</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796436"/>
		<updated>2019-11-20T07:56:47Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of T...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum_{j \in \mathrm{neighbours}(i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product of two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum_{j \in \mathrm{neighbours}(i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum_{j \in \mathrm{neighbours}(i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction within the system is counted twice; hence the total must be halved. The sum therefore reduces to: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum_{j \in \mathrm{neighbours}(i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum_{j \in \mathrm{neighbours}(i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites, &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
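This hand calculation can be checked numerically; a minimal sketch using np.roll to apply the periodic boundary conditions:&lt;br /&gt;

```python
import numpy as np

# All-up 1D lattice with N = 3 sites and J = 1, periodic boundaries
J = 1.0
s = np.array([1, 1, 1])

# Sum s_i*s_j over both neighbours of every site (each bond counted twice),
# then apply the -J/2 prefactor from the formula above
double_counted = np.sum(s * np.roll(s, 1)) + np.sum(s * np.roll(s, -1))
E = -0.5 * J * double_counted      # expected: -DNJ = -3J for D=1, N=3
```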
&lt;br /&gt;
The lowest energy state has every spin parallel, which can be achieved in exactly two ways: all spins up or all spins down. The multiplicity of this state is therefore &amp;lt;math&amp;gt;\Omega = 2&amp;lt;/math&amp;gt;, independent of the number of spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B \ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B \ln 2 \approx 0.693 k_B&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each site has six nearest neighbours, of which three interactions (to its left, top and front) are unique to that site. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours reverses sign, and each of those bonds changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;\Delta E = 2 \times 2D \times J = +12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;.&lt;br /&gt;
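A quick numerical check of the energy change when a single spin is flipped on a periodic 3D lattice (the helper function and flipped-site coordinates are illustrative, not part of the assessed code):&lt;br /&gt;

```python
import numpy as np

def ising_energy(lat, J=1.0):
    """Total energy with periodic boundaries; each bond counted once."""
    bonds = sum(np.sum(lat * np.roll(lat, 1, axis=ax))
                for ax in range(lat.ndim))
    return -J * bonds

lat = np.ones((10, 10, 10), dtype=int)   # ground state: N = 1000, D = 3
E0 = ising_energy(lat)                   # -DNJ = -3000J
lat[3, 4, 5] *= -1                       # flip a single spin
dE = ising_energy(lat) - E0              # 2D = 6 bonds each change by +2J
```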
&lt;br /&gt;
Initially the system is in one of the two fully aligned ground-state configurations, so &amp;lt;math&amp;gt;\Omega = 2&amp;lt;/math&amp;gt;. After the flip, any one of the &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; spins could be the flipped one, in either of the two aligned sectors, so the multiplicity of the one-flip level is &amp;lt;math&amp;gt;\Omega = 2 \times 1000 = 2000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy is &amp;lt;math&amp;gt;\Delta S = k_B \ln(2000) - k_B \ln(2) = k_B \ln(1000) \approx 6.91 k_B&amp;lt;/math&amp;gt;, so the system gains entropy by moving out of the fully ordered state.&lt;br /&gt;
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
At absolute zero there is no thermal energy available to flip spins, so the system adopts its lowest energy configuration with all spins parallel. For an Ising lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt; the expected magnetisation is therefore &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt; (every spin +1 or every spin -1). This is consistent with the 3rd Law of thermodynamics: the ordered ground state has multiplicity &amp;lt;math&amp;gt;\Omega = 2&amp;lt;/math&amp;gt;, so its entropy &amp;lt;math&amp;gt;S = k_B \ln 2&amp;lt;/math&amp;gt; is very close to zero.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse every configuration - far longer than the age of the universe (roughly &amp;lt;math&amp;gt;4.4\times 10^{17} s&amp;lt;/math&amp;gt;) - so this approach is not practical.&lt;br /&gt;
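The arithmetic can be verified directly:&lt;br /&gt;

```python
# Number of configurations for N = 100 two-state spins, and the time
# needed to enumerate them at 10^9 configurations per second
n_configs = 2 ** 100
seconds = n_configs / 1e9

age_of_universe_s = 4.35e17        # roughly 13.8 billion years
times_longer = seconds / age_of_universe_s
```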
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which also shows that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2 s&amp;lt;/math&amp;gt;.&lt;br /&gt;
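One way to obtain such an uncertainty is the standard error of the mean of the repeat timings; a minimal sketch (the values below are illustrative, not my measured times):&lt;br /&gt;

```python
import numpy as np

# Illustrative repeat timings in seconds (not the measured values above)
times = np.array([24.1, 24.3, 24.5])

mean = times.mean()
# Standard error of the mean: sample standard deviation (ddof=1) / sqrt(n)
sem = times.std(ddof=1) / np.sqrt(len(times))
```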
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
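A quick consistency check (not part of the assessed code) that the vectorised energy agrees with the original double loop, which gets its periodic boundaries for free from Python's negative indexing:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))    # random spin lattice

# Double loop: negative indexing (lat[i - 1], lat[i, j - 1]) wraps
# around, giving periodic boundary conditions automatically
E_loop = 0
for i in range(lat.shape[0]):
    for j in range(lat.shape[1]):
        E_loop -= lat[i, j] * lat[i, j - 1]   # bond with left neighbour
        E_loop -= lat[i, j] * lat[i - 1, j]   # bond with upper neighbour

# Vectorised version, equivalent to the roll/multiply code above
E_fast = -np.sum(lat * np.roll(lat, 1, axis=1)
                 + lat * np.roll(lat, 1, axis=0))
```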
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster: using the roll, multiply and sum functions gives a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point is 200 steps: this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, by which point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps: for T=1 the energy and magnetisation have essentially converged by this point and change little afterwards, and the same is true of the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not quite as fully as at 100000 steps. I chose the slightly lower value to keep the run times of my Monte Carlo simulations in future tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off are included when calculating the averages of the energy, energy squared, magnetisation and magnetisation squared in the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only record E, E2, M and M2 if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the ILtemperaturerange.py file was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, including standard deviation error bars.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
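The error bars follow from the recorded moments, since the standard deviation of E is the square root of Var[E], the mean square minus the squared mean; a minimal sketch with illustrative numbers:&lt;br /&gt;

```python
import numpy as np

# Illustrative per-temperature moments (stand-ins for simulation output)
E_avg = np.array([-1.99, -1.80, -1.20, -0.80])    # mean energy per spin
E2_avg = np.array([3.97, 3.30, 1.60, 0.80])       # mean squared energy

# Standard deviation from the recorded moments
E_std = np.sqrt(E2_avg - E_avg ** 2)

# These would then be passed to plt.errorbar(T, E_avg, yerr=E_std)
```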
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
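The comparison plots can be produced with a loadtxt loop of the following shape. The files written here contain placeholder numbers purely so the sketch is self-contained, and the two-column layout is an assumption:&lt;br /&gt;

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

sizes = [2, 4, 8, 16, 32]
temps = np.linspace(0.5, 5.0, 10)

# Write placeholder datafiles in an assumed (T, <E> per spin) layout
for n in sizes:
    energy = -2.0 + temps / 3.0  # placeholder trend, not real data
    np.savetxt(f"{n}x{n}.dat", np.column_stack((temps, energy)))

# loadtxt is the reverse of savetxt: read each file back and overlay the curves
fig, ax = plt.subplots()
for n in sizes:
    data = np.loadtxt(f"{n}x{n}.dat")
    ax.plot(data[:, 0], data[:, 1], label=f"{n}x{n}")
ax.set_xlabel("Temperature")
ax.set_ylabel("Energy per spin")
ax.legend()
fig.savefig("energy_per_spin_all_sizes.png")
```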
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt; and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
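The identity can be checked numerically on a small example. The sketch below uses a two-level system in reduced units (k_B = 1); it is a standalone check, not part of the Ising scripts:&lt;br /&gt;

```python
import numpy as np

eps = np.array([0.0, 1.0])  # two-level system, reduced units with k_B = 1

def avg_E(T):
    """Boltzmann-weighted average energy at temperature T."""
    p = np.exp(-eps / T)
    p /= p.sum()
    return (p * eps).sum()

def heat_capacity(T):
    """C = Var[E] / T^2 with k_B = 1."""
    p = np.exp(-eps / T)
    p /= p.sum()
    var_E = (p * eps ** 2).sum() - (p * eps).sum() ** 2
    return var_E / T ** 2

# Central finite difference of <E> with respect to T for comparison
T, h = 1.5, 1e-5
dE_dT = (avg_E(T + h) - avg_E(T - h)) / (2 * h)
```

The finite-difference derivative of the average energy and the variance formula agree to within the discretisation error.&lt;br /&gt;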
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
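The core of the calculation is the variance identity Var[E] = &amp;lt;E^2&amp;gt; - &amp;lt;E&amp;gt;^2. A minimal sketch in reduced units (k_B = 1), with placeholder arrays standing in for the saved averages:&lt;br /&gt;

```python
import numpy as np

temps = np.linspace(0.5, 5.0, 10)
E = -1.8 + 0.3 * temps       # placeholder <E> per spin
E2 = E ** 2 + 0.1 * temps    # placeholder <E^2>, so Var[E] = 0.1*T

# Var[E] = <E^2> - <E>^2, hence C = Var[E] / (k_B T^2) with k_B = 1
C = (E2 - E ** 2) / temps ** 2
```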
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend across the above graphs is that the heat capacity peak shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
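The labelling pattern used for the comparison in &#039;&#039;Figure 14&#039;&#039; can be sketched as follows; the arrays are placeholders for the two datasets, and only the label=/legend() pattern is the point:&lt;br /&gt;

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

temps = np.linspace(0.5, 5.0, 30)
my_C = 0.4 + np.exp(-((temps - 2.30) / 0.3) ** 2)   # placeholder: my data
cpp_C = 0.4 + np.exp(-((temps - 2.27) / 0.3) ** 2)  # placeholder: C++ data

fig, ax = plt.subplots()
ax.plot(temps, my_C, "o", label="Python (my data)")  # label= feeds the legend
ax.plot(temps, cpp_C, "-", label="C++")
ax.set_xlabel("Temperature")
ax.set_ylabel("Heat capacity per spin")
ax.legend()  # draws the legend from the label= keywords
fig.savefig("16x16_comparison.png")
```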
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read the data and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below, &#039;&#039;Figure 15&#039;&#039; shows a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
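The whole-range fit can be reproduced with np.polyfit and np.polyval; the peaked data below is synthetic, chosen only to show the pattern:&lt;br /&gt;

```python
import warnings
import numpy as np

T = np.linspace(0.5, 5.0, 200)
C = 0.4 + 1.5 * np.exp(-((T - 2.3) / 0.1) ** 2)  # synthetic peaked data

# One polynomial across the whole range; at degree 35 polyfit warns that
# the problem is poorly conditioned, which is part of why the fit is poor
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    coeffs = np.polyfit(T, C, 35)
fitted = np.polyval(coeffs, T)
```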
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much smaller degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 15&#039;&#039;, the new fitted polynomial is a significantly better fit even at 3rd degree, represents my data around the peak of the graph much more accurately, and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
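Restricting the fit to the peak region amounts to masking the arrays before calling polyfit. A sketch with synthetic peaked data, using the T = 2.15-2.55 window quoted above:&lt;br /&gt;

```python
import numpy as np

T = np.linspace(0.5, 5.0, 200)
C = 0.4 + 1.5 * np.exp(-((T - 2.35) / 0.15) ** 2)  # synthetic peaked data

# Keep only the region around the peak, then fit a low-degree polynomial
mask = (T >= 2.15) & (T <= 2.55)
coeffs = np.polyfit(T[mask], C[mask], 3)

# Evaluate the fit on a fine grid over the same window and locate the maximum
T_fit = np.linspace(2.15, 2.55, 1000)
C_fit = np.polyval(coeffs, T_fit)
T_C_estimate = T_fit[np.argmax(C_fit)]
```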
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (the reciprocal of the lattice side length), used to determine the Curie Temperature of an infinite 2D Ising Model Lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
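The extrapolation itself is a straight-line fit of T_C(L) against 1/L whose intercept is the infinite-lattice value. A sketch with illustrative peak temperatures (not the actual values read from my datafiles):&lt;br /&gt;

```python
import numpy as np

# Lattice side lengths and illustrative peak temperatures T_C(L)
L = np.array([2, 4, 8, 16, 32])
T_C = np.array([2.50, 2.42, 2.35, 2.31, 2.29])

# Scaling relation T_C(L) = A/L + T_C(inf): fit against 1/L,
# and the intercept is the infinite-lattice estimate
slope, T_C_inf = np.polyfit(1.0 / L, T_C, 1)
```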
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of the Curie Temperature for each lattice size against 1/(lattice size).]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice, meaning that the temperature at which spontaneous magnetisation stops would actually be slightly lower than my estimate predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small; the level of agreement between the two values is somewhat surprising, and implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error in the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796435</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796435"/>
		<updated>2019-11-20T07:56:31Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* Section 8 - Locating the Curie Temperature */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtained: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = 2S+1&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is the total magnetisation of the system.&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;\Omega = 2(3)+1 = 7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln7 = 1.95 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses sign. Although only three bonds per site are counted as unique when summing the total energy, a given spin participates in six interactions (one with each of its 6 neighbours in 3D), and all six of these change from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, a change of &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt; per bond. The total energy therefore increases by &amp;lt;math&amp;gt;6 \times 2J = +12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = 2(1000)+1=2001&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt;\Omega = 2(1000-1)+1=1999&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy is &amp;lt;math&amp;gt;\Delta S =  k_B ln(1999) -  k_B ln(2001)=  k_B ln(\frac{1999}{2001}) = -0.001 k_B&amp;lt;/math&amp;gt;, which is a very small decrease in entropy.&lt;br /&gt;
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. To have zero entropy, all spins must be paired such that the magnetisation &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt; - which is only possible if the lattice contains an even number of lattice sites (N = even). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(\Omega) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse every configuration - far longer than the age of the universe - so direct enumeration is not a practical approach.&lt;br /&gt;
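The arithmetic behind this estimate, in plain Python, with the one-configuration-per-nanosecond rate as the stated (generous) assumption:&lt;br /&gt;

```python
n_configs = 2 ** 100          # two states per spin, 100 spins
rate = 1e9                    # configurations analysed per second (generous)
seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)
# seconds comes out around 1.27e21, thousands of times the
# age of the universe (roughly 4.4e17 s)
```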
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which indicates that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2 s&amp;lt;/math&amp;gt;&lt;br /&gt;
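The average and error can be combined from the repeat timings as follows; the three times below are illustrative values, not the exact readings from Figure 4:&lt;br /&gt;

```python
import statistics

times = [24.1, 24.3, 24.5]  # illustrative repeat timings in seconds
mean_t = statistics.mean(times)
# Standard error of the mean from the sample standard deviation
err_t = statistics.stdev(times) / len(times) ** 0.5
```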
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the combined left and top spin-product arrays&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster upon using the NumPy roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt; - roughly a 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py for 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are above the Curie temperature: spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps: this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at the higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, by which point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by this point the energy and magnetisation have largely converged for T=1 and will not change much, and the same is true for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: the energy and magnetisation have largely converged by this point, though not as fully as they would at 100000 steps. I chose the slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks were not excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded above the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the arrays of E, E2, M and M2 if above the specified cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with standard-deviation error bars.&lt;br /&gt;
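A minimal sketch of the saving step with savetxt() is shown below; the array names and placeholder values are illustrative assumptions, not the lab template.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Illustrative sketch of saving the results with savetxt(); the array
# names and placeholder values below are hypothetical, not the lab output.
temps = np.arange(0.5, 5.0, 0.5)            # temperatures simulated
avg_E = -2.0 * np.ones_like(temps)          # placeholder average energies
avg_M = np.ones_like(temps)                 # placeholder average magnetisations

data = np.column_stack((temps, avg_E, avg_M))
np.savetxt("8x8.dat", data)                 # name the file after the lattice size

check = np.loadtxt("8x8.dat")               # loadtxt reads the array back
```
&lt;br /&gt;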
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
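The plotting script follows the pattern sketched below; synthetic placeholder files stand in for the real .dat files here, and only the column layout (T, E, M) is assumed.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render plots without a display
import matplotlib.pyplot as plt

sizes = (2, 4, 8, 16, 32)
temps = np.arange(0.5, 5.0, 0.02)

# write synthetic placeholder files with the assumed column layout T, E, M
for size in sizes:
    n_spins = size * size
    fake_E = -2.0 * n_spins * np.ones_like(temps)
    fake_M = np.zeros_like(temps)
    np.savetxt("%dx%d.dat" % (size, size),
               np.column_stack((temps, fake_E, fake_M)))

# loadtxt is the reverse of savetxt: read each file and plot energy per spin
fig, ax = plt.subplots()
for size in sizes:
    data = np.loadtxt("%dx%d.dat" % (size, size))
    ax.plot(data[:, 0], data[:, 1] / (size * size), label="%dx%d" % (size, size))
ax.set_xlabel("Temperature")
ax.set_ylabel("Energy per spin")
ax.legend()
fig.savefig("energy_per_spin.png")
```
&lt;br /&gt;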
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
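This identity can be sanity-checked numerically on a toy two-level system (an illustrative example with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;, not part of the lab scripts): a finite-difference derivative of the average energy should match the variance expression.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Toy two-level system (energies 0 and 1, kB = 1) used to check that
# the derivative of the mean energy equals Var[E] / (kB * T^2).
levels = np.array([0.0, 1.0])

def average_energy(T):
    boltz = np.exp(-levels / T)      # Boltzmann factors exp(-E/kB T)
    p = boltz / boltz.sum()          # probabilities p_i
    return np.sum(p * levels)

def heat_capacity_from_variance(T):
    boltz = np.exp(-levels / T)
    p = boltz / boltz.sum()
    var_E = np.sum(p * levels**2) - np.sum(p * levels) ** 2
    return var_E / T**2

T, h = 1.5, 1e-5
# C from the definition, via a central finite difference of the mean energy
C_derivative = (average_energy(T + h) - average_energy(T - h)) / (2 * h)
C_variance = heat_capacity_from_variance(T)
```
&lt;br /&gt;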
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the graph shifts towards lower temperatures as the size of the matrix used increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 15&#039;&#039; is a plot of my heat capacity against temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and does not capture the peak.&lt;br /&gt;
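The fit itself uses the NumPy polyfit and polyval functions; a sketch is shown below, with a synthetic peaked curve standing in for the real 16x16 heat-capacity data.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Sketch of a full-range polynomial fit to C(T); the peaked curve below is
# synthetic, standing in for the real heat-capacity data.
T = np.linspace(0.5, 5.0, 200)
C = 1.0 / (0.05 + (T - 2.3) ** 2)      # placeholder peaked curve

coeffs = np.polyfit(T, C, 35)          # degree-35 fit, as in Figure 15
C_fit = np.polyval(coeffs, T)          # evaluate the fitted polynomial
```
&lt;br /&gt;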
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 15 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 16&#039;&#039;, which shows a new polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 16 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 15&#039;&#039;, the new fitted polynomial is a significantly better fit even at 3rd degree, represents my data around the peak much more accurately, and will make it easier to determine the maximum value of the heat capacity.&lt;br /&gt;
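The restriction step can be sketched as follows (again with synthetic data in place of the real 16x16 results): mask the arrays to the peak region, fit a low-degree polynomial there, and read off the maximum.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Sketch of restricting the fit to the peak region; synthetic data again
# stands in for the real 16x16 heat capacity.
T = np.linspace(0.5, 5.0, 500)
C = 1.0 / (0.05 + (T - 2.35) ** 2)

mask = (T >= 2.15) & (2.55 >= T)       # keep only the region near the peak
coeffs = np.polyfit(T[mask], C[mask], 3)

# locate the maximum of the fitted cubic on a fine grid
T_fine = np.linspace(2.15, 2.55, 4001)
T_max = T_fine[np.argmax(np.polyval(coeffs, T_fine))]
```
&lt;br /&gt;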
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 17&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie temperature of an infinite 2D Ising model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the raw data, obtained by finding the temperature at which the heat capacity is a maximum for each lattice, and the red line is a linear fit to the data whose y-intercept gives the Curie temperature of the infinite 2D lattice.&lt;br /&gt;
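The extrapolation reduces to a straight-line fit of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt;; a sketch is given below, where the &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; values are hypothetical placeholders rather than my measured data.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Sketch of the scaling-relation fit: T_C(L) plotted against 1/L and fitted
# with a straight line, whose intercept estimates T_C for the infinite
# lattice. The Tc values below are hypothetical placeholders.
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
Tc = np.array([2.50, 2.40, 2.34, 2.31, 2.29])

slope, intercept = np.polyfit(1.0 / L, Tc, 1)
Tc_infinity = intercept    # estimate of the infinite-lattice Curie temperature
```
&lt;br /&gt;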
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 17 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie temperature of the infinite lattice, so spontaneous magnetisation would actually stop at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is remarkably small, and the level of agreement is somewhat surprising; it implies that the errors in my estimates of the Curie temperature for each lattice size are relatively small. A potential source of error in the values of the Curie temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796434</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796434"/>
		<updated>2019-11-20T07:55:23Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: T, E, E^2, M, M^2, C (the final...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
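This result can be checked numerically for small aligned lattices; below is a sketch (not the lab energy() implementation) that uses np.roll to apply the periodic boundary conditions.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Check that an all-spins-up lattice has E = -DNJ (with J = 1).
# np.roll applies the periodic boundary conditions; summing over both
# shifts counts every interaction twice, hence the factor of -1/2.
def ising_energy(lattice):
    total = 0.0
    for axis in range(lattice.ndim):
        for shift in (1, -1):
            total += np.sum(lattice * np.roll(lattice, shift, axis=axis))
    return -0.5 * total

row = np.ones(3)            # D = 1, N = 3: expect -1 * 3 * 1 = -3
square = np.ones((5, 5))    # D = 2, N = 25: expect -2 * 25 * 1 = -50
E_row = ising_energy(row)
E_square = ising_energy(square)
```
&lt;br /&gt;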
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = 2S+1&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is the total magnetisation of the system.&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;\Omega = 2(3)+1 = 7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln7 = 1.95 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbouring spins reverses sign, and this increases the total energy of the system. Each of these 6 interactions changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = 2(1000)+1=2001&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt;\Omega = 2(1000-1)+1=1999&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy is &amp;lt;math&amp;gt;\Delta S =  k_B ln(1999) -  k_B ln(2001)=  k_B ln(\frac{1999}{2001}) = -0.001 k_B&amp;lt;/math&amp;gt;, which is a very small decrease in entropy.&lt;br /&gt;
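The energy change for a single flip can also be verified numerically with the same style of periodic-boundary energy function (a sketch with &amp;lt;math&amp;gt;J = 1&amp;lt;/math&amp;gt;, not the lab code):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Energy change when one spin flips in a fully aligned 10x10x10 lattice
# (D = 3, N = 1000, J = 1); np.roll applies the periodic boundaries.
def ising_energy(lattice):
    total = 0.0
    for axis in range(lattice.ndim):
        for shift in (1, -1):
            total += np.sum(lattice * np.roll(lattice, shift, axis=axis))
    return -0.5 * total

cube = np.ones((10, 10, 10))
E_ground = ising_energy(cube)             # lowest energy, -DNJ
cube[4, 4, 4] = -1.0                      # flip a single spin
delta_E = ising_energy(cube) - E_ground   # energy cost of the flip
```
&lt;br /&gt;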
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. To have zero entropy, all spins must be paired such that the magnetisation &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, which is only possible if the lattice contains an even number of lattice sites (N even). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(\Omega) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
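For reference, the nested loops above collapse to a single NumPy call; a sketch, assuming the lattice is a NumPy array of &amp;lt;math&amp;gt;\pm 1&amp;lt;/math&amp;gt; spins:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Vectorised equivalent of the loop-based magnetisation(): np.sum adds
# every spin in the lattice in one call.
lattice = np.array([[ 1, -1,  1],
                    [ 1,  1, -1],
                    [-1,  1,  1]])   # example 3x3 configuration
total_magnetisation = int(np.sum(lattice))
```
&lt;br /&gt;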
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations of the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse every configuration, which is far longer than the age of the universe and therefore not a practical approach.&lt;br /&gt;
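The arithmetic can be checked in a few lines (the age of the universe is taken as an approximate accepted value):&lt;br /&gt;
&lt;br /&gt;
```python
# Back-of-the-envelope check of the enumeration time for 100 spins,
# assuming 1e9 configurations analysed per second.
n_configs = 2 ** 100
seconds = n_configs / 1e9
years = seconds / (60 * 60 * 24 * 365)
age_of_universe_years = 1.4e10          # approximate accepted value
times_longer = years / age_of_universe_years
```
&lt;br /&gt;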
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts the spin if the move is rejected; otherwise the flip is kept&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to the running total of cycles&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms, as I expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
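The quoted average and its error come from the repeat timings as a mean and standard error; a minimal sketch (the three timings below are illustrative values, not the actual measurements)ː&lt;br /&gt;

```python
import numpy as np

# Mean and standard error of repeat timings (illustrative numbers).
times = np.array([24.1, 24.2, 24.6])                 # seconds per 2000 steps, three repeats
mean = times.mean()
std_err = times.std(ddof=1) / np.sqrt(len(times))    # standard error of the mean
print(f"{mean:.1f} s ± {std_err:.1f} s")             # prints "24.3 s ± 0.2 s"
```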
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of each spin with the spin to its left&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of each spin with the spin above it&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(left+top) #sums the two neighbour products over the whole lattice and negates to give the total energy&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
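As a sanity check, the same roll/multiply energy can be evaluated outside the class on an all-parallel lattice, where it should recover &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt; with D=2 (a standalone sketch with J=1 and periodic boundaries, as in the report)ː&lt;br /&gt;

```python
import numpy as np

# Standalone version of the vectorised energy: each spin contributes two
# unique bonds (left and top), so the double loop is avoided entirely.
def energy(lattice):
    left = np.multiply(np.roll(lattice, 1, axis=1), lattice)   # left-neighbour products
    top = np.multiply(np.roll(lattice, -1, axis=0), lattice)   # top-neighbour products
    return -np.sum(left + top)

# All-up 8x8 lattice: 64 spins, 2 bonds each, so E = -2*64 = -128 = -DNJ.
print(energy(np.ones((8, 8))))
```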
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster with the roll, multiply and sum functions, giving a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore be determined from the T=1 and T=2 graphs only for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 stepsː this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, which supports my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 stepsː by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 stepsː by this point the energy and magnetisation have largely converged for T=1 and will not change much, and likewise for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; above shows that by 50000 steps the energy and magnetisation have largely converged, although not as fully as at 100000 steps, so a cut-off of 50000 steps was chosen. I chose this slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks were not excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off are added to the arrays from which the statistics() function calculates the average energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only records E, E2, M and M2 once the 50000-step cut-off has been passed&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps at each temperature, with the first 1000 steps excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
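This relation is straightforward to apply to a recorded list of energies; a minimal sketch in reduced units with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt; (the energy samples below are synthetic stand-ins for a Monte Carlo record)ː&lt;br /&gt;

```python
import numpy as np

# Heat capacity from the variance of the energy: C = Var[E] / (kB * T^2),
# with kB = 1 in reduced units. The samples here are synthetic.
rng = np.random.default_rng(0)
E = rng.normal(loc=-100.0, scale=3.0, size=10000)   # hypothetical energy record
T = 2.0

var_E = np.mean(E**2) - np.mean(E)**2    # Var[E] = <E^2> - <E>^2
C = var_E / T**2
print(C)
```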
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the graph shifts towards lower temperatures as the size of the matrix used increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 14&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 14 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script used to read the data and plot the fitted polynomial can be found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even with a polynomial of such a high degree, the fit is poor and does not capture the peak of the curve.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new fitted polynomial is a significantly better fit even at 3rd degree, represents my data around the peak much more accurately, and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
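The restricted fit can be sketched as follows (synthetic peaked data stand in for the 16x16 heat-capacity file; the fitting range T = 2.15-2.55 and degree 3 match those used above)ː&lt;br /&gt;

```python
import numpy as np

# Fit a low-order polynomial only around the peak of C(T), then locate the
# maximum on a fine grid. Synthetic data stand in for the real datafile.
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.05)           # hypothetical peaked C(T)

mask = (T >= 2.15) & (T <= 2.55)               # restrict the fit to the peak region
coeffs = np.polyfit(T[mask], C[mask], 3)       # 3rd-degree polynomial
T_fine = np.linspace(2.15, 2.55, 1000)
T_max = T_fine[np.argmax(np.polyval(coeffs, T_fine))]   # temperature of maximum C
print(T_max)
```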
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt;, the inverse lattice size, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data - the temperature at which the Heat Capacity is a maximum for each lattice size - and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of the Curie Temperature for each lattice size against 1/(lattice size).]]&lt;br /&gt;
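The scaling fit itself is a straight line in 1/L whose intercept estimates the infinite-lattice Curie Temperature; a minimal sketch (the T_C values per lattice size below are illustrative, not my measured ones)ː&lt;br /&gt;

```python
import numpy as np

# Finite-size scaling: T_C(L) = A/L + T_C(inf), so a linear fit of T_C
# against 1/L gives T_C(inf) as the intercept. Illustrative data.
L = np.array([2, 4, 8, 16, 32])                  # lattice side lengths
Tc = np.array([2.55, 2.42, 2.34, 2.30, 2.29])    # hypothetical peak positions

slope, intercept = np.polyfit(1.0 / L, Tc, 1)    # straight-line fit in 1/L
print(intercept)                                 # estimate of T_C for the infinite lattice
```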
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice, meaning that spontaneous magnetisation would actually stop at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error in the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796433</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796433"/>
		<updated>2019-11-20T07:54:45Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, \mathrm{V...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice - this is why the total is halved by the prefactor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt;. The sum can therefore be written asː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \in \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt; lattice sites.&lt;br /&gt;
&lt;br /&gt;
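This result can be verified with a short brute-force check. The sketch below is my own illustration (the function name is hypothetical, not from the provided scripts), assuming a periodic 1D chain with J = 1 and counting each bond once:

```python
def ising_energy_1d(spins, J=1.0):
    # Energy of a periodic 1D Ising chain, counting each bond once;
    # the modulo index applies the periodic boundary condition.
    N = len(spins)
    return -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))

print(ising_energy_1d([+1, +1, +1]))  # -3.0, matching E = -DNJ for D=1, N=3
```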
In the lowest energy state all spins are parallel, which can be achieved in exactly two ways: all spins up or all spins down. The multiplicity of this state is therefore &amp;lt;math&amp;gt;\Omega = 2&amp;lt;/math&amp;gt;, independent of the number of spins.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so for the ground state &amp;lt;math&amp;gt;S =  k_B ln(2) = 0.693 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site can be assigned three unique bonds (to the neighbours to its left, top and front), giving &amp;lt;math&amp;gt;DN&amp;lt;/math&amp;gt; bonds in total. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours reverses sign and becomes negative, which increases the total energy of the system. Each of these 6 bonds changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system is &amp;lt;math&amp;gt;\Omega = 2&amp;lt;/math&amp;gt; (all spins up or all spins down). After the flip, the single reversed spin can sit on any of the &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; sites, so the multiplicity becomes &amp;lt;math&amp;gt;\Omega = 2\times 1000 = 2000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(2000) -  k_B ln(2) =  k_B ln(1000) = 6.91 k_B&amp;lt;/math&amp;gt;, so the system gains entropy as it moves away from the fully ordered ground state.&lt;br /&gt;
&lt;br /&gt;
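The bond counting can be double-checked numerically. This is my own sketch (not part of the provided scripts), assuming a periodic 10x10x10 lattice with J = 1:

```python
import numpy as np

def ising_energy(lattice, J=1.0):
    # Total energy: one periodic roll per dimension counts each bond once.
    E = 0.0
    for axis in range(lattice.ndim):
        E += np.sum(lattice * np.roll(lattice, 1, axis=axis))
    return -J * E

lat = np.ones((10, 10, 10))      # D=3, N=1000, all spins up
E0 = ising_energy(lat)           # ground-state energy, -DNJ = -3000
lat[0, 0, 0] = -1                # flip a single spin
dE = ising_energy(lat) - E0      # energy cost of the flip: 6 bonds reverse
print(E0, dE)                    # -3000.0 12.0
```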
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
At absolute zero there is no thermal energy available, so the system adopts its lowest energy configuration, in which all of the spins are parallel. For a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, the expected magnetisation is therefore &amp;lt;math&amp;gt;M = +1000&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;M = -1000&amp;lt;/math&amp;gt;, depending on the orientation along which the spins align. This is consistent with the 3rd Law of Thermodynamics: the fully ordered state has &amp;lt;math&amp;gt;\Omega = 2&amp;lt;/math&amp;gt;, so its entropy &amp;lt;math&amp;gt;S =k_B ln(2)&amp;lt;/math&amp;gt; is negligible compared with that of any disordered configuration.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums every spin product and negates to give the total energy (J=1)&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate every configuration (far longer than the age of the universe), so this brute-force approach is not practical.&lt;br /&gt;
&lt;br /&gt;
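The arithmetic can be reproduced in a few lines (my own illustration):

```python
n_configs = 2 ** 100                     # two states per spin, 100 spins
seconds = n_configs / 1e9                # at 1e9 configurations per second
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.2e} s = {years:.2e} years")  # 1.27e+21 s = 4.02e+13 years
```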
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
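The acceptance test in montecarlostep() implements the Metropolis criterion. Isolated as a standalone function (my own sketch, with the energy in reduced units so that k_B = 1), it reads:

```python
import numpy as np

def accept_flip(deltaE, T, random_number):
    # Metropolis criterion: always accept moves that lower the energy;
    # accept uphill moves only with probability exp(-deltaE/T).
    return deltaE <= 0 or random_number < np.exp(-deltaE / T)

print(accept_flip(-4.0, 1.0, 0.5))  # True: downhill, always accepted
print(accept_flip(4.0, 1.0, 0.5))   # False: exp(-4) ~ 0.018 < 0.5
```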
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, in which all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which confirms that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
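For reference, repeat timings can be combined into a mean with a standard error as below. The sketch is my own and the three numbers are placeholders, not my measured times:

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])             # placeholder repeat timings / s
mean = times.mean()
error = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
print(f"{mean:.1f} +/- {error:.1f} s")
```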
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the combined left and top spin-product arrays&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
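As a sanity check (my own, not part of the experiment scripts), the vectorised energy can be compared against the original double loop on a random lattice; both count the left and top bonds of every site once, so they should agree exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
lat = rng.choice([-1, 1], size=(8, 8))

# Vectorised version: rolling by one along each axis pairs every spin
# with its left and top neighbours (periodic boundaries come for free).
E_fast = -np.sum(lat * np.roll(lat, 1, axis=1) + lat * np.roll(lat, 1, axis=0))

# Original double loop: Python's negative indices give the same periodic wrap.
E_slow = 0
for i in range(8):
    for j in range(8):
        E_slow -= lat[i][j] * lat[i][j - 1] + lat[i][j] * lat[i - 1][j]

print(E_fast == E_slow)  # True
```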
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster: using the roll, multiply and sum functions gives a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, probably because these temperatures are higher than the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures, the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energy and magnetisation averages is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has been overcome for T=2 too.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps: by this point the energy and magnetisation have largely converged for T=1 and will not change much further, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Based on &#039;&#039;Figure 10&#039;&#039;, a cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not as fully as at 100000 steps. I chose a slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks were not excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the ILtemperaturerange.py file was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
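A minimal version of the plotting script used above is sketched below. The file name, two-column layout and placeholder data are my own illustration of the savetxt/loadtxt round trip, not my real results:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                      # render without a display
import matplotlib.pyplot as plt

# Write a placeholder datafile the way ILtemperaturerange.py would,
# then read it back and plot energy per spin (64 spins for an 8x8 lattice).
T = np.arange(0.5, 5.0, 0.5)
E = -128.0 * np.ones_like(T)               # placeholder total average energy
np.savetxt("8x8.dat", np.column_stack((T, E)))

data = np.loadtxt("8x8.dat")
fig, ax = plt.subplots()
ax.plot(data[:, 0], data[:, 1] / 64, label="8x8")
ax.set_xlabel("Temperature")
ax.set_ylabel("Energy per spin")
ax.legend()
fig.savefig("energy_per_spin.png")
```

Looping the loadtxt/plot calls over each saved lattice size puts all of the curves on one set of axes.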
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum over all microstates of the probability of that microstate multiplied by its energy, which is defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be written in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Writing &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
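The identity can be verified numerically on a toy system. My own sketch below compares Var[E]/T^2 with a finite-difference derivative of the average energy for a hypothetical two-level system (k_B = 1 in reduced units):

```python
import numpy as np

def boltzmann_averages(levels, T):
    # Average energy and average squared energy at temperature T (k_B = 1)
    levels = np.asarray(levels)
    w = np.exp(-levels / T)
    p = w / w.sum()                       # Boltzmann probabilities
    return np.sum(p * levels), np.sum(p * levels**2)

levels = [0.0, 1.0]                       # hypothetical two-level system
T, h = 0.5, 1e-5
E, E2 = boltzmann_averages(levels, T)
C_from_var = (E2 - E**2) / T**2           # C = Var[E] / (k_B T^2)
C_numeric = (boltzmann_averages(levels, T + h)[0]
             - boltzmann_averages(levels, T - h)[0]) / (2 * h)
print(C_from_var, C_numeric)              # the two values agree closely
```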
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
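The core of that script is the variance relation derived in the previous task; a minimal sketch (the function name and per-spin normalisation are my own assumptions about the saved data):

```python
import numpy as np

def heat_capacity_per_spin(T, E_mean, E2_mean, n_spins):
    # C = Var[E] / (k_B T^2), divided by the number of spins (k_B = 1)
    var_E = E2_mean - E_mean**2
    return var_E / (n_spins * T**2)

print(heat_capacity_per_spin(1.0, 0.0, 64.0, 64))  # 1.0
```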
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 13 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak in the heat capacity shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
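A sketch of how such a comparison plot can be produced with loadtxt, plot(label=...) and legend(). The filename and the stand-in data are illustrative; the real six-column files (T, E, E2, M, M2, C) come with the C++ runs:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                       # draw off-screen; no display needed
import matplotlib.pyplot as plt

# Stand-in for a provided C++ file; the six columns are T, E, E2, M, M2, C.
T = np.linspace(0.5, 5.0, 10)
toy = np.column_stack([T, -T, T ** 2, 0 * T, 0 * T, 1 / T])
np.savetxt("cpp16x16.dat", toy)             # hypothetical filename
data_cpp = np.loadtxt("cpp16x16.dat")       # read the six columns back

fig, ax = plt.subplots()
ax.plot(data_cpp[:, 0], data_cpp[:, 5], label="C++ data")  # label feeds the legend
ax.plot(T, 1.05 / T, label="My data")       # placeholder for my own results
ax.set_xlabel("Temperature")
ax.set_ylabel("Heat capacity per spin")
ax.legend()                                 # draws the legend from the labels
fig.savefig("comparison16x16.png")
```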
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, with a polynomial of degree 35 fitted to it. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
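A sketch of the fitting step: polyfit returns the least-squares polynomial coefficients and polyval evaluates them. The synthetic peak and the degree are illustrative; as noted above, even high degrees struggle to follow a sharp peak over the whole range:

```python
import numpy as np

# Synthetic C(T) curve with a peak near T = 2.3, standing in for a datafile.
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.1)

# Fit a high-degree polynomial over the FULL temperature range.
coeffs = np.polyfit(T, C, 15)            # degree 15; high degrees remain a poor fit
C_fit = np.polyval(coeffs, T)
residual = np.max(np.abs(C_fit - C))     # worst-case mismatch across the range
```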
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much narrower range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree. It represents my data around the peak much more accurately and will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
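The restricted fit can be sketched with a boolean mask, so that polyfit only sees the points near the peak (synthetic data again; the window matches the T = 2.15-2.55 range used above):

```python
import numpy as np

T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.1)           # synthetic peak near T = 2.3

# Keep only the points around the peak, then fit a low-degree polynomial to them.
mask = (T >= 2.15) & (T <= 2.55)
coeffs = np.polyfit(T[mask], C[mask], 3)      # degree 3, as in Figure 17
C_peak_fit = np.polyval(coeffs, T[mask])
residual = np.max(np.abs(C_peak_fit - C[mask]))  # small: the restricted fit is good
```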
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data - the temperature at which the Heat Capacity is a maximum for each lattice size - and the red line is a linear fit whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of Curie Temperature against 1/Lattice Size for each lattice size.]]&lt;br /&gt;
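A sketch of the scaling analysis: with the relation T_C,L = A/L + T_C,inf, a straight-line fit of T_C,L against 1/L gives T_C,inf as the intercept. The peak temperatures below are synthetic, constructed to obey the relation exactly; the real ones come from the polynomial maxima of each datafile:

```python
import numpy as np

# Illustrative peak temperatures per lattice side length L (NOT my measured values).
L = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
T_CL = 2.27 + 1.0 / L                     # synthetic data obeying T_CL = A/L + T_Cinf

# Straight-line fit of T_CL against 1/L: the intercept estimates T_C as L -> infinity.
slope, intercept = np.polyfit(1.0 / L, T_CL, 1)

# Two-column text file: lattice side length and the estimated T_C for that size.
np.savetxt("curie_temps.dat", np.column_stack([L, T_CL]))
```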
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice, so spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising and indicates that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error from the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796432</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796432"/>
		<updated>2019-11-20T07:54:34Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* Section 6 - The effect of system size */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means every interaction within the system is counted twice - this is why the prefactor of &amp;lt;math&amp;gt;-\frac{1}{2}&amp;lt;/math&amp;gt; halves the total. The sum therefore becomes: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The lowest energy state can be realised in two ways - all spins up or all spins down - so its multiplicity is &amp;lt;math&amp;gt;\Omega = 2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S = k_B ln(2) = 0.693 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
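The counting above can be checked numerically. A short sketch (my own helper function, not part of IsingLattice.py) that evaluates the energy expression for a periodic 1D chain:

```python
import numpy as np

def ising_energy_1d(spins, J=1.0):
    """E = -(J/2) * sum_i sum_{j in neighbours(i)} s_i s_j for a periodic 1D chain."""
    s = np.asarray(spins)
    right = np.roll(s, -1)                 # periodic right neighbour of each site
    left = np.roll(s, 1)                   # periodic left neighbour of each site
    return -0.5 * J * np.sum(s * right + s * left)

E_ground = ising_energy_1d([1, 1, 1])      # all-parallel chain: expect -DNJ = -3J
```

For D=1, N=3 this reproduces E = -3J, matching the result above.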
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has six nearest neighbours, although only three unique interactions per site (left, top and front) are counted when summing the total energy. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; neighbours&#039; spins reverses sign, and each of those interactions changes from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;\Delta E = 2 \times 2D \times J = +12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the system is in one of its fully aligned ground states, for which &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; within that sector. After the flip, any one of the &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; spins could be the flipped one, so &amp;lt;math&amp;gt;\Omega = 1000&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy is &amp;lt;math&amp;gt;\Delta S = k_B ln(1000) - k_B ln(1) = 6.91 k_B&amp;lt;/math&amp;gt;, so the system gains entropy by moving out of the fully ordered state.&lt;br /&gt;
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
At absolute zero the system occupies its lowest energy configuration, in which all of the spins are parallel. For an Ising lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt; the expected magnetisation is therefore &amp;lt;math&amp;gt;M = \pm 1000&amp;lt;/math&amp;gt; (a magnetisation per spin of &amp;lt;math&amp;gt;\pm 1&amp;lt;/math&amp;gt;), depending on whether the spins align all up or all down. This is consistent with the 3rd Law of thermodynamics: the ground state comprises only the two fully aligned configurations, so its entropy, &amp;lt;math&amp;gt;S = k_B ln(2)&amp;lt;/math&amp;gt;, is negligibly small at 0K.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to evaluate a single average - far longer than the age of the universe (&amp;lt;math&amp;gt;\approx 4\times 10^{17} s&amp;lt;/math&amp;gt;) - and therefore not a practical approach.&lt;br /&gt;
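The arithmetic can be sketched directly:

```python
configs = 2 ** 100                        # two states per spin, 100 spins
rate = 1e9                                # optimistic: 10^9 configurations per second
seconds = configs / rate                  # ~1.27e21 s
years = seconds / (60 * 60 * 24 * 365)    # ~4e13 years, vs ~1.4e10 for the universe
```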
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. As I expected, spontaneous magnetisation occurs, which confirms that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
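The average and its error follow the usual pattern; a sketch with hypothetical timings (the real values are those shown in Figure 4):

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])               # hypothetical repeat timings / s
mean = times.mean()
std_err = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
```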
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
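The loop and vectorised versions can be checked against each other. A standalone sketch (free functions rather than the class methods above), assuming the same periodic-boundary convention:

```python
import numpy as np

def energy_loop(lat):
    """Double-loop energy: each spin paired with its left and top neighbours (PBC)."""
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j] * lat[i][j - 1]   # left neighbour (index -1 wraps)
            total += lat[i][j] * lat[i - 1][j]   # top neighbour (index -1 wraps)
    return -total

def energy_fast(lat):
    """Vectorised energy using numpy roll and multiply."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))    # random spin lattice
# both versions agree on any lattice, and give -2N on the all-parallel one
```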
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt; - roughly a 30-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point at which the energy and magnetisation per spin become constant, which is about 30 steps. For T=3 and T=5 the graphs do not converge: these temperatures are likely above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor &amp;lt;math&amp;gt;e^{-\Delta E/T}&amp;lt;/math&amp;gt; is closer to 1, allowing the system to move away from the lowest energy state more easily. Moving forwards, suitable cut-off points for the larger matrices will therefore be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
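The role of the Boltzmann factor can be made concrete: for an uphill move of, say, &amp;lt;math&amp;gt;\Delta E = +4&amp;lt;/math&amp;gt; (an illustrative value in reduced units, not one taken from the simulations), the Metropolis acceptance probability grows quickly with temperature:

```python
import numpy as np

dE = 4.0                                         # illustrative uphill energy change
temps = [1.0, 2.0, 3.0, 5.0]
probs = [float(np.exp(-dE / T)) for T in temps]  # Metropolis acceptance probabilities
# roughly 0.02 at T=1 but 0.45 at T=5, so high-T runs keep leaving the ground state
```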
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point is 200 steps: this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from the T=1 and T=2 graphs only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has been overcome for T=2 too.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, by which point the energy and magnetisation have largely converged and change little thereafter for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, although not quite as fully as they would by 100000 steps. I chose the slightly lower value to keep the run times of my Monte Carlo simulations in the later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the average energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only record E, E2, M and M2 once past the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps; the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
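A minimal sketch of such a comparison script (the filenames, and the assumption that each .dat file stores T in column 0 and the total average energy in column 1, must be matched against your own savetxt call; the function names are mine):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; harmless inside a notebook
import matplotlib.pyplot as plt

def energy_per_spin(path, n_spins):
    """Load a saved temperature scan and return (T, E per spin).
    Assumes columns are T then <E> for the whole lattice."""
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1] / n_spins

def plot_energy_per_spin(sizes, out="energy_per_spin.png"):
    """Overlay energy-per-spin curves for several lattice sizes."""
    fig, ax = plt.subplots()
    for L in sizes:
        T, E = energy_per_spin(f"{L}x{L}.dat", L * L)  # e.g. 8x8.dat
        ax.plot(T, E, label=f"{L}x{L}")
    ax.set(xlabel="Temperature", ylabel="Energy per spin")
    ax.legend()
    fig.savefig(out)
```

Dividing by the number of spins puts all lattice sizes on the same scale, which is what makes the curves directly comparable.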
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 12 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum over all microstates of the probability of each microstate multiplied by its energy, defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Writing &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, using the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
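The identity can be sanity-checked numerically on a toy two-level system (energies 0 and 1, reduced units with k_B = 1; a sketch only, with names of my own choosing): the fluctuation formula should agree with a finite-difference derivative of the average energy.

```python
import numpy as np

# Check C = Var[E] / (k_B T^2) on a two-level system, k_B = 1
levels = np.array([0.0, 1.0])

def averages(T):
    """Return <E> and <E^2> under the Boltzmann distribution at temperature T."""
    weights = np.exp(-levels / T)
    p = weights / weights.sum()
    return float(np.dot(p, levels)), float(np.dot(p, levels ** 2))

T = 1.5
E, E2 = averages(T)
var_form = (E2 - E ** 2) / T ** 2                                # fluctuation formula
h = 1e-5
num_deriv = (averages(T + h)[0] - averages(T - h)[0]) / (2 * h)  # C = d<E>/dT
print(var_form, num_deriv)
```

The two numbers agree to within the finite-difference error, as the derivation predicts.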
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
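The core calculation is a single line once the averages are loaded. A minimal sketch in reduced units (k_B = 1); the function name is mine, and the normalisation assumes the stored averages are totals for the whole lattice (adjust it if your files already hold per-spin values):

```python
def heat_capacity_per_spin(T, E_mean, E2_mean, n_spins):
    """Heat capacity per spin from the fluctuation formula,
    C = (<E^2> - <E>^2) / (k_B T^2), with k_B = 1 (reduced units)."""
    return (E2_mean - E_mean ** 2) / (n_spins * T ** 2)
```

For example, with the column order T, &lt;E&gt;, &lt;E^2&gt; (an assumption to check against your own files), `data = np.loadtxt("8x8.dat")` followed by `C = heat_capacity_per_spin(data[:, 0], data[:, 1], data[:, 2], 64)` gives the curve to plot.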
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend across the above graphs is that the peak of the heat capacity shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script used to read the data and plot the fitted polynomial can be found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Upon comparison with &#039;&#039;Figure 16&#039;&#039;, the new fitted polynomial is a significantly better fit, even at only 3rd degree; it represents my data around the peak much more accurately and will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
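The restricted fit can be sketched with np.polyfit and np.polyval (the window bounds and the degree are per-lattice choices, and the function name is my own, not from the original script):

```python
import numpy as np

def fit_peak(T, C, t_min, t_max, degree=3):
    """Fit a polynomial only to the heat-capacity data inside
    [t_min, t_max] and return the temperature of the fitted maximum
    together with the polynomial coefficients."""
    sel = (T >= t_min) & (T <= t_max)          # restrict to the peak region
    coeffs = np.polyfit(T[sel], C[sel], degree)
    T_fine = np.linspace(t_min, t_max, 1000)   # dense grid for the maximum
    C_fit = np.polyval(coeffs, T_fine)
    return T_fine[np.argmax(C_fit)], coeffs
```

Evaluating the fitted polynomial on a dense grid and taking the argmax gives a smooth estimate of the peak position, which is the quantity needed in the next task.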
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
Figure 18 below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity was a maximum for each lattice size, and the red line is a linear fit whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of Curie Temperature against 1/(Lattice Size) for each lattice.]]&lt;br /&gt;
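The extrapolation itself is a straight-line fit of the peak temperatures against 1/L; a sketch (the helper name is mine, and any example values used with it would be synthetic, not my measured data):

```python
import numpy as np

def curie_extrapolate(sizes, tc_values):
    """Fit T_C(L) = T_C(inf) + A/L; the intercept of the straight line
    in 1/L is the infinite-lattice Curie temperature (units of J/k_B)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(sizes, dtype=float),
                                  np.asarray(tc_values, dtype=float), 1)
    return intercept, slope
```

Because the scaling relation is linear in 1/L, a first-degree np.polyfit is all that is needed, and the intercept can be read off directly.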
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice, so spontaneous magnetisation would actually stop at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error in the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796431</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796431"/>
		<updated>2019-11-20T07:54:17Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an 8\times 8 lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that all of the interactions within the system are counted twice; hence the total energy needs to be halved, resulting in the following formula being obtained: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
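This lowest-energy result can also be checked by brute force: a short sketch (assuming J = 1, periodic boundaries, and each bond counted once) that enumerates all 2^3 configurations of the three-spin chain.

```python
import itertools

def chain_energy(spins):
    """Total energy of a periodic 1D chain with J = 1, each bond counted once."""
    N = len(spins)
    return -sum(spins[i] * spins[(i + 1) % N] for i in range(N))

# enumerate every configuration of three +/-1 spins
energies = [chain_energy(s) for s in itertools.product([1, -1], repeat=3)]
print(min(energies))  # -3, i.e. -DNJ with D = 1, N = 3
```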
&lt;br /&gt;
The multiplicity of the system, &amp;lt;math&amp;gt;\Omega = 2S+1&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is the total magnetisation of the system.&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;\Omega = 2(3)+1 = 7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln(7) = 1.95 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses sign, which increases the total energy of the system. In 3D the flipped spin takes part in 6 spin-spin interactions, each changing from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = 2(1000)+1=2001&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt;\Omega = 2(1000-1)+1=1999&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy is &amp;lt;math&amp;gt;\Delta S =  k_B ln(1999) -  k_B ln(2001)=  k_B ln(\frac{1999}{2001}) = -0.001 k_B&amp;lt;/math&amp;gt;, a very small decrease in entropy.&lt;br /&gt;
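The arithmetic can be checked quickly (in units of k_B):

```python
import math

# delta_S / k_B = ln(1999) - ln(2001) = ln(1999/2001)
delta_S = math.log(1999) - math.log(2001)
print(round(delta_S, 4))  # approximately -0.001
```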
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. For zero entropy, all spins must be paired such that the magnetisation &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt; - which is only possible if the lattice contains an even number of lattice sites (N even). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(\Omega) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system. This is far longer than the age of the universe, so the approach is not practical.&lt;br /&gt;
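A quick check of the arithmetic:

```python
# 2^100 states analysed at 1e9 states per second, converted to years
states = 2 ** 100
seconds = states / 1e9       # ~1.27e21 s
years = seconds / 3.156e7    # ~4e13 years, roughly 3000 times the
                             # ~1.4e10-year age of the universe
print(f"{seconds:.3g} s = {years:.3g} years")
```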
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms, as I expected, that spontaneous magnetisation occurs, and also shows that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \ s \pm 0.2 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
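A quick way to validate the vectorised version is to compare it against the original double loop on a random lattice (a test sketch with my own function names; note that rolling by +1 or -1 along an axis gives the same total, since each bond is still counted exactly once):

```python
import numpy as np

def energy_loops(lat):
    """Reference double-loop energy (J = 1, periodic boundaries):
    negative indices wrap automatically, counting each bond once."""
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j] * lat[i][j - 1] + lat[i][j] * lat[i - 1][j]
    return -total

def energy_fast(lat):
    """Vectorised energy using roll and multiply, as in the report."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -int(np.sum(left + top))

rng = np.random.default_rng(1)
lat = rng.choice([-1, 1], size=(8, 8))
assert energy_loops(lat) == energy_fast(lat)
```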
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after switching to the roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running the ILfinalframe.py for 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point for excluding steps from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are above the Curie Temperature; spontaneous magnetisation will then not occur and the system will not converge to the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore only be determined from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result from T=3 has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have easily converged by this point for T=1, and the initial large drop in energy has also been overcome by this point for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by this point the energy and magnetisation have largely converged for T=1 and will not change much further; the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039; above, a cut-off of 50000 steps was chosen, as by this point the energy and magnetisation have largely converged, though not as completely as they would by 100000 steps. I chose this slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks did not become excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values from cycles beyond the pre-determined cut-off are included when the statistics() function determines the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if the cycle number is above the chosen cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
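The acceptance rule inside montecarlostep() can be sketched in isolation (a minimal illustration with an assumed helper name, in reduced units where kB = 1): moves that lower the energy are always kept, while uphill moves are only kept when the random number does not exceed the Boltzmann factor.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Illustrative sketch of the Metropolis acceptance rule above (assumed
# helper name; reduced units with kB = 1). Returns True if the flipped
# configuration is kept, False if the flip should be reverted.
def accept(deltaE, T, random_number):
    if deltaE > 0 and random_number > np.exp(-deltaE / T):
        return False  # uphill move rejected: revert the spin
    return True       # downhill move, or uphill move within the Boltzmann factor

print(accept(-2.0, 1.0, 0.99))  # downhill: always kept
print(accept(4.0, 1.0, 0.5))    # exp(-4) is about 0.018, so 0.5 exceeds it: rejected
```
&lt;br /&gt;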
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 11&#039;&#039; shows the result of the simulation with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 11 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt; and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
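This relation can be checked numerically on a hypothetical two-level system (illustrative code, not part of the lab scripts; reduced units with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;): the finite-difference derivative of the average energy with respect to T should match the variance formula.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Numerical check of C = Var[E] / (kB T^2) for a hypothetical two-level
# system with energies 0 and 1 (reduced units, kB = 1).
def moments(T):
    levels = np.array([0.0, 1.0])
    weights = np.exp(-levels / T)   # Boltzmann factors
    p = weights / weights.sum()     # normalised probabilities
    return np.sum(p * levels), np.sum(p * levels ** 2)

T = 1.5
E, E2 = moments(T)
C_var = (E2 - E ** 2) / T ** 2      # heat capacity from the variance formula
h = 1e-5                            # small step for the finite difference
C_deriv = (moments(T + h)[0] - moments(T - h)[0]) / (2 * h)
assert round(C_var - C_deriv, 6) == 0  # both routes agree
```
&lt;br /&gt;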
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the graph shifts towards lower temperatures as the size of the matrix used increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The Python code used to read and plot the C++ data is found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and does not capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only in a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller range of temperatures (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new fitted polynomial is a significantly better fit despite being only 3rd degree; it represents my data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
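The restricted fit can be sketched as follows (synthetic data and assumed variable names - the real script reads the saved datafiles instead): a low-degree polynomial is fitted only over a narrow temperature window around the peak, and the temperature of the fitted maximum is then read off.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Illustrative sketch of the restricted polynomial fit (synthetic C(T)
# data standing in for a real datafile; names are assumptions).
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.05)          # stand-in heat-capacity curve

Tmin, Tmax = 2.15, 2.55                       # fit only around the peak
sel = np.logical_and(T >= Tmin, Tmax >= T)    # boolean mask for the window

coeffs = np.polyfit(T[sel], C[sel], 3)        # low-degree fit to the peak
C_fit = np.polyval(coeffs, T[sel])
T_peak = T[sel][np.argmax(C_fit)]             # estimate of the C maximum
```
&lt;br /&gt;
Solving for the root of the fitted polynomial&#039;s derivative, or evaluating it on a finer temperature grid, would sharpen the estimate of the peak position.&lt;br /&gt;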
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt; used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the raw data, obtained by finding the temperature at which the Heat Capacity was a maximum for each lattice, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature for the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of 1/Lattice Size against Curie Temperature for that lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value for &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. This means that my result slightly over-estimates the Curie Temperature for the infinite lattice, and as a result for an infinite lattice the temperature at which spontaneous magnetisation stops would actually occur at a slightly lower temperature than my value suggests. However, the difference between my value and the literature value is only 0.008, which is incredibly small, and the degree of agreement between the two values is somewhat surprising; it means that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error in the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796430</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796430"/>
		<updated>2019-11-20T07:53:48Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than a...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent they are said to still interact according to the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtainedː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = 2S+1&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is the total magnetisation of the system.&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;\Omega = 2(3)+1 = 7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln7 = 1.95 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for a system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the products of its spin with its neighbours&#039; spins reverse and become negative, which increases the total energy of the system. Since 3 unique spin-spin interactions are reversed in sign, the total energy increases by &amp;lt;math&amp;gt;+3J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2997J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = 2(1000)+1=2001&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt;\Omega = 2(1000-1)+1=1999&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy is &amp;lt;math&amp;gt;\Delta S =  k_B ln(1999) -  k_B ln(2001)=  k_B ln(\frac{1999}{2001}) = -0.001 k_B&amp;lt;/math&amp;gt;, which is a very small decrease in entropy as the system starts to move away from its fully aligned state.&lt;br /&gt;
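This small entropy change is easy to confirm numerically (a quick illustrative check, in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Quick numerical check of the entropy change (in units of kB):
# delta_S / kB = ln(1999) - ln(2001) = ln(1999/2001)
delta_S = math.log(1999 / 2001)
print(round(delta_S, 3))  # -0.001
```
&lt;br /&gt;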
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently it is expected that the lattices will follow suit and have zero entropy at 0K. To have zero entropy, all spins must be paired such that the magnetisation &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt; - which is only possible if the lattice contains an even number of lattice sites (N = even). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, then multiplicity, &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and entropy, &amp;lt;math&amp;gt;S =k_B ln(\Omega) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21}\ s&amp;lt;/math&amp;gt; to evaluate the sum, which is far longer than the age of the universe, so this brute-force approach is not practical.&lt;br /&gt;
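&lt;br /&gt;
This estimate can be checked with a short calculation (the conversion to years is mine, added for scale):&lt;br /&gt;

```python
# Back-of-the-envelope check of the estimate above.
n_configs = 2 ** 100                      # configurations of 100 spins
rate = 1e9                                # configurations analysed per second
seconds = n_configs / rate
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.2e} s, roughly {years:.1e} years")
```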
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
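&lt;br /&gt;
The acceptance rule used in montecarlostep() can also be written as a self-contained function on a plain NumPy array. The sketch below is illustrative only (the name metropolis_step and the 4x4 test lattice are mine, not part of IsingLattice.py) and computes the energy change directly from the four neighbours instead of calling energy() twice:&lt;br /&gt;

```python
import numpy as np

def metropolis_step(lattice, T, rng):
    """Flip one random spin, keeping the flip with probability min(1, exp(-dE/T))."""
    n_rows, n_cols = lattice.shape
    i = rng.integers(n_rows)
    j = rng.integers(n_cols)
    # Energy change of flipping spin (i, j), with periodic boundaries and J = 1:
    neighbours = (lattice[(i - 1) % n_rows, j] + lattice[(i + 1) % n_rows, j]
                  + lattice[i, (j - 1) % n_cols] + lattice[i, (j + 1) % n_cols])
    dE = 2.0 * lattice[i, j] * neighbours
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        lattice[i, j] *= -1                # accept the flip
    return lattice

rng = np.random.default_rng(42)
lattice = np.ones((4, 4), dtype=int)       # fully aligned starting lattice
for _ in range(100):
    metropolis_step(lattice, T=1.0, rng=rng)
print(lattice)
```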
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is below the Curie Temperature &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, spontaneous magnetisation can occur: the system tends to its lowest-energy state, in which all of the spins are parallel. This is a characteristic property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum-energy state with all of the spins parallel to one another. As expected, spontaneous magnetisation occurs, indicating that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3\ s \pm 0.2\ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
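&lt;br /&gt;
The mean and its error can be obtained from the repeat timings. A minimal sketch (the three times below are hypothetical placeholders, not my measured values):&lt;br /&gt;

```python
import numpy as np

times = np.array([24.1, 24.3, 24.5])   # hypothetical repeat timings in seconds
mean = times.mean()
# Standard error of the mean, using the sample standard deviation (ddof=1):
sem = times.std(ddof=1) / np.sqrt(len(times))
print(f"{mean:.1f} s +/- {sem:.2f} s")
```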
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sums the per-site products of each spin with its left and top neighbours&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #second sum reduces any remaining axis and negates to give the total energy&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
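&lt;br /&gt;
A quick way to gain confidence in the vectorised version is to check it against the original double loop on a random lattice. A sketch (the function names here are mine):&lt;br /&gt;

```python
import numpy as np

def energy_loop(lat):
    """Double-loop energy with periodic boundaries (J = 1), as in Section 2."""
    total = 0
    n_rows, n_cols = lat.shape
    for i in range(n_rows):
        for j in range(n_cols):
            total += lat[i, j] * lat[i, j - 1]   # left neighbour (wraps at j = 0)
            total += lat[i, j] * lat[i - 1, j]   # neighbour above (wraps at i = 0)
    return -total

def energy_vec(lat):
    """Vectorised equivalent using np.roll and np.multiply."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
assert energy_loop(lat) == energy_vec(lat)
print(energy_loop(lat))
```

The all-aligned lattice is a useful extra check: a 4x4 lattice of +1 spins should give &amp;lt;math&amp;gt;E = -DNJ = -32&amp;lt;/math&amp;gt;.&lt;br /&gt;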
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster: using the NumPy roll, multiply and sum functions gives a new average time of &amp;lt;math&amp;gt;0.790\ s \pm 0.005\ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
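&lt;br /&gt;
For reference, the speed-up factor follows directly from the two mean times:&lt;br /&gt;

```python
slow, fast = 24.3, 0.790      # mean times (s) before and after vectorisation
speedup = slow / fast
print(f"speed-up of roughly {speedup:.0f}x")
```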
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; below shows the results from running ILfinalframe.py for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point at which the energy and magnetisation per spin become constant, which is around 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest-energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest-energy state more easily. Moving forwards, a suitable cut-off point will therefore only be determined from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
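&lt;br /&gt;
The effect of temperature on the Boltzmann factor can be made concrete with a small calculation: flipping one spin in a fully aligned 2D lattice costs &amp;lt;math&amp;gt;\Delta E = 8J&amp;lt;/math&amp;gt; (four neighbours), and the probability of accepting that unfavourable move grows quickly with temperature:&lt;br /&gt;

```python
import numpy as np

dE = 8.0   # cost of flipping one spin in a fully aligned 2D lattice (J = 1)
probs = {T: float(np.exp(-dE / T)) for T in (1.0, 2.0, 3.0, 5.0)}
for T, p in probs.items():
    print(f"T = {T}: exp(-dE/T) = {p:.4f}")
```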
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 7&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps: this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039; above, a suitable cut-off point is 1000 steps: by this point the energy and magnetisation have clearly converged for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation have largely converged and change little thereafter for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A cut-off of 50000 steps was chosen for the 32x32 matrix: by this point the energy and magnetisation have largely converged, although not quite as fully as they would by 100000 steps. I chose the slightly lower value to ensure that the run times of my Monte Carlo simulations in later tasks did not become excessive.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; np.exp(-deltaE/T): #Metropolis acceptance test&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only appends to E, E2, M and M2 if past the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. Figure 12 shows the result of the simulation, with error bars showing the standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
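&lt;br /&gt;
The savetxt() step mentioned in the task can be sketched as follows; the arrays here are dummy placeholders standing in for the real output of ILtemperaturerange.py:&lt;br /&gt;

```python
import numpy as np

# Dummy placeholder data; in practice these come from the temperature scan.
temps = np.arange(0.5, 5.0, 0.5)
energies = -2.0 * np.ones_like(temps)      # average energy per spin (dummy)
mags = np.ones_like(temps)                 # average magnetisation per spin (dummy)

data = np.column_stack((temps, energies, mags))
np.savetxt("8x8.dat", data)                # one row per temperature point
reloaded = np.loadtxt("8x8.dat")           # loadtxt is the reverse operation
print(reloaded.shape)
```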
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the chain ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
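&lt;br /&gt;
The identity derived above can be verified numerically for a simple two-level system with energies 0 and &amp;lt;math&amp;gt;\epsilon&amp;lt;/math&amp;gt; (taking &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;), by comparing a finite-difference derivative of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; with the fluctuation formula. A sketch:&lt;br /&gt;

```python
import numpy as np

def averages(T, eps=1.0):
    """Exact Boltzmann averages <E> and <E^2> for levels 0 and eps (k_B = 1)."""
    levels = np.array([0.0, eps])
    weights = np.exp(-levels / T)
    p = weights / weights.sum()
    return (p * levels).sum(), (p * levels ** 2).sum()

T, dT = 0.7, 1e-5
E_minus, _ = averages(T - dT)
E_plus, _ = averages(T + dT)
C_numeric = (E_plus - E_minus) / (2 * dT)     # C = d<E>/dT by central difference

mean_E, mean_E2 = averages(T)
C_fluct = (mean_E2 - mean_E ** 2) / T ** 2    # C = Var[E] / T^2
print(C_numeric, C_fluct)
```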
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend in the graphs above is that the peak of the heat capacity curve shifts towards lower temperatures as the matrix size increases.&lt;br /&gt;
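&lt;br /&gt;
The curves above follow directly from the fluctuation formula. As a sketch of the calculation (the arrays below are dummy stand-ins for the saved &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; data):&lt;br /&gt;

```python
import numpy as np

# Dummy stand-ins for <E> and <E^2> of a whole 8x8 lattice at three temperatures.
T = np.array([1.0, 2.0, 3.0])
E_avg = np.array([-127.8, -120.5, -80.2])
E2_avg = np.array([16340.0, 14600.0, 6600.0])

var_E = E2_avg - E_avg ** 2       # Var[E] = <E^2> - <E>^2
C = var_E / T ** 2                # heat capacity in units of k_B
C_per_spin = C / 64               # 64 spins in an 8x8 lattice
print(C_per_spin)
```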
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and does not capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller temperature range (T = 2.15-2.55) and with a much lower degree (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new fit is significantly better despite being only a 3rd-degree polynomial. It represents my data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
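&lt;br /&gt;
The restricted fit and the subsequent peak-finding can be sketched end-to-end on synthetic data (the Gaussian-plus-background curve below is a stand-in for the real C(T) data, not my measured values):&lt;br /&gt;

```python
import numpy as np

# Synthetic stand-in for C vs T, with a peak near T = 2.3.
T = np.linspace(0.5, 5.0, 200)
C = np.exp(-((T - 2.3) ** 2) / 0.05) + 0.05 * T

mask = (T >= 2.15) & (T <= 2.55)            # restrict the fit to the peak region
coeffs = np.polyfit(T[mask], C[mask], 3)    # low-degree fit, as in Figure 17

T_fine = np.linspace(2.15, 2.55, 1000)      # dense grid to locate the maximum
T_max = T_fine[np.argmax(np.polyval(coeffs, T_fine))]
print(f"estimated T_C = {T_max:.3f}")
```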
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
Figure 18 below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt; (the inverse lattice side length), used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of Curie Temperature against 1/Lattice Size for each lattice size.]]&lt;br /&gt;
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice, meaning that spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is remarkably small, and the level of agreement is somewhat surprising; it implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error from the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796429</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796429"/>
		<updated>2019-11-20T07:52:43Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in y...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction within the system is counted twice - hence the factor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; in the energy expression. This gives the following formulaː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = 2S+1&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is the total magnetisation of the system.&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;\Omega = 2(3)+1 = 7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln7 = 1.95 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
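These numbers can be checked quickly in Python (a minimal sketch using this report's definitions of multiplicity and entropy, working in units where &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;):

```python
import numpy as np

def multiplicity(total_spin):
    """Multiplicity Omega = 2S + 1, where S is the total magnetisation."""
    return 2 * total_spin + 1

def entropy(omega):
    """Entropy S = k_B ln(Omega), in units of k_B."""
    return np.log(omega)

omega = multiplicity(3)  # N = 3 parallel spins
S = entropy(omega)       # approximately 1.95 (in units of k_B)
```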
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing soʔ===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign and becomes negative, which increases the total energy of the system. Since 3 unique spin-spin interactions are reversed in sign, the total energy increases by &amp;lt;math&amp;gt;+3J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2997J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = 2(1000)+1=2001&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt;\Omega = 2(1000-1)+1=1999&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1999) -  k_B ln(2001)=  k_B ln(\frac{1999}{2001}) = -0.001 k_B&amp;lt;/math&amp;gt;, which is a very small decrease in entropy.&lt;br /&gt;
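The entropy change can be verified numerically (a small sketch using the same multiplicity formula, with &amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;):

```python
import numpy as np

omega_before = 2 * 1000 + 1        # multiplicity before the flip: 2001
omega_after = 2 * (1000 - 1) + 1   # multiplicity after the flip: 1999

# Delta S = k_B ln(omega_after) - k_B ln(omega_before) = k_B ln(1999/2001)
delta_S = np.log(omega_after / omega_before)  # approximately -0.001 (units of k_B)
```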
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. To have zero entropy, all spins must be paired such that the magnetisation &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt; - which is only possible if the lattice contains an even number of lattice sites (N = even). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(\Omega) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system, which is far longer than the age of the universe and therefore is not a practical approach.&lt;br /&gt;
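The arithmetic behind this estimate can be reproduced directly (a quick sketch; the age of the universe is taken as roughly 4.4 x 10^17 s):

```python
n_configs = 2 ** 100                 # two spin states per site, 100 sites
rate = 1e9                           # configurations analysed per second
seconds = n_configs / rate           # approximately 1.27e21 s

age_of_universe = 4.4e17             # seconds, roughly 13.8 billion years
ratio = seconds / age_of_universe    # thousands of universe lifetimes
```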
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
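The acceptance rule inside montecarlostep() can be isolated as a small helper (a sketch of the Metropolis criterion, assuming energies are in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt; so the Boltzmann factor is exp(-deltaE/T)):

```python
import numpy as np

rng = np.random.default_rng()

def accept_flip(deltaE, T):
    """Metropolis criterion: always accept a flip that lowers the energy,
    otherwise accept with probability exp(-deltaE / T)."""
    if deltaE <= 0:
        return True
    return rng.random() <= np.exp(-deltaE / T)
```

A flip is rejected (and the spin reverted) only when deltaE &amp;gt; 0 and the random number exceeds the Boltzmann factor, exactly as in the if-statement in the code above.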
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state, with all of the spins parallel to one another. This confirms, as I expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2 s&amp;lt;/math&amp;gt;&lt;br /&gt;
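The repeats and their error can be handled with a small helper (a sketch, assuming some callable wraps the 2000 Monte Carlo steps; ILtimetrial.py's own timing code may differ):

```python
import time
import numpy as np

def time_repeats(func, n_repeats=3):
    """Run func() n_repeats times; return the mean time and the
    standard error of the mean."""
    times = []
    for _ in range(n_repeats):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)
    times = np.array(times)
    return times.mean(), times.std(ddof=1) / np.sqrt(n_repeats)
```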
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
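One way to gain confidence in the vectorised version is to check it against the original double loop on a random lattice (a sketch; a roll shift of +1 on axis 0 picks out the same set of periodic neighbour pairs as the loop's top-neighbour term, so the total is identical):

```python
import numpy as np

def energy_loop(lattice):
    """Original double-loop energy with periodic boundaries."""
    rows, cols = lattice.shape
    total = 0
    for i in range(rows):
        for j in range(cols):
            total += lattice[i, j] * lattice[i, j - 1]  # left neighbour
            total += lattice[i, j] * lattice[i - 1, j]  # top neighbour
    return -total

def energy_roll(lattice):
    """Vectorised energy using np.roll and np.multiply."""
    left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
    top = np.multiply(np.roll(lattice, 1, axis=0), lattice)
    return -np.sum(left + top)

lattice = np.random.choice([-1, 1], size=(8, 8))
assert energy_loop(lattice) == energy_roll(lattice)
```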
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code, using the roll, multiply and sum functions, is much faster, with a new average time of &amp;lt;math&amp;gt;0.790 s \pm 0.005 s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, possibly because these temperatures are higher than the Curie Temperature, in which case spontaneous magnetisation will not occur and the system will not converge to the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. As a result, moving forwards, a suitable cut-off point will only be determined from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as this is after the energy and magnetisation have converged for T=1, and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have clearly converged by this point for T=1, and the initial large drop in energy has also been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps, as by then the energy and magnetisation have largely converged and will not change much for T=1, and likewise for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 11&#039;&#039; above, a cut-off of 50000 steps was chosen, as by this point the energy and magnetisation have largely converged, though not as fully as they would at 100000 steps. I chose a slightly lower value to ensure that the run times of my Monte Carlo simulations in future tasks were not excessively long.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the averages of the energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the ILtemperaturerange.py file was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, with the first 1000 steps at each temperature excluded when calculating the averages. Figure 12 shows the result of the simulation, with error bars showing the standard deviation.&lt;br /&gt;
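The saving step mentioned in the task can be sketched as follows (the array names here are hypothetical placeholders for the averages collected at each temperature; ILtemperaturerange.py's actual variable names may differ):

```python
import numpy as np

temps = np.arange(0.5, 5.0, 0.02)   # the temperature range simulated
avg_E = np.zeros_like(temps)        # placeholders: filled from statistics() in practice
avg_M = np.zeros_like(temps)

data = np.column_stack((temps, avg_E, avg_M))
np.savetxt("8x8.dat", data)         # name the file after the lattice size

T, E, M = np.loadtxt("8x8.dat").T   # loadtxt reverses savetxt for the later sections
```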
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
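The plotting script can be sketched along these lines (writing stand-in data files first so the example is self-contained; in practice the .dat files come from ILtemperaturerange.py, and the exact column layout may differ):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

sizes = [2, 4, 8, 16, 32]
T = np.arange(0.5, 5.0, 0.02)
for n in sizes:
    # stand-in total energies; the real files are produced by the simulations
    np.savetxt(f"{n}x{n}.dat", np.column_stack((T, -2 * n**2 * np.tanh(1 / T))))

fig, ax = plt.subplots()
for n in sizes:
    data = np.loadtxt(f"{n}x{n}.dat")  # loadtxt reads the files saved earlier
    ax.plot(data[:, 0], data[:, 1] / n**2, label=f"{n}x{n}")  # energy per spin
ax.set_xlabel("Temperature (J / k_B)")
ax.set_ylabel("Energy per spin")
ax.legend()
fig.savefig("energy_per_spin.png")
```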
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
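In reduced units (&amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;), this result translates directly into code (a minimal sketch; the energy samples would come from the E list accumulated in montecarlostep()):

```python
import numpy as np

def heat_capacity(E_samples, T):
    """C = Var[E] / (k_B T^2) with k_B = 1, where Var[E] = <E^2> - <E>^2."""
    E = np.asarray(E_samples, dtype=float)
    var_E = np.mean(E**2) - np.mean(E)**2
    return var_E / T**2
```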
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter notebook CG1417IsingModelGraphs.ipynb&lt;br /&gt;
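In outline, such a script looks like the sketch below. This is not the notebook verbatim: the file names (e.g. &lt;code&gt;8x8.dat&lt;/code&gt; from savetxt()) and the column order (T, ⟨E⟩, ⟨E²⟩ as totals in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;) are assumptions.&lt;br /&gt;

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

def heat_capacity_per_spin(T, E, E2, n_spins):
    """C/N = (<E^2> - <E>^2) / (N kB T^2); assumes E, E2 are lattice totals in units of kB."""
    return (np.asarray(E2) - np.asarray(E)**2) / (n_spins * np.asarray(T)**2)

fig, ax = plt.subplots()
for n in (2, 4, 8, 16, 32):
    try:
        data = np.loadtxt("%dx%d.dat" % (n, n))  # assumed savetxt() file name
    except OSError:
        continue  # skip sizes without a data file
    T, E, E2 = data[:, 0], data[:, 1], data[:, 2]  # assumed column order
    ax.plot(T, heat_capacity_per_spin(T, E, E2, n * n), label="%dx%d" % (n, n))
ax.set_xlabel("Temperature")
ax.set_ylabel("Heat capacity per spin")
fig.savefig("heatcap_all_sizes.png")
```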
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend across the graphs above is that the peak in the heat capacity shifts towards lower temperatures as the lattice size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
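A sketch of that comparison plot is below. The C++ column order (T, E, E², M, M², C, all per spin) comes from the task description, but the file names and the layout of my own saved data are assumptions, not the notebook verbatim.&lt;br /&gt;

```python
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

def plot_comparison(ax, cpp, mine):
    """Overlay C++ data (columns T, E, E2, M, M2, C) with my (T, C) data."""
    ax.plot(cpp[:, 0], cpp[:, 5], label="C++ data")
    ax.plot(mine[:, 0], mine[:, 1], label="My data")
    ax.set_xlabel("Temperature")
    ax.set_ylabel("Heat capacity per spin")
    ax.legend()  # labels which curve is which

# assumed file names; skipped if the files are not present
if os.path.exists("16x16.dat") and os.path.exists("my16x16.dat"):
    fig, ax = plt.subplots()
    plot_comparison(ax, np.loadtxt("16x16.dat"), np.loadtxt("my16x16.dat"))
    fig.savefig("cg141716x16Cpp_comparison.png")
```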
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 Matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
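In outline (not the script verbatim), the fit uses numpy&#039;s polyfit and polyval; the degree and the commented file name here are illustrative onlyː&lt;br /&gt;

```python
import numpy as np

def fit_heat_capacity(T, C, degree):
    """Fit a polynomial of the given degree to C(T) and return the fitted values."""
    coeffs = np.polyfit(T, C, degree)   # least-squares polynomial coefficients
    return np.polyval(coeffs, T)        # evaluate the fit at the same temperatures

# illustrative usage with the C++ data file (assumed name and column order):
# data = np.loadtxt("16x16.dat"); T, C = data[:, 0], data[:, 5]
# C_fit = fit_heat_capacity(T, C, degree=35)  # then plot T vs C and T vs C_fit
```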
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial is fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial fitted over a much smaller temperature range (T = 2.15-2.55) with a much lower degree (3).&lt;br /&gt;
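The restriction itself is just a boolean mask over the temperature array. The range 2.15-2.55 and degree 3 match the figure, but the function and variable names below are my own sketch, not the modified script verbatimː&lt;br /&gt;

```python
import numpy as np

def fit_peak(T, C, t_min=2.15, t_max=2.55, degree=3):
    """Fit a polynomial only to points with t_min <= T <= t_max.

    Returns the restricted temperatures and the fitted C values there,
    so the fit can be drawn on top of the full-range data.
    """
    T = np.asarray(T, dtype=float)
    C = np.asarray(C, dtype=float)
    peak = (T >= t_min) & (T <= t_max)         # select only the peak region
    coeffs = np.polyfit(T[peak], C[peak], degree)
    return T[peak], np.polyval(coeffs, T[peak])
```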
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree; it represents my data around the peak far more accurately and will make it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model Lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of Curie Temperature against 1/(Lattice Size) for each lattice size.]]&lt;br /&gt;
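Because the scaling relation &amp;lt;math&amp;gt;T_{C,L} = \frac{A}{L} + T_{C,\infty}&amp;lt;/math&amp;gt; is a straight line in &amp;lt;math&amp;gt;\frac{1}{L}&amp;lt;/math&amp;gt;, a degree-1 polyfit gives &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt; as the intercept. The sketch below illustrates this; the commented file name for the two-column data file described above is hypotheticalː&lt;br /&gt;

```python
import numpy as np

def estimate_tc_inf(side_lengths, tc_values):
    """Fit T_C,L = A * (1/L) + T_C,inf and return (A, T_C,inf)."""
    inv_L = 1.0 / np.asarray(side_lengths, dtype=float)
    A, tc_inf = np.polyfit(inv_L, tc_values, 1)  # slope, intercept
    return A, tc_inf

# illustrative usage, assuming a two-column file: side length, peak temperature
# L, tc = np.loadtxt("curie_temps.dat", unpack=True)
# A, tc_inf = estimate_tc_inf(L, tc)
```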
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with the literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice, so the temperature at which spontaneous magnetisation stops would actually occur slightly lower than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is very small; this level of agreement is somewhat surprising, and it implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error in the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796428</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796428"/>
		<updated>2019-11-20T07:52:28Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact because of the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, so every interaction in the system is counted twice - this is why the factor of &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; appears in the energy expression. The double sum therefore becomesː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
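This counting can be checked numerically for small chains. The sketch below (my own illustration, with &amp;lt;math&amp;gt;J = 1&amp;lt;/math&amp;gt;) uses numpy&#039;s roll to apply the periodic boundary conditionsː&lt;br /&gt;

```python
import numpy as np

def ising_energy_1d(spins, J=1.0):
    """E = -(1/2) J * sum_i sum_{j in nbrs(i)} s_i s_j for a periodic 1D chain."""
    spins = np.asarray(spins)
    right = spins * np.roll(spins, 1)    # product with one neighbour, per site
    left = spins * np.roll(spins, -1)    # product with the other neighbour
    return -0.5 * J * np.sum(right + left)  # the 1/2 corrects the double counting

print(ising_energy_1d([1, 1, 1]))  # prints -3.0, i.e. -DNJ for D=1, N=3
```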
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = 2S+1&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is the total magnetisation of the system.&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;\Omega = 2(3)+1 = 7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln7 = 1.95 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses and becomes negative, which increases the total energy of the system. In a 3D lattice the flipped spin takes part in &amp;lt;math&amp;gt;2D = 6&amp;lt;/math&amp;gt; spin-spin interactions, each of which changes the energy by &amp;lt;math&amp;gt;+2J&amp;lt;/math&amp;gt;, so the total energy increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = 2(1000)+1=2001&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt;\Omega = 2(1000-1)+1=1999&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy is &amp;lt;math&amp;gt;\Delta S =  k_B ln(1999) -  k_B ln(2001)=  k_B ln(\frac{1999}{2001}) = -0.001 k_B&amp;lt;/math&amp;gt;, which is a very small decrease in entropy.&lt;br /&gt;
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; as well.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is zero at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. To have zero entropy all spins must be paired such that the magnetisation &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt; - which is only possible if the lattice contains an even number of lattice sites (N = even). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(\Omega) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse every configuration - far longer than the age of the universe - so this approach is not practical.&lt;br /&gt;
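The arithmetic can be reproduced directly as a back-of-the-envelope sketchː&lt;br /&gt;

```python
configurations = 2**100                  # two states per spin, 100 spins
rate = 1e9                               # configurations analysed per second
seconds = configurations / rate
years = seconds / (60 * 60 * 24 * 365)   # rough conversion, 365-day year
print("%.2e s  =  %.2e years" % (seconds, years))  # ~1.27e21 s, ~4e13 years
```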
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another, confirming that, as I expected, spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 4 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 \pm 0.2 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
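The repeats can be summarised with a mean and a standard error. This is a generic sketch rather than ILtimetrial.py itself; the timed function in the comment is a placeholderː&lt;br /&gt;

```python
import time
import numpy as np

def time_repeats(func, n_repeats=3):
    """Run func() n_repeats times; return (mean, standard error) of the wall time."""
    times = []
    for _ in range(n_repeats):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)
    times = np.array(times)
    return times.mean(), times.std(ddof=1) / np.sqrt(n_repeats)

# e.g. mean_t, err_t = time_repeats(lambda: sum(range(10**6)))
```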
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
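A quick way to gain confidence in the vectorised version is to check it against the direct double loop on random lattices. This sketch is standalone (the functions take the lattice as an argument rather than being class methods, and the random lattice is my own test input)ː&lt;br /&gt;

```python
import numpy as np

def energy_loops(lat):
    """Double-loop energy: left and top neighbours, periodic via negative indexing."""
    total = 0
    rows, cols = lat.shape
    for i in range(rows):
        for j in range(cols):
            total += lat[i, j] * lat[i, j - 1]  # left neighbour (wraps at j=0)
            total += lat[i, j] * lat[i - 1, j]  # top neighbour (wraps at i=0)
    return -total

def energy_fast(lat):
    """Vectorised energy with roll and multiply; must match energy_loops."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
assert energy_loops(lat) == energy_fast(lat)  # both counting schemes agree
```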
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster using the roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 \pm 0.005 \ s&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1,2,3,5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, probably because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off point will therefore only be determined from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energies and magnetisations is 200 steps, as by this point the energy and magnetisation have converged for T=1 and the initial large drop in energy has passed for T=2, even though a few small fluctuations remain after 200 steps. The T=3 result is included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have comfortably converged by this point and the initial large drop in energy for T=2 has also passed.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 stepsː for T=1 the energy and magnetisation have converged and change little thereafter, and the same holds for the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosenː by this point the energy and magnetisation have largely converged, though not as fully as at 100000 steps. I chose the slightly lower value so that the run times of my Monte Carlo simulations in future tasks would not become excessive.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified with a condition so that only values recorded after the pre-determined cut-off are included when the statistics() function calculates the average energy, energy squared, magnetisation and magnetisation squared. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrixː&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the arrays of E, E2, M and M2 if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. Figure 12 shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
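As a sketch of the savetxt step described in the task above (the array names here are assumptions for illustration, not taken from ILtemperaturerange.py), the temperature, energy and magnetisation columns can be stacked and written to 8x8.dat like thisː&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Placeholder result arrays; in the real run these come from the simulation
temps = np.arange(0.5, 5.0, 0.25)
avg_E = -2.0 * np.ones_like(temps)   # stand-in average energies
avg_M = np.ones_like(temps)          # stand-in average magnetisations

# Stack the columns side by side and save, named after the lattice size
data = np.column_stack((temps, avg_E, avg_M))
np.savetxt("8x8.dat", data)

loaded = np.loadtxt("8x8.dat")       # loadtxt recovers the same array later
print(loaded.shape)                  # (18, 3)
```
In the real script the columns would be the averages returned by statistics() at each temperature; they are fabricated here purely to make the example runnable.&lt;br /&gt;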
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
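The loadtxt half of this workflow can be sketched as follows; the dummy files written at the top stand in for the real saved datafiles, and the plotting call is indicated only as a comment so the example stays self-containedː&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

sizes = [2, 4, 8, 16, 32]
temps = np.linspace(0.5, 5.0, 10)

# Write small placeholder datafiles standing in for the saved simulation data
for L in sizes:
    total_E = -2.0 * L * L * np.ones_like(temps)   # total energy, not per spin
    name = str(L) + "x" + str(L) + ".dat"
    np.savetxt(name, np.column_stack((temps, total_E)))

# Read each file back and normalise by the number of spins before plotting
for L in sizes:
    name = str(L) + "x" + str(L) + ".dat"
    T, E = np.loadtxt(name, unpack=True)
    E_per_spin = E / (L * L)
    # ax.plot(T, E_per_spin, label=name) would draw one curve per size here
print(E_per_spin[0])   # -2.0 for every size once normalised
```
Dividing by the number of spins is what makes curves from different lattice sizes directly comparable on one axis.&lt;br /&gt;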
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;and the probability, &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
According to the product ruleː &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain ruleː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
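A quick numerical sanity check of this result, using synthetic Gaussian energy samples of known variance and kB = 1 in the reduced units used throughoutː&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

T = 2.0
rng = np.random.default_rng(0)
# Synthetic energy samples with known variance 10**2 = 100
E_samples = rng.normal(loc=-500.0, scale=10.0, size=200000)

# Var[E] = mean(E**2) - mean(E)**2
var_E = np.mean(E_samples**2) - np.mean(E_samples)**2
C = var_E / T**2          # C = Var[E] / (kB * T**2) with kB = 1

print(round(C, 1))        # close to 100 / 4 = 25
```
The estimate converges on Var[E]/T&amp;sup2; = 25 as the number of samples grows, exactly as the derivation predicts.&lt;br /&gt;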
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend across the above graphs is that the heat capacity peak shifts towards lower temperatures as the matrix size increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script used to read the data and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree: it represents my data around the peak much more accurately, which makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
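A minimal sketch of this restricted fit, using synthetic peaked data in place of the real heat-capacity file (the window T = 2.15-2.55 and the degree 3 match the values quoted above; the peak position 2.3 is an assumption of the example)ː&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Synthetic peaked curve standing in for C(T); step of 0.02 as in the data
Ts = np.linspace(0.5, 5.0, 226)
C = np.exp(-((Ts - 2.3)**2) / 0.05)

# searchsorted finds the slice covering the fitting window around the peak
lo = np.searchsorted(Ts, 2.15)
hi = np.searchsorted(Ts, 2.55)

coeffs = np.polyfit(Ts[lo:hi], C[lo:hi], 3)   # fit only inside the window
fitted = np.polyval(coeffs, Ts[lo:hi])

T_peak = Ts[lo:hi][np.argmax(fitted)]         # temperature of fitted maximum
print(round(T_peak, 2))
```
Restricting the fit window is what lets a low-degree polynomial track the peak; the same low degree over the full range would fail as in Figure 16.&lt;br /&gt;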
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 18&#039;&#039; below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the temperatures at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data; its y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of Curie Temperature against 1/Lattice Size for each lattice size.]]&lt;br /&gt;
&lt;br /&gt;
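The extrapolation itself reduces to a degree-1 polyfit in 1/L; the sketch below uses fabricated T_C values that obey the scaling relation exactly, not my measured onesː&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

L = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
Tc = 2.269 + 1.0 / L          # synthetic data obeying Tc(L) = Tc(inf) + A/L

# Straight-line fit in 1/L; the intercept is the infinite-lattice estimate
slope, intercept = np.polyfit(1.0 / L, Tc, 1)
print(round(intercept, 3))    # recovers 2.269 for this exact data
```
With real, noisy peak temperatures the intercept carries the fit uncertainty, which is why the quality of each per-size peak estimate matters.&lt;br /&gt;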
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, against a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice: spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate suggests. However, the difference between the two values is only 0.008, which is remarkably small; this level of agreement is somewhat surprising, and implies that the error in my estimate of the Curie Temperature for each lattice size is relatively small. A potential source of error in the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796427</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796427"/>
		<updated>2019-11-20T07:51:46Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: If T , do you expect a spontaneous magnetisation (i.e. do you expect \left\langle M\right\rangle \neq 0)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the ou...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined asː &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spinsː&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact under the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt;, which means that every interaction in the system is counted twice (this is why the prefactor of one half is needed), and the sum reduces toː &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Thereforeː  &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;&lt;br /&gt;
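This counting can be checked numerically for a small 2D all-up lattice (D = 2, N = 25); the helper below is a sketch using the same roll/multiply idea introduced later in the experiment, so that each bond is counted exactly onceː&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def lattice_energy(lat):
    # Multiply each site by its left and upper neighbour (periodic wrap),
    # so every bond in the 2D lattice is counted exactly once
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, 1, axis=0), lat)
    return -np.sum(left + top)

lat = np.ones((5, 5))          # D = 2, N = 25, all spins +1, J = 1
print(lattice_energy(lat))     # -50.0, matching E = -DNJ = -2 * 25 * 1
```
Counting each bond once here replaces the count-twice-then-halve bookkeeping used in the text; both give the same total.&lt;br /&gt;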
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = 2S+1&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is the total magnetisation of the system.&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;\Omega = 2(3)+1 = 7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln7 = 1.95 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions with its neighbours to its left, top and front. In the lowest energy configuration, all spins are parallel and for a system the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt;, the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each neighbouring spin reverses sign and becomes negative, which increases the total energy of the system. Since 3 unique spin-spin interactions are reversed in sign, the total energy increases by &amp;lt;math&amp;gt;+3J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2997J&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = 2(1000)+1=2001&amp;lt;/math&amp;gt; , and after the flip, the multiplicity becomes &amp;lt;math&amp;gt;\Omega = 2(1000-1)+1=1999&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1999) -  k_B ln(2001)=  k_B ln(\frac{1999}{2001}) = -0.001 k_B&amp;lt;/math&amp;gt;, which is a very small decrease in entropy as the system starts&lt;br /&gt;
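The arithmetic of this entropy change, in units of kB, is a one-linerː&lt;br /&gt;
&lt;br /&gt;
```python
from math import log

kB = 1.0                       # work in units of kB, as in the text
dS = kB * (log(1999.0) - log(2001.0))
print(round(dS, 3))            # -0.001
```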
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt;, and for the 2D lattice with &amp;lt;math&amp;gt;N = 25&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. To have zero entropy, all spins must be paired such that the magnetisation &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, which is only possible if the lattice contains an even number of lattice sites (N even). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D =3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S =k_B ln(\Omega) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #sums spin products from left and top&lt;br /&gt;
		energy=-sum(int_en) #sums all spin products for each spin to give total &lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 2&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 2 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins per site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system - far longer than the age of the universe, so this brute-force approach is not practical.&lt;br /&gt;
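The arithmetic behind this estimate can be reproduced directlyː&lt;br /&gt;
&lt;br /&gt;
```python
configs = 2**100               # states of 100 two-state spins
rate = 1e9                     # configurations analysed per second (given)

seconds = configs / rate
years = seconds / (365.25 * 24 * 3600)

print("%.3e" % seconds)        # about 1.268e+21 seconds
print("%.1e" % years)          # about 4.0e+13 years
```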
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
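To illustrate the acceptance rule used in montecarlostep() above: an uphill move of energy cost dE survives the random-number test with probability exp(-dE/T), so uphill moves become rapidly less likely as T falls. A small table of values (dE = 4 is one possible single-flip cost, chosen here purely as an illustration)ː&lt;br /&gt;
&lt;br /&gt;
```python
from math import exp

dE = 4.0                        # illustrative uphill energy cost
for T in [0.5, 1.0, 2.0, 5.0]:
    p = exp(-dE / T)            # Metropolis acceptance probability
    print(T, round(p, 4))
```
Downhill moves (dE negative or zero) are always kept, which is what drives the system towards the ordered state at low T.&lt;br /&gt;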
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; then spontaneous magnetisation can occur and the system will tend to its lowest energy state where all of the spins are parallel - this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 3 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state with all of the spins parallel to one another. This confirms, as I expected, that spontaneous magnetisation occurs and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three timesː&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave me an average time of &amp;lt;math&amp;gt;24.3 s \pm 0.2 s&amp;lt;/math&amp;gt;.&lt;br /&gt;
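A hedged sketch of timing with repeats, as done here; the workload function below is a trivial stand-in, not the real 2000-step lattice runː&lt;br /&gt;
&lt;br /&gt;
```python
import timeit

def workload():
    # Stand-in for 2000 Monte Carlo steps on the real IsingLattice
    total = 0
    for i in range(2000):
        total = total + i * i
    return total

# Three repeats of the timed run, mirroring the three ILtimetrial.py runs
times = timeit.repeat(workload, number=100, repeat=3)
mean_t = sum(times) / len(times)
half_range = (max(times) - min(times)) / 2.0   # simple spread estimate
print(len(times))   # 3 timing samples to average and spread
```
Reporting the mean with a spread estimate over repeats is what absorbs the run-to-run variation caused by other processes on the machine.&lt;br /&gt;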
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(left + top) #numpy sum over the array of bond products gives the total energy&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
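As a quick check that the vectorised expression counts every bond exactly once, it can be compared against the naive double loop on a random lattice. This is a standalone sketch using plain functions rather than the IsingLattice class.

```python
import numpy as np

def energy_loop(lattice):
    """Naive double loop; negative indices give periodic boundaries for free."""
    total = 0
    rows, cols = lattice.shape
    for i in range(rows):
        for j in range(cols):
            total += lattice[i, j] * lattice[i, j - 1]  # bond to the left
            total += lattice[i, j] * lattice[i - 1, j]  # bond above
    return -total

def energy_vectorised(lattice):
    """Same sum expressed with np.roll/np.multiply (one bond pair per site)."""
    left = np.multiply(np.roll(lattice, 1, axis=1), lattice)
    top = np.multiply(np.roll(lattice, 1, axis=0), lattice)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lattice = rng.choice([-1, 1], size=(8, 8))
assert energy_loop(lattice) == energy_vectorised(lattice)
```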
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the results of running the ILtimetrial.py file on my new accelerated code three times.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster using the roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 \ s \pm 0.005 \ s&amp;lt;/math&amp;gt; - a roughly thirty-fold speed-up.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off point to exclude from the average energies and magnetisations is the point after which the energy and magnetisation per spin are constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, most likely because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At higher temperatures the thermal fluctuations are larger and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, the cut-off point for the larger matrices will therefore be determined from the T=1 and T=2 graphs only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off point for the energy and magnetisation is 200 steps, as this is after the point where the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The result for T=3 has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off point is 1000 steps, as the energy and magnetisation have comfortably converged by this point and the initial large drop in energy for T=2 has also been overcome.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off point is 15000 steps: by this point the energy and magnetisation have largely converged and change little for both the T=1 and T=2 frames.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; above shows the results of running the ILfinalframe.py file for a 32x32 matrix at T=1 and T=2. A cut-off of 50000 steps was chosen: by this point the energy and magnetisation have largely converged, though not as fully as at 100000 steps. I chose the slightly lower value to keep the run times of my Monte Carlo simulations in future tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was changed by adding a condition so that only values from cycles beyond the pre-determined cut-off are included when determining the average energy, energy squared, magnetisation and magnetisation squared in the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is from the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only appends E, E2, M and M2 values to the arrays if the cycle number is above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
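The effect of discarding the burn-in period can be illustrated on a synthetic energy trace; the decay constant, equilibrium value and noise level here are illustrative, not taken from my simulations.

```python
import numpy as np

# Synthetic energy trace: exponential decay towards -2.0 plus small noise,
# standing in for the equilibration seen in the ILfinalframe.py graphs
rng = np.random.default_rng(1)
steps = np.arange(5000)
energy = -2.0 + 1.5 * np.exp(-steps / 300) + rng.normal(0, 0.01, steps.size)

cutoff = 1500  # discard the burn-in period before averaging

biased = energy.mean()                 # includes the decay towards equilibrium
equilibrated = energy[cutoff:].mean()  # average over equilibrated samples only

print(f"all steps:    {biased:.3f}")
print(f"after cutoff: {equilibrated:.3f}")
```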
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02 for 10000 Monte Carlo steps, and the first 1000 steps at each temperature were excluded when calculating the averages. &#039;&#039;Figure 12&#039;&#039; shows the results of the simulation, with error bars showing the standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
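Saving and reloading the scan results is one call each way with savetxt/loadtxt. A minimal sketch with placeholder averages (the column layout is my own choice here):

```python
import numpy as np

# Hypothetical scan results for an 8x8 lattice (placeholder values,
# standing in for the averages returned by the simulation)
temps = np.arange(0.5, 5.0, 0.5)
avg_E = -2.0 * np.ones_like(temps)
avg_M = np.ones_like(temps)

# One row per temperature point; columns: T, <E>, <M>
data = np.column_stack((temps, avg_E, avg_M))
np.savetxt("8x8.dat", data)

# loadtxt is the inverse operation
reloaded = np.loadtxt("8x8.dat")
print(reloaded.shape)
```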
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
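A minimal sketch of such a plotting script. The curves below are schematic placeholders standing in for np.loadtxt calls on the saved datafiles (filenames like 2x2.dat are assumed); the real script would divide the loaded totals by the number of spins in the same way.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Lattice side lengths; the real script would call np.loadtxt(f"{L}x{L}.dat")
sizes = [2, 4, 8]

fig, (ax_E, ax_M) = plt.subplots(2, 1, sharex=True)

for L in sizes:
    n_spins = L * L
    # Placeholder total-lattice data standing in for the loaded datafile
    T = np.linspace(0.5, 5.0, 50)
    E_total = -2.0 * n_spins * np.tanh(2.0 / T)      # schematic energy curve
    M_total = n_spins * np.where(T < 2.27, 1.0, 0.0)  # schematic magnetisation
    ax_E.plot(T, E_total / n_spins, label=f"{L}x{L}")  # divide to get per spin
    ax_M.plot(T, M_total / n_spins, label=f"{L}x{L}")

ax_E.set_ylabel("Energy per spin")
ax_M.set_ylabel("Magnetisation per spin")
ax_M.set_xlabel("Temperature")
ax_E.legend()
fig.savefig("per_spin_vs_T.png")
```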
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically as: &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By definition: &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definitions of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; are written in terms of the partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Applying the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule: &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;-\frac{\partial \beta}{\partial T} = \frac{1}{k_B T^2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
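This fluctuation result can be checked numerically for a simple two-level system (my own toy example, working in reduced units with k_B = 1): the finite-difference derivative of &lt;E&gt; agrees with Var[E]/T^2.

```python
import numpy as np

def averages(levels, T):
    """<E> and <E^2> for discrete energy levels at temperature T (k_B = 1)."""
    boltz = np.exp(-np.asarray(levels) / T)
    p = boltz / boltz.sum()  # Boltzmann probabilities
    return np.dot(p, levels), np.dot(p, np.square(levels))

levels = [0.0, 1.0]  # a two-level system
T = 0.7

# Heat capacity from the fluctuation formula C = Var[E] / (k_B T^2)
E, E2 = averages(levels, T)
C_fluct = (E2 - E ** 2) / T ** 2

# Heat capacity from the definition C = d<E>/dT (central finite difference)
h = 1e-5
C_deriv = (averages(levels, T + h)[0] - averages(levels, T - h)[0]) / (2 * h)

assert abs(C_fluct - C_deriv) < 1e-6
```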
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The Python script for this section can be found in the Jupyter Notebook - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the graph shifts towards lower temperatures as the size of the matrix used increases.&lt;br /&gt;
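The heat capacity curves above come from applying the variance formula to the stored averages. A sketch with hypothetical totals for an 8x8 lattice (the numbers are placeholders, not my measured values):

```python
import numpy as np

def heat_capacity_per_spin(T, avg_E, avg_E2, n_spins):
    """C = Var[E] / (k_B T^2) per spin, in reduced units where k_B = 1."""
    var_E = avg_E2 - avg_E ** 2  # variance of the total energy
    return var_E / (n_spins * T ** 2)

# Placeholder total-lattice averages (would come from the saved .dat file)
T = np.array([1.0, 2.0, 3.0])
avg_E = np.array([-127.8, -110.5, -70.2])
avg_E2 = np.array([16400.0, 12400.0, 5200.0])

C = heat_capacity_per_spin(T, avg_E, avg_E2, n_spins=64)
print(C)
```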
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
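The comparison plot uses the label= keyword and legend() as described in the task. A self-contained sketch with schematic curves standing in for the two real datasets:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

T = np.linspace(0.5, 5.0, 100)
# Placeholder curves standing in for my 16x16 data and the C++ reference data
C_mine = np.exp(-((T - 2.3) ** 2) / 0.1)
C_cpp = np.exp(-((T - 2.3) ** 2) / 0.08)

fig, ax = plt.subplots()
ax.plot(T, C_mine, label="Python (my data)")  # label= feeds the legend
ax.plot(T, C_cpp, label="C++ (reference)")
ax.set_xlabel("T")
ax.set_ylabel("C per spin")
ax.legend()  # draws the legend from the label= keywords
fig.savefig("comparison_16x16.png")
```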
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The Python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb.&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and fails to capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
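The fitting itself is np.polyfit/np.polyval. A sketch on placeholder data with a sharp peak shows why a full-range fit struggles; the degree and the synthetic curve are illustrative, not my actual data.

```python
import numpy as np

# Placeholder heat-capacity data with a sharp peak near T = 2.3,
# standing in for one of the supplied datafiles
T = np.linspace(0.5, 5.0, 200)
C = 0.4 + 1.5 * np.exp(-((T - 2.3) ** 2) / 0.02)

degree = 15  # even a fairly high degree struggles with a sharp peak
coeffs = np.polyfit(T, C, degree)   # least-squares polynomial coefficients
C_fit = np.polyval(coeffs, T)       # evaluate the fit over the full range

print(f"max residual over the full range: {np.max(np.abs(C_fit - C)):.3f}")
```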
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only within a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a newly fitted polynomial over a much smaller range of temperatures (T = 2.15-2.55) with a much lower degree polynomial (3).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new polynomial is a significantly better fit despite being only 3rd degree. It represents my data around the peak much more accurately and makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
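Restricting the fit is just a boolean mask over the temperature array. A sketch on the same kind of placeholder peak, using the T = 2.15-2.55 window and degree 3 from the report:

```python
import numpy as np

# Placeholder heat-capacity data with a sharp peak near T = 2.3
T = np.linspace(0.5, 5.0, 200)
C = 0.4 + 1.5 * np.exp(-((T - 2.3) ** 2) / 0.02)

# Fit only inside a window around the peak, like T = 2.15-2.55 in the report
mask = (T >= 2.15) & (T <= 2.55)
coeffs = np.polyfit(T[mask], C[mask], 3)

# The peak temperature estimate comes from a dense sample of the fit
T_dense = np.linspace(2.15, 2.55, 1000)
C_dense = np.polyval(coeffs, T_dense)
T_peak = T_dense[np.argmax(C_dense)]
print(f"estimated peak temperature: {T_peak:.3f}")
```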
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
Figure 18 below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots represent the raw data, obtained by finding the temperature at which the Heat Capacity was a maximum for each lattice, and the red line is a linear fit plotted against the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of the Curie Temperature for each lattice size against 1/Lattice Size.]]&lt;br /&gt;
&lt;br /&gt;
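The extrapolation follows the scaling relation T_C(L) = A/L + T_C,inf, i.e. a straight-line fit of T_C against 1/L whose intercept is the infinite-lattice estimate. The peak temperatures below are illustrative values, not my actual fitted ones.

```python
import numpy as np

# Peak temperatures estimated for each lattice side length (illustrative values)
L = np.array([2, 4, 8, 16, 32])
T_C = np.array([2.50, 2.44, 2.32, 2.31, 2.29])

# Scaling relation: T_C(L) = A / L + T_C_inf, i.e. linear in 1/L
slope, intercept = np.polyfit(1.0 / L, T_C, 1)
print(f"T_C,inf estimate: {intercept:.3f}")  # intercept is the L -> infinity limit
```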
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, while the literature value for an infinite square 2D lattice is &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt;. My result therefore slightly over-estimates the Curie Temperature for the infinite lattice, meaning spontaneous magnetisation would actually cease at a slightly lower temperature than my estimate predicts. However, the difference between my value and the literature value is only 0.008, which is remarkably small, and this level of agreement is somewhat surprising; it implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error from the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796426</id>
		<title>Rep:Y3CMPCG1417</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Rep:Y3CMPCG1417&amp;diff=796426"/>
		<updated>2019-11-20T07:51:29Z</updated>

		<summary type="html">&lt;p&gt;Cg1417: /* TASK: Run the ILcheck.py script from the IPython Qt console using the command */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Section 1 - Introduction to the Ising Model==&lt;br /&gt;
&lt;br /&gt;
===TASK: Show that the lowest possible energy for the Ising model is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, where D is the number of dimensions and N is the total number of spins. What is the multiplicity of this state? Calculate its entropy.===&lt;br /&gt;
&lt;br /&gt;
Consider a 1D row of lattice sites of N=3 with spin configuration [+1][+1][+1].&lt;br /&gt;
&lt;br /&gt;
Mathematically the interaction energy is defined as: &lt;br /&gt;
&amp;lt;math&amp;gt; -\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} &amp;lt;/math&amp;gt; where J is a constant and &amp;lt;math&amp;gt;s_{i}s_{j}&amp;lt;/math&amp;gt; is the product between two spins in adjacent lattice sites.&lt;br /&gt;
&lt;br /&gt;
The sum of the interaction energies &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}&amp;lt;/math&amp;gt; can be considered as the sum of the individual interaction energies between spins:&lt;br /&gt;
&amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = \epsilon_{12} + \epsilon_{23} + \epsilon_{13} + \epsilon_{21} + \epsilon_{32} + \epsilon_{31} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Although lattice sites 1 and 3 are not adjacent, they still interact under the periodic boundary conditions applied.&lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;math&amp;gt;\epsilon_{12} = \epsilon_{21} &amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = \epsilon_{32}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{13} = \epsilon_{31}&amp;lt;/math&amp;gt; which means that all of the interactions within the system are counted twice, hence the total energy needs to be halved, resulting in the following formula being obtained: &amp;lt;math&amp;gt; \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j} = 2\epsilon_{12} + 2\epsilon_{13} + 2\epsilon_{23} &amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It can be determined that &amp;lt;math&amp;gt;\epsilon_{12} = (+1)(+1) = 1&amp;lt;/math&amp;gt; , &amp;lt;math&amp;gt;\epsilon_{13} = (+1)(+1) = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{23} = (+1)(+1) = 1&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore: &amp;lt;math&amp;gt;-\frac{1}{2}  \ J \ \sum^{N}_{i} \sum^{}_{j \ \epsilon \ neighbours (i)} s_{i} s_{j}  = -\frac{1}{2}  \ J \ (2 + 2 + 2) = -\frac{1}{2}  \ J \ 6 = - 3 J  =  -DNJ&amp;lt;/math&amp;gt; for a 1D lattice with &amp;lt;math&amp;gt;D=1&amp;lt;/math&amp;gt; and 3 lattice sites, &amp;lt;math&amp;gt;N=3&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The multiplicity of the system is &amp;lt;math&amp;gt;\Omega = 2S+1&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is the total magnetisation of the system.&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;math&amp;gt;\Omega = 2(3)+1 = 7&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Entropy, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;S = k_B ln(\Omega)&amp;lt;/math&amp;gt; and so in this case &amp;lt;math&amp;gt;S =  k_B ln7 = 1.95 k_B&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
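The result E = -DNJ generalises to any dimension, which can be verified numerically for small all-spins-up lattices. This is a standalone sketch: lattice_energy is my own helper, counting each bond exactly once per axis via np.roll.

```python
import numpy as np

def lattice_energy(lattice):
    """Interaction energy with periodic boundaries; J = 1, each bond counted once."""
    total = 0
    for axis in range(lattice.ndim):
        # rolling by one along each axis pairs every spin with one neighbour
        total += np.sum(lattice * np.roll(lattice, 1, axis=axis))
    return -total

# All-spins-up lattices: the minimum energy should be E = -D * N * J
for D, shape in [(1, (3,)), (2, (4, 4)), (3, (5, 5, 5))]:
    lattice = np.ones(shape, dtype=int)
    N = lattice.size
    assert lattice_energy(lattice) == -D * N
```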
&lt;br /&gt;
===TASK: Imagine that the system is in the lowest energy configuration. To move to a different state, one of the spins must spontaneously change direction (&amp;quot;flip&amp;quot;). What is the change in energy if this happens &amp;lt;math&amp;gt;(D=3, N=1000)&amp;lt;/math&amp;gt;? How much entropy does the system gain by doing so?===&lt;br /&gt;
&lt;br /&gt;
In a 3D lattice system, each lattice site has three unique interactions, with its neighbours to its left, top and front. In the lowest energy configuration all spins are parallel and the minimum energy is &amp;lt;math&amp;gt;E = -DNJ&amp;lt;/math&amp;gt;, so for the system with &amp;lt;math&amp;gt;N=1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D=3&amp;lt;/math&amp;gt; the minimum energy is &amp;lt;math&amp;gt;-3000J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If a single spin is flipped, the product of its spin with each of its neighbours&#039; spins reverses sign and becomes negative, which increases the total energy of the system. Although only three bonds per site are counted in the total energy sum, the flipped spin takes part in 6 bonds (left, right, top, bottom, front and back), each of which changes in energy from &amp;lt;math&amp;gt;-J&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;+J&amp;lt;/math&amp;gt;. The total energy therefore increases by &amp;lt;math&amp;gt;+12J&amp;lt;/math&amp;gt;, meaning the new total energy is &amp;lt;math&amp;gt;-2988J&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Initially the multiplicity of the system will be &amp;lt;math&amp;gt;\Omega = 2(1000)+1=2001&amp;lt;/math&amp;gt;, and after the flip the total magnetisation falls by 2 (from 1000 to 998), so the multiplicity becomes &amp;lt;math&amp;gt;\Omega = 2(998)+1=1997&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The associated change in entropy, &amp;lt;math&amp;gt;\Delta S =  k_B ln(1997) -  k_B ln(2001)=  k_B ln(\frac{1997}{2001}) = -0.002 k_B&amp;lt;/math&amp;gt;, which is a very small decrease in entropy as the system starts&lt;br /&gt;
&lt;br /&gt;
===TASK: Calculate the magnetisation of the 1D and 2D lattices in Figure 1. What magnetisation would you expect to observe for an Ising lattice with &amp;lt;math&amp;gt;D = 3,\ N=1000&amp;lt;/math&amp;gt; at absolute zero?===&lt;br /&gt;
&lt;br /&gt;
[[File:ThirdYearCMPExpt-IsingSketch.png|thumb|left|Figure 1 - Shows 1D (N = 5), 2D (N = 5x5) and 3D (N = 5x5x5) lattices.]]&lt;br /&gt;
&lt;br /&gt;
Magnetisation is defined as &amp;lt;math&amp;gt;M=\sum_{i} s_i&amp;lt;/math&amp;gt;. So for the 1D lattice with &amp;lt;math&amp;gt;N = 5&amp;lt;/math&amp;gt; in &#039;&#039;Figure 1&#039;&#039;, &amp;lt;math&amp;gt;M = +1&amp;lt;/math&amp;gt; and for the 2D lattice with &amp;lt;math&amp;gt;N = 25 , M = +1&amp;lt;/math&amp;gt; too.&lt;br /&gt;
&lt;br /&gt;
According to the 3rd Law of Thermodynamics, the entropy of a perfect crystalline solid is 0 at absolute zero, and consequently the lattices are expected to follow suit and have zero entropy at 0 K. To have zero entropy, all spins must be paired such that the magnetisation &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt;, which is only possible if the lattice contains an even number of lattice sites (N even). So, for a lattice with &amp;lt;math&amp;gt;N = 1000&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;D = 3&amp;lt;/math&amp;gt;, if &amp;lt;math&amp;gt;M = 0&amp;lt;/math&amp;gt; then the multiplicity &amp;lt;math&amp;gt;\Omega = 1&amp;lt;/math&amp;gt; and the entropy &amp;lt;math&amp;gt;S = k_B ln(\Omega) = 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 2 - Calculating the Energy and Magnetisation==&lt;br /&gt;
&lt;br /&gt;
===TASK: complete the functions energy() and magnetisation(), which should return the energy of the lattice and the total magnetisation, respectively. In the energy() function you may assume that &amp;lt;math&amp;gt;J=1.0&amp;lt;/math&amp;gt; at all times (in fact, we are working in reduced units in which &amp;lt;math&amp;gt;J=k_B&amp;lt;/math&amp;gt;, but there will be more information about this in later sections). Do not worry about the efficiency of the code at the moment — we will address the speed in a later part of the experiment.===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		mag=[]&lt;br /&gt;
		for i in range(0,len(lat)): #loops through all rows of lattice&lt;br /&gt;
			for j in range(0,len(lat[i])): #loops through elements of each row&lt;br /&gt;
				mag+=[lat[i][j]] #adds spin value to mag array&lt;br /&gt;
		return sum(mag)	#sums all spins from mag array&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		lat=self.lattice #creates lattice and stores it&lt;br /&gt;
		left=[]&lt;br /&gt;
		top=[]&lt;br /&gt;
&lt;br /&gt;
		for i in range(0,len(lat)):&lt;br /&gt;
			for j in range(0,len(lat[i])):&lt;br /&gt;
				left+=[lat[i][j]*lat[i][j-1]] #multiplies spin by spin to left&lt;br /&gt;
				top+=[lat[i][j]*lat[i-1][j]] #multiplies spin by spin above it&lt;br /&gt;
		int_en=left+top #concatenates the lists of left and top spin products&lt;br /&gt;
		energy=-sum(int_en) #negative sum of all spin products gives the total energy&lt;br /&gt;
&lt;br /&gt;
		return energy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
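&lt;br /&gt;
As a quick check of the double-loop logic above, the same calculation can be written as a standalone function (a sketch, not the class method itself) and applied to a fully aligned 3x3 lattice, where the minimum energy should be E = -DNJ = -(2)(9)(1) = -18:&lt;br /&gt;

```python
def loop_energy(lat):
    """Sum -s_i*s_j over left and top neighbours.

    Negative Python indices give the periodic boundary for free:
    lat[i][-1] is the last element of row i.
    """
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j] * lat[i][j - 1]   # left neighbour (periodic)
            total += lat[i][j] * lat[i - 1][j]   # neighbour above (periodic)
    return -total

aligned = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(loop_energy(aligned))  # -18 for D=2, N=9, J=1
```

Each of the 9 sites contributes two +1 spin products (left and top), so the sum is 18 and the energy is -18, as expected for the ground state.&lt;br /&gt;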
&lt;br /&gt;
===TASK: Run the ILcheck.py script from the IPython Qt console using the command===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 3&#039;&#039; shows the results when ILcheck.py was run on my IsingLattice.py file. The ILcheck.py file was run several times to ensure the code worked for various random lattices.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILcheck run.png|thumb|left|500px| Figure 3 - Result from running the ILcheck.py file]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 3 - Introduction to Monte Carlo Simulation==&lt;br /&gt;
&lt;br /&gt;
===TASK: How many configurations are available to a system with 100 spins? To evaluate these expressions, we have to calculate the energy and magnetisation for each of these configurations, then perform the sum. Let&#039;s be very, very, generous, and say that we can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second with our computer. How long will it take to evaluate a single value of &amp;lt;math&amp;gt;\left\langle M\right\rangle_T&amp;lt;/math&amp;gt;?===&lt;br /&gt;
&lt;br /&gt;
For a system with 100 lattice sites and two possible spins for each site, there are &amp;lt;math&amp;gt;2^{100}&amp;lt;/math&amp;gt; possible configurations for the system. &amp;lt;math&amp;gt;2^{100}= 1.27\times 10^{30} &amp;lt;/math&amp;gt;, so if the computer can analyse &amp;lt;math&amp;gt;1\times 10^9&amp;lt;/math&amp;gt; configurations per second, then it will take &amp;lt;math&amp;gt;\frac{1.27\times 10^{30}}{10^9} = 1.27\times 10^{21} s&amp;lt;/math&amp;gt; to analyse the whole system, which is longer than the age of the universe and therefore is not a practical approach.&lt;br /&gt;
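&lt;br /&gt;
This estimate is quick to reproduce; the sketch below assumes roughly 3.154x10^7 seconds per year and an age of the universe of about 1.38x10^10 years:&lt;br /&gt;

```python
configs = 2 ** 100          # configurations of 100 two-state spins
rate = 1e9                  # generous: 10^9 configurations per second
seconds = configs / rate    # ~1.27e21 s of brute-force enumeration

years = seconds / 3.154e7   # seconds per year (approximate)
age_of_universe = 1.38e10   # years (approximate)
print(f"{seconds:.3g} s = {years:.3g} years = {years / age_of_universe:.3g} universe ages")
```

The enumeration would take on the order of 10^13 years, thousands of universe ages, which is why a Monte Carlo approach is needed instead.&lt;br /&gt;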
&lt;br /&gt;
===TASK: Implement a single cycle of the above algorithm in the montecarlocycle(T) function. This function should return the energy of your lattice and the magnetisation at the end of the cycle. You may assume that the energy returned by your energy() function is in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt;! Complete the statistics() function. This should return the following quantities whenever it is called: &amp;lt;math&amp;gt;&amp;lt;E&amp;gt;, &amp;lt;E^2&amp;gt;, &amp;lt;M&amp;gt;, &amp;lt;M^2&amp;gt;&amp;lt;/math&amp;gt;, and the number of Monte Carlo steps that have elapsed.===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
E = []&lt;br /&gt;
E2 = []&lt;br /&gt;
M = []&lt;br /&gt;
M2 = []&lt;br /&gt;
n_cycles = 0&lt;br /&gt;
&lt;br /&gt;
def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back if rejected else not changed&lt;br /&gt;
		&lt;br /&gt;
		self.E+=[self.energy()] #records energy&lt;br /&gt;
		self.E2+=[self.energy()**2] #records energy squared&lt;br /&gt;
		self.M+=[self.magnetisation()] #records magnetisation&lt;br /&gt;
		self.M2+=[self.magnetisation()**2] #records magnetisation squared		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1 #adds 1 to run total&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
def statistics(self):&lt;br /&gt;
		# complete this function so that it calculates the correct values for the averages of E, E*E (E2), M, M*M (M2), and returns them&lt;br /&gt;
&lt;br /&gt;
		e=np.mean(self.E)&lt;br /&gt;
		e2=np.mean(self.E2)&lt;br /&gt;
		m=np.mean(self.M)&lt;br /&gt;
		m2=np.mean(self.M2)&lt;br /&gt;
&lt;br /&gt;
		return e,e2,m,m2,self.n_cycles&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
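&lt;br /&gt;
The acceptance rule used in montecarlostep() can be isolated into a small helper function (a standalone sketch, using the same reduced units in which energies are in units of &amp;lt;math&amp;gt;k_B&amp;lt;/math&amp;gt; so the Boltzmann factor is exp(-deltaE/T)):&lt;br /&gt;

```python
import math

def accept_flip(delta_e, temperature, random_number):
    """Metropolis criterion: always accept downhill moves; accept uphill
    moves only with probability exp(-delta_e / temperature)."""
    if delta_e <= 0:
        return True
    return random_number <= math.exp(-delta_e / temperature)

print(accept_flip(-4.0, 1.0, 0.99))  # True: energy decreases, always accepted
print(accept_flip(8.0, 1.0, 0.5))    # False: exp(-8) is far smaller than 0.5
```

Note that the code above keeps the flip and only reverts it on rejection, which is the same logic as the in-place flip-then-revert in montecarlostep().&lt;br /&gt;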
&lt;br /&gt;
&lt;br /&gt;
===TASK: If &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;, do you expect a spontaneous magnetisation (i.e. do you expect &amp;lt;math&amp;gt;\left\langle M\right\rangle \neq 0&amp;lt;/math&amp;gt;)? When the state of the simulation appears to stop changing (when you have reached an equilibrium state), use the controls to export the output to PNG and attach this to your report. You should also include the output from your statistics() function.===&lt;br /&gt;
&lt;br /&gt;
If the temperature of the system is less than the Curie Temperature, &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt;, then spontaneous magnetisation can occur and the system will tend to its lowest energy state, where all of the spins are parallel; this is a property of ferromagnetic materials.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg1417ILanim_run.png|400px|thumb|left|Figure 4 - Results from running the ILanim.py file - shows the energy and magnetisation converging over time]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 4&#039;&#039; shows that over time the system spontaneously converges to the minimum energy state, with all of the spins parallel to one another. This confirms that spontaneous magnetisation occurs, as I expected, and that the temperature of this simulation is below the Curie Temperature, &amp;lt;math&amp;gt;T &amp;lt; T_C&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 4 - Accelerating the Code==&lt;br /&gt;
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your current version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 5&#039;&#039; shows the results of running the ILtimetrial.py file on my code three times:&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=cg1417ILtimetrial_run2.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=cg1417ILtimetrial_run3.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 5 - Results of running the ILtimetrial.py file on my code three separate times&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gave an average time of &amp;lt;math&amp;gt;24.3 \ s \pm 0.2 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
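&lt;br /&gt;
The average and its error can be computed from repeated timings with a few lines of NumPy (the timings below are placeholders, not my actual run times):&lt;br /&gt;

```python
import numpy as np

# hypothetical repeat timings in seconds (replace with your own measurements)
times = np.array([24.1, 24.3, 24.5])

mean = times.mean()
std_err = times.std(ddof=1) / np.sqrt(len(times))  # standard error of the mean
print(f"{mean:.1f} s +/- {std_err:.1f} s")
```

Using ddof=1 gives the sample standard deviation, which is appropriate for a small number of repeats.&lt;br /&gt;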
&lt;br /&gt;
===TASK: Look at the documentation for the NumPy sum function. You should be able to modify your magnetisation() function so that it uses this to evaluate M. The energy is a little trickier. Familiarise yourself with the NumPy roll and multiply functions, and use these to replace your energy double loop (you will need to call roll and multiply twice!).===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; def energy(self):&lt;br /&gt;
		&amp;quot;Return the total energy of the current lattice configuration.&amp;quot;&lt;br /&gt;
		&lt;br /&gt;
		left=multiply(roll(self.lattice,1,axis=1),self.lattice) #product of spin with spin left of it&lt;br /&gt;
		top=multiply(roll(self.lattice,-1,axis=0),self.lattice) #product of spin with spin above it&lt;br /&gt;
&lt;br /&gt;
		int_en=sum(left+top) #sum of array containing sum of left and top spin products for each spin&lt;br /&gt;
&lt;br /&gt;
		energy = -sum(int_en) #calculates the total energy of system&lt;br /&gt;
		return energy&lt;br /&gt;
&lt;br /&gt;
def magnetisation(self):&lt;br /&gt;
		&amp;quot;Return the total magnetisation of the current lattice configuration.&amp;quot;&lt;br /&gt;
		return sum(sum(self.lattice)) #adds up all spins in lattice&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
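&lt;br /&gt;
A useful check is that the vectorised version agrees with the original double loop on a random lattice (a standalone sketch; np.roll wraps around, so the periodic boundary is handled automatically):&lt;br /&gt;

```python
import numpy as np

def loop_energy(lat):
    """Original double-loop energy: left and top neighbours, periodic."""
    total = 0
    for i in range(len(lat)):
        for j in range(len(lat[i])):
            total += lat[i][j] * lat[i][j - 1]  # left neighbour (periodic)
            total += lat[i][j] * lat[i - 1][j]  # neighbour above (periodic)
    return -total

def fast_energy(lat):
    """Vectorised energy via roll and multiply, as in the new code."""
    left = np.multiply(np.roll(lat, 1, axis=1), lat)
    top = np.multiply(np.roll(lat, -1, axis=0), lat)
    return -np.sum(left + top)

rng = np.random.default_rng(0)
lat = rng.choice([-1, 1], size=(8, 8))
print(loop_energy(lat), fast_energy(lat))  # the two values agree
```

Both versions count each nearest-neighbour bond exactly once, so they return identical totals; the vectorised form simply pushes the loops into NumPy.&lt;br /&gt;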
&lt;br /&gt;
===TASK: Use the script ILtimetrial.py to record how long your new version of IsingLattice.py takes to perform 2000 Monte Carlo steps. This will vary, depending on what else the computer happens to be doing, so perform repeats and report the error in your average!===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 6&#039;&#039; shows the result of running the ILtimetrial.py on my new accelerated code.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | align = left&lt;br /&gt;
&lt;br /&gt;
 | image1=cg1417ILtimetrial_run1fast.png&lt;br /&gt;
 | width1=500&lt;br /&gt;
 | image2=ILtimetrial_run2fast.png&lt;br /&gt;
 | width2=500&lt;br /&gt;
 | image3=ILtimetrial_run3fast.png&lt;br /&gt;
 | width3=500&lt;br /&gt;
 | footer = Figure 6 - Results of running the ILtimetrial.py file on my new updated and accelerated code.&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The accelerated code is much faster after switching to the NumPy roll, multiply and sum functions, with a new average time of &amp;lt;math&amp;gt;0.790 \ s \pm 0.005 \ s&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Section 5 - The effect of temperature==&lt;br /&gt;
&lt;br /&gt;
===TASK: The script ILfinalframe.py runs for a given number of cycles at a given temperature, then plots a depiction of the final lattice state as well as graphs of the energy and magnetisation as a function of cycle number. This is much quicker than animating every frame! Experiment with different temperature and lattice sizes. How many cycles are typically needed for the system to go from its random starting position to the equilibrium state? Modify your statistics() and montecarlostep() functions so that the first N cycles of the simulation are ignored when calculating the averages. You should state in your report what period you chose to ignore, and include graphs from ILfinalframe.py to illustrate your motivation in choosing this figure.===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 7&#039;&#039; below shows the results from running the ILfinalframe.py file for a 2x2 lattice at T=1, 2, 3 and 5.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2T1.png&lt;br /&gt;
 | image2 = cg14172x2T2.png&lt;br /&gt;
 | image3 =cg14172x2T3.png&lt;br /&gt;
 | image4 =cg14172x2T5.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 7 - Results of running the ILfinalframe.py file at T=1,2,3,5 for a 2x2 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a 2x2 matrix, a suitable cut-off to exclude from the average energies and magnetisations is the point where the energy and magnetisation per spin become constant, which is 30 steps. For T=3 and T=5 the graphs do not converge, probably because these temperatures are above the Curie Temperature, so spontaneous magnetisation does not occur and the system does not settle into the lowest energy state. At the higher temperatures there are larger thermal fluctuations and the Boltzmann factor is more significant, allowing the system to move away from the lowest energy state more easily. Moving forwards, a suitable cut-off will therefore be determined only from the T=1 and T=2 graphs for the larger matrices.&lt;br /&gt;
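&lt;br /&gt;
The temperature dependence of the Boltzmann factor is easy to quantify. Taking deltaE = 8 (the largest single-flip energy change on a 2D square lattice in these reduced units) as an illustration:&lt;br /&gt;

```python
import math

delta_e = 8.0  # largest single-flip energy change on a 2D square lattice (reduced units)
for T in (1.0, 2.0, 3.0, 5.0):
    # probability of accepting this worst-case uphill move at temperature T
    print(T, math.exp(-delta_e / T))
```

Such an uphill move is about 600 times more likely to be accepted at T=5 than at T=1, which is why the high-temperature runs fluctuate so strongly instead of converging.&lt;br /&gt;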
&lt;br /&gt;
&#039;&#039;Figure 8&#039;&#039; shows the results from running a 4x4 lattice at T=1,2 and 3.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14174x4T1.png&lt;br /&gt;
 | image2 = cg14174x4T2.png&lt;br /&gt;
 | image3 =cg14174x4T3.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 8 - Results of running the ILfinalframe.py file at T=1,2,3 for a 4x4 matrix.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 8&#039;&#039;, a suitable cut-off for the energies and magnetisations is 200 steps, as this is after the energy and magnetisation have converged for T=1 and after the initial large drop in energy for T=2, even though a few small fluctuations remain beyond 200 steps. The T=3 result has been included to show the large fluctuations at higher temperatures, supporting my choice to determine the cut-off from T=1 and T=2 only.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 9&#039;&#039; shows the results for an 8x8 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14178x8T1.png&lt;br /&gt;
 | image2 = cg14178x8T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 9 - Results of running the ILfinalframe.py file at T=1,2 for an 8x8 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 9&#039;&#039; above, a suitable cut-off is 1000 steps, by which point the energy and magnetisation have clearly converged for T=1 and the initial large drop in energy has been overcome for T=2.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 10&#039;&#039; shows the result of running the ILfinalframe.py for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141716x16T1.png&lt;br /&gt;
 | image2 = cg141716x16T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 10 - Results of running the ILfinalframe.py file at T=1,2 for a 16x16 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 10&#039;&#039;, a suitable cut-off is 15000 steps: for T=1 the energy and magnetisation have converged and change little beyond this point, and the same is true of the T=2 frame.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 11&#039;&#039; below shows the results from a 32x32 matrix at T=1 and T=2.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg141732x32T1.png&lt;br /&gt;
 | image2 = cg141732x32T2.png&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 11 - Results of running the ILfinalframe.py file at T=1,2 for a 32x32 matrix&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
From &#039;&#039;Figure 11&#039;&#039;, a cut-off of 50000 steps was chosen, as the energy and magnetisation have largely converged by this point, although not quite as fully as at 100000 steps. I chose the slightly lower value to keep the run times of my Monte Carlo simulations in later tasks manageable.&lt;br /&gt;
&lt;br /&gt;
The montecarlostep() function was modified by adding a condition so that only values recorded after the pre-determined cut-off contribute to the averages of energy, energy squared, magnetisation and magnetisation squared returned by the statistics() function. The statistics() function itself did not need to be modified.&lt;br /&gt;
&lt;br /&gt;
The following code is for the 32x32 matrix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;def montecarlostep(self, T):&lt;br /&gt;
		# complete this function so that it performs a single Monte Carlo step&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		energy = self.energy() #defines initial energy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		#the following two lines will select the coordinates of the random spin for you&lt;br /&gt;
		random_i = np.random.choice(range(0, self.n_rows))&lt;br /&gt;
		random_j = np.random.choice(range(0, self.n_cols))&lt;br /&gt;
		#the following line will choose a random number in the range[0,1) for you&lt;br /&gt;
		random_number = np.random.random()&lt;br /&gt;
&lt;br /&gt;
		self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #flips spin and changes lattice&lt;br /&gt;
		energy2=self.energy() #energy of new flipped lattice&lt;br /&gt;
		deltaE=energy2-energy #calculates change in energy&lt;br /&gt;
&lt;br /&gt;
		#at this point the system has the new spin config and new energy&lt;br /&gt;
		&lt;br /&gt;
		if deltaE &amp;gt; 0 and random_number &amp;gt; e**(-deltaE/T):&lt;br /&gt;
			self.lattice[random_i][random_j]=(self.lattice[random_i][random_j])*(-1) #reverts spin back&lt;br /&gt;
		&lt;br /&gt;
&lt;br /&gt;
		if self.n_cycles &amp;gt; 50000: #only adds values to the E, E2, M and M2 arrays if above the cut-off&lt;br /&gt;
			self.E+=[self.energy()]&lt;br /&gt;
			self.E2+=[self.energy()**2]&lt;br /&gt;
			self.M+=[self.magnetisation()]&lt;br /&gt;
			self.M2+=[self.magnetisation()**2]		&lt;br /&gt;
		self.n_cycles=self.n_cycles+1&lt;br /&gt;
&lt;br /&gt;
		return (self.energy(),self.magnetisation())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
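&lt;br /&gt;
An equivalent way to apply the cut-off is to record every sample and slice off the first N only when averaging; a standalone sketch (not my actual class code) with a toy energy history:&lt;br /&gt;

```python
import numpy as np

def averages_after_cutoff(samples, cutoff):
    """Average a recorded time series, ignoring the first `cutoff` samples
    so the un-equilibrated start does not bias the result."""
    equilibrated = np.asarray(samples[cutoff:], dtype=float)
    return float(equilibrated.mean()), float((equilibrated ** 2).mean())

# toy series: drifts for 5 steps, then sits at -2.0
history = [0.0, -0.5, -1.0, -1.5, -1.8, -2.0, -2.0, -2.0, -2.0, -2.0]
print(averages_after_cutoff(history, 5))  # (-2.0, 4.0)
```

Without the cut-off, the drifting start would drag the average energy well above its equilibrium value, which is exactly the bias the modification above removes.&lt;br /&gt;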
&lt;br /&gt;
===TASK: Use ILtemperaturerange.py to plot the average energy and magnetisation for each temperature, with error bars, for an &amp;lt;math&amp;gt;8\times 8&amp;lt;/math&amp;gt; lattice. Use your intuition and results from the script ILfinalframe.py to estimate how many cycles each simulation should be. The temperature range 0.25 to 5.0 is sufficient. Use as many temperature points as you feel necessary to illustrate the trend, but do not use a temperature spacing larger than 0.5. The NumPy function savetxt() stores your array of output data on disk — you will need it later. Save the file as 8x8.dat so that you know which lattice size it came from.===&lt;br /&gt;
&lt;br /&gt;
Using the modified code, the file ILtemperaturerange.py was run on an 8x8 matrix between T=0.5 and T=5 with a step of T=0.02, for 10000 Monte Carlo steps at each temperature; the first 1000 steps at each temperature were excluded when calculating the averages. Figure 12 shows the result of the simulation, with error bars of one standard deviation.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg14178x8error.png|1000px|thumb|left|Figure 12 - Graph showing average energy and average magnetisation for an 8x8 lattice with error bars between T=0.5 and T=5]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 6 - The effect of system size==&lt;br /&gt;
&lt;br /&gt;
===TASK: Repeat the final task of the previous section for the following lattice sizes: 2x2, 4x4, 8x8, 16x16, 32x32. Make sure that you name each datafile that your produce after the corresponding lattice size! Write a Python script to make a plot showing the energy per spin versus temperature for each of your lattice sizes. Hint: the NumPy loadtxt function is the reverse of the savetxt function, and can be used to read your previously saved files into the script. Repeat this for the magnetisation. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. How big a lattice do you think is big enough to capture the long range fluctuations?===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
Each matrix was simulated using the ILtemperaturerange.py file between T=0.5 and T=5 with a step of T=0.02.&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =350&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2error.png&lt;br /&gt;
 | caption1 = 2x2 matrix - 5000 steps, cut-off = 30 steps&lt;br /&gt;
 | image2 = 4x4error.png&lt;br /&gt;
 | caption2 = 4x4 matrix - 1000 steps, cut-off = 200 steps&lt;br /&gt;
 | image3 =cg141716x16error.png&lt;br /&gt;
 | caption3 = 16x16 matrix - 50000 steps, cut-off = 15000 steps&lt;br /&gt;
 | image4 =cg141732x32error.png&lt;br /&gt;
 | caption4 = 32x32 matrix - 200000 steps, cut-off = 50000 steps&lt;br /&gt;
 | footer_align = left&lt;br /&gt;
 | footer = Figure 13 - Results of running the ILtemperaturerange.py file for 2x2, 4x4, 16x16 and 32x32 matrices.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Section 7 - Determining the Heat Capacity==&lt;br /&gt;
&lt;br /&gt;
===TASK: By definition, &amp;lt;math&amp;gt;C = \frac{\partial \left\langle E\right\rangle}{\partial T}&amp;lt;/math&amp;gt;. From this, show that &amp;lt;math&amp;gt;C = \frac{\mathrm{Var}[E]}{k_B T^2}&amp;lt;/math&amp;gt; (Where &amp;lt;math&amp;gt;\mathrm{Var}[E]&amp;lt;/math&amp;gt; is the variance in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;.)===&lt;br /&gt;
&lt;br /&gt;
Recall from statistical thermodynamics that the average energy of a system is the sum across all microstates of the probability of that microstate multiplied by the energy of that microstate, which is defined mathematically asː &amp;lt;math&amp;gt;\langle E \rangle = \sum_i p_{i}\epsilon_{i}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt; is defined as &amp;lt;math&amp;gt;q = \sum_{i} exp(-\beta \epsilon_{i})&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;\beta =\frac{1}{k_BT}&amp;lt;/math&amp;gt;, and the probability &amp;lt;math&amp;gt;p_{i}&amp;lt;/math&amp;gt; can be defined in terms of the partition function as &amp;lt;math&amp;gt;p_{i} = \frac{exp(-\beta \epsilon_{i})}{\sum_{i} exp(-\beta \epsilon_{i})} = \frac{exp(-\beta \epsilon_{i})}{q}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a result, &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; can be re-written as &amp;lt;math&amp;gt;\langle E \rangle = \sum_{i} \frac{\epsilon_{i} exp(-\beta \epsilon_{i})}{q} = -\frac{1}{q} \frac{\partial}{\partial \beta}\sum_{i}exp(-\beta \epsilon_{i}) = -\frac{1}{q} \frac{\partial q}{\partial \beta}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Likewise, &amp;lt;math&amp;gt;\langle E^2 \rangle = \sum_i p_{i}\epsilon_{i}^{2} = \sum_{i} \frac{\epsilon_{i}^{2}exp(-\beta \epsilon_{i})}{q} = \frac{1}{q} \frac{\partial^{2}}{\partial \beta^{2}}\sum_{i}exp(-\beta \epsilon_{i}) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From definitionː &amp;lt;math&amp;gt;Var[E] = \Delta E^2 = \langle E^2 \rangle - \langle E \rangle^2 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the definition of &amp;lt;math&amp;gt;\langle E \rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\langle E^2 \rangle&amp;lt;/math&amp;gt; is written in terms of partition function &amp;lt;math&amp;gt;q&amp;lt;/math&amp;gt;ː &amp;lt;math&amp;gt;Var[E] = \langle E^2 \rangle - \langle E \rangle^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \left(\frac{1}{q}\frac{\partial q}{\partial \beta}\right)^2 = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By the product rule: &amp;lt;math&amp;gt;\frac{\partial}{\partial \beta}\left(\frac{1}{q} \frac{\partial q}{\partial \beta}\right) = \frac{1}{q} \frac{\partial^{2} q}{\partial \beta^{2}} - \frac{1}{q^2}\left(\frac{\partial q}{\partial \beta}\right)^2 = -\frac{\partial}{\partial \beta}\langle E \rangle = Var[E]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And using the chain rule againː &amp;lt;math&amp;gt;C = \frac{\partial \langle E \rangle}{\partial T}= \left(-\frac{\partial \langle E \rangle}{\partial \beta}\right) \left(-\frac{\partial \beta}{\partial T}\right) = \frac{Var[E]}{k_B T^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
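&lt;br /&gt;
In the reduced units used here (&amp;lt;math&amp;gt;k_B = 1&amp;lt;/math&amp;gt;), this result translates directly into code: the heat capacity is just the variance of the recorded energies divided by T squared. A sketch:&lt;br /&gt;

```python
import numpy as np

def heat_capacity(energies, T):
    """C = Var[E] / (k_B T^2), with k_B = 1 in reduced units."""
    e = np.asarray(energies, dtype=float)
    var = (e ** 2).mean() - e.mean() ** 2   # <E^2> - <E>^2
    return var / T ** 2

# toy sample: Var[E] = 14/3 - 4 = 2/3, so C = (2/3) / 2^2
print(heat_capacity([1.0, 2.0, 3.0], 2.0))
```

This is why statistics() returns both the mean energy and the mean squared energy: together they give the variance, and hence C, at each temperature.&lt;br /&gt;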
&lt;br /&gt;
&lt;br /&gt;
===TASK: Write a Python script to make a plot showing the heat capacity versus temperature for each of your lattice sizes from the previous section. You may need to do some research to recall the connection between the variance of a variable, &amp;lt;math&amp;gt;\mathrm{Var}[X]&amp;lt;/math&amp;gt;, the mean of its square &amp;lt;math&amp;gt;\left\langle X^2\right\rangle&amp;lt;/math&amp;gt;, and its squared mean &amp;lt;math&amp;gt;\left\langle X\right\rangle^2&amp;lt;/math&amp;gt;. You may find that the data around the peak is very noisy — this is normal, and is a result of being in the critical region. As before, use the plot controls to save your a PNG image of your plot and attach this to the report. ===&lt;br /&gt;
&lt;br /&gt;
The python script for this section can be found in the Jupyter Notebook  - CG1417IsingModelGraphs.ipynb&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
 | width =300&lt;br /&gt;
 | align = left&lt;br /&gt;
 | image1 = cg14172x2heatcap.png&lt;br /&gt;
 | caption1 = 2x2 Matrix&lt;br /&gt;
 | image2 = cg14174x4heatcap.png&lt;br /&gt;
 | caption2 = 4x4 Matrix&lt;br /&gt;
 | image3 =cg14178x8heatcap.png&lt;br /&gt;
 | caption3 = 8x8 Matrix&lt;br /&gt;
 | image4 =cg141716x16heatcap.png&lt;br /&gt;
 | caption4 = 16x16 Matrix&lt;br /&gt;
 | image5=cg141732x32heatcap.png&lt;br /&gt;
 | caption5= 32x32 Matrix&lt;br /&gt;
 | footer = Figure 14 - Graphs showing Heat Capacity against Temperature for each matrix size&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A general trend from the above graphs is that the peak of the graph shifts towards lower temperatures as the size of the matrix used increases.&lt;br /&gt;
&lt;br /&gt;
==Section 8 - Locating the Curie Temperature==&lt;br /&gt;
===TASK: A C++ program has been used to run some much longer simulations than would be possible on the college computers in Python. You can view its source code here if you are interested. Each file contains six columns: &amp;lt;math&amp;gt;T, E, E^2, M, M^2, C&amp;lt;/math&amp;gt; (the final five quantities are per spin), and you can read them with the NumPy loadtxt function as before. For each lattice size, plot the C++ data against your data. For one lattice size, save a PNG of this comparison and add it to your report — add a legend to the graph to label which is which. To do this, you will need to pass the label=&amp;quot;...&amp;quot; keyword to the plot function, then call the legend() function of the axis object (documentation here).===&lt;br /&gt;
&lt;br /&gt;
The python code used to read and plot the C++ data is found in the Jupyter notebook CG1417IsingModelGraphs.ipynb.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Figure 15&#039;&#039; below shows the C++ data plotted against my own data for a 16x16 matrix.&lt;br /&gt;
&lt;br /&gt;
[[File:Cg141716x16C++.png|400px|thumb|left|Figure 15 - Graph showing my own data against the C++ data for a 16x16 matrix.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: write a script to read the data from a particular file, and plot C vs T, as well as a fitted polynomial. Try changing the degree of the polynomial to improve the fit — in general, it might be difficult to get a good fit! Attach a PNG of an example fit to your report.===&lt;br /&gt;
&lt;br /&gt;
The python script to read and plot the fitted polynomial is found in CG1417PolyfitScript.ipynb&lt;br /&gt;
&lt;br /&gt;
Below in &#039;&#039;Figure 16&#039;&#039; is a plot of my Heat Capacity against Temperature data for a 16x16 matrix, together with a fitted polynomial of degree 35. Even at such a high degree, the polynomial fits the curve poorly and does not capture the peak.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417FIT_TEST16x16_35.png|thumb|left|400px|Figure 16 - Plot of Heat Capacity against Temperature along with a poorly fitted polynomial of degree 35. ]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br clear = all &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: Modify your script from the previous section. You should still plot the whole temperature range, but fit the polynomial only to the peak of the heat capacity! You should find it easier to get a good fit when restricted to this region===&lt;br /&gt;
&lt;br /&gt;
The script was modified so that the polynomial was fitted only over a set range around the peak of the graph. This is demonstrated in &#039;&#039;Figure 17&#039;&#039;, which shows a polynomial of much lower degree (3) fitted over a much smaller range of temperatures (T = 2.15-2.55).&lt;br /&gt;
&lt;br /&gt;
[[File:CG1417FIT_16x16C_3.png|thumb|left|400px|Figure 17 - Graph showing Heat Capacity against Temperature for a 16x16 matrix along with a fitted polynomial between a much more restricted range of temperatures and a significantly lower degree of polynomial]]&lt;br /&gt;
&lt;br /&gt;
Compared with &#039;&#039;Figure 16&#039;&#039;, the new fitted polynomial is a significantly better fit, even at only 3rd degree, and represents my data around the peak much more accurately, which makes it easier to determine the maximum value of the Heat Capacity.&lt;br /&gt;
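&lt;br /&gt;
The restricted fit can be sketched with np.polyfit by masking the temperature array first; the window 2.15-2.55 and degree 3 below mirror the choices above, and the synthetic parabola stands in for the real C(T) data:&lt;br /&gt;

```python
import numpy as np

# synthetic stand-in for the C vs T data, peaked near T = 2.3
T = np.linspace(0.5, 5.0, 200)
C = 1.5 - (T - 2.3) ** 2

# fit only inside the chosen window around the peak
mask = (T >= 2.15) & (T <= 2.55)
coeffs = np.polyfit(T[mask], C[mask], 3)

# locate the maximum of the fitted polynomial on a fine grid
fine = np.linspace(2.15, 2.55, 1000)
T_peak = fine[np.argmax(np.polyval(coeffs, fine))]
print(T_peak)  # close to 2.3 for this synthetic curve
```

Restricting the fit to the window keeps the low-degree polynomial from being pulled around by the flat wings of the curve, which is exactly what went wrong with the degree-35 global fit.&lt;br /&gt;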
&lt;br /&gt;
&amp;lt;br  clear = all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===TASK: find the temperature at which the maximum in C occurs for each datafile that you were given. Make a text file containing two colums: the lattice side length (2,4,8, etc.), and the temperature at which C is a maximum. This is your estimate of &amp;lt;math&amp;gt;T_C&amp;lt;/math&amp;gt; for that side length. Make a plot that uses the scaling relation given above to determine &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. By doing a little research online, you should be able to find the theoretical exact Curie temperature for the infinite 2D Ising lattice. How does your value compare to this? Are you surprised by how good/bad the agreement is? Attach a PNG of this final graph to your report, and discuss briefly what you think the major sources of error are in your estimate.===&lt;br /&gt;
&lt;br /&gt;
Figure 18 below shows a graph of &amp;lt;math&amp;gt;T_{C,L}&amp;lt;/math&amp;gt; against &amp;lt;math&amp;gt;\frac{1}{Lattice Size}&amp;lt;/math&amp;gt;, used to determine the Curie Temperature of an infinite 2D Ising Model lattice, &amp;lt;math&amp;gt;T_{C,\infty}&amp;lt;/math&amp;gt;. The black dots are the raw data, the temperature at which the Heat Capacity is a maximum for each lattice size, and the red line is a linear fit to the data, whose y-intercept gives the Curie Temperature of the infinite 2D lattice.&lt;br /&gt;
&lt;br /&gt;
[[File:cg1417CurieTemp.png|400px|thumb|left|Figure 18 - Plot of Curie Temperature against 1/Lattice Size for each lattice size.]]&lt;br /&gt;
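&lt;br /&gt;
The extrapolation uses the scaling relation T_C(L) = A/L + T_C(infinity): a straight-line fit of the peak temperatures against 1/L whose intercept is the infinite-lattice estimate. A sketch with illustrative placeholder peak temperatures (not the values from my data files):&lt;br /&gt;

```python
import numpy as np

# lattice side lengths and illustrative peak temperatures (placeholders)
L = np.array([2, 4, 8, 16, 32, 64], dtype=float)
Tc = np.array([2.5, 2.44, 2.34, 2.31, 2.29, 2.285])

# linear fit: T_C(L) = slope * (1/L) + intercept, intercept = T_C(infinity)
slope, intercept = np.polyfit(1.0 / L, Tc, 1)
print(intercept)  # estimate of the infinite-lattice Curie temperature
```

With real peak temperatures in place of the placeholders, the intercept of this fit is the quantity compared against the exact Onsager value below.&lt;br /&gt;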
&lt;br /&gt;
The value obtained from the data is &amp;lt;math&amp;gt;T_{C,\infty} = 2.277 \frac{J}{k_B}&amp;lt;/math&amp;gt;, compared with a literature value of &amp;lt;math&amp;gt;T_{C,\infty} = 2.269 \frac{J}{k_B}&amp;lt;/math&amp;gt; for an infinite square 2D lattice. My result therefore slightly over-estimates the Curie Temperature of the infinite lattice, meaning that spontaneous magnetisation would actually stop at a slightly lower temperature than my estimate suggests. However, the difference between my value and the literature value is only 0.008, which is incredibly small; this level of agreement is somewhat surprising, and implies that the error in my estimates of the Curie Temperature for each lattice size is relatively small. A potential source of error in the values of the Curie Temperature for each lattice size could come from the&lt;/div&gt;</summary>
		<author><name>Cg1417</name></author>
	</entry>
</feed>