Rep:Mm5713
Introduction to Molecular Dynamics Simulation
TASK: Open the file HO.xls. In it, the velocity-Verlet algorithm is used to model the behaviour of a classical harmonic oscillator. Complete the three columns "ANALYTICAL", "ERROR", and "ENERGY": "ANALYTICAL" should contain the value of the classical solution for the position at time $t$, "ERROR" should contain the absolute difference between "ANALYTICAL" and the velocity-Verlet solution (i.e. ERROR should always be positive -- make sure you leave the half step rows blank!), and "ENERGY" should contain the total energy of the oscillator for the velocity-Verlet solution. Remember that the position of a classical harmonic oscillator is given by $x(t) = A\cos(\omega t + \phi)$ (the values of $A$, $\omega$, and $\phi$ are worked out for you in the sheet).
The formula used to calculate the position at time, $t$, for the classical harmonic oscillator was:
$$x(t) = A\cos(\omega t + \phi)$$
To obtain the classical solution for the energy, the following equation was employed:
$$E = \frac{1}{2}mv^2 + \frac{1}{2}kx^2 = \frac{1}{2}kA^2$$
The given initial conditions were:
k = 1.00, Timestep = 0.1000, Mass = 1.00, $\phi$ = 0.00, A = 1.00, $\omega$ = 1.00
Figure 1 illustrates that the results obtained from the velocity-Verlet algorithm for the position of the oscillator are in good agreement with the ones obtained from the classical solution.
From figure 2 it can be seen that the total energy of the system oscillates between values of 0.4988 and 0.5000. For a perfect harmonic oscillator, the total energy would remain constant at a fixed value, whilst oscillating between potential and kinetic energy. Nevertheless, since the total energy only fluctuates by approximately 0.25%, the total energy of the system can be regarded as constant, thus giving an acceptable physical behaviour.
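The energy fluctuation described above can be reproduced with a short velocity-Verlet sketch. This is a minimal Python illustration, not the HO.xls sheet itself; the variable names are ours, but the parameters match those listed above (k = m = A = $\omega$ = 1, $\phi$ = 0, timestep 0.1):

```python
import math

# Velocity-Verlet integration of a classical harmonic oscillator F = -k*x.
k, m, A, omega, phi = 1.0, 1.0, 1.0, 1.0, 0.0
dt, n_steps = 0.1, 200

x = A * math.cos(phi)                 # x(0)
v = -A * omega * math.sin(phi)        # v(0)
energies = []
for _ in range(n_steps):
    a = -k * x / m                    # acceleration at t
    x += v * dt + 0.5 * a * dt * dt   # full-step position update
    a_new = -k * x / m                # acceleration at t + dt
    v += 0.5 * (a + a_new) * dt       # velocity update uses both accelerations
    energies.append(0.5 * m * v * v + 0.5 * k * x * x)

# Relative fluctuation of the total energy (as a fraction of E = 0.5):
drift = (max(energies) - min(energies)) / (0.5 * k * A * A)
print(f"relative energy fluctuation: {drift:.4%}")
```

For these settings the fluctuation comes out at a fraction of a percent, consistent with the 0.4988-0.5000 range quoted above.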
After having simulated the position and energy of the harmonic oscillator with the above specified conditions, it was decided to change the mass in order to observe how the position and energy of the harmonic oscillator would vary. The new conditions used were:
k = 1.00, Timestep = 0.1000, Mass = 2.00, $\phi$ = 0.00, A = 1.00, $\omega$ = 0.71
The above two diagrams demonstrate that increasing the mass resulted in a decrease in frequency for both the position against time and the energy against time graphs. This behaviour was explained by knowing that:
$$\omega = \sqrt{\frac{k}{\mu}}$$
where $\omega$ = frequency, $k$ = force constant and $\mu$ = reduced mass.
From the above equation it follows that as the mass was increased from m = 1.00 to m = 2.00, the reduced mass increased from $\mu$ = 0.5 to $\mu$ = 1.0, thus causing the frequency to decrease.
Similarly to exploring the effects of changing mass on the harmonic oscillator, it was decided to investigate the effects brought by varying the force constant. The new simulation conditions were:
k = 2.00, Timestep = 0.1000, Mass = 1.00, $\phi$ = 0.00, A = 1.00, $\omega$ = 1.41
The results obtained for these simulations are found in the diagrams below.
Each of the above two charts illustrates that as the force constant was increased from k = 1.00 to k = 2.00, the oscillation frequency increased for both the position against time plot and the energy against time plot. This trend could also be justified by looking at the equation:
$$\omega = \sqrt{\frac{k}{\mu}}$$
which shows that as $k$, the force constant, increases, the oscillation frequency also increases.
TASK: For the default timestep value, 0.1, estimate the positions of the maxima in the ERROR column as a function of time. Make a plot showing these values as a function of time, and fit an appropriate function to the data.
The line of best fit connecting the ERROR maxima was found to be a linear function of time.
The positive gradient in the linear trend line demonstrates that the error increases as the time increases. This is due to the nature of the error calculations, which involved calculating the absolute difference between the classical and velocity-Verlet algorithm solution. As the velocity-Verlet algorithm is an approximation based on a Taylor expansion, small errors in the simulated position accumulate over time, thus also resulting in an increased error as the time progresses.
TASK: Experiment with different values of the timestep. What sort of a timestep do you need to use to ensure that the total energy does not change by more than 1% over the course of your "simulation"? Why do you think it is important to monitor the total energy of a physical system when modelling its behaviour numerically?
Figures 8, 9 and 10 show that as the timestep was increased, the fluctuation in the energy also increased. Moreover, the diagrams also illustrate that, to ensure that the total energy did not change by more than 1% (i.e. remained between 0.495 and 0.505) over the course of the simulation, a timestep of at most 0.2000 had to be used.
It is important to monitor the total energy of a physical system when modelling its behaviour numerically to ensure that the simulation obeys the law of energy conservation and hence yields physically accurate results.
TASK: For a single Lennard-Jones interaction, $\phi(r) = 4\epsilon\left(\frac{\sigma^{12}}{r^{12}} - \frac{\sigma^{6}}{r^{6}}\right)$, find the separation, $r_0$, at which the potential energy is zero. What is the force at this separation? Find the equilibrium separation, $r_{eq}$, and work out the well depth ($\epsilon$).
The potential energy is zero when:
$$4\epsilon\left(\frac{\sigma^{12}}{r^{12}} - \frac{\sigma^{6}}{r^{6}}\right) = 0 \quad\Rightarrow\quad r_0 = \sigma$$
From classical physics it follows that the force acting on an object is equal to the negative derivative of the potential:
$$F(r) = -\frac{d\phi(r)}{dr}$$
Hence:
$$F(r) = 24\epsilon\left(\frac{2\sigma^{12}}{r^{13}} - \frac{\sigma^{6}}{r^{7}}\right)$$
Since $r_0 = \sigma$, the force at this separation is:
$$F(r_0) = \frac{24\epsilon}{\sigma}$$
The equilibrium separation, $r_{eq}$, is the separation at which the force is zero[1]:
$$F(r_{eq}) = 0 \quad\Rightarrow\quad r_{eq} = 2^{1/6}\sigma$$
The well depth is then determined by inserting the solution obtained for $r_{eq}$ into the Lennard-Jones equation, giving:
$$\phi(r_{eq}) = 4\epsilon\left(\frac{1}{4} - \frac{1}{2}\right) = -\epsilon$$
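These results can be checked numerically. The following is a minimal Python sketch in reduced units (assuming $\sigma = \epsilon = 1$; the function names are ours):

```python
def lj(r, sigma=1.0, eps=1.0):
    """Lennard-Jones potential phi(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_force(r, sigma=1.0, eps=1.0):
    """F(r) = -dphi/dr = 24*eps*(2*sigma**12/r**13 - sigma**6/r**7)."""
    return 24.0 * eps * (2.0 * sigma ** 12 / r ** 13 - sigma ** 6 / r ** 7)

r0 = 1.0                      # potential crosses zero at r = sigma
req = 2.0 ** (1.0 / 6.0)      # force vanishes at the equilibrium separation
print(lj(r0), lj_force(r0))   # 0 and 24 (i.e. 24*eps/sigma)
print(lj(req), lj_force(req)) # ~-1 (the well depth, -eps) and ~0
```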
Evaluate the integrals , , and when .
When:
TASK: Estimate the number of water molecules in 1ml of water under standard conditions. Estimate the volume of 10000 water molecules under standard conditions.
Under standard conditions[2], the density of water is approximately 1.00 g/mL and its molar mass is 18.02 g/mol. Hence the amount of water in 1 mL is:
$$n = \frac{m}{M} = \frac{1.00\ \text{g}}{18.02\ \text{g mol}^{-1}} = 0.0555\ \text{mol}$$
Hence:
$$N = nN_A = 0.0555\ \text{mol} \times 6.022\times10^{23}\ \text{mol}^{-1} \approx 3.34\times10^{22}\ \text{molecules}$$
Consequently, there are approximately $3.34\times10^{22}$ molecules of water in 1ml of water under standard conditions.
The volume occupied by 10000 water molecules is calculated as follows:
$$n = \frac{10000}{6.022\times10^{23}\ \text{mol}^{-1}} = 1.66\times10^{-20}\ \text{mol}$$
Hence:
$$m = nM = 1.66\times10^{-20}\ \text{mol} \times 18.02\ \text{g mol}^{-1} = 2.99\times10^{-19}\ \text{g}$$
Under standard conditions, with a density of 1.00 g/mL:
$$V = \frac{m}{\rho} = 2.99\times10^{-19}\ \text{mL}$$
The volume occupied by 10000 water molecules is therefore approximately $3.0\times10^{-19}$ mL.
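The same estimate can be written as a few lines of Python (a sketch using the standard values for water's density and molar mass; variable names are ours):

```python
# Number of water molecules in 1 mL under standard conditions
# (density ~1.00 g/mL, molar mass 18.02 g/mol).
AVOGADRO = 6.022e23      # 1/mol
M_WATER = 18.02          # g/mol
DENSITY = 1.00           # g/mL

mass_g = 1.0 * DENSITY                         # grams of water in 1 mL
n_molecules = mass_g / M_WATER * AVOGADRO
print(f"{n_molecules:.2e} molecules in 1 mL")  # ~3.34e+22

# Volume occupied by 10000 molecules, assuming the same density:
v_10000_mL = 10000.0 / n_molecules
print(f"{v_10000_mL:.1e} mL")                  # ~3.0e-19
```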
TASK: Consider an atom at position in a cubic simulation box which runs from to . In a single timestep, it moves along the vector . At what point does it end up, after the periodic boundary conditions have been applied?
If there were no periodic boundary conditions, the atom would simply end up at its starting position plus the displacement vector. Since periodic boundary conditions have been introduced in our system, however, any coordinate of the resulting position that lies outside the box is wrapped back in by subtracting (or adding) the box length, so the atom's final position is the displaced position reduced modulo the box dimensions.
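The wrapping step can be sketched in a few lines of Python. The starting point and displacement below are illustrative values assumed for a unit box running from 0 to 1 (they are not reproduced from the task statement):

```python
def wrap(position, box_lo=0.0, box_hi=1.0):
    """Apply periodic boundary conditions coordinate-wise: any coordinate
    outside [box_lo, box_hi) is folded back in by the box length."""
    length = box_hi - box_lo
    return tuple((c - box_lo) % length + box_lo for c in position)

# Illustrative starting point and displacement (assumed values):
start = (0.5, 0.5, 0.5)
move = (0.7, 0.6, 0.2)
end = wrap(tuple(s + m for s, m in zip(start, move)))
print(end)  # ~(0.2, 0.1, 0.7)
```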
TASK: The Lennard-Jones parameters for argon are . If the LJ cutoff is , what is it in real units? What is the well depth in ? What is the reduced temperature in real units?
The conversion between reduced and real units is given by $r = r^{*}\sigma$, $E = E^{*}\epsilon$ and $T = T^{*}\epsilon/k_B$. Hence the LJ cutoff distance in real units is obtained by multiplying the reduced cutoff by $\sigma$; the well depth in kJ mol$^{-1}$ is obtained from $\epsilon/k_B$ by multiplying by $k_B N_A$; and the reduced temperature is converted to kelvin by multiplying by $\epsilon/k_B$.
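These conversions can be sketched in Python. The argon parameters, cutoff and reduced temperature below are assumed values (typical literature figures), not reproduced verbatim from the task:

```python
# Reduced-to-real unit conversion for a Lennard-Jones fluid.
K_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro constant, 1/mol

sigma_nm = 0.34           # assumed sigma for argon, nm
eps_over_kB = 120.0       # assumed epsilon/k_B for argon, K

r_star = 3.2              # assumed reduced cutoff
cutoff_nm = r_star * sigma_nm                     # r = r* sigma
well_depth_kJmol = eps_over_kB * K_B * N_A / 1e3  # eps * N_A, in kJ/mol
T_star = 1.5              # assumed reduced temperature
T_real = T_star * eps_over_kB                     # T = T* eps/k_B, in K

print(cutoff_nm, well_depth_kJmol, T_real)
```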
Equilibration
TASK: Why do you think giving atoms random starting coordinates causes problems in simulations? Hint: what happens if two atoms happen to be generated close together?
Assigning atoms totally random starting coordinates can cause problems in a simulation, since the generation step does not take into account the position of each atom relative to the others. There is therefore a high probability that the simulation will place atoms in very close proximity, i.e. overlapping one another. Because the Lennard-Jones potential rises sharply once the internuclear distance falls below the equilibrium separation, such overlaps produce extremely high potential energies, which do not reflect reality accurately. Considering that the simulations in this study feature approximately 8000 atoms, the likelihood of two atoms overlapping is very high, and hence so is the probability of generating these unphysically high potential energies.
Moreover, a consequence of generating overlapping atoms is that the strong repulsive forces will lead to extremely fast-moving atoms once the simulation has started, forcing the simulation to use a very small timestep to continue delivering accurate results.
TASK: Satisfy yourself that this lattice spacing corresponds to a number density of lattice points of . Consider instead a face-centred cubic lattice with a lattice point number density of 1.2. What is the side length of the cubic unit cell?
From the following line in the output file
Lattice spacing in x,y,z = 1.07722 1.07722 1.07722
it follows that the distance between the lattice points, and hence the side length of the cubic unit cell, in this simulation is 1.07722. The volume of the unit cell is therefore:
$$V = 1.07722^3 = 1.250$$
The formula for the number density of any given lattice is:
$$\rho = \frac{N}{V}$$
where $N$ is the number of lattice points per unit cell. Hence, for the simple cubic lattice with one lattice point per cell:
$$\rho = \frac{1}{1.250} = 0.8$$
As can be seen from figure 12, there are four lattice points within the FCC unit cell, since $8 \times \frac{1}{8} + 6 \times \frac{1}{2} = 4$. Consequently, the side length of the cubic unit cell can be determined from the same formula:
$$a = \left(\frac{N}{\rho}\right)^{1/3} = \left(\frac{4}{1.2}\right)^{1/3} = 1.4938$$
The side length of the cubic unit cell is thus 1.4938.
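The arithmetic above can be verified with a short Python check (variable names are ours):

```python
# Sanity checks for the lattice arithmetic.
a_sc = 1.07722                 # sc lattice spacing from the LAMMPS output
rho_sc = 1.0 / a_sc ** 3       # one lattice point per sc unit cell
print(round(rho_sc, 3))        # ~0.8

n_fcc, rho_fcc = 4, 1.2        # fcc: 4 lattice points per cell, density 1.2
a_fcc = (n_fcc / rho_fcc) ** (1.0 / 3.0)
print(round(a_fcc, 4))         # ~1.4938
```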
TASK: Consider again the face-centred cubic lattice from the previous task. How many atoms would be created by the create_atoms command if you had defined that lattice instead?
The lines in the previous input file corresponded to:
region box block 0 10 0 10 0 10
create_box 1 box
Thus meaning that the simulation created a 10x10x10 array of unit cells. If the face-centred cubic lattice, with four lattice points in each unit cell (8*1/8 + 6*1/2 = 4), had been used in this simulation, 4000 atoms would have been created by the create_atoms command.
TASK: Using the LAMMPS manual, find the purpose of the following commands in the input script:
mass 1 1.0
pair_style lj/cut 3.0
pair_coeff 1 1 1.0 1.0
mass 1 1.0
The mass 1 1.0 command sets the mass for all atoms of one or more types; in this case the mass of atom type 1 has been set to 1.0 units. If the term 1 had been replaced by an asterisk (*), then all atom types from 1 to N would have been assigned the same mass, in this case 1.0. If instead an asterisk had been placed before a number n (*n), atom types 1 to n would have been assigned the specified value. Conversely, an asterisk after n (n*) signifies that atom types n to N will be assigned the specified value.
pair_style lj/cut 3.0
The pair_style command specifies how the pairwise interactions, i.e. the potential between atoms, are calculated. The lj/cut specification instructs LAMMPS to calculate the Lennard-Jones potential without Coulombic terms. The 3.0 term indicates that only pair interactions below a cutoff distance of 3.0 reduced units will be calculated. Specifying a cutoff distance is very useful given the nature of the Lennard-Jones potential, which tends asymptotically to zero at large internuclear distances; using a cutoff thus allows accurate results to be obtained whilst significantly reducing simulation times.
pair_coeff 1 1 1.0 1.0
The pair_coeff command specifies the force field coefficients for the given pair of atom types, in this case interactions between atoms of type 1 and type 1, as indicated by the two 1 terms. The two values of 1.0 set the force field coefficients (for lj/cut, $\epsilon$ and $\sigma$) to 1.0 reduced units.
TASK: Given that we are specifying $x_i(0)$ and $v_i(0)$, which integration algorithm are we going to use?
We use the velocity-Verlet algorithm, since this specific algorithm requires the initial positions $x_i(0)$ and velocities $v_i(0)$ to be specified.
TASK: Look at the lines below.
### SPECIFY TIMESTEP ###
variable timestep equal 0.001
variable n_steps equal floor(100/${timestep})
variable n_steps equal floor(100/0.001)
timestep ${timestep}
timestep 0.001

### RUN SIMULATION ###
run ${n_steps}
run 100000
The second line (starting "variable timestep...") tells LAMMPS that if it encounters the text ${timestep} on a subsequent line, it should replace it by the value given. In this case, the value ${timestep} is always replaced by 0.001. In light of this, what do you think the purpose of these lines is? Why not just write:
timestep 0.001
run 100000
If the code were written in the simplified form, each time the timestep was changed it would be necessary to manually change the number of steps that are run in order to keep the overall simulation time constant. For example, if the length of the timestep were doubled, the simplified code would lead to doubling the simulated time. Conversely, with the code used in our simulations, the number of timesteps to be run is adjusted automatically whenever the timestep duration is modified, thus ensuring a constant total simulation time. This is very useful, since the time domain then remains constant across all of the different simulations.
TASK: make plots of the energy, temperature, and pressure, against time for the 0.001 timestep experiment (attach a picture to your report). Does the simulation reach equilibrium? How long does this take? When you have done this, make a single plot which shows the energy versus time for all of the timesteps (again, attach a picture to your report). Choosing a timestep is a balancing act: the shorter the timestep, the more accurately the results of your simulation will reflect the physical reality; short timesteps, however, mean that the same number of simulation steps cover a shorter amount of actual time, and this is very unhelpful if the process you want to study requires observation over a long time. Of the five timesteps that you used, which is the largest to give acceptable results? Which one of the five is a particularly bad choice? Why?
In order to determine when the simulation reached equilibrium, the x-axes of the above three graphs were adjusted to a smaller scale.
From the zoomed-in graphs it followed that equilibrium was reached at t = 0.3, since from then onwards the fluctuations in the total energy, temperature and pressure remain relatively constant over the rest of the time domain.
A plot of the total energy against time for each of the different timesteps is shown below.
From figure 19 it can be seen that, of all the timesteps employed, the largest (0.015) leads to the worst results, since the simulation does not reach an equilibrium state. Moreover, the graph also suggests that for each of the other timesteps used (0.001, 0.0025, 0.0075 and 0.01) the simulation always reaches an equilibrium state. Nevertheless, a close-up of figure 19, shown in figure 20, demonstrates that there are still significant discrepancies between the 0.001, 0.0025, 0.0075 and 0.01 simulations.
From the expansion of figure 20, it follows that the simulations with the two smallest timesteps (0.001 and 0.0025) fluctuate around a similar value of approximately -3.185 units. The larger two timesteps (0.0075 and 0.01) instead oscillate at a higher energy value and feature a greater variation about their averages. The results obtained for the 0.0075 and 0.01 timesteps are thus less accurate than those for the 0.001 and 0.0025 timesteps. Consequently, the timestep giving the best balance between accuracy and simulation time is 0.0025, since it combines accurate results with relatively short simulation times.
Running Simulations under Specific Conditions
TASK: Choose 5 temperatures (above the critical temperature ), and two pressures (you can get a good idea of what a reasonable pressure is in Lennard-Jones units by looking at the average pressure of your simulations from the last section). This gives ten phase points — five temperatures at each pressure. Create 10 copies of npt.in, and modify each to run a simulation at one of your chosen points. You should be able to use the results of the previous section to choose a timestep. Submit these ten jobs to the HPC portal. While you wait for them to finish, you should read the next section.
From the simulations run in the previous section it followed that the equilibrium pressure was approximately 2.5 reduced units. With reference to this value, the two pressures selected for this task were 2.5 and 3.0. After having simulated the properties of the simple liquid at the two selected pressures, it was decided to study its properties at an even higher pressure, namely 3.5, in order to confirm the trends established for the previous simulations.
The temperatures that were chosen were all above the critical temperature of 1.5 in order to avoid the formation of a liquid-gas equilibrium, which would then lead to unsatisfactory results. The values chosen were thus 1.5, 2.5, 3.5, 4.5 and 5.5. The chosen interval was of 1.0 in order to simulate the behaviour of the simple liquid over an appropriate range of values.
Lastly, the timestep chosen was 0.0025, since this value guaranteed the best balance between accurate results and low computing time.
TASK: We need to choose $\gamma$ so that the temperature is correct if we multiply every velocity by $\gamma$. We can write two equations:
$$\frac{1}{2}\sum_i m_i v_i^2 = \frac{3}{2} N k_B T$$
$$\frac{1}{2}\sum_i m_i (\gamma v_i)^2 = \frac{3}{2} N k_B \mathfrak{T}$$
where $T$ is the instantaneous temperature and $\mathfrak{T}$ the target temperature. Solve these to determine $\gamma$.
Dividing the second equation by the first leads to:
$$\frac{\sum_i m_i (\gamma v_i)^2}{\sum_i m_i v_i^2} = \frac{\mathfrak{T}}{T}$$
Since $\gamma$ is a constant, it can be taken out of the sum term:
$$\gamma^2\,\frac{\sum_i m_i v_i^2}{\sum_i m_i v_i^2} = \gamma^2 = \frac{\mathfrak{T}}{T}$$
Solving for $\gamma$ (where $\mathfrak{T}$ is the target temperature and $T$ the instantaneous temperature) gives:
$$\gamma = \sqrt{\frac{\mathfrak{T}}{T}}$$
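The rescaling can be demonstrated with a minimal Python sketch (reduced units with $k_B = 1$; the function and variable names are ours):

```python
import math
import random

def rescale_velocities(velocities, masses, T_target, kB=1.0):
    """Multiply every velocity by gamma = sqrt(T_target / T_instant), where
    the instantaneous temperature follows from (1/2)sum(m v^2) = (3/2) N kB T."""
    N = len(velocities)
    ke = 0.5 * sum(m * sum(c * c for c in v) for m, v in zip(masses, velocities))
    T_inst = 2.0 * ke / (3.0 * N * kB)
    gamma = math.sqrt(T_target / T_inst)
    return [tuple(gamma * c for c in v) for v in velocities]

random.seed(1)
vels = [tuple(random.gauss(0.0, 1.0) for _ in range(3)) for _ in range(1000)]
masses = [1.0] * 1000
new_vels = rescale_velocities(vels, masses, T_target=1.5)

# The rescaled velocities reproduce the target temperature exactly:
ke_new = 0.5 * sum(sum(c * c for c in v) for v in new_vels)
T_new = 2.0 * ke_new / (3.0 * 1000)
print(T_new)  # ~1.5
```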
TASK: Use the manual page to find out the importance of the three numbers 100 1000 100000. How often will values of the temperature, etc., be sampled for the average? How many measurements contribute to the average?
fix aves all ave/time 100 1000 100000 v_dens v_temp v_press v_dens2 v_temp2 v_press2
run 100000
The fix command is used to calculate the average for the various thermodynamic properties in which we are interested.
The value 100 is Nevery and instructs LAMMPS to take a sample every 100 timesteps. The value 1000 is Nrepeat and tells LAMMPS how many samples are included in each average, in this case 1000. The value 100000 is Nfreq and tells LAMMPS when the averages are computed: an average is produced at every multiple of 100000 steps, including step 100000 itself. In summary, with these settings an average is computed every 100000 steps, each average being taken over 1000 samples spaced 100 timesteps apart.
Looking at the following line, how much time will you simulate? Since we have set the timestep to dt = 0.0025 and are running 100000 timesteps, we will simulate 100000 × 0.0025 = 250 reduced time units.
TASK: When your simulations have finished, download the log files as before. At the end of the log file, LAMMPS will output the values and errors for the pressure, temperature, and density . Use software of your choice to plot the density as a function of temperature for both of the pressures that you simulated. Your graph(s) should include error bars in both the x and y directions. You should also include a line corresponding to the density predicted by the ideal gas law at that pressure. Is your simulated density lower or higher? Justify this. Does the discrepancy increase or decrease with pressure?
The results for the ideal gas law were calculated by employing the equation:
$$PV = N k_B T$$
This equation was rearranged for the number density as follows:
$$\rho = \frac{N}{V} = \frac{P}{k_B T}$$
When performing calculations with reduced units, $k_B$ takes a value of 1, thus meaning that the above equation became:
$$\rho^* = \frac{P^*}{T^*}$$
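As a small worked example, the ideal-gas reference line can be generated directly from this relation (a sketch in reduced units; the pressure and temperature values mirror those chosen above):

```python
def ideal_gas_density(pressure, temperature):
    """Ideal-gas number density in reduced units: rho* = P* / T* (kB = 1)."""
    return pressure / temperature

# Reference line at P* = 2.5 over the simulated temperature range:
for T in (1.5, 2.5, 3.5, 4.5, 5.5):
    print(T, round(ideal_gas_density(2.5, T), 3))
```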
The results obtained from the simulated and theoretical (Ideal Gas Law) data are shown in the diagram below.
Figure 21 demonstrates that at each pressure the simulated density was lower than the density predicted by the Ideal Gas Law. This is because the ideal gas law sets the potential energy of the system to zero[3]: ideal gas particles do not interact, so they can be packed arbitrarily closely. In the simulated systems, by contrast, the repulsive term of the Lennard-Jones potential keeps the atoms at a greater internuclear distance from one another. Ultimately this leads to the simulated densities being lower than those predicted by the Ideal Gas Law.
As the pressure was increased from 2.5 to 3.0 the discrepancy between the ideal gas law and the simulated data increased. It was decided to run the simulations at one additional pressure value (3.5) in order to further support this trend. The reason for observing this trend can be attributed to one of the key assumptions of the ideal gas law, which assumes that for an ideal gas the particles do not interact. This assumption is more accurate at low pressures compared to high pressures, since at lower pressures, the particles will be further apart and thus interact less. Therefore, it follows that at lower pressures the deviation from the ideal gas law and the simulated data will be smaller, hence result in a better fit for expressing density as a function of temperature.
Another observed trend was that the discrepancy between simulated and predicted data became smaller as the temperature increased. This is because at higher temperatures the simulated gas behaved more like an ideal one. At low temperatures the particles under study were moving on average more slowly than at high temperatures, meaning that any effects related to intermolecular forces had a strong effect at low temperatures. On the other hand, at high temperatures, the molecules were moving faster on average, thus had a higher kinetic energy, which ultimately caused the effects of intermolecular forces to become negligible, hence lead to a more ideal gas behaviour.
A suggestion for future studies could be to compare the simulated data and the data predicted from the ideal gas law with the data predicted by the van der Waals equation:
$$\left(P + \frac{an^2}{V^2}\right)(V - nb) = nRT$$
which is an adjusted form of the ideal gas law that takes into account the interactions between the particles in the system[4].
Calculating Heat Capacities Using Statistical Physics
TASK: As in the last section, you need to run simulations at ten phase points. In this section, we will be in density-temperature phase space, rather than pressure-temperature phase space. The two densities required are $\rho = 0.2$ and $\rho = 0.8$, and the temperature range is as specified. Plot $C_V/V$ as a function of temperature, where $V$ is the volume of the simulation cell, for both of your densities (on the same graph). Is the trend the one you would expect? Attach an example of one of your input scripts to your report.
The input code used for the simulation featuring a density of 0.2 and a temperature of 2.0 was:
### DEFINE SIMULATION BOX GEOMETRY ###
variable density equal 0.2
lattice sc ${density}
region box block 0 15 0 15 0 15
create_box 1 box
create_atoms 1 box

### DEFINE PHYSICAL PROPERTIES OF ATOMS ###
mass 1 1.0
pair_style lj/cut/opt 3.0
pair_coeff 1 1 1.0 1.0
neighbor 2.0 bin

### SPECIFY THE REQUIRED THERMODYNAMIC STATE ###
variable T equal 2.0
variable timestep equal 0.0025

### ASSIGN ATOMIC VELOCITIES ###
velocity all create ${T} 12345 dist gaussian rot yes mom yes

### SPECIFY ENSEMBLE ###
timestep ${timestep}
fix nve all nve

### THERMODYNAMIC OUTPUT CONTROL ###
thermo_style custom time etotal temp press
thermo 10

### RECORD TRAJECTORY ###
dump traj all custom 1000 output-1 id x y z

### RUN SIMULATION TO MELT CRYSTAL ###
run 10000
unfix nve
reset_timestep 0

### BRING SYSTEM TO REQUIRED STATE ###
variable tdamp equal ${timestep}*100
variable pdamp equal ${timestep}*1000
fix nvt all nvt temp ${T} ${T} ${tdamp}
run 10000
reset_timestep 0

### MEASURE SYSTEM STATE ###
thermo_style custom step etotal temp atoms
variable temp equal temp
variable etotal equal etotal
variable etotal2 equal etotal*etotal
fix aves all ave/time 100 1000 100000 v_temp v_etotal v_etotal2
unfix nvt
fix nve all nve
run 100000

variable avetemp equal f_aves[1]
variable avetemp2 equal f_aves[1]*f_aves[1]
variable e2bracket equal f_aves[3]
variable ebracket2 equal f_aves[2]*f_aves[2]
variable cv equal atoms*atoms*(${e2bracket}-${ebracket2})/${avetemp2}
variable volume equal vol
variable cvv equal ${cv}/${volume}

print "Averages"
print "--------"
print "Density: ${density}"
print "AveTemperature: ${avetemp}"
print "HeatCapacity: ${cv}"
print "Volume: ${volume}"
print "HeatCapacityOverVolume: ${cvv}"
Yes, the trend is as expected, with the isochoric heat capacity increasing as the density is increased. From the following line in every output file:
Loop time of 34.0948 on 8 procs for 100000 steps with 3375 atoms
it followed that the number of atoms in each simulation was 3375. Whilst the number of atoms in each simulation was constant, the volume they occupied differed for the two densities: for a density of 0.2 the volume was 16875, and for a density of 0.8 it was 4218.75. The density 0.8 simulation thus contained the same number of atoms within a smaller volume of space. Since each atom contributes a finite amount towards the heat capacity, more atoms within a given volume leads to a higher heat capacity per unit volume, thus justifying the results obtained.
The other trend noted in figure 22 was that the isochoric heat capacity over volume decreased as the temperature increased. A possible explanation is that as the temperature increases, the energy levels accessible to the system converge (i.e. their energy spacing decreases), so less additional energy is required to populate the higher levels; the energy needed to raise the temperature by a given amount therefore decreases, which would explain why $C_V/V$ decreases as the temperature increases.
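The fluctuation formula underlying the script's cv variable can be sketched in Python. This is a synthetic demonstration of $C_V = \mathrm{Var}(E)/(k_B T^2)$ with made-up Gaussian energies, not data from the report's simulations:

```python
import random

def heat_capacity(energies, T, kB=1.0):
    """Fluctuation formula: C_V = Var(E) / (kB * T^2), with E the total
    energy sampled in the canonical (NVT) ensemble."""
    n = len(energies)
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / n
    return var / (kB * T * T)

# Synthetic demonstration with Gaussian-distributed energies
# (illustrative numbers only):
random.seed(0)
T, sigma_E = 2.0, 0.5
samples = [random.gauss(-3.0, sigma_E) for _ in range(100000)]
cv_est = heat_capacity(samples, T)
print(cv_est)  # ~ sigma_E**2 / T**2 = 0.0625
```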
Structural Properties and the Radial Distribution Function
TASK: perform simulations of the Lennard-Jones system in the three phases. When each is complete, download the trajectory and calculate and . Plot the RDFs for the three systems on the same axes, and attach a copy to your report. Discuss qualitatively the differences between the three RDFs, and what this tells you about the structure of the system in each phase. In the solid case, illustrate which lattice sites the first three peaks correspond to. What is the lattice spacing? What is the coordination number for each of the first three peaks?
The density and temperature selected for each system were:
Vapour: Density = 0.05 ; Temperature = 1.2
Liquid: Density = 0.8; Temperature = 1.2
Solid: Density = 1.2; Temperature = 1.2
The results obtained are illustrated in the following diagrams.
In a coordinate system that has its origin on an arbitrarily chosen atom, the radial distribution function (RDF), g(r), is proportional to the probability of finding another particle at the distance r. The RDF is thus a useful tool for describing the structure and order of the system under study.
In figures 24, 25 and 26 the RDF takes a value of zero for interatomic distances between 0 and 0.875. This is because at these separations the repulsive forces between the atoms are so strong that the atoms cannot approach each other this closely. Despite this similarity, the RDFs of the solid, liquid and vapour systems otherwise differ very strongly.
The RDF of the solid system shows the greatest number of peaks. As it is possible to observe well defined peaks at both short and long internuclear distances, it follows that the solid system possesses a short range and a long range order. Moreover, the solid RDF also displays the thinnest peaks compared to the RDF of the liquid and the vapour. The reason why we are observing the thinnest peaks in the solid is because in the solid the atoms move the least and any atomic motion corresponds to mere oscillations around lattice sites. If the solid system were to be cooled to 0 K, instead of observing very thin peaks, infinitely slim peaks, i.e. delta functions would be observed, because atomic motion would be minimised.
In comparison to the solid's RDF, the liquid's RDF shows significantly fewer and broader peaks. Like the solid, the liquid displays well defined peaks at short internuclear distances, illustrating a certain degree of regularity and order. However, due to the random Brownian motion of the atoms, the order of the system decreases as the internuclear distance increases, until no order can be detected at longer internuclear distances. The liquid therefore displays short range order but no long range order. For this specific simulation the liquid RDF shows three peaks, illustrating that short range order exists for roughly the first three coordination shells but ceases to exist for any further neighbours.
The gaseous RDF shows the fewest peaks, a single broad one. Displaying only one peak demonstrates that the vapour system has the lowest degree of order, with only very weak short range order and no long range order.
To determine which lattice sites in the FCC lattice under study correspond to the first three peaks in the RDF, it is necessary to analyse the structure of a single unit cell, which is shown in figure 28.
The shortest three interatomic distances within this unit cell have been labelled with the letters M, N and O. The shortest of these distances, labelled M, corresponds to the first peak in the RDF. The second shortest distance, labelled N, belongs to the second peak in the RDF. Lastly, the third shortest interatomic distance in the lattice, labelled O, is associated with the third peak in the RDF.
The actual length of the distances M, N and O was determined from the RDF against distance diagram shown below.
Figure 29 shows that the first peak occurs at a distance of 1.025 and the second and third peak occur at distances of 1.45 and 1.825 respectively. Since the second peak corresponds to the distance N, hence the length of the unit cell, it follows that the lattice spacing in the crystal system is equal to 1.45.
The coordination number for each of the first three peaks was obtained by looking at the running integral against distance diagram for the solid system, illustrated in figure 30.
As shown by figure 30, the coordination numbers obtained from the running integral for the first, second and third peaks were 12, 6 and 24 respectively. To confirm the accuracy of this assignment, it was compared with the structure of the crystal.
From figure 31 it can be seen that the reference atom is surrounded by 12 green atoms at the distance M. Moreover, the reference atom is also surrounded by 6 blue atoms and 24 yellow atoms at internuclear separations of N and O respectively. This assignment of distances and atoms thus agrees with the calculated RDF against distance diagram and the running integral against distance diagram.
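The shell distances and coordination numbers can be checked against the ideal fcc geometry. This sketch assumes the lattice spacing a = 1.45 read off the second RDF peak; for an fcc lattice the first three shells lie at $a/\sqrt{2}$, $a$ and $a\sqrt{3/2}$ with 12, 6 and 24 neighbours:

```python
import math

# Neighbour-shell distances of an fcc lattice, using the M/N/O labelling
# of the report (a is the lattice spacing read off the RDF).
a = 1.45

shells = [
    ("M", a / math.sqrt(2.0), 12),   # nearest neighbours (face centres)
    ("N", a, 6),                     # second shell (cube edges)
    ("O", a * math.sqrt(1.5), 24),   # third shell
]
for label, dist, coordination in shells:
    print(label, round(dist, 3), coordination)
```

The computed M ≈ 1.025 matches the first RDF peak, and the ideal third shell at ≈1.776 is close to the measured 1.825.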
Dynamical Properties and the Diffusion Coefficient
TASK: In the D subfolder, there is a file liq.in that will run a simulation at specified density and temperature to calculate the mean squared displacement and velocity autocorrelation function of your system. Run one of these simulations for a vapour, liquid, and solid. You have also been given some simulated data from much larger systems (approximately one million atoms). You will need these files later.
TASK: make a plot for each of your simulations (solid, liquid, and gas), showing the mean squared displacement (the "total" MSD) as a function of timestep. Are these as you would expect? Estimate in each case. Be careful with the units! Repeat this procedure for the MSD data that you were given from the one million atom simulations.
The mean squared displacement against timestep diagram obtained for the vapour system is illustrated in figure 32.
From the diagram in figure 32 it can be seen that the linear trendline only had an R² value of 0.9807. This was because at low timesteps the data points obtained for the vapour system resembled a quadratic function rather than a linear one. The reason for this parabolic behaviour at low timestep values is that during the initial stages of the simulation the atoms are very distant from each other and hence rarely collide. The atoms thus move effectively at constant velocity during the initial stages, and the distance travelled is therefore proportional to time. Consequently, since the mean squared displacement is then proportional to the square of the time, a quadratic relationship is observed at the beginning of the simulation. Only after this initial ballistic regime does the atoms' motion truly resemble Brownian motion, which leads to the expected (and already predicted by Einstein[5]) linear relationship between the mean squared displacement and timestep.
In order to reduce the effect of this quadratic behaviour on the linear trendline, it was decided to fit the simulated data over different ranges of timestep values. The selected ranges were 500-5000, 1000-5000 and 1500-5000. The results obtained are presented in figures 33, 34 and 35.
Figures 33, 34 and 35 illustrate that as the fitted timestep range was progressively narrowed from 500-5000, to 1000-5000 and 1500-5000, the R² value of the linear trendline progressively increased. It was thus decided to use the slope of the linear trendline from the diagram with the 1500-5000 timestep range to calculate the diffusion coefficient of the vapour system. Since the trendline was obtained from a diagram of mean squared displacement against timestep, it was necessary to first convert the timestep units into time units in order to obtain a gradient with the correct units; as the timestep used was 0.002, the gradient obtained for the vapour system (in timestep units) was divided by 0.002 to give the gradient in time units. To then calculate the diffusion coefficient the following equation was employed:

\[ D = \frac{1}{6}\,\frac{\mathrm{d}\langle r^{2}(t)\rangle}{\mathrm{d}t} \]
The diffusion coefficient obtained for the vapour system was .
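The fitting procedure described above can be sketched in Python. This is an illustrative sketch only: the function name, the synthetic MSD data and the value D = 0.5 are assumptions for demonstration, not the report's simulation output; only the timestep size (0.002) and the 1500-5000 fitting window come from the text.

```python
import numpy as np

def diffusion_from_msd(timesteps, msd, dt=0.002, fit_from=1500):
    """Estimate D from the slope of the total MSD vs time: D = slope / 6 in 3D.

    dt is the time per timestep (0.002 here); fit_from discards the early
    ballistic regime, mirroring the 1500-5000 window chosen above.
    """
    steps = np.asarray(timesteps)
    t = steps * dt                          # convert timesteps to time units
    mask = steps >= fit_from                # fit only the diffusive regime
    slope, _ = np.polyfit(t[mask], np.asarray(msd)[mask], 1)
    return slope / 6.0

# Synthetic check: a purely diffusive trajectory with D = 0.5, so MSD = 6 D t
steps = np.arange(0, 5001)
msd = 6 * 0.5 * (steps * 0.002)
D_est = diffusion_from_msd(steps, msd)
```

Restricting the fit mask to late timesteps is exactly the trendline-range optimisation used for figures 33-35.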
The mean squared displacement against timestep diagram for the liquid system is shown in figure 36.
From the diagram it could be determined that the trendline obtained for the liquid system was linear, which was confirmed by the very high R² value (0.9999) of the trendline. The linear relationship between mean squared displacement and timestep is as expected, due to the Brownian motion of the atoms. In contrast to the vapour system, the liquid system did not show any initial quadratic regime, since the atoms in the liquid system were in close contact from the beginning of the simulation.
Similarly to the vapour system, the slope of the trendline had to be converted from timestep into time units in order to calculate the diffusion coefficient.
Lastly, the mean squared displacement against timestep diagram obtained for the solid system can be found below.
The diagram obtained for the solid system differed significantly from those of the vapour and liquid systems. Whilst the vapour and liquid systems both featured a continually increasing mean squared displacement over timestep, the mean squared displacement in the solid system rose to a value of approximately 0.02 and then remained saturated at this value for the rest of the simulation. This is because in a solid the atoms are tightly bound to their lattice positions: once they have explored all the space accessible to them, they cannot be displaced any further, giving rise to this plateau.
The slope of the fitted linear trendline was , hence giving a diffusion coefficient of .
The simulated results for the one million atom systems follow below. As can be seen from each of the graphs, increasing the number of atoms in the simulations did not cause any significant changes in the obtained trendlines and diffusion coefficients.
Similarly to the simulations with fewer atoms, the trendline of the vapour system was optimised in order to obtain a more accurate gradient, hence also a more accurate diffusion coefficient. The diffusion coefficient for the one million atom vapour system was found to be .
The diffusion coefficient for the one million atom liquid system was .
The diffusion coefficient for the one million atom solid system was .
| System | Diffusion coefficient for the system with reduced number of atoms | Diffusion coefficient for the 1 000 000 atom system |
|---|---|---|
| Vapour | 2.909 | 2.895 |
| Liquid | 8.414×10⁻² | 8.657×10⁻² |
| Solid | 5.825×10⁻⁷ | 4.392×10⁻⁶ |
From the above data it follows that the diffusion coefficients are as expected for both sets of simulated data: in each, the vapour system features the largest diffusion coefficient and the solid system the smallest. This makes sense, since the particles in the vapour are expected to move the most and should therefore diffuse fastest. Moreover, comparison between the two sets of simulations (the 1 000 000 atom systems and the smaller ones) demonstrates that the results obtained for each system are in good agreement with each other, showing that increasing the sample size did not have a significant impact on the simulation results.
TASK: In the theoretical section at the beginning, the equation for the evolution of the position of a 1D harmonic oscillator as a function of time was given. Using this, evaluate the normalised velocity autocorrelation function for a 1D harmonic oscillator (it is analytic!):
The position of a classical harmonic oscillator can be modelled by the equation:

\[ x(t) = A\cos(\omega t + \phi) \]

Differentiating the above equation with respect to time using the chain rule gives the velocity:

\[ v(t) = \frac{\mathrm{d}x}{\mathrm{d}t} = -A\omega\sin(\omega t + \phi) \]

Similarly, applying the same method at time \( t + \tau \) gives:

\[ v(t+\tau) = -A\omega\sin(\omega t + \omega\tau + \phi) \]

Inserting the expressions for \( v(t) \) and \( v(t+\tau) \) into the equation for the normalised velocity autocorrelation function:

\[ C(\tau) = \frac{\int_{-\infty}^{\infty} v(t)\,v(t+\tau)\,\mathrm{d}t}{\int_{-\infty}^{\infty} v(t)^{2}\,\mathrm{d}t} \]

gives:

\[ C(\tau) = \frac{\int_{-\infty}^{\infty} A^{2}\omega^{2}\sin(\omega t + \phi)\sin(\omega t + \omega\tau + \phi)\,\mathrm{d}t}{\int_{-\infty}^{\infty} A^{2}\omega^{2}\sin^{2}(\omega t + \phi)\,\mathrm{d}t} \]

Simplifying by cancelling the constant \( A^{2}\omega^{2} \):

\[ C(\tau) = \frac{\int_{-\infty}^{\infty} \sin(\omega t + \phi)\sin(\omega t + \omega\tau + \phi)\,\mathrm{d}t}{\int_{-\infty}^{\infty} \sin^{2}(\omega t + \phi)\,\mathrm{d}t} \]

From the trigonometric identities it follows that:

\[ \sin(a + b) = \sin a\,\cos b + \cos a\,\sin b \]

so:

\[ \sin(\omega t + \omega\tau + \phi) = \sin(\omega t + \phi)\cos(\omega\tau) + \cos(\omega t + \phi)\sin(\omega\tau) \]

Applying this identity to the numerator, multiplying out each term, separating the single integral into two integrals and taking the constants \( \cos(\omega\tau) \) and \( \sin(\omega\tau) \) out of the integrals leads to:

\[ C(\tau) = \cos(\omega\tau)\,\frac{\int_{-\infty}^{\infty} \sin^{2}(\omega t + \phi)\,\mathrm{d}t}{\int_{-\infty}^{\infty} \sin^{2}(\omega t + \phi)\,\mathrm{d}t} + \sin(\omega\tau)\,\frac{\int_{-\infty}^{\infty} \sin(\omega t + \phi)\cos(\omega t + \phi)\,\mathrm{d}t}{\int_{-\infty}^{\infty} \sin^{2}(\omega t + \phi)\,\mathrm{d}t} \]

Whilst \( \sin \) is an antisymmetric, hence odd, function, \( \cos \) is a symmetric, hence even, function. Multiplying an even function by an odd function gives another odd function, so the integral of \( \sin(\omega t + \phi)\cos(\omega t + \phi) \) over symmetric limits is zero. Consequently, the second term vanishes regardless of the value of \( \sin(\omega\tau) \), whilst the first ratio of integrals is equal to one. The overall equation therefore simplifies to:

\[ C(\tau) = \cos(\omega\tau) \]
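As a sanity check, the analytic result C(τ) = cos(ωτ) can be verified numerically. The parameter choices below (A = 1, ω = 1, φ = 0, a 1000-time-unit trajectory) are illustrative assumptions, not values taken from the report:

```python
import numpy as np

# Assumed illustrative parameters: omega = 1, A = 1, phi = 0
omega, dt = 1.0, 0.01
t = np.arange(0.0, 1000.0, dt)          # long trajectory to suppress edge effects
v = -omega * np.sin(omega * t)          # velocity of x(t) = cos(omega t)

def vacf(v, lag):
    """Normalised velocity autocorrelation at an integer sample lag."""
    n = len(v) - lag
    num = np.mean(v[:n] * v[lag:lag + n])   # <v(t) v(t+tau)> averaged over t
    den = np.mean(v * v)                    # <v(t)^2>
    return num / den
```

Evaluating `vacf(v, lag)` for a range of lags reproduces cos(ωτ) to within the small error introduced by the finite trajectory length.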
Be sure to show your working in your writeup. On the same graph, with x range 0 to 500, plot with and the VACFs from your liquid and solid simulations. What do the minima in the VACFs for the liquid and solid system represent? Discuss the origin of the differences between the liquid and solid VACFs. The harmonic oscillator VACF is very different to the Lennard Jones solid and liquid. Why is this? Attach a copy of your plot to your writeup.
Although the task only asks for the liquid, solid and harmonic oscillator VACFs, it was decided to also add the VACF of a vapour system in order to complete the picture for the reader.
Diagram 42 illustrates that the maxima of the solid and liquid VACFs occur at the beginning of the simulation, when the lag is equal to zero. This is because at \( \tau = 0 \), \( v(t+\tau) \) equals \( v(t) \), causing the VACF to assume its maximum value. In contrast to the maxima, the minima of the solid and liquid VACFs occur when atoms collide with each other. During such an event, the velocity vectors of the atoms become anti-parallel to one another, causing the difference between \( v(t) \) and \( v(t+\tau) \) to be maximised and hence leading to a minimum in the VACF.
Additionally, as can be seen from figure 42, the VACF trend obtained for the vapour system is significantly different from those of the liquid and solid systems. The reason for this behaviour is that in the vapour system the atoms are very distant from each other and therefore interact only marginally. Consequently, these weak interactions cause the velocity to decorrelate only slowly with time, as clearly demonstrated in the graph.
In contrast to the vapour system, the liquid and solid systems are characterised by strong interatomic forces, since the atoms are closely packed together. In these systems, the atoms occupy positions where the attractive and repulsive interatomic forces balance each other out, minimising their energy. In the solid system, these locations are energetically significantly more stable than in the liquid, constraining the atoms even more tightly to their positions. Ultimately, this causes the atomic motion in the solid to be oscillatory: the atoms vibrate back and forth, reversing their velocity at the end of each oscillation. This explains why the VACF of the solid system resembles an oscillating function. Nevertheless, it must be mentioned that as the timesteps progress, the amplitude of these oscillations in the VACF becomes smaller and ultimately tends towards zero, because the oscillations of the atoms are not perfect and are affected by perturbations.
Although the atoms in the liquid system are also subject to strong interatomic forces, they are not bound as closely to their positions, allowing diffusion to suppress any oscillation. The VACF of the liquid system thus only displays a single minimum before decaying to zero. Additionally, the weaker interatomic forces in the liquid also explain the less negative minimum observed in its VACF compared to the solid's: since the atoms in the liquid experience weaker forces, the collisions between them must have been weaker than those in the solid, resulting in a less negative VACF minimum.
The harmonic oscillator VACF strongly differs from the VACF of the Lennard-Jones liquid and solid, since the potential function of the harmonic oscillator is symmetrical whilst that of the Lennard-Jones systems is not. Moreover, whilst the atoms in the solid system are subject to perturbative forces that damp the VACF, the harmonic oscillator is not, resulting in a continuously oscillating function.
TASK: Use the trapezium rule to approximate the integral under the velocity autocorrelation function for the solid, liquid, and gas, and use these values to estimate D in each case. You should make a plot of the running integral in each case. Are they as you expect? Repeat this procedure for the VACF data that you were given from the one million atom simulations. What do you think is the largest source of error in your estimates of D from the VACF?
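The trapezium-rule estimate described in the task can be sketched as follows. The decaying VACF below is a synthetic model (an exponential with an assumed decay time of 0.5 and \( \langle v^2 \rangle = 3 \)), not the simulation output; only the Green-Kubo relation D = (1/3)∫⟨v(0)·v(t)⟩dt and the timestep of 0.002 come from the surrounding text.

```python
import numpy as np

dt = 0.002
t = np.arange(0.0, 20.0, dt)
vacf = 3.0 * np.exp(-t / 0.5)          # assumed model <v(0).v(t)>, decay time 0.5

# Running trapezium-rule integral of the VACF, then D = integral / 3 (3D Green-Kubo)
running = np.concatenate(([0.0], np.cumsum((vacf[1:] + vacf[:-1]) * dt / 2.0)))
D_running = running / 3.0              # this is the "running integral" to plot
D = D_running[-1]                      # plateau value = estimated diffusion coefficient
```

For this model the exact integral is 3 × 0.5 = 1.5, so the running estimate plateaus at D = 0.5; with real simulation data the plateau is read off once the VACF has decayed to zero.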
Diffusion coefficient:
Diffusion coefficient:
Diffusion coefficient:
Diffusion coefficient:
Diffusion coefficient:
Diffusion coefficient:
| System | Diffusion coefficient for the system with reduced number of atoms | Diffusion coefficient for the 1 000 000 atom system |
|---|---|---|
| Vapour | 3.295 | 3.269 |
| Liquid | 9.787×10⁻² | 9.009×10⁻² |
| Solid | 1.833×10⁻⁴ | 4.325×10⁻⁵ |
For both sets of simulations the vapour system had the largest diffusion coefficient and the solid system the smallest. This trend is as expected, since it matches the one obtained when employing the mean squared displacement to calculate the diffusion coefficient.
Comparison between the two methods used (MSD and VACF) to calculate the diffusion coefficient shows that the results obtained are relatively similar for each system. Nevertheless, there are still some significant discrepancies between the results obtained from the two methods. A potential explanation for the discrepancy could be that the sample sizes used (approx. 10000 and 1000000 atoms) limit the calculations' accuracy in both methods; in systems containing more atoms, the results obtained from the two methods could potentially converge, leading to more similar values. Another source of error was introduced by the integration method employed, i.e. the trapezium rule. Whilst the trapezium rule integrates periodic functions very accurately, other methods such as Gaussian quadrature or Clenshaw-Curtis quadrature have been shown to deliver more accurate results[6] when dealing with non-periodic functions, such as the ones occurring in this study.
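The accuracy argument above can be illustrated with a small numerical comparison. This sketch uses Simpson's rule as a stand-in higher-order method (not one of the quadratures cited) on an assumed non-periodic integrand, f(x) = e⁻ˣ on [0, 2]:

```python
import numpy as np

# Compare trapezium and Simpson's rules on a non-periodic integrand,
# f(x) = exp(-x) on [0, 2]; the exact integral is 1 - exp(-2).
x = np.linspace(0.0, 2.0, 21)          # 20 equal intervals (even count, as Simpson requires)
f = np.exp(-x)
h = x[1] - x[0]
exact = 1.0 - np.exp(-2.0)

trap = h * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)
simp = h / 3 * (f[0] + 4 * f[1:-1:2].sum() + 2 * f[2:-1:2].sum() + f[-1])

err_trap = abs(trap - exact)           # O(h^2) error for the trapezium rule
err_simp = abs(simp - exact)           # O(h^4) error for Simpson's rule
```

On this smooth non-periodic integrand the trapezium error is orders of magnitude larger than Simpson's, consistent with the point made above about quadrature choice.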
References
- Haslach, J. W. Jr. Maximum Dissipation Non-Equilibrium Thermodynamics and its Geometric Structure, Springer, New York (NY), 1st edn., 2011, ch. 5, pp. 109-130
- Garrison, T. Enhanced Essentials of Oceanography, Thomson Learning, Boston (MA), 4th edn., 2006, ch. 6, pp. 106-130
- Struchtrup, H. Thermodynamics and Energy Conversion, Springer, New York (NY), ch. 18, pp. 414-423
- Reger, D.; Goode, S.; Ball, D. Chemistry Principles and Practice, Brooks Cole, Boston (MA), 3rd edn., 2010, ch. 6, pp. 208-247
- Einstein, A. Ann. d. Phys., 1906, 17, 371-381
- Greenbaum, A.; Chartier, T. P. Numerical Methods: Design, Analysis, and Computer Implementation of Algorithms, Princeton University Press, Princeton (NJ), 1st edn., 2012, ch. 10, pp. 227-250