Talk:Mod:JOB2602
Introduction to molecular dynamics simulation
Molecular dynamics is a technique for modelling the physical behaviour of atoms in a computational simulation. It gives access to trajectories, energies and a range of other thermodynamic properties, which are explored during this experiment. The software used to evaluate these properties is LAMMPS, with Excel used to analyse the results.
Modelling a classic harmonic oscillator
The first major part of the experiment involved analysis of a classical harmonic oscillator using the velocity Verlet algorithm. The velocity Verlet algorithm determines the position and velocity of an atom at any given time, provided the initial conditions are known.
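For reference, the standard form of the velocity Verlet update (with a(t) denoting the acceleration obtained from the force at time t; the exact notation of the original handout may differ) is:

x(t+\delta t) = x(t) + v(t)\,\delta t + \tfrac{1}{2}\,a(t)\,\delta t^{2}

v(t+\delta t) = v(t) + \tfrac{\delta t}{2}\,\bigl[a(t) + a(t+\delta t)\bigr]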
The energies and absolute errors of a simple harmonic oscillator were plotted for different timestep values. The analytical data represent the classical solution for the position of a simple harmonic oscillator: a clear and expected oscillation between two values over time, with the maxima and minima indicating these values, 1 and -1 respectively. This is the expected trend, as a harmonic oscillator is essentially a spring. The energy values are the total energies computed from the positions and velocities produced by the velocity Verlet algorithm.

Note that the total energy of the system does fluctuate, but this is to be expected and the fluctuation is extremely small.
NJ: Why is it expected?



NJ: Be careful with units, these should be reduced.
Plotting the error maxima against time results in a linear graph; the fitted equation and its R² value show a strong positive correlation and a precise linear trend in the maxima of the data.
NJ: Why do you think the error accumulates like this?
The behaviour of the total energy as a function of the timestep was explored in this part of the experiment, to determine how far the timestep could be increased before it had a detrimental effect on the total energy of the system and caused inaccuracies in the simulation. Increments of 0.05 were used to find the largest timestep that caused fluctuations of no more than 1% of the total energy; testing 0.15 and 0.2 showed that a timestep of 0.2 reached the 1% limit exactly. The variations can be seen below in fig. 5 and fig. 1.
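A minimal sketch of how the 1% criterion can be checked, assuming the total-energy column of the simulation output has already been written to a one-column text file (the file name is an assumption):

import numpy as np

# Check whether the total energy stays within 1% of its mean over the run
energies = np.loadtxt("energy_dt0.2.dat")   # hypothetical one-column file of total energies
mean_E = energies.mean()
max_deviation = np.max(np.abs(energies - mean_E))
print(f"max fluctuation = {100 * max_deviation / abs(mean_E):.2f}% of the mean energy")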

Knowing how much the total energy changes as a simulation runs is incredibly important, and a very useful diagnostic tool for determining whether the simulation is running as intended: if there are significant fluctuations, or if the total energy drifts drastically rather than oscillating around a particular value, then the simulation violates one of the fundamental laws of physics, the conservation of energy, which states that the total energy of an isolated system remains constant.
Lennard-Jones potentials
This experiment utilised Lennard-Jones interactions[1] between the atoms in the system: a set of attractive and repulsive forces that depend on the separation distance. When displayed as a potential energy profile these interactions produce a potential well, and the depth of this well dictates the strength of the interaction between the atoms, since the well depth is the energy required to separate two atoms from their equilibrium separation.
The pair potential, with well depth ε, is given by

\phi(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]

The potential energy of the system is given by summing this pair potential over all pairs of atoms, and the force can be calculated from the relation F = -dφ/dr. Note that r0, the separation at which the potential energy is equal to zero, is r = σ; this can be determined either by equating the potential to zero and rearranging, or much more simply by inspection, since the bracketed term vanishes when r = σ. Thus σ can be defined as the separation at which the pair potential energy between the atoms is zero.
At r0 the force is given by substituting r = σ into the expression for the force:

F(r) = -\frac{\mathrm{d}\phi}{\mathrm{d}r} = 4\varepsilon\left[\frac{12\,\sigma^{12}}{r^{13}} - \frac{6\,\sigma^{6}}{r^{7}}\right]

Substituting in r0 = σ:

F(\sigma) = 4\varepsilon\left[\frac{12}{\sigma} - \frac{6}{\sigma}\right] = \frac{24\,\varepsilon}{\sigma}
At equilibrium the net force is zero, and as such the equilibrium separation req can be found by equating the force to zero.
This value can then be used to ascertain the potential well depth at equilibrium by substituting req into the equation for the potential.
NJ: Something has gone awry here, it's
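For reference, the standard textbook results for these two quantities are:

F(r_{eq}) = 0 \;\Rightarrow\; r_{eq} = 2^{1/6}\,\sigma \approx 1.12\,\sigma

\phi(r_{eq}) = 4\varepsilon\left[\tfrac{1}{4} - \tfrac{1}{2}\right] = -\varepsilon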
Integrating the pair potential from a cutoff distance to infinity, in this case from 2σ (taking σ = ε = 1):

\int_{2\sigma}^{\infty} 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right] \mathrm{d}r = 4\left[-\frac{r^{-11}}{11} + \frac{r^{-5}}{5}\right]_{2\sigma}^{\infty}

The terms with infinity in the denominator are equal to zero, and thus the integral evaluates to 4\left(\frac{2^{-11}}{11} - \frac{2^{-5}}{5}\right) \approx -0.0248.
The same procedure is carried out with lower limits of 2.5σ and 3σ; the values of the integrals are -0.00817675 and -0.0032901 respectively. This is expected: as the lower limit increases, the strong inverse-power dependence of the potential causes a very large decrease in the strength of the intermolecular interactions, to the point that they become negligible.
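As a quick numerical check of these tail integrals (taking σ = ε = 1; the use of SciPy here is purely illustrative):

from scipy.integrate import quad

def lj(r, sigma=1.0, epsilon=1.0):
    # Lennard-Jones pair potential
    return 4 * epsilon * ((sigma / r)**12 - (sigma / r)**6)

for lower in (2.0, 2.5, 3.0):
    value, _ = quad(lj, lower, float("inf"))
    print(f"integral from {lower} sigma to infinity = {value:.6f}")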
Boundary conditions
The simulation boxes have periodic boundaries: in order to maintain a constant number density, any atom leaving the simulation box is 'replaced' by an image of itself entering from the opposite side. For example, if an atom at (0.5, 0.5, 0.5) moves along the vector (0.7, 0.6, 0.2) in a box running from (0, 0, 0) to (1, 1, 1), it would nominally end up at (1.2, 1.1, 0.7); applying the periodic boundary conditions wraps this back to (0.2, 0.1, 0.7), i.e. the atom re-enters and travels the remaining distance from the opposite side of the simulation box.
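A minimal sketch of this wrapping rule (the function name and box size are purely illustrative):

def wrap(position, box_length=1.0):
    # Apply periodic boundary conditions along each axis
    return tuple(coord % box_length for coord in position)

start = (0.5, 0.5, 0.5)
move = (0.7, 0.6, 0.2)
moved = tuple(s + m for s, m in zip(start, move))
print(wrap(moved))   # (0.2, 0.1, 0.7) up to floating-point rounding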
One important thing to consider about the simulation boxes is the number of atoms that is suitable for a simulation; this is illustrated by considering the number of water molecules in a given volume. For 1 mL of water (approximately 1 g):
Molar mass of water = 18.02 g mol⁻¹
Moles = 1 g / 18.02 g mol⁻¹ = 0.0555 mol
Number of molecules = 0.0555 mol × 6.022 × 10²³ mol⁻¹ ≈ 3.34 × 10²²
In contrast to this, the volume occupied by 10000 molecules is 10000 / (3.34 × 10²² mL⁻¹) ≈ 3 × 10⁻¹⁹ mL. From this it can be seen that an appropriate number of atoms has to be established: if it is too large the calculations would take too long, but if it is too small the simulation would not be representative of the system.
Reduced Units
The final fundamental subject to consider is the usage of reduced units: units that have been scaled by the characteristic quantities of the Lennard-Jones potential (σ, ε and the atomic mass) in order to bring the numbers involved to a convenient order of magnitude and make the calculations easier.
Some examples of reduced units include the reduced distance r* = r/σ, so given that σ = 0.34 nm for argon, the actual value of r is r*σ; the reduced temperature T* = k_BT/ε, so for a critical temperature of T* = 1.5, with ε/k_B = 120 K for argon, the actual value of the critical temperature is 1.5 × 120 K = 180 K; and finally, taking into consideration that the Boltzmann constant is equal to 1 in reduced units, the potential well depth is ε = 120 K × k_B ≈ 1.66 × 10⁻²¹ J, which corresponds to approximately 1.0 kJ mol⁻¹.
NJ: Carried forward from the previous section, should be about 1kJmol^-1. Be careful with case, it's kJ not Kj.
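A minimal sketch of these conversions, assuming the argon parameters quoted above (σ = 0.34 nm, ε/k_B = 120 K) and the reduced cutoff of 3.0 used later in the input script:

# Convert reduced (Lennard-Jones) units to real units, assuming argon parameters
K_B = 1.380649e-23    # J/K
N_A = 6.02214076e23   # 1/mol

sigma = 0.34e-9           # m   (assumed LJ sigma for argon)
epsilon_over_kb = 120.0   # K   (assumed LJ epsilon / k_B for argon)

r_star, T_star = 3.0, 1.5
r = r_star * sigma                                 # cutoff in metres
T = T_star * epsilon_over_kb                       # temperature in kelvin
well_depth = epsilon_over_kb * K_B * N_A / 1000    # kJ/mol

print(f"r = {r*1e9:.2f} nm, T = {T:.0f} K, epsilon = {well_depth:.2f} kJ/mol")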
Equilibration
The Simulation box
This part of the experiment focused on understanding the code behind the simulations and applying it to determine the optimum conditions for future simulations. Construction of the simulation box is an incredibly important part of molecular dynamics, as it acts as the basis for all computations, and modelling it incorrectly could result in a complete failure of a calculation. The model works by using a lattice to standardise the initial atom coordinates, and thus the initial interatomic interactions; randomly generated coordinates could place atoms unphysically close together, which would have an adverse effect because of the distance dependence of the interactions as well as the cutoff distances of certain commands.
The lattice number density (the number of lattice points per unit volume) and the type of lattice (in this case simple cubic and face-centred cubic) were varied to determine the effect on the output, specifically the lattice spacing and the number of atoms created.
An sc lattice with number density 0.8 provides a simulation box consisting of 1000 atoms and a lattice spacing of 1.0772.
In contrast to this, an fcc lattice with number density 1.2 provides a simulation box consisting of 4000 atoms and a lattice spacing of 1.4938.
NJ: How do you work this out?
The increase in the number of atoms can be explained by the increase in number density, and hence the greater number of lattice points for the given volume, together with the difference in structure between simple cubic and face-centred cubic: an sc unit cell contains one lattice point whereas an fcc unit cell contains four. The lattice spacing follows from the number density ρ and the number of lattice points per unit cell n as a = (n/ρ)^(1/3), giving (1/0.8)^(1/3) ≈ 1.0772 for sc and (4/1.2)^(1/3) ≈ 1.4938 for fcc; a 10 × 10 × 10 box of unit cells therefore contains 1000 and 4000 atoms respectively.
Establishing the properties of the atoms
In this case the core commands required are mass, pair_style and pair_coeff.
mass 1 1.0 defines the mass of a particular atom type, in this case type 1; the value assigned is the second term, 1.0.
pair_style lj/cut 3.0 sets how pair interactions are evaluated; in this case it computes the Lennard-Jones potential, excluding Coulombic forces, with a cutoff value of 3.0, i.e. the value of r beyond which the interaction is no longer evaluated. The justification for truncating the potential was illustrated by the integrals in the previous section.
pair_coeff * * 1.0 1.0 assigns, for each pair of atom types, the coefficients of the pair potential that operates between them, in this case ε = 1.0 and σ = 1.0 in reduced units. The asterisks mean all atom types from 1 to N, with N being the total number of atom types.
NJ: What are the coefficients in this case?
Given that the initial velocities and positions are known, the best algorithm to use to compute the trajectories of the atoms is the velocity Verlet algorithm mentioned previously.
This code is run with multiple different timesteps, and as such, for the sake of convenience, it contains the lines:

variable timestep equal 0.001
variable n_steps equal floor(100/${timestep})
timestep ${timestep}

which, for a timestep of 0.001, expand to:

variable n_steps equal floor(100/0.001)
timestep 0.001
These lines mean that the value of the timestep is substituted in wherever the text ${timestep} appears; thus, when altering the timestep for a calculation, a single input of the value applies to multiple lines throughout the code.
NJ: This section makes sure that you always simulate the same time (nsteps*dt), no matter what timestep is chosen.
Analysis
Plots of energy, temperature and pressure against time were produced for each timestep, but for the sake of brevity only the particularly relevant ones are shown.

NJ: Nice plot



As can be seen in fig. 6, fig. 7 and fig. 8, at a timestep of 0.001 the system does reach equilibrium, doing so at around time = 0.19. The system settles at an energy, temperature and pressure of approximately -3.18, 1.23 and 2.42 respectively. It should be noted that, due to the nature of the simulation, very minor fluctuations are to be expected.
Evaluation of the graphs suggests that 0.0025 would be the best compromise as a timestep. It runs up to a time of 100 with no energy fluctuations larger than the third decimal place, and as such it is large enough to give a useful depiction of the system but small enough that it does not have a detrimental effect on the accuracy of the results.
The timestep of 0.015 proved to be exceptionally bad: it did not maintain a constant energy, and hence the system did not reach equilibrium, because the timestep was much too large. This is depicted in fig. 9.
Specific conditions
Ten different sets of conditions were prepared using an NpT ensemble, i.e. an ensemble that establishes isobaric and isothermal conditions for the simulation. Five temperatures were chosen above the critical temperature of 1.5, in increments of 0.1 to establish a trend, and two pressures were chosen for each, these being 1.5 and 2.0. A line following the ideal gas law was also incorporated; this was calculated by substituting the density ρ = N/V into the ideal gas law pV = Nk_BT, giving ρ = p/(k_BT), or ρ = p/T in reduced units where k_B = 1. The simulations control the temperature and pressure by calculating an instantaneous temperature T via the equipartition theorem, with the total kinetic energy satisfying Σᵢ ½mvᵢ² = (3/2)Nk_BT; the velocities are then adjusted through a scaling factor γ which is applied to each velocity.


The value of γ can be determined by equating the scaled kinetic energy at the target temperature with the kinetic energy at the instantaneous temperature:

\sum_i \tfrac{1}{2} m v_i^2 = \tfrac{3}{2} N k_B T

\sum_i \tfrac{1}{2} m (\gamma v_i)^2 = \tfrac{3}{2} N k_B \mathfrak{T}

Dividing the second equation by the first, the sums over the velocity and mass terms cancel, giving

\gamma^2 = \mathfrak{T}/T \quad\Rightarrow\quad \gamma = \sqrt{\mathfrak{T}/T}
NJ: Okay, but this is just for one atom. Where did the sum go?
Thus, using this scaling factor, the target temperature can be maintained despite the fluctuations that occur in the simulation.
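A minimal sketch of this velocity-rescaling step (the array names and shapes are assumptions for illustration; reduced units are used, so k_B = 1):

import numpy as np

def rescale(velocities, mass, target_T):
    # Rescale velocities so that the instantaneous temperature matches the target
    N = len(velocities)
    kinetic = 0.5 * mass * np.sum(velocities**2)
    instantaneous_T = 2.0 * kinetic / (3.0 * N)   # from sum(1/2 m v^2) = 3/2 N k_B T
    gamma = np.sqrt(target_T / instantaneous_T)
    return gamma * velocities

velocities = np.random.normal(size=(1000, 3))     # toy starting velocities
print(rescale(velocities, mass=1.0, target_T=1.6).shape)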
Three key values are assigned to the piece of code that controls the averaging; the values 100, 1000 and 100000 are incredibly important in obtaining the thermodynamic information from the simulation. 100 is the number of timesteps between samples of the given input values, 1000 is the number of samples used for each average, and 100000 dictates how often, in timesteps, the averages are calculated.
Thus in this case, since a sample is taken every 100 timesteps and 1000 samples are used, each average is computed from 1000 measurements spanning 100 × 1000 = 100000 timesteps.
The first and most obvious trend is that the density of both systems, regardless of pressure, decreased with increasing temperature. This is the expected trend: the higher the temperature, the more kinetic energy the molecules have and the more they fluctuate about their positions, so they are further apart on average and the density is lower.
As can be seen, in both cases the ideal gas law predicts a density much higher than that of the simulation. One possible reason for this is that the ideal gas law does not account for intermolecular interactions, which can have a very large effect on the density of a substance; in particular, the repulsive interactions keep the atoms apart and thus lead to a much lower density.
NJ: How does the discrepancy between the ideal gas law and your simulation change with pressure?
Heat Capacity
The heat capacity of a system dictates how much heat is required to raise its temperature by one degree. For this experiment the volume was held constant, and thus

C_V = \left(\frac{\partial E}{\partial T}\right)_V = \frac{\langle E^2\rangle - \langle E\rangle^2}{k_B T^2}

Graphs of the heat capacity divided by the volume against the temperature of the system were plotted, as seen in fig. 12.
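A minimal sketch of evaluating this fluctuation formula, assuming the sampled total energies have been loaded into a NumPy array (the numbers below are placeholders purely to show the call):

import numpy as np

def heat_capacity_per_volume(total_energies, temperature, volume, k_b=1.0):
    # Fluctuation formula: C_V = (<E^2> - <E>^2) / (k_B T^2); reduced units, so k_B = 1
    variance = np.mean(total_energies**2) - np.mean(total_energies)**2
    return variance / (k_b * temperature**2) / volume

energies = np.array([-3.18, -3.17, -3.19, -3.18, -3.16])   # placeholder values
print(heat_capacity_per_volume(energies, temperature=2.0, volume=1000.0))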


NJ: Adjust the x scale so that your data fills the plot - you shouldn't use the Excel "smooth lines" feature in a plot like this, it implies that there is a continuous function running though the five points. That isn't consistent with the functional form you suggested.
NJ: You've divided by in your formula - you wanted
As expected, in both cases the heat capacity decreases as the temperature increases. This trend is nearly linear for the 0.2 density simulation but loses some of its linearity at the higher density value of 0.8. This is potentially a simulation error, but it could also be due to the functional form of the fluctuation relation, which could result in a more complicated trend. Note that heat capacity is an extensive property, meaning it depends on the amount of substance in the system; as such, the larger number density, and thus the greater number of atoms, gives a significantly larger heat capacity.
fig. 13 shows the input script used for this simulation.
Radial Distribution Functions and structural properties
A Lennard-Jones solid, liquid and gas were formed using conditions derived from a standard phase diagram for Lennard-Jones materials; the conditions used to create them are detailed in the table below.
| Phase | Density | Temperature |
| Solid | 1.2 | 1 |
| Liquid | 0.8 | 1.2 |
| Gas | 0.2 | 0.5 |
NJ: The vapour is a little dense and a little cold - I think you may be seeing vapour/liquid coexistence.
NJ: Have you confused the labels on the liquid and vapour plots? I think they should be the other way around.
The radial distribution function, g(r), was computed by loading the trajectory files produced by these scripts into VMD and running its radial distribution function utility, which outputs both g(r) and its running integral. The output files were then exported into Excel and analysed. The radial distribution function provides the probability of finding atoms at a given separation and allows structural properties to be determined.

fig. 14 displays the expected trends for the given phases. One thing to note is that at very small separations there is no probability of finding atoms; this is due to the strength of the repulsive forces, which makes it nearly impossible for atoms to sit at those separations. Gases and liquids tend to have more disordered structures and thus have broad peaks. Liquids tend to fill up a volume, and as such the probability of finding an atom at a given separation is fairly even, hence the low, broad peaks and the plateau. Gases, particularly ideal gases, show a near-total lack of order; in this case there is some behaviour that would not be expected, as ideally g(r) would plateau immediately after the initial peak, indicating a complete lack of longer-range order. Here, however, there are some additional maxima, suggesting at least a moderately ordered structure in the system; nevertheless it is not comparable to the other phases, and the function does indeed become near-featureless after a separation of r = 4.

Focusing on the solid case in particular, a face-centred cubic lattice was created, and the radial distribution function is indicative of a successful simulation, as it presents sharp peaks at r values that signify particular lattice spacings and lattice points. The fact that the peaks are sharp indicates a very ordered structure, in the sense that neighbours are found only at very specific separations and not in moderate amounts over broad ranges. As illustrated in fig. 15, the first peak in the solid RDF gives the separation of a corner atom and an atom at the centre of an adjoining face, equal to a/√2, where a is the lattice constant, i.e. the separation of two corner lattice points; a itself is given by the second peak, and the third peak corresponds to the separation of a corner lattice point and a face-centre atom on a perpendicular face, equal to a√(3/2). The respective values of these separations are 1.025, 1.475 and 1.825. These show some discrepancies from the theoretical values calculated from the aforementioned relations, which give 1.042 and 1.805 when a is taken from the second peak, but they are close enough to be taken as consistent, and the errors can be attributed to fluctuations in the simulation.
NJ: You could measure the positions and the three peaks and get three estimates for a. What is the lattice spacing?
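Using the peak positions quoted above, each peak gives an independent estimate of the lattice spacing a:

a \approx 1.025\sqrt{2} = 1.45, \qquad a \approx 1.475, \qquad a \approx \frac{1.825}{\sqrt{3/2}} = 1.49

averaging to roughly a ≈ 1.47, close to the spacing of 1.4938 expected for the fcc lattice at number density 1.2.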
It is also possible, through analysis of the running integral of the radial distribution function, to determine the coordination numbers; from fig. 16 one can determine that the first coordination number is 12, the second 15 and the third 32. The first value of 12 is very indicative of a face-centred cubic structure.
NJ: The coordination number at each peak is only the integral under that peak, i.e. the first coordination shell has 12 atoms, the second has 15-12, and the third has 32-15. The 15 should actually be 18, but it's hard to see what's happened without seeing a "zoomed in" integral plot.

Dynamical properties and the Diffusion coefficient
The final part of the experiment involved calculating the mean squared displacement (MSD) and the velocity autocorrelation function (VACF) for systems in different phases; the conditions used to create the phases were the same as for the radial distribution functions. For each phase, one simulation was created during the experiment and another set of data from a one-million-atom simulation was provided. The MSDs and VACFs could then be used to calculate the diffusion coefficients of the phases via two different relations.
For the MSD:

D = \frac{1}{6}\,\frac{\mathrm{d}\langle r^{2}(t)\rangle}{\mathrm{d}t}

and for the VACF:

D = \frac{1}{3}\int_{0}^{\infty} \langle v(0)\cdot v(t)\rangle\,\mathrm{d}t
The mean squared displacement essentially provides insight into the random motion of the atoms: the higher the value, the further the atoms have travelled around the system. The MSD plots for the solid are very similar in both cases; both show a very sharp initial increase in displacement before fluctuating slightly around 0.02 in the case of one million atoms and around 0.018 for the simulation created during the experiment. This shows strong agreement between the two and indicates that the number of atoms in the system likely has very little effect on the MSD. The very minor fluctuations of the solid MSD around these values are indicative of the ordered structures that solids tend to adopt: once in a favourable fixed position relative to the other atoms, an atom is unlikely to move far from that position, and most of its movement is vibrational.
For the liquid phase the trend is very linear in both cases. This makes sense when considering that the atoms in a liquid move continuously over time, so as the timesteps advance the displacement steadily 'explores' the volume of the simulation. This is due to the less ordered nature of the liquid relative to the solid.
The gaseous phase shows an exponential increase in both cases, which is to be expected considering that gases are the most entropically favourable state and thus have exceptionally random movement; as time goes by, the displacement value increases rapidly as the gas expands across the system.
NJ: It's actually a parabola. It should become linear at long times, though.
Evaluating the diffusion coefficient
The diffusion coefficient details how diffusive a substance is and is integral to Fick's law[2]. Using the formulae above allows this value to be calculated; for the sake of clarity, note that the value of dt in the formula is in time units, not numbers of timesteps, and as such, given a timestep of 0.002, the total simulated time was computed to be 10. Using the whole simulation gives the following values.
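A minimal sketch of extracting D from the MSD in this way, assuming a two-column file of timestep number and mean squared displacement (the file name and format are assumptions):

import numpy as np

# Estimate D from the long-time slope of the MSD: D = slope / 6
data = np.loadtxt("msd_liquid.dat")      # hypothetical two-column file: step number, <r^2>
time = data[:, 0] * 0.002                # convert timestep number to reduced time
msd = data[:, 1]

half = len(time) // 2                    # fit only the later, linear part of the curve
slope, intercept = np.polyfit(time[half:], msd[half:], 1)
print("D =", slope / 6)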






For the solids the values are
for the created simulation
for 1 million atoms
These are extremely similar, and thus increasing the number of atoms clearly had no effect on the diffusivity; the value is also very low, especially compared with the values for the liquid and gas seen in the next part.
For the liquids the values are
for 1 million atoms
Once again these values are incredibly close, and show that the liquid, as expected, is more diffusive than the solid.
For the gases the values are
for 1 million atoms
Note that in this case there is a huge disparity between the calculated diffusion coefficients. This could be down to two reasons. One is that the simulation did not work correctly; however, it was run with multiple different gas-like densities and temperatures and never reached a value as high as that of the one-million-atom data. The other is that, due to the excessively random behaviour of gas molecules, the displacement could be exacerbated in an almost cascading effect, with more and more atoms causing others to move more randomly. Given that using the same conditions for the RDF produced satisfactory results for the gas, either explanation seems plausible, and further testing would be required to determine which is correct; unfortunately, due to time constraints, this was not possible.
NJ: I think it's probably that the temperature is much lower.
The expected general trend in the diffusion coefficients is observed, though, with the gas value greater than those of the liquid and solid.
VACF
The velocity autocorrelation function describes how the velocity at a given time is correlated with the velocity at another time. In fig. 25 the solid displays characteristic peaks: the very deep minimum suggests strong forces acting on the atoms, resulting in a large initial change in velocity. However, as mentioned previously, solids occupy an ordered arrangement with relatively fixed positions, and as such, rather than the velocity correlation increasing or decreasing drastically or simply plateauing, it begins to oscillate, with the maxima and minima gradually getting smaller over the course of the simulation; this is indicative of the atoms vibrating in place.
The liquid also exhibits a large minimum, indicative of strong forces between the atoms/molecules, but the function then flattens out rather than oscillating; this is because the atoms in a liquid do not take up fixed positions but instead flow, so the forces have a much weaker effect as the system continues to evolve.
NJ: Good. The negative regions of the VACF correspond to collision with other atoms. The single minimum in the liquid VACF is due to collision with the "solvent cage".
The harmonic oscillator can also be analysed in this way, by inputting its position

x(t) = A\cos(\omega t + \phi)

into the formula for the normalised VACF of a 1D oscillator,

C(\tau) = \frac{\int_{-\infty}^{\infty} v(t)\,v(t+\tau)\,\mathrm{d}t}{\int_{-\infty}^{\infty} v(t)^{2}\,\mathrm{d}t}

Differentiating x(t) gives

v(t) = -A\omega\sin(\omega t + \phi)

and

v(t+\tau) = -A\omega\sin(\omega t + \omega\tau + \phi)

Inputting these into C(\tau) and cancelling the factor A^{2}\omega^{2} from the numerator and denominator gives

C(\tau) = \frac{\int_{-\infty}^{\infty} \sin(\omega t + \phi)\sin(\omega t + \omega\tau + \phi)\,\mathrm{d}t}{\int_{-\infty}^{\infty} \sin^{2}(\omega t + \phi)\,\mathrm{d}t}
NJ: This should be a ratio of integrals. The numerator and denominator are both incorrect - there is a missing squared in the denominator, and a missing sin term in the numerator.
Expanding via \sin(a+b) = \sin a\cos b + \cos a\sin b, where a = \omega t + \phi and b = \omega\tau:

C(\tau) = \frac{\cos(\omega\tau)\int_{-\infty}^{\infty} \sin^{2}(\omega t + \phi)\,\mathrm{d}t + \sin(\omega\tau)\int_{-\infty}^{\infty} \sin(\omega t + \phi)\cos(\omega t + \phi)\,\mathrm{d}t}{\int_{-\infty}^{\infty} \sin^{2}(\omega t + \phi)\,\mathrm{d}t}

Separating this into two fractions gives integrals that are easier to evaluate.
The integrals in the first fraction cancel out, leaving \cos(\omega\tau), and the second fraction is equal to zero because its numerator, \sin(\omega t + \phi)\cos(\omega t + \phi) = \tfrac{1}{2}\sin(2\omega t + 2\phi), averages to zero when integrated over time; thus

C(\tau) = \cos(\omega\tau)

NJ: cos(x) is an even function. Your denominator should be zero, meaning the second term is infinite...










Plotting this function produces a highly oscillatory curve, as expected of a harmonic oscillator[3], with the maxima and minima representing the points at which the 'spring' is at its maximum extension or at rest and beginning to recoil, hence the very steep changes in velocity over time.
NJ: There's nothing for the HO to collide with, so its velocity can never become decorrelated.
The running integrals of the phases mimic the trends given by the VACFs: the solid shows an initially large increase and then begins to plateau, while the liquid and gas integrals steadily increase over time. Once again there is a rather large discrepancy between the gas simulation and the one-million-atom data, which is most likely due to a computational error that, owing to time constraints, could not be rectified; however, the overall trend is still very similar, just of a different magnitude.
The running integrals can be used to determine the diffusion coefficients by multiplying their long-time values by one third, as in the VACF relation above.
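A minimal sketch of this integration using the trapezium rule, including the conversion from timestep number to reduced time (the file name and one-column format are assumptions):

import numpy as np

# Diffusion coefficient from the VACF: D = (1/3) * integral of <v(0).v(t)> dt
timestep = 0.002
vacf = np.loadtxt("vacf_liquid.dat")          # hypothetical one-column VACF file, one row per timestep
time = np.arange(len(vacf)) * timestep        # x axis must be time, not timestep number

D = np.trapz(vacf, time) / 3.0
print("D =", D)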
The Diffusion coefficients are:
Solid: 41.17
Solid (1 million atoms): 55.27
Liquid: 39.72
Liquid (1 million atoms): 46.10
Gas: 33.36
Gas (1 million atoms): 488.73
These values are significantly different from those calculated from the MSD, and this could be for several reasons. One is a computational error; however, the same code was used for both the MSD and the VACF, so this is unlikely. It could also be a mathematical error, but the same results were produced by hand calculation and by computational calculation.
NJ: You did the trapezium rule on all of this data by hand?
Despite this, the core trend that the gas is by far the most diffusive phase remains true. A clear source of error is that the trapezium rule does not account for the curvature of the graphs: it can over- or under-estimate the value at any given point because of the linear shape of the trapezia.
NJ: Your x scale for the VACFs is incorrect. It's not time, it's number of timesteps. You have to correct for this as you did in the MSD case.
Conclusion
In closing, for the most part satisfactory results were obtained; despite some issues with magnitudes, all the general trends were observed in each part of the experiment, and as such it can be called successful. Had time allowed, more repeats of the gaseous phases would have been run to determine precisely the source of the discrepancies; for now, only assumptions can be made.
References
1. Atkins, P., de Paula, J. Atkins' Physical Chemistry. 8th ed. Oxford: Oxford University Press; 2006.
2. Wikipedia. Fick's laws of diffusion. https://en.wikipedia.org/wiki/Fick's_laws_of_diffusion [Accessed 6 November 2015]. Last edited 21 September 2015.
3. Tuckerman, M. Statistical Mechanics: Theory and Molecular Simulation. Oxford: Oxford University Press; 2010.
Name: Jamie Boughenou CID: 00839001