
From ChemWiki
Revision as of 15:37, 14 January 2016 by Npj12 (Talk | contribs)


NJ: Take care with your written English. Some sections of this report either don't make sense, or are hard to interpret.

Year three Computational Lab: Liquid Simulation Report

Theory

Figure 1
Figure 2
Figure 3

Verlet Algorithm and Velocity-Verlet Algorithm simulation

Figure 4

Before doing any simulation, an introduction to the theory behind it was undertaken. The Verlet algorithm and the velocity-Verlet algorithm are useful for simulating atomic positions, velocities, accelerations and forces as functions of time.

A model of a simple harmonic oscillator is investigated. Both an analytical calculation (using the classical harmonic oscillator solution) and a velocity-Verlet simulation are carried out. Using a timestep of 0.1, the results are compared: figure 1 shows the position as a function of time for both methods, figure 2 shows the total energy calculated by the velocity-Verlet algorithm, and figure 3 shows the absolute deviation between the analytical result and the simulation as a function of time. The results show that the simple harmonic oscillator is simulated well; in particular, a plot of the maximum error against time (figure 4) illustrates that the error does not grow large unless a very long time is simulated. From figure 4, the maximum error of the velocity-Verlet algorithm relative to the analytical simple harmonic oscillation can be estimated.
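The velocity-Verlet update described above can be sketched in a few lines. This is a minimal illustrative sketch, not the script used in the lab: it assumes a unit-mass oscillator with unit force constant (so the analytical solution is x(t) = cos t), starting from x = 1, v = 0 with the timestep of 0.1 discussed above.

```python
import math

def velocity_verlet(x0, v0, dt, n_steps, force, m=1.0):
    """Integrate a single particle with the velocity-Verlet scheme."""
    xs, vs = [x0], [v0]
    x, v = x0, v0
    a = force(x) / m
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt**2   # position update
        a_new = force(x) / m               # acceleration at the new position
        v = v + 0.5 * (a + a_new) * dt     # velocity update with averaged acceleration
        a = a_new
        xs.append(x)
        vs.append(v)
    return xs, vs

# Harmonic oscillator with k = m = 1: F(x) = -x, analytic solution x(t) = cos(t)
dt, n = 0.1, 100
xs, vs = velocity_verlet(1.0, 0.0, dt, n, force=lambda x: -x)
max_err = max(abs(x - math.cos(i * dt)) for i, x in enumerate(xs))

# Total energy E = v^2/2 + x^2/2 should stay nearly constant
energies = [0.5 * v**2 + 0.5 * x**2 for x, v in zip(xs, vs)]
fluctuation = (max(energies) - min(energies)) / abs(energies[0])
```

Comparing `max_err` and `fluctuation` against the analytical solution gives exactly the kind of error and energy-drift data plotted in figures 2-4.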

The timestep is one of the quantities that can be adjusted to control the quality of the simulation. The smaller the timestep, the higher the accuracy achieved; for example, a timestep of 1 cannot produce any valid simulation of the simple harmonic model, in contrast to the default timestep of 0.1. But a smaller timestep means that more steps are needed to cover a given amount of time; in other words, 100 steps produce 10 reduced time units with a timestep of 0.1, but 1000 steps are needed to cover the same amount of time with a timestep of 0.01! And bear in mind that the more steps there are, the longer the simulation takes to run. So in practice, a compromise between accuracy and duration of the simulation is made to find a suitable timestep.

The energy of a simple harmonic oscillator should be constant in the real world. However, due to the inaccuracy introduced by the simulation, a fluctuation of the energy is observed in figure 2: the energy looks like a cosine curve rather than a straight line. Trying different timesteps, it is found that a timestep of 0.05 is needed to keep the fluctuation of the energy below 1% of the total energy. It is particularly important to monitor the energy of a system in a molecular simulation, as it is one of the essential thermodynamic properties and should remain constant in a closed or isolated system, which is the usual situation when a simulation is performed.

NJ: You should find that a timestep of 0.02 will give you this energy drift.

NJ: Why do you think the error accumulates in the way that it does?

Lennard-Jones interaction

In classical physics, the force acting on an atom depends on the potential it experiences, according to the following equation:

F(r) = -dU(r)/dr

and the potential of a pair of atoms is given, according to the Lennard-Jones interaction[1], by: U(r) = 4ε[(σ/r)^12 - (σ/r)^6]

There is a certain separation of the pair of atoms that gives a potential energy of zero, which can be found by setting the potential to zero. This separation and the force at that separation are calculated as follows.

Setting U(r0) = 0 gives r0 = σ, and the force at this separation is F(r0) = 24ε/σ.

Another separation of interest is the one at which the potential is at its minimum; this is called the equilibrium separation and is denoted req. To calculate req, the derivative of the Lennard-Jones potential is set to zero and solved for r. The value of req and its corresponding potential are shown as follows.

Setting dU/dr = 0 gives req = 2^(1/6) σ ≈ 1.12σ.

U(req) = -ε
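These special points are easy to verify numerically. A small sketch in reduced units (σ = ε = 1), with the force obtained from a finite-difference derivative rather than the analytic expression:

```python
def lj(r):
    """Lennard-Jones potential in reduced units (sigma = epsilon = 1)."""
    return 4.0 * (r**-12 - r**-6)

def lj_force(r, h=1e-6):
    """F = -dU/dr, estimated with a central finite difference."""
    return -(lj(r + h) - lj(r - h)) / (2.0 * h)

r0 = 1.0                   # U(sigma) = 0
req = 2.0 ** (1.0 / 6.0)   # minimum of the potential, U(req) = -epsilon
```

Evaluating `lj(r0)` gives zero, `lj(req)` gives -1 (the well depth), and the force vanishes at `req` but equals 24ε/σ = 24 at r = σ, consistent with the results above.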

Three integrals of the potential are calculated; the importance of these three integrals will be discussed later. [equation image]

NJ: Good

Periodic Boundary Condition

When simulating a particular system, it is impractical to take every atom into account, because a sample of realistic volume contains an enormous number of atoms. NJ: This doesn't make sense.

Comparing the number of molecules in 1 mL of water with the volume occupied by 10000 water molecules, a huge difference is seen.

1 mL of water contains (1 g / 18 g mol^-1) × 6.022 × 10^23 ≈ 3.35 × 10^22 molecules, whereas 10000 water molecules occupy only about 3 × 10^-19 mL.

NJ: This is too small by a factor of 10

It will take a huge amount of time to calculate a real system with every atom taken into account (1 mL of water has 3.35 x 10^22 molecules to be calculated!). Therefore, a periodic boundary condition is applied to simulate the real situation. Atoms are enclosed in a cubic or rectangular box, and the box is repeated infinitely in all three dimensions. Only the physical properties within that one box then need to be calculated to simulate the real system. Note that as the atoms move, it is possible for one to hit the "wall" of the box; the atom is then transferred to the opposite face of the box. For example, in a box running from (0 0 0) to (1 1 1), an atom initially sitting at (0.5 0.5 0.5) that moves by a vector of (0.7 0.6 0.2) will end up at (0.2 0.1 0.7).
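The wrapping rule in the example above amounts to taking each coordinate modulo the box length. A minimal sketch (the function name is illustrative, not from the lab script):

```python
def wrap(position, move, box=1.0):
    """Apply a displacement and wrap each coordinate back into [0, box)."""
    return tuple((p + m) % box for p, m in zip(position, move))

# The worked example from the text: (0.5 0.5 0.5) moved by (0.7 0.6 0.2)
new_pos = wrap((0.5, 0.5, 0.5), (0.7, 0.6, 0.2))
```

The result comes out as (0.2, 0.1, 0.7), matching the example.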

Truncation

There is a problem with simulations that use the periodic boundary condition. When calculating the potential energy of an atom using the Lennard-Jones interaction, every atom surrounding the selected atom has to be taken into account. This complicates the mathematics, and since the periodic boundary condition assumes the box is repeated infinitely, it is impossible to include all the interactions.

A solution is to make the number of interacting pairs finite by neglecting the potential beyond a certain cutoff distance from an atom. This is possible because the Lennard-Jones interaction decreases rapidly as the distance increases: from the three integrals calculated before, the total potential contributed to an atom from the cutoff distance out to infinity is small enough to be ignored, so the overall mathematics can be simplified. In other words, only the potential contributed between zero and the cutoff distance is considered, as it makes up the major part of the total potential in practice. Larger or smaller cutoff distances can be chosen to simplify the mathematics further, but note the amount of energy being ignored and check how much deviation in the simulation can be tolerated.
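To get a feel for how quickly the pair potential decays, one can evaluate it at the minimum and at a cutoff of 2.5σ (that particular cutoff value is an illustrative assumption, a common choice in practice, not necessarily the one used above):

```python
def lj(r):
    """Lennard-Jones potential in reduced units (sigma = epsilon = 1)."""
    return 4.0 * (r**-12 - r**-6)

u_at_min = lj(2.0 ** (1.0 / 6.0))   # the well depth, -epsilon
u_at_cut = lj(2.5)                  # already a small fraction of epsilon
```

At r = 2.5σ the magnitude of the pair potential is only about 1.6% of the well depth, which is why truncating there loses so little energy.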

Reduced Unit

For a system simulated on the atomic scale, it is not convenient to use the actual units of the parameters; for example, multiplying every distance parameter by 10^-10 is very troublesome, especially when writing code for the program. Therefore, LJ reduced units are used. The fundamental unit quantities are the mass m, the distance σ and the energy ε. In this notation, the Boltzmann constant is set to 1.

For example, the Lennard-Jones parameters for argon are σ = 0.34 nm and ε/kB = 120 K.

The separation r will be 1.088 nm if r* is 3.2, since r = r*σ = 3.2 × 0.34 nm = 1.088 nm,

and the well depth is -1.656 x 10^-24 kJ mol^-1.

NJ: I think you've missed the factor of Avogadro's number here. You should get something close to 1kJ mol^-1

A reduced temperature of 1.5 is equal to 180 K in real units, since T = T* × ε/kB = 1.5 × 120 K = 180 K.
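These conversions can be sketched in code. This assumes the argon parameters quoted above (σ = 0.34 nm, ε/kB = 120 K); the well-depth function includes the Avogadro factor that NJ points out:

```python
K_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro's number, 1/mol

# Lennard-Jones parameters for argon, as quoted in the text
sigma_nm = 0.34           # distance unit, nm
eps_over_kb = 120.0       # energy unit expressed as a temperature, K

def real_distance_nm(r_star):
    """Convert a reduced separation r* to nanometres."""
    return r_star * sigma_nm

def real_temperature_K(t_star):
    """Convert a reduced temperature T* to kelvin."""
    return t_star * eps_over_kb

def well_depth_kj_per_mol():
    """Well depth epsilon as a molar energy: epsilon * N_A, in kJ/mol."""
    return eps_over_kb * K_B * N_A / 1000.0
```

With these parameters, r* = 3.2 gives 1.088 nm and T* = 1.5 gives 180 K, as above, and the well depth comes out close to 1 kJ/mol.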

Equilibrium

In the following discussion, data for a simulated box are processed with a molecular dynamics code named LAMMPS and a visualisation programme named VMD.

To simulate the atoms of a fluid system, the atoms are initially placed on a lattice and propagated for a rather long period of time. It would not be appropriate to put them in random positions, as there would be a chance of two or more atoms appearing in the same or very close positions; this causes a problem because it produces a huge repulsive force according to the Lennard-Jones relationship. NJ: Good. Can you say a bit more about why this is a problem? Instead, an initial lattice structure is simulated and allowed to melt first, to produce a fluid system.

One of the first parameters to be set is the number density of the system, which denotes the number of atoms present in one unit of volume. For example, a number density of 0.8 represents 8 atoms in 10 units of volume of a particular system; in other words, each atom occupies 1.25 units of volume, or 1.077 units of distance in each dimension. If a face-centred cubic lattice of number density 1.2 is used for the simulation of a crystal, each atom occupies 0.83 units of volume and 0.94 units of distance in each dimension, and 1500 atoms will be created in a finite box of 10.7722 length units in each of the three dimensions.

NJ: No. How many atoms are there in an FCC unit cell?
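Taking NJ's point that a conventional FCC cell contains 4 lattice points (and a simple cubic cell 1), the lattice spacing and atom count follow directly. A sketch, with the 10-cell box size chosen purely for illustration:

```python
def lattice_spacing(number_density, atoms_per_cell):
    """Side length of the conventional cubic cell at a given reduced number density."""
    return (atoms_per_cell / number_density) ** (1.0 / 3.0)

a_sc = lattice_spacing(0.8, 1)    # simple cubic: 1 atom per conventional cell
a_fcc = lattice_spacing(1.2, 4)   # FCC: 4 atoms per conventional cell

# An illustrative 10 x 10 x 10 block of FCC cells contains 4 atoms per cell:
n_atoms = 4 * 10**3
```

This gives a spacing of about 1.077 for the simple cubic case (matching the text) and about 1.494 for FCC at density 1.2, with 4000 atoms in a 10-cell box.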

0n999915.jpg

The velocity-Verlet algorithm is used for this simulation once the initial positions, velocities and forces are given.

It is very important to specify the timestep and set it as a variable. This is because the timestep is used throughout the input script, possibly in several places, and more importantly its value may be varied between trials of the simulation; it is therefore more convenient to define it as a variable in one place rather than changing every occurrence in the script. It is not wise to write the lines as follows

timestep 0.001

run 100000
Figure 4
Figure 5
Figure 6
Figure 7

because if the timestep is changed, the total amount of simulated time will also change. Running 100000 steps with a timestep of 0.001 gives a total reduced time of 100, but running 100000 steps with a timestep of 0.0001 gives a reduced time of only 10! Therefore, it is necessary to compute the number of steps from the desired simulation time, instead of typing a fixed number of steps into the script.

NJ: Good, well spotted.
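The arithmetic above can be captured in a small helper (names are illustrative); defining the run length this way keeps the simulated time fixed when the timestep changes:

```python
def n_steps(total_reduced_time, timestep):
    """Number of integration steps needed to cover a fixed amount of simulated time."""
    return round(total_reduced_time / timestep)

steps_a = n_steps(100.0, 0.001)    # 100000 steps for 100 reduced time units
steps_b = n_steps(100.0, 0.0001)   # ten times as many steps for the same simulated time
```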

The following part of the laboratory is to simulate a particular system with different timesteps, in order to reach equilibrium for each system. The timesteps used are 0.001, 0.0025, 0.0075, 0.01 and 0.015. Looking at the simulation produced with timestep 0.001 (Figure 4), the total energy drops initially and comes to equilibrium very quickly, after around 0.3 units of time; it shows some fluctuation throughout the equilibrium, but within an acceptable range, less than 0.06% of the total energy. The temperature and pressure also fluctuate somewhat before equilibrium is reached; both need around 0.3 units of time. A summary of the equilibrium quantities, percentage fluctuations and time needed to reach equilibrium is presented in the following table.

Timestep Equilibrium Energy (% of fluctuation) Equilibrium Temp (% of fluctuation) Equilibrium pressure (% of fluctuation) Time reaching equilibrium
0.001 -3.184 (0.10%) 1.26 (4.8%) 2.63 (19.0%) 0.3
0.0025 -3.184 (0.05%) 1.26 (6.2%) 2.67 (22.7%) 0.3
0.0075 -3.182 (0.06%) 1.26 (7.0%) 2.64 (24.4%) 0.3
0.01 -3.181 (0.09%) 1.25 (5.4%) 2.70 (20.4%) 0.3
0.015 n/a 1.25 (6.7%) 2.71 (29.2%) 0.3

NJ: Lovely table, very thorough.

Except for the total energy with the 0.015 timestep, all the quantities reach equilibrium in a rather short period of time, and the greatest error appears in the simulated pressure. The equilibrium energy and temperature do not vary much between timesteps of 0.001 and 0.0025, so it is safe to conclude that 0.0025 is the best choice, as it is the biggest timestep to give the lowest equilibrium energy among the five used. Note that when the simulated energies are compared (Figure 7), the values are quite close for the first four timesteps, but the total energy simulated with the 0.015 timestep keeps rising and never reaches equilibrium; this is therefore a bad timestep compared to the other four.

NJ: It's not quite that it gives the lowest energy, it's that it is indistinguishable from the smaller timestep. That means that even though we know formally that the smaller timestep should give us a more accurate result, the difference is smaller than the numerical errors in our simulation.

Running simulations under specific conditions

In statistical thermodynamics there is an important concept called an ensemble, which specifies which physical properties of a system (usually three) are held fixed, making the system easier to analyse. The ensemble used in this section is NpT (the isobaric-isothermal ensemble), which keeps the number of molecules, the pressure and the temperature of the system constant.

Simulating at a desired temperature

One method to keep the temperature constant during a simulation is to adjust the velocities of the molecules in the system. Using the equipartition theorem[2], the kinetic energy of the system can be written as

E_K = Σ (1/2) m v_i² = (3/2) N k_B T

Letting the desired temperature of the system be T_target, a factor γ can be applied to the velocity of every atom in order to achieve the desired temperature. The value of γ is calculated as follows.

Since scaling every velocity by γ scales the kinetic energy, and hence the instantaneous temperature, by γ², the required factor is γ = sqrt(T_target / T).
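A sketch of this velocity-rescaling step (not the LAMMPS implementation; the names and the Gaussian initialisation are illustrative), using the equipartition relation above with k_B = 1:

```python
import math
import random

def instantaneous_temperature(velocities, mass=1.0, k_b=1.0):
    """Equipartition: sum of (1/2) m v^2 = (3/2) N k_B T for N atoms in 3D."""
    n = len(velocities)
    kinetic = sum(0.5 * mass * (vx**2 + vy**2 + vz**2) for vx, vy, vz in velocities)
    return 2.0 * kinetic / (3.0 * n * k_b)

def rescale(velocities, t_target, mass=1.0):
    """Multiply every velocity by gamma = sqrt(T_target / T)."""
    gamma = math.sqrt(t_target / instantaneous_temperature(velocities, mass))
    return [(gamma * vx, gamma * vy, gamma * vz) for vx, vy, vz in velocities]

random.seed(1)
vels = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]
vels = rescale(vels, t_target=1.5)
```

After the rescale, the instantaneous temperature computed from the velocities is exactly the target value.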

Simulating NpT ensemble

Figure 8
Using the template given, simulations of several systems are carried out, with temperatures (1.6, 1.8, 2.0, 2.2, 2.4), pressures (2.63, 2.70) and a timestep of 0.01. Looking at the input template script, the line
fix aves all ave/time 100 1000 100000 v_dens v_temp v_press v_dens2 v_temp2 v_press2
Figure 9
indicates how the average of a quantity is computed: a sample is collected every 100 timesteps, 1000 such samples are used to calculate an average, and an average is output every 100000 timesteps. In this case, as 100000 timesteps are run, there will be one average per input quantity (there are six input quantities, so six values are generated).
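The sampling pattern of that fix can be mimicked to check this interpretation: with Nevery = 100, Nrepeat = 1000 and Nfreq = 100000, the documented behaviour is to average the samples taken at steps Nfreq - (Nrepeat - 1)·Nevery, ..., Nfreq. A sketch:

```python
def ave_time_samples(nevery, nrepeat, nfreq):
    """Timesteps contributing to the average reported at step nfreq
    (mirroring the semantics of LAMMPS 'fix ave/time nevery nrepeat nfreq')."""
    start = nfreq - (nrepeat - 1) * nevery
    return list(range(start, nfreq + 1, nevery))

samples = ave_time_samples(100, 1000, 100000)
```

For the values in the script this gives 1000 samples, at steps 100, 200, ..., 100000.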

Results are produced using LAMMPS, and the reduced density is plotted against temperature for the two pressures used (Figures 8 and 9). Error bars, taken from the standard deviations in the output file, are included as well, but the standard deviation of the simulated density is too small to be seen. Note that the reduced density decreases linearly as the reduced temperature rises. This makes sense because as the temperature increases, the atoms gain more energy and need more space to move, so the density decreases as a result.

The density can also be predicted by the ideal gas law, using the temperature and pressure from the simulation. Comparing this with the simulated density, the ideal-gas density is higher, and the deviation decreases as the pressure rises. The reason for the mismatch is that the simulation does not satisfy the ideal-gas assumption that there are no intermolecular forces between molecules. The simulation includes the Lennard-Jones interaction between atoms, which is one of the intermolecular forces; this keeps the simulated atoms further apart than in the ideal case, and thus the simulated density is much smaller than the one calculated from the ideal gas law. However, the simulated density is the more realistic result, as the Lennard-Jones interaction should be included in real circumstances. A better prediction of the gas's thermodynamic properties may be given by van der Waals' equation, but more investigation would be needed.

NJ: What specifically about the LJ potential makes the density behave like this? Does it seem reasonable to you that the discrepancy increases as pressure decreases? How does the discrepancy behave with temperature?
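In reduced units (k_B = 1) the ideal-gas prediction mentioned above is simply ρ = p/T. A one-line sketch, using one of the state points from this section:

```python
def ideal_gas_density(pressure, temperature, k_b=1.0):
    """Ideal gas law N/V = p / (k_B T); in reduced units k_B = 1."""
    return pressure / (k_b * temperature)

rho_ideal = ideal_gas_density(2.63, 1.6)
```

At p* = 2.63 and T* = 1.6 this gives about 1.64, which can be compared directly against the simulated reduced densities in figures 8 and 9.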

Calculating heat capacity using statistical physics

Using LAMMPS, the heat capacity can be calculated for a system with a certain number of atoms. The heat capacity is proportional to the variance of the total energy and inversely proportional to the square of the temperature of the system: C_V = (⟨E²⟩ - ⟨E⟩²) / (k_B T²).
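The fluctuation formula can be sketched directly. The script below multiplies the variance by N² = 3375², consistent with the recorded energies being per-atom values (so the total energy is E = N·e); the sample values in this demonstration are synthetic, not simulation output:

```python
def heat_capacity(per_atom_energies, n_atoms, temperature, k_b=1.0):
    """C_V = (<E^2> - <E>^2) / (k_B T^2), with E = n_atoms * (per-atom energy),
    hence the n_atoms**2 factor on the per-atom variance."""
    n = len(per_atom_energies)
    mean_e = sum(per_atom_energies) / n
    mean_e2 = sum(e * e for e in per_atom_energies) / n
    return n_atoms**2 * (mean_e2 - mean_e**2) / (k_b * temperature**2)

# Synthetic demonstration values
cv = heat_capacity([-3.0, -3.2], n_atoms=10, temperature=2.0)
```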

A script is modified from the previous section to generate the value of heat capacity.
### DEFINE SIMULATION BOX GEOMETRY ###

variable rho equal 0.2

lattice sc ${rho}

region box block 0 15 0 15 0 15

create_box 1 box
create_atoms 1 box

### DEFINE PHYSICAL PROPERTIES OF ATOMS ###

mass 1 1.0

pair_style lj/cut/opt 3.0

pair_coeff 1 1 1.0 1.0

neighbor 2.0 bin

### SPECIFY THE REQUIRED THERMODYNAMIC STATE ###

variable T equal 2.2

variable timestep equal 0.001

### ASSIGN ATOMIC VELOCITIES ###

velocity all create ${T} 12345 dist gaussian rot yes mom yes

### SPECIFY ENSEMBLE ###

timestep ${timestep}
fix nve all nve

### THERMODYNAMIC OUTPUT CONTROL ###

thermo_style custom time etotal temp press
thermo 10

### RECORD TRAJECTORY ###

dump traj all custom 1000 output-1 id x y z

### SPECIFY TIMESTEP ###

### RUN SIMULATION TO MELT CRYSTAL ###

run 10000
unfix nve
reset_timestep 0

### BRING SYSTEM TO REQUIRED STATE ###

variable tdamp equal ${timestep}*100

fix nvt all nvt temp ${T} ${T} ${tdamp}

run 10000

unfix nvt

reset_timestep 0

fix nve all nve

### MEASURE SYSTEM STATE ###

thermo_style custom step etotal temp press density

variable toteng equal etotal

variable toteng2 equal etotal*etotal

variable temp equal temp

fix aves all ave/time 100 1000 100000 v_toteng v_toteng2 v_temp 

run 100000

variable avetemp equal f_aves[3]

variable avetoteng equal f_aves[1]

variable avetoteng2 equal f_aves[2]

variable volume equal (3375/${rho})

variable heatcapacity equal (3375^2)*(f_aves[2]-(f_aves[1]*f_aves[1]))/(f_aves[3]*f_aves[3])

print "Averages"

print "--------"

print "Temperature: ${avetemp}"

print "energy ${avetoteng}"

print "energy2 ${avetoteng2}"

print "heatcapacity ${heatcapacity}"

print "volume ${volume}"
Figure 8

The above script simulates a reduced density of 0.2 at a reduced temperature of 2.2. Combinations of different reduced temperatures (2.0, 2.2, 2.4, 2.6, 2.8) and reduced densities (0.2, 0.8) were performed. A graph of the heat capacity divided by volume, C_V/V, against temperature is plotted (figure 8). It shows that C_V/V decreases linearly as the temperature increases, and both densities share the same correlation (the same gradient), but the 0.8 reduced density results have a higher heat capacity in general. This is because with a lower density there are fewer atoms in a given volume and thus more space to move, so less energy is needed to produce a given increase in the velocities of the atoms and hence in the temperature.

NJ: This is more about the number of degrees of freedom. In a denser system, there are more degrees of freedom per unit volume in which to store thermal energy.

The trend obtained agrees with the literature for the heat capacity of inert monoatomic liquids.[3] The values of the heat capacities could be compared with experimental results, but this is difficult because the densities used are very small, about 0.2 and 0.8 kg/m^3 in real units (calculation shown below), which are unlikely to exist in normal circumstances. [calculation image]

NJ: Why do you think the dependence is like this?

Structural properties and radial distribution function

Figure 9

The structure of a crystal or fluid can be analysed by looking at its radial distribution function (RDF). An analysis of argon atoms is carried out using LAMMPS during this experiment. Inputting different reduced densities (0.2, 0.8, 1.2) at the same reduced temperature (1.0) produces atomic models in three different phases, gas, liquid and solid, according to Jean-Pierre Hansen and Loup Verlet[4]. For the simulation of the solid phase of argon, a face-centred cubic lattice structure is used.

The radial distribution function measures the density of atoms as a function of the distance from a reference atom; for an entirely structureless atomic model, the radial distribution function tends to unity. The plot of the radial distribution function against distance from the reference atom is shown in figure 9. There is a structured pattern for the solid, which approaches unity only slowly at long range; in contrast, the gas and liquid phases do not behave like the solid. Their functions show some structure close to the reference atom, but quickly go to unity at longer distances; this is especially true for the liquid phase.

An explanation is that the atoms in the solid phase are comparatively fixed in position, so that even at long range there is still a structured pattern; the RDF does slowly decay to unity because of the random motion of the atoms, which vibrate around their lattice positions, so at long distances more atoms are displaced from their ideal positions, smoothing the curve towards unity. If this is true, the solid curve should approach unity more quickly at a higher temperature and lower density, as these allow more space to move and provide more kinetic energy for random vibration.

In comparison, both the gas and liquid phase curves do show some structure at short range around the reference atom, both showing a peak at r* = 1.125, but they quickly lose this structure as the distance grows. This is because the interaction between atoms is relatively strong at short distances, leading to the formation of an obvious primary coordination shell; as the distance grows, the interaction becomes weaker, the kinetic energy of the atoms starts to dominate, and the structure of the remaining coordination shells is broken up.

NJ: Good. More succinctly, you might say that the solid has long and short range order, while the liquid has only short ranged order, and the gas has no order at all. Can you explain more about what is going on at distances smaller than r~1.125?

Figure 10

Looking at the radial distribution function of the solid phase of argon, the computational result does match the theory reasonably well. Figure 10, illustrated by Jc6613, clearly shows the structure of a face-centred cubic lattice, which is the lattice type of the argon solid phase that was simulated. There are 12 atoms closest to the central atom; let the distance between the central atom and its closest neighbours be r. The next closest neighbours are √2·r away from the central atom, with a coordination of 6. The following neighbours are √3·r away from the central atom, and there are 24 of them. Comparing this with the simulation, the first peak occurs at a radius of 1.025 with an intensity of 5, followed by the second peak with an intensity of 1 at radius 1.525; the third peak has an intensity of 3 at a radius of 1.825. The simulated and theoretical results agree in two main ways

  1. In terms of the distances from the central atom, the two sets of results are similar.
  2. In terms of the radial distribution function, they give similar but not identical results, which could be because the atoms vibrate and the theoretical calculation considers only the centres of the atoms, while the edges of the atoms also contribute to the radial distribution. (Note: the radial distribution is inversely proportional to the square of the radius and directly proportional to the number of atoms in that shell.)
Theoretical distance Simulated distance Theoretical g(r) Relative theoretical g(r) Relative simulated g(r)
First neighbour r 1.025 12/r^2 4 5
Second neighbour √2 r = 1.414r 1.525 6/(√2 r)^2 = 3/r^2 1 1
Third neighbour √3 r = 1.732r 1.825 24/(√3 r)^2 = 8/r^2 2.67 3

NJ: So what is the lattice spacing? You've shown what the coordination numbers for an FCC lattice should be, but that doesn't mean that that is consistent with your RDF - the intensities for g(r) that you've predicted aren't quite the same as the measured ones. You need to examine the integral of the RDF to determine the coordination numbers for your simulation.
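Following NJ's suggestion, coordination numbers can be obtained by integrating the RDF: n = ∫ 4πρ r² g(r) dr across a peak. A trapezium-rule sketch, validated against the structureless case g(r) = 1, where the integral must equal ρ times the enclosed sphere volume:

```python
import math

def coordination_number(r, g, rho):
    """Integrate 4 * pi * rho * r^2 * g(r) dr with the trapezium rule."""
    total = 0.0
    for i in range(1, len(r)):
        f0 = 4.0 * math.pi * rho * r[i - 1]**2 * g[i - 1]
        f1 = 4.0 * math.pi * rho * r[i]**2 * g[i]
        total += 0.5 * (f0 + f1) * (r[i] - r[i - 1])
    return total

# Sanity check: for g(r) = 1 from 0 to 1 at rho = 1.2,
# the result should equal rho * (4/3) * pi * 1^3
rho = 1.2
r = [0.001 * i for i in range(1001)]
g = [1.0] * len(r)
n = coordination_number(r, g, rho)
```

Applied to the simulated g(r) between the minima flanking each peak, the same integral would give the measured coordination numbers to compare against 12, 6 and 24.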

Dynamical properties and the diffusion coefficient

The final part of this exercise is to determine the diffusion coefficient of a system using two different approaches: via the derivative of the mean squared displacement, and via integration of the velocity autocorrelation function. The following sections discuss the coefficients determined by these two approaches in turn.

Mean squared displacement

Figure 11
Figure 12

The mean squared displacement (MSD), as its name implies, is the mean of the square of the displacement of all atoms after a certain time. Using this quantity, the rate of diffusion can be determined: the higher the diffusion rate, the further the atoms can travel and the higher the mean squared displacement. The mean squared displacement increases with time, and the diffusion coefficient is given by:

D = (1/6) d⟨r²(t)⟩/dt

Graphs of the MSD against timestep for the gas, liquid and solid phases are shown in figure 11. As expected, the graph for the solid has a gradient of almost 0, which according to the equation above gives a diffusion coefficient of about 0, while the liquid has a gradient of 0.001 and the gas a larger gradient, as summarised below. Using the equation above, the diffusion coefficient is estimated for the three phases.

8000 atoms 1M atoms
Phase gradient (per timestep) gradient (per time unit) Diffusion coefficient gradient (per timestep) gradient (per time unit) Diffusion coefficient
solid 0 0 0 0 0 0
liquid 0.001 0.5 0.083 0.001 0.5 0.083
gas 0.0097 4.85 0.81 0.03 15 2.5
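The conversion behind the table can be sketched as follows. The factor used to turn a per-timestep gradient into a per-time gradient implies the runs used a timestep of 0.002; that value is an inference from the table, not stated in the text:

```python
def diffusion_from_msd(gradient_per_timestep, timestep):
    """D = (1/6) d<r^2>/dt, with the MSD gradient first converted to per-unit-time."""
    gradient_per_time = gradient_per_timestep / timestep
    return gradient_per_time / 6.0

# Liquid-phase gradient from the table, with the assumed timestep of 0.002
d_liquid = diffusion_from_msd(0.001, 0.002)
```

This reproduces the tabulated liquid value of 0.083 (0.5 / 6) in reduced units.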

Note that the diffusion coefficients calculated are in reduced units, so they cannot be directly compared with literature values. The same analysis was carried out with 1 million atoms instead of 8000 (figure 12). There are two interesting points. First, looking at the gaseous MSD of the 1M atom system, it is very clear that the curve is not linear initially, but becomes a straight line after some time. This is normal: in the beginning the atoms are in their initial positions and are not moving at constant velocity at all until the first collisions occur, after which the atoms bounce back and forth, moving randomly.

NJ: You've got the wrong end of the stick here. Initially, the atoms *do* move with constant velocity - that means that the mean squared displacement increases quadratically with time. Once the atoms begin to collide with each other, the motion becomes diffusive, and the the MSD becomes linear in time.

The same effect can be seen in the 8000 atom simulation, but less obviously. The second point is the comparison of the diffusion coefficients between the 8000 atom and 1M atom simulations. For the solid and liquid simulations, the coefficients have similar values, showing consistent results. But for the gaseous condition, the 1M atom simulation gives a much higher value than the 8000 atom one. This may seem strange at first sight, but the diffusion coefficient depends on the density of the system, especially for a gas, so there is no point comparing the two simulations without knowing the density of the 1M atom simulation. Note that the diffusion coefficient increases with decreasing density, so the density of the 1M atom simulation should be much lower than that of the 8000 atom simulation.

NJ: Exactly, well done.

Velocity Autocorrelation Function

Besides using the displacement to determine the diffusion coefficient of a system, a similar approach can be applied using the velocity autocorrelation function (VACF), which is defined as

C(τ) = ⟨v(t)·v(t+τ)⟩ / ⟨v(t)·v(t)⟩

in which the numerator is the mean of the dot product of the initial velocity and the velocity after a period of time, averaged over the atoms. For completely random motion of the atoms, the VACF should be zero, as the velocity of an atom should not depend on its initial velocity. The relationship between the VACF and the diffusion coefficient is:

D = (1/3) ∫₀^∞ ⟨v(0)·v(τ)⟩ dτ

Going back to the simple harmonic oscillator simulated at the beginning of the report, a velocity autocorrelation function can be derived for that case; the calculation is shown below:

For x(t) = A cos(ωt + φ), the velocity is v(t) = -Aω sin(ωt + φ); averaging the product v(t)v(t+τ) over the phase gives C(τ) = cos(ωτ).

Figure 13
Figure 14
Figure 15
Figure 16

A plot of the VACF of the solid, liquid, gas and simple harmonic oscillator against timestep is shown in figure 13. The curve for the simple harmonic oscillator is obviously a cosine function and does not settle at zero as time progresses. This is because the simple harmonic oscillation simulates only one atom, which moves continuously in a fixed pattern without any external events, such as atomic collisions, that could randomise its motion. Hence, the VACF of the SHM is an excellent example of a one-atom system that does not exhibit Brownian motion, in other words diffusion.

Looking at the curves for the solid and liquid in figure 13, the minima represent a mean negative correlation between the initial velocity and the velocity at that time, implying that the atoms are moving in the reverse direction compared to their initial motion. This is because the atoms are initially very close: when a collision occurs, most of the atoms collide with one another and reverse their velocities, producing the negative VACF. As shown in figure 13, the collisions in the solid model are more uniform than in the liquid model before the systems reach completely random motion, because the atoms in the solid are close-packed; after the initial collision they collide with the atoms next to them and undergo further fairly uniform collisions, though the collisions become less uniform as their number increases. In the liquid simulation, the structure is lost after the first collision, as the atoms are less tightly packed and have room to move in different directions, so completely random motion sets in. Note that the gaseous VACF does not fall to zero even after 500 timesteps: the atoms are so far from one another that uniform collisions are disfavoured, and the velocities of the gaseous atoms after a short period of time still depend somewhat on their initial velocities.

NJ: Excellent

From the equation above, integrating the VACF gives the diffusion coefficient of a system; the integral of the VACF can be estimated using the trapezium rule. The plot of the integral of the VACF against timestep for the three phases is shown in figure 14.
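The trapezium-rule estimate can be sketched and sanity-checked against a model VACF whose integral is known: C(t) = e^(-t) integrates to exactly 1, so D should come out close to 1/3. The model function is illustrative, not simulation data:

```python
import math

def diffusion_from_vacf(times, vacf):
    """D = (1/3) * integral of the VACF, estimated with the trapezium rule."""
    integral = 0.0
    for i in range(1, len(times)):
        integral += 0.5 * (vacf[i - 1] + vacf[i]) * (times[i] - times[i - 1])
    return integral / 3.0

# Model VACF C(t) = exp(-t) on a fine grid from 0 to 20
ts = [0.01 * i for i in range(2001)]
cs = [math.exp(-t) for t in ts]
d_model = diffusion_from_vacf(ts, cs)
```

Applied to the simulated VACF data, the same function yields the diffusion coefficients tabulated below.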

Integral of VACF after 5000 timesteps Integral of VACF in time units Diffusion coefficient (reduced units)
solid -0.27 5.4 x 10-4 around 0
liquid 146.9 0.2938 0.098
gas 1327.5 2.655 0.885

The diffusion coefficients calculated using the VACF agree with those calculated using the MSD.

NJ: Do they? They're not exactly the same. Do you know what sort of uncertainty you should expect?

The same approach is carried out for the system with 1M atoms. Figures 15 and 16 show the VACF and the integral of the VACF against time, respectively. A similar pattern to the 8000 atom system is seen for both the solid and liquid models. However, the curve for the gas model does not reach zero, which means it has not reached randomised motion even after 5000 timesteps. The integral of the VACF will then not give a valid diffusion coefficient, as the velocity after 5000 timesteps still depends somewhat on the initial velocity, so the diffusion coefficient will be underestimated. The biggest source of error in determining the diffusion coefficient from the integral of the VACF is therefore that it is not known whether the atoms have reached random motion; if they have not, especially in the less dense gas simulation, the diffusion coefficient will be underestimated.

NJ: Where are the values for the 1M atom simulations? How well do they agree? You're right about the vapour simulation. Which of the two methods (MSD or VACF) is better, do you think?

  1. J.E. Lennard-Jones, "On the Determination of Molecular Fields", Proc. R. Soc. Lond. A, 1924, 106, 463-477. DOI: 10.1098/rspa.1924.0082
  2. A. Kundt; E. Warburg, "Über die specifische Wärme des Quecksilbergases (On the specific heat of mercury gas)", Annalen der Physik, 1876, 157, 353-369. DOI: 10.1002/andp.18762330302
  3. P. Dashora; D. Parnami; P. Karnawat, "A theoretical study of heat capacity of inert liquids", Indian Journal of Pure & Applied Physics, 2003, 41, 438-442.
  4. J.P. Hansen; L. Verlet, "Phase transitions of the Lennard-Jones system", Phys. Rev., 1969, 184, 151-161. DOI: 10.1103/PhysRev.184.151