<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://chemwiki.ch.ic.ac.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Hz1420</id>
	<title>ChemWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://chemwiki.ch.ic.ac.uk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Hz1420"/>
	<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/wiki/Special:Contributions/Hz1420"/>
	<updated>2026-05-16T16:23:08Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814665</id>
		<title>Imperial CX1: Instructions and basic concepts of parallel computing</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814665"/>
		<updated>2024-02-13T21:06:07Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* The external coordinator: What is a batch system */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This tutorial is divided into 2 separate sections. In the first section, introductions to CX1 and its available resources are listed and classified. Since the [https://icl-rcs-user-guide.readthedocs.io/en/latest/ Research Computing Service (RCS)] team has already developed excellent tutorials on their webpages, this part functions as a guide towards the RCS webpages with the necessary supplementary comments. In the second section, basic concepts of parallel computing and explanations of important terms are introduced. The main focus of this section is to help beginners understand how high-performance computers (HPC) work in the context of their daily practice. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This tutorial was initially written between Feb. and Mar. 2022 to be shared within the group for induction and training purposes &amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Connect-to-the-Imperial-Cluster/&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Structure-and-usage-of-clusters/&amp;lt;/ref&amp;gt;. Special thanks to Mr K. Tallat-Kelpsa, Ms A. Arber, Dr G. Mallia and Prof N. M. Harrison.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Introduction to CX1 ==&lt;br /&gt;
CX1 is the old name of the first HPC system that served the whole college. New facilities (known as CX2) were gradually installed and integrated with the old system (CX3, a rather short-lived domain), but CX1 remains the most popular name, generally referring to the college-owned clusters. To grant a student access to CX1, the group PI can, on behalf of that student, ask the RCS team to add the specified account to the HPC active users mailing list.&lt;br /&gt;
=== Connect to CX1 ===&lt;br /&gt;
CX1 is typically accessed via SSH (secure shell). The Linux command line (Linux &amp;amp; MacOS), the Windows Subsystem for Linux (Windows 10/11) &amp;lt;ref&amp;gt;https://learn.microsoft.com/en-us/windows/wsl/install&amp;lt;/ref&amp;gt; or an SSH client (such as XShell &amp;lt;ref&amp;gt;https://www.xshell.com/en/xshell/&amp;lt;/ref&amp;gt;) can be used. A VPN is needed for off-campus users. &lt;br /&gt;
&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/connect-communicate/remote-access/virtual-private-network-vpn/ Step-by-step guide to setup VPN]&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/getting-started/using-ssh/ How to use ssh]&lt;br /&gt;
&lt;br /&gt;
In a Linux command line, use the following command to connect to CX1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh -XY username@login.hpc.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
P.S. The &amp;lt;code&amp;gt;-XY&amp;lt;/code&amp;gt; option enables X11 forwarding and can be omitted in most cases, i.e., whenever the program does not need a GUI.&lt;br /&gt;
&lt;br /&gt;
Alternatively, when the VPN service is unstable or unavailable, it is possible to go through the college SSH gateway, which acts as a &#039;jump host&#039;: first connect to the gateway with the command below, then type the previous &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; command again from the gateway&#039;s command line to reach CX1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh username@sshgw.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; command to upload / download files; its syntax is similar to the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; commands. For example, to upload a file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp /local/path/file_name username@login.hpc.ic.ac.uk:/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
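&lt;br /&gt;
Similarly, to download a file from CX1 to the local machine (both paths here are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp username@login.hpc.ic.ac.uk:/path/file_name /local/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;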
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
The [https://icl-rcs-user-guide.readthedocs.io/en/latest/hpc/ RCS Wiki Page on ReadTheDocs] contains the information needed. [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/get-support/contact-us/ The support page], an [https://wiki.imperial.ac.uk/display/HPC/Attend+a+clinic online clinic] and [https://wiki.imperial.ac.uk/display/HPC/Courses courses from the Graduate School] are also available. To check the status of CX1, use the [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== Environmental Variables and Disk Space ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;env&amp;lt;/code&amp;gt; to list all the environmental variables - be careful, the output is &#039;&#039;&#039;HUGE&#039;&#039;&#039;. Some useful environmental variables (a short usage example follows the list):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;${USER}&amp;lt;/code&amp;gt; The user&#039;s college account, i.e., login credential.&lt;br /&gt;
* &amp;lt;code&amp;gt;${HOME}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/home&#039;, or &#039;~&#039;, which has 1 TB of disk space for data backups. &lt;br /&gt;
* &amp;lt;code&amp;gt;${EPHEMERAL}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/ephemeral&#039; Temporary, unlimited disk space whose files last for 30 days. Suitable for running calculations.&lt;br /&gt;
* &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; The directory of an executable can be appended to it for quick access. The Environment Modules package (see below) can do that automatically.&lt;br /&gt;
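&lt;br /&gt;
For instance, to print a variable&#039;s value and to check how much space the home directory currently uses (the &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt; command may take a while on a large directory):&lt;br /&gt;
&lt;br /&gt;
 ~$ echo ${EPHEMERAL}&lt;br /&gt;
 ~$ du -sh ${HOME}&lt;br /&gt;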
&lt;br /&gt;
==== Software Management ====&lt;br /&gt;
&lt;br /&gt;
The Environment Modules&amp;lt;ref&amp;gt;https://modules.readthedocs.io/en/latest/&amp;lt;/ref&amp;gt; package is implemented on CX1 to manage computing software (see the following section for an introduction). Basic commands are listed below; a typical session is shown after the list:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; List the available modules&lt;br /&gt;
* &amp;lt;code&amp;gt;module load mod_name&amp;lt;/code&amp;gt; Load a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module rm mod_name&amp;lt;/code&amp;gt; Remove a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt; List all the loaded modules in the current environment&lt;br /&gt;
* &amp;lt;code&amp;gt;module help mod_name&amp;lt;/code&amp;gt; Check the instructions of the module &#039;mod_name&#039;&lt;br /&gt;
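&lt;br /&gt;
A typical session might look like the following (the module name is purely illustrative - pick a real one from the &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; output):&lt;br /&gt;
&lt;br /&gt;
 ~$ module avail&lt;br /&gt;
 ~$ module load some_application/1.0&lt;br /&gt;
 ~$ module list&lt;br /&gt;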
&lt;br /&gt;
&#039;&#039;Note: There is a CRYSTAL14 module in the list. For users in NMH&#039;s group, the latest CRYSTAL edition is available, so do not use that module.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Job Partition Guide ==== &lt;br /&gt;
A hierarchy of job classes is designed for the optimal efficiency of CX1. The current job partition guide is available on the [https://icl-rcs-user-guide.readthedocs.io/en/latest/hpc/queues/classes-of-jobs/ RCS Wiki Page].&lt;br /&gt;
&lt;br /&gt;
==== Batch System ====&lt;br /&gt;
&lt;br /&gt;
The PBS batch system &amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Portable_Batch_System&amp;lt;/ref&amp;gt; is used on CX1 (see the following section for what a batch system is). Basic PBS commands are listed below; an example session follows the list:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;availability&amp;lt;/code&amp;gt; Check the availability of computational resources  &lt;br /&gt;
* &amp;lt;code&amp;gt;qsub filename.qsub&amp;lt;/code&amp;gt; Submit the job &#039;filename&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; Check the status of submitted jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;qdel jobID&amp;lt;/code&amp;gt; Kill the job with the ID number &#039;jobID&#039;&lt;br /&gt;
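&lt;br /&gt;
For example (&amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; prints the job ID that &amp;lt;code&amp;gt;qdel&amp;lt;/code&amp;gt; needs):&lt;br /&gt;
&lt;br /&gt;
 ~$ qsub filename.qsub&lt;br /&gt;
 ~$ qstat&lt;br /&gt;
 ~$ qdel jobID&lt;br /&gt;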
&lt;br /&gt;
To examine the queue status across the whole system, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== A General Job Submission Script ====&lt;br /&gt;
A general job submission script for CX1 has been developed by the author. See the [https://github.com/cmsg-icl/HPC-job-submission GitHub repository] of CMSG for details. Parameterised software includes: CRYSTAL14/17/23, Quantum Espresso 7, LAMMPS, GROMACS and GULP6.&lt;br /&gt;
&lt;br /&gt;
== Basic Concepts of Parallel Computing ==&lt;br /&gt;
A brief introduction to parallel computing is given in this section, taking CX1, a medium-sized general-purpose cluster, as an example.&lt;br /&gt;
&lt;br /&gt;
=== Divide a job: Nodes, Processors and Threads ===&lt;br /&gt;
&lt;br /&gt;
;Node&lt;br /&gt;
:A group of CPUs, possibly with GPUs / coprocessors for acceleration. Memory and input files are shared by the processors in the same node, so a node can be considered an independent computer. Communication between nodes is achieved by an ultra-fast network, which is the bottleneck of modern clusters. &lt;br /&gt;
&lt;br /&gt;
;Processor&lt;br /&gt;
:The unit that handles a &#039;process&#039;, also known as the &#039;central processing unit&#039;, or CPU. Processors in the same node communicate via shared memory. &lt;br /&gt;
&lt;br /&gt;
;Thread&lt;br /&gt;
:A subdivision of a process. Multiple threads in the same process share the resources allocated to that process. &lt;br /&gt;
&lt;br /&gt;
The figure on the right-hand side illustrates the hierarchy of node, processor and thread. &#039;&#039;&#039;Note:&#039;&#039;&#039; the word &#039;processor&#039; is not a very accurate term here; &#039;process&#039; would be better (I am just too lazy to update that figure). Many modern CPUs support sub-CPU threading, which means the number of logical CPUs is larger than the number of physical CPUs, so it is possible to have multiple threads within 1 processor. Conversely, it is also possible to use multiple processors for 1 process, or even 1 thread. &lt;br /&gt;
&lt;br /&gt;
[[File:Job_Partition.png|450px|right|Job Partition]]&lt;br /&gt;
&lt;br /&gt;
==== Multiple processes vs multiple threads ====&lt;br /&gt;
&lt;br /&gt;
From the figure above, it is not difficult to tell a &#039;process&#039; from a &#039;thread&#039;: a process is the smallest unit of resource allocation; a thread is part of a process. The idea of a &#039;thread&#039; was introduced to address the huge speed difference between CPU and RAM. A CPU is typically several orders of magnitude faster than RAM, so the bottleneck of a process is usually loading the required environment from RAM rather than the computation in the CPU. By using multiple threads in the same process, various branches of the same program can be executed simultaneously. The shared environment therefore does not need to be read from RAM multiple times, and the start-up cost of a thread is much smaller than that of a process. &lt;br /&gt;
&lt;br /&gt;
However, multithreading is not always advantageous. A technical prerequisite is that the program must be developed for multithreaded purposes. Python, for example, offers only pseudo-multithreading (due to its global interpreter lock), while Java supports true multithreading. Sometimes multithreading can lead to catastrophic results: since threads share the same resource allocation (CPU, RAM, I/O, etc.), when one thread fails, the whole process fails as well. In contrast, with multiple processes, the other processes are protected if one process fails. &lt;br /&gt;
&lt;br /&gt;
In practice, users can run each process on a cluster either in serial (i.e., number of threads = 1) or in parallel (i.e., number of threads &amp;gt; 1). However, &#039;&#039;&#039;the former is recommended&#039;&#039;&#039; because resource management is more robust. Besides the problem mentioned above, multithreading might lead to problems such as memory leaks when running programs that are not developed for multithreading or that pull in unsuitable packages (here is [https://docs.archer2.ac.uk/known-issues/#oom-due-to-memory-leak-in-libfabric-added-2022-02-23 a famous issue] with libfabric on ARCHER2 identified in early 2022).&lt;br /&gt;
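&lt;br /&gt;
For OpenMP-threaded codes, the number of threads per process is usually controlled through the &amp;lt;code&amp;gt;OMP_NUM_THREADS&amp;lt;/code&amp;gt; environmental variable; for example, to force one thread per process:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export OMP_NUM_THREADS=1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;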
&lt;br /&gt;
==== More nodes vs more CPUs ====&lt;br /&gt;
&lt;br /&gt;
When the allocated memory permits, in my experience it is usually better to use more CPUs/processes per node, considering that all nodes have independent memory spaces and inter-node communication goes over the wired network. It almost always takes longer to coordinate nodes than to coordinate processors within the same node.&lt;br /&gt;
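&lt;br /&gt;
As an illustration (the numbers are placeholders and the exact limits depend on the queue), the two PBS resource requests below both ask for 32 CPU cores in total, but the first keeps them on a single node while the second spreads them over 4 nodes (the resources in a &#039;select&#039; statement are per node chunk):&lt;br /&gt;
&lt;br /&gt;
 #PBS -l select=1:ncpus=32:mem=64gb&lt;br /&gt;
 #PBS -l select=4:ncpus=8:mem=16gb&lt;br /&gt;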
&lt;br /&gt;
=== The internal coordinator: What is MPI ===&lt;br /&gt;
&lt;br /&gt;
The Message Passing Interface, or MPI, is a standard for communicating and transferring data between nodes, and therefore between distributed memories. It is used via MPI libraries. The most popular implementations include: &lt;br /&gt;
&lt;br /&gt;
* MPICH &amp;lt;ref&amp;gt;https://www.mpich.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* Intel MPI &amp;lt;ref&amp;gt;https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html#gs.xld8oa&amp;lt;/ref&amp;gt; - a popular implementation of MPICH especially optimised for Intel CPUs&lt;br /&gt;
* OpenMPI &amp;lt;ref&amp;gt;https://www.open-mpi.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* OpenMP &amp;lt;ref&amp;gt;https://www.openmp.org/&amp;lt;/ref&amp;gt; - not MPI; parallelization based on shared memory, so it only works within a single node; can be used for multithreading&lt;br /&gt;
&lt;br /&gt;
In practice, hybrid parallelization combining MPI and OpenMP to run multithreaded jobs on clusters is allowed, though sometimes not recommended. The first process (a process, not necessarily a whole node or processor) is usually allocated for I/O, and the rest are used for parallel computing.&lt;br /&gt;
&lt;br /&gt;
So far, MPI officially supports only C/C++ and FORTRAN, which largely explains why most parallel computing software is based on these languages. To launch an executable in parallel, one should use &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;.&lt;br /&gt;
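&lt;br /&gt;
For example, to launch 16 MPI processes of a (hypothetical) executable that reads from an input file and writes to an output file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ mpiexec -np 16 my_parallel_code &amp;lt; input.file &amp;gt; output.file&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;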
&lt;br /&gt;
=== Secure your storage: Tmp memory, Work directory and home directory ===&lt;br /&gt;
&lt;br /&gt;
Almost all modern clusters have separate disk spaces for different purposes, namely temporary memory (scratch space on the node), the work directory and the home directory. This originates again from the famous speed difference between CPU and RAM/ROM. Two distinct kinds of disks are used, respectively, to improve the overall efficiency and to secure important data:&lt;br /&gt;
&lt;br /&gt;
* For temporary memory, large and fast disks are used. It is allocated per job request and is not accessible from login nodes. Everything is erased after the job terminates. &lt;br /&gt;
* For the work directory, large and fast disks are also used. Data stored in the work directory is usually not backed up and, in the case of CX1, is automatically cleaned after a fixed period.  &lt;br /&gt;
* For the home directory, mechanical disks with slower read/write speed but better robustness are used. Files in the home space are usually backed up.&lt;br /&gt;
&lt;br /&gt;
For large clusters like ARCHER2 &amp;lt;ref&amp;gt;https://www.archer2.ac.uk/&amp;lt;/ref&amp;gt;, the work directory and the home directory are completely separated, i.e., the home directory is visible only to login nodes, while the work directory is visible to both job and login nodes. Submitting jobs from the home directory is prohibited. For more flexible clusters like Imperial CX1, submitting jobs from the home directory and access to the home directory from job nodes are allowed, but storing temporary files there during a calculation is still not recommended because of the potential impact on other files and the reduced overall efficiency. (It is nothing new for CX1 users to receive RDS failure notification emails.)&lt;br /&gt;
&lt;br /&gt;
=== Setup your environment: What does an application need? ===&lt;br /&gt;
&lt;br /&gt;
==== Executable ==== &lt;br /&gt;
In theory, binary executables should all be stored in &#039;/usr/bin&#039;. This never happens in practice, unless you are a fanatical fundamentalist of the early Linux releases. To guide your system to the desired executable, you can either laboriously type its absolute path every time you need it, or add its directory to the &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; environmental variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export PATH=${PATH}:path_to_bin&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running any executable in parallel requires MPI to coordinate all the processes/threads, so the path to the MPI launcher is also required. Besides, many scientific codes require other specific environmental variables, for example for linear algebra packages. Read their documentation for further information.&lt;br /&gt;
&lt;br /&gt;
==== .lib/.a/.o files ==== &lt;br /&gt;
&lt;br /&gt;
When writing a script, you might need some extra packages to do more complex jobs. Those packages are developed by experts in computer science and can be called with a single line of code. The same thing happened when applications like CRYSTAL and ONETEP were developed. &lt;br /&gt;
&lt;br /&gt;
However, scientific computing codes are usually distributed as source code. Source code in FORTRAN/C/C++ needs to be compiled into a binary executable, and there are 2 options when linking libraries during compilation:&lt;br /&gt;
&lt;br /&gt;
# Include the whole library in the executable as long as one of its functions is called, also known as a &#039;static lib&#039;.&lt;br /&gt;
# Only include a &#039;table of contents&#039; when compiling, also known as a &#039;dynamic lib&#039;. The libraries needed are stored separately in &#039;.dll/.so&#039; files, making it possible for multiple applications to share the same lib.&lt;br /&gt;
&lt;br /&gt;
Details about compilation are beyond the scope of this post. The point is: when running a dynamically linked application, information must be provided to help the loader find the required libs. This can be specified by: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:path_to_lib&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
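&lt;br /&gt;
To check which shared libraries a dynamically linked executable needs, and whether they can be found, the standard &amp;lt;code&amp;gt;ldd&amp;lt;/code&amp;gt; tool can be used (the path below is a placeholder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ldd path_to_bin/executable_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;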
&lt;br /&gt;
For statically linked applications, you usually need not worry about this - but the size of the compiled executable might make you wonder whether there is an alternative way.&lt;br /&gt;
&lt;br /&gt;
==== Conflicts ====&lt;br /&gt;
&lt;br /&gt;
If multiple applications with similar functions are installed on the system, such as the Intel compilers and GCC, or OpenMPI and MPICH - a common situation on shared computing resources - improper earlier settings may lead to the wrong application, or the wrong version, being picked up. To avoid this, the path to the undesired application or lib should be removed from the environmental variables.&lt;br /&gt;
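&lt;br /&gt;
To check which copy of a command is actually picked up, and to inspect the current search path (the executable name is a placeholder):&lt;br /&gt;
&lt;br /&gt;
 ~$ which executable_name&lt;br /&gt;
 ~$ echo ${PATH}&lt;br /&gt;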
&lt;br /&gt;
==== Environment Modules ====&lt;br /&gt;
&lt;br /&gt;
Environment Modules &amp;lt;ref&amp;gt;http://modules.sourceforge.net/&amp;lt;/ref&amp;gt; is a popular piece of software that manages the necessary environmental setup, and the conflicts, for each application. It can add or remove environmental variables through simple commands (such as &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;module rm&amp;lt;/code&amp;gt;) driven by modulefiles written in the Tool Command Language (Tcl)&amp;lt;ref&amp;gt;https://www.tcl.tk/&amp;lt;/ref&amp;gt;. The default directory for modulefiles is given in the environmental variable &amp;lt;code&amp;gt;${MODULEPATH}&amp;lt;/code&amp;gt;, but files in other directories can also be loaded via their absolute paths.&lt;br /&gt;
&lt;br /&gt;
Both Imperial CX1 and ARCHER2 adopt this tool and offer pre-compiled applications through it.&lt;br /&gt;
&lt;br /&gt;
=== The external coordinator: What is a batch system ===&lt;br /&gt;
&lt;br /&gt;
Always bear in mind that computational resources are limited, so you need to request a reasonable amount of resources for your job. Besides, the cluster also needs to account for your budget, coordinate jobs submitted by various users, and make the best of the available resources. While a job is running, you may also want to check its status. All of this is handled by the batch system.&lt;br /&gt;
&lt;br /&gt;
In practice, a Linux shell script is needed. Parameters for the batch system are set in the commented lines at the top of the file. After the user submits the script to the batch system, the system will:&lt;br /&gt;
&lt;br /&gt;
# Examine the parameters  &lt;br /&gt;
# Allocate and coordinate the requested resources  &lt;br /&gt;
# Set up the environment, e.g. environmental variables and package dependencies, and synchronise the same settings across all allocated nodes&lt;br /&gt;
# Launch the parallel calculation - see the MPI section&lt;br /&gt;
# Post-process&lt;br /&gt;
&lt;br /&gt;
Note that a &#039;walltime&#039; is usually required for a batch job, i.e., the maximum time the job is allowed to run. The job will be &#039;killed&#039; when the elapsed time exceeds the walltime, and the remaining part of the script will not be executed. The &amp;lt;code&amp;gt;timeout&amp;lt;/code&amp;gt; command can be used to set a separate time limit for a specific command within the script.&lt;br /&gt;
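&lt;br /&gt;
As an illustration only, a minimal PBS script might look like the sketch below; the resource numbers, module name, executable and file names are placeholders, and the current queue limits should be checked against the RCS documentation:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #PBS -l select=1:ncpus=8:mem=16gb&lt;br /&gt;
 #PBS -l walltime=01:00:00&lt;br /&gt;
 # load the required module (placeholder name) and move to the submission directory&lt;br /&gt;
 module load some_application/1.0&lt;br /&gt;
 cd ${PBS_O_WORKDIR}&lt;br /&gt;
 # launch the parallel calculation (see the MPI section above)&lt;br /&gt;
 mpiexec -np 8 my_parallel_code &amp;lt; input.file &amp;gt; output.file&lt;br /&gt;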
&lt;br /&gt;
Common batch systems include PBS and Slurm &amp;lt;ref&amp;gt;https://slurm.schedmd.com/overview.html&amp;lt;/ref&amp;gt;. The Imperial cluster CX1 and the MMM Hub Young (managed by UCL) &amp;lt;ref&amp;gt;http://mmmhub.ac.uk/young/&amp;lt;/ref&amp;gt; use PBS; ARCHER2 and Tianhe-2 LvLiang (天河二号-吕梁) use Slurm. Tutorials on batch systems are not covered here, since they are heavily tailored to specific machines - modifications are usually made to improve efficiency. Refer to the specific user documentation for more information.&lt;br /&gt;
&lt;br /&gt;
=== How to run a job in parallel: Things to consider ===&lt;br /&gt;
&lt;br /&gt;
Successfully setting up and submitting a batch job script means that you no longer need this tutorial. Before you get there, some considerations might be important:&lt;br /&gt;
&lt;br /&gt;
* How large is my system? Is it efficient to use the resources I requested? (Note that the scaling is not linear... Refer to [https://tutorials.crystalsolutions.eu/tutorial.html?td=tuto_HPC&amp;amp;tf=tuto_hpc#scale this test] on CRYSTAL17.)  &lt;br /&gt;
* To which queue should I submit my job? Is it too long/not applicable/not available?&lt;br /&gt;
* Is it safe to use multi-threading?  &lt;br /&gt;
* Is it memory, GPU etc. demanding?  &lt;br /&gt;
* Roughly how long will it take?  &lt;br /&gt;
* What is my budget code? Do I have enough resources?  &lt;br /&gt;
* Which MPI release version is my code compatible with? Should I load a module or set variables?  &lt;br /&gt;
* Does my code need any other specific environmental setup?  &lt;br /&gt;
* Do I have any post-processing scripts to run after the MPI part finishes? How long do they take?&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814664</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814664"/>
		<updated>2024-02-13T20:40:38Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and compute nodes of Imperial HPC CX1. On Imperial HPC HX1, by the time this page was updated (Feb. 2024), RDS-CMSG is not accessible from compute nodes; only the login node can access the disk after the pilot phase &amp;lt;ref&amp;gt;https://icl-rcs-user-guide.readthedocs.io/en/latest/hpc/pilot/hx1/&amp;lt;/ref&amp;gt;. Therefore software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and then uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; files (if any) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Step 0 : Environment Modules ==&lt;br /&gt;
Environment Modules &amp;lt;ref&amp;gt;https://modules.readthedocs.io/en/latest/&amp;lt;/ref&amp;gt; manages the compilation and running environment, ensuring all dependencies are loaded when an executable is launched. For software on RDS-CMSG, a module file is typically prepared, which can be loaded with the &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; command. The user is strongly advised to add the following line to their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file on CX1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export MODULEPATH=&amp;quot;/rds/general/project/cmsg/live/etc/modulefiles:${MODULEPATH}&amp;quot;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That helps the &#039;module&#039; executable find the module files hosted on RDS-CMSG, so &#039;module&#039; commands no longer require the real path. But in case readers skip this section (as they always do), the real path is always used in the following text. Use the following command to update the environment variable &amp;lt;code&amp;gt;${MODULEPATH}&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command would have to be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a, OpenMP&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties please see the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available in the group&#039;s [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice (it is useful for developing new features). On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to use a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can also be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
After configuration, use &amp;lt;code&amp;gt;source ~/.bashrc&amp;lt;/code&amp;gt; to enable alias commands.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command sets up a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation, on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
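&lt;br /&gt;
If the generated &#039;.qsub&#039; file still needs to be submitted manually, the standard PBS command can be used, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ qsub mgo.qsub&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;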
&lt;br /&gt;
For detailed instructions and testing cases, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
=== PC version ===&lt;br /&gt;
A few variants are available in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL&amp;lt;/code&amp;gt;, including executables compiled for the Windows Subsystem for Linux (WSL) and macOS. Please check the readme files saved in the individual directories for specifications. &lt;br /&gt;
&lt;br /&gt;
A set of statically linked &#039;crystal&#039; and &#039;properties&#039; executables is available in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-intel2023-x86-intel2023&amp;lt;/code&amp;gt;; these have no prerequisites and can be run in serial on either Linux or macOS with x86 CPUs.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower, but it provides compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2, OpenMP&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable it, if a &#039;settings&#039; file already exists, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and the &#039;settings&#039; file to set the executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a, OpenMP&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
Executables are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-EBFOSS2023a-HX1-ompi&amp;lt;/code&amp;gt;. To setup the running environment on HX1, if the user has a copy of &#039;settings&#039; file, they need to modify the following keywords:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : No default, but please use &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt; unless a self-compiled version is used.&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify the 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23; please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission files are configured for both CX1 and HX1, but only the CX1 executables are shared, as these executables are highly dependent on the environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a, OpenMP&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
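&lt;br /&gt;
One convenient way to point Quantum Espresso at this directory (an assumption about your workflow - alternatively set &#039;pseudo_dir&#039; in the input file) is the &amp;lt;code&amp;gt;ESPRESSO_PSEUDO&amp;lt;/code&amp;gt; environment variable, which pw.x consults when &#039;pseudo_dir&#039; is not given:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export ESPRESSO_PSEUDO=/rds/general/project/cmsg/live/etc/QE_PseudoP&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;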
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Press enter for the default setup, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executable and to download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Press enter for the default setup, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (by default nothing else is available, so this is rarely useful)&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ONETEP v6.1.9.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1. Job submission script not available.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a, OpenMP, FFTW, Scalapack&lt;br /&gt;
* Libxc: Yes, version 5.1.2&lt;br /&gt;
&lt;br /&gt;
Executable path is &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/ONETEP/6.1.2.2__foss2022a/bin&amp;lt;/code&amp;gt;. Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/ONETEP_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
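&lt;br /&gt;
Since no job submission script is provided, one option is to append the executable directory to &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; (or call the executables by their absolute path), for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export PATH=${PATH}:/rds/general/project/cmsg/live/app/ONETEP/6.1.2.2__foss2022a/bin&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;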
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814663</id>
		<title>Imperial CX1: Instructions and basic concepts of parallel computing</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814663"/>
		<updated>2024-02-13T20:38:43Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* A General Job Submission Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This tutorial is divided into 2 separate sections. In the fist section, introductions and available resources of CX1 are listed and classified. Since the [https://icl-rcs-user-guide.readthedocs.io/en/latest/ Research Computing Service (RCS)] team already developed great tutorials on their webpages, this part functions as a guide towards RCS webpages with necessary supplementary comments. In the second section, basic concepts of parallel computing and explanations of important terms are introduced. The main focus of this section is helping beginners to understand how high-performance computers (HPC) works on the basis of their daily practise. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This tutorial was initially written between Feb. and Mar. 2022 to be shared within the group for induction and training proposes &amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Connect-to-the-Imperial-Cluster/&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Structure-and-usage-of-clusters/&amp;lt;/ref&amp;gt;. Special thanks to Mr K. Tallat-Kelpsa, Ms A. Arber, Dr G. Mallia and Prof N. M. Harrison.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Introduction to CX1 ==&lt;br /&gt;
CX1 is the old name of the first HPC that served the whole college. New facilities (known as CX2) were gradually installed and integrated with the old system (CX3, a rather short-lived domain), while CX1 remains to be the most popular name that generally referring to the college-owned clusters. To grant a student access to CX1, the group PI can, on behave of that student, ask RCS team to add the specified account into HPC active user mailing list.&lt;br /&gt;
=== Connect to CX1 ===&lt;br /&gt;
CX1 is typically accessed via ssh (secured shell). Linux command line (Linux &amp;amp; MacOS) / sub-system (Windows 10,11) &amp;lt;ref&amp;gt;https://learn.microsoft.com/en-us/windows/wsl/install&amp;lt;/ref&amp;gt; / SSH client (such as XShell &amp;lt;ref&amp;gt;https://www.xshell.com/en/xshell/&amp;lt;/ref&amp;gt;) can be used. VPN is needed for off-campus users. &lt;br /&gt;
&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/connect-communicate/remote-access/virtual-private-network-vpn/ Step-by-step guide to setup VPN]&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/getting-started/using-ssh/ How to use ssh]&lt;br /&gt;
&lt;br /&gt;
In linux command line, use the following command to connect CX1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh -XY username@login.hpc.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
P.S. &amp;lt;code&amp;gt;-XY&amp;lt;/code&amp;gt; option can be omitted for most of cases, if you do not need GUI to run that program.&lt;br /&gt;
&lt;br /&gt;
Alternatively, when the VPN service is unstable or even not available, it is possible to channel through the gateway of the cluster via a client, which is an &#039;agent&#039;. To visit CX1, type the previous command in the client&#039;s command line.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh username@sshgw.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; command to upload / download files, which is similar to &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; command. For example, to upload a file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp /local/path/file_name username@login.hpc.ic.ac.uk:/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
The [https://icl-rcs-user-guide.readthedocs.io/en/latest/hpc/ RCS Wiki Page in ReadTheDocs] contains information needed. [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/get-support/contact-us/ The support page], [https://wiki.imperial.ac.uk/display/HPC/Attend+a+clinic online clinic] and [https://wiki.imperial.ac.uk/display/HPC/Courses courses from graduate school] are available. To examine the status of CX1, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== Environmental Variables and Disk Space ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;env&amp;lt;/code&amp;gt; to access all the environmental variables - be careful, the output is &#039;&#039;&#039;HUGE&#039;&#039;&#039;. Some useful environmental variables:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;${USER}&amp;lt;/code&amp;gt; The user&#039;s college account, i.e., login credential.&lt;br /&gt;
* &amp;lt;code&amp;gt;${HOME}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/home&#039;, or &#039;~&#039;, which has 1 TB disk space for data backups. &lt;br /&gt;
* &amp;lt;code&amp;gt;${EPHEMERAL}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/ephemeral&#039; Temporal unlimited disk space lasting for 30 days. Suitable for running calculations.&lt;br /&gt;
* &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; Path to the executable can be attached for quick access. The Environment Modules package (see below) can automatically do that.&lt;br /&gt;
&lt;br /&gt;
==== Software Management ====&lt;br /&gt;
&lt;br /&gt;
The Environment Modules&amp;lt;ref&amp;gt;https://modules.readthedocs.io/en/latest/&amp;lt;/ref&amp;gt; package is implemented on CX1 to manage computing software (see the following section for introductions). Basic commands are listed below:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; List the available modules&lt;br /&gt;
* &amp;lt;code&amp;gt;module load mod_name&amp;lt;/code&amp;gt; Load a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module rm mod_name&amp;lt;/code&amp;gt; Remove a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt; List all the loaded modules in the current environment&lt;br /&gt;
* &amp;lt;code&amp;gt;module help mod_name&amp;lt;/code&amp;gt; Check the instructions of the module &#039;mod_name&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: There is a CRYSTAL14 module in the list. For users in NMH&#039;s group, the latest CRYSTAL edition is available, so do not use that module.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Job Partition Guide ==== &lt;br /&gt;
A hierachy of jobs is designed for the optimial efficiency of CX1. The current job partition guide is available on [https://icl-rcs-user-guide.readthedocs.io/en/latest/hpc/queues/classes-of-jobs/ RCS Wiki Page]&lt;br /&gt;
&lt;br /&gt;
==== Batch System ====&lt;br /&gt;
&lt;br /&gt;
The PBS batch system &amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Portable_Batch_System&amp;lt;/ref&amp;gt; is used on CX1 (see the following section for the meaning of batch system). Basic commands of PBS are listed below:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;availability&amp;lt;/code&amp;gt; Check the availability of computational resources  &lt;br /&gt;
* &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; filename.qsub&amp;lt;/code&amp;gt; Submit the job &#039;filename&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; Check the state of submitted jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;qdel jobID&amp;lt;/code&amp;gt; Kill the process with the ID number &#039;jobID&#039;&lt;br /&gt;
&lt;br /&gt;
To examine the queue status across the whole system, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== A General Job Submission Script ====&lt;br /&gt;
A general job submission script for CX1 is developed by the author himself. See the [https://github.com/cmsg-icl/HPC-job-submission GitHub repository] of CMSG for details. Parameterised software includes: CRYSTAL14/17/23, Quantum Espresso 7, LAMMPS, GROMACS, GULP6.&lt;br /&gt;
&lt;br /&gt;
== Basic Concepts of Parallel Computing ==&lt;br /&gt;
A brief introduction to parallel computing is given in this section by taking CX1, a medium-sized general-propose cluster, as an example.&lt;br /&gt;
&lt;br /&gt;
=== Divide a job: Nodes, Processors and Threads ===&lt;br /&gt;
&lt;br /&gt;
;Node&lt;br /&gt;
:A bunch of CPUs and probably with GPUs / coprocessors for acceleration. Memory and input files are shared by processors in the same node, so a node can be considered as an independent computer. The communication between nodes are achieved by ultra-fast network, which is the bottleneck of modern clusters. &lt;br /&gt;
&lt;br /&gt;
;Processor&lt;br /&gt;
:The unit to deal with a &#039;process&#039;, also known as &#039;central processing unit&#039;, or CPU. Processors in the same node communicate via shared memory. &lt;br /&gt;
&lt;br /&gt;
;Thread&lt;br /&gt;
:Subdivision of a process. Multiple threads in the same process share the resources allocated to the CPU. &lt;br /&gt;
&lt;br /&gt;
The figure on the right hand side illustrates the hierarchy of node, processor, and thread. &#039;&#039;&#039;Note:&#039;&#039;&#039; The word &#039;processor&#039; is not a very accurate term. Might be better with &#039;process&#039; (I am just too lazy to update that figure). Many modern CPUs supports sub-CPU threading, which means the number of logical CPUs is larger than physical CPUs, so it is possible to have multiple threads within 1 processor. However, it is also possible to use multiple processors for 1 process, or even 1 thread. &lt;br /&gt;
&lt;br /&gt;
[[File:Job_Partition.png|450px|right|Job Partition]]&lt;br /&gt;
&lt;br /&gt;
==== Multiple processes vs multiple threads ====&lt;br /&gt;
&lt;br /&gt;
From the figure above, it is not difficult to distinguish the differences between a &#039;process&#039; and a &#039;thread&#039;: process is the smallest unit for resource allocation; thread is part of a process. The idea of &#039;thread&#039; is introduced to address the huge difference in the speed of CPU and RAM. CPU is always several orders of magnitude faster than RAM, so typically the bottleneck of a process is loading the required environment from RAM, rather than computations in CPU. By using multiple threads in the same process, various branches of the same program can be executed simultaneously. Therefore, the shared environmental requirements doesn&#039;t need to be read from RAM for multiple times, and the loading time for threads is much smaller than for processes. &lt;br /&gt;
&lt;br /&gt;
However, multithreading is not always advantageous. A technical prerequisite is that the program should be developed for multithread proposes. Python, for example, is a pseudo-multithread language, while Java is a real one. Sometimes multithreading can lead to catastrophic results. Since threads share the same resource allocation (CPU, RAM, I/O, etc.), when a thread fails, the whole process fails as well. Comparatively, in multiple processes, other processes will be protected if a process fails. &lt;br /&gt;
&lt;br /&gt;
In practice, users can either run each process in serial (i.e., number of threads = 1), or in parallel (i.e., number of threads &amp;gt; 1) on clusters. However, &#039;&#039;&#039;the former one is recommended&#039;&#039;&#039;, because of more secured resource managements. The latter is not advantageous. Besides the problem mentioned above, it might lead to problems such as memory leak when running programs either: not developed for multithreading / requires improper packages (Here is [https://docs.archer2.ac.uk/known-issues/#oom-due-to-memory-leak-in-libfabric-added-2022-02-23 a famous issue] with libfabric on ARCHER2 identified in early 2022).&lt;br /&gt;
&lt;br /&gt;
==== More nodes vs more CPUs ====&lt;br /&gt;
&lt;br /&gt;
When the allocated memory permits, from my experience, using more CPUs/processes per node is usually a better idea, considering that all nodes have independent memory space and the inter-node communications are achieved by wired networks. It almost always takes longer to coordinate nodes than to coordinate processors within the same node.&lt;br /&gt;
&lt;br /&gt;
=== The internal coordinator: What is MPI ===&lt;br /&gt;
&lt;br /&gt;
Message passing interface, or MPI, is a standard for communicating and transferring data between nodes and therefore distributed memories. It is utilised via MPI libraries. The most popular implementations include: &lt;br /&gt;
&lt;br /&gt;
* MPICH &amp;lt;ref&amp;gt;https://www.mpich.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* Intel MPI &amp;lt;ref&amp;gt;https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html#gs.xld8oa&amp;lt;/ref&amp;gt; - a popular implementation of MPICH especially optimised for Intel CPUs&lt;br /&gt;
* OpenMPI &amp;lt;ref&amp;gt;https://www.open-mpi.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* OpenMP &amp;lt;ref&amp;gt;https://www.openmp.org/&amp;lt;/ref&amp;gt; - Not MPI; parallelization based on shared memory, so only implemented in a single node; can be used for multithreading&lt;br /&gt;
&lt;br /&gt;
In practice, a hybrid parallelization combining MPI and OpenMP to run multithread jobs on cluster is allowed, though sometimes not recommended. The first process (probably not a node or a processor) is usually allocated for I/O, and the rest is used for parallel computing.&lt;br /&gt;
&lt;br /&gt;
So far, MPI only supports C/C++ and FORTRAN, which explains why all parallel computing software is based on these languages. To launch an executable in parallel, one should use: &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Secure your storage: Tmp memory, Work directory and home directory ===&lt;br /&gt;
&lt;br /&gt;
Almost all the modern clusters have separate disk spaces for differently proposes, namely, temporary memory, work directory and home directory. This originates again from the famous speed difference between CPU and RAM/ROM. 2 distinctly kinds of disks are used respectively to improve the overall efficiency and secure important data:&lt;br /&gt;
&lt;br /&gt;
* For temporary memory large, high-frequency disks are used. It is allocated by job requests, which is not accessible by login nodes. Everything is erased after the job is terminated. &lt;br /&gt;
* For work directory, large, high-frequency disks are used. Data stored in work directory is usually not backed up, and in the case of CX1, will be automatically cleaned after a fixed time length.  &lt;br /&gt;
* For home directory, mechanical disks with slower read/write frequency but better robustness are used. Usually files in home space are backed up.&lt;br /&gt;
&lt;br /&gt;
For large clusters like ARCHER2 &amp;lt;ref&amp;gt;https://www.archer2.ac.uk/&amp;lt;/ref&amp;gt;, the work directory and the home directory are completely separated, i.e., directory is only viable by login nodes; work directory is viable by both job and login nodes. Job submission in home directory is prohibited. For more flexible clusters like Imperial CX1, submitting jobs in home directory and visiting of home directory by job nodes are allowed, but storing temporary files during calculation in home directory is still not recommended because of the potential influence on other files and the reduced overall efficiency. (And it is not something new for CX1 users to receive the RDS failure news email)&lt;br /&gt;
&lt;br /&gt;
=== Setup your environment: What does an application need? ===&lt;br /&gt;
&lt;br /&gt;
==== Executable ==== &lt;br /&gt;
Binary executables should, in theory, all be stored in &#039;/usr/bin&#039;. This rarely happens in practice, unless you are a fanatical fundamentalist of the early Linux releases. To point your system to the desired executable, you can either laboriously type its absolute path every time you need it or add its directory to the environmental variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export PATH=${PATH}:path_to_bin&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running any executable in parallel requires MPI to coordinate all the processes/threads, so the path to the MPI launcher is also required. Besides, many scientific codes require other specific environmental variables, for example to locate linear algebra packages. Read their documentation for further information.&lt;br /&gt;
&lt;br /&gt;
==== .lib/.a/.o files ==== &lt;br /&gt;
&lt;br /&gt;
When writing a script, you might need some extra packages to do more complex jobs. Those packages are developed by experts and can be called with a single line of code. The same thing happened when applications like CRYSTAL and ONETEP were developed. &lt;br /&gt;
&lt;br /&gt;
However, scientific computing codes are usually distributed as source code. Source code in FORTRAN/C/C++ needs to be compiled into a binary executable, and there are two options for handling libraries when compiling:&lt;br /&gt;
&lt;br /&gt;
# Include the whole package in the executable whenever one of its functions is called, also known as a &#039;static lib&#039;.&lt;br /&gt;
# Only include a &#039;table of contents&#039; when compiling, also known as a &#039;dynamic lib&#039;. The packages needed are stored separately in &#039;.dll/.so&#039; files, making it possible for multiple applications to share the same lib.&lt;br /&gt;
&lt;br /&gt;
Details of compilation are beyond the scope of this post. The point is: when running a dynamically linked application, information must be provided so that the code can find the libs it needs. This can be specified by: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:path_to_lib&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For statically linked applications, you usually need not worry about this - although the size of the compiled executable might make you wonder whether there is an alternative.&lt;br /&gt;
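&lt;br /&gt;
To see which shared libs a dynamically linked executable expects, and which of them are currently found, the standard &amp;lt;code&amp;gt;ldd&amp;lt;/code&amp;gt; tool can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ldd /path/to/executable&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;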
&lt;br /&gt;
==== Conflicts ====&lt;br /&gt;
&lt;br /&gt;
Improper earlier settings may lead to the wrong application, or the wrong version, being picked up when multiple applications with similar functions are installed on the system, such as the Intel compilers and GCC, or OpenMPI and MPICH - a common situation on shared computing resources. To avoid this, the path to the undesired application or lib should be removed from the environmental variables.&lt;br /&gt;
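&lt;br /&gt;
A quick way to check which copy will actually be picked up before editing any variables (these commands only inspect the current environment):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ which mpirun    # the first match in ${PATH} is the one that will run&lt;br /&gt;
~$ echo ${PATH}    # inspect the search order and spot unwanted entries&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;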
&lt;br /&gt;
==== Environment Modules ====&lt;br /&gt;
&lt;br /&gt;
Environment Modules &amp;lt;ref&amp;gt;http://modules.sourceforge.net/&amp;lt;/ref&amp;gt; is a popular tool for managing the necessary environmental setups and conflicts for each application. It can easily add or remove environmental variables through commands (such as &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;module rm&amp;lt;/code&amp;gt;) and modulefiles written in the Tool Command Language (TCL)&amp;lt;ref&amp;gt;https://www.tcl.tk/&amp;lt;/ref&amp;gt;. The default directory for modulefiles is given in the environmental variable &amp;lt;code&amp;gt;${MODULEPATH}&amp;lt;/code&amp;gt;, but files in other directories can also be loaded by their absolute path.&lt;br /&gt;
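&lt;br /&gt;
For instance (both module names below are hypothetical), a module can be loaded by name from &amp;lt;code&amp;gt;${MODULEPATH}&amp;lt;/code&amp;gt; or by the absolute path of its modulefile:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load gcc                                  # found via ${MODULEPATH}&lt;br /&gt;
~$ module load /home/username/modulefiles/mycode    # loaded by absolute path&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;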
&lt;br /&gt;
Both Imperial CX1 and ARCHER2 adopt this tool, through which pre-compiled applications are offered.&lt;br /&gt;
&lt;br /&gt;
=== The external coordinator: What is a batch system ===&lt;br /&gt;
&lt;br /&gt;
Always bear in mind that computational resources are limited, so you need to request a reasonable amount of resources for your job. Besides, the cluster also needs to account for your budget, coordinate jobs submitted by different users, and make the best use of the available resources. While a job is running, you may also want to check its status. All of this is handled by the batch system.&lt;br /&gt;
&lt;br /&gt;
In practice, a Linux shell script is needed, with the parameters for the batch system set in commented lines at the top of the file (a minimal sketch is given after the list below). After the user submits the script to the batch system, the system will:&lt;br /&gt;
&lt;br /&gt;
# Examine the parameters  &lt;br /&gt;
# Allocate and coordinate the requested resources  &lt;br /&gt;
# Set up the environment, such as environmental variables and package dependencies, and sync the same settings to all nodes&lt;br /&gt;
# Launch the parallel calculation - see the MPI part above&lt;br /&gt;
# Post-process&lt;br /&gt;
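&lt;br /&gt;
A minimal sketch of a PBS job script for a single node with 24 cores (the resource numbers and the module name are illustrative, not CX1 defaults):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l select=1:ncpus=24:mem=100gb&lt;br /&gt;
#PBS -l walltime=24:00:00&lt;br /&gt;
cd ${PBS_O_WORKDIR}                  # the directory the job was submitted from&lt;br /&gt;
module load mpi                      # hypothetical module name&lt;br /&gt;
mpiexec -n 24 /path/to/executable&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Such a script is submitted with &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; and monitored with &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; (see the PBS commands listed above).&lt;br /&gt;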
&lt;br /&gt;
Note that a &#039;walltime&#039; is usually required for a batch job, i.e., the maximum time the job is allowed to run. The job will be &#039;killed&#039; (terminated) once the elapsed time exceeds the walltime, and the rest of the script will not be executed. The &amp;lt;code&amp;gt;timeout&amp;lt;/code&amp;gt; command can be used to set a shorter time limit for a specific command within the script.&lt;br /&gt;
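&lt;br /&gt;
For example, to stop the MPI part a little before a 48-hour walltime so that the remaining post-processing lines can still run (the durations are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ timeout 47h mpiexec -n 24 /path/to/executable&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;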
&lt;br /&gt;
Common batch systems include PBS and Slurm &amp;lt;ref&amp;gt;https://slurm.schedmd.com/overview.html&amp;lt;/ref&amp;gt;. On the Imperial cluster CX1 and on MMM Hub Young (managed by UCL) &amp;lt;ref&amp;gt;http://mmmhub.ac.uk/young/&amp;lt;/ref&amp;gt;, PBS is used; on ARCHER2 and Tianhe-2 LvLiang (天河二号-吕梁), Slurm is used. Tutorials on batch systems are not covered here, since they are heavily tailored to specific machines - modifications are usually made to enhance efficiency. Refer to the specific user documentation for more information.&lt;br /&gt;
&lt;br /&gt;
Successfully setting up and submitting a batch job script means that you no longer need this tutorial. Before you get there, a few considerations are important:&lt;br /&gt;
&lt;br /&gt;
* How large is my system? Is it efficient to use the resources I requested (note that the scaling is not linear - refer to [https://tutorials.crystalsolutions.eu/tutorial.html?td=tuto_HPC&amp;amp;tf=tuto_hpc#scale this test] on CRYSTAL17)?  &lt;br /&gt;
* To which queue should I submit my job? Is it too long/not applicable/not available?&lt;br /&gt;
* Is it safe to use multi-threading?  &lt;br /&gt;
* Is it memory, GPU etc. demanding?  &lt;br /&gt;
* Roughly how long will it take?  &lt;br /&gt;
* What is my budget code? Do I have enough resources?  &lt;br /&gt;
* Which MPI release version is my code compatible with? Should I load a module or set variables?  &lt;br /&gt;
* Does my code need any other specific environmental setup?  &lt;br /&gt;
* Do I have any post-processing script to run after the MPI part finishes? How long does it take?&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814660</id>
		<title>Imperial CX1: Instructions and basic concepts of parallel computing</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814660"/>
		<updated>2024-02-13T20:36:35Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This tutorial is divided into two separate sections. In the first section, introductions to and available resources of CX1 are listed and classified. Since the [https://icl-rcs-user-guide.readthedocs.io/en/latest/ Research Computing Service (RCS)] team has already developed great tutorials on their webpages, this part functions as a guide towards the RCS webpages with necessary supplementary comments. In the second section, basic concepts of parallel computing and explanations of important terms are introduced. The main focus of this section is to help beginners understand how high-performance computers (HPC) work, on the basis of their daily practice. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This tutorial was initially written between Feb. and Mar. 2022 to be shared within the group for induction and training purposes &amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Connect-to-the-Imperial-Cluster/&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Structure-and-usage-of-clusters/&amp;lt;/ref&amp;gt;. Special thanks to Mr K. Tallat-Kelpsa, Ms A. Arber, Dr G. Mallia and Prof N. M. Harrison.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Introduction to CX1 ==&lt;br /&gt;
CX1 is the old name of the first HPC that served the whole college. New facilities (known as CX2) were gradually installed and integrated with the old system (CX3, a rather short-lived domain), while CX1 remains the most popular name, generally referring to the college-owned clusters. To grant a student access to CX1, the group PI can, on behalf of that student, ask the RCS team to add the specified account to the HPC active user mailing list.&lt;br /&gt;
=== Connect to CX1 ===&lt;br /&gt;
CX1 is typically accessed via SSH (secure shell). The Linux command line (Linux &amp;amp; MacOS) / the Windows Subsystem for Linux (Windows 10, 11) &amp;lt;ref&amp;gt;https://learn.microsoft.com/en-us/windows/wsl/install&amp;lt;/ref&amp;gt; / an SSH client (such as XShell &amp;lt;ref&amp;gt;https://www.xshell.com/en/xshell/&amp;lt;/ref&amp;gt;) can be used. A VPN is needed for off-campus users. &lt;br /&gt;
&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/connect-communicate/remote-access/virtual-private-network-vpn/ Step-by-step guide to setup VPN]&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/getting-started/using-ssh/ How to use ssh]&lt;br /&gt;
&lt;br /&gt;
In a Linux command line, use the following command to connect to CX1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh -XY username@login.hpc.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
P.S. The &amp;lt;code&amp;gt;-XY&amp;lt;/code&amp;gt; option can be omitted in most cases, if you do not need a GUI for the program you run.&lt;br /&gt;
&lt;br /&gt;
Alternatively, when the VPN service is unstable or unavailable, it is possible to tunnel through the SSH gateway of the cluster, which acts as an &#039;agent&#039;. Log in to the gateway with the command below, then type the previous ssh command in the gateway&#039;s command line.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh username@sshgw.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
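P.S. If your local SSH client supports the &amp;lt;code&amp;gt;-J&amp;lt;/code&amp;gt; (jump host) option, the two steps can probably be combined into a single command (a sketch, not from the official guide):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh -J username@sshgw.ic.ac.uk username@login.hpc.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;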
Use the &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; command to upload / download files; its syntax is similar to the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; commands. For example, to upload a file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp /local/path/file_name username@login.hpc.ic.ac.uk:/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
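Similarly, to download a file from CX1 to the local machine (the paths here are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp username@login.hpc.ic.ac.uk:/path/file_name /local/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;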
=== Usage ===&lt;br /&gt;
The [https://icl-rcs-user-guide.readthedocs.io/en/latest/hpc/ RCS Wiki Page in ReadTheDocs] contains information needed. [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/get-support/contact-us/ The support page], [https://wiki.imperial.ac.uk/display/HPC/Attend+a+clinic online clinic] and [https://wiki.imperial.ac.uk/display/HPC/Courses courses from graduate school] are available. To examine the status of CX1, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== Environmental Variables and Disk Space ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;env&amp;lt;/code&amp;gt; to list all the environmental variables - be careful, the output is &#039;&#039;&#039;HUGE&#039;&#039;&#039;. Some useful environmental variables are listed below, followed by a short usage example:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;${USER}&amp;lt;/code&amp;gt; The user&#039;s college account, i.e., login credential.&lt;br /&gt;
* &amp;lt;code&amp;gt;${HOME}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/home&#039;, or &#039;~&#039;, which has 1 TB of disk space for data backups. &lt;br /&gt;
* &amp;lt;code&amp;gt;${EPHEMERAL}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/ephemeral&#039;, temporary unlimited disk space where files are kept for 30 days. Suitable for running calculations.&lt;br /&gt;
* &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; Paths to executables can be appended to it for quick access. The Environment Modules package (see below) can do that automatically.&lt;br /&gt;
&lt;br /&gt;
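For example, these variables can be used directly in the command line or in job scripts (a quick check, assuming a bash shell):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ echo ${HOME} ${EPHEMERAL}&lt;br /&gt;
~$ cd ${EPHEMERAL}&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;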
==== Software Management ====&lt;br /&gt;
&lt;br /&gt;
The Environment Modules&amp;lt;ref&amp;gt;https://modules.readthedocs.io/en/latest/&amp;lt;/ref&amp;gt; package is implemented on CX1 to manage computing software (see the following section for introductions). Basic commands are listed below, followed by a short example:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; List the available modules&lt;br /&gt;
* &amp;lt;code&amp;gt;module load mod_name&amp;lt;/code&amp;gt; Load a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module rm mod_name&amp;lt;/code&amp;gt; Remove a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt; List all the loaded modules in the current environment&lt;br /&gt;
* &amp;lt;code&amp;gt;module help mod_name&amp;lt;/code&amp;gt; Check the instructions of the module &#039;mod_name&#039;&lt;br /&gt;
&lt;br /&gt;
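For example, to check what is available and load a module (the module name &#039;example/1.0&#039; below is a placeholder, not a real CX1 module):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module avail&lt;br /&gt;
~$ module load example/1.0&lt;br /&gt;
~$ module list&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;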
&#039;&#039;Note: There is a CRYSTAL14 module in the list. For users in NMH&#039;s group, the latest CRYSTAL edition is available, so do not use that module.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Job Partition Guide ==== &lt;br /&gt;
A hierarchy of job classes is designed for the optimal efficiency of CX1. The current job partition guide is available on the [https://wiki.imperial.ac.uk/display/HPC/New+Job+sizing+guidance RCS Wiki Page].&lt;br /&gt;
&lt;br /&gt;
==== Batch System ====&lt;br /&gt;
&lt;br /&gt;
The PBS batch system &amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Portable_Batch_System&amp;lt;/ref&amp;gt; is used on CX1 (see the following section for the meaning of batch system). Basic commands of PBS are listed below, with a short example after the list:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;availability&amp;lt;/code&amp;gt; Check the availability of computational resources  &lt;br /&gt;
* &amp;lt;code&amp;gt;qsub filename.qsub&amp;lt;/code&amp;gt; Submit the job &#039;filename&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; Check the state of submitted jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;qdel jobID&amp;lt;/code&amp;gt; Kill the process with the ID number &#039;jobID&#039;&lt;br /&gt;
&lt;br /&gt;
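For example, a typical workflow (the file name &#039;filename.qsub&#039; and the job ID are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ qsub filename.qsub&lt;br /&gt;
~$ qstat&lt;br /&gt;
~$ qdel 1234567&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;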
To examine the queue status across the whole system, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== A General Job Submission Script ====&lt;br /&gt;
A general job submission script for CX1 has been developed by the author himself. See the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub repository] of CMSG for details. Parameterised software includes: CRYSTAL17/23, Quantum Espresso 7, LAMMPS, GROMACS, GULP6.&lt;br /&gt;
&lt;br /&gt;
== Basic Concepts of Parallel Computing ==&lt;br /&gt;
A brief introduction to parallel computing is given in this section by taking CX1, a medium-sized general-purpose cluster, as an example.&lt;br /&gt;
&lt;br /&gt;
=== Divide a job: Nodes, Processors and Threads ===&lt;br /&gt;
&lt;br /&gt;
;Node&lt;br /&gt;
:A group of CPUs, possibly with GPUs / coprocessors for acceleration. Memory and input files are shared by processors in the same node, so a node can be considered an independent computer. The communication between nodes is achieved by an ultra-fast network, which is the bottleneck of modern clusters. &lt;br /&gt;
&lt;br /&gt;
;Processor&lt;br /&gt;
:The unit to deal with a &#039;process&#039;, also known as &#039;central processing unit&#039;, or CPU. Processors in the same node communicate via shared memory. &lt;br /&gt;
&lt;br /&gt;
;Thread&lt;br /&gt;
:Subdivision of a process. Multiple threads in the same process share the resources allocated to the CPU. &lt;br /&gt;
&lt;br /&gt;
The figure on the right-hand side illustrates the hierarchy of node, processor, and thread. &#039;&#039;&#039;Note:&#039;&#039;&#039; The word &#039;processor&#039; is not a very accurate term; &#039;process&#039; might be better (I am just too lazy to update that figure). Many modern CPUs support sub-CPU threading, which means the number of logical CPUs is larger than the number of physical CPUs, so it is possible to have multiple threads within 1 processor. However, it is also possible to use multiple processors for 1 process, or even 1 thread. &lt;br /&gt;
&lt;br /&gt;
[[File:Job_Partition.png|450px|right|Job Partition]]&lt;br /&gt;
&lt;br /&gt;
==== Multiple processes vs multiple threads ====&lt;br /&gt;
&lt;br /&gt;
From the figure above, it is not difficult to distinguish a &#039;process&#039; from a &#039;thread&#039;: a process is the smallest unit for resource allocation; a thread is part of a process. The idea of a &#039;thread&#039; was introduced to address the huge difference in speed between CPU and RAM. A CPU is typically several orders of magnitude faster than RAM, so the bottleneck of a process is usually loading the required environment from RAM, rather than computation in the CPU. By using multiple threads in the same process, various branches of the same program can be executed simultaneously. Therefore, the shared environment does not need to be read from RAM multiple times, and the loading time for threads is much smaller than for processes. &lt;br /&gt;
&lt;br /&gt;
However, multithreading is not always advantageous. A technical prerequisite is that the program should be developed for multithreading purposes. Python, for example, offers only pseudo-multithreading (due to its global interpreter lock), while Java supports true multithreading. Sometimes multithreading can lead to catastrophic results: since threads share the same resource allocation (CPU, RAM, I/O, etc.), when one thread fails, the whole process fails as well. In contrast, with multiple processes, the other processes are protected if one process fails. &lt;br /&gt;
&lt;br /&gt;
In practice, users can run each process either in serial (i.e., number of threads = 1) or in parallel (i.e., number of threads &amp;gt; 1) on clusters. However, &#039;&#039;&#039;the former is recommended&#039;&#039;&#039;, because resource management is more robust. Besides the problem mentioned above, multithreading might lead to issues such as memory leaks when running programs that are not developed for multithreading or that rely on unsuitable packages (here is [https://docs.archer2.ac.uk/known-issues/#oom-due-to-memory-leak-in-libfabric-added-2022-02-23 a famous issue] with libfabric on ARCHER2 identified in early 2022).&lt;br /&gt;
&lt;br /&gt;
==== More nodes vs more CPUs ====&lt;br /&gt;
&lt;br /&gt;
When the allocated memory permits, from my experience, using more CPUs/processes per node is usually a better idea, considering that all nodes have independent memory space and the inter-node communications are achieved by wired networks. It almost always takes longer to coordinate nodes than to coordinate processors within the same node.&lt;br /&gt;
&lt;br /&gt;
=== The internal coordinator: What is MPI ===&lt;br /&gt;
&lt;br /&gt;
Message passing interface, or MPI, is a standard for communicating and transferring data between nodes and therefore distributed memories. It is utilised via MPI libraries. The most popular implementations include: &lt;br /&gt;
&lt;br /&gt;
* MPICH &amp;lt;ref&amp;gt;https://www.mpich.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* Intel MPI &amp;lt;ref&amp;gt;https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html#gs.xld8oa&amp;lt;/ref&amp;gt; - a popular implementation of MPICH especially optimised for Intel CPUs&lt;br /&gt;
* OpenMPI &amp;lt;ref&amp;gt;https://www.open-mpi.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* OpenMP &amp;lt;ref&amp;gt;https://www.openmp.org/&amp;lt;/ref&amp;gt; - Not MPI; parallelization based on shared memory, so only implemented in a single node; can be used for multithreading&lt;br /&gt;
&lt;br /&gt;
In practice, a hybrid parallelization combining MPI and OpenMP to run multithreaded jobs on clusters is allowed, though sometimes not recommended. The first process (probably not a node or a processor) is usually allocated for I/O, and the rest are used for parallel computing.&lt;br /&gt;
&lt;br /&gt;
So far, MPI officially supports only C/C++ and Fortran, which explains why most parallel scientific computing software is based on these languages. To launch an executable in parallel, use &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
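For example, a minimal sketch of launching a hybrid MPI + OpenMP run (the executable name &#039;my_code&#039; and the process / thread counts are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export OMP_NUM_THREADS=2&lt;br /&gt;
~$ mpiexec -np 4 ./my_code&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;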
=== Secure your storage: Tmp memory, Work directory and home directory ===&lt;br /&gt;
&lt;br /&gt;
Almost all modern clusters have separate disk spaces for different purposes, namely temporary memory, the work directory and the home directory. This originates again from the famous speed difference between CPU and RAM/ROM. 2 distinct kinds of disks are used, respectively, to improve the overall efficiency and to secure important data:&lt;br /&gt;
&lt;br /&gt;
* For temporary memory, large, high-speed disks are used. It is allocated per job request and is not accessible from login nodes. Everything is erased after the job terminates. &lt;br /&gt;
* For the work directory, large, high-speed disks are used. Data stored in the work directory is usually not backed up and, in the case of CX1, will be automatically cleaned after a fixed period.  &lt;br /&gt;
* For the home directory, mechanical disks with slower read/write speed but better robustness are used. Usually files in the home space are backed up.&lt;br /&gt;
&lt;br /&gt;
For large clusters like ARCHER2 &amp;lt;ref&amp;gt;https://www.archer2.ac.uk/&amp;lt;/ref&amp;gt;, the work directory and the home directory are completely separated, i.e., the home directory is only visible to login nodes, while the work directory is visible to both job and login nodes. Job submission from the home directory is prohibited. For more flexible clusters like Imperial CX1, submitting jobs in the home directory and access to the home directory from job nodes are allowed, but storing temporary files in the home directory during calculations is still not recommended because of the potential influence on other files and the reduced overall efficiency. (And it is nothing new for CX1 users to receive RDS failure notification emails.)&lt;br /&gt;
&lt;br /&gt;
=== Setup your environment: What does an application need? ===&lt;br /&gt;
&lt;br /&gt;
==== Executable ==== &lt;br /&gt;
Binary executables should, theoretically, all be stored in &#039;/usr/bin&#039;. This never happens in practice, unless you are a fanatical fundamentalist of the early Linux releases. To guide your system to the desired executable, you can either laboriously type its absolute path every time you need it or add the path to the environmental variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export PATH=${PATH}:path_to_bin&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running any executable in parallel requires MPI to coordinate all the processes/threads, so the path to the MPI executables is also required. Besides, many scientific codes require other specific environmental variables, such as paths to linear algebra packages. Read their documentation for further information.&lt;br /&gt;
&lt;br /&gt;
==== .lib/.a/.o files ==== &lt;br /&gt;
&lt;br /&gt;
When writing a script, you might need some extra packages to do more complex jobs. Those packages are developed by experts in computer science and can be called with a line of code. The same thing happened when applications like CRYSTAL and ONETEP were developed. &lt;br /&gt;
&lt;br /&gt;
However, scientific computing codes are usually distributed in the form of source code. Source code in Fortran/C/C++ needs to be compiled into a binary executable. There are 2 options when compiling:&lt;br /&gt;
&lt;br /&gt;
# Include the whole package as long as one of its functions is called, also known as a &#039;static lib&#039;.&lt;br /&gt;
# Only include a &#039;table of contents&#039; when compiling, also known as a &#039;dynamic lib&#039;. The packages needed are separately stored in &#039;.dll/.so&#039; files, making it possible for multiple applications to share the same lib.&lt;br /&gt;
&lt;br /&gt;
Details about compilation are beyond the scope of this post. The thing is: when running a dynamically linked application, information should be given to help the code find the libs needed. This can be specified by: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:path_to_lib&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For statically linked applications, usually you need not worry about it - but the volume of the compiled executable might make you wonder whether there is an alternative way.&lt;br /&gt;
&lt;br /&gt;
==== Conflicts ====&lt;br /&gt;
&lt;br /&gt;
Improper previous settings may lead to the wrong application, or the wrong version, being used if multiple applications with similar functions are installed on the system, such as the Intel compiler and GCC, or OpenMPI and MPICH - a common situation on shared computing resources. To avoid this, the path to the undesired application or lib should be removed from the environmental variables.&lt;br /&gt;
&lt;br /&gt;
==== Environment Modules ====&lt;br /&gt;
&lt;br /&gt;
Environment Modules &amp;lt;ref&amp;gt;http://modules.sourceforge.net/&amp;lt;/ref&amp;gt; is a popular tool for managing the necessary environment setups and conflicts for each application. It can easily add or remove environmental variables via commands (such as &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;module rm&amp;lt;/code&amp;gt;) and modulefiles written in the Tool Command Language (TCL)&amp;lt;ref&amp;gt;https://www.tcl.tk/&amp;lt;/ref&amp;gt;. The default directory of modulefiles is given in the environmental variable &amp;lt;code&amp;gt;${MODULEPATH}&amp;lt;/code&amp;gt;, but files in other directories can also be loaded by their absolute path.&lt;br /&gt;
&lt;br /&gt;
Both Imperial CX1 and ARCHER2 adopt this tool, through which pre-compiled applications are offered.&lt;br /&gt;
&lt;br /&gt;
=== The external coordinator: What is a batch system ===&lt;br /&gt;
&lt;br /&gt;
Always bear in mind that computational resources are limited, so you need to request a reasonable amount of resources for your job. Besides, the cluster also needs to calculate your budget, coordinate jobs submitted by various users, and make the best use of available resources. When a job is running, you may also want to check its status. All of this is handled by the batch system.&lt;br /&gt;
&lt;br /&gt;
In practice, a Linux shell script is needed; parameters for the batch system are set in the commented lines at the top of the file (a minimal sketch is given below). After the user submits the script to the batch system, the system will:&lt;br /&gt;
&lt;br /&gt;
# Examine the parameters  &lt;br /&gt;
# Allocate and coordinate the requested resources  &lt;br /&gt;
# Set up the environment, such as environmental variables and package dependencies, and sync the same settings to all nodes&lt;br /&gt;
# Launch a parallel calculation - see the MPI section above&lt;br /&gt;
# Post-process&lt;br /&gt;
&lt;br /&gt;
Note that a &#039;walltime&#039; is usually required for a batch job, i.e., the maximum allowed running time. The job will be &#039;killed&#039;, or suspended, when the elapsed time exceeds the walltime, and the rest of the script will not be executed. The &amp;lt;code&amp;gt;timeout&amp;lt;/code&amp;gt; command can be used to set a separate time limit for a specific command.&lt;br /&gt;
&lt;br /&gt;
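A minimal sketch of such a script for a PBS system is given below. The resource values, module name and executable are placeholders; check the RCS documentation for the exact directives accepted on CX1.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #PBS -l select=1:ncpus=8:mem=16gb&lt;br /&gt;
 #PBS -l walltime=02:00:00&lt;br /&gt;
 cd ${PBS_O_WORKDIR}&lt;br /&gt;
 module load example/1.0&lt;br /&gt;
 # leave some walltime for post-processing&lt;br /&gt;
 timeout 6900 mpiexec -np 8 ./my_code &amp;gt; my_code.out&lt;br /&gt;
 cp my_code.out ${HOME}/&lt;br /&gt;
&lt;br /&gt;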
Common batch systems include PBS and Slurm &amp;lt;ref&amp;gt;https://slurm.schedmd.com/overview.html&amp;lt;/ref&amp;gt;. For the Imperial cluster CX1 and MMM Hub Young (managed by UCL) &amp;lt;ref&amp;gt;http://mmmhub.ac.uk/young/&amp;lt;/ref&amp;gt;, the PBS system is implemented; for ARCHER2 and Tianhe-2 LvLiang (天河二号-吕梁), Slurm is implemented. Tutorials of batch systems are not covered here, since they are heavily tailored to specific machines - usually modifications are made to enhance efficiency. Refer to the specific user documentation for more information.&lt;br /&gt;
&lt;br /&gt;
Successfully setting up and submitting a batch job script means that you no longer need this tutorial. Before being able to do that, some considerations might be important:&lt;br /&gt;
&lt;br /&gt;
* How large is my system? Is it efficient to use the resources I requested (note that it is not a linear-scaling problem... refer to [https://tutorials.crystalsolutions.eu/tutorial.html?td=tuto_HPC&amp;amp;tf=tuto_hpc#scale this test] on CRYSTAL17)?  &lt;br /&gt;
* To which queue should I submit my job? Is it too long/not applicable/not available?&lt;br /&gt;
* Is it safe to use multi-threading?  &lt;br /&gt;
* Is it memory, GPU etc. demanding?  &lt;br /&gt;
* Roughly how long will it take?  &lt;br /&gt;
* What is my budget code? Do I have enough resources?  &lt;br /&gt;
* Which MPI release version is my code compatible with? Should I load a module or set variables?  &lt;br /&gt;
* Does my code need any other specific environment setup?  &lt;br /&gt;
* Do I have any post-processing script after MPI part is finished? How long does it take?&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814626</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814626"/>
		<updated>2023-12-01T00:03:02Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information on shared software available for CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible there; only the login node can visit the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore, software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; files (if present) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Step 0 : Environment Modules ==&lt;br /&gt;
Environment Modules &amp;lt;ref&amp;gt;https://modules.readthedocs.io/en/latest/&amp;lt;/ref&amp;gt; manages the compilation and running environment to ensure all dependencies are loaded when an executable is launched. For software on RDS-CMSG, typically a module file is prepared, which can be called with the &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; command. The user is strongly advised to add the following line to their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file on CX1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
export MODULEPATH=&amp;quot;/rds/general/project/cmsg/live/etc/modulefiles:${MODULEPATH}&amp;quot;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That helps the &#039;module&#039; executable find module files hosted on RDS-CMSG, so &#039;module&#039; commands no longer require the full path. But in case readers skip this section (as they always do), the full path is always used in the following text. Use the following command to update the environment variable &amp;lt;code&amp;gt;${MODULEPATH}&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
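&lt;br /&gt;
To check that the new path has been picked up (a quick sanity check, optional):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ echo ${MODULEPATH}&lt;br /&gt;
~$ module avail&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;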
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a, OpenMP&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to use the default values, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job submission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
After configuration, use &amp;lt;code&amp;gt;source ~/.bashrc&amp;lt;/code&amp;gt; to enable alias commands.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command runs a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and the data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
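A massively parallel job can be prepared in the same way, for example on 2 nodes (a sketch, assuming &amp;lt;code&amp;gt;MPPcrys23&amp;lt;/code&amp;gt; accepts the same options as &amp;lt;code&amp;gt;Pcrys23&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ MPPcrys23 -in mgo.d12 -wt 02:00 -nd 2&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;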
For detailed instructions and testing cases, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
=== PC version ===&lt;br /&gt;
A few variants are available in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL&amp;lt;/code&amp;gt;, including executables compiled for the Windows Subsystem for Linux (WSL) and MacOSx. Please check the readme files saved in individual directories for specifications. &lt;br /&gt;
&lt;br /&gt;
A set of statically linked &#039;crystal&#039; and &#039;properties&#039; executables is available in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-intel2023-x86-intel2023&amp;lt;/code&amp;gt;, which have no prerequisites and can be run in serial on either Linux or MacOSx with an x86 CPU.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to Intel version above, GNU version is slightly slower to allow for compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2, OpenMP&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, GNU version is not the default option for CX1. To enable GNU version, if there is a &#039;settings&#039; file, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a, OpenMP&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
Executables are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-EBFOSS2023a-HX1-ompi&amp;lt;/code&amp;gt;. To setup the running environment on HX1, if the user has a copy of &#039;settings&#039; file, they need to modify the following keywords:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : No default, but please use &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt; unless a self-compiled version is used.&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (the one with &#039;-HX1&#039; in the same directory of the GitHub repo) and specify the 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and the &#039;MPP&#039; massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission files are configured for both CX1 and HX1, but only the CX1 executable is shared, as these executables are highly environment-dependent.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a, OpenMP&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Press enter for the default setup, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
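&lt;br /&gt;
For example, to prepare a parallel pw.x job on a single node (a sketch, assuming the commands accept the same options as the CRYSTAL commands above; &#039;scf.in&#039; is a placeholder input file):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in scf.in -wt 02:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;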
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Press enter for the default setup, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (by default nothing else is configured, so rarely useful)&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
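For example (a sketch, assuming &amp;lt;code&amp;gt;Pglp6&amp;lt;/code&amp;gt; accepts the same options as the commands above; &#039;opt.gin&#039; is a placeholder input file):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pglp6 -in opt.gin -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;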
&lt;br /&gt;
== ONETEP v6.1.9.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1. Job submission script not available.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a, OpenMP, FFTW, Scalapack&lt;br /&gt;
* Libxc: Yes, version 5.1.2&lt;br /&gt;
&lt;br /&gt;
Executable path is &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/ONETEP/6.1.2.2__foss2022a/bin&amp;lt;/code&amp;gt;. Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/ONETEP_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814625</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814625"/>
		<updated>2023-11-30T23:48:30Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CRYSTAL23 v1 (Intel) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information on shared software available for CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible there; only the login node can visit the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore, software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotential). Executable can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and uploaded to HX1. It is user&#039;s responsibility to set up an effective running environment according to &#039;readme&#039; files (if there is one) saved in the same, or upper, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a, OpenMP&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to use the default values, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job submission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command runs a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and the data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For detailed instructions and testing cases, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
=== PC version ===&lt;br /&gt;
A few variants are available in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL&amp;lt;/code&amp;gt;, including executables compiled for the Windows Subsystem for Linux (WSL) and MacOSx. Please check the readme files saved in individual directories for specifications. &lt;br /&gt;
&lt;br /&gt;
A set of statically linked &#039;crystal&#039; and &#039;properties&#039; executables is available in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-intel2023-x86-intel2023&amp;lt;/code&amp;gt;, which do not have any prerequisite and can be run in serial on either Linux or MacOSx with x86 CPU.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to Intel version above, GNU version is slightly slower to allow for compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2, OpenMP&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, GNU version is not the default option for CX1. To enable GNU version, if there is a &#039;settings&#039; file, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a, OpenMP&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
Executables are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-EBFOSS2023a-HX1-ompi&amp;lt;/code&amp;gt;. To setup the running environment on HX1, if the user has a copy of &#039;settings&#039; file, they need to modify the following keywords:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : No default, but please use &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt; unless a self-compiled version is used.&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (The one with &#039;-HX1&#039; in the same directory of Github repo) and specify 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and the &#039;MPP&#039; massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission files are configured for both CX1 and HX1, but only the CX1 executable is shared, as these executables are highly environment-dependent.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a, OpenMP&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Press enter for the default setup, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Press enter for the default setup, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for user defined executable (by default nothing else, so not useful).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== ONETEP v6.1.9.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1. Job submission script not available.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a, OpenMP, FFTW, Scalapack&lt;br /&gt;
* Libxc: Yes, version 5.1.2&lt;br /&gt;
&lt;br /&gt;
Executable path is &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/ONETEP/6.1.2.2__foss2022a/bin&amp;lt;/code&amp;gt;. Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/ONETEP_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814624</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814624"/>
		<updated>2023-11-30T23:40:45Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information on shared software available for CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible there; only the login node can visit the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore, software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotential). Executable can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and uploaded to HX1. It is user&#039;s responsibility to set up an effective running environment according to &#039;readme&#039; files (if there is one) saved in the same, or upper, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a, OpenMP&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to use the default values, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job submission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel CRYSTAL23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after that job has finished, the following command generates a submission file for a parallel properties calculation based on &#039;mgo-band.d3&#039; and on data from the previous &#039;mgo&#039; SCF calculation, running on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
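The other commands in the table are expected to accept the same options. For instance, a massively parallel (MPPcrystal) run on 2 nodes with a 2-hour walltime could be generated as sketched below; the option syntax is assumed to be the same as for Pcrys23, see the documentation linked below for the full list:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ MPPcrys23 -in mgo.d12 -wt 02:00 -nd 2&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;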
For detailed instructions and testing cases, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared with the Intel version above, the GNU version is slightly slower, but it provides compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2, OpenMP&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option on CX1. To enable it, if a &#039;settings&#039; file already exists, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration script (same as above) and specify the 2 values above during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that, by default, there is no command for &#039;MPPproperties&#039;. The user has to add an alias to their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file and an entry to the &#039;settings&#039; file defining the executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
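Once the alias and the settings entry are in place (after re-sourcing &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;), a submission file can be generated in the same way as with the built-in commands, for example (a sketch, assuming the same options as Pprop23):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ MPPprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;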
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a, OpenMP&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
Executables are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-EBFOSS2023a-HX1-ompi&amp;lt;/code&amp;gt;. To set up the running environment on HX1, if the user already has a copy of the &#039;settings&#039; file, they need to modify the following keywords:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user has placed the executables.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : No default, but please use &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt; unless a self-compiled version is used.&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration script (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify the 2 values during configuration. Other options are consistent with CRYSTAL23 on CX1.&lt;br /&gt;
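&lt;br /&gt;
Since RDS-CMSG is visible from the CX1 login node, one way to transfer the executables is to copy them across directly from there. A minimal sketch is given below; &amp;lt;hx1-login&amp;gt; and the target directory &#039;~/apps/CRYSTAL23&#039; are placeholders for the actual HX1 login host and the user&#039;s preferred location (which should then be set as &#039;&#039;&#039;EXEDIR&#039;&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
 # on the CX1 login node; &amp;lt;hx1-login&amp;gt; is a placeholder for the HX1 login host&lt;br /&gt;
 ~$ scp -r /rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-EBFOSS2023a-HX1-ompi &amp;lt;hx1-login&amp;gt;:~/apps/CRYSTAL23&lt;br /&gt;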
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23: please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission scripts are configured for both CX1 and HX1, but only the CX1 executables are shared, as these executables are highly dependent on the environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a, OpenMP&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Press enter to accept the defaults, which typically work fine. Please note that moving files between the temporary and home directories is disabled, as QE has its own built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
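The commands follow the same pattern as for CRYSTAL. For example, a pw.x job on a hypothetical input file &#039;mgo.scf.in&#039; could be prepared with the command below (a sketch, assuming the same options as the CRYSTAL commands):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in mgo.scf.in -wt 02:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;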
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run OpenKIM-enabled executables and to download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
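With the module loaded, new models can then be fetched with the standard KIM API utility. A minimal sketch (the model identifier is a placeholder to be taken from openkim.org; where the installed model ends up depends on the local KIM API collection configuration):&lt;br /&gt;
&lt;br /&gt;
 # install a model into the user collection; replace the name with a real model ID from openkim.org&lt;br /&gt;
 ~$ kim-api-collections-management install user &amp;lt;Model_Name&amp;gt;&lt;br /&gt;
&lt;br /&gt;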
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Press enter to accept the defaults, which typically work fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (no other executable is shipped by default, so rarely useful)&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
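As with the other packages, a parallel GULP job on a hypothetical input file &#039;example.gin&#039; could be prepared as follows (a sketch, assuming the same options as the CRYSTAL commands):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pglp6 -in example.gin -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;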
&lt;br /&gt;
== ONETEP v6.1.9.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1. Job submission script not available.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a, OpenMP, FFTW, Scalapack&lt;br /&gt;
* Libxc: Yes, version 5.1.2&lt;br /&gt;
&lt;br /&gt;
Executable path is &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/ONETEP/6.1.2.2__foss2022a/bin&amp;lt;/code&amp;gt;. Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/ONETEP_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
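&lt;br /&gt;
Since no job submission script is provided, the executables have to be referenced manually, e.g. inside a PBS script or an interactive session. A minimal sketch (the exact binary name should be checked in the directory first):&lt;br /&gt;
&lt;br /&gt;
 # list the available binaries, then make them visible on the PATH&lt;br /&gt;
 ~$ ls /rds/general/project/cmsg/live/app/ONETEP/6.1.2.2__foss2022a/bin&lt;br /&gt;
 ~$ export PATH=/rds/general/project/cmsg/live/app/ONETEP/6.1.2.2__foss2022a/bin:${PATH}&lt;br /&gt;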
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814623</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814623"/>
		<updated>2023-11-30T22:52:19Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* HX1 version */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information of shared software available for CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under &#039;CMSG&#039; project of Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred as RDS-CMSG below), which is accessible for both login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, by the time this page is upated (Nov. 2023), it is not accessible there. Only login node can visit RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore software on RDS-CMSG are classified into 2 catagories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotential). Executable can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and uploaded to HX1. It is user&#039;s responsibility to set up an effective running environment according to &#039;readme&#039; files (if there is one) saved in the same, or upper, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use default vaules, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job sumbmission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed everytime you login. To aviod this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For detailed instructions and testing cases, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to Intel version above, GNU version is slightly slower to allow for compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, GNU version is not the default option for CX1. To enable GNU version, if there is a &#039;settings&#039; file, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
Executables are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-EBFOSS2023a-HX1-ompi&amp;lt;/code&amp;gt;. To setup the running environment on HX1, if the user has a copy of &#039;settings&#039; file, they need to modify the following keywords:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : No default, but please use &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt; unless a self-compiled version is used.&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (The one with &#039;-HX1&#039; in the same directory of Github repo) and specify 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compilor : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please be noted that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission file configured for CX1 and HX1, but only CX1 executable is shared as these executables are highly dependent on environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as CRYSTAL. Type enter for default setups, which typically works fine. Please note that moving files between temporal and home directories are disabled as QE has built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force field &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run OpenKIM executable and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type enter for default setups, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for user defined executable (by default nothing else, so not useful).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814622</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814622"/>
		<updated>2023-11-30T22:51:08Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* HX1 version */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information of shared software available for CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under &#039;CMSG&#039; project of Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred as RDS-CMSG below), which is accessible for both login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, by the time this page is upated (Nov. 2023), it is not accessible there. Only login node can visit RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore software on RDS-CMSG are classified into 2 catagories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotential). Executable can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and uploaded to HX1. It is user&#039;s responsibility to set up an effective running environment according to &#039;readme&#039; files (if there is one) saved in the same, or upper, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use default vaules, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job sumbmission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed everytime you login. To aviod this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For detailed instructions and testing cases, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to Intel version above, GNU version is slightly slower to allow for compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, GNU version is not the default option for CX1. To enable GNU version, if there is a &#039;settings&#039; file, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
Executables are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-EBFOSS2023a-HX1-ompi&amp;lt;/code&amp;gt;. To setup the running environment on HX1, if the user has a copy of &#039;settings&#039; file, they need to modify the following keywords:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (The one with &#039;-HX1&#039; in the same directory of Github repo) and specify 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compilor : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please be noted that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission file configured for CX1 and HX1, but only CX1 executable is shared as these executables are highly dependent on environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as CRYSTAL. Type enter for default setups, which typically works fine. Please note that moving files between temporal and home directories are disabled as QE has built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force field &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run OpenKIM executable and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type enter for default setups, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for user defined executable (by default nothing else, so not useful).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814621</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814621"/>
		<updated>2023-11-30T22:50:51Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* HX1 version */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information of shared software available for CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under &#039;CMSG&#039; project of Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred as RDS-CMSG below), which is accessible for both login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, by the time this page is upated (Nov. 2023), it is not accessible there. Only login node can visit RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore software on RDS-CMSG are classified into 2 catagories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotential). Executable can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and uploaded to HX1. It is user&#039;s responsibility to set up an effective running environment according to &#039;readme&#039; files (if there is one) saved in the same, or upper, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use default vaules, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job sumbmission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed everytime you login. To aviod this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For detailed instructions and testing cases, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to Intel version above, GNU version is slightly slower to allow for compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, GNU version is not the default option for CX1. To enable GNU version, if there is a &#039;settings&#039; file, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
Saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/CRYSTAL/23v1/Linux-EBFOSS2023a-HX1-ompi&amp;lt;/code&amp;gt;. To setup the running environment on HX1, if the user has a copy of &#039;settings&#039; file, they need to modify the following keywords:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (The one with &#039;-HX1&#039; in the same directory of Github repo) and specify 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compilor : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please be noted that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission file configured for CX1 and HX1, but only CX1 executable is shared as these executables are highly dependent on environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as CRYSTAL. Type enter for default setups, which typically works fine. Please note that moving files between temporal and home directories are disabled as QE has built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force field &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run OpenKIM executable and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type enter for default setups, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for user defined executable (by default nothing else, so not useful).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814620</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814620"/>
		<updated>2023-11-30T22:29:38Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CRYSTAL23 v1 (Intel) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information of shared software available for CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under &#039;CMSG&#039; project of Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred as RDS-CMSG below), which is accessible for both login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, by the time this page is upated (Nov. 2023), it is not accessible there. Only login node can visit RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore software on RDS-CMSG are classified into 2 catagories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotential). Executable can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and uploaded to HX1. It is user&#039;s responsibility to set up an effective running environment according to &#039;readme&#039; files (if there is one) saved in the same, or upper, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use default vaules, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job sumbmission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command would otherwise need to be executed every time you log in. To avoid this, run the following commands once:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel CRYSTAL23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after the job has finished, the following command sets up a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
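The generated &#039;.qsub&#039; file is a standard PBS job script; if it is not submitted automatically by the command above, it can be sent to the queue with the usual PBS command (using the file name from the example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ qsub mgo.qsub&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;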
For detailed instructions and testing cases, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower but provides compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable it, change the following parameters in an existing &#039;settings&#039; file (see the sketch after this list):&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
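A minimal manual edit might look like the following sketch (illustrative only; the keyword names are those listed above and the exact layout should follow the existing entries in the file):&lt;br /&gt;
&lt;br /&gt;
 ~$ nano ${HOME}/etc/runCRYSTAL23/settings&lt;br /&gt;
 # under EXEDIR : module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&lt;br /&gt;
 # under MPIDIR : module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&lt;br /&gt;
&lt;br /&gt;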
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that, by default, there is no command for &#039;MPPproperties&#039;. The user has to add an alias to their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file and a corresponding executable entry to the &#039;settings&#039; file. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
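&lt;br /&gt;
After sourcing &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; again, the new alias works like the other commands. A hypothetical run (the flag values are illustrative and use the same flags as the other CRYSTAL23 commands):&lt;br /&gt;
&lt;br /&gt;
 ~$ source ~/.bashrc&lt;br /&gt;
 ~$ MPPprop23 -in mgo-band.d3 -nd 2 -wt 01:00 -ref mgo&lt;br /&gt;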
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user puts their executables.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration script (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify the 2 values above during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
The job submission script is configured for both CX1 and HX1, but only the CX1 executable is shared, as these executables are highly dependent on the environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
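&lt;br /&gt;
This directory can be referenced directly from a pw.x input via the &amp;lt;code&amp;gt;pseudo_dir&amp;lt;/code&amp;gt; variable of the &amp;amp;CONTROL namelist, e.g. (a minimal illustrative fragment):&lt;br /&gt;
&lt;br /&gt;
 &amp;amp;CONTROL&lt;br /&gt;
     calculation = &#039;scf&#039;&lt;br /&gt;
     pseudo_dir  = &#039;/rds/general/project/cmsg/live/etc/QE_PseudoP&#039;&lt;br /&gt;
 /&lt;br /&gt;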
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Press enter to accept the default setup, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
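&lt;br /&gt;
As with CRYSTAL, these commands accept the in-line flags of the general submission script; for example (the input file name is hypothetical):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in graphene.scf.in -nd 1 -wt 02:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;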
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
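&lt;br /&gt;
Assuming the module provides the standard kim-api utilities, the installed models can then be inspected or extended with the collections-management tool, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ kim-api-collections-management list&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;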
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Press enter to accept the default setup, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (no other executable is installed by default, so this is rarely useful).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
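&lt;br /&gt;
An illustrative run on a hypothetical input &#039;mgo.gin&#039; (the flags follow the same general submission script as for the other codes):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pglp6 -in mgo.gin -nc 24 -wt 01:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;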
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814619</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814619"/>
		<updated>2023-11-30T22:29:05Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CRYSTAL23 v1 (Intel) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information on the shared software available to CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and compute nodes of Imperial HPC CX1. On Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible; after the pilot phase, only the login node will be able to reach the RDS-CMSG disk &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore the software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and then uploaded to HX1. It is the user&#039;s responsibility to set up a working runtime environment according to the &#039;readme&#039; file (if present) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that access to the CMSG disk is granted to CMSG group members only. If you need access, please contact the group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties please see the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command would otherwise need to be executed every time you log in. To avoid this, run the following commands once:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after the job has finished, the following command sets up a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For detailed instructions and testing cases, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower but provides compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable it, change the following parameters in an existing &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (The one with &#039;-HX1&#039; in the same directory of Github repo) and specify 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
The job submission script is configured for both CX1 and HX1, but only the CX1 executable is shared, as these executables are highly dependent on the environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Press enter to accept the default setup, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Press enter to accept the default setup, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (no other executable is installed by default, so this is rarely useful).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814618</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814618"/>
		<updated>2023-11-30T22:24:46Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CRYSTAL23 v1 (Intel) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information on the shared software available to CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and compute nodes of Imperial HPC CX1. On Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible; after the pilot phase, only the login node will be able to reach the RDS-CMSG disk &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore the software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and then uploaded to HX1. It is the user&#039;s responsibility to set up a working runtime environment according to the &#039;readme&#039; file (if present) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that access to the CMSG disk is granted to CMSG group members only. If you need access, please contact the group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties please see the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on the group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice; it is mainly useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command would otherwise need to be executed every time you log in. To avoid this, run the following commands once:&lt;br /&gt;
&lt;br /&gt;
 ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
 if test -f ~/.bashrc; then&lt;br /&gt;
     source ~/.bashrc&lt;br /&gt;
 fi&lt;br /&gt;
 EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Quick References &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -mem    || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in the local &#039;settings&#039; file if a new MPI+executable in-line command is needed; see the sketch below.&lt;br /&gt;
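&lt;br /&gt;
For instance, the hypothetical &#039;pcrys_other&#039; label used in the last example below could be added to the &#039;settings&#039; file with a line modelled on the existing entries (illustrative only; keep the column width, as for &#039;mppprop&#039; in the GNU section):&lt;br /&gt;
&lt;br /&gt;
 # in settings, keep the column width (illustrative entry)&lt;br /&gt;
 pcrys_other    mpiexec -np ${V_TPROC}                                       /path/to/alternative/Pcrystal                                 Parallel crystal, alternative executable&lt;br /&gt;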
&lt;br /&gt;
&#039;&#039;&#039; Code Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel CRYSTAL23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after the job has finished, the following command sets up a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt must be given the same number of times, while -ref should appear either 0 times or as many times as -x. In the case above, &#039;no&#039; is a reserved keyword meaning no reference. Specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to an error.&lt;br /&gt;
&lt;br /&gt;
In addition, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs defined in the &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower but provides compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable it, change the following parameters in an existing &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that, by default, there is no command for &#039;MPPproperties&#039;. The user has to add an alias to their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file and a corresponding executable entry to the &#039;settings&#039; file. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user puts their executables.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration script (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify the 2 values above during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
The job submission script is configured for both CX1 and HX1, but only the CX1 executable is shared, as these executables are highly dependent on the environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Press enter to accept the default setup, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Press enter to accept the default setup, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (no other executable is installed by default, so this is rarely useful).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814617</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814617"/>
		<updated>2023-11-30T22:24:16Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CRYSTAL23 v1 (GNU) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information on the shared software available to CMSG group members on Imperial CX1 is summarized in this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and compute nodes of Imperial HPC CX1. On Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible; after the pilot phase, only the login node will be able to reach the RDS-CMSG disk &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore the software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and then uploaded to HX1. It is the user&#039;s responsibility to set up a working runtime environment according to the &#039;readme&#039; file (if present) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that access to the CMSG disk is granted to CMSG group members only. If you need access, please contact the group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties please see the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command would otherwise need to be executed every time you log in. To avoid this, run the following commands once:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Quick References &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -mem    || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Code Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after the job has finished, the following command sets up a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt must be given the same number of times, while -ref should appear either 0 times or as many times as -x. In the case above, &#039;no&#039; is a reserved keyword meaning no reference. Specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to an error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower but provides compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable it, change the following parameters in an existing &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
 # in ~/.bashrc&lt;br /&gt;
 alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
 # in settings, keep the column width&lt;br /&gt;
 mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (The one with &#039;-HX1&#039; in the same directory of Github repo) and specify 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission file configured for CX1 and HX1, but only CX1 executable is shared as these executables are highly dependent on environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Press enter to accept the default setup, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type enter for default setups, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (no other executable is provided by default, so this is rarely needed).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
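&lt;br /&gt;
As an illustration, assuming the in-line flags follow the same pattern as the other codes (-in, -nd, -wt), a &#039;gulp-mpi&#039; job for a hypothetical input file &#039;mgo.gin&#039; might be generated with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pglp6 -in mgo.gin -nd 1 -wt 01:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;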
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814616</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814616"/>
		<updated>2023-11-30T22:21:29Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible there; only the login node will be able to reach the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded from there and uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; files (if any) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to use a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to. It can also be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Quick References &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of the MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Code Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command runs a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt should be repeated the same number of times, while -ref should be given either not at all or the same number of times as -x. In the case above, &#039;no&#039; is a reserved keyword meaning no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to errors.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower, but it is compatible with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable the GNU version when a &#039;settings&#039; file already exists, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
  # in ~/.bashrc&lt;br /&gt;
  alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
  # in settings, keep the column width&lt;br /&gt;
  mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
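&lt;br /&gt;
After sourcing &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; again, the new alias should accept the same flags as the other commands. As a sketch (the file names here are only placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ MPPprop23 -in mgo-band.d3 -nc 24 -wt 01:00 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;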
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user puts their executables.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify the 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17, and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
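&lt;br /&gt;
For example, following the substitution above and assuming the same flags as CRYSTAL23, a single-node parallel CRYSTAL17 job on a hypothetical &#039;mgo.d12&#039; input could be generated with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys17 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;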
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission files are configured for both CX1 and HX1, but only the CX1 executable is shared, as these executables are highly dependent on the environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Type enter for default setups, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
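&lt;br /&gt;
As an illustration, assuming the in-line flags mirror those of the CRYSTAL commands (-in, -nd, -wt), a pw.x job for a hypothetical input file &#039;mgo.scf.in&#039; might be generated with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in mgo.scf.in -nd 1 -wt 02:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;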
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type enter for default setups, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (no other executable is provided by default, so this is rarely needed).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
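&lt;br /&gt;
As an illustration, assuming the in-line flags follow the same pattern as the other codes (-in, -nd, -wt), a &#039;gulp-mpi&#039; job for a hypothetical input file &#039;mgo.gin&#039; might be generated with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pglp6 -in mgo.gin -nd 1 -wt 01:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;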
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814615</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814615"/>
		<updated>2023-11-30T22:20:00Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible there; only the login node will be able to reach the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded from there and uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; files (if any) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to use a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to. It can also be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Quick References &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of the MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Code Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command runs a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt should be repeated the same number of times, while -ref should be given either not at all or the same number of times as -x. In the case above, &#039;no&#039; is a reserved keyword meaning no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to errors.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower, but it is compatible with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable the GNU version when a &#039;settings&#039; file already exists, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
  # in ~/.bashrc&lt;br /&gt;
  alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
  # in settings, keep the column width&lt;br /&gt;
  mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
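&lt;br /&gt;
After sourcing &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; again, the new alias should accept the same flags as the other commands. As a sketch (the file names here are only placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ MPPprop23 -in mgo-band.d3 -nc 24 -wt 01:00 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;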
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user puts their executables.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify the 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17, and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
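&lt;br /&gt;
For example, following the substitution above and assuming the same flags as CRYSTAL23, a single-node parallel CRYSTAL17 job on a hypothetical &#039;mgo.d12&#039; input could be generated with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys17 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;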
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission files are configured for both CX1 and HX1, but only the CX1 executable is shared, as these executables are highly dependent on the environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Type enter for default setups, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
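&lt;br /&gt;
As an illustration, assuming the in-line flags mirror those of the CRYSTAL commands (-in, -nd, -wt), a pw.x job for a hypothetical input file &#039;mgo.scf.in&#039; might be generated with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in mgo.scf.in -nd 1 -wt 02:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;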
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type enter for default setups, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (no other executable is provided by default, so this is rarely needed).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
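&lt;br /&gt;
As an illustration, assuming the in-line flags follow the same pattern as the other codes (-in, -nd, -wt), a &#039;gulp-mpi&#039; job for a hypothetical input file &#039;mgo.gin&#039; might be generated with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pglp6 -in mgo.gin -nd 1 -wt 01:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;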
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814614</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814614"/>
		<updated>2023-11-30T22:18:44Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CX1 version */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible there; only the login node will be able to reach the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded from there and uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; files (if any) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to use a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to. It can also be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Quick References &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of the MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Code Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command runs a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt should be repeated the same number of times, while -ref should be given either not at all or the same number of times as -x. In the case above, &#039;no&#039; is a reserved keyword meaning no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to errors.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower, but it is compatible with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable the GNU version when a &#039;settings&#039; file already exists, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
  # in ~/.bashrc&lt;br /&gt;
  alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
  # in settings, keep the column width&lt;br /&gt;
  mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
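&lt;br /&gt;
After sourcing &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; again, the new alias should accept the same flags as the other commands. As a sketch (the file names here are only placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ MPPprop23 -in mgo-band.d3 -nc 24 -wt 01:00 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;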
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user puts their executables.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify the 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17, and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
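&lt;br /&gt;
For example, following the substitution above and assuming the same flags as CRYSTAL23, a single-node parallel CRYSTAL17 job on a hypothetical &#039;mgo.d12&#039; input could be generated with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys17 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;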
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission files are configured for both CX1 and HX1, but only the CX1 executable is shared, as these executables are highly dependent on the environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Type enter for default setups, which typically works fine. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
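&lt;br /&gt;
As an illustration, assuming the in-line flags mirror those of the CRYSTAL commands (-in, -nd, -wt), a pw.x job for a hypothetical input file &#039;mgo.scf.in&#039; might be generated with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in mgo.scf.in -nd 1 -wt 02:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;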
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type enter for default setups, which typically works fine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (no other executable is provided by default, so this is rarely needed).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6|| Print the instructions of commands&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814613</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814613"/>
		<updated>2023-11-30T22:18:20Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* GULP v6.1.2 (GNU) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Software listed below, unless stated otherwise, is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and computational nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible there; only the login node will be able to reach the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded from there and uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; files (if any) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, saved by default as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command would otherwise have to be executed every time you log in. To avoid this, run the following commands once:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Quick References &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions for the commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. The number of nodes will be decided automatically from the number of CPUs per node in the settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -mem    || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command&lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in the local &#039;settings&#039; file if a new MPI+executable in-line command is needed.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Code Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel CRYSTAL23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after the job is done, the following command runs a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt must be specified the same number of times, while -ref must appear either not at all or the same number of times as -x. In the example above, &#039;no&#039; is a reserved keyword meaning no reference. Also note that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to an error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower, but it provides compatibility with MPPproperties. Available for CX1 and HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable it when a &#039;settings&#039; file already exists, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration script (same as above) and specify these 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that, by default, there is no command for &#039;MPPproperties&#039;. The user has to add an alias to their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file and define the corresponding executable label in the &#039;settings&#039; file. An example:&lt;br /&gt;
&lt;br /&gt;
  # in ~/.bashrc&lt;br /&gt;
  alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
  # in settings, keep the column width&lt;br /&gt;
  mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
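&lt;br /&gt;
Once the alias above is defined and &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; has been re-sourced, the new command should accept the same in-line flags as the other commands. A minimal sketch, assuming a previous &#039;mgo&#039; SCF run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ MPPprop23 -in mgo-band.d3 -nd 1 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;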
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user has put their executables.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration script (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify these 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and that &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Job submission scripts are configured for both CX1 and HX1, but only the CX1 executables are shared, as these executables are highly dependent on the environment.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
Pseudopotential files are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/QE_PseudoP&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Please note that moving files between the temporary and home directories is disabled, as QE has a built-in file management system.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions for the commands&lt;br /&gt;
|}&lt;br /&gt;
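&lt;br /&gt;
As a minimal sketch (assuming the commands above take the same in-line flags as the CRYSTAL scripts, and that &#039;mgo.scf.in&#039; is a prepared pw.x input file):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in mgo.scf.in -nd 1 -wt 02:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;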
&lt;br /&gt;
&lt;br /&gt;
== GULP v6.1.2 (GNU) ==&lt;br /&gt;
Default for CX1. Not available for HX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* OpenKIM: Yes, version 2.3.0&lt;br /&gt;
* PLUMED: Yes, version 2.9.0&lt;br /&gt;
* ALAMODE: Yes, git repo copied into &amp;lt;code&amp;gt;app/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
GULP force field libraries are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/GULP_Libraries&amp;lt;/code&amp;gt;. Executable name: &#039;gulp-mpi&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenKIM&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
KIM force fields &amp;lt;ref&amp;gt;https://openkim.org/&amp;lt;/ref&amp;gt; are saved in &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/KIM_Models&amp;lt;/code&amp;gt;. To run the OpenKIM executables and download new models, run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module load /rds/general/project/cmsg/live/etc/modulefiles/OpenKIM/2.3.0-foss&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job submission script configuration&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/GULP6/config_GULP6.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Press enter to accept the default setup, which typically works fine.&lt;br /&gt;
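&lt;br /&gt;
Job submission then uses the &#039;Pglp6&#039; command listed in the table below, which should take the same in-line flags as the CRYSTAL and QE scripts. A minimal sketch, assuming a GULP input file named &#039;mgo.gin&#039;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pglp6 -in mgo.gin -nd 1 -wt 01:00&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;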
&lt;br /&gt;
&#039;&#039;&#039;Job submission script commands&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pglp6   || Generate job submission files for &#039;gulp-mpi&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Xglp6   || Generate job submission files for a user-defined executable (no other executables are defined by default, so this is rarely needed).&lt;br /&gt;
|-&lt;br /&gt;
| SETglp6 || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPglp6 || Print the instructions for the commands&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814610</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814610"/>
		<updated>2023-11-30T22:00:08Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Quantum Espresso v7.2 (GNU) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Unless stated otherwise, the software listed below is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and compute nodes of Imperial HPC CX1. At the time this page was updated (Nov. 2023), RDS-CMSG is not accessible from Imperial HPC HX1; after the pilot phase, only the HX1 login node will be able to reach the RDS-CMSG disk &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore the software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and then uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; file (if there is one) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR). For MPPproperties, please see the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available in the group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice; it is mainly useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, saved by default as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command would otherwise have to be executed every time you log in. To avoid this, run the following commands once:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Quick References &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions for the commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. The number of nodes will be decided automatically from the number of CPUs per node in the settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -mem    || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command&lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in the local &#039;settings&#039; file if a new MPI+executable in-line command is needed.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Code Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel CRYSTAL23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after the job is done, the following command runs a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt must be specified the same number of times, while -ref must appear either not at all or the same number of times as -x. In the example above, &#039;no&#039; is a reserved keyword meaning no reference. Also note that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to an error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to the Intel version above, the GNU version is slightly slower, but it provides compatibility with MPPproperties.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option for CX1. To enable it when a &#039;settings&#039; file already exists, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that, by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to define the alias, and the &#039;settings&#039; file to define the executable label. An example:&lt;br /&gt;
&lt;br /&gt;
  # in ~/.bashrc&lt;br /&gt;
  alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
  # in settings, keep the column width&lt;br /&gt;
  mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
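A minimal usage sketch of the new command (the input and flag values here are hypothetical, following the same flag pattern as the other CRYSTAL23 commands above):&lt;br /&gt;
&lt;br /&gt;
  # hypothetical example: re-source ~/.bashrc to pick up the new alias, then generate the qsub file&lt;br /&gt;
  ~$ source ~/.bashrc&lt;br /&gt;
  ~$ MPPprop23 -in mgo-band.d3 -nd 1 -wt 00:30 -ref mgo&lt;br /&gt;
&lt;br /&gt;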
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration script (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify the 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. &lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and that &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
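&lt;br /&gt;
For instance, a minimal sketch of generating a parallel CRYSTAL17 job, assuming the &#039;17&#039; counterparts of the commands above and re-using the &#039;mgo.d12&#039; input from the CRYSTAL23 examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys17 -in mgo.d12 -wt 01:00 -nd 1  # hypothetical: &#039;23&#039; substituted with &#039;17&#039;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;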
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Default for CX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Please note that moving files between the temporary and home directories is disabled, as QE has its own built-in file management system.&lt;br /&gt;
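&lt;br /&gt;
As an illustration, a pw.x submission file could be generated with the &amp;lt;code&amp;gt;PWqe7&amp;lt;/code&amp;gt; command listed in the table below (the input file name is a placeholder and the flags are assumed to mirror the CRYSTAL scripts, since the same job submission framework is used; see &amp;lt;code&amp;gt;HELPqe7&amp;lt;/code&amp;gt; for the exact syntax):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in mgo.scf.in -nd 1 -wt 01:00  # hypothetical input name; flags assumed as above&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;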
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814609</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814609"/>
		<updated>2023-11-30T21:59:51Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CX1 version */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Unless stated otherwise, the software listed below is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and compute nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible; only the login node can reach the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore, software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotentials). Executables can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and then uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; file (if there is one) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available in the group&#039;s [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo]; it is not needed in practice, but is useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the on-screen instructions. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, saved by default as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to, and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command must be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Quick References &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Flag    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -mem    || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of the MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Code Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after the job has finished, the following command runs a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and the data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt must be given the same number of times, while -ref is either omitted or given the same number of times as -x. In the example above, &#039;no&#039; is a reserved keyword meaning no reference. Also note that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; leads to an error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared with the Intel version above, the GNU version is slightly slower, but it provides compatibility with MPPproperties.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, the GNU version is not the default option on CX1. To enable the GNU version, if a &#039;settings&#039; file already exists, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that, by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to define the alias, and the &#039;settings&#039; file to define the executable label. An example:&lt;br /&gt;
&lt;br /&gt;
  # in ~/.bashrc&lt;br /&gt;
  alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
  # in settings, keep the column width&lt;br /&gt;
  mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
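A minimal usage sketch of the new command (the input and flag values here are hypothetical, following the same flag pattern as the other CRYSTAL23 commands above):&lt;br /&gt;
&lt;br /&gt;
  # hypothetical example: re-source ~/.bashrc to pick up the new alias, then generate the qsub file&lt;br /&gt;
  ~$ source ~/.bashrc&lt;br /&gt;
  ~$ MPPprop23 -in mgo-band.d3 -nd 1 -wt 00:30 -ref mgo&lt;br /&gt;
&lt;br /&gt;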
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration script (the one with &#039;-HX1&#039; in its name, in the same directory of the GitHub repo) and specify the 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. &lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and that &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
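&lt;br /&gt;
For instance, a minimal sketch of generating a parallel CRYSTAL17 job, assuming the &#039;17&#039; counterparts of the commands above and re-using the &#039;mgo.d12&#039; input from the CRYSTAL23 examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys17 -in mgo.d12 -wt 01:00 -nd 1  # hypothetical: &#039;23&#039; substituted with &#039;17&#039;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;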
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Default for CX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Please note that moving files between the temporary and home directories is disabled, as QE has its own built-in file management system.&lt;br /&gt;
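&lt;br /&gt;
As an illustration, a pw.x submission file could be generated with the &amp;lt;code&amp;gt;PWqe7&amp;lt;/code&amp;gt; command listed in the table below (the input file name is a placeholder and the flags are assumed to mirror the CRYSTAL scripts, since the same job submission framework is used; see &amp;lt;code&amp;gt;HELPqe7&amp;lt;/code&amp;gt; for the exact syntax):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in mgo.scf.in -nd 1 -wt 01:00  # hypothetical input name; flags assumed as above&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;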
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814608</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814608"/>
		<updated>2023-11-30T21:59:14Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Unless stated otherwise, the software listed below is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and compute nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible; only the login node can reach the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore, software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotential). Executable can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and then uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; file (if there is one) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Configuration &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the on-screen instructions. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, saved by default as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to, and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command must be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Quick References &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -mem    || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of the MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Code Examples &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to Intel version above, GNU version is slightly slower to allow for compatibility with MPPproperties.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, GNU version is not the default option for CX1. To enable GNU version, if there is a &#039;settings&#039; file, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
  # in ~/.bashrc&lt;br /&gt;
  alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
  # in settings, keep the column width&lt;br /&gt;
  mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
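A minimal usage sketch of the new command (the input and flag values here are hypothetical, following the same flag pattern as the other CRYSTAL23 commands above):&lt;br /&gt;
&lt;br /&gt;
  # hypothetical example: re-source ~/.bashrc to pick up the new alias, then generate the qsub file&lt;br /&gt;
  ~$ source ~/.bashrc&lt;br /&gt;
  ~$ MPPprop23 -in mgo-band.d3 -nd 1 -wt 00:30 -ref mgo&lt;br /&gt;
&lt;br /&gt;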
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (The one with &#039;-HX1&#039; in the same directory of Github repo) and specify 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG. Default for CX1. &lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and that &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
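&lt;br /&gt;
For instance, a minimal sketch of generating a parallel CRYSTAL17 job, assuming the &#039;17&#039; counterparts of the commands above and re-using the &#039;mgo.d12&#039; input from the CRYSTAL23 examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys17 -in mgo.d12 -wt 01:00 -nd 1  # hypothetical: &#039;23&#039; substituted with &#039;17&#039;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;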
&lt;br /&gt;
== Quantum Espresso v7.2 (GNU) ==&lt;br /&gt;
Default for CX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss2022a&lt;br /&gt;
* libxc: Yes, version 5.1.2&lt;br /&gt;
* hdf5: No&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Job submission script configuration &#039;&#039;&#039;&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/QE7/config_QE7.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure is the same as for CRYSTAL. Please note that moving files between the temporary and home directories is disabled, as QE has its own built-in file management system.&lt;br /&gt;
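&lt;br /&gt;
As an illustration, a pw.x submission file could be generated with the &amp;lt;code&amp;gt;PWqe7&amp;lt;/code&amp;gt; command listed in the table below (the input file name is a placeholder and the flags are assumed to mirror the CRYSTAL scripts, since the same job submission framework is used; see &amp;lt;code&amp;gt;HELPqe7&amp;lt;/code&amp;gt; for the exact syntax):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ PWqe7 -in mgo.scf.in -nd 1 -wt 01:00  # hypothetical input name; flags assumed as above&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;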
&lt;br /&gt;
&#039;&#039;&#039; Job submission script commands &#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| PWqe7   || Generate pw.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PHqe7   || Generate ph.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| CPqe7   || Generate cp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| PPqe7   || Generate pp.x job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xqe7    || Generate job submission files for user-defined executables&lt;br /&gt;
|-&lt;br /&gt;
| SETqe7  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPqe7 || Print the instructions of commands&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814607</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814607"/>
		<updated>2023-11-30T21:44:52Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CRYSTAL17 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Unless stated otherwise, the software listed below is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and compute nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible; only the login node can reach the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore, software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotential). Executable can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and then uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; file (if there is one) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the on-screen instructions. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, saved by default as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to, and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command must be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -mem    || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of the MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to Intel version above, GNU version is slightly slower to allow for compatibility with MPPproperties.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, GNU version is not the default option for CX1. To enable GNU version, if there is a &#039;settings&#039; file, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration file (same as above) and specify 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to set the alias and &#039;settings&#039; file for executable flags. An example:&lt;br /&gt;
&lt;br /&gt;
  # in ~/.bashrc&lt;br /&gt;
  alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
  # in settings, keep the column width&lt;br /&gt;
  mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
&lt;br /&gt;
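A minimal usage sketch of the new command (the input and flag values here are hypothetical, following the same flag pattern as the other CRYSTAL23 commands above):&lt;br /&gt;
&lt;br /&gt;
  # hypothetical example: re-source ~/.bashrc to pick up the new alias, then generate the qsub file&lt;br /&gt;
  ~$ source ~/.bashrc&lt;br /&gt;
  ~$ MPPprop23 -in mgo-band.d3 -nd 1 -wt 00:30 -ref mgo&lt;br /&gt;
&lt;br /&gt;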
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration file (The one with &#039;-HX1&#039; in the same directory of Github repo) and specify 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 v2 (GNU)==&lt;br /&gt;
Not hosted on RDS-CMSG.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;For job submission scripts&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and that &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814606</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814606"/>
		<updated>2023-11-30T21:42:57Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General information about the shared software available to CMSG group members on Imperial CX1 is summarized on this page. Unless stated otherwise, the software listed below is hosted under the &#039;CMSG&#039; project of the Imperial Research Data Store (RDS) &amp;lt;ref&amp;gt;https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/service-offering/rds/&amp;lt;/ref&amp;gt; (referred to as RDS-CMSG below), which is accessible from both the login and compute nodes of Imperial HPC CX1. For Imperial HPC HX1, at the time this page was updated (Nov. 2023), RDS-CMSG is not accessible; only the login node can reach the RDS-CMSG disk after the pilot phase &amp;lt;ref&amp;gt;https://wiki.imperial.ac.uk/display/HPC/HX1+Cluster&amp;lt;/ref&amp;gt;. Therefore, software on RDS-CMSG is classified into 2 categories:&lt;br /&gt;
&lt;br /&gt;
# For CX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/app/&amp;lt;/code&amp;gt; (executables, libraries and headers) and &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/etc/&amp;lt;/code&amp;gt; (other files, such as pseudopotential). Executable can be called directly from the user&#039;s home directory.&lt;br /&gt;
# For HX1, use &amp;lt;code&amp;gt;/rds/general/project/cmsg/live/share/&amp;lt;/code&amp;gt;. Executables must be downloaded and then uploaded to HX1. It is the user&#039;s responsibility to set up an effective running environment according to the &#039;readme&#039; file (if there is one) saved in the same, or a parent, directory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Please note that accessibility to CMSG disk is granted for CMSG group members only. If you need to access it, please contact group PI [[Contributors | Prof. Nicholas Harrison]].&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (Intel) ==&lt;br /&gt;
The current default for CX1.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR). For MPPproperties please read the GNU version below.&lt;br /&gt;
&lt;br /&gt;
The general job submission script for Imperial HPC, developed by the author himself, is used here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the on-screen instructions. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, saved by default as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to, and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command must be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -mem    || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of the MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 v1 (GNU) ==&lt;br /&gt;
Compared to Intel version above, GNU version is slightly slower to allow for compatibility with MPPproperties.&lt;br /&gt;
&lt;br /&gt;
=== CX1 version ===&lt;br /&gt;
* Compiling Env : gcc11.2.0, aocl4.0, mpich4.0.2&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
The same job submission script is used. However, GNU version is not the default option for CX1. To enable GNU version, if there is a &#039;settings&#039; file, change the following parameters:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/modulefiles/CRYSTAL/23v1-gcc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load /rds/general/project/cmsg/live/etc/compiler/gcc11.2.0-aocl&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Alternatively, rerun the configuration script (same as above) and specify the 2 values during configuration.&lt;br /&gt;
&lt;br /&gt;
Note that, by default, there is no command for &#039;MPPproperties&#039;. The user has to modify their &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; file to define the alias and the &#039;settings&#039; file to add the executable label. An example:&lt;br /&gt;
&lt;br /&gt;
  # in ~/.bashrc&lt;br /&gt;
  alias MPPprop23=&amp;quot;/rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/gen_sub -x mppprop -set ${HOME}/etc/runCRYSTAL23/settings&amp;quot;&lt;br /&gt;
  # in settings, keep the column width&lt;br /&gt;
  mppprop    mpiexec -np ${V_TPROC}                                       MPPproperties                                                Massive parallel properties calculation, OMP&lt;br /&gt;
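&lt;br /&gt;
Once &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; has been sourced again, the new alias can be used in the same way as the other commands. A hypothetical example, reusing the -in, -nd, -wt and -ref flags and assuming the &#039;mgo&#039; SCF calculation above has finished:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ MPPprop23 -in mgo-band.d3 -nd 2 -wt 01:00 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;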
&lt;br /&gt;
=== HX1 version ===&lt;br /&gt;
* Compiling Env : EasyBuild foss/2023a&lt;br /&gt;
* Note: MP2 (CRYSCOR and CRYSTAL2) is currently not available.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;EXEDIR&#039;&#039;&#039; : No default. The directory where the user put their executables in.&lt;br /&gt;
* &#039;&#039;&#039;MPIDIR&#039;&#039;&#039; : &amp;lt;code&amp;gt;module load foss/2023a&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, run the configuration script (the one with &#039;-HX1&#039;, in the same directory of the GitHub repo) and specify the 2 values during configuration. Other options are consistent with CRYSTAL on CX1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814605</id>
		<title>Nano Electrochemistry Group</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814605"/>
		<updated>2023-11-30T20:45:02Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /*  CMSG disk and Shared Software on CX1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;padding: 20px; background: #87adde; border: 1px solid #FFAA99; font-family: Trebuchet MS, sans-serif; font-size: 105%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This page provides a series of tutorials designed to help with the computational modelling of electrochemical systems; their aim is to provide general workflows and useful tips to model fundamental components and properties of electrochemical systems. The tutorials have been designed by the researchers of the Computational NanoElectrochemistry Group led by Dr Clotilde Cucinotta [link to group page] and collaborators. &lt;br /&gt;
&lt;br /&gt;
Several simulation packages (CP2K, LAMMPS, QuantumEspresso, etc.), as well as other tools, such as molecular visualisers or programming languages, are described in these tutorials; links to the relevant manuals are provided at the bottom of the page. &lt;br /&gt;
&lt;br /&gt;
Scripts and programs written by the members of the research group are also described in each tutorial; these tools have been devised to help with running calculations and with data analysis and can be found in the linked GitLab repository [https://gitlab.doc.ic.ac.uk/rgc]. &lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Compiling Codes and Running Calculations on a HPC cluster=&lt;br /&gt;
&lt;br /&gt;
===[[How to run on ARCHER 2]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Imperial CX1: Instructions and basic concepts of parallel computing]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A collection of useful resources and brief introductions to the basic concepts of parallel computing for beginners to use the high-performance computing service at Imperial.&lt;br /&gt;
&lt;br /&gt;
===[[CMSG disk and Shared Software on CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: About shared software hosted on RDS CMSG (&#039;&#039;For CMSG group members&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
===[[Compile CP2Kv9.1 on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Modelling and Visualising Materials=&lt;br /&gt;
&lt;br /&gt;
==Modelling of Interfaces and Adsorption processes==&lt;br /&gt;
&lt;br /&gt;
===[[Building structures with Pymatgen]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fei]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for generating crystal structure and surface with Python.&lt;br /&gt;
&lt;br /&gt;
===[[ASE and materials modelling]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Adsorption of molecule on surfaces|Adsorption of molecule on surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Paolo]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the adsorption energy of a molecule (or, more generally, any particle) over a specific surface.&lt;br /&gt;
&lt;br /&gt;
== Error Evaluation during Simulations==&lt;br /&gt;
&lt;br /&gt;
===[[Optimization of metallic surfaces parameters | CP2K: Optimizing parameters for metallic surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: Tutorials on how to define the appropriate set of parameters needed to model a metallic system: Basis set, CUTOFF and &#039;&#039;&#039;k&#039;&#039;&#039;-points grid;&lt;br /&gt;
:: Tutorials on how to calculate relevant quantities of metallic surfaces: work function, equilibrium lattice parameter and electronic structure;&lt;br /&gt;
: System: metallic surfaces (Platinum slab used as example);&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Hard_carbon | CP2K: Simulation of Hard Carbons]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Luke]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the simulation of hard carbon?&lt;br /&gt;
&lt;br /&gt;
===[[Convergence test of critical parameters by CRYSTAL | CRYSTAL: Convergence tests of critical parameters]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising simulation parameters using the DFT code CRYSTAL (LCAO-GTO basis set).&lt;br /&gt;
&lt;br /&gt;
===[[Memristors | Quantum Espresso: Simulation of Memristors]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Felix]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising simulation parameters using the DFT code QuantumEspresso (plane waves basis set). The simulated system is a ZnO surface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Postprocessing==&lt;br /&gt;
&lt;br /&gt;
===[[Analysing AIMD runs with MATLAB in-house suit|Analysing AIMD runs with MATLAB in-house suit]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Rashid]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Surface analysis===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Calculation of radial average|Calculation of radial average]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;[[Contributors| Kalman]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the radial average ?.&lt;br /&gt;
&lt;br /&gt;
==Machine Learning Potentials==&lt;br /&gt;
&lt;br /&gt;
===[[Building ML potentials with AML|Building ML potentials with AML]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Anthony]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for building ML potentials with AML.&lt;br /&gt;
&lt;br /&gt;
==Activation Barriers==&lt;br /&gt;
&lt;br /&gt;
===[[NEB Calculation]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Lammps and plumed | Metadynamics with Lammps and plumed]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Frederik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial on how to use the PLUMED software package to perform biased molecular dynamics simulations in LAMMPS.&lt;br /&gt;
&lt;br /&gt;
==Methodological developments==&lt;br /&gt;
&lt;br /&gt;
===[[Potential control and current flow using CP2K+SMEAGOL]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: How to run CP2K+SMEAGOL and SIESTA+SMEAGOL calculations&lt;br /&gt;
:: How to exploit SMEAGOL parallelism&lt;br /&gt;
: System: Au nanojunctions&lt;br /&gt;
: Computational package: CP2K, SIESTA, SMEAGOL.&lt;br /&gt;
&lt;br /&gt;
===[[Converging magnetic systems in CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: MULTIPLICITY keyword to calculate magnetic systems&lt;br /&gt;
:: &amp;amp;BS section and MAGNETIZATION keyword to improve convergence&lt;br /&gt;
: System: Metallic bulk Ni and slab in vacuum&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Running a HP-DFT calculation with CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Margherita ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A tutorial to run a HP-DFT calculation using CP2K&lt;br /&gt;
&lt;br /&gt;
===[[Solving 1D Poisson equation |Solving 1D Poisson equation]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Remi Khatib ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the solution of the 1D Poisson equations given a distribution of point charges&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Tutorials =&lt;br /&gt;
&lt;br /&gt;
===[[Dimers in gas phase|Dimers in gas phase]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fredrik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising dimers in the gas phase using Gaussian.&lt;br /&gt;
&lt;br /&gt;
===[[TrendsCatalyticActivity | Trends in catalytic Activity]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Clotilde Cucinotta]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for a computational experiment about trends in catalytic activity for hydrogen evolution. This experiment is part of the third year computational chemistry lab. &lt;br /&gt;
&lt;br /&gt;
=Others=&lt;br /&gt;
&lt;br /&gt;
== Becoming an Efficient Research Scientist ==&lt;br /&gt;
&lt;br /&gt;
===[[Writing a Project Proposal]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Nicholas Harrison ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Tools==&lt;br /&gt;
&lt;br /&gt;
===[https://www.cp2k.org/about CP2K]===&lt;br /&gt;
* [[CP2K_Tutorial|CP2K TUTORIAL]];&lt;br /&gt;
* [https://github.com/cp2k/cp2k/blob/master/INSTALL.md Download and install CP2K ];&lt;br /&gt;
* [https://manual.cp2k.org/#gsc.tab=0 Manual];&lt;br /&gt;
* [https://www.cp2k.org/howto Useful HOWTOs];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.quantum-espresso.org/ QUANTUM ESPRESSO]===&lt;br /&gt;
* [https://www.quantum-espresso.org/download Download and install QUANTUM ESPRESSO];&lt;br /&gt;
* [https://www.quantum-espresso.org/resources/tutorials Useful Tutorials];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.lammps.org/ LAMMPS]===&lt;br /&gt;
* [https://www.lammps.org/download.html Download LAMMPS];&lt;br /&gt;
* [https://docs.lammps.org/Manual.html Manual];&lt;br /&gt;
* [https://www.lammps.org/tutorials.html Tutorials];&lt;br /&gt;
&lt;br /&gt;
===[https://www.crystal.unito.it/index.html CRYSTAL]===&lt;br /&gt;
* [https://tutorials.crystalsolutions.eu/ CRYSTAL Tutorial Project]&lt;br /&gt;
* [https://www.crystal.unito.it/basis_sets.html CRYSTAL basis set database] - Parameterised and tested for solid state calculations&lt;br /&gt;
* [https://www.basissetexchange.org/ Basis Set Exchange] - Note that this site usually contains very diffuse basis sets for quantum chemistry, which might cause problems for solid state calculations.&lt;br /&gt;
* [https://vallico.net/mike_towler/crystal.html Mike Towler&#039;s basis set] - Parameterised around the early 2000s&lt;br /&gt;
* [https://crysplot.crystalsolutions.eu/ CRYSPLOT] - A web-based visualisation tool&lt;br /&gt;
* [https://crystal-code-tools.github.io/CRYSTALpytools/ CRYSTALpytools] - A Python-based toolbox for CRYSTAL inputs and outputs.&lt;br /&gt;
More information is available on the [https://www.crystal.unito.it/documentation.html CRYSTAL23 official site].&lt;br /&gt;
&lt;br /&gt;
===[https://www.tcd.ie/Physics/Smeagol/SmeagolAbout.htm Smeagol]===&lt;br /&gt;
&lt;br /&gt;
==Molecular visualizers==&lt;br /&gt;
* [http://www.ks.uiuc.edu/Research/vmd/ VMD]&lt;br /&gt;
* [http://www.xcrysden.org/ Xcrysden]&lt;br /&gt;
* [https://jp-minerals.org/vesta/en/ VESTA]&lt;br /&gt;
* [https://gitlab.com/bmgcsc/dl-visualize-v3 DLV3]&lt;br /&gt;
&lt;br /&gt;
==Useful programming languages and environments== &lt;br /&gt;
* [http://www-eio.upc.edu/lceio/manuals/Fortran95-manual.pdf Fortran]&lt;br /&gt;
* [https://docs.python.org/3/ Python]&lt;br /&gt;
* [https://www.anaconda.com/ Anaconda]&lt;br /&gt;
* [https://wiki.fysik.dtu.dk/ase/ ASE]&lt;br /&gt;
* [https://pymatgen.org/ Pymatgen]&lt;br /&gt;
* [https://phonopy.github.io/phonopy/ Phonopy]&lt;br /&gt;
&lt;br /&gt;
==Crystallography==&lt;br /&gt;
* [https://it.iucr.org/ International Crystallography Table]&lt;br /&gt;
* [https://www.cryst.ehu.es/#retrievaltop Bilbao Crystallographic Server]&lt;br /&gt;
* [https://www.ccdc.cam.ac.uk/structures/ Cambridge Database]&lt;br /&gt;
* [https://stokes.byu.edu/iso/findsym.php Find Symmetry Web Service]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://wiki.ch.ic.ac.uk/wiki/index.php?title=Main_Page info]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Run_CRYSTALs_on_Imperial_CX1&amp;diff=814604</id>
		<title>Run CRYSTALs on Imperial CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Run_CRYSTALs_on_Imperial_CX1&amp;diff=814604"/>
		<updated>2023-11-30T20:44:34Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: Hz1420 moved page Run CRYSTALs on Imperial CX1 to CMSG disk and Shared Software on CX1: Rename the page to allow for more general purposes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[CMSG disk and Shared Software on CX1]]&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814603</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814603"/>
		<updated>2023-11-30T20:44:34Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: Hz1420 moved page Run CRYSTALs on Imperial CX1 to CMSG disk and Shared Software on CX1: Rename the page to allow for more general purposes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information about the [https://www.crystal.unito.it/index.html CRYSTAL] DFT code versions previously and currently used within the group on Imperial CX1 is collected on this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available; they involve separate code packages (DMat2 and CRYSCOR).&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of the CX1 general job submission script is available on the group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo]; it is not needed in practice, but it is useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to accept a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
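&lt;br /&gt;
For instance, the generated file can be inspected (or edited) directly at the path given above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ cat ${HOME}/etc/runCRYSTAL23/settings&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;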
&lt;br /&gt;
Use the following command to activate the alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
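  # append a snippet to ~/.bash_profile so that ~/.bashrc is sourced automatically at every login&lt;br /&gt;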
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in the local &#039;settings&#039; file if a new MPI+executable in-line command is needed.&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel CRYSTAL23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after the job has finished, the following command runs a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can combine the 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt should be repeated the same number of times, while -ref should either be omitted or repeated the same number of times as -x. In the case above, &#039;no&#039; is a reserved keyword meaning no reference. Also note that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to an error.&lt;br /&gt;
&lt;br /&gt;
In addition, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; supports user-defined MPI+executable command pairs defined in the &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and the full keyword list of the settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814602</id>
		<title>Nano Electrochemistry Group</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814602"/>
		<updated>2023-11-30T20:42:35Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Run CRYSTALs on Imperial CX1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;padding: 20px; background: #87adde; border: 1px solid #FFAA99; font-family: Trebuchet MS, sans-serif; font-size: 105%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This page provides a series of tutorials designed to help with the computational modelling of electrochemical systems; their aim is to provide general workflows and useful tips to model fundamental components and properties of electrochemical systems. The tutorials have been designed by the researchers of the Computational NanoElectrochemistry Group led by Dr Clotilde Cucinotta [link to group page] and collaborators. &lt;br /&gt;
&lt;br /&gt;
Several simulation packages (CP2K, LAMMPS, QuantumEspresso, etc.), as well as other tools, such as molecular visualisers or programming languages, are described in these tutorials; links to the relevant manuals are provided at the bottom of the page. &lt;br /&gt;
&lt;br /&gt;
Scripts and programs written by the members of the research group are also described in each tutorial; these tools have been devised to help with running calculations and with data analysis and can be found in the linked GitLab repository [https://gitlab.doc.ic.ac.uk/rgc]. &lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Compiling Codes and Running Calculations on a HPC cluster=&lt;br /&gt;
&lt;br /&gt;
===[[How to run on ARCHER 2]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Imperial CX1: Instructions and basic concepts of parallel computing]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A collection of useful resources and brief introductions to the basic concepts of parallel computing for beginners to use the high-performance computing service at Imperial.&lt;br /&gt;
&lt;br /&gt;
===[[Run CRYSTALs on Imperial CX1 | CMSG disk and Shared Software on CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: About shared software hosted on RDS CMSG (&#039;&#039;For CMSG group members&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
===[[Compile CP2Kv9.1 on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Modelling and Visualising Materials=&lt;br /&gt;
&lt;br /&gt;
==Modelling of Interfaces and Adsorption processes==&lt;br /&gt;
&lt;br /&gt;
===[[Building structures with Pymatgen]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fei]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for generating crystal structure and surface with Python.&lt;br /&gt;
&lt;br /&gt;
===[[ASE and materials modelling]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Adsorption of molecule on surfaces|Adsorption of molecule on surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Paolo]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the adsorption energy of a molecule (or, more in general, any particle) over a specific surface.&lt;br /&gt;
&lt;br /&gt;
== Error Evaluation during Simulations==&lt;br /&gt;
&lt;br /&gt;
===[[Optimization of metallic surfaces parameters | CP2K: Optimizing parameters for metallic surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: Tutorials on how to define the appropriate set of parameters needed to model a metallic system: Basis set, CUTOFF and &#039;&#039;&#039;k&#039;&#039;&#039;-points grid;&lt;br /&gt;
:: Tutorials on how to calculate relevant quantities of metallic surfaces: work function, equilibrium lattice parameter and electronic structure;&lt;br /&gt;
: System: metallic surfaces (Platinum slab used as example);&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Hard_carbon | CP2K: Simulation of Hard Carbons]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Luke]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the simulation of hard carbon?&lt;br /&gt;
&lt;br /&gt;
===[[Convergence test of critical parameters by CRYSTAL | CRYSTAL: Convergence tests of critical parameters]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising simulation parameters using the DFT code CRYSTAL (LCAO-GTO basis set).&lt;br /&gt;
&lt;br /&gt;
===[[Memristors | Quantum Espresso: Simulation of Memristors]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Felix]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising simulation parameters using the DFT code QuantumEspresso (plane waves basis set). The simulated system is a ZnO surface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Postprocessing==&lt;br /&gt;
&lt;br /&gt;
===[[Analysing AIMD runs with MATLAB in-house suit|Analysing AIMD runs with MATLAB in-house suit]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Rashid]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Surface analysis===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Calculation of radial average|Calculation of radial average]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;[[Contributors| Kalman]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the radial average ?.&lt;br /&gt;
&lt;br /&gt;
==Machine Learning Potentials==&lt;br /&gt;
&lt;br /&gt;
===[[Building ML potentials with AML|Building ML potentials with AML]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Anthony]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for building ML potentials with AML.&lt;br /&gt;
&lt;br /&gt;
==Activation Barriers==&lt;br /&gt;
&lt;br /&gt;
===[[NEB Calculation]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Lammps and plumed | Metadynamics with Lammps and plumed]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Frederik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial on how to use the PLUMED software package to perform biased molecular dynamics simulations in LAMMPS.&lt;br /&gt;
&lt;br /&gt;
==Methodologic developments==&lt;br /&gt;
&lt;br /&gt;
===[[Potential control and current flow using CP2K+SMEAGOL]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: How to run CP2K+SMEAGOL and SIESTA+SMEAGOL calculations&lt;br /&gt;
:: How to exploit SMEAGOL parallelism&lt;br /&gt;
: System: Au nanojunctions&lt;br /&gt;
: Computational package: CP2K, SIESTA, SMEAGOL.&lt;br /&gt;
&lt;br /&gt;
===[[Converging magnetic systems in CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: MULTIPLICITY keyword to calculate magnetic systems&lt;br /&gt;
:: &amp;amp;BS section and MAGNETIZATION keyword to improve convergence&lt;br /&gt;
: System: Metallic bulk Ni and slab in vacuum&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Running a HP-DFT calculation with CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Margherita ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A tutorial to run a HP-DFT calculation using CP2K&lt;br /&gt;
&lt;br /&gt;
===[[Solving 1D Poisson equation |Solving 1D Poisson equation]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Remi Khatib ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the solution of the 1D Poisson equations given a distribution of point charges&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Tutorials =&lt;br /&gt;
&lt;br /&gt;
===[[Dimers in gas phase|Dimers in gas phase]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fredrik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising dimers in the gas phase using Gaussian.&lt;br /&gt;
&lt;br /&gt;
===[[TrendsCatalyticActivity | Trends in catalytic Activity]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Clotilde Cucinotta]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for a computational experiment about trends in catalytic activity for hydrogen evolution. This experiment is part of the third year computational chemistry lab. &lt;br /&gt;
&lt;br /&gt;
=Others=&lt;br /&gt;
&lt;br /&gt;
== Becoming an Efficient Research Scientist ==&lt;br /&gt;
&lt;br /&gt;
===[[Writing a Project Proposal]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Nicholas Harrison ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Tools==&lt;br /&gt;
&lt;br /&gt;
===[https://www.cp2k.org/about CP2K]===&lt;br /&gt;
* [[CP2K_Tutorial|CP2K TUTORIAL]];&lt;br /&gt;
* [https://github.com/cp2k/cp2k/blob/master/INSTALL.md Download and install CP2K ];&lt;br /&gt;
* [https://manual.cp2k.org/#gsc.tab=0 Manual];&lt;br /&gt;
* [https://www.cp2k.org/howto Useful HOWTOs];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.quantum-espresso.org/ QUANTUM ESPRESSO]===&lt;br /&gt;
* [https://www.quantum-espresso.org/download Download and install QUANTUM ESPRESSO];&lt;br /&gt;
* [https://www.quantum-espresso.org/resources/tutorials Useful Tutorials];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.lammps.org/ LAMMPS]===&lt;br /&gt;
* [https://www.lammps.org/download.html Download LAMMPS];&lt;br /&gt;
* [https://docs.lammps.org/Manual.html Manual];&lt;br /&gt;
* [https://www.lammps.org/tutorials.html Tutorials];&lt;br /&gt;
&lt;br /&gt;
===[https://www.crystal.unito.it/index.html CRYSTAL]===&lt;br /&gt;
* [https://tutorials.crystalsolutions.eu/ CRYSTAL Tutorial Project]&lt;br /&gt;
* [https://www.crystal.unito.it/basis_sets.html CRYSTAL basis set database] - Parameterised and tested for solid state calculations&lt;br /&gt;
* [https://www.basissetexchange.org/ Basis Set Exchange] - Note that this site usually contains very diffuse basis sets for quantum chemistry, which might cause problems for solid state calculations.&lt;br /&gt;
* [https://vallico.net/mike_towler/crystal.html Mike Towler&#039;s basis set] - Parameterised around the early 2000s&lt;br /&gt;
* [https://crysplot.crystalsolutions.eu/ CRYSPLOT] - A web-based visualisation tool&lt;br /&gt;
* [https://crystal-code-tools.github.io/CRYSTALpytools/ CRYSTALpytools] - A python-based toolbox for CRYSTAL inputs and outputs.&lt;br /&gt;
More information is available in [https://www.crystal.unito.it/documentation.html CRYSTAL23 official site].&lt;br /&gt;
&lt;br /&gt;
===[https://www.tcd.ie/Physics/Smeagol/SmeagolAbout.htm Smeagol]===&lt;br /&gt;
&lt;br /&gt;
==Molecular visualizers==&lt;br /&gt;
* [http://www.ks.uiuc.edu/Research/vmd/ VMD]&lt;br /&gt;
* [http://www.xcrysden.org/ Xcrysden]&lt;br /&gt;
* [https://jp-minerals.org/vesta/en/ VESTA]&lt;br /&gt;
* [https://gitlab.com/bmgcsc/dl-visualize-v3 DLV3]&lt;br /&gt;
&lt;br /&gt;
==Useful programming languages and environments== &lt;br /&gt;
* [http://www-eio.upc.edu/lceio/manuals/Fortran95-manual.pdf Fortran]&lt;br /&gt;
* [https://docs.python.org/3/ Python]&lt;br /&gt;
* [https://www.anaconda.com/ Anaconda]&lt;br /&gt;
* [https://wiki.fysik.dtu.dk/ase/ ASE]&lt;br /&gt;
* [https://pymatgen.org/ Pymatgen]&lt;br /&gt;
* [https://phonopy.github.io/phonopy/ Phonopy]&lt;br /&gt;
&lt;br /&gt;
==Crystallography==&lt;br /&gt;
* [https://it.iucr.org/ International Crystallography Table]&lt;br /&gt;
* [https://www.cryst.ehu.es/#retrievaltop Bilbao Crystallographic Server]&lt;br /&gt;
* [https://www.ccdc.cam.ac.uk/structures/ Cambridge Database]&lt;br /&gt;
* [https://stokes.byu.edu/iso/findsym.php Find Symmetry Web Service]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://wiki.ch.ic.ac.uk/wiki/index.php?title=Main_Page info]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814601</id>
		<title>Nano Electrochemistry Group</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814601"/>
		<updated>2023-11-30T20:39:50Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: Undo revision 814600 by Hz1420 (talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;padding: 20px; background: #87adde; border: 1px solid #FFAA99; font-family: Trebuchet MS, sans-serif; font-size: 105%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This page provides a series of tutorials designed to help with the computational modelling of electrochemical systems; their aim is to provide general workflows and useful tips to model fundamental components and properties of electrochemical systems. The tutorials have been designed by the researchers of the Computational NanoElectrochemistry Group led by Dr Clotilde Cucinotta [link to group page] and collaborators. &lt;br /&gt;
&lt;br /&gt;
Several simulation packages (CP2K, LAMMPS, QuantumEspresso, etc.), as well as other tools, such as molecular visualisers or programming languages, are described in these tutorials; links to the relevant manuals are provided at the bottom of the page. &lt;br /&gt;
&lt;br /&gt;
Scripts and programs written by the members of the research group are also described in each tutorial; these tools have been devised to help with running calculations and with data analysis and can be found in the linked GitLab repository [https://gitlab.doc.ic.ac.uk/rgc]. &lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Compiling Codes and Running Calculations on a HPC cluster=&lt;br /&gt;
&lt;br /&gt;
===[[How to run on ARCHER 2]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Imperial CX1: Instructions and basic concepts of parallel computing]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A collection of useful resources and brief introductions to the basic concepts of parallel computing for beginners to use the high-performance computing service at Imperial.&lt;br /&gt;
&lt;br /&gt;
===[[Run CRYSTALs on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Instructions on how to submit a CRYSTAL job on CX1&lt;br /&gt;
&lt;br /&gt;
===[[Compile CP2Kv9.1 on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Modelling and Visualising Materials=&lt;br /&gt;
&lt;br /&gt;
==Modelling of Interfaces and Adsorption processes==&lt;br /&gt;
&lt;br /&gt;
===[[Building structures with Pymatgen]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fei]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for generating crystal structure and surface with Python.&lt;br /&gt;
&lt;br /&gt;
===[[ASE and materials modelling]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Adsorption of molecule on surfaces|Adsorption of molecule on surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Paolo]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the adsorption energy of a molecule (or, more in general, any particle) over a specific surface.&lt;br /&gt;
&lt;br /&gt;
== Error Evaluation during Simulations==&lt;br /&gt;
&lt;br /&gt;
===[[Optimization of metallic surfaces parameters | CP2K: Optimizing parameters for metallic surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: Tutorials on how to define the appropriate set of parameters needed to model a metallic system: Basis set, CUTOFF and &#039;&#039;&#039;k&#039;&#039;&#039;-points grid;&lt;br /&gt;
:: Tutorials on how to calculate relevant quantities of metallic surfaces: work function, equilibrium lattice parameter and electronic structure;&lt;br /&gt;
: System: metallic surfaces (Platinum slab used as example);&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Hard_carbon | CP2K: Simulation of Hard Carbons]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Luke]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the simulation of hard carbon?&lt;br /&gt;
&lt;br /&gt;
===[[Convergence test of critical parameters by CRYSTAL | CRYSTAL: Convergence tests of critical parameters]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising simulation parameters using the DFT code CRYSTAL (LCAO-GTO basis set).&lt;br /&gt;
&lt;br /&gt;
===[[Memristors | Quantum Espresso: Simulation of Memristors]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Felix]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising simulation parameters using the DFT code QuantumEspresso (plane waves basis set). The simulated system is a ZnO surface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Postprocessing==&lt;br /&gt;
&lt;br /&gt;
===[[Analysing AIMD runs with MATLAB in-house suit|Analysing AIMD runs with MATLAB in-house suit]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Rashid]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Surface analysis===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Calculation of radial average|Calculation of radial average]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;[[Contributors| Kalman]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the radial average ?.&lt;br /&gt;
&lt;br /&gt;
==Machine Learning Potentials==&lt;br /&gt;
&lt;br /&gt;
===[[Building ML potentials with AML|Building ML potentials with AML]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Anthony]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for building ML potentials with AML.&lt;br /&gt;
&lt;br /&gt;
==Activation Barriers==&lt;br /&gt;
&lt;br /&gt;
===[[NEB Calculation]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Lammps and plumed | Metadynamics with Lammps and plumed]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Frederik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial on how to use the PLUMED software package to perform biased molecular dynamics simulations in LAMMPS.&lt;br /&gt;
&lt;br /&gt;
==Methodologic developments==&lt;br /&gt;
&lt;br /&gt;
===[[Potential control and current flow using CP2K+SMEAGOL]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: How to run CP2K+SMEAGOL and SIESTA+SMEAGOL calculations&lt;br /&gt;
:: How to exploit SMEAGOL parallelism&lt;br /&gt;
: System: Au nanojunctions&lt;br /&gt;
: Computational package: CP2K, SIESTA, SMEAGOL.&lt;br /&gt;
&lt;br /&gt;
===[[Converging magnetic systems in CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: MULTIPLICITY keyword to calculate magnetic systems&lt;br /&gt;
:: &amp;amp;BS section and MAGNETIZATION keyword to improve convergence&lt;br /&gt;
: System: Metallic bulk Ni and slab in vacuum&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Running a HP-DFT calculation with CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Margherita ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A tutorial to run a HP-DFT calculation using CP2K&lt;br /&gt;
&lt;br /&gt;
===[[Solving 1D Poisson equation |Solving 1D Poisson equation]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Remi Khatib ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the solution of the 1D Poisson equations given a distribution of point charges&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Tutorials =&lt;br /&gt;
&lt;br /&gt;
===[[Dimers in gas phase|Dimers in gas phase]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fredrik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising dimers in the gas phase using Gaussian.&lt;br /&gt;
&lt;br /&gt;
===[[TrendsCatalyticActivity | Trends in catalytic Activity]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Clotilde Cucinotta]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for a computational experiment about trends in catalytic activity for hydrogen evolution. This experiment is part of the third year computational chemistry lab. &lt;br /&gt;
&lt;br /&gt;
=Others=&lt;br /&gt;
&lt;br /&gt;
== Becoming an Efficient Research Scientist ==&lt;br /&gt;
&lt;br /&gt;
===[[Writing a Project Proposal]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Nicholas Harrison ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Tools==&lt;br /&gt;
&lt;br /&gt;
===[https://www.cp2k.org/about CP2K]===&lt;br /&gt;
* [[CP2K_Tutorial|CP2K TUTORIAL]];&lt;br /&gt;
* [https://github.com/cp2k/cp2k/blob/master/INSTALL.md Download and install CP2K ];&lt;br /&gt;
* [https://manual.cp2k.org/#gsc.tab=0 Manual];&lt;br /&gt;
* [https://www.cp2k.org/howto Useful HOWTOs];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.quantum-espresso.org/ QUANTUM ESPRESSO]===&lt;br /&gt;
* [https://www.quantum-espresso.org/download Download and install QUANTUM ESPRESSO];&lt;br /&gt;
* [https://www.quantum-espresso.org/resources/tutorials Useful Tutorials];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.lammps.org/ LAMMPS]===&lt;br /&gt;
* [https://www.lammps.org/download.html Download LAMMPS];&lt;br /&gt;
* [https://docs.lammps.org/Manual.html Manual];&lt;br /&gt;
* [https://www.lammps.org/tutorials.html Tutorials];&lt;br /&gt;
&lt;br /&gt;
===[https://www.crystal.unito.it/index.html CRYSTAL]===&lt;br /&gt;
* [https://tutorials.crystalsolutions.eu/ CRYSTAL Tutorial Project]&lt;br /&gt;
* [https://www.crystal.unito.it/basis_sets.html CRYSTAL basis set database] - Parameterised and tested for solid state calculations&lt;br /&gt;
* [https://www.basissetexchange.org/ Basis Set Exchange] - Note that this site usually contains very diffuse basis sets for quantum chemistry, which might cause problems for solid state calculations.&lt;br /&gt;
* [https://vallico.net/mike_towler/crystal.html Mike Towler&#039;s basis set] - Parameterised around the early 2000s&lt;br /&gt;
* [https://crysplot.crystalsolutions.eu/ CRYSPLOT] - A web-based visualisation tool&lt;br /&gt;
* [https://crystal-code-tools.github.io/CRYSTALpytools/ CRYSTALpytools] - A python-based toolbox for CRYSTAL inputs and outputs.&lt;br /&gt;
More information is available in [https://www.crystal.unito.it/documentation.html CRYSTAL23 official site].&lt;br /&gt;
&lt;br /&gt;
===[https://www.tcd.ie/Physics/Smeagol/SmeagolAbout.htm Smeagol]===&lt;br /&gt;
&lt;br /&gt;
==Molecular visualizers==&lt;br /&gt;
* [http://www.ks.uiuc.edu/Research/vmd/ VMD]&lt;br /&gt;
* [http://www.xcrysden.org/ Xcrysden]&lt;br /&gt;
* [https://jp-minerals.org/vesta/en/ VESTA]&lt;br /&gt;
* [https://gitlab.com/bmgcsc/dl-visualize-v3 DLV3]&lt;br /&gt;
&lt;br /&gt;
==Useful programming languages and environments== &lt;br /&gt;
* [http://www-eio.upc.edu/lceio/manuals/Fortran95-manual.pdf Fortran]&lt;br /&gt;
* [https://docs.python.org/3/ Python]&lt;br /&gt;
* [https://www.anaconda.com/ Anaconda]&lt;br /&gt;
* [https://wiki.fysik.dtu.dk/ase/ ASE]&lt;br /&gt;
* [https://pymatgen.org/ Pymatgen]&lt;br /&gt;
* [https://phonopy.github.io/phonopy/ Phonopy]&lt;br /&gt;
&lt;br /&gt;
==Crystallography==&lt;br /&gt;
* [https://it.iucr.org/ International Crystallography Table]&lt;br /&gt;
* [https://www.cryst.ehu.es/#retrievaltop Bilbao Crystallographic Server]&lt;br /&gt;
* [https://www.ccdc.cam.ac.uk/structures/ Cambridge Database]&lt;br /&gt;
* [https://stokes.byu.edu/iso/findsym.php Find Symmetry Web Service]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://wiki.ch.ic.ac.uk/wiki/index.php?title=Main_Page info]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814600</id>
		<title>Nano Electrochemistry Group</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814600"/>
		<updated>2023-11-30T20:39:08Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Run CRYSTALs on Imperial CX1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;padding: 20px; background: #87adde; border: 1px solid #FFAA99; font-family: Trebuchet MS, sans-serif; font-size: 105%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This page provides a series of tutorials designed to help with the computational modelling of electrochemical systems; their aim is to provide general workflows and useful tips to model fundamental components and properties of electrochemical systems. The tutorials have been designed by the researchers of the Computational NanoElectrochemistry Group led by Dr Clotilde Cucinotta [link to group page] and collaborators. &lt;br /&gt;
&lt;br /&gt;
Several simulation packages (CP2K, LAMMPS, QuantumEspresso, etc.), as well as other tools, such as molecular visualisers or programming languages, are described in these tutorials; links to the relevant manuals are provided at the bottom of the page. &lt;br /&gt;
&lt;br /&gt;
Scripts and programs written by the members of the research group are also described in each tutorial; these tools have been devised to help with running calculations and with data analysis and can be found in the linked GitLab repository [https://gitlab.doc.ic.ac.uk/rgc]. &lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Compiling Codes and Running Calculations on a HPC cluster=&lt;br /&gt;
&lt;br /&gt;
===[[How to run on ARCHER 2]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Imperial CX1: Instructions and basic concepts of parallel computing]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A collection of useful resources and brief introductions to the basic concepts of parallel computing for beginners to use the high-performance computing service at Imperial.&lt;br /&gt;
&lt;br /&gt;
===[[CMSG disk and Shared Software on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Instructions on shared software hosted on the CMSG disk, available on Imperial CX1&lt;br /&gt;
&lt;br /&gt;
===[[Compile CP2Kv9.1 on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Modelling and Visualising Materials=&lt;br /&gt;
&lt;br /&gt;
==Modelling of Interfaces and Adsorption processes==&lt;br /&gt;
&lt;br /&gt;
===[[Building structures with Pymatgen]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fei]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for generating crystal structure and surface with Python.&lt;br /&gt;
&lt;br /&gt;
===[[ASE and materials modelling]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Adsorption of molecule on surfaces|Adsorption of molecule on surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Paolo]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the adsorption energy of a molecule (or, more in general, any particle) over a specific surface.&lt;br /&gt;
&lt;br /&gt;
== Error Evaluation during Simulations==&lt;br /&gt;
&lt;br /&gt;
===[[Optimization of metallic surfaces parameters | CP2K: Optimizing parameters for metallic surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: Tutorials on how to define the appropriate set of parameters needed to model a metallic system: Basis set, CUTOFF and &#039;&#039;&#039;k&#039;&#039;&#039;-points grid;&lt;br /&gt;
:: Tutorials on how to calculate relevant quantities of metallic surfaces: work function, equilibrium lattice parameter and electronic structure;&lt;br /&gt;
: System: metallic surfaces (Platinum slab used as example);&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Hard_carbon | CP2K: Simulation of Hard Carbons]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Luke]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the simulation of hard carbons.&lt;br /&gt;
&lt;br /&gt;
===[[Convergence test of critical parameters by CRYSTAL | CRYSTAL: Convergence tests of critical parameters]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising simulation parameters using the DFT code CRYSTAL (LCAO-GTO basis set).&lt;br /&gt;
&lt;br /&gt;
===[[Memristors | Quantum Espresso: Simulation of Memristors]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Felix]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising simulation parameters using the DFT code QuantumEspresso (plane waves basis set). The simulated system is a ZnO surface.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Postprocessing==&lt;br /&gt;
&lt;br /&gt;
===[[Analysing AIMD runs with MATLAB in-house suit|Analysing AIMD runs with MATLAB in-house suit]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Rashid]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Surface analysis===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Calculation of radial average|Calculation of radial average]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;[[Contributors| Kalman]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the radial average.&lt;br /&gt;
&lt;br /&gt;
==Machine Learning Potentials==&lt;br /&gt;
&lt;br /&gt;
===[[Building ML potentials with AML|Building ML potentials with AML]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Anthony]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for building ML potentials with AML.&lt;br /&gt;
&lt;br /&gt;
==Activation Barriers==&lt;br /&gt;
&lt;br /&gt;
===[[NEB Calculation]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Lammps and plumed | Metadynamics with Lammps and plumed]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Frederik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial on how to use the PLUMED software package to perform biased molecular dynamics simulations in LAMMPS.&lt;br /&gt;
&lt;br /&gt;
==Methodological developments==&lt;br /&gt;
&lt;br /&gt;
===[[Potential control and current flow using CP2K+SMEAGOL]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: How to run CP2K+SMEAGOL and SIESTA+SMEAGOL calculations&lt;br /&gt;
:: How to exploit SMEAGOL parallelism&lt;br /&gt;
: System: Au nanojunctions&lt;br /&gt;
: Computational package: CP2K, SIESTA, SMEAGOL.&lt;br /&gt;
&lt;br /&gt;
===[[Converging magnetic systems in CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: MULTIPLICITY keyword to calculate magnetic systems&lt;br /&gt;
:: &amp;amp;BS section and MAGNETIZATION keyword to improve convergence&lt;br /&gt;
: System: Metallic bulk Ni and slab in vacuum&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Running a HP-DFT calculation with CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Margherita ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A tutorial to run a HP-DFT calculation using CP2K&lt;br /&gt;
&lt;br /&gt;
===[[Solving 1D Poisson equation |Solving 1D Poisson equation]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Remi Khatib ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the solution of the 1D Poisson equations given a distribution of point charges&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Tutorials=&lt;br /&gt;
&lt;br /&gt;
===[[Dimers in gas phase|Dimers in gas phase]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fredrik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising dimers in the gas phase using Gaussian.&lt;br /&gt;
&lt;br /&gt;
===[[TrendsCatalyticActivity | Trends in catalytic Activity]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Clotilde Cucinotta]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for a computational experiment about trends in catalytic activity for hydrogen evolution. This experiment is part of the third year computational chemistry lab. &lt;br /&gt;
&lt;br /&gt;
=Others=&lt;br /&gt;
&lt;br /&gt;
== Becoming an Efficient Research Scientist ==&lt;br /&gt;
&lt;br /&gt;
===[[Writing a Project Proposal]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Nicholas Harrison ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Tools==&lt;br /&gt;
&lt;br /&gt;
===[https://www.cp2k.org/about CP2K]===&lt;br /&gt;
* [[CP2K_Tutorial|CP2K TUTORIAL]];&lt;br /&gt;
* [https://github.com/cp2k/cp2k/blob/master/INSTALL.md Download and install CP2K ];&lt;br /&gt;
* [https://manual.cp2k.org/#gsc.tab=0 Manual];&lt;br /&gt;
* [https://www.cp2k.org/howto Useful HOWTOs];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.quantum-espresso.org/ QUANTUM ESPRESSO]===&lt;br /&gt;
* [https://www.quantum-espresso.org/download Download and install QUANTUM ESPRESSO];&lt;br /&gt;
* [https://www.quantum-espresso.org/resources/tutorials Useful Tutorials];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.lammps.org/ LAMMPS]===&lt;br /&gt;
* [https://www.lammps.org/download.html Download LAMMPS];&lt;br /&gt;
* [https://docs.lammps.org/Manual.html Manual];&lt;br /&gt;
* [https://www.lammps.org/tutorials.html Tutorials];&lt;br /&gt;
&lt;br /&gt;
===[https://www.crystal.unito.it/index.html CRYSTAL]===&lt;br /&gt;
* [https://tutorials.crystalsolutions.eu/ CRYSTAL Tutorial Project]&lt;br /&gt;
* [https://www.crystal.unito.it/basis_sets.html CRYSTAL basis set database] - Parameterised and tested for solid state calculations&lt;br /&gt;
* [https://www.basissetexchange.org/ Basis Set Exchange] - Note that this site usually contains very diffuse basis sets for quantum chemistry, which might cause problems for solid state calculations.&lt;br /&gt;
* [https://vallico.net/mike_towler/crystal.html Mike Towler&#039;s basis set] - Parameterised around early 2000s&lt;br /&gt;
* [https://crysplot.crystalsolutions.eu/ CRYSPLOT] - A web-based visualisation tool&lt;br /&gt;
* [https://crystal-code-tools.github.io/CRYSTALpytools/ CRYSTALpytools] - A python-based toolbox for CRYSTAL inputs and outputs.&lt;br /&gt;
More information is available in [https://www.crystal.unito.it/documentation.html CRYSTAL23 official site].&lt;br /&gt;
&lt;br /&gt;
===[https://www.tcd.ie/Physics/Smeagol/SmeagolAbout.htm Smeagol]===&lt;br /&gt;
&lt;br /&gt;
==Molecular visualizers==&lt;br /&gt;
* [http://www.ks.uiuc.edu/Research/vmd/ VMD]&lt;br /&gt;
* [http://www.xcrysden.org/ Xcrysden]&lt;br /&gt;
* [https://jp-minerals.org/vesta/en/ VESTA]&lt;br /&gt;
* [https://gitlab.com/bmgcsc/dl-visualize-v3 DLV3]&lt;br /&gt;
&lt;br /&gt;
==Useful programming languages and environments== &lt;br /&gt;
* [http://www-eio.upc.edu/lceio/manuals/Fortran95-manual.pdf Fortran]&lt;br /&gt;
* [https://docs.python.org/3/ Python]&lt;br /&gt;
* [https://www.anaconda.com/ Anaconda]&lt;br /&gt;
* [https://wiki.fysik.dtu.dk/ase/ ASE]&lt;br /&gt;
* [https://pymatgen.org/ Pymatgen]&lt;br /&gt;
* [https://phonopy.github.io/phonopy/ Phonopy]&lt;br /&gt;
&lt;br /&gt;
==Crystallography==&lt;br /&gt;
* [https://it.iucr.org/ International Crystallography Table]&lt;br /&gt;
* [https://www.cryst.ehu.es/#retrievaltop Bilbao Crystallographic Server]&lt;br /&gt;
* [https://www.ccdc.cam.ac.uk/structures/ Cambridge Database]&lt;br /&gt;
* [https://stokes.byu.edu/iso/findsym.php Find Symmetry Web Service]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://wiki.ch.ic.ac.uk/wiki/index.php?title=Main_Page info]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814526</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814526"/>
		<updated>2023-11-05T23:29:35Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CRYSTAL23 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information about the [https://www.crystal.unito.it/index.html CRYSTAL] DFT code versions previously and currently used within the group on Imperial CX1 is collected on this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compiling Env : EasyBuild Intel 2023a&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of the CX1 general job submission script is available on the group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo]; it is not needed in practice, but it is useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use the default values, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job submission scripts to refer to. It can also be edited according to the user&#039;s needs.&lt;br /&gt;
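&lt;br /&gt;
For example, the stored options can be inspected with a plain &amp;lt;code&amp;gt;cat&amp;lt;/code&amp;gt; (the path below is the default location mentioned above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ cat ${HOME}/etc/runCRYSTAL23/settings&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;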
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
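&lt;br /&gt;
The appended lines can then be checked with, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ tail ~/.bash_profile&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;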
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file if &#039;-nc&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file if &#039;-nd&#039; is not set&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value (1) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Memory requested per node. If not set the default value (512GB) is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of the MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel CRYSTAL23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates a &#039;mgo.qsub&#039; file requesting 1 node with a maximum walltime of 1 hour. Similarly, after the job has finished, the following command runs a parallel properties calculation on 12 CPUs, based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt should be specified the same number of times, while -ref should appear either not at all or as many times as -x. In the case above, &#039;no&#039; is a reserved keyword meaning no reference. Note that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to an error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814417</id>
		<title>Imperial CX1: Instructions and basic concepts of parallel computing</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814417"/>
		<updated>2023-10-16T18:30:18Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Secure your storage: Work directory and home directory */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This tutorial is divided into 2 separate sections. In the first section, introductions to CX1 and its available resources are listed and classified. Since the [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/ Research Computing Service (RCS)] team has already developed great tutorials on their webpages, this part functions as a guide towards the RCS webpages with necessary supplementary comments. In the second section, basic concepts of parallel computing and explanations of important terms are introduced. The main focus of this section is helping beginners to understand how high-performance computers (HPC) work on the basis of their daily practice. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This tutorial was initially written between Feb. and Mar. 2022 to be shared within the group for induction and training purposes &amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Connect-to-the-Imperial-Cluster/&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Structure-and-usage-of-clusters/&amp;lt;/ref&amp;gt;. Special thanks to Mr K. Tallat-Kelpsa, Ms A. Arber, Dr G. Mallia and Prof N. M. Harrison.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Introduction to CX1 ==&lt;br /&gt;
CX1 is the old name of the first HPC that served the whole college. New facilities (known as CX2) were gradually installed and integrated with the old system (CX3, a rather short-lived domain), while CX1 remains the most popular name, generally referring to the college-owned clusters. To grant a student access to CX1, the group PI can, on behalf of that student, ask the RCS team to add the specified account to the HPC active user mailing list.&lt;br /&gt;
=== Connect to CX1 ===&lt;br /&gt;
CX1 is typically accessed via ssh (secure shell). The Linux command line (Linux &amp;amp; MacOS) / sub-system (Windows 10, 11) &amp;lt;ref&amp;gt;https://learn.microsoft.com/en-us/windows/wsl/install&amp;lt;/ref&amp;gt; / an SSH client (such as XShell &amp;lt;ref&amp;gt;https://www.xshell.com/en/xshell/&amp;lt;/ref&amp;gt;) can be used. A VPN is needed for off-campus users. &lt;br /&gt;
&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/connect-communicate/remote-access/virtual-private-network-vpn/ Step-by-step guide to setup VPN]&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/getting-started/using-ssh/ How to use ssh]&lt;br /&gt;
&lt;br /&gt;
In a Linux command line, use the following command to connect to CX1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh -XY username@login.hpc.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
P.S. The &amp;lt;code&amp;gt;-XY&amp;lt;/code&amp;gt; option can be omitted in most cases, if you do not need a GUI to run the program.&lt;br /&gt;
&lt;br /&gt;
Alternatively, when the VPN service is unstable or unavailable, it is possible to go through the SSH gateway of the college, which acts as an &#039;agent&#039;: first connect to the gateway with the command below, then type the previous ssh command in the gateway&#039;s command line to reach CX1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh username@sshgw.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; command to upload / download files; its usage is similar to the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; commands. For example, to upload a file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp /local/path/file_name username@login.hpc.ic.ac.uk:/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
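&lt;br /&gt;
To download, simply swap the source and the destination (the paths are placeholders, as above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp username@login.hpc.ic.ac.uk:/path/file_name /local/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;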
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
The [https://wiki.imperial.ac.uk/pages/viewpage.action?spaceKey=HPC&amp;amp;title=High+Performance+Computing RCS Wiki Page] contains information needed. [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/get-support/contact-us/ The support page], [https://wiki.imperial.ac.uk/display/HPC/Attend+a+clinic online clinic] and [https://wiki.imperial.ac.uk/display/HPC/Courses courses from graduate school] are available. To examine the status of CX1, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== Environmental Variables and Disk Space ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;env&amp;lt;/code&amp;gt; to access all the environmental variables - be careful, the output is &#039;&#039;&#039;HUGE&#039;&#039;&#039;. Some useful environmental variables:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;${USER}&amp;lt;/code&amp;gt; The user&#039;s college account, i.e., login credential.&lt;br /&gt;
* &amp;lt;code&amp;gt;${HOME}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/home&#039;, or &#039;~&#039;, which has 1 TB disk space for data backups. &lt;br /&gt;
* &amp;lt;code&amp;gt;${EPHEMERAL}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/ephemeral&#039; Temporary, unlimited disk space; files are kept for 30 days. Suitable for running calculations.&lt;br /&gt;
* &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; Paths to executables can be appended for quick access. The Environment Modules package (see below) can do that automatically.&lt;br /&gt;
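&lt;br /&gt;
These variables can be checked quickly from a login shell, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ echo ${HOME} ${EPHEMERAL}&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;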
&lt;br /&gt;
==== Software Management ====&lt;br /&gt;
&lt;br /&gt;
The Environment Modules&amp;lt;ref&amp;gt;https://modules.readthedocs.io/en/latest/&amp;lt;/ref&amp;gt; package is implemented on CX1 to manage computing software (see the following section for introductions). Basic commands are listed below:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; List the available modules&lt;br /&gt;
* &amp;lt;code&amp;gt;module load mod_name&amp;lt;/code&amp;gt; Load a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module rm mod_name&amp;lt;/code&amp;gt; Remove a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt; List all the loaded modules in the current environment&lt;br /&gt;
* &amp;lt;code&amp;gt;module help mod_name&amp;lt;/code&amp;gt; Check the instructions of the module &#039;mod_name&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: There is a CRYSTAL14 module in the list. For users in NMH&#039;s group, the latest CRYSTAL edition is available, so do not use that module.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Job Partition Guide ==== &lt;br /&gt;
A hierarchy of job classes is designed for the optimal efficiency of CX1. The current job partition guide is available on the [https://wiki.imperial.ac.uk/display/HPC/New+Job+sizing+guidance RCS Wiki Page]&lt;br /&gt;
&lt;br /&gt;
==== Batch System ====&lt;br /&gt;
&lt;br /&gt;
The PBS batch system &amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Portable_Batch_System&amp;lt;/ref&amp;gt; is used on CX1 (see the following section for the meaning of batch system). Basic commands of PBS are listed below:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;availability&amp;lt;/code&amp;gt; Check the availability of computational resources  &lt;br /&gt;
* &amp;lt;code&amp;gt;qsub filename.qsub&amp;lt;/code&amp;gt; Submit the job &#039;filename&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; Check the state of submitted jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;qdel jobID&amp;lt;/code&amp;gt; Kill the process with the ID number &#039;jobID&#039;&lt;br /&gt;
&lt;br /&gt;
To examine the queue status across the whole system, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== A General Job Submission Script ====&lt;br /&gt;
A general job submission script for CX1 has been developed by the author. See the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub repository] of CMSG for details. Parameterised software includes: CRYSTAL17/23, Quantum Espresso 7, LAMMPS, GROMACS, GULP6.&lt;br /&gt;
&lt;br /&gt;
== Basic Concepts of Parallel Computing ==&lt;br /&gt;
A brief introduction to parallel computing is given in this section, taking CX1, a medium-sized general-purpose cluster, as an example.&lt;br /&gt;
&lt;br /&gt;
=== Divide a job: Nodes, Processors and Threads ===&lt;br /&gt;
&lt;br /&gt;
;Node&lt;br /&gt;
:A group of CPUs, possibly with GPUs / coprocessors for acceleration. Memory and input files are shared by the processors in the same node, so a node can be considered an independent computer. Communication between nodes is achieved by an ultra-fast network, which is usually the bottleneck of modern clusters. &lt;br /&gt;
&lt;br /&gt;
;Processor&lt;br /&gt;
:The unit to deal with a &#039;process&#039;, also known as &#039;central processing unit&#039;, or CPU. Processors in the same node communicate via shared memory. &lt;br /&gt;
&lt;br /&gt;
;Thread&lt;br /&gt;
:Subdivision of a process. Multiple threads in the same process share the resources allocated to that process. &lt;br /&gt;
&lt;br /&gt;
The figure on the right-hand side illustrates the hierarchy of node, processor, and thread. &#039;&#039;&#039;Note:&#039;&#039;&#039; The word &#039;processor&#039; is not a very accurate term here; &#039;process&#039; would be better (the figure has not been updated). Many modern CPUs support sub-CPU threading, which means the number of logical CPUs is larger than the number of physical CPUs, so it is possible to have multiple threads within 1 processor. However, it is also possible to use multiple processors for 1 process, or even 1 thread. &lt;br /&gt;
&lt;br /&gt;
[[File:Job_Partition.png|450px|right|Job Partition]]&lt;br /&gt;
&lt;br /&gt;
==== Multiple processes vs multiple threads ====&lt;br /&gt;
&lt;br /&gt;
From the figure above, it is not difficult to distinguish between a &#039;process&#039; and a &#039;thread&#039;: a process is the smallest unit for resource allocation, while a thread is part of a process. The idea of a &#039;thread&#039; was introduced to address the huge difference between the speeds of CPU and RAM. The CPU is typically several orders of magnitude faster than RAM, so the bottleneck of a process is usually loading the required data from RAM rather than the computations on the CPU. By using multiple threads in the same process, various branches of the same program can be executed simultaneously. Therefore, the shared data does not need to be read from RAM multiple times, and the loading time for threads is much smaller than for processes. &lt;br /&gt;
&lt;br /&gt;
However, multithreading is not always advantageous. A technical prerequisite is that the program should be developed for multithreading purposes. Python, for example, is a pseudo-multithreaded language (its standard interpreter has a global interpreter lock), while Java supports true multithreading. Sometimes multithreading can lead to catastrophic results. Since threads share the same resource allocation (CPU, RAM, I/O, etc.), when a thread fails, the whole process fails as well. In contrast, with multiple processes, the other processes are protected if one process fails. &lt;br /&gt;
&lt;br /&gt;
In practice, on clusters users can run each process either in serial (i.e., number of threads = 1) or in parallel (i.e., number of threads &amp;gt; 1). However, &#039;&#039;&#039;the former is recommended&#039;&#039;&#039;, because resource management is more robust. Besides the problem mentioned above, multithreading might lead to problems such as memory leaks when running programs that are not developed for multithreading or that rely on unsuitable packages (here is [https://docs.archer2.ac.uk/known-issues/#oom-due-to-memory-leak-in-libfabric-added-2022-02-23 a famous issue] with libfabric on ARCHER2 identified in early 2022).&lt;br /&gt;
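&lt;br /&gt;
For codes parallelised with OpenMP, a common way to enforce the recommended one-thread-per-process behaviour is to set the thread count explicitly in the job script (a minimal sketch, assuming the code reads the standard OpenMP variable):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export OMP_NUM_THREADS=1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;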
&lt;br /&gt;
==== More nodes vs more CPUs ====&lt;br /&gt;
&lt;br /&gt;
When the allocated memory permits, from my experience, using more CPUs/processes per node is usually the better choice, considering that all nodes have independent memory spaces and inter-node communication goes over the network. It almost always takes longer to coordinate nodes than to coordinate processors within the same node.&lt;br /&gt;
&lt;br /&gt;
=== The internal coordinator: What is MPI ===&lt;br /&gt;
&lt;br /&gt;
The Message Passing Interface, or MPI, is a standard for communication and data transfer between nodes and therefore between distributed memories. It is used via MPI libraries. The most popular implementations include: &lt;br /&gt;
&lt;br /&gt;
* MPICH &amp;lt;ref&amp;gt;https://www.mpich.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* Intel MPI &amp;lt;ref&amp;gt;https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html#gs.xld8oa&amp;lt;/ref&amp;gt; - a popular implementation of MPICH especially optimised for Intel CPUs&lt;br /&gt;
* OpenMPI &amp;lt;ref&amp;gt;https://www.open-mpi.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* OpenMP &amp;lt;ref&amp;gt;https://www.openmp.org/&amp;lt;/ref&amp;gt; - Not MPI; parallelization based on shared memory, so only implemented in a single node; can be used for multithreading&lt;br /&gt;
&lt;br /&gt;
In practice, hybrid parallelization combining MPI and OpenMP to run multithreaded jobs on the cluster is allowed, though sometimes not recommended. The first process (not necessarily a whole node or processor) is usually allocated for I/O, and the rest are used for parallel computing.&lt;br /&gt;
&lt;br /&gt;
So far, MPI officially supports only C/C++ and FORTRAN, which explains why most parallel scientific computing software is written in these languages. To launch an executable in parallel, one should use &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;.&lt;br /&gt;
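&lt;br /&gt;
A generic parallel launch looks like the sketch below; the process count, executable and file names are purely illustrative, and within a batch job the scheduler usually supplies the process count automatically:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ mpiexec -n 24 ./my_code &amp;lt; input_file &amp;gt; output_file&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;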
&lt;br /&gt;
=== Secure your storage: Tmp memory, Work directory and home directory ===&lt;br /&gt;
&lt;br /&gt;
Almost all modern clusters have separate disk spaces for different purposes, namely temporary memory, the work directory and the home directory. This originates again from the famous speed difference between the CPU and RAM/disk. Two distinct kinds of disks are used to improve the overall efficiency and to secure important data:&lt;br /&gt;
&lt;br /&gt;
* For temporary memory, large disks with high read/write speed are used. It is allocated per job request and is not accessible from login nodes. Everything is erased after the job terminates. &lt;br /&gt;
* For the work directory, large disks with high read/write speed are also used. Data stored in the work directory is usually not backed up and, in the case of CX1, will be automatically cleaned after a fixed time. &lt;br /&gt;
* For the home directory, mechanical disks with slower read/write speed but better robustness are used. Usually files in the home space are backed up.&lt;br /&gt;
&lt;br /&gt;
For large clusters like ARCHER2 &amp;lt;ref&amp;gt;https://www.archer2.ac.uk/&amp;lt;/ref&amp;gt;, the work directory and the home directory are completely separated, i.e., the home directory is only visible to login nodes, while the work directory is visible to both job and login nodes. Job submission from the home directory is prohibited. For more flexible clusters like Imperial CX1, submitting jobs from the home directory and access to the home directory from job nodes are allowed, but storing temporary files in the home directory during a calculation is still not recommended because of the potential impact on other files and the reduced overall efficiency. (And it is nothing new for CX1 users to receive RDS failure news emails.)&lt;br /&gt;
&lt;br /&gt;
=== Setup your environment: What does an application need? ===&lt;br /&gt;
&lt;br /&gt;
==== Executable ==== &lt;br /&gt;
Binary executables should, theoretically, all be stored in &#039;/usr/bin&#039;. This never happens in practice, unless you are a fanatical fundamentalist of the early Linux releases. To guide your system to the desired executable, you can either laboriously type its absolute path every time you need it or add its directory to the environmental variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export PATH=${PATH}:path_to_bin&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running any executable in parallel requires MPI to coordinate all the processes/threads, so the path to the MPI executable is also required. Besides, many scientific codes require other specific environmental variables, such as paths to linear algebra packages. Read their documentation for further information.&lt;br /&gt;
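&lt;br /&gt;
To confirm which executable is actually picked up after modifying &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; (the program name below is a placeholder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ which my_code&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;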
&lt;br /&gt;
==== .lib/.a/.o files ==== &lt;br /&gt;
&lt;br /&gt;
When writing a script, you might need some extra packages to do more complex jobs. Those packages are developed by experts in computer science and can be called with a single line of code. The same thing happened when applications like CRYSTAL and ONETEP were being developed. &lt;br /&gt;
&lt;br /&gt;
However, scientific computing codes are usually distributed in the form of source code. Source code in FORTRAN/C/C++ needs to be compiled into a binary executable. There are 2 options when compiling:&lt;br /&gt;
&lt;br /&gt;
# Include the whole package in the executable as long as one of its functions is called, also known as a &#039;static lib&#039;.&lt;br /&gt;
# Only include a &#039;table of contents&#039; when compiling, also known as a &#039;dynamic lib&#039;. The packages needed are stored separately in &#039;.dll/.so&#039; files, making it possible for multiple applications to share the same lib.&lt;br /&gt;
&lt;br /&gt;
Details about compilation are beyond the scope of this post. The key point is: when running a dynamically linked application, information should be given to help the code find the libs it needs. This can be specified by: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:path_to_lib&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
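&lt;br /&gt;
A quick way to check whether a dynamically linked executable can find all the libs it needs is &amp;lt;code&amp;gt;ldd&amp;lt;/code&amp;gt; (the executable name below is a placeholder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ldd ./my_code&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;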
&lt;br /&gt;
For statically linked applications, you usually need not worry about this - but the size of the compiled executable might make you wonder whether there is an alternative way.&lt;br /&gt;
&lt;br /&gt;
==== Conflicts ====&lt;br /&gt;
&lt;br /&gt;
Improper earlier settings may lead to the wrong application, or the wrong version, being picked up if multiple applications with similar functions are installed on the system, such as the Intel compiler and GCC, or OpenMPI and MPICH - a common situation on shared computing resources. To avoid this, the path to the undesired application or lib should be removed from the environmental variables.&lt;br /&gt;
&lt;br /&gt;
==== Environmental Modules ====&lt;br /&gt;
&lt;br /&gt;
Environment Modules &amp;lt;ref&amp;gt;http://modules.sourceforge.net/&amp;lt;/ref&amp;gt; is a popular piece of software that manages the necessary environmental setup and conflicts for each application. It can easily add or remove environmental variables via commands (such as &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;module rm&amp;lt;/code&amp;gt;) and modulefiles written in the Tool Command Language (TCL)&amp;lt;ref&amp;gt;https://www.tcl.tk/&amp;lt;/ref&amp;gt;. The default directory of modulefiles is given in the environmental variable &amp;lt;code&amp;gt;${MODULEPATH}&amp;lt;/code&amp;gt;, but files in other directories can also be loaded by their absolute path.&lt;br /&gt;
&lt;br /&gt;
Both Imperial CX1 and ARCHER2 adopt this tool, through which pre-compiled applications are offered.&lt;br /&gt;
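&lt;br /&gt;
To inspect what a given modulefile actually changes before loading it, the &amp;lt;code&amp;gt;module show&amp;lt;/code&amp;gt; sub-command can be used (&#039;mod_name&#039; is a placeholder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module show mod_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;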
&lt;br /&gt;
=== The external coordinator: What is a batch system ===&lt;br /&gt;
&lt;br /&gt;
Always bear in mind that computational resources are limited, so you need to request a reasonable amount of resources for your job. Besides, the cluster also needs to calculate your budget, coordinate jobs submitted by various users, and make the best use of the available resources. When a job is running, you may also want to check its status. All of this is handled by batch systems.&lt;br /&gt;
&lt;br /&gt;
In practice, a Linux shell script is needed. Parameters for the batch system are set in the commented lines at the top of the file. After the user submits the script to the batch system, the system will:&lt;br /&gt;
&lt;br /&gt;
# Examine the parameters  &lt;br /&gt;
# Allocate and coordinate the requested resources  &lt;br /&gt;
# Set up the environment, such as environmental variables and package dependencies, and sync the same settings to all nodes&lt;br /&gt;
# Launch a parallel calculation - see mpi part&lt;br /&gt;
# Post-process&lt;br /&gt;
&lt;br /&gt;
Note that a &#039;walltime&#039; is usually required for a batch job, i.e., the maximum allowed running time of the job. The job will be &#039;killed&#039;, or suspended, when the elapsed time exceeds the walltime, and the rest of the script will not be executed. The &amp;lt;code&amp;gt;timeout&amp;lt;/code&amp;gt; command can be used to set a separate time limit for a specific command within the script.&lt;br /&gt;
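&lt;br /&gt;
To make this concrete, below is a minimal sketch of a PBS job script; the resource request, module name and executable are illustrative placeholders rather than CX1-specific recommendations:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #PBS -l select=1:ncpus=8:mem=32gb&lt;br /&gt;
  #PBS -l walltime=01:00:00&lt;br /&gt;
  # load the required software environment (placeholder module name)&lt;br /&gt;
  module load my_code_module&lt;br /&gt;
  # run from the directory the job was submitted from&lt;br /&gt;
  cd ${PBS_O_WORKDIR}&lt;br /&gt;
  mpiexec ./my_code &amp;lt; input_file &amp;gt; output_file&lt;br /&gt;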
&lt;br /&gt;
Common batch systems include PBS and Slurm &amp;lt;ref&amp;gt;https://slurm.schedmd.com/overview.html&amp;lt;/ref&amp;gt;. For the Imperial cluster CX1 and the MMM Hub Young (managed by UCL) &amp;lt;ref&amp;gt;http://mmmhub.ac.uk/young/&amp;lt;/ref&amp;gt;, the PBS system is implemented; for ARCHER2 and Tianhe-2 LvLiang (天河二号-吕梁), Slurm is implemented. Tutorials on batch systems are not covered here, since they are heavily tailored to specific machines - usually modifications are made to enhance efficiency. Refer to the specific user documentation for more information.&lt;br /&gt;
&lt;br /&gt;
Successfully setting up and submitting a batch job script means that you no longer need this tutorial. Before you get there, some considerations might be important:&lt;br /&gt;
&lt;br /&gt;
* How large is my system? Is it efficient to use the resources I requested (note that it is not a linear-scaling problem... Refer to [https://tutorials.crystalsolutions.eu/tutorial.html?td=tuto_HPC&amp;amp;tf=tuto_hpc#scale this test] on CRYSTAL17)?  &lt;br /&gt;
* To which queue should I submit my job? Is it too long/not applicable/not available?&lt;br /&gt;
* Is it safe to use multi-threading?  &lt;br /&gt;
* Is it memory, GPU etc. demanding?  &lt;br /&gt;
* Roughly how long will it take?  &lt;br /&gt;
* What is my budget code? Do I have enough resources?  &lt;br /&gt;
* Which MPI release version is my code compatible with? Should I load a module or set variables?  &lt;br /&gt;
* Any other specific environmental setups does my code need?  &lt;br /&gt;
* Do I have any post-processing script after MPI part is finished? How long does it take?&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814416</id>
		<title>Imperial CX1: Instructions and basic concepts of parallel computing</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814416"/>
		<updated>2023-10-16T18:23:12Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Divide a job: Nodes, Processors and Threads */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This tutorial is divided into 2 separate sections. In the first section, introductions to CX1 and its available resources are listed and classified. Since the [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/ Research Computing Service (RCS)] team has already developed great tutorials on their webpages, this part functions as a guide towards the RCS webpages with necessary supplementary comments. In the second section, basic concepts of parallel computing and explanations of important terms are introduced. The main focus of this section is helping beginners to understand how high-performance computers (HPC) work on the basis of their daily practice. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This tutorial was initially written between Feb. and Mar. 2022 to be shared within the group for induction and training purposes &amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Connect-to-the-Imperial-Cluster/&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Structure-and-usage-of-clusters/&amp;lt;/ref&amp;gt;. Special thanks to Mr K. Tallat-Kelpsa, Ms A. Arber, Dr G. Mallia and Prof N. M. Harrison.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Introduction to CX1 ==&lt;br /&gt;
CX1 is the old name of the first HPC that served the whole college. New facilities (known as CX2) were gradually installed and integrated with the old system (CX3, a rather short-lived domain), while CX1 remains the most popular name, generally referring to the college-owned clusters. To grant a student access to CX1, the group PI can, on behalf of that student, ask the RCS team to add the specified account to the HPC active user mailing list.&lt;br /&gt;
=== Connect to CX1 ===&lt;br /&gt;
CX1 is typically accessed via ssh (secure shell). The Linux command line (Linux &amp;amp; MacOS) / sub-system (Windows 10, 11) &amp;lt;ref&amp;gt;https://learn.microsoft.com/en-us/windows/wsl/install&amp;lt;/ref&amp;gt; / an SSH client (such as XShell &amp;lt;ref&amp;gt;https://www.xshell.com/en/xshell/&amp;lt;/ref&amp;gt;) can be used. A VPN is needed for off-campus users. &lt;br /&gt;
&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/connect-communicate/remote-access/virtual-private-network-vpn/ Step-by-step guide to setup VPN]&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/getting-started/using-ssh/ How to use ssh]&lt;br /&gt;
&lt;br /&gt;
In linux command line, use the following command to connect CX1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh -XY username@login.hpc.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
P.S. The &amp;lt;code&amp;gt;-XY&amp;lt;/code&amp;gt; option can be omitted in most cases, if you do not need a GUI to run the program.&lt;br /&gt;
&lt;br /&gt;
Alternatively, when the VPN service is unstable or unavailable, it is possible to go through the SSH gateway of the college, which acts as an &#039;agent&#039;: first connect to the gateway with the command below, then type the previous ssh command in the gateway&#039;s command line to reach CX1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh username@sshgw.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; command to upload / download files, which is similar to &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; command. For example, to upload a file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp /local/path/file_name username@login.hpc.ic.ac.uk:/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
The [https://wiki.imperial.ac.uk/pages/viewpage.action?spaceKey=HPC&amp;amp;title=High+Performance+Computing RCS Wiki Page] contains information needed. [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/get-support/contact-us/ The support page], [https://wiki.imperial.ac.uk/display/HPC/Attend+a+clinic online clinic] and [https://wiki.imperial.ac.uk/display/HPC/Courses courses from graduate school] are available. To examine the status of CX1, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== Environmental Variables and Disk Space ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;env&amp;lt;/code&amp;gt; to access all the environmental variables - be careful, the output is &#039;&#039;&#039;HUGE&#039;&#039;&#039;. Some useful environmental variables:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;${USER}&amp;lt;/code&amp;gt; The user&#039;s college account, i.e., login credential.&lt;br /&gt;
* &amp;lt;code&amp;gt;${HOME}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/home&#039;, or &#039;~&#039;, which has 1 TB disk space for data backups. &lt;br /&gt;
* &amp;lt;code&amp;gt;${EPHEMERAL}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/ephemeral&#039; Temporary, unlimited disk space; files are kept for 30 days. Suitable for running calculations.&lt;br /&gt;
* &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; Path to the executable can be attached for quick access. The Environment Modules package (see below) can automatically do that.&lt;br /&gt;
&lt;br /&gt;
==== Software Management ====&lt;br /&gt;
&lt;br /&gt;
The Environment Modules&amp;lt;ref&amp;gt;https://modules.readthedocs.io/en/latest/&amp;lt;/ref&amp;gt; package is implemented on CX1 to manage computing software (see the following section for introductions). Basic commands are listed below:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; List the available modules&lt;br /&gt;
* &amp;lt;code&amp;gt;module load mod_name&amp;lt;/code&amp;gt; Load a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module rm mod_name&amp;lt;/code&amp;gt; Remove a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt; List all the loaded modules in the current environment&lt;br /&gt;
* &amp;lt;code&amp;gt;module help mod_name&amp;lt;/code&amp;gt; Check the instructions of the module &#039;mod_name&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: There is a CRYSTAL14 module in the list. For users in NMH&#039;s group, the latest CRYSTAL edition is available, so do not use that module.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Job Partition Guide ==== &lt;br /&gt;
A hierarchy of job classes is designed for the optimal efficiency of CX1. The current job partition guide is available on the [https://wiki.imperial.ac.uk/display/HPC/New+Job+sizing+guidance RCS Wiki Page]&lt;br /&gt;
&lt;br /&gt;
==== Batch System ====&lt;br /&gt;
&lt;br /&gt;
The PBS batch system &amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Portable_Batch_System&amp;lt;/ref&amp;gt; is used on CX1 (see the following section for the meaning of batch system). Basic commands of PBS are listed below:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;availability&amp;lt;/code&amp;gt; Check the availability of computational resources  &lt;br /&gt;
* &amp;lt;code&amp;gt;qsub filename.qsub&amp;lt;/code&amp;gt; Submit the job &#039;filename&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; Check the state of submitted jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;qdel jobID&amp;lt;/code&amp;gt; Kill the process with the ID number &#039;jobID&#039;&lt;br /&gt;
&lt;br /&gt;
To examine the queue status across the whole system, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== A General Job Submission Script ====&lt;br /&gt;
A general job submission script for CX1 is developed by the author himself. See the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub repository] of CMSG for details. Parameterised software includes: CRYSTAL17/23, Quantum Espresso 7, LAMMPS, GROMACS, GULP6.&lt;br /&gt;
&lt;br /&gt;
== Basic Concepts of Parallel Computing ==&lt;br /&gt;
A brief introduction to parallel computing is given in this section, taking CX1, a medium-sized general-purpose cluster, as an example.&lt;br /&gt;
&lt;br /&gt;
=== Divide a job: Nodes, Processors and Threads ===&lt;br /&gt;
&lt;br /&gt;
;Node&lt;br /&gt;
:A group of CPUs, possibly with GPUs / coprocessors for acceleration. Memory and input files are shared by the processors in the same node, so a node can be considered an independent computer. Communication between nodes is achieved by an ultra-fast network, which is usually the bottleneck of modern clusters. &lt;br /&gt;
&lt;br /&gt;
;Processor&lt;br /&gt;
:The unit to deal with a &#039;process&#039;, also known as &#039;central processing unit&#039;, or CPU. Processors in the same node communicate via shared memory. &lt;br /&gt;
&lt;br /&gt;
;Thread&lt;br /&gt;
:Subdivision of a process. Multiple threads in the same process share the resources allocated to the CPU. &lt;br /&gt;
&lt;br /&gt;
The figure on the right-hand side illustrates the hierarchy of node, processor, and thread. &#039;&#039;&#039;Note:&#039;&#039;&#039; The word &#039;processor&#039; is not a very accurate term here; &#039;process&#039; would be better (the figure has not been updated). Many modern CPUs support sub-CPU threading, which means the number of logical CPUs is larger than the number of physical CPUs, so it is possible to have multiple threads within 1 processor. However, it is also possible to use multiple processors for 1 process, or even 1 thread. &lt;br /&gt;
&lt;br /&gt;
[[File:Job_Partition.png|450px|right|Job Partition]]&lt;br /&gt;
&lt;br /&gt;
==== Multiple processes vs multiple threads ====&lt;br /&gt;
&lt;br /&gt;
From the figure above, it is not difficult to distinguish between a &#039;process&#039; and a &#039;thread&#039;: a process is the smallest unit for resource allocation, while a thread is part of a process. The idea of a &#039;thread&#039; was introduced to address the huge difference between the speeds of CPU and RAM. The CPU is typically several orders of magnitude faster than RAM, so the bottleneck of a process is usually loading the required data from RAM rather than the computations on the CPU. By using multiple threads in the same process, various branches of the same program can be executed simultaneously. Therefore, the shared data does not need to be read from RAM multiple times, and the loading time for threads is much smaller than for processes. &lt;br /&gt;
&lt;br /&gt;
However, multithreading is not always advantageous. A technical prerequisite is that the program should be developed for multithreading purposes. Python, for example, is a pseudo-multithreaded language (its standard interpreter has a global interpreter lock), while Java supports true multithreading. Sometimes multithreading can lead to catastrophic results. Since threads share the same resource allocation (CPU, RAM, I/O, etc.), when a thread fails, the whole process fails as well. In contrast, with multiple processes, the other processes are protected if one process fails. &lt;br /&gt;
&lt;br /&gt;
In practice, on clusters users can run each process either in serial (i.e., number of threads = 1) or in parallel (i.e., number of threads &amp;gt; 1). However, &#039;&#039;&#039;the former is recommended&#039;&#039;&#039;, because resource management is more robust. Besides the problem mentioned above, multithreading might lead to problems such as memory leaks when running programs that are not developed for multithreading or that rely on unsuitable packages (here is [https://docs.archer2.ac.uk/known-issues/#oom-due-to-memory-leak-in-libfabric-added-2022-02-23 a famous issue] with libfabric on ARCHER2 identified in early 2022).&lt;br /&gt;
&lt;br /&gt;
==== More nodes vs more CPUs ====&lt;br /&gt;
&lt;br /&gt;
When the allocated memory permits, from my experience, using more CPUs/processes per node is usually a better idea, considering that all nodes have independent memory space and the inter-node communications are achieved by wired networks. It almost always takes longer to coordinate nodes than to coordinate processors within the same node.&lt;br /&gt;
&lt;br /&gt;
=== The internal coordinator: What is MPI ===&lt;br /&gt;
&lt;br /&gt;
Message passing interface, or MPI, is a standard for communicating and transferring data between nodes and therefore distributed memories. It is utilised via MPI libraries. The most popular implementations include: &lt;br /&gt;
&lt;br /&gt;
* MPICH &amp;lt;ref&amp;gt;https://www.mpich.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* Intel MPI &amp;lt;ref&amp;gt;https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html#gs.xld8oa&amp;lt;/ref&amp;gt; - a popular implementation of MPICH especially optimised for Intel CPUs&lt;br /&gt;
* OpenMPI &amp;lt;ref&amp;gt;https://www.open-mpi.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* OpenMP &amp;lt;ref&amp;gt;https://www.openmp.org/&amp;lt;/ref&amp;gt; - Not MPI; parallelization based on shared memory, so only implemented in a single node; can be used for multithreading&lt;br /&gt;
&lt;br /&gt;
In practice, hybrid parallelization combining MPI and OpenMP to run multithreaded jobs on the cluster is allowed, though sometimes not recommended. The first process (not necessarily a whole node or processor) is usually allocated for I/O, and the rest are used for parallel computing.&lt;br /&gt;
&lt;br /&gt;
So far, MPI officially supports only C/C++ and FORTRAN, which explains why most parallel scientific computing software is written in these languages. To launch an executable in parallel, one should use &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Secure your storage: Work directory and home directory ===&lt;br /&gt;
&lt;br /&gt;
Almost all modern clusters have separate disk spaces for different purposes, namely the work directory and the home directory. This originates again from the famous speed difference between the CPU and RAM/disk. Two distinct kinds of disks are used to improve the overall efficiency and to secure important data:&lt;br /&gt;
&lt;br /&gt;
* For the work directory, large, fast (high-throughput) disks are used. Data stored in the work directory is usually not backed up and, in the case of CX1, will be automatically cleaned after a fixed period.  &lt;br /&gt;
* For the home directory, mechanical disks with slower read/write speed but better robustness are used. Files in the home space are usually backed up.&lt;br /&gt;
&lt;br /&gt;
For large clusters like ARCHER2 &amp;lt;ref&amp;gt;https://www.archer2.ac.uk/&amp;lt;/ref&amp;gt;, the work directory and the home directory are completely separated, i.e., the home directory is only visible to login nodes, while the work directory is visible to both compute (job) and login nodes. Submitting jobs from the home directory is prohibited. For more flexible clusters like Imperial CX1, submitting jobs from the home directory and accessing the home directory from compute nodes are allowed, but storing temporary files in the home directory during a calculation is still not recommended, because of the potential impact on other files and the reduced overall efficiency.&lt;br /&gt;
&lt;br /&gt;
=== Setup your environment: What does an application need? ===&lt;br /&gt;
&lt;br /&gt;
==== Executable ==== &lt;br /&gt;
Binary executables should, theoretically, all be stored in &#039;/usr/bin&#039;. This never happens in practice, unless you are a fanatical fundamentalist of the early Linux releases. To guide your system to the desired executable, you can either laboriously type its absolute path every time you need it or add its directory to the &amp;lt;code&amp;gt;PATH&amp;lt;/code&amp;gt; environmental variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export PATH=${PATH}:path_to_bin&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running an executable in parallel requires MPI to coordinate all the processes/threads, so the path to the MPI launcher is also required. Besides, many scientific codes require other specific environmental variables, for example to locate linear algebra packages. Read their documentation for further information.&lt;br /&gt;
&lt;br /&gt;
==== .lib/.a/.o files ==== &lt;br /&gt;
&lt;br /&gt;
When writing a script, you might need extra packages to do more complex jobs. Those packages are developed by experts and can be called with a single line of code. The same thing happens when applications like CRYSTAL and ONETEP are developed. &lt;br /&gt;
&lt;br /&gt;
However, scientific computing codes are usually distributed as source code. Source code in Fortran/C/C++ needs to be compiled into a binary executable, and there are two options for linking libraries during compilation:&lt;br /&gt;
&lt;br /&gt;
# Include the whole package in the executable whenever one of its functions is called, also known as &#039;static linking&#039;.&lt;br /&gt;
# Only include a &#039;table of contents&#039; when compiling, also known as &#039;dynamic linking&#039;. The packages needed are stored separately in &#039;.dll/.so&#039; files, making it possible for multiple applications to share the same library.&lt;br /&gt;
&lt;br /&gt;
Details of compilation are beyond the scope of this post. The point is: when running a dynamically linked application, the system must be told where to find the required libraries. This can be specified by: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:path_to_lib&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For statically linked applications, you usually need not worry about this - but the size of the compiled executable might make you wonder whether there is an alternative.&lt;br /&gt;
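&lt;br /&gt;
To check which shared libraries a dynamically linked executable needs, and whether they are currently found, the &amp;lt;code&amp;gt;ldd&amp;lt;/code&amp;gt; command can be used (a quick diagnostic; &#039;Pcrystal&#039; is just an example executable name):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ldd ./Pcrystal&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Libraries that cannot be located are reported as &#039;not found&#039;, which usually means &amp;lt;code&amp;gt;LD_LIBRARY_PATH&amp;lt;/code&amp;gt; is incomplete.&lt;br /&gt;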
&lt;br /&gt;
==== Conflicts ====&lt;br /&gt;
&lt;br /&gt;
Improper or leftover settings may lead to the wrong application, or the wrong version, being picked up when multiple applications with similar functions are installed on the system, such as the Intel compilers and GCC, or OpenMPI and MPICH - a common situation on shared computing resources. To avoid this, the path to the undesired application or library should be removed from the environmental variables.&lt;br /&gt;
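&lt;br /&gt;
To check which copy is currently being picked up, and what the relevant variables contain, commands like the following can be used (a quick diagnostic sketch):&lt;br /&gt;
&lt;br /&gt;
  ~$ which mpirun              # full path of the mpirun found first in PATH&lt;br /&gt;
  ~$ echo ${PATH}&lt;br /&gt;
  ~$ echo ${LD_LIBRARY_PATH}&lt;br /&gt;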
&lt;br /&gt;
==== Environmental Modules ====&lt;br /&gt;
&lt;br /&gt;
Environment Modules &amp;lt;ref&amp;gt;http://modules.sourceforge.net/&amp;lt;/ref&amp;gt; is a popular tool for managing the necessary environment setup and conflicts for each application. It can easily add or remove environmental variables via commands (such as &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;module rm&amp;lt;/code&amp;gt;) and modulefiles written in the Tool Command Language (TCL)&amp;lt;ref&amp;gt;https://www.tcl.tk/&amp;lt;/ref&amp;gt;. The default directory of modulefiles is given by the environmental variable &amp;lt;code&amp;gt;${MODULEPATH}&amp;lt;/code&amp;gt;, but files in other directories can also be loaded by their absolute path.&lt;br /&gt;
&lt;br /&gt;
Both Imperial CX1 and ARCHER2 use this tool, through which pre-compiled applications are offered.&lt;br /&gt;
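&lt;br /&gt;
A typical session looks like the sketch below (the module name is a placeholder - use &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; to see what is actually installed on your cluster):&lt;br /&gt;
&lt;br /&gt;
  ~$ module avail                 # list available modulefiles&lt;br /&gt;
  ~$ module load mpi/intel-2019   # load a module (placeholder name)&lt;br /&gt;
  ~$ module list                  # show currently loaded modules&lt;br /&gt;
  ~$ module rm mpi/intel-2019     # unload it again&lt;br /&gt;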
&lt;br /&gt;
=== The external coordinator: What is a batch system ===&lt;br /&gt;
&lt;br /&gt;
Always bear in mind that computational resources are limited, so you should request a reasonable amount for your job. Besides, the cluster also needs to account for your budget, coordinate jobs submitted by many users, and make the best use of the available resources. While a job is running, you may also want to check its status. All of this is handled by the batch system.&lt;br /&gt;
&lt;br /&gt;
In practice, a Linux shell script is needed. Parameters for the batch system are set in commented lines at the top of the file. After the user submits the script to the batch system, the system will:&lt;br /&gt;
&lt;br /&gt;
# Examine the parameters  &lt;br /&gt;
# Allocate and coordinate the requested resources  &lt;br /&gt;
# Set up the environment, such as environmental variables and package dependencies, and sync the same settings to all nodes&lt;br /&gt;
# Launch the parallel calculation - see the MPI section above&lt;br /&gt;
# Post-process&lt;br /&gt;
&lt;br /&gt;
Note that a &#039;walltime&#039; is usually required for a batch job, i.e., the maximum time the job is allowed to run. The job will be &#039;killed&#039; (terminated) once the walltime is exceeded, and the rest of the script will not be executed. The &amp;lt;code&amp;gt;timeout&amp;lt;/code&amp;gt; command can be used to set a separate time limit for a specific command within the script.&lt;br /&gt;
&lt;br /&gt;
Common batch systems include PBS and Slurm &amp;lt;ref&amp;gt;https://slurm.schedmd.com/overview.html&amp;lt;/ref&amp;gt;. The Imperial cluster CX1 and MMM Hub Young (managed by UCL) &amp;lt;ref&amp;gt;http://mmmhub.ac.uk/young/&amp;lt;/ref&amp;gt; use PBS; ARCHER2 and Tianhe-2 LvLiang (天河二号-吕梁) use Slurm. Tutorials on batch systems are not covered here, since they are heavily tailored to specific machines - modifications are usually made to improve efficiency. Refer to the specific user documentation for more information.&lt;br /&gt;
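&lt;br /&gt;
As an illustration, a minimal PBS job script for CX1 might look like the sketch below (the resource values, module name and executable are placeholders - check the RCS documentation for the exact directives accepted on CX1). It can then be submitted with &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; and monitored with &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #PBS -l select=1:ncpus=24:mem=64gb&lt;br /&gt;
  #PBS -l walltime=24:00:00&lt;br /&gt;
  module load mpi                          # placeholder module name&lt;br /&gt;
  cd ${PBS_O_WORKDIR}                      # directory the job was submitted from&lt;br /&gt;
  timeout 23h mpiexec ./my_parallel_code   # leave margin before the walltime for post-processing&lt;br /&gt;
  echo Job finished                        # trivial post-processing step&lt;br /&gt;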
&lt;br /&gt;
Successfully writing and submitting a batch job script means you no longer need this tutorial. Before you get there, the following considerations are important:&lt;br /&gt;
&lt;br /&gt;
* How large is my system? Is it efficient to use the resources I requested (note that it does not scale linearly with resources - refer to [https://tutorials.crystalsolutions.eu/tutorial.html?td=tuto_HPC&amp;amp;tf=tuto_hpc#scale this test] on CRYSTAL17)?  &lt;br /&gt;
* To which queue should I submit my job? Is my job too long for it, or is the queue not applicable or not available?&lt;br /&gt;
* Is it safe to use multi-threading?  &lt;br /&gt;
* Is it demanding in terms of memory, GPUs, etc.?  &lt;br /&gt;
* Roughly how long will it take?  &lt;br /&gt;
* What is my budget code? Do I have enough resources?  &lt;br /&gt;
* Which MPI release version is my code compatible with? Should I load a module or set variables?  &lt;br /&gt;
* Does my code need any other specific environmental setup?  &lt;br /&gt;
* Do I have a post-processing script to run after the MPI part is finished? How long does it take?&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814415</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814415"/>
		<updated>2023-10-16T16:32:32Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information about the [https://www.crystal.unito.it/index.html CRYSTAL] DFT code versions previously and currently used within the group on Imperial CX1 is collected on this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compiler : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich/4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, since they involve separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of the CX1 general job submission script is available in the group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission GitHub repo], though it is not needed for everyday use - it is useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good; to use a default value, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands, which make the login shell source &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; automatically:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. The number of nodes is decided automatically from the number of CPUs per node in the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., label, of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: new labels can be defined in the local &#039;settings&#039; file if a new MPI+executable in-line command is needed.&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel CRYSTAL23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This generates a &#039;mgo.qsub&#039; file requesting one node with a maximum running time of 1 hour. Similarly, after the job is done, the following command generates a parallel properties calculation on 12 CPUs based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
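&lt;br /&gt;
The generated &#039;.qsub&#039; file is a PBS script. Assuming it is not submitted automatically by the wrapper on your setup, it can be submitted with the standard PBS command &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; and monitored with &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ qsub mgo.qsub&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;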
&lt;br /&gt;
Alternatively, the user can integrate the two jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt must be specified the same number of times, while -ref must appear either zero times or the same number of times as -x. In the example above, &#039;no&#039; is a reserved keyword meaning no reference. Also note that specifying the -in, -wt or -ref flags more than once in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to an error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission documentation].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23: refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17 and that the &#039;MPP&#039; massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814414</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814414"/>
		<updated>2023-10-16T16:31:48Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Code Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information of [https://www.crystal.unito.it/index.html CRYSTAL] DFT code previously and currently used within the group on Imperial CX1 are collected in this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compilor : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of CX1 general job submission script is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use default vaules, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job sumbmission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed everytime you login. To aviod this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., lable, of MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Acutal in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [README https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compilor : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please be noted that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814413</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814413"/>
		<updated>2023-10-16T16:31:12Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information of [https://www.crystal.unito.it/index.html CRYSTAL] DFT code previously and currently used within the group on Imperial CX1 are collected in this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compilor : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of CX1 general job submission script is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use default vaules, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job sumbmission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed everytime you login. To aviod this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., lable, of MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Acutal in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compilor : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please be noted that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814412</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814412"/>
		<updated>2023-10-16T16:29:57Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Code Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information of [https://www.crystal.unito.it/index.html CRYSTAL] DFT code previously and currently used within the group on Imperial CX1 are collected in this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compilor : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of CX1 general job submission script is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use default vaules, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job sumbmission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed everytime you login. To aviod this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., lable, of MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Acutal in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compilor : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please be noted that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814411</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814411"/>
		<updated>2023-10-16T16:26:26Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Quick References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information of [https://www.crystal.unito.it/index.html CRYSTAL] DFT code previously and currently used within the group on Imperial CX1 are collected in this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compilor : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of CX1 general job submission script is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use default vaules, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job sumbmission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed everytime you login. To aviod this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial OMP CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial OMP CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., lable, of MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Acutal in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates a &#039;mgo.qsub&#039; file requesting for 1 node and the maximum running time is 1 hour. Similarly, after the job is done, using the following command can run a parallel properties calculation based on &#039;mgo-band.d3&#039; and data from the previous &#039;mgo&#039; SCF calculation on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub Page].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compilor : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please be noted that CRYSCOR (MP2 calculation) is not available for CRYSTAL17 and &#039;MPP&#039; as a massive parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814410</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814410"/>
		<updated>2023-10-16T16:25:47Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information of [https://www.crystal.unito.it/index.html CRYSTAL] DFT code previously and currently used within the group on Imperial CX1 are collected in this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compilor : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, which involves separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of CX1 general job submission script is available on group [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission], though it is not needed in practice. Useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. Typically the default values are good. To use default vaules, press enter. After configuration is finished, the information is stored in a &#039;settings&#039; file which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for job sumbmission scripts to refer to. It is also editable according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed everytime you login. To aviod this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias, i.e., lable, of MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: new labels can be defined in the local &#039;settings&#039; file if a new MPI+executable in-line command is needed.&lt;br /&gt;
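&lt;br /&gt;
Before adding a new label, the labels already defined can be checked by printing the local &#039;settings&#039; file with the command listed above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ SETcrys23&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;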
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command generates a parallel properties calculation based on &#039;mgo-band.d3&#039; and the data from the previous &#039;mgo&#039; SCF calculation, running on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
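In both cases a &#039;.qsub&#039; file named after the input is generated. If the helper script does not submit it automatically (behaviour may depend on your configuration), it can be submitted and monitored with the standard PBS commands, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ qsub mgo.qsub&lt;br /&gt;
~$ qstat&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;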
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in and -wt must be specified the same number of times, while -ref must appear either not at all or the same number of times as -x. In the case above, &#039;no&#039; is a reserved keyword meaning no reference. Note also that specifying multiple -in, -wt or -ref flags with &amp;lt;code&amp;gt;Pcrys23&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys23&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop23&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys23&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;Sprop23&amp;lt;/code&amp;gt; will lead to an error.&lt;br /&gt;
&lt;br /&gt;
In addition, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports MPI+executable command pairs defined by the user in the &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and the full keyword list of the settings file, please refer to the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub page].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039; (e.g., &amp;lt;code&amp;gt;Pcrys17&amp;lt;/code&amp;gt; instead of &amp;lt;code&amp;gt;Pcrys23&amp;lt;/code&amp;gt;). Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814409</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814409"/>
		<updated>2023-10-16T16:25:28Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information about the [https://www.crystal.unito.it/index.html CRYSTAL] DFT code versions previously and currently used within the group on Imperial CX1 is collected on this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compiler : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich/4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of the CX1 general job submission script is available in the group repository [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission], though it is not needed in practice; it is mainly useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. The default values are usually suitable; press Enter to accept them. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
  ~$ cat &amp;lt;&amp;lt; EOF &amp;gt; ~/.bash_profile&lt;br /&gt;
  if test -f ~/.bashrc; then&lt;br /&gt;
      source ~/.bashrc&lt;br /&gt;
  fi&lt;br /&gt;
  EOF&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias (label) of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command generates a parallel properties calculation based on &#039;mgo-band.d3&#039; and the data from the previous &#039;mgo&#039; SCF calculation, running on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub Page].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814408</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814408"/>
		<updated>2023-10-16T16:24:01Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information about the [https://www.crystal.unito.it/index.html CRYSTAL] DFT code versions previously and currently used within the group on Imperial CX1 is collected on this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compiler : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich/4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of the CX1 general job submission script is available in the group repository [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission], though it is not needed in practice; it is mainly useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. The default values are usually suitable; press Enter to accept them. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ cat &amp;lt;&amp;lt; EOF &amp;gt; ~/.bash_profile&lt;br /&gt;
  &lt;br /&gt;
if test -f ~/.bashrc; then&lt;br /&gt;
&lt;br /&gt;
    source ~/.bashrc&lt;br /&gt;
&lt;br /&gt;
fi&lt;br /&gt;
  &lt;br /&gt;
EOF  &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias (label) of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command generates a parallel properties calculation based on &#039;mgo-band.d3&#039; and the data from the previous &#039;mgo&#039; SCF calculation, running on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub Page].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814407</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814407"/>
		<updated>2023-10-16T16:23:41Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information about the [https://www.crystal.unito.it/index.html CRYSTAL] DFT code versions previously and currently used within the group on Imperial CX1 is collected on this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compiler : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich/4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of the CX1 general job submission script is available in the group repository [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission], though it is not needed in practice; it is mainly useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. The default values are usually suitable; press Enter to accept them. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ cat &amp;lt;&amp;lt; EOF &amp;gt; ~/.bash_profile  &lt;br /&gt;
if test -f ~/.bashrc; then  &lt;br /&gt;
    source ~/.bashrc  &lt;br /&gt;
fi  &lt;br /&gt;
EOF  &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias (label) of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command generates a parallel properties calculation based on &#039;mgo-band.d3&#039; and the data from the previous &#039;mgo&#039; SCF calculation, running on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub Page].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compiler : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814406</id>
		<title>CMSG disk and Shared Software on CX1</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=CMSG_disk_and_Shared_Software_on_CX1&amp;diff=814406"/>
		<updated>2023-10-16T16:22:36Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* CRYSTAL23 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic information about the [https://www.crystal.unito.it/index.html CRYSTAL] DFT code versions previously and currently used within the group on Imperial CX1 is collected on this page. Instructions on submitting jobs are included.&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL23 ==&lt;br /&gt;
* Version : 1.0.1&lt;br /&gt;
* Compiler : gcc/11.2.0 + aocl/4.0&lt;br /&gt;
* MPI : mpich/4.0.2, OMP turned on&lt;br /&gt;
* Note: MPPproperties and MP2 (CRYSCOR and CRYSTAL2) are currently not available, as they involve separate code packages (DMat2 and CRYSCOR)&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
&#039;&#039;Thanks to Mr. K Tallat-Kelpsa for testing&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Configuration ====&lt;br /&gt;
&lt;br /&gt;
The source code (written in bash) of the CX1 general job submission script is available in the group repository [https://github.com/cmsg-icl/HPC-job-submission/tree/main/Imperial-HPC-Job-Submission], though it is not needed in practice; it is mainly useful for developing new features. On CX1, use the following command to configure your local settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ bash /rds/general/project/cmsg/live/share/HPC-job-submission/Imperial-HPC-Job-Submission/CRYSTAL23/config_CRYSTAL23.sh&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the instructions on the screen. The default values are usually suitable; press Enter to accept them. After configuration is finished, the information is stored in a &#039;settings&#039; file, which is by default saved as &amp;lt;code&amp;gt;${HOME}/etc/runCRYSTAL23/settings&amp;lt;/code&amp;gt;. It functions as a dictionary for the job submission scripts to refer to and can be edited according to the user&#039;s needs.&lt;br /&gt;
&lt;br /&gt;
Use the following command to activate alias commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ source ~/.bashrc&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the settings of CX1, the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command should be executed every time you log in. To avoid this, try the following commands:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ cat &amp;lt;&amp;lt; EOF &amp;gt; ~/.bash_profile&lt;br /&gt;
if test -f ~/.bashrc; then&lt;br /&gt;
    source ~/.bashrc&lt;br /&gt;
fi&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Quick References ====&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available commands&lt;br /&gt;
|-&lt;br /&gt;
! Command    !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| Pcrys23    || Generate parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| MPPcrys23  || Generate massive parallel CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Pprop23    || Generate parallel CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Scrys23    || Generate serial CRYSTAL23 job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Sprop23    || Generate serial CRYSTAL23 properties job submission files&lt;br /&gt;
|-&lt;br /&gt;
| Xcrys23    || User-defined executables and multiple jobs (see below for code examples)&lt;br /&gt;
|-&lt;br /&gt;
| SETcrys23  || Print the local &#039;settings&#039; file&lt;br /&gt;
|-&lt;br /&gt;
| HELPcrys23 || Print the instructions of commands&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of in-line flags&lt;br /&gt;
|-&lt;br /&gt;
! Command !! Definition&lt;br /&gt;
|-&lt;br /&gt;
| -in     || Input .d12 or .d3 files&lt;br /&gt;
|-&lt;br /&gt;
| -nd     || Number of nodes requested. Number of CPUs per node will be read from the settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nc     || Number of CPUs requested. Number of nodes will be automatically decided by Number of CPUs per node from settings file&lt;br /&gt;
|-&lt;br /&gt;
| -nt     || Number of threads per process. If not set the default value is used.&lt;br /&gt;
|-&lt;br /&gt;
| -wt     || Walltime for the job. Or walltime for each job if multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -ref    || Optional, name of the reference file(s). No extension&lt;br /&gt;
|-&lt;br /&gt;
| -x      || Xcrys23 only. The alias (label) of an MPI+executable in-line command pair&lt;br /&gt;
|-&lt;br /&gt;
| -name   || Xcrys23 only. Set a common name for multiple jobs&lt;br /&gt;
|-&lt;br /&gt;
| -set    || Developer only. The path to local &#039;settings&#039; file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:auto&amp;quot;&lt;br /&gt;
|+ List of available labels for &#039;-x&#039; flag&lt;br /&gt;
|-&lt;br /&gt;
! Label   !! Actual in-line command &lt;br /&gt;
|-&lt;br /&gt;
| pcrys   || mpiexec Pcrystal       &lt;br /&gt;
|-&lt;br /&gt;
| mppcrys || mpiexec MPPcrystal     &lt;br /&gt;
|-&lt;br /&gt;
| pprop   || mpiexec Pproperties    &lt;br /&gt;
|-&lt;br /&gt;
| scrys   || Scrystal &amp;lt; INPUT        &lt;br /&gt;
|-&lt;br /&gt;
| sprop   || Sproperties &amp;lt; INPUT     &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Note: New labels can be defined in local &#039;settings&#039; file if a new MPI+executable in-line command is needed&lt;br /&gt;
&lt;br /&gt;
==== Code Examples ====&lt;br /&gt;
&lt;br /&gt;
To generate a qsub file for a parallel crystal23 job on &#039;mgo.d12&#039;, the following command can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pcrys23 -in mgo.d12 -wt 01:00 -nd 1&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That generates an &#039;mgo.qsub&#039; file requesting 1 node with a maximum running time of 1 hour. Similarly, after the job is done, the following command generates a parallel properties calculation based on &#039;mgo-band.d3&#039; and the data from the previous &#039;mgo&#039; SCF calculation, running on 12 CPUs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Pprop23 -in mgo-band.d3 -nc 12 -wt 00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, the user can also integrate 2 jobs into the same qsub file using &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -name mgo-band -nd 1 -x pcrys -in mgo.d12 -wt 01:00 -ref no -x pprop -in mgo-band.d3 -wt  00:30 -ref mgo&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that -x, -in, -wt should have the same lengths while -ref should be either 0 or the same length as -x. In the case above, &#039;no&#039; is a reserved keyword for no reference. It should be noted that specifying multiple -in, -wt and -ref flags in &amp;lt;code&amp;gt;Pcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;MPPcrys&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Pprop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Scrys&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;Sprop&amp;lt;/code&amp;gt; will lead to error.&lt;br /&gt;
&lt;br /&gt;
Besides, &amp;lt;code&amp;gt;Xcrys23&amp;lt;/code&amp;gt; also supports user-defined MPI+executable command pairs in &#039;settings&#039; file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ Xcrys23 -x pcrys_other -in mgo.d12 -wt 01:00 &lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more detailed instructions, test cases and keyword list of settings file, please refer to the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub Page].&lt;br /&gt;
&lt;br /&gt;
== CRYSTAL17 ==&lt;br /&gt;
* Version : 1.0.2&lt;br /&gt;
* Compilor : gcc/6.2.0&lt;br /&gt;
* MPI : mpich/3.4.3&lt;br /&gt;
* CPU Architecture : Intel Xeon&lt;br /&gt;
&lt;br /&gt;
=== Launch Job with CX1 General Job Submission Script ===&lt;br /&gt;
The configuration and usage are identical to CRYSTAL23. Please refer to the previous section and substitute &#039;23&#039; with &#039;17&#039;. Also, please note that CRYSCOR (MP2 calculations) is not available for CRYSTAL17, and &#039;MPP&#039; as a massively parallel strategy is limited to &#039;crystal&#039; calculations, i.e., only MPPcrystal is released.&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814402</id>
		<title>Nano Electrochemistry Group</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814402"/>
		<updated>2023-10-06T14:15:02Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;padding: 20px; background: #87adde; border: 1px solid #FFAA99; font-family: Trebuchet MS, sans-serif; font-size: 105%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This page provides a series of tutorials designed to help with the computational modelling of electrochemical systems; their aim is to provide general workflows and useful tips for modelling fundamental components and properties of electrochemical systems. The tutorials have been designed by the researchers of the Computational NanoElectrochemistry Group led by Dr Clotilde Cucinotta [link to group page] and collaborators. &lt;br /&gt;
&lt;br /&gt;
Several simulation packages (CP2K, LAMMPS, QuantumEspresso, etc.), as well as other tools, such as molecular visualisers or programming languages, are described in these tutorials; links to the relevant manuals are provided at the bottom of the page. &lt;br /&gt;
&lt;br /&gt;
Scripts and programs written by members of the research group are also described in each tutorial; these tools have been devised to help with running calculations and with data analysis, and can be found in the linked GitLab repository [https://gitlab.doc.ic.ac.uk/rgc]. &lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Compiling Codes and Running Calculations on a HPC cluster=&lt;br /&gt;
&lt;br /&gt;
===[[How to run on ARCHER 2]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Imperial CX1: Instructions and basic concepts of parallel computing]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A collection of useful resources and brief introductions to the basic concepts of parallel computing for beginners to use the high-performance computing service at Imperial.&lt;br /&gt;
&lt;br /&gt;
===[[Run CRYSTALs on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Instructions on how to submit a CRYSTAL job on CX1&lt;br /&gt;
&lt;br /&gt;
===[[Compile CP2Kv9.1 on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Modelling and Visualising Materials=&lt;br /&gt;
&lt;br /&gt;
==Interface and Adsorption Modelling==&lt;br /&gt;
&lt;br /&gt;
===[[Building structure with Pymatgen]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fei]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for generating crystal structure and surface with Python.&lt;br /&gt;
&lt;br /&gt;
===[[ASE and materials modelling]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Adsorption of molecule on surfaces|Adsorption of molecule on surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Paolo]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the adsorption energy of a molecule (or, more in general, any particle) over a specific surface.&lt;br /&gt;
&lt;br /&gt;
==Error Evaluation during Simulations==&lt;br /&gt;
&lt;br /&gt;
===[[Optimization of metallic surfaces parameters | CP2K: Optimizing parameters for metallic surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: Tutorials on how to define the appropriate set of parameters needed to model a metallic system: Basis set, CUTOFF and &#039;&#039;&#039;k&#039;&#039;&#039;-points grid;&lt;br /&gt;
:: Tutorials on how to calculate relevant quantities of metallic surfaces: work function, equilibrium lattice parameter and electronic structure;&lt;br /&gt;
: System: metallic surfaces (Platinum slab used as example);&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Hard_carbon | CP2K: Simulation of Hard Carbons]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Luke]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the simulation of hard carbon?&lt;br /&gt;
&lt;br /&gt;
===[[Convergence test of critical parameters by CRYSTAL | CRYSTAL: Convergence tests of critical parameters]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A tutorial for performing convergence tests with LCAO-GTO DFT code, CRYSTAL.&lt;br /&gt;
&lt;br /&gt;
===[[Memristors | Quantum Espresso: Simulation of Memristors]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Felix]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising parameters for memristors using QuantumEspresso.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Running Different Systems: Examples=&lt;br /&gt;
&lt;br /&gt;
===[[Dimers in gas phase|Dimers in gas phase]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fredrik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising dimers in the gas phase using Gaussian.&lt;br /&gt;
&lt;br /&gt;
===[[TrendsCatalyticActivity | Trends in catalytic Activity]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Clotilde Cucinotta]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for a computational experiment about trends in catalytic activity for hydrogen evolution. This experiment is part of the third year computational chemistry lab. &lt;br /&gt;
&lt;br /&gt;
==Postprocessing==&lt;br /&gt;
&lt;br /&gt;
===[[Analysing AIMD runs with MATLAB in-house suit|Analysing AIMD runs with MATLAB in-house suit]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Rashid]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Surface analysis===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Calculation of radial average|Calculation of radial average]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;[[Contributors| Kalman]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the radial average ?.&lt;br /&gt;
&lt;br /&gt;
==Machine Learning Potentials==&lt;br /&gt;
&lt;br /&gt;
===[[Building ML potentials with AML|Building ML potentials with AML]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Anthony]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for building ML potentials with AML.&lt;br /&gt;
&lt;br /&gt;
==Activation Barriers==&lt;br /&gt;
&lt;br /&gt;
===[[NEB Calculation]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Lammps and plumed | Metadynamics with Lammps and plumed]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Frederik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial on how to use the PLUMED software package to perform biased molecular dynamics simulations in LAMMPS.&lt;br /&gt;
&lt;br /&gt;
==Methodological developments==&lt;br /&gt;
&lt;br /&gt;
===[[Transport calculations using SMEAGOL]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: How to run CP2K+SMEAGOL and SIESTA+SMEAGOL calculations&lt;br /&gt;
:: How to exploit SMEAGOL parallelism&lt;br /&gt;
: System: Au nanojunctions&lt;br /&gt;
: Computational package: CP2K, SIESTA, SMEAGOL.&lt;br /&gt;
&lt;br /&gt;
===[[Converging magnetic systems in CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: MULTIPLICITY keyword to calculate magnetic systems&lt;br /&gt;
:: &amp;amp;BS section and MAGNETIZATION keyword to improve convergence&lt;br /&gt;
: System: Metallic bulk Ni and slab in vacuum&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Running a HP-DFT calculation with CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Margherita ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A tutorial to run a HP-DFT calculation using CP2K&lt;br /&gt;
&lt;br /&gt;
===[[Solving 1D Poisson equation |Solving 1D Poisson equation]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Remi Khatib ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the solution of the 1D Poisson equations given a distribution of point charges&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:20px; margin:auto&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Others=&lt;br /&gt;
&lt;br /&gt;
== Becoming an Efficient Research Scientist ==&lt;br /&gt;
&lt;br /&gt;
===[[Writing a Project Proposal]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Nicholas Harrison ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Tools==&lt;br /&gt;
&lt;br /&gt;
===[https://www.cp2k.org/about CP2K]===&lt;br /&gt;
* [[CP2K_Tutorial|CP2K TUTORIAL]];&lt;br /&gt;
* [https://github.com/cp2k/cp2k/blob/master/INSTALL.md Download and install CP2K ];&lt;br /&gt;
* [https://manual.cp2k.org/#gsc.tab=0 Manual];&lt;br /&gt;
* [https://www.cp2k.org/howto Useful HOWTOs];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.quantum-espresso.org/ QUANTUM ESPRESSO]===&lt;br /&gt;
* [https://www.quantum-espresso.org/download Download and install QUANTUM ESPRESSO];&lt;br /&gt;
* [https://www.quantum-espresso.org/resources/tutorials Useful Tutorials];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.lammps.org/ LAMMPS]===&lt;br /&gt;
* [https://www.lammps.org/download.html Download LAMMPS];&lt;br /&gt;
* [https://docs.lammps.org/Manual.html Manual];&lt;br /&gt;
* [https://www.lammps.org/tutorials.html Tutorials];&lt;br /&gt;
&lt;br /&gt;
===[https://www.crystal.unito.it/index.html CRYSTAL]===&lt;br /&gt;
* [https://tutorials.crystalsolutions.eu/ CRYSTAL Tutorial Project]&lt;br /&gt;
* [https://www.crystal.unito.it/basis_sets.html CRYSTAL basis set database] - Parameterised and tested for solid-state calculations&lt;br /&gt;
* [https://www.basissetexchange.org/ Basis Set Exchange] - Note that this site usually contains very diffuse basis sets for quantum chemistry, which might cause problems for solid-state calculations.&lt;br /&gt;
* [https://vallico.net/mike_towler/crystal.html Mike Towler&#039;s basis set] - Parameterised around early 2000s&lt;br /&gt;
* [https://crysplot.crystalsolutions.eu/ CRYSPLOT] - A web-based visualisation tool&lt;br /&gt;
* [https://crystal-code-tools.github.io/CRYSTALpytools/ CRYSTALpytools] - A python-based toolbox for CRYSTAL inputs and outputs.&lt;br /&gt;
More information is available in [https://www.crystal.unito.it/documentation.html CRYSTAL23 official site].&lt;br /&gt;
&lt;br /&gt;
===[https://www.tcd.ie/Physics/Smeagol/SmeagolAbout.htm Smeagol]===&lt;br /&gt;
&lt;br /&gt;
==Molecular visualizers==&lt;br /&gt;
* [http://www.ks.uiuc.edu/Research/vmd/ VMD]&lt;br /&gt;
* [http://www.xcrysden.org/ Xcrysden]&lt;br /&gt;
* [https://jp-minerals.org/vesta/en/ VESTA]&lt;br /&gt;
* [https://gitlab.com/bmgcsc/dl-visualize-v3 DLV3]&lt;br /&gt;
&lt;br /&gt;
==Useful programming languages and environments== &lt;br /&gt;
* [http://www-eio.upc.edu/lceio/manuals/Fortran95-manual.pdf Fortran]&lt;br /&gt;
* [https://docs.python.org/3/ Python]&lt;br /&gt;
* [https://www.anaconda.com/ Anaconda]&lt;br /&gt;
* [https://wiki.fysik.dtu.dk/ase/ ASE]&lt;br /&gt;
* [https://pymatgen.org/ Pymatgen]&lt;br /&gt;
* [https://phonopy.github.io/phonopy/ Phonopy]&lt;br /&gt;
&lt;br /&gt;
==Crystallography==&lt;br /&gt;
* [https://it.iucr.org/ International Crystallography Table]&lt;br /&gt;
* [https://www.cryst.ehu.es/#retrievaltop Bilbao Crystallographic Server]&lt;br /&gt;
* [https://www.ccdc.cam.ac.uk/structures/ Cambridge Database]&lt;br /&gt;
* [https://stokes.byu.edu/iso/findsym.php Find Symmetry Web Service]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://wiki.ch.ic.ac.uk/wiki/index.php?title=Main_Page info]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814401</id>
		<title>Nano Electrochemistry Group</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Nano_Electrochemistry_Group&amp;diff=814401"/>
		<updated>2023-10-06T13:56:19Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div style=&amp;quot;padding: 10px; background: #87adde; border: 1px solid #FFAA99; font-family: Trebuchet MS, sans-serif; font-size: 105%;&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:4px;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This page provides a series of tutorials designed to help with the computational modelling of electrochemical systems; their aim is to provide general workflows and useful tips for modelling fundamental components and properties of electrochemical systems. The tutorials have been designed by the researchers of the Computational NanoElectrochemistry Group led by Dr Clotilde Cucinotta [link to group page] and collaborators. &lt;br /&gt;
&lt;br /&gt;
Several simulation packages (CP2K, LAMMPS, QuantumEspresso, etc.), as well as other tools, such as molecular visualisers or programming languages, are described in these tutorials; links to the relevant manuals are provided at the bottom of the page. &lt;br /&gt;
&lt;br /&gt;
Scripts and programs written by members of the research group are also described in each tutorial; these tools have been devised to help with running calculations and with data analysis, and can be found in the linked GitLab repository [https://gitlab.doc.ic.ac.uk/rgc]. &lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:4px&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Compiling Codes and Running Calculations on a HPC cluster=&lt;br /&gt;
&lt;br /&gt;
===[[How to run on ARCHER 2]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Imperial CX1: Instructions and basic concepts of parallel computing]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A collection of useful resources and brief introductions to the basic concepts of parallel computing for beginners to use the high-performance computing service at Imperial.&lt;br /&gt;
&lt;br /&gt;
===[[Run CRYSTALs on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Instructions on how to submit a CRYSTAL job on CX1&lt;br /&gt;
&lt;br /&gt;
===[[Compile CP2Kv9.1 on Imperial CX1]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:4px&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Modelling and Visualising Materials=&lt;br /&gt;
&lt;br /&gt;
==Interface and Adsorption Modelling==&lt;br /&gt;
&lt;br /&gt;
===[[Building structure with Pymatgen]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fei]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for generating crystal structure and surface with Python.&lt;br /&gt;
&lt;br /&gt;
===[[ASE and materials modelling]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Adsorption of molecule on surfaces|Adsorption of molecule on surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Paolo]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the adsorption energy of a molecule (or, more in general, any particle) over a specific surface.&lt;br /&gt;
&lt;br /&gt;
==Error Evaluation during Simulations==&lt;br /&gt;
&lt;br /&gt;
===[[Optimization of metallic surfaces parameters | CP2K: Optimizing parameters for metallic surfaces]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Margherita]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: Tutorials on how to define the appropriate set of parameters needed to model a metallic system: Basis set, CUTOFF and &#039;&#039;&#039;k&#039;&#039;&#039;-points grid;&lt;br /&gt;
:: Tutorials on how to calculate relevant quantities of metallic surfaces: work function, equilibrium lattice parameter and electronic structure;&lt;br /&gt;
: System: metallic surfaces (Platinum slab used as example);&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Hard_carbon | CP2K: Simulation of Hard Carbons]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Luke]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the simulation of hard carbons.&lt;br /&gt;
&lt;br /&gt;
===[[Convergence test of critical parameters by CRYSTAL | CRYSTAL: Convergence tests of critical parameters]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Huanyu]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A tutorial for performing convergence tests with LCAO-GTO DFT code, CRYSTAL.&lt;br /&gt;
&lt;br /&gt;
===[[Memristors | Quantum Espresso: Simulation of Memristors]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Felix]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising parameters for memristors using QuantumEspresso.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:4px&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Running Different Systems: Examples=&lt;br /&gt;
&lt;br /&gt;
===[[Dimers in gas phase|Dimers in gas phase]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Fredrik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for optimising dimers in the gas phase using Gaussian.&lt;br /&gt;
&lt;br /&gt;
===[[TrendsCatalyticActivity | Trends in catalytic Activity]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Clotilde Cucinotta]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for a computational experiment about trends in catalytic activity for hydrogen evolution. This experiment is part of the third year computational chemistry lab. &lt;br /&gt;
&lt;br /&gt;
==Postprocessing==&lt;br /&gt;
&lt;br /&gt;
===[[Analysing AIMD runs with MATLAB in-house suit|Analysing AIMD runs with the MATLAB in-house suite]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Rashid]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Surface analysis===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Songyuan]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===[[Calculation of radial average|Calculation of radial average]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Kalman]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for calculating the radial average.&lt;br /&gt;
&lt;br /&gt;
==Machine Learning Potentials==&lt;br /&gt;
&lt;br /&gt;
===[[Building ML potentials with AML|Building ML potentials with AML]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors | Anthony]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for building ML potentials with AML.&lt;br /&gt;
&lt;br /&gt;
==Activation Barriers==&lt;br /&gt;
&lt;br /&gt;
===[[NEB Calculation]]===&lt;br /&gt;
: Currently left blank&lt;br /&gt;
&lt;br /&gt;
===[[Lammps and plumed | Metadynamics with Lammps and plumed]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Frederik]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial on how to use the PLUMED software package to perform biased molecular dynamics simulations in LAMMPS.&lt;br /&gt;
&lt;br /&gt;
==Methodological developments==&lt;br /&gt;
&lt;br /&gt;
===[[Transport calculations using SMEAGOL]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: How to run CP2K+SMEAGOL and SIESTA+SMEAGOL calculations&lt;br /&gt;
:: How to exploit SMEAGOL parallelism&lt;br /&gt;
: System: Au nanojunctions&lt;br /&gt;
: Computational package: CP2K, SIESTA, SMEAGOL.&lt;br /&gt;
&lt;br /&gt;
===[[Converging magnetic systems in CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Chris]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Contents:&lt;br /&gt;
:: MULTIPLICITY keyword to calculate magnetic systems&lt;br /&gt;
:: &amp;amp;BS section and MAGNETIZATION keyword to improve convergence&lt;br /&gt;
: System: Metallic bulk Ni and slab in vacuum&lt;br /&gt;
: Computational package: CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Running a HP-DFT calculation with CP2K]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Margherita ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: A tutorial to run an HP-DFT calculation using CP2K.&lt;br /&gt;
&lt;br /&gt;
===[[Solving 1D Poisson equation |Solving 1D Poisson equation]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Remi Khatib ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
: Tutorial for the solution of the 1D Poisson equation given a distribution of point charges.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;border:1px solid #90C0FF; background:#ffffff; width:98%; padding:4px&amp;quot;&amp;gt;&lt;br /&gt;
=Others=&lt;br /&gt;
&lt;br /&gt;
== Becoming an Efficient Research Scientist ==&lt;br /&gt;
&lt;br /&gt;
===[[Writing a Project Proposal]]===&lt;br /&gt;
: &amp;lt;small&amp;gt;&#039;&#039;by [[Contributors| Nicholas Harrison ]]&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Computational Tools==&lt;br /&gt;
&lt;br /&gt;
===[https://www.cp2k.org/about CP2K]===&lt;br /&gt;
* [[CP2K_Tutorial|CP2K TUTORIAL]];&lt;br /&gt;
* [https://github.com/cp2k/cp2k/blob/master/INSTALL.md Download and install CP2K ];&lt;br /&gt;
* [https://manual.cp2k.org/#gsc.tab=0 Manual];&lt;br /&gt;
* [https://www.cp2k.org/howto Useful HOWTOs];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.quantum-espresso.org/ QUANTUM ESPRESSO]===&lt;br /&gt;
* [https://www.quantum-espresso.org/download Download and install QUANTUM ESPRESSO];&lt;br /&gt;
* [https://www.quantum-espresso.org/resources/tutorials Useful Tutorials];&lt;br /&gt;
* Reading inputs and outputs (commented files and examples);&lt;br /&gt;
&lt;br /&gt;
===[https://www.lammps.org/ LAMMPS]===&lt;br /&gt;
* [https://www.lammps.org/download.html Download LAMMPS];&lt;br /&gt;
* [https://docs.lammps.org/Manual.html Manual];&lt;br /&gt;
* [https://www.lammps.org/tutorials.html Tutorials];&lt;br /&gt;
&lt;br /&gt;
===[https://www.crystal.unito.it/index.html CRYSTAL]===&lt;br /&gt;
* [https://tutorials.crystalsolutions.eu/ CRYSTAL Tutorial Project]&lt;br /&gt;
* [https://www.crystal.unito.it/basis_sets.html CRYSTAL basis set database] - Parameterised and tested for solid state calculations&lt;br /&gt;
* [https://www.basissetexchange.org/ Basis Set Exchange] - Note that this site usually contains very diffuse basis sets for quantum chemistry, which might cause problems for solid state calculations.&lt;br /&gt;
* [https://vallico.net/mike_towler/crystal.html Mike Towler&#039;s basis set] - Parameterised around the early 2000s&lt;br /&gt;
* [https://crysplot.crystalsolutions.eu/ CRYSPLOT] - A web-based visualisation tool&lt;br /&gt;
* [https://crystal-code-tools.github.io/CRYSTALpytools/ CRYSTALpytools] - A python-based toolbox for CRYSTAL inputs and outputs.&lt;br /&gt;
More information is available on the [https://www.crystal.unito.it/documentation.html CRYSTAL23 official site].&lt;br /&gt;
&lt;br /&gt;
===[https://www.tcd.ie/Physics/Smeagol/SmeagolAbout.htm Smeagol]===&lt;br /&gt;
&lt;br /&gt;
==Molecular visualizers==&lt;br /&gt;
* [http://www.ks.uiuc.edu/Research/vmd/ VMD]&lt;br /&gt;
* [http://www.xcrysden.org/ Xcrysden]&lt;br /&gt;
* [https://jp-minerals.org/vesta/en/ VESTA]&lt;br /&gt;
* [https://gitlab.com/bmgcsc/dl-visualize-v3 DLV3]&lt;br /&gt;
&lt;br /&gt;
==Useful programming languages and environments== &lt;br /&gt;
* [http://www-eio.upc.edu/lceio/manuals/Fortran95-manual.pdf Fortran]&lt;br /&gt;
* [https://docs.python.org/3/ Python]&lt;br /&gt;
* [https://www.anaconda.com/ Anaconda]&lt;br /&gt;
* [https://wiki.fysik.dtu.dk/ase/ ASE]&lt;br /&gt;
* [https://pymatgen.org/ Pymatgen]&lt;br /&gt;
* [https://phonopy.github.io/phonopy/ Phonopy]&lt;br /&gt;
&lt;br /&gt;
==Crystallography==&lt;br /&gt;
* [https://it.iucr.org/ International Tables for Crystallography]&lt;br /&gt;
* [https://www.cryst.ehu.es/#retrievaltop Bilbao Crystallographic Server]&lt;br /&gt;
* [https://www.ccdc.cam.ac.uk/structures/ Cambridge Database]&lt;br /&gt;
* [https://stokes.byu.edu/iso/findsym.php Find Symmetry Web Service]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[https://wiki.ch.ic.ac.uk/wiki/index.php?title=Main_Page info]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
	<entry>
		<id>https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814336</id>
		<title>Imperial CX1: Instructions and basic concepts of parallel computing</title>
		<link rel="alternate" type="text/html" href="https://chemwiki.ch.ic.ac.uk/index.php?title=Imperial_CX1:_Instructions_and_basic_concepts_of_parallel_computing&amp;diff=814336"/>
		<updated>2023-06-09T10:42:28Z</updated>

		<summary type="html">&lt;p&gt;Hz1420: /* Usage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This tutorial is divided into 2 separate sections. In the fist section, introductions and available resources of CX1 are listed and classified. Since the [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/ Research Computing Service (RCS)] team already developed great tutorials on their webpages, this part functions as a guide towards RCS webpages with necessary supplementary comments. In the second section, basic concepts of parallel computing and explanations of important terms are introduced. The main focus of this section is helping beginners to understand how high-performance computers (HPC) works on the basis of their daily practise. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This tutorial was initially written between Feb. and Mar. 2022 to be shared within the group for induction and training proposes &amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Connect-to-the-Imperial-Cluster/&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;https://spica-vir.github.io/posts/Structure-and-usage-of-clusters/&amp;lt;/ref&amp;gt;. Special thanks to Mr K. Tallat-Kelpsa, Ms A. Arber, Dr G. Mallia and Prof N. M. Harrison.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Introduction to CX1 ==&lt;br /&gt;
CX1 is the old name of the first HPC that served the whole college. New facilities (known as CX2) were gradually installed and integrated with the old system (CX3, a rather short-lived domain), while CX1 remains to be the most popular name that generally referring to the college-owned clusters. To grant a student access to CX1, the group PI can, on behave of that student, ask RCS team to add the specified account into HPC active user mailing list.&lt;br /&gt;
=== Connect to CX1 ===&lt;br /&gt;
CX1 is typically accessed via ssh (secured shell). Linux command line (Linux &amp;amp; MacOS) / sub-system (Windows 10,11) &amp;lt;ref&amp;gt;https://learn.microsoft.com/en-us/windows/wsl/install&amp;lt;/ref&amp;gt; / SSH client (such as XShell &amp;lt;ref&amp;gt;https://www.xshell.com/en/xshell/&amp;lt;/ref&amp;gt;) can be used. VPN is needed for off-campus users. &lt;br /&gt;
&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/connect-communicate/remote-access/virtual-private-network-vpn/ Step-by-step guide to setup VPN]&lt;br /&gt;
* [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/getting-started/using-ssh/ How to use ssh]&lt;br /&gt;
&lt;br /&gt;
In a Linux command line, use the following command to connect to CX1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh -XY username@login.hpc.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
P.S. The &amp;lt;code&amp;gt;-XY&amp;lt;/code&amp;gt; option can be omitted in most cases, if you do not need a GUI to run the program.&lt;br /&gt;
&lt;br /&gt;
Alternatively, when the VPN service is unstable or unavailable, it is possible to go through the SSH gateway of the cluster, which acts as a jump host. Connect to the gateway first, then type the previous command in the gateway&#039;s command line to reach CX1:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh username@sshgw.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
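Recent OpenSSH clients can also jump through the gateway in a single command with the &amp;lt;code&amp;gt;-J&amp;lt;/code&amp;gt; (ProxyJump) option. A minimal sketch, assuming the same gateway and login hosts as above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ssh -J username@sshgw.ic.ac.uk username@login.hpc.ic.ac.uk&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;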
Use the &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; command to upload / download files; its syntax combines those of the &amp;lt;code&amp;gt;ssh&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;cp&amp;lt;/code&amp;gt; commands. For example, to upload a file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp /local/path/file_name username@login.hpc.ic.ac.uk:/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
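To download, the remote path comes first and the local destination second (the paths here are placeholders, mirroring the upload example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ scp username@login.hpc.ic.ac.uk:/path/file_name /local/path/file_name&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;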
=== Usage ===&lt;br /&gt;
The [https://wiki.imperial.ac.uk/pages/viewpage.action?spaceKey=HPC&amp;amp;title=High+Performance+Computing RCS Wiki Page] contains the information needed. [https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/get-support/contact-us/ The support page], [https://wiki.imperial.ac.uk/display/HPC/Attend+a+clinic online clinic] and [https://wiki.imperial.ac.uk/display/HPC/Courses courses from the Graduate School] are available. To examine the status of CX1, use the [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== Environmental Variables and Disk Space ====&lt;br /&gt;
Use &amp;lt;code&amp;gt;env&amp;lt;/code&amp;gt; to list all the environmental variables - be careful, the output is &#039;&#039;&#039;HUGE&#039;&#039;&#039;. Some useful environmental variables:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;${USER}&amp;lt;/code&amp;gt; The user&#039;s college account, i.e., login credential.&lt;br /&gt;
* &amp;lt;code&amp;gt;${HOME}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/home&#039;, or &#039;~&#039;, which has 1 TB of backed-up disk space. &lt;br /&gt;
* &amp;lt;code&amp;gt;${EPHEMERAL}&amp;lt;/code&amp;gt; &#039;/rds/general/user/${USER}/ephemeral&#039; Temporary, unlimited disk space where files last for 30 days. Suitable for running calculations.&lt;br /&gt;
* &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; Paths to executables can be appended to it for quick access. The Environment Modules package (see below) can do that automatically.&lt;br /&gt;
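&lt;br /&gt;
Since the full &amp;lt;code&amp;gt;env&amp;lt;/code&amp;gt; output is long, it can be filtered for a single variable, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ env | grep EPHEMERAL&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;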
&lt;br /&gt;
==== Software Management ====&lt;br /&gt;
&lt;br /&gt;
The Environment Modules&amp;lt;ref&amp;gt;https://modules.readthedocs.io/en/latest/&amp;lt;/ref&amp;gt; package is implemented on CX1 to manage computing software (see the following section for an introduction). Basic commands are listed below:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; List the available modules&lt;br /&gt;
* &amp;lt;code&amp;gt;module load mod_name&amp;lt;/code&amp;gt; Load a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module rm mod_name&amp;lt;/code&amp;gt; Remove a specific module, &#039;mod_name&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;module list&amp;lt;/code&amp;gt; List all the loaded modules in the current environment&lt;br /&gt;
* &amp;lt;code&amp;gt;module help mod_name&amp;lt;/code&amp;gt; Check the instructions of the module &#039;mod_name&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: There is a CRYSTAL14 module in the list. For users in NMH&#039;s group, the latest CRYSTAL edition is available, so do not use that module.&#039;&#039;&lt;br /&gt;
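&lt;br /&gt;
A minimal sketch of a typical session (the module name below is hypothetical; check &amp;lt;code&amp;gt;module avail&amp;lt;/code&amp;gt; for the real names on CX1):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ module avail&lt;br /&gt;
~$ module load my_application/1.0&lt;br /&gt;
~$ module list&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;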
&lt;br /&gt;
==== Job Partition Guide ==== &lt;br /&gt;
A hierarchy of jobs is designed for the optimal efficiency of CX1. The current job partition guide is available on the [https://wiki.imperial.ac.uk/display/HPC/New+Job+sizing+guidance RCS Wiki Page].&lt;br /&gt;
&lt;br /&gt;
==== Batch System ====&lt;br /&gt;
&lt;br /&gt;
The PBS batch system &amp;lt;ref&amp;gt;https://en.wikipedia.org/wiki/Portable_Batch_System&amp;lt;/ref&amp;gt; is used on CX1 (see the following section for the meaning of batch system). Basic commands of PBS are listed below:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;availability&amp;lt;/code&amp;gt; Check the availability of computational resources  &lt;br /&gt;
* &amp;lt;code&amp;gt;qsub filename.qsub&amp;lt;/code&amp;gt; Submit the job &#039;filename&#039;&lt;br /&gt;
* &amp;lt;code&amp;gt;qstat&amp;lt;/code&amp;gt; Check the state of submitted jobs&lt;br /&gt;
* &amp;lt;code&amp;gt;qdel jobID&amp;lt;/code&amp;gt; Kill the process with the ID number &#039;jobID&#039;&lt;br /&gt;
&lt;br /&gt;
To examine the queue status across the whole system, use [https://status.rcs.imperial.ac.uk/d/u0zkcYQ7z/rcs-status?orgId=2&amp;amp;refresh=1m RCS status page].&lt;br /&gt;
&lt;br /&gt;
==== A General Job Submission Script ====&lt;br /&gt;
A general job submission script for CX1 has been developed by the author. See the [https://github.com/cmsg-icl/crystal_shape_control/tree/main/Imperial-HPC-Job-Submission GitHub repository] of CMSG for details. Parameterised software includes CRYSTAL17/23, Quantum Espresso 7, LAMMPS, GROMACS and GULP6.&lt;br /&gt;
&lt;br /&gt;
== Basic Concepts of Parallel Computing ==&lt;br /&gt;
A brief introduction to parallel computing is given in this section, taking CX1, a medium-sized general-purpose cluster, as an example.&lt;br /&gt;
&lt;br /&gt;
=== Divide a job: Nodes, Processors and Threads ===&lt;br /&gt;
&lt;br /&gt;
;Node&lt;br /&gt;
:A group of CPUs, possibly together with GPUs / coprocessors for acceleration. Memory and input files are shared by the processors in the same node, so a node can be considered an independent computer. Communication between nodes is achieved over an ultra-fast network, which is the bottleneck of modern clusters. &lt;br /&gt;
&lt;br /&gt;
;Processor&lt;br /&gt;
:The unit that executes a &#039;process&#039;, also known as the &#039;central processing unit&#039;, or CPU. Processors in the same node communicate via shared memory. &lt;br /&gt;
&lt;br /&gt;
;Thread&lt;br /&gt;
:A subdivision of a process. Multiple threads in the same process share the resources allocated to that process. &lt;br /&gt;
&lt;br /&gt;
The figure on the right hand side illustrates the hierarchy of node, processor, and thread:&lt;br /&gt;
&lt;br /&gt;
[[File:Job_Partition.png|450px|right|Job Partition]]&lt;br /&gt;
&lt;br /&gt;
==== Multiple processes vs multiple threads ====&lt;br /&gt;
&lt;br /&gt;
From the figure above, it is not difficult to distinguish a &#039;process&#039; from a &#039;thread&#039;: a process is the smallest unit of resource allocation, while a thread is part of a process. The idea of a &#039;thread&#039; was introduced to address the huge difference in speed between CPU and RAM. The CPU is always several orders of magnitude faster than RAM, so typically the bottleneck of a process is loading the required environment from RAM rather than the computations in the CPU. By using multiple threads in the same process, various branches of the same program can be executed simultaneously, so the shared environment does not need to be read from RAM multiple times, and the loading overhead for threads is much smaller than for processes. &lt;br /&gt;
&lt;br /&gt;
However, multithreading is not always advantageous. A technical prerequisite is that the program must be developed with multithreading in mind. Python, for example, is effectively a pseudo-multithreaded language (because of its global interpreter lock), while Java supports real multithreading. Sometimes multithreading can lead to catastrophic results: since threads share the same resource allocation (CPU, RAM, I/O, etc.), when one thread fails, the whole process fails as well. With multiple processes, by contrast, the other processes are protected if one process fails. &lt;br /&gt;
&lt;br /&gt;
In practice, users can run each process either in serial (i.e., number of threads = 1) or in parallel (i.e., number of threads &amp;gt; 1) on clusters. However, &#039;&#039;&#039;the former is recommended&#039;&#039;&#039;, because its resource management is more robust. Besides the problem mentioned above, multithreading can lead to issues such as memory leaks when the program was not developed for multithreading or depends on unsuitable libraries (here is [https://docs.archer2.ac.uk/known-issues/#oom-due-to-memory-leak-in-libfabric-added-2022-02-23 a famous issue] with libfabric on ARCHER2, identified in early 2022).&lt;br /&gt;
&lt;br /&gt;
==== More nodes vs more CPUs ====&lt;br /&gt;
&lt;br /&gt;
When the allocated memory permits, using more CPUs/processes per node is, in my experience, usually the better choice, considering that nodes have independent memory spaces and inter-node communication goes over the network. It almost always takes longer to coordinate nodes than to coordinate processors within the same node.&lt;br /&gt;
&lt;br /&gt;
=== The internal coordinator: What is MPI ===&lt;br /&gt;
&lt;br /&gt;
The Message Passing Interface, or MPI, is a standard for communicating and transferring data between processes, including across nodes and therefore across distributed memories. It is used via MPI libraries. The most popular implementations include: &lt;br /&gt;
&lt;br /&gt;
* MPICH &amp;lt;ref&amp;gt;https://www.mpich.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* Intel MPI &amp;lt;ref&amp;gt;https://www.intel.com/content/www/us/en/developer/tools/oneapi/mpi-library.html#gs.xld8oa&amp;lt;/ref&amp;gt; - a popular MPICH-based implementation, especially optimised for Intel CPUs&lt;br /&gt;
* OpenMPI &amp;lt;ref&amp;gt;https://www.open-mpi.org/&amp;lt;/ref&amp;gt; - an open-source library&lt;br /&gt;
* OpenMP &amp;lt;ref&amp;gt;https://www.openmp.org/&amp;lt;/ref&amp;gt; - not an MPI implementation; shared-memory parallelisation, so it works only within a single node; used for multithreading&lt;br /&gt;
&lt;br /&gt;
In practice, hybrid parallelisation combining MPI and OpenMP to run multithreaded jobs on a cluster is allowed, though sometimes not recommended. The first process (a process, not a node or a processor) is usually dedicated to I/O, and the rest are used for parallel computing.&lt;br /&gt;
&lt;br /&gt;
So far, MPI only defines bindings for C/C++ and FORTRAN, which explains why most parallel computing software is based on these languages. To launch an executable in parallel, one should use &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt;.&lt;br /&gt;
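&lt;br /&gt;
For example, a pure-MPI run on 24 processes could be launched as follows (the executable name is a placeholder, and the exact flags depend on the MPI library and the batch system):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export OMP_NUM_THREADS=1&lt;br /&gt;
~$ mpiexec -n 24 ./app.x&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;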
&lt;br /&gt;
=== Secure your storage: Work directory and home directory ===&lt;br /&gt;
&lt;br /&gt;
Almost all modern clusters have separate disk spaces for different purposes, namely a work directory and a home directory. This again originates from the famous speed gap between the CPU and RAM/disk storage. Two distinct kinds of disks are used, respectively to improve the overall efficiency and to secure important data:&lt;br /&gt;
&lt;br /&gt;
* For the work directory, large, fast disks are used. Data stored in the work directory is usually not backed up and, in the case of CX1, will be automatically cleaned after a fixed period.  &lt;br /&gt;
* For the home directory, mechanical disks with slower read/write speeds but better robustness are used. Files in the home space are usually backed up.&lt;br /&gt;
&lt;br /&gt;
For large clusters like ARCHER2 &amp;lt;ref&amp;gt;https://www.archer2.ac.uk/&amp;lt;/ref&amp;gt;, the work directory and the home directory are completely separated, i.e., the home directory is visible only to login nodes, while the work directory is visible to both compute and login nodes, and job submission from the home directory is prohibited. For more flexible clusters like Imperial CX1, submitting jobs from the home directory and access to the home directory from compute nodes are allowed, but storing temporary files generated during a calculation in the home directory is still not recommended, because of the potential impact on other files and the reduced overall efficiency.&lt;br /&gt;
&lt;br /&gt;
=== Setup your environment: What does an application need? ===&lt;br /&gt;
&lt;br /&gt;
==== Executable ==== &lt;br /&gt;
Binary executables should, theoretically, all be stored in &#039;/usr/bin&#039;. This never happens in practice, unless you are a fanatical fundamentalist of the early Linux releases. To guide your system to the desired executable, you can either laboriously type its absolute path every time you need it, or add its directory to the &amp;lt;code&amp;gt;${PATH}&amp;lt;/code&amp;gt; environmental variable:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export PATH=${PATH}:path_to_bin&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running any executable in parallel requires MPI to coordinate all the processes/threads, so the path to the MPI executable is also required. Besides, many scientific codes require other specific environmental variables, for example pointing to linear algebra packages. Read their documentation for further information.&lt;br /&gt;
&lt;br /&gt;
==== .lib/.a/.o files ==== &lt;br /&gt;
&lt;br /&gt;
When writing a script, you might need some extra packages to do more complex jobs. Those packages are developed by experts in computer science and can be called with a single line of code. The same thing happened when applications like CRYSTAL and ONETEP were developed. &lt;br /&gt;
&lt;br /&gt;
However, scientific computing codes are usually distributed in the form of source code. Source code in FORTRAN/C/C++ needs to be compiled into a binary executable, and there are two options for linking the libraries during compilation:&lt;br /&gt;
&lt;br /&gt;
# Include the whole package whenever one of its functions is called, also known as a &#039;static lib&#039;.&lt;br /&gt;
# Only include a &#039;table of contents&#039; when compiling, also known as a &#039;dynamic lib&#039;. The packages needed are stored separately in &#039;.dll/.so&#039; files, making it possible for multiple applications to share the same lib.&lt;br /&gt;
&lt;br /&gt;
Details about compilation are beyond the scope of this post. The point is: when running a dynamically linked application, the code must be told where to find the libs it needs. This can be specified by: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:path_to_lib&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For statically linked applications you usually need not worry about this - though the size of the compiled executable might make you wonder whether there is an alternative.&lt;br /&gt;
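&lt;br /&gt;
To check which shared libraries a dynamically linked executable actually resolves (the path below is a placeholder), the standard &amp;lt;code&amp;gt;ldd&amp;lt;/code&amp;gt; tool can be used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ ldd /path/to/app.x&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;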
&lt;br /&gt;
==== Conflicts ====&lt;br /&gt;
&lt;br /&gt;
Improper previous settings may cause the wrong application, or the wrong version, to be picked up when multiple applications with similar functions are installed on the system, such as the Intel compilers and GCC, or OpenMPI and MPICH - a common situation on shared computing resources. To avoid this, the path to the undesired application or lib should be removed from the environmental variables.&lt;br /&gt;
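&lt;br /&gt;
To check which copy of a command is currently picked up, and in which order directories are searched, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
~$ which mpirun&lt;br /&gt;
~$ echo ${PATH}&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;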
&lt;br /&gt;
==== Environment Modules ====&lt;br /&gt;
&lt;br /&gt;
Environment Modules &amp;lt;ref&amp;gt;http://modules.sourceforge.net/&amp;lt;/ref&amp;gt; is a popular tool for managing the necessary environmental setup and the conflicts of each application. It can easily add or remove environmental variables through commands (such as &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;module rm&amp;lt;/code&amp;gt;) and modulefiles written in the Tool Command Language (TCL)&amp;lt;ref&amp;gt;https://www.tcl.tk/&amp;lt;/ref&amp;gt;. The default directory of modulefiles is given in the environmental variable &amp;lt;code&amp;gt;${MODULEPATH}&amp;lt;/code&amp;gt;, but files in other directories can also be loaded by their absolute path.&lt;br /&gt;
&lt;br /&gt;
Both Imperial CX1 and ARCHER2 adopt this tool, through which pre-compiled applications are offered.&lt;br /&gt;
&lt;br /&gt;
=== The external coordinator: What is a batch system ===&lt;br /&gt;
&lt;br /&gt;
Always bear in mind that computational resources are limited, so you need to request a reasonable amount of resources for your job. Besides, the cluster also needs to account for your budget, coordinate jobs submitted by various users, and make the best use of the available resources. While a job is running, you may also want to check its status. All of this is handled by the batch system.&lt;br /&gt;
&lt;br /&gt;
In practice, a Linux shell script is needed; parameters for the batch system are set in commented lines at the top of the file (a minimal example is sketched further below). After the user submits the script to the batch system, the system will:&lt;br /&gt;
&lt;br /&gt;
# Examine the parameters  &lt;br /&gt;
# Allocate and coordinate the requested resources  &lt;br /&gt;
# Set up the environment, e.g. environmental variables and package dependencies, and sync the same settings to all nodes&lt;br /&gt;
# Launch a parallel calculation - see the MPI section above&lt;br /&gt;
# Post-process&lt;br /&gt;
&lt;br /&gt;
Note that a &#039;walltime&#039; is usually required for a batch job, i.e., the maximum time the job is allowed to run. The job will be &#039;killed&#039;, or suspended, when the elapsed time exceeds the walltime, and the remaining part of the script will not be executed. The &amp;lt;code&amp;gt;timeout&amp;lt;/code&amp;gt; command can be used to set a separate time limit for a specific command.&lt;br /&gt;
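&lt;br /&gt;
A minimal sketch of a PBS job script (the resource figures, module name, executable and file names are placeholders; consult the RCS job sizing guidance for allowed resource combinations):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -N my_job&lt;br /&gt;
#PBS -l walltime=24:00:00&lt;br /&gt;
#PBS -l select=1:ncpus=24:mem=100gb&lt;br /&gt;
&lt;br /&gt;
# move to the directory the job was submitted from&lt;br /&gt;
cd ${PBS_O_WORKDIR}&lt;br /&gt;
# set up the environment, then launch the parallel run&lt;br /&gt;
module load my_application/1.0&lt;br /&gt;
mpiexec ./app.x &amp;lt; input.txt &amp;gt; output.txt&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Such a script would then be submitted with &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt;, as shown in the Batch System section above.&lt;br /&gt;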
&lt;br /&gt;
Common batch systems include PBS and Slurm &amp;lt;ref&amp;gt;https://slurm.schedmd.com/overview.html&amp;lt;/ref&amp;gt;. The Imperial cluster CX1 and the MMM Hub Young (managed by UCL) &amp;lt;ref&amp;gt;http://mmmhub.ac.uk/young/&amp;lt;/ref&amp;gt; use PBS; ARCHER2 and Tianhe-2 LvLiang (天河二号-吕梁) use Slurm. Tutorials on batch systems are not covered here, since they are heavily tailored to specific machines - usually modifications are made to enhance efficiency. Refer to the specific user documentation for more information.&lt;br /&gt;
&lt;br /&gt;
Successfully writing and submitting a batch job script means that you no longer need this tutorial. Before getting there, some considerations are important:&lt;br /&gt;
&lt;br /&gt;
* How large is my system? Is it efficient to use the resources I requested? (Note that scaling is not linear... Refer to [https://tutorials.crystalsolutions.eu/tutorial.html?td=tuto_HPC&amp;amp;tf=tuto_hpc#scale this test] on CRYSTAL17.)  &lt;br /&gt;
* To which queue should I submit my job? Is it too long/not applicable/not available?&lt;br /&gt;
* Is it safe to use multi-threading?  &lt;br /&gt;
* Is it memory, GPU etc. demanding?  &lt;br /&gt;
* Roughly how long will it take?  &lt;br /&gt;
* What is my budget code? Do I have enough resources?  &lt;br /&gt;
* Which MPI release version is my code compatible with? Should I load a module or set variables?  &lt;br /&gt;
* Does my code need any other specific environmental setup?  &lt;br /&gt;
* Do I have any post-processing scripts to run after the MPI part finishes? How long do they take?&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;/div&gt;</summary>
		<author><name>Hz1420</name></author>
	</entry>
</feed>