IC has two centrally managed HPC systems: the PC cluster CX1 and a small Silicon Graphics machine, AX1.
We mainly use CX1, as it has many more accessible nodes and processors. We also own two nodes with a dedicated batch queue (pqmb) for the group, used mainly for short test calculations. The general chemistry queue is pqchem.

More details here: [http://www.hpc.ic.ac.uk high performance computing]

Join the [https://mailman.ic.ac.uk/mailman/listinfo/hpc-announce mailing list]. If you have problems, ask around within the group first; otherwise contact Matt Harvey in HPC support directly (m.j.harvey@imperial.ac.uk).

== Using The Cluster ==

Before running calculations on the cluster, look at the tutorial:

[https://www.ch.ic.ac.uk/wiki/index.php/Using_the_cluster_:_tutorial%2C_examples Using the cluster: tutorial and examples]

Below is a summary / reminder.

== Connecting ==

To connect to the PC cluster and forward display information for X windows, use '''ssh -Y myname@login.cx1.hpc.ic.ac.uk''', with your IC college username as 'myname'.

This connects to one of two front-end nodes; all other cluster nodes are for running calculations through the queueing system, and all nodes share common file systems.
To compile Gaussian code, you need to specify login-0 explicitly when connecting, as this is the node for which the supported Gaussian compiler is licensed.
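
For example, a minimal sketch of connecting straight to the compile node (this assumes the front-end resolves individually as login-0 under the same domain; check with HPC support if it does not):

 # Connect directly to login-0 for Gaussian compilation work
 # (the exact hostname is an assumption -- verify it resolves for you)
 ssh -Y myname@login-0.cx1.hpc.ic.ac.uk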

Once connected, running the command '''id''' should give something like the following:

 [login-0 ~]$ id
 uid=45751(mjbear) gid=11000(hpc-users) groups=1010(gaussian-users),11000(hpc-users),11100(gaussian-devel),11232(pgi-users)

To access the current development version of Gaussian, you need to be in the ''gaussian-devel'' group (and to have signed the developer's license agreement).
To access the run-time libraries needed to run Gaussian, you also need to be in the ''pgi-users'' group.
Both should have been set up when your account was created.
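
A quick way to confirm both memberships from the shell, as a sketch using standard tools:

 # Print your group names one per line and keep the two that matter
 id -Gn | tr ' ' '\n' | grep -E '^(gaussian-devel|pgi-users)$'

If both names are printed you are set; if either is missing, contact HPC support.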

== Queuing system ==

CX1 uses the PBS queueing system; '''xpbs''' gives you an interactive display.
You need to supply a job script that requests the resources required to run the calculation you want.
Examples are in
 /home/gaussian-devel/test_h11
(This is for the development version of Gaussian that was current as of --[[User:Mjbear|Mjbear]] 11:46, 23 March 2011 (UTC))

Using files in this directory, the command '''qsub jobscript_test009''' sends the following script to the queueing system:

 [login-0 test_g01]$ cat jobscript_test009
 #PBS -l ncpus=2
 #PBS -l mem=1700mb
 #PBS -l walltime=00:09:00
 #PBS -j oe
 
 module load gaussian/devel-modules
 module load gdvh11
 
 gdv < /home/mjbear/test_h11/test009.com > $WORK/test009.log

In general, it's best to request the resources you really need, rather than trying to second-guess the queueing system.
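
To submit and keep an eye on a job, the standard PBS commands apply; a minimal sketch (the job ID below is a placeholder):

 # Submit the job script; qsub prints the job ID
 qsub jobscript_test009
 # Check the state of your queued and running jobs
 qstat -u $USER
 # Remove a job if needed (replace 12345 with the real job ID)
 qdel 12345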

Queue suggestion: ''xdbg'' is for 4-processor calculations using up to 15600 MB for 15 minutes. This is basically for running calculations on one node, but an nprocshared=2 nproclinda=2 calculation will also work if there's space.
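
In the Gaussian input, that combination corresponds to Link 0 lines like the following sketch (the route line and memory value are illustrative placeholders):

 %NProcShared=2
 %NProcLinda=2
 %Mem=1500MB
 # HF/6-31G(d) opt
 
 ...title, charge/multiplicity and geometry as usual...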

We have two nodes of our own, with 8 processors and 12 GB each.
These have slow memory access, so they are best used for short test calculations to check that larger jobs will run.

To run a calculation on our private nodes you need to:

1) Use '''qsub -q pqmb''' to send the job to our private queue. (This can also be included in the job script itself as a PBS option.)

2) Request the node layout:

'''select=1:ncpus=8''' will give you 8 cores on 1 node (shared-memory parallelism only).

'''select=2:ncpus=8''' will give you 16 cores on 2 nodes (shared + distributed-memory parallelism). See test160.com for an example; a complete job script along these lines is sketched below.
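
A minimal sketch of a two-node script for the private queue, combining the options above with the module lines from the earlier example (the memory, walltime, and file names are illustrative; adjust them to your job):

 #PBS -q pqmb
 #PBS -l select=2:ncpus=8
 #PBS -l mem=11000mb
 #PBS -l walltime=01:00:00
 #PBS -j oe
 
 module load gaussian/devel-modules
 module load gdvh11
 
 # Development-version Gaussian run; input/output names are placeholders
 gdv < $HOME/test_h11/test160.com > $WORK/test160.log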

Output files go to the $WORK filesystem. This is not backed up!
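
So copy anything you want to keep back to your home directory once a job finishes, for example (paths are illustrative):

 # Archive results off the scratch filesystem after the job completes
 cp $WORK/test009.log $HOME/results/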

To do:
# using '''cpsub''' and '''cpcomchk'''

== Nodes on pqmb ==

We have our own queue named "pqmb" on CX1 (see above). Jobs can be directed to pqmb using <code>#PBS -q pqmb</code>. As of April 2017, there are three groups of nodes, selectable through PBS by their microarchitecture variable:

{| class="wikitable"
|-
! Group !! Nodes !! Cores/Node !! Memory/Node (GB) !! Microarchitecture !! Gaussian
|-
| 104 || 2 || 12 || 50 || westmere || G03+G09
|-
| 5 || 8 || 16 || 132 || sandybridge || G03+G09+G16
|-
| 100 || 8 || 24 || 264 || broadwell || G03+G09+G16
|}

The table shows the maximum resources available per node in each group. A single node with 8 cores in the Broadwell group may be selected using <code>nodes=1:broadwell:ppn=8</code>; replacing <code>broadwell</code> with one of the other microarchitecture variables above lets you choose which kind of node to run on. Note that Gaussian 16 will not run on the old Westmere nodes.

Example:

<pre>
#PBS -l nodes=1:broadwell:ppn=8
#PBS -l mem=16000mb
#PBS -l walltime=2096:00:00
#PBS -q pqmb
</pre>

This requests one node in the Broadwell group with 8 CPUs and 16000 MB of RAM. Note that the current maximum walltime on the private queue is 2096 hours.

With this notation, multiple nodes may be selected in a single job; <code>ppn</code> defines the processors per node to be used.

One can also run jobs across different types of nodes, as follows:

<code>#PBS -l nodes=2:broadwell:ppn=24+sandyb:ppn=16</code>

Further, a '''specific node''' can be requested using the same notation:

<code>#PBS -l nodes=cx1-100-4-3:ppn=4</code>

The <code>-l select</code> argument can also be used, but it does not seem to work well for jobs spanning several nodes. If you do use it, the format is:

<pre>
#PBS -l select=1:ncpus=8:broadwell=true
#PBS -l mem=16000mb
#PBS -l walltime=2096:00:00
#PBS -q pqmb
</pre>
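
Before submitting, you can check the state of the private queue with the standard PBS query commands; a quick sketch:

 # Show job counts and limits for the private queue
 qstat -Q pqmb
 # List everything currently queued or running on it
 qstat pqmb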