Merge pull request #28 from mathiasbockwoldt/builderrors
Builderrors
bast committed Feb 26, 2018
2 parents 528c55e + 96bced1 commit 2b0c759
Showing 35 changed files with 148 additions and 159 deletions.
Binary file removed .DS_Store
Binary file not shown.
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,3 +1,4 @@
_build/
venv/
*~
.DS_Store
5 changes: 4 additions & 1 deletion README.md
@@ -4,12 +4,15 @@

Served via http://hpc-uit.readthedocs.org.

Copyright (c) 2015
Copyright (c) 2018
Radovan Bast,
Mathias Bockwoldt,
Roy Dragseth,
Stig Rune Jensen,
Dan Jonsson,
Jonas Juselius,
Elena Malkin,
Marte Skadsem,
Espen Tangen,
Giacomo Tartari,
Steinar Traedal-Henden,
Binary file removed applications/.DS_Store
Binary file not shown.
Binary file removed applications/chemistry/.DS_Store
Binary file not shown.
9 changes: 1 addition & 8 deletions applications/chemistry/ADF/ADF.rst
@@ -79,11 +79,4 @@ Here are links to more information about ADF and Band on Stallo:
firsttime_adf
Band
firsttime_band








advanced
10 changes: 5 additions & 5 deletions applications/chemistry/ADF/ADFprog.rst
@@ -11,15 +11,15 @@ Related information:
.. toctree::
:maxdepth: 1

firstime_adf.rst
firsttime_adf

General Information:
====================

Description:
-------------

According to the vendor, ADF (Amsterdam Density Functional) is a DFT program particularly strong in understanding and predicting structure, reactivity, and spectra of molecules. It is a Fortran program for calculations on atoms and molecules (in gas phase or solution). It can be used for the study of such diverse fields as molecular spectroscopy, organic and inorganic chemistry, crystallography and pharmacochemistry.

The underlying theory is the Kohn-Sham approach to Density-Functional Theory (DFT). The software is a DFT-only first-principles electronic structure calculations program system, and consists of a rich variety of packages.

@@ -48,7 +48,7 @@ to see which versions of ADF are available. Use
$ module load ADF/<version> # i.e adf2017.108
to get access to any given version of ADF.

The first time you run an ADF job?
----------------------------------
@@ -59,9 +59,9 @@ Get the information you need here:
.. toctree::
:maxdepth: 1

firstime_adf.rst
firsttime_adf.rst


Here we hold information on how to run on Stallo for the first time, and on using SLURM for the first time.


12 changes: 6 additions & 6 deletions applications/chemistry/ADF/Band.rst
@@ -11,7 +11,7 @@ Related information:
.. toctree::
:maxdepth: 1

firstime_band.rst
firsttime_band.rst

General Information:
====================
@@ -21,7 +21,7 @@ Description:

BAND is an atomic-orbital based DFT program for periodic systems (crystals, slabs, chains and molecules).

The Amsterdam Density Functional Band-structure program - BAND - can be used for calculations on periodic systems, i.e. polymers, slabs and crystals, and is supplemental to the molecular ADF program for non-periodic systems. It employs density functional theory in the Kohn-Sham approach. BAND is very similar to ADF in the chosen algorithms, although important differences remain.

BAND makes use of atomic orbitals, it can handle elements throughout the periodic table, and has several analysis options available. BAND can use numerical atomic orbitals, so that the core is described very accurately. Because of the numerical orbitals BAND can calculate accurate total energies. Furthermore it can handle basis functions for arbitrary l-values.

@@ -49,22 +49,22 @@ to see which versions of Band are available. Use
$ module load ADF/<version> # i.e adf2017.108
to get access to any given version of Band.


The first time you run a Band job?
----------------------------------
-----------------------------------

Get the information you need here:


.. toctree::
:maxdepth: 1

firstime_band.rst
firsttime_band.rst


Here we hold information on how to run on Stallo for the first time, and on using SLURM for the first time.



44 changes: 44 additions & 0 deletions applications/chemistry/ADF/advanced.rst
@@ -0,0 +1,44 @@
.. _adf_advanced:

===============================
Information for advanced users:
===============================

Scaling behaviour:
------------------

Since ADF is a very complex code, able to solve a vast range of chemistry problems, giving unified advice regarding scaling is difficult. We will try to describe the scaling behaviour for the most used areas of application. A standard geometry optimization seems to scale well up to the region of 4-6 full nodes (60-100 cores) at least. For linear transit runs we would currently stay at no more than 4 full nodes. Unless they have tests indicating otherwise, users who want to run large jobs should allocate no more than the prescribed numbers of processors. More information will come.
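Under the guidance above, a batch header for a standard geometry optimization might look like the following sketch. This is only an illustration: the account name, walltime, and the 16-cores-per-node figure are placeholders, not verified Stallo values.

```shell
# Hypothetical SLURM header for an ADF geometry optimization on 4 full
# nodes; account name and walltime are placeholders for illustration.
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16
#SBATCH --time=24:00:00
#SBATCH --account=nnXXXXk
```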

Memory allocation:
------------------

On Stallo there are 32 GB and 128 GB nodes. Pay close attention to the memory usage of your job on the nodes where it runs, and if necessary redistribute the job so that it uses fewer than all cores on the node, up to the 32 GB limit of the node. If you need more memory than that, you will need to ask for access to the highmem queue. As long as you do not ask for more than 2 GB/core, using the pmem flag for torque is in principle meaningless.
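As a sketch of the redistribution idea (the node size and core counts are assumptions for illustration, not Stallo-specific values), a torque header that leaves half the cores on each 32 GB node idle doubles the memory available per running process:

```shell
# Hypothetical torque header: use 8 of 16 cores on each 32 GB node,
# so each process gets roughly 4 GB instead of 2 GB of memory.
#PBS -l nodes=2:ppn=8
#PBS -l walltime=24:00:00
```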


-------------------------
How to restart an ADF job
-------------------------

#. In the directory where you started your job, rename or copy the job-output t21 file into $SCM_TMPDIR/TAPE21.

#. In the job.inp file, put RESTART TAPE21 just under the comment line.

#. Submit job.inp file as usual.

This might also be automated; we are working on a solution for restarting after unexpected downtime.
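A minimal shell sketch of the steps above, assuming the job output is named job.t21 and the input job.inp; the dummy files and the temporary directory only stand in for a real ADF run:

```shell
# Sketch of the ADF restart steps; file names (job.t21, job.inp) and
# the SCM_TMPDIR location are illustrative stand-ins for a real run.
set -e
demo=$(mktemp -d)                             # stand-in for the job directory
cd "$demo"
printf 'Title water optimization\n' > job.inp # dummy input, comment line first
printf 'binary t21 data\n' > job.t21          # dummy job-output t21 file

export SCM_TMPDIR="$demo/scmtmp"
mkdir -p "$SCM_TMPDIR"

# 1. Copy the job-output t21 file into $SCM_TMPDIR/TAPE21
cp job.t21 "$SCM_TMPDIR/TAPE21"

# 2. Put RESTART TAPE21 just under the comment line
sed -i '1a RESTART TAPE21' job.inp

# 3. Submit job.inp as usual (sbatch/qsub) -- not done in this sketch
```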

------------------------------
How to run ADF using fragments
------------------------------

This is a brief introduction to creating the fragments necessary for, among other things, BSSE calculations and proper broken-symmetry calculations.

**Running with fragments:**

* Download and modify the script for the fragment create run, e.g. this template: Create.TZP.sh (modify ACCOUNT, add the desired atoms and change to the desired basis and functional).
* Run the create job in the same folder as the one where you want to run your main job(s) (``qsub Create.TZP.sh``).
* Put the line ``cp $init/t21.* .`` in your ADF run script (in your $HOME/bin directory).
* In job.inp, specify the correct file name in the FRAGMENT section, e.g. ``H t21.H_tzp``.
* Submit job.inp as usual.


Running with fragments is only necessary when you want to run BSSE calculations or manipulate the charge and/or atomic mass of a given atom (for instance, modelling heavy-isotope-labelled samples for frequency calculations).
17 changes: 0 additions & 17 deletions applications/chemistry/ADF/advanced/adf_fragments.rst

This file was deleted.

13 changes: 0 additions & 13 deletions applications/chemistry/ADF/advanced/adf_restart.rst

This file was deleted.

12 changes: 0 additions & 12 deletions applications/chemistry/ADF/advanced/advanced.md

This file was deleted.

37 changes: 19 additions & 18 deletions applications/chemistry/Gaussian/Gaussian.rst
@@ -1,64 +1,65 @@
.. _Gaussian:

===========================================
===========================
The GAUSSIAN program system
===========================================
===========================

Information regarding the quantum chemistry program system Gaussian

Related information:
=======================
====================

.. toctree::
:maxdepth: 1

GaussView.rst
firsttime_gaussian.rst
gaussian_on_stallo.rst
advanced.rst


General Information:
=====================
====================

Description:
---------------
------------

Gaussian is a computational chemistry software program system initially released in 1970. Gaussian has a rather low barrier to entry and a tidy user setup, which, together with a broad range of capabilities and a graphical user interface (GaussView), might explain its popularity in academic institutions.

Online info from vendor:
--------------------------
------------------------

* Homepage: http://www.gaussian.com
* Documentation: http://www.gaussian.com/g_tech/g_ur/g09help.htm


License information:
----------------------
--------------------

The license of GAUSSIAN is commercial/proprietary.

The license of Gaussian constitutes 4 site licenses for the 4 current host institutions of NOTUR installations: NTNU, UiB, UiO, and UiT. In principle, only persons from one of these institutions have access to the Gaussian software system installed on Stallo.

* To get access to the code, you need to be in the gaussian group of users.
* To be in the gaussian group of users, you need either to be a member of the abovementioned
institutions or provide proof of holding a license on your own.

Citation
----------
--------

When publishing results obtained with the referred software, please check the developers' web page in order to find the correct citation(s).


Additional online info about Gaussian on Stallo:
=================================================
================================================

Usage
------
-----

Since Gaussian is a rather large and versatile program system with a range of different binaries, we would in general advise users to check whether their jobs are parallelized or not before submitting jobs. Unless every step is entirely well parallelized, it would in general be more efficient to split a complex many-step job into smaller parallel and serial parts/jobs, so that the overall utilization of hardware is also improved. If you are in doubt whether or not your job will scale outside one node (= shared memory), go to the Gaussian application home folder and check if there is a \*.exel version of the executable(s) you will be using. If yes, your job will generally work well in parallel up to approx. 300 cores (this is for the more advanced users).

We generally want users to run on as many nodes as possible to limit the walltime of running jobs.

Use

@@ -72,7 +73,7 @@ to see which versions of Gaussian are available. Use
$ module load Gaussian/<version> # i.e 09.d01
to get access to any given version of Gaussian.


The first time you run a Gaussian job?
@@ -87,9 +88,9 @@ Get the information you need here:


About the Gaussian version(s) installed on Stallo
--------------------------------------------------
-------------------------------------------------

Since the installs we have made on Stallo are somewhat different from
installs of Gaussian elsewhere, we have gathered some information about
this here:

@@ -98,6 +99,6 @@ this here:

gaussian_on_stallo.rst

Here we also address issues related to running Gaussian in parallel, number of cores to use,
memory allocation and special issues taken care of at install.

26 changes: 13 additions & 13 deletions applications/chemistry/Gaussian/advanced.rst
@@ -5,10 +5,10 @@ Gaussian 09
===========

Gaussian 09 is the current major release of the Gaussian Program System
for computational chemistry. For online manual, see here:
http://www.gaussian.com/g_tech/g_ur/g09help.htm.

**Before you do start using Gaussian 09, we encourage you to take a look at these pages at the Gaussian web site:**

- Information on program limits for Gaussian 09: http://www.gaussian.com/g_tech/g_ur/b_proglimits.htm
- Information on efficiency considerations for Gaussian 09: http://www.gaussian.com/g_tech/g_ur/m_eff.htm
@@ -19,7 +19,7 @@ http://www.gaussian.com/g_tech/g_ur/g09help.htm.
- Content in the .tsnet.config file.
- Advised number of nodes and cpus in parallel runs.

Please do read these sections carefully.

**Information related to inputs and output:**

@@ -39,7 +39,7 @@ Currently, both the minor revision b.01 and c.01 are available on Stallo, the de

$ module load Gaussian/09.b01

If you download a run script (see below), this will be taken care of by the script (though you have to change to your preferred flavor of the Gaussian 09 code). Two run scripts are available (for running in serial and parallel, respectively). Download one of the scripts to your home/$USER/bin and use it as described.

Scriptfile-examples
-------------------
@@ -62,22 +62,22 @@ For the serial script, the number of cores is set to 1, thus you only set name o
That means running the job water.inp on a single core for 2 hours and 00 minutes. Command line argument 1 ($1) is the name of the input file without its extension (which is expected to be ``.inp``) and command line argument 2 ($2) is the walltime in the form hours:minutes.

For parallel jobs you also need to give the number of CPUs on the command line, preferably as the number of nodes and the number of cores per node, for a total of four command-line arguments to the script. Command line argument $1 is unchanged, but $2 will now be the number of nodes and $3 the number of cores per node, with $4 as walltime in the same form as for the serial script::

$ g09parallel.pbs water 2 16 2:00

meaning job water.inp running on 2 nodes with 16 cores each, a total of 32 cores, for 2 hours and 00 minutes, giving a total CPU-time count of 64 hours. See also http://www.gaussian.com/g_tech/g_ur/m_linda.htm for info on parallel running of Gaussian 09.
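The CPU-time bookkeeping can be sketched as a toy calculation; the numbers mirror the g09parallel.pbs example above:

```shell
# Toy calculation of total CPU-hours for: g09parallel.pbs water 2 16 2:00
nodes=2
cores_per_node=16
walltime_hours=2
total_cores=$((nodes * cores_per_node))        # 2 nodes x 16 cores
cpu_hours=$((total_cores * walltime_hours))    # cores x walltime
echo "$total_cores cores x $walltime_hours h = $cpu_hours CPU-hours"
```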

**NOTE:** Running Gaussian 09 jobs in parallel requires the additional
keywords *%LindaWorkers* and *%NProcshared* in the Link 0 part of the
input file. This is further discussed here: `gaussian_input`. If you
run Gaussian 09-jobs using the scripts discussed above, this is taken
care of automatically. If not, you need to put this information in your input file manually.

If you plan to run on >1 node (using %NProcLinda), make a new file in your $HOME directory::

.tsnet.config

containing only the line::

Tsnet.Node.lindarsharg:/global/apps/bin/pbsdshwrapper.py

@@ -110,8 +110,8 @@ Restart of jobs
^^^^^^^^^^^^^^^
Retrieve the .chk file from the temporary directory and add the restart command to the input (opt=restart or scf=restart, depending on the job). Make sure that the *.chk* and the *.inp* files have the same firstname. Submit as usual.
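A hedged sketch of this bookkeeping for an optimization job; the file names, the dummy route section, and the keyword edit are illustrative, not the exact contents of a real Gaussian input:

```shell
# Sketch of restarting a Gaussian 09 optimization; file names
# (water.inp, water.chk) and the route section are illustrative.
set -e
demo=$(mktemp -d)
cd "$demo"
printf '%%chk=water.chk\n#P B3LYP/6-31G* opt\n\nwater\n' > water.inp
printf 'checkpoint data\n' > water.chk  # stands in for the retrieved .chk

# .chk and .inp share the same firstname; add the restart keyword:
sed -i 's/opt/opt=restart/' water.inp
```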

Restart from g03 checkpoint file\
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Restart from g03 checkpoint file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To do this, you need to convert the g03 .chk file to g09 .chk file using a script called c8609 in the g09 folder. Using global reference, it would look like this on Stallo::

$ /global/apps/gaussian/g09.b01/g09/c8609 water.chk
