Commit

Merge branch 'master' of https://github.com/uit-no/hpc-doc
eta000 committed Dec 1, 2017
2 parents e126a0e + 02a8052 commit 677ddfd
Showing 4 changed files with 66 additions and 25 deletions.
Binary file modified .DS_Store
Binary file not shown.
15 changes: 15 additions & 0 deletions applications/physics/COMSOL/COMSOL.rst
@@ -66,6 +66,21 @@ If you want to know whether there are available license tokens or not, load the
$ lmstat -c $LMCOMSOL_LICENSE_FILE -a
The first time you run a COMSOL job on Stallo?
-----------------------------------------------

Get the information you need here:


.. toctree::
   :maxdepth: 1

   firsttime_comsol.rst


Here we provide information on how to run COMSOL on Stallo for the first time, and on using SLURM for the first time.


Happy calculations!


44 changes: 44 additions & 0 deletions applications/physics/COMSOL/firsttime_comsol.rst.md
@@ -0,0 +1,44 @@
.. _first_time_comsol:

===================================
First time you run a COMSOL job?
===================================

This page contains information aimed at first-time
users of COMSOL on Stallo, but it may also be useful to
more experienced users. Please look carefully through the
provided examples. Also note that the job-script example is rather richly
commented to provide additional and relevant information.

If you want to run this test job, download copies of the scripts and put them
into your test job folder (which I assume you have created in advance).

COMSOL input example
--------------------

.. include:: ../files/comsol_smalltest.mph
:literal:

This file can also be downloaded here: :download:`Small test for COMSOL <../files/comsol_smalltest.mph>`.

Place this file in a job folder of choice, say COMSOLFIRSTJOB in your home directory on Stallo.
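
For example, assuming you downloaded comsol_smalltest.mph to your current
working directory, one way to create the folder and copy the file into it is
(the folder name is only a suggestion):

.. code-block:: bash

    # Create the job folder in your home directory on Stallo
    mkdir -p ~/COMSOLFIRSTJOB

    # Copy the downloaded input file into it and change into the folder
    cp comsol_smalltest.mph ~/COMSOLFIRSTJOB/
    cd ~/COMSOLFIRSTJOB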

Then, copy the job-script as seen here:

.. include:: ../files/run_comsol.sh
:literal:

(This script can also be downloaded here: :download:`run_comsol.sh <../files/run_comsol.sh>`.)

Before you can submit a job, you need to know what "type" of study you want to perform (please read more about that on the vendor support page). For the purpose of this example, I have chosen study 4, named std04; this is the second argument to the run script (see the comments in the script).
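
For orientation only, below is a minimal sketch of what such a job script could
look like. It is not the provided run_comsol.sh; the job name, task count,
walltime and module name are assumptions that you must adapt to your own setup:

.. code-block:: bash

    #!/bin/bash -l
    # Illustrative SLURM job script for a COMSOL batch run (assumed setup).
    # Usage: sbatch run_comsol.sh <inputfile without .mph> <study name>
    #SBATCH --job-name=comsol_test
    #SBATCH --ntasks=16
    #SBATCH --time=01:00:00

    input=$1    # e.g. comsol_smalltest
    study=$2    # e.g. std04, the study to solve

    module load comsol   # module name is an assumption; check 'module avail comsol'

    # Run COMSOL in batch mode on the chosen study and write results to a new file.
    comsol batch -inputfile ${input}.mph -outputfile ${input}_out.mph -study ${study}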

Place this script in the same folder and type:

.. code-block:: bash

sbatch run_comsol.sh comsol_smalltest std04

You have now submitted your first COMSOL job.
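
To follow the job, you can use the standard SLURM commands, for example:

.. code-block:: bash

    # List your own jobs and their current state (PENDING, RUNNING, ...)
    squeue -u $USER

    # Show accounting information for a job after it has finished
    # (replace <jobid> with the numeric job id printed by sbatch)
    sacct -j <jobid>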

Good luck with your (MULTI)physics!

32 changes: 7 additions & 25 deletions news/slurm.rst
@@ -7,35 +7,17 @@
.. _slurm:


-We have migrated to the SLURM resource manager
-==============================================
+We have fully migrated to the SLURM resource manager
+=====================================================

-As of October 1st 2016 stallo.uit.no will have a new official queuing system,
-named `SLURM <http://slurm.schedmd.com/>`_ (Simple Linux Utility for Resource
+The current and only working resource manager system on Stallo is now `SLURM <http://slurm.schedmd.com/>`_ (Simple Linux Utility for Resource
 Management).

-To get a soft start for all, the old queuing system will still be functional
-and accept jobs as usual. We will only start out with 152 nodes in the SLURM
-part of the cluster, the rest will stay with the old torque. Slowly we will
-move more and more nodes from torque into SLURM, as nodes free up from running
-jobs.
+To access highmem nodes, you still need to be a member of the highmem group of users.

-Since the cluster has 2 queuing systems you will have to check both queues to
-get the full picture of the load on the system, this is of course unfortunate
-but it has to stay this way until the transition is fully completed.
+Also, you need to submit to the highmem partition::

-The highmem nodes will remain in torque for now.
+   $ #SBATCH --partition=highmem

-Jobs already submitted to torque will stay within this queue, so if you would
-like to move your jobs to SLURM you have to resubmit yourself.

-If you use the `Abel
-<http://www.uio.no/english/services/it/research/hpc/abel/>`_ cluster at UiO you
-are already familiar with SLURM and should find it rather easy to switch. The
-new NOTUR supercomputers will also run SLURM.

-The torque software has served us well for many years, however it is no longer
-actively maintained. We have encountered bugs in the system that diminish the
-utilization of the cluster, as this is not acceptable we have chosen to switch
-to SLURM. By this we hope that you will find the user experience across
+By this migration we hope that the user experience across
 Norwegian HPC systems will be more uniform.
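
For completeness, a minimal sketch of a batch script header that requests the
highmem partition (job name, task count and walltime are placeholders):

.. code-block:: bash

    #!/bin/bash -l
    #SBATCH --job-name=my_highmem_job   # placeholder job name
    #SBATCH --partition=highmem         # requires membership in the highmem group
    #SBATCH --ntasks=16                 # placeholder resource request
    #SBATCH --time=02:00:00             # placeholder walltime

    # ... your commands go here ...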
