Spelling Pass
The-Scott-Flinders committed Feb 23, 2022
1 parent fe52cbc commit c2dc2ec
Showing 12 changed files with 26 additions and 25 deletions.
8 changes: 4 additions & 4 deletions docs/source/FileTransfers/FileTransfersIntro.md
@@ -12,7 +12,7 @@ All file-transfers are done via Secure File Transfer Protocol (SFTP), or Secure

The HPC is a little different from your desktop at home when it comes to storage, not just computing power. It's a shared resource, so we can't store everybody's data for all time - there just isn't enough space.

On DeepThought, are two main storage tiers, with a smaller pool for your documents and scripts. Firstly our bulk storage (approx 250TB) is the 'Scratch' area (located at /scratch/user/$FAN) - and is slower, spinning Hard-Disk Drives (HDD's). The smaller, hyper-fast NVMe Solid-State Drives (located at /local) are roughly 400GB on the 'standard' nodes (1-16) and 1.5TB on the 'high-capacity' nodes (19-21).
On DeepThought, are two main storage tiers, with a smaller pool for your documents and scripts. Firstly our bulk storage (approx. 250TB) is the 'Scratch' area (located at /scratch/user/$FAN) - and is slower, spinning Hard-Disk Drives (HDD's). The smaller, hyper-fast NVMe Solid-State Drives (located at /local) are roughly 400GB on the 'standard' nodes (1-16) and 1.5TB on the 'high-capacity' nodes (19-21).

There is a critical difference between these two locations. The /scratch area is a common storage area. You can access it from all of the login, management and compute nodes on the HPC. This is not the same as /local, which is only available on each compute node. That is - if your job is running on Node001, the /local area only exists on that particular node - you cannot access it anywhere else on the HPC.

@@ -27,7 +27,7 @@ The old /r_drive/ mount points were a legacy implementation left over from the

Your 'home' directories. This is a small amount of storage (~11TB total) to store your small bits and pieces. This is analogous to the Windows 'Documents' folder.

At a command promp, your home directory usually gets shortened to ~/.
At a command prompt, your home directory usually gets shortened to ~/.
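A quick illustration of that shorthand - the folder name below is purely hypothetical, but `~` always expands to the full path of your home directory:

```bash
echo ~              # prints the full path of your home directory
cd ~/my_project     # same as spelling out the full home path (folder name is illustrative only)
```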

#### What to store in /home

@@ -71,8 +71,8 @@ Enter your password when prompted. This will put the file in your home directory

To download files from DeepThought, you simply need to invert that command to point to either:

- A name of a Computer that Deepthough 'knows' about.
- An IP Address that Deepthought can reach.
- A name of a Computer that DeepThought 'knows' about.
- An IP Address that DeepThought can reach.
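For example, such a download might look like the sketch below, run from a DeepThought terminal session. Every name here is a placeholder - your FAN, the file, the destination machine and the destination path all need to be replaced with real values for a machine that DeepThought can actually reach:

```bash
# Hedged sketch only: push a file from /scratch on DeepThought out to another
# machine, identified either by a resolvable name or by an IP address.
scp /scratch/user/<FAN>/results.tar.gz <your_username>@<your-computer-or-ip>:/path/to/destination/
```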

### Transfers By Computer Name

2 changes: 1 addition & 1 deletion docs/source/ModuleSystem/LMod.md
@@ -11,7 +11,7 @@ Generally speaking, we can install almost all Linux/CentOS bounded software/appl
1. Are people other than just me going to use this software?
2. If yes, create a [ServiceOne](https://flindersuni.service-now.com) Ticket, and the HPC Support Team will assess the request.

Otherwise, there is nothing stopping you installing the program locally for yourself! If you run into issues installing software then open [ServiceOne](https://flindersuni.service-now.com) ticket, or contact the HPC Support team at their [email](mailto:deepthought@flinders.edu.au).
Otherwise, there is nothing stopping you installing the program locally for yourself! If you run into issues installing software then open a [ServiceOne](https://flindersuni.service-now.com) ticket, or contact the HPC Support team at their [email](mailto:deepthought@flinders.edu.au).
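Before installing anything yourself, it is also worth checking whether the software is already available through the module system - a brief sketch of the usual LMod workflow, where the module name and version are placeholders for whatever `module avail` actually lists:

```bash
# List the software modules already installed and available to load
module avail
# Load one into your current environment (name/version is illustrative only)
module load <module_name>/<version>
# Confirm what is currently loaded
module list
```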

## How Do I Install Software?

12 changes: 6 additions & 6 deletions docs/source/SLURM/SLURMIntro.md
@@ -54,7 +54,7 @@ The basic premise is - you have:

Then you multiply all three together to get your end priority. So, let's say you ask for 2 GPUs (the current max you can ask for)

A GPU on Deepthought (When this was written) is set to have these parameters:
A GPU on DeepThought (When this was written) is set to have these parameters:

- Weight: 5
- Factor: 1000
@@ -75,7 +75,7 @@ To give you an idea of the _initial_ score you would get for consuming an entire

**RAM**: `262,144 * 0.25 * 1000 = 65,536,000` (256GB of RAM, measured per MB)

**Total**: `65,600,000`
**Total**: `65,600,000`

So, it stacks up very quickly, and you really want to write your job to ask for what it needs, and not much more! This is not the exact number you will see and should only be taken as an example. If you want to read up on exactly how Fairshare works, then head on over to [here](https://slurm.schedmd.com/priority_multifactor.html).
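To make that arithmetic concrete, here is a hedged back-of-the-envelope sketch using only the GPU and RAM figures quoted in this section (the real total also includes the CPU contribution, which is not shown here):

```bash
# Rough worked example of the resource * weight * factor scoring described above.
gpu_score=$(( 2 * 5 * 1000 ))        # 2 GPUs, weight 5, factor 1000 -> 10,000
ram_score=$(( 262144 * 1000 / 4 ))   # 256GB = 262,144MB, with the 0.25 and 1000 multipliers quoted above -> 65,536,000
echo "GPU: ${gpu_score}  RAM: ${ram_score}  Combined: $(( gpu_score + ram_score ))"
```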

@@ -87,7 +87,7 @@ Slurm has also produced the [Rosetta Stone](_static/SLURMRosettaStone.pdf) - a d

### Job submission

Once the slurm script is modified and ready to run, go to the location you have saved it and run the command:
Once the Slurm script is modified and ready to run, go to the location you have saved it and run the command:

sbatch <name of script>.sh
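For instance, with a hypothetical script name, submission and a quick check on the queued job might look like this (both commands are standard Slurm tooling):

```bash
# Submit the batch script to the scheduler
sbatch run_analysis.sh
# Check the state of your queued and running jobs
squeue -u $USER
```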

@@ -205,7 +205,7 @@ The following variables are set per job, and can be accessed from your SLURM Scrip

#### DeepThought Set Environment Variables

The DeepThought HPC will set some additional environment variables to manipulate some of the Operating system functions. These directories are set at job creation time and then are removed when a job completes, crashes or otherwise exists.
The DeepThought HPC will set some additional environment variables to manipulate some of the Operating system functions. These directories are set at job creation time and then are removed when a job completes, crashes or otherwise exists.

This means that if you leave anything in $TMP or $SHM directories it will be *removed when your job finishes*.
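A hedged sketch of how these per-job locations are typically used inside a job script - the input path and file names below are placeholders only:

```bash
# Stage input data into the job-scoped temporary directory, work there,
# and remember that anything left behind disappears when the job ends.
cp /scratch/user/<FAN>/inputs/dataset.tar.gz "$TMPDIR/"
cd "$TMPDIR"
tar -xzf dataset.tar.gz
```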

@@ -342,7 +342,7 @@ An excellent guide to [submitting jobs](https://support.ceci-hpc.be/doc/_content
# The command format is as follows: #SBATCH --time=DAYS-HOURS
# There are many ways to specify time, see the SchedMD Slurm
# manual pages for more.
#SBATCH --time=14=0
#SBATCH --time=14-0
#
##################################################################
# How many tasks is your job going to run?
@@ -421,7 +421,7 @@ An excellent guide to [submitting jobs](https://support.ceci-hpc.be/doc/_content

##################################################################
# Once your job has finished its processing, copy back your results
# and ONLY the results to /scratch, then cleanup the temporary
# and ONLY the results to /scratch, then clean-up the temporary
# working directory

cp -r /$TMPDIR/<OUTPUT_FOLDER> /scratch/user/<FAN>/<JOB_RESULT_FOLDER>
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -17,7 +17,7 @@

# -- Project information -----------------------------------------------------

project = 'DeepThought Documentation'
project = 'DeepThought HPC'
copyright = '2021, Flinders University'
author = 'Flinders University'

5 changes: 3 additions & 2 deletions docs/source/index.rst
@@ -1,7 +1,7 @@
Welcome to the DeepThought Documentation
Welcome to the DeepThought HPC
=========================================

The new Flinder HPC is called DeepThought. This new HPC comprises of AMD EPYC based hardware and next-generation management software, allowing for a dynamic and agile HPC service.
The new Flinders University HPC is called DeepThought. This new HPC comprises of AMD EPYC based hardware and next-generation management software, allowing for a dynamic and agile HPC service.

.. attention::
This documentation is under active development, meaning that it can
@@ -58,6 +58,7 @@ Table of Contents
dataflow/hpcjobdataflow.rst
SLURM/SLURMIntro.md
ModuleSystem/LMod.md

.. toctree::
:maxdepth: 1
:caption: Software Suites
4 changes: 2 additions & 2 deletions docs/source/policies/accessandpermissions.rst
@@ -2,7 +2,7 @@ HPC Etiquette
==================
The HPC is a shared resource, and to help make sure everybody can
continue to use the HPC together, the following provides some expected
behavior.
behaviour.

Head / Login / Management Nodes
--------------------------------
@@ -83,5 +83,5 @@ If you break these rules the HPC Team may take any or all of these actions:

1. Cancellation of running tasks/jobs
2. Removal of problematic files and/or programs
3. Warning of expected behaviors
3. Warning of expected behaviours
4. Revocation of HPC Access
4 changes: 2 additions & 2 deletions docs/source/policies/fairuse.rst
@@ -59,9 +59,9 @@ Holds the most used packages on the HPC. The HPC Team monitors the loading and u
As an example, some of the most used programs on the HPC are:

* R
* Python 3.8
* Python 3.9
* RGDAL
* CUDA 10.1 Toolkit
* CUDA 11.2 Toolkit

While not an exhaustive list of the common software, it does allow the team to focus its efforts and provide more in-depth support for these programs.
This means they are usually first to be updated and have a wider range of tooling attached to them by default.
2 changes: 1 addition & 1 deletion docs/source/software/gromacs.rst
@@ -23,7 +23,7 @@ GROMACS supports all the usual algorithms you expect from a modern molecular dyn
Quickstart Command Line Guide
================================

Gromacs uses UCX and will require a custom mpirun invocation. The module system will warn you of this when you load the module. The following is a known good starting point:
GROMACS uses UCX and will require a custom mpirun invocation. The module system will warn you of this when you load the module. The following is a known good starting point:


``mpirun -mca pml ucx --mca btl ^vader,tcp,uct -x UCX_NET_DEVICES=bond0 <program> <options>``
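As a usage sketch, a full invocation might look like the following, where the binary name and run name are assumptions (they depend on how the GROMACS module on DeepThought is built) rather than confirmed details:

```bash
# Hedged example: the UCX-aware mpirun options above, wrapped around a
# hypothetical MPI-enabled GROMACS binary and run name.
mpirun -mca pml ucx --mca btl ^vader,tcp,uct -x UCX_NET_DEVICES=bond0 \
    gmx_mpi mdrun -deffnm my_run
```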
2 changes: 1 addition & 1 deletion docs/source/software/jupyter.rst
@@ -12,7 +12,7 @@ Released and accessible to all HPC Users at the correct URLs.
=========
Overview
=========
The `Jupyter Enterprise Gateway`_ is a multi-user environment for JupyterNotebooks. DeepThought has integrated
The `Jupyter Enterprise Gateway`_ is a multi-user environment for Jupyter Notebooks. DeepThought has integrated
the Jupyter Gateway to allow users to run jobs on the cluster via the native Web Interface.

If you have access to the HPC, you automatically have access to the Jupyter Lab. You can access the JupyterLab Instance
6 changes: 3 additions & 3 deletions docs/source/software/lammps.rst
@@ -8,8 +8,8 @@ LAMMPS was installed from the Development Branch on 7th Jan, 2022.

There are two versions of LAMMPS installed on DeepThought, each with their own modules:

1. A CPU only version, called lmp
2. a GPU only version, called lmp_gpu
1. A CPU only version, with the program called called lmp
2. a GPU only version, with the program called lmp_gpu

*You cannot run the GPU enabled version without access to a GPU, as it will cause errors.*
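A minimal usage sketch, with a hypothetical input script name - the GPU build (`lmp_gpu`) follows the same pattern but must be run on a node with a GPU allocated:

```bash
# Run the CPU build of LAMMPS against an input script (file name is illustrative only)
lmp -in in.lammps
```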

@@ -21,7 +21,7 @@ Overview
=================
From LAMMPS_:

LAMMPS is a classical molecular dynamics code with a focus on materials modeling. It's an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has
LAMMPS is a classical molecular dynamics code with a focus on materials modelling. It's an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has
potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model
atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

2 changes: 1 addition & 1 deletion docs/source/storage/storageusage.rst
@@ -70,7 +70,7 @@ Here is a rough guide as to what should live in your /scratch/$FAN directory. In
/Local
=========

Local is the per-node, high speed flash storage that is specific to each node. When running a job, you want to run your data-sets on /local if at all possible - its the quickest storage location on the HPC. You MUST cleanup /local once you are done.
Local is the per-node, high speed flash storage that is specific to each node. When running a job, you want to run your data-sets on /local if at all possible - its the quickest storage location on the HPC. You MUST clean-up /local once you are done.

^^^^^^^^^^^^^^^^^^^^^^^^^
What to Store in /local
2 changes: 1 addition & 1 deletion docs/source/system/deepthoughspecifications.md
@@ -10,7 +10,7 @@ The SLURM Scheduler has the notion of 'Job Queue' or 'Partitions'. These manage
|---------------| ------- | ------ | ----- |
|general | 17 | General Usage Pool | 14 Days |
|gpu | 3 | GPU Access Pool | 14 Days |
|melfeu | 2 | Molecular Biology Lab | 14 Days |
|melfu | 2 | Molecular Biology Lab | 14 Days |
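
As a hedged illustration of how these queues are used, a job script would normally name one of the partitions above in its `#SBATCH` header - the other directive values here are placeholders only:

```bash
#!/bin/bash
#SBATCH --partition=general   # one of the partitions listed above
#SBATCH --time=1-00           # illustrative 1-day limit (within the 14-day cap)
#SBATCH --ntasks=1            # illustrative task count
#SBATCH --mem=4G              # illustrative memory request

srun hostname                 # placeholder workload
```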

## Storage Layout
