From 910cf3d206fa72457a8f0789d43247b2a95795e9 Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Mon, 1 Dec 2025 12:26:14 +1300
Subject: [PATCH 01/25] Initial move

---
 docs/General/FAQs/What_is_a_core_file.md | 2 +-
 .../Cheat_Sheets/Slurm-Reference_Sheet.md | 2 +-
 .../Batch_Jobs/Hyperthreading.md | 2 +-
 .../Batch_Jobs/Using_GPUs.md | 14 ++---
 .../Configuring_Dask_MPI_jobs.md | 4 +-
 .../OpenMP_settings.md | 6 +-
 .../Thread_Placement_and_Thread_Affinity.md | 6 +-
 .../Available_Applications}/ABAQUS.md | 0
 .../Available_Applications}/ANSYS.md | 0
 .../Available_Applications}/AlphaFold.md | 4 +-
 .../Available_Applications}/Apptainer.md | 0
 .../Available_Applications}/BLAST.md | 0
 .../Available_Applications}/BRAKER.md | 0
 .../Available_Applications}/CESM.md | 0
 .../Available_Applications}/COMSOL.md | 0
 .../Available_Applications}/Clair3.md | 0
 .../Available_Applications}/Cylc.md | 0
 .../Available_Applications}/Delft3D.md | 0
 .../Available_Applications}/Dorado.md | 0
 .../Available_Applications}/FDS.md | 0
 .../Available_Applications}/FlexiBLAS.md | 0
 .../Available_Applications}/FreeSurfer.md | 0
 .../Available_Applications}/GATK.md | 0
 .../Available_Applications}/GROMACS.md | 0
 .../Available_Applications}/Gaussian.md | 0
 .../Available_Applications}/Java.md | 0
 .../Available_Applications}/Julia.md | 0
 .../Available_Applications}/Keras.md | 0
 .../Available_Applications}/Lambda_Stack.md | 0
 .../Available_Applications}/MAKER.md | 0
 .../Available_Applications}/MATLAB.md | 0
 .../Available_Applications}/Miniforge3.md | 0
 .../Available_Applications}/Molpro.md | 0
 .../Available_Applications}/NWChem.md | 0
 .../Available_Applications}/ORCA.md | 0
 .../Available_Applications}/OpenFOAM.md | 0
 .../Available_Applications}/OpenSees.md | 0
 .../Available_Applications}/ParaView.md | 0
 .../Available_Applications}/Python.md | 0
 .../Available_Applications}/R.md | 0
 .../Available_Applications}/RAxML.md | 0
 .../Available_Applications}/Relion.md | 0
 .../Available_Applications}/Supernova.md | 0
 .../Available_Applications}/Synda.md | 0
 .../TensorFlow_on_CPUs.md | 0
 .../TensorFlow_on_GPUs.md | 2 +-
 .../Available_Applications}/Trinity.md | 0
 .../Available_Applications}/VASP.md | 0
 .../Available_Applications}/VTune.md | 2 +-
 .../Available_Applications}/VirSorter.md | 0
 .../Available_Applications}/WRF.md | 0
 .../Available_Applications}/fastStructure.md | 0
 .../Available_Applications}/index.md | 0
 .../Available_Applications}/ipyrad.md | 0
 .../Available_Applications}/ont-guppy-gpu.md | 0
 .../Available_Applications}/snpEff.md | 0
 .../Containers}/NVIDIA_GPU_Containers.md | 0
 ..._executable_under_Apptainer_in_parallel.md | 0
 ...un_an_executable_under_Apptainer_on_gpu.md | 0
 .../Installing_Applications_Yourself.md | 12 ++--
 .../Profiling_and_Debugging/.pages.yml | 0
 .../Profiling_and_Debugging/Debugging.md | 0
 .../Profiler-ARM_MAP.md | 0
 .../Profiling_and_Debugging/Profiler-VTune.md | 0
 .../Slurm_Native_Profiling.md | 0
 .../Software_Installation_Request.md | 0
 .../Software_Version_Management.md | 2 +-
 docs/redirect_map.yml | 59 +++++++++++++++++++
 68 files changed, 88 insertions(+), 29 deletions(-)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/ABAQUS.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/ANSYS.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/AlphaFold.md (99%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Apptainer.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/BLAST.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/BRAKER.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/CESM.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/COMSOL.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Clair3.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Cylc.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Delft3D.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Dorado.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/FDS.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/FlexiBLAS.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/FreeSurfer.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/GATK.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/GROMACS.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Gaussian.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Java.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Julia.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Keras.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Lambda_Stack.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/MAKER.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/MATLAB.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Miniforge3.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Molpro.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/NWChem.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/ORCA.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/OpenFOAM.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/OpenSees.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/ParaView.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Python.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/R.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/RAxML.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Relion.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Supernova.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Synda.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/TensorFlow_on_CPUs.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/TensorFlow_on_GPUs.md (99%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/Trinity.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/VASP.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/VTune.md (97%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/VirSorter.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/WRF.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/fastStructure.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/index.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/ipyrad.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/ont-guppy-gpu.md (100%)
 rename docs/{Scientific_Computing/Supported_Applications => Software/Available_Applications}/snpEff.md (100%)
 rename docs/{Scientific_Computing/HPC_Software_Environment => Software/Containers}/NVIDIA_GPU_Containers.md (100%)
 rename docs/{Scientific_Computing/HPC_Software_Environment => Software/Containers}/Run_an_executable_under_Apptainer_in_parallel.md (100%)
 rename docs/{Scientific_Computing/HPC_Software_Environment => Software/Containers}/Run_an_executable_under_Apptainer_on_gpu.md (100%)
 rename docs/{Scientific_Computing/HPC_Software_Environment => Software}/Installing_Applications_Yourself.md (93%)
 rename docs/{Scientific_Computing => Software}/Profiling_and_Debugging/.pages.yml (100%)
 rename docs/{Scientific_Computing => Software}/Profiling_and_Debugging/Debugging.md (100%)
 rename docs/{Scientific_Computing => Software}/Profiling_and_Debugging/Profiler-ARM_MAP.md (100%)
 rename docs/{Scientific_Computing => Software}/Profiling_and_Debugging/Profiler-VTune.md (100%)
 rename docs/{Scientific_Computing => Software}/Profiling_and_Debugging/Slurm_Native_Profiling.md (100%)
 rename docs/{Scientific_Computing/HPC_Software_Environment => Software}/Software_Installation_Request.md (100%)
 rename docs/{Scientific_Computing/HPC_Software_Environment => Software}/Software_Version_Management.md (91%)

diff --git a/docs/General/FAQs/What_is_a_core_file.md b/docs/General/FAQs/What_is_a_core_file.md
index 4c53c4833..f22c6f4c9 100644
--- a/docs/General/FAQs/What_is_a_core_file.md
+++ b/docs/General/FAQs/What_is_a_core_file.md
@@ -18,7 +18,7 @@ see [Finding Job_Efficiency](../../Getting_Started/Next_Steps/Finding_Job_Effici
 
 `.core` files are a record of the working memory at time of failure,
 and can be used for
-[debugging](../../Scientific_Computing/Profiling_and_Debugging/Debugging.md).
+[debugging](../../Software/Profiling_and_Debugging/Debugging.md).
 
 MPI jobs will usually create a `.core` file for each task.
 The creation of a `.core` file is called a 'core dump'; these files are **disabled by default**,
diff --git a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
index d14f25249..c6f4be3ba 100644
--- a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
+++ b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
@@ -71,7 +71,7 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g.
 
 | | | |
 | -- | -- | -- |
 | `--qos` | `#SBATCH --qos=debug` | Adding this line gives your job a high priority. *Limited to one job at a time, max 15 minutes*. |
-| `--profile` | `#SBATCH --profile=ALL` | Allows generation of a .h5 file containing job profile information. See [Slurm Native Profiling](../../Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md) |
+| `--profile` | `#SBATCH --profile=ALL` | Allows generation of a .h5 file containing job profile information. See [Slurm Native Profiling](../../Software/Profiling_and_Debugging/Slurm_Native_Profiling.md) |
 | `--dependency` | `#SBATCH --dependency=afterok:123456789` | Will only start after the job 123456789 has completed. |
 | `--hint` | `#SBATCH --hint=nomultithread` | Disables [hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md), be aware that this will significantly change how your job is defined. |
diff --git a/docs/Scientific_Computing/Batch_Jobs/Hyperthreading.md b/docs/Scientific_Computing/Batch_Jobs/Hyperthreading.md
index d9ff5ec42..52f7a2b9d 100644
--- a/docs/Scientific_Computing/Batch_Jobs/Hyperthreading.md
+++ b/docs/Scientific_Computing/Batch_Jobs/Hyperthreading.md
@@ -34,7 +34,7 @@ once your job starts you will have twice the number of CPUs as `ntasks`.
 
 If you set `--cpus-per-task=n`, Slurm will request `n` logical CPUs
 per task, i.e., will set `n` threads for the job. Your code must be
 capable of running Hyperthreaded (for example using
-[OpenMP](../../Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md))
+[OpenMP](../HPC_Software_Environment/OpenMP_settings.md))
 if `--cpus-per-task > 1`.
 
 Setting `--hint=nomultithread` with `srun` or `sbatch` causes Slurm to
diff --git a/docs/Scientific_Computing/Batch_Jobs/Using_GPUs.md b/docs/Scientific_Computing/Batch_Jobs/Using_GPUs.md
index 878ac420b..685ba7a42 100644
--- a/docs/Scientific_Computing/Batch_Jobs/Using_GPUs.md
+++ b/docs/Scientific_Computing/Batch_Jobs/Using_GPUs.md
@@ -111,7 +111,7 @@ duration of 30 minutes.
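The `#SBATCH` directives the reference-sheet hunk above touches (`--qos`, `--profile`, `--dependency`, `--hint`) can be combined in a single batch script. A minimal sketch follows; the job name, time limit, and `my_program` are illustrative placeholders, not taken from this patch:

```shell
# Write a minimal Slurm batch script combining the directives from the
# reference sheet. Account code and dependency job ID come from the
# sheet's own examples; everything else is a placeholder.
cat > job.sl <<'EOF'
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --account=nesi99999
#SBATCH --time=00:15:00
#SBATCH --qos=debug                      # high priority; one job at a time, max 15 minutes
#SBATCH --profile=ALL                    # produce a .h5 file of job profile information
#SBATCH --dependency=afterok:123456789   # start only after job 123456789 completes successfully
#SBATCH --hint=nomultithread             # disable hyperthreading for this job
srun my_program
EOF

# Inspect the result; on a cluster it would be submitted with `sbatch job.sl`.
grep -c '^#SBATCH' job.sl
```

Running the snippet only writes and inspects the script; `sbatch` itself is not invoked.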
 ## Load CUDA and cuDNN modules
 
 To use an Nvidia GPU card with your application, you need to load the
-driver and the CUDA toolkit via the [environment modules](../Supported_Applications/index.md)
+driver and the CUDA toolkit via the [environment modules](../../Software/Available_Applications/index.md)
 mechanism:
 
 ``` sh
@@ -229,12 +229,12 @@ CUDA_VISIBLE_DEVICES=0
 
 The following pages provide additional information for supported
 applications:
 
-- [ABAQUS](../Supported_Applications/ABAQUS.md#examples)
-- [GROMACS](../Supported_Applications/GROMACS.md)
-- [Lambda Stack](../Supported_Applications/Lambda_Stack.md)
-- [Matlab](../Supported_Applications/MATLAB.md#using-gpus)
-- [TensorFlow on GPUs](../Supported_Applications/TensorFlow_on_GPUs.md)
+- [ABAQUS](../../Software/Available_Applications/ABAQUS.md#examples)
+- [GROMACS](../../Software/Available_Applications/GROMACS.md)
+- [Lambda Stack](../../Software/Available_Applications/Lambda_Stack.md)
+- [Matlab](../../Software/Available_Applications/MATLAB.md#using-gpus)
+- [TensorFlow on GPUs](../../Software/Available_Applications/TensorFlow_on_GPUs.md)
 
 And programming toolkits:
 
-- [NVIDIA GPU Containers](../HPC_Software_Environment/NVIDIA_GPU_Containers.md)
+- [NVIDIA GPU Containers](../../Software/Containers/NVIDIA_GPU_Containers.md)
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md b/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
index 248ddb112..042cba57b 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
@@ -78,7 +78,7 @@ dependencies:
 
 !!! info "See also"
     See the
-    [Miniforge3](../../Scientific_Computing/Supported_Applications/Miniforge3.md)
+    [Miniforge3](../Supported_Applications/Miniforge3.md)
     page for more information on how to create and manage Miniconda
     environments on NeSI.
@@ -97,7 +97,7 @@ then assigns different roles to the different ranks:
 
 This implies that **Dask-MPI jobs must be launched on at least 3 MPI
 ranks!** Ranks 0 and 1 often perform much less work than the other
 ranks, it can therefore be beneficial to use
-[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)
+[Hyperthreading](../Batch_Jobs/Hyperthreading.md)
 to place these two ranks onto a single physical core. Ensure that
 activating hyperthreading does not slow down the worker ranks by
 running a short test workload with and without hyperthreading.
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md b/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md
index 357b33915..a535353d1 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md
@@ -20,17 +20,17 @@ all that is necessary to get 16 OpenMP threads is:
 
 in your Slurm script - although this can sometimes be more
 complicated, e.g., with
-[TensorFlow on CPUs](../../Scientific_Computing/Supported_Applications/TensorFlow_on_CPUs.md).
+[TensorFlow on CPUs](../Supported_Applications/TensorFlow_on_CPUs.md).
 
 In order to achieve good and consistent parallel scaling, additional
 settings may be required. This is particularly true on Mahuika where
 nodes are generally shared between different Slurm jobs. Following
 are some settings that can help improve scaling and/or make your
 timings more consistent, additional information can be found in our
 article
-[Thread Placement and Thread Affinity](../../Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md).
+[Thread Placement and Thread Affinity](./Thread_Placement_and_Thread_Affinity.md).
 
 1. `--threads-per-core=2`.
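The rank layout that makes three ranks the minimum for a Dask-MPI job can be sketched as a tiny helper. The scheduler-on-rank-0 / client-on-rank-1 split reflects Dask-MPI's usual role assignment and is stated here as an assumption, not something spelled out in the hunk above:

```shell
# Map an MPI rank to the role Dask-MPI gives it: rank 0 runs the
# scheduler, rank 1 runs the client script, and every remaining rank
# is a worker - which is why at least 3 ranks are needed to have even
# one worker. (Role assignment is assumed from Dask-MPI's behaviour.)
dask_mpi_role() {
  case "$1" in
    0) echo "scheduler" ;;
    1) echo "client" ;;
    *) echo "worker" ;;
  esac
}

# A 3-rank job is the minimum useful layout: exactly one worker.
for rank in 0 1 2; do
  echo "rank $rank: $(dask_mpi_role "$rank")"
done
```

Because ranks 0 and 1 do comparatively little work, placing them on one physical core via hyperthreading (as the text suggests) leaves more cores for the worker ranks.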
    Use this option to tell srun or sbatch
-that you want to use [Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md),
+that you want to use [Hyperthreading](../Batch_Jobs/Hyperthreading.md),
 so use both of the virtual CPUs available on each physical core,
 halving the number of physical cores you occupy. If you use
 hyperthreading, you will be charged for the number of physical cores
 that
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md b/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md
index a9e908532..43c41dbf1 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md
@@ -8,7 +8,7 @@ status: deprecated
 
 Multithreading with OpenMP and other threading libraries is an
 important way to parallelise scientific software for faster execution
 (see our article on [Parallel
-Execution](../../Getting_Started/Next_Steps/Parallel_Execution.md) for
+Execution](../../Software/Getting_Started/Next_Steps/Parallel_Execution.md) for
 an introduction). Care needs to be taken when running multiple
 threads on the HPC to achieve best performance - getting it wrong can
 easily increase compute times by tens of percents, sometimes even
 more. This is
@@ -34,7 +34,7 @@ performance, as a socket connects the processor to its RAM and other
 processors. A processor in each socket consists of multiple physical
 cores, and each physical core is split into two logical cores using a
 technology called
-[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)).
+[Hyperthreading](../../Software/Scientific_Computing/Batch_Jobs/Hyperthreading.md)).
 
 A processor also includes caches - a
 [cache](https://en.wikipedia.org/wiki/CPU_cache) is very fast memory
@@ -48,7 +48,7 @@ cores (our current HPCs have 18 to 20 cores).
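The coupling between the Slurm allocation and the OpenMP thread count described in the OpenMP settings hunk above can be sketched as follows. `SLURM_CPUS_PER_TASK` is set by hand here to simulate what Slurm exports inside a job submitted with `--cpus-per-task=16`:

```shell
# Simulate the variable Slurm sets inside a job requested with
# `#SBATCH --cpus-per-task=16` (outside a job it is unset, so we
# assign it here purely for illustration).
SLURM_CPUS_PER_TASK=16

# Derive the OpenMP thread count from the allocation instead of
# hard-coding it; fall back to a single thread if the variable is unset.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

Deriving `OMP_NUM_THREADS` from `SLURM_CPUS_PER_TASK` keeps the thread count and the requested CPUs in sync if the Slurm script is later edited.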
 Each core can also be further divided into two logical cores (or
 hyperthreads, as mentioned before).
 
-![NodeSocketCore.png](../../assets/images/Thread_Placement_and_Thread_Affinity.png)
+![NodeSocketCore.png](../../Software/assets/images/Thread_Placement_and_Thread_Affinity.png)
 
 It is very important to note the following:
diff --git a/docs/Scientific_Computing/Supported_Applications/ABAQUS.md b/docs/Software/Available_Applications/ABAQUS.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/ABAQUS.md
rename to docs/Software/Available_Applications/ABAQUS.md
diff --git a/docs/Scientific_Computing/Supported_Applications/ANSYS.md b/docs/Software/Available_Applications/ANSYS.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/ANSYS.md
rename to docs/Software/Available_Applications/ANSYS.md
diff --git a/docs/Scientific_Computing/Supported_Applications/AlphaFold.md b/docs/Software/Available_Applications/AlphaFold.md
similarity index 99%
rename from docs/Scientific_Computing/Supported_Applications/AlphaFold.md
rename to docs/Software/Available_Applications/AlphaFold.md
index 20638b588..4ffbd8e86 100644
--- a/docs/Scientific_Computing/Supported_Applications/AlphaFold.md
+++ b/docs/Software/Available_Applications/AlphaFold.md
@@ -9,11 +9,11 @@ zendesk_section_id: 360000040076
 ---
 
-[//]: <> (APPS PAGE BOILERPLATE START)
+[//]:AlphaFold.md> (APPS PAGE BOILERPLATE START)
 {% set app_name = page.title | trim %}
 {% set app = applications[app_name] %}
 {% include "partials/app_header.html" %}
-[//]: <> (APPS PAGE BOILERPLATE END)
+[//]:AlphaFold.md> (APPS PAGE BOILERPLATE END)
 
 !!! prerequisite Tips
     An extended version of AlphaFold2 on NeSI Mahuika cluster which
diff --git a/docs/Scientific_Computing/Supported_Applications/Apptainer.md b/docs/Software/Available_Applications/Apptainer.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Apptainer.md
rename to docs/Software/Available_Applications/Apptainer.md
diff --git a/docs/Scientific_Computing/Supported_Applications/BLAST.md b/docs/Software/Available_Applications/BLAST.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/BLAST.md
rename to docs/Software/Available_Applications/BLAST.md
diff --git a/docs/Scientific_Computing/Supported_Applications/BRAKER.md b/docs/Software/Available_Applications/BRAKER.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/BRAKER.md
rename to docs/Software/Available_Applications/BRAKER.md
diff --git a/docs/Scientific_Computing/Supported_Applications/CESM.md b/docs/Software/Available_Applications/CESM.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/CESM.md
rename to docs/Software/Available_Applications/CESM.md
diff --git a/docs/Scientific_Computing/Supported_Applications/COMSOL.md b/docs/Software/Available_Applications/COMSOL.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/COMSOL.md
rename to docs/Software/Available_Applications/COMSOL.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Clair3.md b/docs/Software/Available_Applications/Clair3.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Clair3.md
rename to docs/Software/Available_Applications/Clair3.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Cylc.md b/docs/Software/Available_Applications/Cylc.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Cylc.md
rename to docs/Software/Available_Applications/Cylc.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Delft3D.md b/docs/Software/Available_Applications/Delft3D.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Delft3D.md
rename to docs/Software/Available_Applications/Delft3D.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Dorado.md b/docs/Software/Available_Applications/Dorado.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Dorado.md
rename to docs/Software/Available_Applications/Dorado.md
diff --git a/docs/Scientific_Computing/Supported_Applications/FDS.md b/docs/Software/Available_Applications/FDS.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/FDS.md
rename to docs/Software/Available_Applications/FDS.md
diff --git a/docs/Scientific_Computing/Supported_Applications/FlexiBLAS.md b/docs/Software/Available_Applications/FlexiBLAS.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/FlexiBLAS.md
rename to docs/Software/Available_Applications/FlexiBLAS.md
diff --git a/docs/Scientific_Computing/Supported_Applications/FreeSurfer.md b/docs/Software/Available_Applications/FreeSurfer.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/FreeSurfer.md
rename to docs/Software/Available_Applications/FreeSurfer.md
diff --git a/docs/Scientific_Computing/Supported_Applications/GATK.md b/docs/Software/Available_Applications/GATK.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/GATK.md
rename to docs/Software/Available_Applications/GATK.md
diff --git a/docs/Scientific_Computing/Supported_Applications/GROMACS.md b/docs/Software/Available_Applications/GROMACS.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/GROMACS.md
rename to docs/Software/Available_Applications/GROMACS.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Gaussian.md b/docs/Software/Available_Applications/Gaussian.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Gaussian.md
rename to docs/Software/Available_Applications/Gaussian.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Java.md b/docs/Software/Available_Applications/Java.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Java.md
rename to docs/Software/Available_Applications/Java.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Julia.md b/docs/Software/Available_Applications/Julia.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Julia.md
rename to docs/Software/Available_Applications/Julia.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Keras.md b/docs/Software/Available_Applications/Keras.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Keras.md
rename to docs/Software/Available_Applications/Keras.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Lambda_Stack.md b/docs/Software/Available_Applications/Lambda_Stack.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Lambda_Stack.md
rename to docs/Software/Available_Applications/Lambda_Stack.md
diff --git a/docs/Scientific_Computing/Supported_Applications/MAKER.md b/docs/Software/Available_Applications/MAKER.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/MAKER.md
rename to docs/Software/Available_Applications/MAKER.md
diff --git a/docs/Scientific_Computing/Supported_Applications/MATLAB.md b/docs/Software/Available_Applications/MATLAB.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/MATLAB.md
rename to docs/Software/Available_Applications/MATLAB.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Miniforge3.md b/docs/Software/Available_Applications/Miniforge3.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Miniforge3.md
rename to docs/Software/Available_Applications/Miniforge3.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Molpro.md b/docs/Software/Available_Applications/Molpro.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Molpro.md
rename to docs/Software/Available_Applications/Molpro.md
diff --git a/docs/Scientific_Computing/Supported_Applications/NWChem.md b/docs/Software/Available_Applications/NWChem.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/NWChem.md
rename to docs/Software/Available_Applications/NWChem.md
diff --git a/docs/Scientific_Computing/Supported_Applications/ORCA.md b/docs/Software/Available_Applications/ORCA.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/ORCA.md
rename to docs/Software/Available_Applications/ORCA.md
diff --git a/docs/Scientific_Computing/Supported_Applications/OpenFOAM.md b/docs/Software/Available_Applications/OpenFOAM.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/OpenFOAM.md
rename to docs/Software/Available_Applications/OpenFOAM.md
diff --git a/docs/Scientific_Computing/Supported_Applications/OpenSees.md b/docs/Software/Available_Applications/OpenSees.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/OpenSees.md
rename to docs/Software/Available_Applications/OpenSees.md
diff --git a/docs/Scientific_Computing/Supported_Applications/ParaView.md b/docs/Software/Available_Applications/ParaView.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/ParaView.md
rename to docs/Software/Available_Applications/ParaView.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Python.md b/docs/Software/Available_Applications/Python.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Python.md
rename to docs/Software/Available_Applications/Python.md
diff --git a/docs/Scientific_Computing/Supported_Applications/R.md b/docs/Software/Available_Applications/R.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/R.md
rename to docs/Software/Available_Applications/R.md
diff --git a/docs/Scientific_Computing/Supported_Applications/RAxML.md b/docs/Software/Available_Applications/RAxML.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/RAxML.md
rename to docs/Software/Available_Applications/RAxML.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Relion.md b/docs/Software/Available_Applications/Relion.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Relion.md
rename to docs/Software/Available_Applications/Relion.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Supernova.md b/docs/Software/Available_Applications/Supernova.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Supernova.md
rename to docs/Software/Available_Applications/Supernova.md
diff --git a/docs/Scientific_Computing/Supported_Applications/Synda.md b/docs/Software/Available_Applications/Synda.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Synda.md
rename to docs/Software/Available_Applications/Synda.md
diff --git a/docs/Scientific_Computing/Supported_Applications/TensorFlow_on_CPUs.md b/docs/Software/Available_Applications/TensorFlow_on_CPUs.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/TensorFlow_on_CPUs.md
rename to docs/Software/Available_Applications/TensorFlow_on_CPUs.md
diff --git a/docs/Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md b/docs/Software/Available_Applications/TensorFlow_on_GPUs.md
similarity index 99%
rename from docs/Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md
rename to docs/Software/Available_Applications/TensorFlow_on_GPUs.md
index ad6095f2a..5548e41be 100644
--- a/docs/Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md
+++ b/docs/Software/Available_Applications/TensorFlow_on_GPUs.md
@@ -183,7 +183,7 @@ For TensorFlow, we recommend using the
 [official container provided by
 NVIDIA](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow).
 More information about using Apptainer with GPU enabled containers is
 available on the [NVIDIA GPU
-Containers](../../Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md)
+Containers](../Containers/NVIDIA_GPU_Containers.md)
 support page.
 
 ## Specific versions for A100
diff --git a/docs/Scientific_Computing/Supported_Applications/Trinity.md b/docs/Software/Available_Applications/Trinity.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/Trinity.md
rename to docs/Software/Available_Applications/Trinity.md
diff --git a/docs/Scientific_Computing/Supported_Applications/VASP.md b/docs/Software/Available_Applications/VASP.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/VASP.md
rename to docs/Software/Available_Applications/VASP.md
diff --git a/docs/Scientific_Computing/Supported_Applications/VTune.md b/docs/Software/Available_Applications/VTune.md
similarity index 97%
rename from docs/Scientific_Computing/Supported_Applications/VTune.md
rename to docs/Software/Available_Applications/VTune.md
index 6c2cb17c7..47da03ddd 100644
--- a/docs/Scientific_Computing/Supported_Applications/VTune.md
+++ b/docs/Software/Available_Applications/VTune.md
@@ -23,7 +23,7 @@ good practice to profile a code before attempting to modify the code
 to improve its performance. VTune collects key profiling data and
 presents them in an intuitive way.  Another tool that provides
 similar information is [ARM
-MAP](../../Scientific_Computing/Profiling_and_Debugging/Profiler-ARM_MAP.md).
+MAP](../Profiling_and_Debugging/Profiler-ARM_MAP.md).
 
 ## How to use VTune
diff --git a/docs/Scientific_Computing/Supported_Applications/VirSorter.md b/docs/Software/Available_Applications/VirSorter.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/VirSorter.md
rename to docs/Software/Available_Applications/VirSorter.md
diff --git a/docs/Scientific_Computing/Supported_Applications/WRF.md b/docs/Software/Available_Applications/WRF.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/WRF.md
rename to docs/Software/Available_Applications/WRF.md
diff --git a/docs/Scientific_Computing/Supported_Applications/fastStructure.md b/docs/Software/Available_Applications/fastStructure.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/fastStructure.md
rename to docs/Software/Available_Applications/fastStructure.md
diff --git a/docs/Scientific_Computing/Supported_Applications/index.md b/docs/Software/Available_Applications/index.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/index.md
rename to docs/Software/Available_Applications/index.md
diff --git a/docs/Scientific_Computing/Supported_Applications/ipyrad.md b/docs/Software/Available_Applications/ipyrad.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/ipyrad.md
rename to docs/Software/Available_Applications/ipyrad.md
diff --git a/docs/Scientific_Computing/Supported_Applications/ont-guppy-gpu.md b/docs/Software/Available_Applications/ont-guppy-gpu.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/ont-guppy-gpu.md
rename to docs/Software/Available_Applications/ont-guppy-gpu.md
diff --git a/docs/Scientific_Computing/Supported_Applications/snpEff.md b/docs/Software/Available_Applications/snpEff.md
similarity index 100%
rename from docs/Scientific_Computing/Supported_Applications/snpEff.md
rename to docs/Software/Available_Applications/snpEff.md
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md b/docs/Software/Containers/NVIDIA_GPU_Containers.md
similarity index 100%
rename from docs/Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md
rename to docs/Software/Containers/NVIDIA_GPU_Containers.md
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Run_an_executable_under_Apptainer_in_parallel.md b/docs/Software/Containers/Run_an_executable_under_Apptainer_in_parallel.md
similarity index 100%
rename from docs/Scientific_Computing/HPC_Software_Environment/Run_an_executable_under_Apptainer_in_parallel.md
rename to docs/Software/Containers/Run_an_executable_under_Apptainer_in_parallel.md
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Run_an_executable_under_Apptainer_on_gpu.md b/docs/Software/Containers/Run_an_executable_under_Apptainer_on_gpu.md
similarity index 100%
rename from docs/Scientific_Computing/HPC_Software_Environment/Run_an_executable_under_Apptainer_on_gpu.md
rename to docs/Software/Containers/Run_an_executable_under_Apptainer_on_gpu.md
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Installing_Applications_Yourself.md b/docs/Software/Installing_Applications_Yourself.md
similarity index 93%
rename from docs/Scientific_Computing/HPC_Software_Environment/Installing_Applications_Yourself.md
rename to docs/Software/Installing_Applications_Yourself.md
index 303cb43c3..c4060ee18 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Installing_Applications_Yourself.md
+++ b/docs/Software/Installing_Applications_Yourself.md
@@ -8,7 +8,7 @@ tags:
 Before installing your own applications, first check;
 
 - The software you want is not already installed. `module spider ` can be used to search software,
-or see [Supported Applications](../Supported_Applications/index.md).
+or see [Supported Applications](../Scientific_Computing/Supported_Applications/index.md).
 - If you are looking for a new version of existing software, {% include "partials/support_request.html" %} and we will install the new version.
 - If you would like us to install something for you or help you install something yourself {% include "partials/support_request.html" %}.
 
 If the software is popular, We may decide to install it centrally, in which case there will be no additional steps for you. Otherwise the software will be installed in your project directory, in which case it is your responsibility to maintain.
@@ -22,13 +22,13 @@ See [Software Installation Request](Software_Installation_Request.md) for guidel
 
 How to add package to an existing module will vary based on the language in question.
 
-- [Python](../Supported_Applications/Python.md#python-packages)
-- [R](../Supported_Applications/R.md#dealing-with-packages)
-- [Julia](../Supported_Applications/Julia.md#installing-julia-packages)
-- [MATLAB](../Supported_Applications/MATLAB.md#adding-support-packages)
+- [Python](../Scientific_Computing/Supported_Applications/Python.md#python-packages)
+- [R](../Scientific_Computing/Supported_Applications/R.md#dealing-with-packages)
+- [Julia](../Scientific_Computing/Supported_Applications/Julia.md#installing-julia-packages)
+- [MATLAB](../Scientific_Computing/Supported_Applications/MATLAB.md#adding-support-packages)
 
 For other languages check if we have additional documentation for it
-in our [application documentation](../Supported_Applications/index.md).
+in our [application documentation](../Scientific_Computing/Supported_Applications/index.md).
 
 ## Other Applications
diff --git a/docs/Scientific_Computing/Profiling_and_Debugging/.pages.yml b/docs/Software/Profiling_and_Debugging/.pages.yml
similarity index 100%
rename from docs/Scientific_Computing/Profiling_and_Debugging/.pages.yml
rename to docs/Software/Profiling_and_Debugging/.pages.yml
diff --git a/docs/Scientific_Computing/Profiling_and_Debugging/Debugging.md b/docs/Software/Profiling_and_Debugging/Debugging.md
similarity index 100%
rename from docs/Scientific_Computing/Profiling_and_Debugging/Debugging.md
rename to docs/Software/Profiling_and_Debugging/Debugging.md
diff --git a/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-ARM_MAP.md b/docs/Software/Profiling_and_Debugging/Profiler-ARM_MAP.md
similarity index 100%
rename from docs/Scientific_Computing/Profiling_and_Debugging/Profiler-ARM_MAP.md
rename to docs/Software/Profiling_and_Debugging/Profiler-ARM_MAP.md
diff --git a/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-VTune.md b/docs/Software/Profiling_and_Debugging/Profiler-VTune.md
similarity index 100%
rename from docs/Scientific_Computing/Profiling_and_Debugging/Profiler-VTune.md
rename to docs/Software/Profiling_and_Debugging/Profiler-VTune.md
diff --git a/docs/Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md b/docs/Software/Profiling_and_Debugging/Slurm_Native_Profiling.md
similarity index 100%
rename from docs/Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md
rename to docs/Software/Profiling_and_Debugging/Slurm_Native_Profiling.md
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Software_Installation_Request.md b/docs/Software/Software_Installation_Request.md
similarity index 100%
rename from docs/Scientific_Computing/HPC_Software_Environment/Software_Installation_Request.md
rename to docs/Software/Software_Installation_Request.md
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Software_Version_Management.md b/docs/Software/Software_Version_Management.md
similarity index 91%
rename from docs/Scientific_Computing/HPC_Software_Environment/Software_Version_Management.md
rename to docs/Software/Software_Version_Management.md
index 8e59049d3..978a5289f 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Software_Version_Management.md
+++ b/docs/Software/Software_Version_Management.md
@@ -12,7 +12,7 @@ zendesk_section_id: 360000040056
 
 Much of the software installed on the NeSI cluster have multiple
 versions available as shown on the
-[supported applications page](../Supported_Applications/index.md)
+[supported applications page](../Scientific_Computing/Supported_Applications/index.md)
 or by using the `module avail` or `module spider` commands.
 
 If only the application name is given a default version will be chosen,
diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml
index 4a182bf99..f84389eda 100644
--- a/docs/redirect_map.yml
+++ b/docs/redirect_map.yml
@@ -18,3 +18,62 @@ Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md: Sci
 Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md: Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
 Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md: Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md
 Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md : Scientific_Computing/HPC_Software_Environment/Installing_Applications_Yourself.md
+Scientific_Computing/Supported_Applications/ABAQUS.md : Software/Available_Applications/ABAQUS.md
+Scientific_Computing/Supported_Applications/AlphaFold.md : Software/Available_Applications/AlphaFold.md
+Scientific_Computing/Supported_Applications/ANSYS.md : Software/Available_Applications/ANSYS.md
+Scientific_Computing/Supported_Applications/Apptainer.md : Software/Available_Applications/Apptainer.md
+Scientific_Computing/Supported_Applications/BLAST.md : Software/Available_Applications/BLAST.md
+Scientific_Computing/Supported_Applications/BRAKER.md : Software/Available_Applications/BRAKER.md
+Scientific_Computing/Supported_Applications/CESM.md : Software/Available_Applications/CESM.md
+Scientific_Computing/Supported_Applications/Clair3.md : Software/Available_Applications/Clair3.md
+Scientific_Computing/Supported_Applications/COMSOL.md : Software/Available_Applications/COMSOL.md
+Scientific_Computing/Supported_Applications/Cylc.md : Software/Available_Applications/Cylc.md
+Scientific_Computing/Supported_Applications/Delft3D.md : Software/Available_Applications/Delft3D.md
+Scientific_Computing/Supported_Applications/Dorado.md : Software/Available_Applications/Dorado.md
+Scientific_Computing/Supported_Applications/fastStructure.md : Software/Available_Applications/fastStructure.md
+Scientific_Computing/Supported_Applications/FDS.md : Software/Available_Applications/FDS.md
+Scientific_Computing/Supported_Applications/FlexiBLAS.md : Software/Available_Applications/FlexiBLAS.md
+Scientific_Computing/Supported_Applications/FreeSurfer.md : Software/Available_Applications/FreeSurfer.md
+Scientific_Computing/Supported_Applications/GATK.md : Software/Available_Applications/GATK.md
+Scientific_Computing/Supported_Applications/Gaussian.md : Software/Available_Applications/Gaussian.md
+Scientific_Computing/Supported_Applications/GROMACS.md : Software/Available_Applications/GROMACS.md
+Scientific_Computing/Supported_Applications/index.md : Software/Available_Applications/index.md
+Scientific_Computing/Supported_Applications/ipyrad.md : Software/Available_Applications/ipyrad.md
+Scientific_Computing/Supported_Applications/Java.md : Software/Available_Applications/Java.md
+Scientific_Computing/Supported_Applications/Julia.md : Software/Available_Applications/Julia.md
+Scientific_Computing/Supported_Applications/Keras.md : Software/Available_Applications/Keras.md
+Scientific_Computing/Supported_Applications/Lambda_Stack.md : Software/Available_Applications/Lambda_Stack.md
+Scientific_Computing/Supported_Applications/MAKER.md : Software/Available_Applications/MAKER.md
+Scientific_Computing/Supported_Applications/MATLAB.md : Software/Available_Applications/MATLAB.md
+Scientific_Computing/Supported_Applications/Miniforge3.md : Software/Available_Applications/Miniforge3.md
+Scientific_Computing/Supported_Applications/Molpro.md : Software/Available_Applications/Molpro.md
+Scientific_Computing/Supported_Applications/NWChem.md : Software/Available_Applications/NWChem.md
+Scientific_Computing/Supported_Applications/ont-guppy-gpu.md : Software/Available_Applications/ont-guppy-gpu.md
+Scientific_Computing/Supported_Applications/OpenFOAM.md : Software/Available_Applications/OpenFOAM.md
+Scientific_Computing/Supported_Applications/OpenSees.md : Software/Available_Applications/OpenSees.md
+Scientific_Computing/Supported_Applications/ORCA.md : Software/Available_Applications/ORCA.md
+Scientific_Computing/Supported_Applications/ParaView.md : Software/Available_Applications/ParaView.md
+Scientific_Computing/Supported_Applications/Python.md : Software/Available_Applications/Python.md
+Scientific_Computing/Supported_Applications/R.md : Software/Available_Applications/R.md
+Scientific_Computing/Supported_Applications/RAxML.md : Software/Available_Applications/RAxML.md
+Scientific_Computing/Supported_Applications/Relion.md : Software/Available_Applications/Relion.md
+Scientific_Computing/Supported_Applications/snpEff.md : Software/Available_Applications/snpEff.md
+Scientific_Computing/Supported_Applications/Supernova.md : Software/Available_Applications/Supernova.md
+Scientific_Computing/Supported_Applications/Synda.md : Software/Available_Applications/Synda.md
+Scientific_Computing/Supported_Applications/TensorFlow_on_CPUs.md : Software/Available_Applications/TensorFlow_on_CPUs.md
+Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md : Software/Available_Applications/TensorFlow_on_GPUs.md
+Scientific_Computing/Supported_Applications/Trinity.md : Software/Available_Applications/Trinity.md
+Scientific_Computing/Supported_Applications/VASP.md : Software/Available_Applications/VASP.md
+Scientific_Computing/Supported_Applications/VirSorter.md : Software/Available_Applications/VirSorter.md
+Scientific_Computing/Supported_Applications/VTune.md : Software/Available_Applications/VTune.md
+Scientific_Computing/Supported_Applications/WRF.md : Software/Available_Applications/WRF.md
+Scientific_Computing/Profiling_and_Debugging/Debugging.md : Software/Profiling_and_Debugging/Debugging.md
+Scientific_Computing/Profiling_and_Debugging/Profiler-ARM_MAP.md : Software/Profiling_and_Debugging/Profiler-ARM_MAP.md
+Scientific_Computing/Profiling_and_Debugging/Profiler-VTune.md : Software/Profiling_and_Debugging/Profiler-VTune.md
+Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md : Software/Profiling_and_Debugging/Slurm_Native_Profiling.md
+Scientific_Computing/HPC_Software_Environment/Run_an_executable_under_Apptainer_on_gpu.md : Software/Containers/Run_an_executable_under_Apptainer_on_gpu.md
+Scientific_Computing/HPC_Software_Environment/Run_an_executable_under_Apptainer_in_parallel.md : Software/Containers/Run_an_executable_under_Apptainer_in_parallel.md
+Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md : Software/Containers/NVIDIA_GPU_Containers.md
+Scientific_Computing/HPC_Software_Environment/Software_Version_Management.md : Software/Software_Version_Management.md
+Scientific_Computing/HPC_Software_Environment/Software_Installation_Request.md : Software/Software_Installation_Request.md
+Scientific_Computing/HPC_Software_Environment/Installing_Applications_Yourself.md : Software/Installing_Applications_Yourself.md

From edf45f798313dd1acdcdb1dfb75171b00c6bb178 Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Mon, 1 Dec 2025 12:29:42 +1300
Subject: [PATCH 02/25] fix old redirects

---
 docs/redirect_map.yml | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml
index f84389eda..f2e734ad8 100644
--- a/docs/redirect_map.yml
+++ b/docs/redirect_map.yml
@@ -4,8 +4,6 @@ General/FAQs/How_to_replace_my_2FA_token.md: General/FAQs/How_do_I_replace_my_Ad
 General/FAQs/How_to_replace_my_2FA.md: General/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md
 Scientific_Computing/Terminal_Setup/Ubuntu_LTS_terminal_Windows.md: Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md
 General/FAQs/How_can_I_see_how_busy_the_cluster_is.md: General/FAQs/How_busy_is_the_cluster.md
-Scientific_Computing/Supported_Applications/Miniconda3.md: Scientific_Computing/Supported_Applications/Miniforge3.md
-Scientific_Computing/HPC_Software_Environment/Finding_Software.md : Scientific_Computing/Supported_Applications/index.md
 hc.md: index.md
 hc/en-gb.md: index.md
 Storage/Freezer_long_term_storage.md : Storage/Long_Term_Storage/Freezer_long_term_storage.md
@@ -17,7 +15,7 @@ Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md: Scie
 Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md: Scientific_Computing/Batch_Jobs/Job_prioritisation.md
 Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md: Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
 Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md: Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md
-Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md : Scientific_Computing/HPC_Software_Environment/Installing_Applications_Yourself.md
+Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md : Software/Installing_Applications_Yourself.md
 Scientific_Computing/Supported_Applications/ABAQUS.md : Software/Available_Applications/ABAQUS.md
 Scientific_Computing/Supported_Applications/AlphaFold.md : Software/Available_Applications/AlphaFold.md
 Scientific_Computing/Supported_Applications/ANSYS.md : Software/Available_Applications/ANSYS.md

From 69be2b04c986a427af181f2af38742d2a76693cd Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Mon, 1 Dec 2025 12:31:15 +1300
Subject: [PATCH 03/25] Update pages.yml

---
 docs/.pages.yml | 2 +-
 docs/Software/Available_Applications/.pages.yml | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)
 create mode 100644 docs/Software/Available_Applications/.pages.yml

diff --git a/docs/.pages.yml b/docs/.pages.yml
index fafded3d7..dadf935a9 100644
--- a/docs/.pages.yml
+++ b/docs/.pages.yml
@@ -1,6 +1,6 @@
 nav:
   - Getting_Started
   - General
-  - Scientific_Computing
+  - Software
   - Storage
   - Service_Subscriptions
diff --git a/docs/Software/Available_Applications/.pages.yml b/docs/Software/Available_Applications/.pages.yml
new file mode 100644
index 000000000..4e35bfd6f
--- /dev/null
+++ b/docs/Software/Available_Applications/.pages.yml
@@ -0,0 +1,3 @@
+---
+nav:
+  - "*"

From a265943bedf9a16a3c2d104dd4dc87bab8762b4d Mon Sep 17 00:00:00 2001
From: Jen Reeve
Date: Mon, 1 Dec 2025 12:59:46 +1300
Subject: [PATCH 04/25] creating interactive main folder

---
 docs/.pages.yml | 1 +
 docs/Interactive_Computing/.pages.yml | 4 ++++
 .../OnDemand}/.pages.yml | 0
 .../OnDemand}/Apps/.pages.yml | 2 +-
 .../OnDemand}/Apps/JupyterLab/.pages.yml | 0
 .../Jupyter_kernels_Manual_management.md | 0
 .../Jupyter_kernels_Tool_assisted_management.md | 0
 .../OnDemand}/Apps/JupyterLab/index.md | 0
 .../OnDemand}/Apps/MATLAB.md | 0
 .../OnDemand}/Apps/RStudio.md | 0
 .../OnDemand}/Apps/VSCode.md | 0
 .../OnDemand}/Apps/virtual_desktop.md | 0
 .../OnDemand}/Release_Notes/index.md | 0
 .../OnDemand}/how_to_guide.md | 0
 .../OnDemand}/index.md | 0
 .../OnDemand}/ood_troubleshooting.md | 0
 .../Slurm_Interactive_Sessions.md | 0
 docs/Scientific_Computing/Batch_Jobs/.pages.yml | 1 -
 docs/redirect_map.yml | 14 +++++++++++++-
 19 files changed, 19 insertions(+), 3 deletions(-)
 create mode 100644 docs/Interactive_Computing/.pages.yml
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/.pages.yml (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Apps/.pages.yml (80%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Apps/JupyterLab/.pages.yml (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Apps/JupyterLab/Jupyter_kernels_Manual_management.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Apps/JupyterLab/index.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Apps/MATLAB.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Apps/RStudio.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Apps/VSCode.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Apps/virtual_desktop.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/Release_Notes/index.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/how_to_guide.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/index.md (100%)
 rename docs/{Scientific_Computing/Interactive_computing_with_OnDemand => Interactive_Computing/OnDemand}/ood_troubleshooting.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Interactive_Computing}/Slurm_Interactive_Sessions.md (100%)

diff --git a/docs/.pages.yml b/docs/.pages.yml
index fafded3d7..a181b591d 100644
--- a/docs/.pages.yml
+++ b/docs/.pages.yml
@@ -2,5 +2,6 @@ nav:
   - Getting_Started
   - General
   - Scientific_Computing
+  - Interactive_Computing
   - Storage
   - Service_Subscriptions
diff --git a/docs/Interactive_Computing/.pages.yml b/docs/Interactive_Computing/.pages.yml
new file mode 100644
index 000000000..6e9419d27
--- /dev/null
+++ b/docs/Interactive_Computing/.pages.yml
@@ -0,0 +1,4 @@
+---
+nav:
+  - Slurm interactive sessions: Slurm_Interactive_Sessions.md
+  - Interactive computing with OnDemand: OnDemand
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/.pages.yml b/docs/Interactive_Computing/OnDemand/.pages.yml
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/.pages.yml
rename to docs/Interactive_Computing/OnDemand/.pages.yml
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/.pages.yml b/docs/Interactive_Computing/OnDemand/Apps/.pages.yml
similarity index 80%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/.pages.yml
rename to docs/Interactive_Computing/OnDemand/Apps/.pages.yml
index f8d7ad5af..18a3265ac 100644
--- a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/.pages.yml
+++ b/docs/Interactive_Computing/OnDemand/Apps/.pages.yml
@@ -3,6 +3,6 @@ nav:
   - JupyterLab: JupyterLab
   - RStudio: RStudio.md
   - MATLAB: MATLAB.md
-  - Code server: code_server.md
+  - VS Code: VSCode.md
   - Virtual desktop: virtual_desktop.md
   - "*"
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/.pages.yml b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/.pages.yml
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/.pages.yml
rename to docs/Interactive_Computing/OnDemand/Apps/JupyterLab/.pages.yml
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md
rename to docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md
rename to docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/index.md b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/index.md
rename to docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/MATLAB.md b/docs/Interactive_Computing/OnDemand/Apps/MATLAB.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/MATLAB.md
rename to docs/Interactive_Computing/OnDemand/Apps/MATLAB.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/RStudio.md b/docs/Interactive_Computing/OnDemand/Apps/RStudio.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/RStudio.md
rename to docs/Interactive_Computing/OnDemand/Apps/RStudio.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/VSCode.md b/docs/Interactive_Computing/OnDemand/Apps/VSCode.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/VSCode.md
rename to docs/Interactive_Computing/OnDemand/Apps/VSCode.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/virtual_desktop.md b/docs/Interactive_Computing/OnDemand/Apps/virtual_desktop.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Apps/virtual_desktop.md
rename to docs/Interactive_Computing/OnDemand/Apps/virtual_desktop.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/Release_Notes/index.md b/docs/Interactive_Computing/OnDemand/Release_Notes/index.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/Release_Notes/index.md
rename to docs/Interactive_Computing/OnDemand/Release_Notes/index.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/how_to_guide.md b/docs/Interactive_Computing/OnDemand/how_to_guide.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/how_to_guide.md
rename to docs/Interactive_Computing/OnDemand/how_to_guide.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/index.md b/docs/Interactive_Computing/OnDemand/index.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/index.md
rename to docs/Interactive_Computing/OnDemand/index.md
diff --git a/docs/Scientific_Computing/Interactive_computing_with_OnDemand/ood_troubleshooting.md b/docs/Interactive_Computing/OnDemand/ood_troubleshooting.md
similarity index 100%
rename from docs/Scientific_Computing/Interactive_computing_with_OnDemand/ood_troubleshooting.md
rename to docs/Interactive_Computing/OnDemand/ood_troubleshooting.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md b/docs/Interactive_Computing/Slurm_Interactive_Sessions.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md
rename to docs/Interactive_Computing/Slurm_Interactive_Sessions.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/.pages.yml b/docs/Scientific_Computing/Batch_Jobs/.pages.yml
index d3a7c0800..6056cc39d 100644
--- a/docs/Scientific_Computing/Batch_Jobs/.pages.yml
+++ b/docs/Scientific_Computing/Batch_Jobs/.pages.yml
@@ -4,7 +4,6 @@ nav:
   - SLURM-Best_Practice.md
   - Using_GPUs.md
   - Job_Checkpointing.md
-  - Slurm_Interactive_Sessions.md
   - Fair_Share.md
   - Checksums.md
   - "*"
diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml
index 4a182bf99..9b4311b4a 100644
--- a/docs/redirect_map.yml
+++ b/docs/redirect_map.yml
@@ -16,5 +16,17 @@ Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md: Scienti
 Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md: Scientific_Computing/Batch_Jobs/Job_Checkpointing.md
 Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md: Scientific_Computing/Batch_Jobs/Job_prioritisation.md
 Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md: Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
-Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md: Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md
+Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md: Interactive_Computing/Slurm_Interactive_Sessions.md
 Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md : Scientific_Computing/HPC_Software_Environment/Installing_Applications_Yourself.md
+Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md : Interactive_Computing/Slurm_Interactive_Sessions.md
+Scientific_Computing/Interactive_computing_with_OnDemand/how_to_guide.md : Interactive_Computing/OnDemand/how_to_guide.md
+Scientific_Computing/Interactive_computing_with_OnDemand/index.md : Interactive_Computing/OnDemand/index.md
+Scientific_Computing/Interactive_computing_with_OnDemand/ood_troubleshooting.md : Interactive_Computing/OnDemand/ood_troubleshooting.md
+Scientific_Computing/Interactive_computing_with_OnDemand/Apps/MATLAB.md : Interactive_Computing/OnDemand/Apps/MATLAB.md
+Scientific_Computing/Interactive_computing_with_OnDemand/Apps/RStudio.md : Interactive_Computing/OnDemand/Apps/RStudio.md
+Scientific_Computing/Interactive_computing_with_OnDemand/Apps/virtual_desktop.md : Interactive_Computing/OnDemand/Apps/virtual_desktop.md
+Scientific_Computing/Interactive_computing_with_OnDemand/Apps/VSCode.md : Interactive_Computing/OnDemand/Apps/VSCode.md
+Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/index.md : Interactive_Computing/OnDemand/Apps/JupyterLab/index.md
+Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md : Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md
+Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md : Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md
+Scientific_Computing/Interactive_computing_with_OnDemand/Release_Notes/index.md : Interactive_Computing/OnDemand/Release_Notes/index.md

From 19bf327828e657a87fea28d3321641f91e05a956 Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Mon, 1 Dec 2025 13:18:16 +1300
Subject: [PATCH 05/25] Move initial to batch_jobs

---
 .../Batch_Jobs => Batch_Computing}/.pages.yml | 0
 .../Checking_resource_usage.md | 0
 .../Batch_Jobs => Batch_Computing}/Checksums.md | 0
 .../Batch_Jobs => Batch_Computing}/Fair_Share.md | 0
 .../Batch_Jobs => Batch_Computing}/Hardware.md | 0
 .../Hyperthreading.md | 0
 .../Job_Checkpointing.md | 0
 .../Batch_Jobs => Batch_Computing}/Job_Limits.md | 0
 .../Job_prioritisation.md | 0
 .../SLURM-Best_Practice.md | 0
 .../Slurm_Interactive_Sessions.md | 0
 .../Temporary_directories.md | 0
 .../Batch_Jobs => Batch_Computing}/Using_GPUs.md | 0
 .../HPC_Software_Environment/.pages.yml | 2 --
 .../Available_Applications/TensorFlow_on_CPUs.md | 2 +-
 .../Configuring_Dask_MPI_jobs.md | 8 ++++----
 .../OpenMP_settings.md | 4 ++--
 .../Thread_Placement_and_Thread_Affinity.md | 6 +++---
 docs/redirect_map.yml | 16 ++++++++++++++++
 19 files changed, 26 insertions(+), 12 deletions(-)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/.pages.yml (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Checking_resource_usage.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Checksums.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Fair_Share.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Hardware.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Hyperthreading.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Job_Checkpointing.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Job_Limits.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Job_prioritisation.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/SLURM-Best_Practice.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Slurm_Interactive_Sessions.md (100%)
 rename docs/{Scientific_Computing/HPC_Software_Environment => Batch_Computing}/Temporary_directories.md (100%)
 rename docs/{Scientific_Computing/Batch_Jobs => Batch_Computing}/Using_GPUs.md (100%)
 delete mode 100644 docs/Scientific_Computing/HPC_Software_Environment/.pages.yml
 rename docs/{Scientific_Computing/HPC_Software_Environment => Software}/Configuring_Dask_MPI_jobs.md (96%)
 rename docs/{Scientific_Computing/HPC_Software_Environment => Software}/OpenMP_settings.md (94%)
 rename docs/{Scientific_Computing/HPC_Software_Environment => Software}/Thread_Placement_and_Thread_Affinity.md (98%)

diff --git a/docs/Scientific_Computing/Batch_Jobs/.pages.yml b/docs/Batch_Computing/.pages.yml
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/.pages.yml
rename to docs/Batch_Computing/.pages.yml
diff --git a/docs/Scientific_Computing/Batch_Jobs/Checking_resource_usage.md b/docs/Batch_Computing/Checking_resource_usage.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Checking_resource_usage.md
rename to docs/Batch_Computing/Checking_resource_usage.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Checksums.md b/docs/Batch_Computing/Checksums.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Checksums.md
rename to docs/Batch_Computing/Checksums.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Fair_Share.md b/docs/Batch_Computing/Fair_Share.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Fair_Share.md
rename to docs/Batch_Computing/Fair_Share.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Hardware.md b/docs/Batch_Computing/Hardware.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Hardware.md
rename to docs/Batch_Computing/Hardware.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Hyperthreading.md b/docs/Batch_Computing/Hyperthreading.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Hyperthreading.md
rename to docs/Batch_Computing/Hyperthreading.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Job_Checkpointing.md b/docs/Batch_Computing/Job_Checkpointing.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Job_Checkpointing.md
rename to docs/Batch_Computing/Job_Checkpointing.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Job_Limits.md b/docs/Batch_Computing/Job_Limits.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Job_Limits.md
rename to docs/Batch_Computing/Job_Limits.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Job_prioritisation.md b/docs/Batch_Computing/Job_prioritisation.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Job_prioritisation.md
rename to docs/Batch_Computing/Job_prioritisation.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md b/docs/Batch_Computing/SLURM-Best_Practice.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md
rename to docs/Batch_Computing/SLURM-Best_Practice.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md b/docs/Batch_Computing/Slurm_Interactive_Sessions.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md
rename to docs/Batch_Computing/Slurm_Interactive_Sessions.md
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Temporary_directories.md b/docs/Batch_Computing/Temporary_directories.md
similarity index 100%
rename from docs/Scientific_Computing/HPC_Software_Environment/Temporary_directories.md
rename to docs/Batch_Computing/Temporary_directories.md
diff --git a/docs/Scientific_Computing/Batch_Jobs/Using_GPUs.md b/docs/Batch_Computing/Using_GPUs.md
similarity index 100%
rename from docs/Scientific_Computing/Batch_Jobs/Using_GPUs.md
rename to docs/Batch_Computing/Using_GPUs.md
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/.pages.yml b/docs/Scientific_Computing/HPC_Software_Environment/.pages.yml
deleted file mode 100644
index 61d9c36ba..000000000
--- a/docs/Scientific_Computing/HPC_Software_Environment/.pages.yml
+++ /dev/null
@@ -1,2 +0,0 @@
-nav:
-  - "*"
diff --git a/docs/Software/Available_Applications/TensorFlow_on_CPUs.md
b/docs/Software/Available_Applications/TensorFlow_on_CPUs.md index a6915d587..05f60f2c1 100644 --- a/docs/Software/Available_Applications/TensorFlow_on_CPUs.md +++ b/docs/Software/Available_Applications/TensorFlow_on_CPUs.md @@ -109,7 +109,7 @@ threading behaviour of the Intel oneDNN library. While these settings should work well for a lot of applications, it is worth trying out different setups (e.g., longer blocktimes) and compare runtimes. Please see our article on [Thread Placement and Thread -Affinity](../../Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md) +Affinity](../Thread_Placement_and_Thread_Affinity.md) as well as this [Intel article](https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture) for further information and tips for improving peformance on CPUs. diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md b/docs/Software/Configuring_Dask_MPI_jobs.md similarity index 96% rename from docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md rename to docs/Software/Configuring_Dask_MPI_jobs.md index 042cba57b..5e6da78fe 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md +++ b/docs/Software/Configuring_Dask_MPI_jobs.md @@ -78,14 +78,14 @@ dependencies: !!! info "See also" See the - [Miniforge3](../Supported_Applications/Miniforge3.md) + [Miniforge3](../Scientific_Computing/Supported_Applications/Miniforge3.md) page for more information on how to create and manage Miniconda environments on NeSI. ## Configuring Slurm At runtime, Slurm will launch a number of Python processes as requested -in the [Slurm configuration script](../../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md). +in the [Slurm configuration script](../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md). Each process is given an ID (or "rank") starting at rank 0. 
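[Reviewer note, not part of the patch: the Configuring_Dask_MPI_jobs.md hunks above only re-point links, but the rank-to-role convention the page describes is worth keeping in mind while reviewing. A minimal sketch of that convention — rank 0 hosting the scheduler and rank 1 running the client script is the usual Dask-MPI behaviour, stated here as an assumption rather than something this patch verifies:]

```python
# Illustrative only: the role Dask-MPI conventionally assigns to each MPI
# rank, which is why the page requires at least 3 ranks (ranks 2+ do the work).
def dask_mpi_role(rank: int) -> str:
    """Map an MPI rank to its assumed Dask-MPI role."""
    if rank == 0:
        return "scheduler"  # runs the Dask scheduler
    if rank == 1:
        return "client"     # runs the user's Python script
    return "worker"         # ranks 2 and above execute tasks

# With fewer than 3 ranks there would be no workers at all:
roles = [dask_mpi_role(r) for r in range(4)]
assert roles == ["scheduler", "client", "worker", "worker"]
```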
Dask-MPI then assigns different roles to the different ranks: @@ -97,7 +97,7 @@ then assigns different roles to the different ranks: This implies that **Dask-MPI jobs must be launched on at least 3 MPI ranks!** Ranks 0 and 1 often perform much less work than the other ranks, it can therefore be beneficial to use -[Hyperthreading](../Batch_Jobs/Hyperthreading.md) +[Hyperthreading](../Scientific_Computing/Batch_Jobs/Hyperthreading.md) to place these two ranks onto a single physical core. Ensure that activating hyperthreading does not slow down the worker ranks by running a short test workload with and without hyperthreading. @@ -261,7 +261,7 @@ where the `%runscript` section ensures that the Python script passed to Conda environment inside the container. !!! note Tips - You can build this container on NeSI,following the instructions from the [dedicated supportpage](../Supported_Applications/Apptainer.md) + You can build this container on NeSI,following the instructions from the [dedicated supportpage](../Scientific_Computing/Supported_Applications/Apptainer.md) ### Slurm configuration diff --git a/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md b/docs/Software/OpenMP_settings.md similarity index 94% rename from docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md rename to docs/Software/OpenMP_settings.md index a535353d1..f4f939244 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md +++ b/docs/Software/OpenMP_settings.md @@ -20,7 +20,7 @@ all that is necessary to get 16 OpenMP threads is: in your Slurm script - although this can sometimes be more complicated, e.g., with -[TensorFlow on CPUs](../Supported_Applications/TensorFlow_on_CPUs.md). +[TensorFlow on CPUs](../Scientific_Computing/Supported_Applications/TensorFlow_on_CPUs.md). In order to achieve good and consistent parallel scaling, additional settings may be required. 
This is particularly true on Mahuika where @@ -30,7 +30,7 @@ consistent, additional information can be found in our article [Thread Placement and Thread Affinity](./Thread_Placement_and_Thread_Affinity.md). 1. `--threads-per-core=2`. Use this option to tell srun or sbatch to -that you want to use [Hyperthreading](../Batch_Jobs/Hyperthreading.md), +that you want to use [Hyperthreading](../Scientific_Computing/Batch_Jobs/Hyperthreading.md), so use both of the virual CPUs available on each physical core, halving the number of physical cores you occupy. If you use hyperthreading, you will be charged for the number of physical cores that diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md b/docs/Software/Thread_Placement_and_Thread_Affinity.md similarity index 98% rename from docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md rename to docs/Software/Thread_Placement_and_Thread_Affinity.md index 43c41dbf1..3146c0343 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md +++ b/docs/Software/Thread_Placement_and_Thread_Affinity.md @@ -8,7 +8,7 @@ status: deprecated Multithreading with OpenMP and other threading libraries is an important way to parallelise scientific software for faster execution (see our article on [Parallel -Execution](../../Software/Getting_Started/Next_Steps/Parallel_Execution.md) for +Execution](./Getting_Started/Next_Steps/Parallel_Execution.md) for an introduction). Care needs to be taken when running multiple threads on the HPC to achieve best performance - getting it wrong can easily increase compute times by tens of percents, sometimes even more. This is @@ -34,7 +34,7 @@ performance, as a socket connects the processor to its RAM and other processors. 
A processor in each socket consists of multiple physical cores, and each physical core is split into two logical cores using a technology called -[Hyperthreading](../../Software/Scientific_Computing/Batch_Jobs/Hyperthreading.md)). +[Hyperthreading](./Scientific_Computing/Batch_Jobs/Hyperthreading.md)). A processor also includes caches - a [cache](https://en.wikipedia.org/wiki/CPU_cache) is very fast memory @@ -48,7 +48,7 @@ cores (our current HPCs have 18 to 20 cores). Each core can also be further divided into two logical cores (or hyperthreads, as mentioned before). -![NodeSocketCore.png](../../Software/assets/images/Thread_Placement_and_Thread_Affinity.png) +![NodeSocketCore.png](./assets/images/Thread_Placement_and_Thread_Affinity.png) It is very important to note the following: diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml index f2e734ad8..18cf68e20 100644 --- a/docs/redirect_map.yml +++ b/docs/redirect_map.yml @@ -75,3 +75,19 @@ Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md : Softwar Scientific_Computing/HPC_Software_Environment/Software_Version_Management.md : Software/Software_Version_Management.md Scientific_Computing/HPC_Software_Environment/Software_Installation_Request.md : Software/Software_Installation_Request.md Scientific_Computing/HPC_Software_Environment/Installing_Applications_Yourself.md : Software/Installing_Applications_Yourself.md +Scientific_Computing/Batch_Jobs/Checking_resource_usage.md : Batch_Computing/Checking_resource_usage.md +Scientific_Computing/Batch_Jobs/Checksums.md : Batch_Computing/Checksums.md +Scientific_Computing/Batch_Jobs/Fair_Share.md : Batch_Computing/Fair_Share.md +Scientific_Computing/Batch_Jobs/Hardware.md : Batch_Computing/Hardware.md +Scientific_Computing/Batch_Jobs/Hyperthreading.md : Batch_Computing/Hyperthreading.md +Scientific_Computing/Batch_Jobs/Job_Checkpointing.md : Batch_Computing/Job_Checkpointing.md +Scientific_Computing/Batch_Jobs/Job_Limits.md : 
Batch_Computing/Job_Limits.md +Scientific_Computing/Batch_Jobs/Job_prioritisation.md : Batch_Computing/Job_prioritisation.md +Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md : Batch_Computing/Slurm_Interactive_Sessions.md +Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md : Batch_Computing/SLURM-Best_Practice.md +Scientific_Computing/Batch_Jobs/Using_GPUs.md : Batch_Computing/Using_GPUs.md +Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md : Software/Configuring_Dask_MPI_jobs.md +Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md : Software/OpenMP_settings.md +Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md : Software/Thread_Placement_and_Thread_Affinity.md +Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md : Software/OpenMP_settings.md +Scientific_Computing/HPC_Software_Environment/Temporary_directories.md : Batch_Computing/Temporary_directories.md From aca8b3750fee076c9733e77adc214ca640020ef8 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Mon, 1 Dec 2025 13:27:09 +1300 Subject: [PATCH 06/25] fix redirect --- docs/redirect_map.yml | 13 ++----------- 1 file changed, 2 insertions(+), 11 deletions(-) diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml index cf02e64d5..44aac522c 100644 --- a/docs/redirect_map.yml +++ b/docs/redirect_map.yml @@ -7,14 +7,8 @@ General/FAQs/How_can_I_see_how_busy_the_cluster_is.md: General/FAQs/How_busy_is_ hc.md: index.md hc/en-gb.md: index.md Storage/Freezer_long_term_storage.md : Storage/Long_Term_Storage/Freezer_long_term_storage.md -Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md: Scientific_Computing/Batch_Jobs/Checksums.md -Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md: Scientific_Computing/Batch_Jobs/Fair_Share.md -Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Using_GPUs.md: Scientific_Computing/Batch_Jobs/Using_GPUs.md 
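[Reviewer note, not part of the patch: the redirect_map.yml entries edited above form a flat `old_path : new_path` mapping, and duplicate source paths are easy to introduce during a bulk move — PATCH 05 adds one OpenMP_settings entry twice and PATCH 06 has to remove it again. A hypothetical duplicate check, assuming the simple `old : new` line format used in this file:]

```python
# Hypothetical pre-commit check: flag redirect sources listed more than once
# in a map of "old_path : new_path" lines.
def find_duplicate_redirects(lines):
    """Return source paths that appear more than once, in first-seen order."""
    seen, dupes = set(), []
    for line in lines:
        if ":" not in line:
            continue  # skip blanks and comments
        old = line.split(":", 1)[0].strip()
        if old in seen and old not in dupes:
            dupes.append(old)
        seen.add(old)
    return dupes

entries = [
    "Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md : Software/OpenMP_settings.md",
    "Scientific_Computing/Batch_Jobs/Using_GPUs.md : Batch_Computing/Using_GPUs.md",
    "Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md : Software/OpenMP_settings.md",
]
assert find_duplicate_redirects(entries) == [
    "Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md"
]
```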
-Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md: Scientific_Computing/Batch_Jobs/Hyperthreading.md -Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md: Scientific_Computing/Batch_Jobs/Job_Checkpointing.md -Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md: Scientific_Computing/Batch_Jobs/Job_prioritisation.md -Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md: Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md -Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md: Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md +Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md: Batch_Computing/Job_prioritisation.md +Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md: Batch_Computing/SLURM-Best_Practice.md Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md : Software/Installing_Applications_Yourself.md Scientific_Computing/Supported_Applications/ABAQUS.md : Software/Available_Applications/ABAQUS.md Scientific_Computing/Supported_Applications/AlphaFold.md : Software/Available_Applications/AlphaFold.md @@ -83,16 +77,13 @@ Scientific_Computing/Batch_Jobs/Hyperthreading.md : Batch_Computing/Hyperthreadi Scientific_Computing/Batch_Jobs/Job_Checkpointing.md : Batch_Computing/Job_Checkpointing.md Scientific_Computing/Batch_Jobs/Job_Limits.md : Batch_Computing/Job_Limits.md Scientific_Computing/Batch_Jobs/Job_prioritisation.md : Batch_Computing/Job_prioritisation.md -Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md : Batch_Computing/Slurm_Interactive_Sessions.md Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md : Batch_Computing/SLURM-Best_Practice.md Scientific_Computing/Batch_Jobs/Using_GPUs.md : Batch_Computing/Using_GPUs.md Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md : 
Software/Configuring_Dask_MPI_jobs.md -Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md : Software/OpenMP_settings.md Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md : Software/Thread_Placement_and_Thread_Affinity.md Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md : Software/OpenMP_settings.md Scientific_Computing/HPC_Software_Environment/Temporary_directories.md : Batch_Computing/Temporary_directories.md Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md: Interactive_Computing/Slurm_Interactive_Sessions.md -Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md : Scientific_Computing/HPC_Software_Environment/Installing_Applications_Yourself.md Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md : Interactive_Computing/Slurm_Interactive_Sessions.md Scientific_Computing/Interactive_computing_with_OnDemand/how_to_guide.md : Interactive_Computing/OnDemand/how_to_guide.md Scientific_Computing/Interactive_computing_with_OnDemand/index.md : Interactive_Computing/OnDemand/index.md From 861046fc2d0e369cf841b2d984994cfd71b28f16 Mon Sep 17 00:00:00 2001 From: jen-reeve <31419037+jen-reeve@users.noreply.github.com> Date: Mon, 1 Dec 2025 14:05:55 +1300 Subject: [PATCH 07/25] Announcements (#985) --- docs/.pages.yml | 1 + docs/{General => }/Announcements/.pages.yml | 1 + .../Accessing_NeSI_Support_during_the_Easter_break.md | 2 +- .../Autodeletion_returning_for_scratch_filesystem.md | 0 .../Announcements/December_holiday_support_restrictions.md | 6 +++--- .../Identity_Changes_for_Crown_Research_Institutes.md | 0 docs/{General => }/Announcements/Known_Issues_HPC3.md | 2 +- docs/{General => Announcements}/Release_Notes/index.md | 4 ---- docs/General/.pages.yml | 1 - docs/General/FAQs/Mahuika_HPC3_Differences.md | 2 +- docs/redirect_map.yml | 6 ++++++ 11 files changed, 14 insertions(+), 11 deletions(-) rename docs/{General => 
}/Announcements/.pages.yml (83%) rename docs/{General => }/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md (96%) rename docs/{General => }/Announcements/Autodeletion_returning_for_scratch_filesystem.md (100%) rename docs/{General => }/Announcements/December_holiday_support_restrictions.md (70%) rename docs/{General => }/Announcements/Identity_Changes_for_Crown_Research_Institutes.md (100%) rename docs/{General => }/Announcements/Known_Issues_HPC3.md (96%) rename docs/{General => Announcements}/Release_Notes/index.md (91%) diff --git a/docs/.pages.yml b/docs/.pages.yml index e4986b71f..e13f9472e 100644 --- a/docs/.pages.yml +++ b/docs/.pages.yml @@ -1,4 +1,5 @@ nav: + - Announcements - Getting_Started - General - Software diff --git a/docs/General/Announcements/.pages.yml b/docs/Announcements/.pages.yml similarity index 83% rename from docs/General/Announcements/.pages.yml rename to docs/Announcements/.pages.yml index ee5ce4812..6164722a4 100644 --- a/docs/General/Announcements/.pages.yml +++ b/docs/Announcements/.pages.yml @@ -1,5 +1,6 @@ --- nav: + - Release Notes: Release_Notes - Autodeletion_returning_for_scratch_filesystem.md - December_holiday_support_restrictions.md - Identity_Changes_for_Crown_Research_Institutes.md diff --git a/docs/General/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md b/docs/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md similarity index 96% rename from docs/General/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md rename to docs/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md index dfdd321b3..00ebabca5 100644 --- a/docs/General/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md +++ b/docs/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md @@ -19,7 +19,7 @@ sources of self-service support: - Changes to system status are reported via our [System Status page](https://status.nesi.org.nz/ "https://status.nesi.org.nz/"). 
You can also subscribe for notifications of system updates and - unplanned outages sent straight to your inbox. [Sign up here.](../../Getting_Started/Getting_Help/System_status.md) + unplanned outages sent straight to your inbox. [Sign up here.](../Getting_Started/Getting_Help/System_status.md) - [Consult our User Documentation](https://www.docs.nesi.org.nz) pages for instructions and guidelines for using the systems. diff --git a/docs/General/Announcements/Autodeletion_returning_for_scratch_filesystem.md b/docs/Announcements/Autodeletion_returning_for_scratch_filesystem.md similarity index 100% rename from docs/General/Announcements/Autodeletion_returning_for_scratch_filesystem.md rename to docs/Announcements/Autodeletion_returning_for_scratch_filesystem.md diff --git a/docs/General/Announcements/December_holiday_support_restrictions.md b/docs/Announcements/December_holiday_support_restrictions.md similarity index 70% rename from docs/General/Announcements/December_holiday_support_restrictions.md rename to docs/Announcements/December_holiday_support_restrictions.md index 2aa9ebb9d..fe4d95f26 100644 --- a/docs/General/Announcements/December_holiday_support_restrictions.md +++ b/docs/Announcements/December_holiday_support_restrictions.md @@ -29,8 +29,8 @@ END: 9:00am 5 January 2026 Self-service support
Available 24/7 - Email: support@nesi.org.nz - - Sign up for system status updates for advance warning of any system updates or unplanned outages.
- Consult our User Documentation pages for instructions and guidelines for using the systems.
- Visit our YouTube channel  for introductory training webinars. + Email: support@nesi.org.nz + - Sign up for system status updates for advance warning of any system updates or unplanned outages.
- Consult our User Documentation pages for instructions and guidelines for using the systems.
- Visit our YouTube channel  for introductory training webinars. @@ -45,7 +45,7 @@ END: 9:00am 5 January 2026 Self-service support
Available 24/7 - Phone: 0508 466 466
Email: help@reannz.co.nz + Phone: 0508 466 466
Email: help@reannz.co.nz - Sign up for system status updates for advance warning of any system updates or unplanned outages.
- Check the REANNZ weathermap for a real-time view of network operations. diff --git a/docs/General/Announcements/Identity_Changes_for_Crown_Research_Institutes.md b/docs/Announcements/Identity_Changes_for_Crown_Research_Institutes.md similarity index 100% rename from docs/General/Announcements/Identity_Changes_for_Crown_Research_Institutes.md rename to docs/Announcements/Identity_Changes_for_Crown_Research_Institutes.md diff --git a/docs/General/Announcements/Known_Issues_HPC3.md b/docs/Announcements/Known_Issues_HPC3.md similarity index 96% rename from docs/General/Announcements/Known_Issues_HPC3.md rename to docs/Announcements/Known_Issues_HPC3.md index cabd1e253..d84a24bea 100644 --- a/docs/General/Announcements/Known_Issues_HPC3.md +++ b/docs/Announcements/Known_Issues_HPC3.md @@ -8,7 +8,7 @@ tags: Below is a list issues that we're actively working on. We hope to have these resolved soon. This is intended to be a temporary page. -For differences between the new platforms and Mahuika, see the more permanent [differences from Mahuika](../../General/FAQs/Mahuika_HPC3_Differences.md). +For differences between the new platforms and Mahuika, see the more permanent [differences from Mahuika](../General/FAQs/Mahuika_HPC3_Differences.md). !!! 
info "Recently fixed" diff --git a/docs/General/Release_Notes/index.md b/docs/Announcements/Release_Notes/index.md similarity index 91% rename from docs/General/Release_Notes/index.md rename to docs/Announcements/Release_Notes/index.md index d0908b597..c0f0ed2b9 100644 --- a/docs/General/Release_Notes/index.md +++ b/docs/Announcements/Release_Notes/index.md @@ -1,11 +1,7 @@ --- created_at: '2021-02-23T19:52:34Z' tags: [] -vote_count: 0 -vote_sum: 0 title: Release Notes -zendesk_article_id: 360003507115 -zendesk_section_id: 360000437436 --- NeSI publishes release notes for applications, 3rd party applications diff --git a/docs/General/.pages.yml b/docs/General/.pages.yml index 220bf4fff..c8c8c3b2c 100644 --- a/docs/General/.pages.yml +++ b/docs/General/.pages.yml @@ -1,6 +1,5 @@ --- nav: -- Announcements - FAQs - Policy - Release_Notes diff --git a/docs/General/FAQs/Mahuika_HPC3_Differences.md b/docs/General/FAQs/Mahuika_HPC3_Differences.md index a12eee7c8..177b2917e 100644 --- a/docs/General/FAQs/Mahuika_HPC3_Differences.md +++ b/docs/General/FAQs/Mahuika_HPC3_Differences.md @@ -11,7 +11,7 @@ This article presents an overview comparison of the differences between the NeSI It is not a comprehensive view of the differences and where appropriate individual support pages will be updated to reflect changes and enhancements. For example the [Slurm Reference Sheet](../../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md) will have a number of changes made to it along with significant changes to the Slurm Partitions. -This page should be read in conjunction with the [Known Issues](../Announcements/Known_Issues_HPC3.md) which are not included here as they are temporary differences to be resolved soon. +This page should be read in conjunction with the [Known Issues](../../Announcements/Known_Issues_HPC3.md) which are not included here as they are temporary differences to be resolved soon. 
## Login diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml index 44aac522c..ab620bb7e 100644 --- a/docs/redirect_map.yml +++ b/docs/redirect_map.yml @@ -96,3 +96,9 @@ Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/index.m Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md : Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md : Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md Scientific_Computing/Interactive_computing_with_OnDemand/Release_Notes/index.md : Interactive_Computing/OnDemand/Release_Notes/index.md +General/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md : Announcements/Accessing_NeSI_Support_during_the_Easter_break.md +General/Announcements/Autodeletion_returning_for_scratch_filesystem.md : Announcements/Autodeletion_returning_for_scratch_filesystem.md +General/Announcements/December_holiday_support_restrictions.md : Announcements/December_holiday_support_restrictions.md +General/Announcements/Identity_Changes_for_Crown_Research_Institutes.md : Announcements/Identity_Changes_for_Crown_Research_Institutes.md +General/Announcements/Known_Issues_HPC3.md : Announcements/Known_Issues_HPC3.md +General/Release_Notes/index.md : Announcements/Release_Notes/index.md From fded06a2bf075c133d3e696e5d9d9d14d7521041 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Mon, 1 Dec 2025 14:13:37 +1300 Subject: [PATCH 08/25] major moving --- docs/.pages.yml | 3 +- .../Submitting_your_first_job.md | 0 docs/General/.pages.yml | 5 -- .../Accessing_the_HPCs}/Git_Bash_Windows.md | 2 +- .../MobaXterm_Setup_Windows.md | 2 +- .../Standard_Terminal_Setup.md | 2 +- .../Accessing_the_HPCs}/VSCode.md | 0 .../WinSCP-PuTTY_Setup_Windows.md | 2 +- .../Windows_Subsystem_for_Linux_WSL.md 
| 0 .../Accessing_the_HPCs}/X11.md | 0 .../What_is_an_allocation.md | 4 +- .../Cheat_Sheets/Slurm-Reference_Sheet.md | 2 +- .../FAQs/.pages.yml | 0 ...change_my_time_zone_to_New_Zealand_time.md | 0 ..._cluster_filesystem_on_my_local_machine.md | 0 ...on_questions_about_the_platform_refresh.md | 0 ...indows_style_to_UNIX_style_line_endings.md | 0 .../FAQs/How_busy_is_the_cluster.md | 0 ...ad_only_team_members_access_to_my_files.md | 0 ...ect_team_members_read_or_write_my_files.md | 0 ..._I_view_images_generated_on_the_cluster.md | 0 ...w_do_I_find_out_the_size_of_a_directory.md | 0 ...o_I_fix_my_locale_and_language_settings.md | 0 ...y_Additional_Authentication_Credentials.md | 0 .../FAQs/How_do_I_request_memory.md | 0 ..._I_run_my_Python_Notebook_through_SLURM.md | 0 .../FAQs/Ive_run_out_of_storage_space.md | 0 .../FAQs/Login_Troubleshooting.md | 0 .../FAQs/Mahuika_HPC3_Differences.md | 0 .../FAQs/What_Is_A_Trusted_Device.md | 0 ...What_are_my-bashrc_and-bash_profile_for.md | 0 .../FAQs/What_does_oom_kill_mean.md | 0 ...t_is_Multiple_Factor_Authentication_MFA.md | 0 .../FAQs/What_is_a_core_file.md | 0 ...d_for_Machine_Learning_and_data_science.md | 0 ..._should_I_store_my_data_on_NeSI_systems.md | 0 .../Why_am_I_seeing_Account_is_not_ready.md | 0 .../FAQs/Why_does_my_program_crash.md | 0 ...y_is_my_job_taking_a_long_time_to_start.md | 0 docs/Getting_Started/Next_Steps/.pages.yml | 7 --- .../Next_Steps/The_HPC_environment.md | 23 -------- .../Policy/.pages.yml | 0 .../Policy/Acceptable_Use_Policy.md | 0 .../Policy/Access_Policy.md | 0 ...ccount_Requests_for_non_Tuakiri_Members.md | 0 ...cknowledgement-Citation_and_Publication.md | 0 .../Policy/Allocation_classes.md | 0 .../Policy/Application_Support_Model.md | 0 .../Policy/How_we_review_applications.md | 0 .../Policy/Institutional_allocations.md | 0 .../Policy/Licence_Policy.md | 0 .../Policy/Merit_allocations.md | 0 .../Policy/Postgraduate_allocations.md | 0 .../Policy/Privacy_Policy.md | 0 
.../Proposal_Development_allocations.md | 0 .../Terminal_Setup/.pages.yml | 4 -- .../Research_Developer_Cloud/.pages.yml | 0 .../Research_Developer_Cloud/User_Guides.md | 1 - .../Software/Available_Applications/ABAQUS.md | 5 +- docs/Software/Available_Applications/ANSYS.md | 8 +-- .../Available_Applications/AlphaFold.md | 8 +-- .../Available_Applications/Apptainer.md | 5 -- docs/Software/Available_Applications/CESM.md | 2 +- .../Available_Applications/Delft3D.md | 3 +- docs/Software/Available_Applications/FDS.md | 4 +- .../Available_Applications/GROMACS.md | 2 +- .../Software/Available_Applications/MATLAB.md | 2 +- .../TensorFlow_on_CPUs.md | 2 +- .../Containers/NVIDIA_GPU_Containers.md | 3 +- .../Configuring_Dask_MPI_jobs.md | 8 +-- .../MPI_Scaling_Example.md | 0 .../Multithreading_Scaling_Example.md | 2 +- .../OpenMP_settings.md | 4 +- .../Parallel_Computing}/Parallel_Execution.md | 0 .../Thread_Placement_and_Thread_Affinity.md | 6 +- .../Finding_Job_Efficiency.md | 0 ...Job_Scaling_Ascertaining_job_dimensions.md | 4 +- .../Moving_files_to_and_from_the_cluster.md | 22 ++++---- docs/redirect_map.yml | 56 +++++++++++++++++++ 79 files changed, 105 insertions(+), 98 deletions(-) rename docs/{Getting_Started/Next_Steps => Batch_Computing}/Submitting_your_first_job.md (100%) delete mode 100644 docs/General/.pages.yml rename docs/{Scientific_Computing/Terminal_Setup => Getting_Started/Accessing_the_HPCs}/Git_Bash_Windows.md (92%) rename docs/{Scientific_Computing/Terminal_Setup => Getting_Started/Accessing_the_HPCs}/MobaXterm_Setup_Windows.md (97%) rename docs/{Scientific_Computing/Terminal_Setup => Getting_Started/Accessing_the_HPCs}/Standard_Terminal_Setup.md (98%) rename docs/{Scientific_Computing/Terminal_Setup => Getting_Started/Accessing_the_HPCs}/VSCode.md (100%) rename docs/{Scientific_Computing/Terminal_Setup => Getting_Started/Accessing_the_HPCs}/WinSCP-PuTTY_Setup_Windows.md (97%) rename docs/{Scientific_Computing/Terminal_Setup => 
Getting_Started/Accessing_the_HPCs}/Windows_Subsystem_for_Linux_WSL.md (100%) rename docs/{Scientific_Computing/Terminal_Setup => Getting_Started/Accessing_the_HPCs}/X11.md (100%) rename docs/{General => Getting_Started}/FAQs/.pages.yml (100%) rename docs/{General => Getting_Started}/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md (100%) rename docs/{General => Getting_Started}/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md (100%) rename docs/{General => Getting_Started}/FAQs/Common_questions_about_the_platform_refresh.md (100%) rename docs/{General => Getting_Started}/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md (100%) rename docs/{General => Getting_Started}/FAQs/How_busy_is_the_cluster.md (100%) rename docs/{General => Getting_Started}/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md (100%) rename docs/{General => Getting_Started}/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md (100%) rename docs/{General => Getting_Started}/FAQs/How_can_I_view_images_generated_on_the_cluster.md (100%) rename docs/{General => Getting_Started}/FAQs/How_do_I_find_out_the_size_of_a_directory.md (100%) rename docs/{General => Getting_Started}/FAQs/How_do_I_fix_my_locale_and_language_settings.md (100%) rename docs/{General => Getting_Started}/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md (100%) rename docs/{General => Getting_Started}/FAQs/How_do_I_request_memory.md (100%) rename docs/{General => Getting_Started}/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md (100%) rename docs/{General => Getting_Started}/FAQs/Ive_run_out_of_storage_space.md (100%) rename docs/{General => Getting_Started}/FAQs/Login_Troubleshooting.md (100%) rename docs/{General => Getting_Started}/FAQs/Mahuika_HPC3_Differences.md (100%) rename docs/{General => Getting_Started}/FAQs/What_Is_A_Trusted_Device.md (100%) rename docs/{General => 
Getting_Started}/FAQs/What_are_my-bashrc_and-bash_profile_for.md (100%) rename docs/{General => Getting_Started}/FAQs/What_does_oom_kill_mean.md (100%) rename docs/{General => Getting_Started}/FAQs/What_is_Multiple_Factor_Authentication_MFA.md (100%) rename docs/{General => Getting_Started}/FAQs/What_is_a_core_file.md (100%) rename docs/{General => Getting_Started}/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md (100%) rename docs/{General => Getting_Started}/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md (100%) rename docs/{General => Getting_Started}/FAQs/Why_am_I_seeing_Account_is_not_ready.md (100%) rename docs/{General => Getting_Started}/FAQs/Why_does_my_program_crash.md (100%) rename docs/{General => Getting_Started}/FAQs/Why_is_my_job_taking_a_long_time_to_start.md (100%) delete mode 100644 docs/Getting_Started/Next_Steps/.pages.yml delete mode 100644 docs/Getting_Started/Next_Steps/The_HPC_environment.md rename docs/{General => Getting_Started}/Policy/.pages.yml (100%) rename docs/{General => Getting_Started}/Policy/Acceptable_Use_Policy.md (100%) rename docs/{General => Getting_Started}/Policy/Access_Policy.md (100%) rename docs/{General => Getting_Started}/Policy/Account_Requests_for_non_Tuakiri_Members.md (100%) rename docs/{General => Getting_Started}/Policy/Acknowledgement-Citation_and_Publication.md (100%) rename docs/{General => Getting_Started}/Policy/Allocation_classes.md (100%) rename docs/{General => Getting_Started}/Policy/Application_Support_Model.md (100%) rename docs/{General => Getting_Started}/Policy/How_we_review_applications.md (100%) rename docs/{General => Getting_Started}/Policy/Institutional_allocations.md (100%) rename docs/{General => Getting_Started}/Policy/Licence_Policy.md (100%) rename docs/{General => Getting_Started}/Policy/Merit_allocations.md (100%) rename docs/{General => Getting_Started}/Policy/Postgraduate_allocations.md (100%) rename docs/{General => 
Getting_Started}/Policy/Privacy_Policy.md (100%) rename docs/{General => Getting_Started}/Policy/Proposal_Development_allocations.md (100%) delete mode 100644 docs/Scientific_Computing/Terminal_Setup/.pages.yml rename docs/{Scientific_Computing => Service_Subscriptions}/Research_Developer_Cloud/.pages.yml (100%) rename docs/{Scientific_Computing => Service_Subscriptions}/Research_Developer_Cloud/User_Guides.md (99%) rename docs/Software/{ => Parallel_Computing}/Configuring_Dask_MPI_jobs.md (96%) rename docs/{Getting_Started/Next_Steps => Software/Parallel_Computing}/MPI_Scaling_Example.md (100%) rename docs/{Getting_Started/Next_Steps => Software/Parallel_Computing}/Multithreading_Scaling_Example.md (99%) rename docs/Software/{ => Parallel_Computing}/OpenMP_settings.md (94%) rename docs/{Getting_Started/Next_Steps => Software/Parallel_Computing}/Parallel_Execution.md (100%) rename docs/Software/{ => Parallel_Computing}/Thread_Placement_and_Thread_Affinity.md (98%) rename docs/{Getting_Started/Next_Steps => Software/Profiling_and_Debugging}/Finding_Job_Efficiency.md (100%) rename docs/{Getting_Started/Next_Steps => Software/Profiling_and_Debugging}/Job_Scaling_Ascertaining_job_dimensions.md (95%) rename docs/{Getting_Started/Next_Steps => Storage}/Moving_files_to_and_from_the_cluster.md (80%) diff --git a/docs/.pages.yml b/docs/.pages.yml index e13f9472e..9ab168cbe 100644 --- a/docs/.pages.yml +++ b/docs/.pages.yml @@ -1,9 +1,8 @@ nav: - Announcements - Getting_Started - - General - Software - - Scientific_Computing + - Batch_Computing - Interactive_Computing - Storage - Service_Subscriptions diff --git a/docs/Getting_Started/Next_Steps/Submitting_your_first_job.md b/docs/Batch_Computing/Submitting_your_first_job.md similarity index 100% rename from docs/Getting_Started/Next_Steps/Submitting_your_first_job.md rename to docs/Batch_Computing/Submitting_your_first_job.md diff --git a/docs/General/.pages.yml b/docs/General/.pages.yml deleted file mode 100644 index 
c8c8c3b2c..000000000 --- a/docs/General/.pages.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -nav: -- FAQs -- Policy -- Release_Notes diff --git a/docs/Scientific_Computing/Terminal_Setup/Git_Bash_Windows.md b/docs/Getting_Started/Accessing_the_HPCs/Git_Bash_Windows.md similarity index 92% rename from docs/Scientific_Computing/Terminal_Setup/Git_Bash_Windows.md rename to docs/Getting_Started/Accessing_the_HPCs/Git_Bash_Windows.md index 7ae5e9c46..6d518f6fb 100644 --- a/docs/Scientific_Computing/Terminal_Setup/Git_Bash_Windows.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Git_Bash_Windows.md @@ -40,7 +40,7 @@ credentials every time you open a new terminal or try to move a file.* scp nesi:~/ ``` -For more info visit [data transfer](../../Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md). +For more info visit [data transfer](../../Storage/Moving_files_to_and_from_the_cluster.md). !!! prerequisite "What Next?" - [Standard Terminal Setup](Standard_Terminal_Setup.md) diff --git a/docs/Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md b/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md similarity index 97% rename from docs/Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md rename to docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md index 6c4d686a8..bbf2ca52e 100644 --- a/docs/Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md +++ b/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md @@ -22,7 +22,7 @@ description: How to set up cluster access using MobaXterm - Otherwise, choose freely the Portable or Installer Edition. !!! prerequisite "What Next?" - - [Moving files to/from a cluster.](../../Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md) + - [Moving files to/from a cluster.](../../Storage/Moving_files_to_and_from_the_cluster.md) The interactive login configuration for MobaXterm is not compatible with the current web-based authentication method.
If you wish to use MobaXterm as your SSH client you therefore need to use a non-interactive setup. This can be done by following a modified version of [the standard terminal setup described on this support page](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md). diff --git a/docs/Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md b/docs/Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md similarity index 98% rename from docs/Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md rename to docs/Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md index 2458f05d7..e76537e87 100644 --- a/docs/Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md @@ -184,5 +184,5 @@ You should now be able to login with only a single authentication prompt. [Watch a demo](https://www.youtube.com/embed/IKihbN-QlIA?si=N93PPPsi85jPYV7k). !!! prerequisite "What Next?" - - [Moving files to/from a cluster.](../../Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md) + - [Moving files to/from a cluster.](../../Storage/Moving_files_to_and_from_the_cluster.md) - Setting up an [X-Server](./X11.md) (optional).
diff --git a/docs/Scientific_Computing/Terminal_Setup/VSCode.md b/docs/Getting_Started/Accessing_the_HPCs/VSCode.md similarity index 100% rename from docs/Scientific_Computing/Terminal_Setup/VSCode.md rename to docs/Getting_Started/Accessing_the_HPCs/VSCode.md diff --git a/docs/Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md b/docs/Getting_Started/Accessing_the_HPCs/WinSCP-PuTTY_Setup_Windows.md similarity index 97% rename from docs/Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md rename to docs/Getting_Started/Accessing_the_HPCs/WinSCP-PuTTY_Setup_Windows.md index 74ca5dcef..041bc8695 100644 --- a/docs/Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md +++ b/docs/Getting_Started/Accessing_the_HPCs/WinSCP-PuTTY_Setup_Windows.md @@ -129,5 +129,5 @@ for a single transfer'. with login authentication. !!! prerequisite "What Next?" - - [Moving files to and from the cluster](../../Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md) + - [Moving files to and from the cluster](../../Storage/Moving_files_to_and_from_the_cluster.md) - [X11 on NeSI](./X11.md)(optional). 
diff --git a/docs/Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md b/docs/Getting_Started/Accessing_the_HPCs/Windows_Subsystem_for_Linux_WSL.md similarity index 100% rename from docs/Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md rename to docs/Getting_Started/Accessing_the_HPCs/Windows_Subsystem_for_Linux_WSL.md diff --git a/docs/Scientific_Computing/Terminal_Setup/X11.md b/docs/Getting_Started/Accessing_the_HPCs/X11.md similarity index 100% rename from docs/Scientific_Computing/Terminal_Setup/X11.md rename to docs/Getting_Started/Accessing_the_HPCs/X11.md diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md index ed867cde6..705f217fb 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md +++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md @@ -15,8 +15,8 @@ different allocation criteria. An allocation will come from one of our allocation classes. We will decide what class of allocation is most suitable for you and your -research programme, however you're welcome to review [our article on -allocation classes](../../General/Policy/Allocation_classes.md) +research programme, however you're welcome to review +[our article on allocation classes](../../General/Policy/Allocation_classes.md) to find out what class you're likely eligible for. ## An important note on CPU hour allocations diff --git a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md index c6f4be3ba..8826d69d3 100644 --- a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md +++ b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md @@ -58,7 +58,7 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g. 
| | | | | --------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------- | | `--nodes` | ``#SBATCH --nodes=2`` | Will request tasks be run across 2 nodes. | -| `--ntasks` | ``#SBATCH --ntasks=2 `` | Will start 2 [MPI](../../Getting_Started/Next_Steps/Parallel_Execution.md) tasks. | +| `--ntasks` | ``#SBATCH --ntasks=2 `` | Will start 2 [MPI](../../Software/Parallel_Computing/Parallel_Execution.md) tasks. | | `--ntasks-per-node` | `#SBATCH --ntasks-per-node=1` | Will start 1 task per requested node. | | `--cpus-per-task` | `#SBATCH --cpus-per-task=10` | Will request 10 [*logical* CPUs](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) per task. | | `--mem-per-cpu` | `#SBATCH --mem-per-cpu=512MB` | Memory Per *logical* CPU. `--mem` Should be used if shared memory job. See [How do I request memory?](../../General/FAQs/How_do_I_request_memory.md) | diff --git a/docs/General/FAQs/.pages.yml b/docs/Getting_Started/FAQs/.pages.yml similarity index 100% rename from docs/General/FAQs/.pages.yml rename to docs/Getting_Started/FAQs/.pages.yml diff --git a/docs/General/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md b/docs/Getting_Started/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md similarity index 100% rename from docs/General/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md rename to docs/Getting_Started/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md diff --git a/docs/General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md b/docs/Getting_Started/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md similarity index 100% rename from docs/General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md rename to docs/Getting_Started/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md diff --git 
a/docs/General/FAQs/Common_questions_about_the_platform_refresh.md b/docs/Getting_Started/FAQs/Common_questions_about_the_platform_refresh.md similarity index 100% rename from docs/General/FAQs/Common_questions_about_the_platform_refresh.md rename to docs/Getting_Started/FAQs/Common_questions_about_the_platform_refresh.md diff --git a/docs/General/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md b/docs/Getting_Started/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md similarity index 100% rename from docs/General/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md rename to docs/Getting_Started/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md diff --git a/docs/General/FAQs/How_busy_is_the_cluster.md b/docs/Getting_Started/FAQs/How_busy_is_the_cluster.md similarity index 100% rename from docs/General/FAQs/How_busy_is_the_cluster.md rename to docs/Getting_Started/FAQs/How_busy_is_the_cluster.md diff --git a/docs/General/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md b/docs/Getting_Started/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md similarity index 100% rename from docs/General/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md rename to docs/Getting_Started/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md diff --git a/docs/General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md b/docs/Getting_Started/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md similarity index 100% rename from docs/General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md rename to docs/Getting_Started/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md diff --git a/docs/General/FAQs/How_can_I_view_images_generated_on_the_cluster.md b/docs/Getting_Started/FAQs/How_can_I_view_images_generated_on_the_cluster.md similarity index 100% rename from 
docs/General/FAQs/How_can_I_view_images_generated_on_the_cluster.md rename to docs/Getting_Started/FAQs/How_can_I_view_images_generated_on_the_cluster.md diff --git a/docs/General/FAQs/How_do_I_find_out_the_size_of_a_directory.md b/docs/Getting_Started/FAQs/How_do_I_find_out_the_size_of_a_directory.md similarity index 100% rename from docs/General/FAQs/How_do_I_find_out_the_size_of_a_directory.md rename to docs/Getting_Started/FAQs/How_do_I_find_out_the_size_of_a_directory.md diff --git a/docs/General/FAQs/How_do_I_fix_my_locale_and_language_settings.md b/docs/Getting_Started/FAQs/How_do_I_fix_my_locale_and_language_settings.md similarity index 100% rename from docs/General/FAQs/How_do_I_fix_my_locale_and_language_settings.md rename to docs/Getting_Started/FAQs/How_do_I_fix_my_locale_and_language_settings.md diff --git a/docs/General/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md b/docs/Getting_Started/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md similarity index 100% rename from docs/General/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md rename to docs/Getting_Started/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md diff --git a/docs/General/FAQs/How_do_I_request_memory.md b/docs/Getting_Started/FAQs/How_do_I_request_memory.md similarity index 100% rename from docs/General/FAQs/How_do_I_request_memory.md rename to docs/Getting_Started/FAQs/How_do_I_request_memory.md diff --git a/docs/General/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md b/docs/Getting_Started/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md similarity index 100% rename from docs/General/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md rename to docs/Getting_Started/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md diff --git a/docs/General/FAQs/Ive_run_out_of_storage_space.md b/docs/Getting_Started/FAQs/Ive_run_out_of_storage_space.md similarity index 100% rename from 
docs/General/FAQs/Ive_run_out_of_storage_space.md rename to docs/Getting_Started/FAQs/Ive_run_out_of_storage_space.md diff --git a/docs/General/FAQs/Login_Troubleshooting.md b/docs/Getting_Started/FAQs/Login_Troubleshooting.md similarity index 100% rename from docs/General/FAQs/Login_Troubleshooting.md rename to docs/Getting_Started/FAQs/Login_Troubleshooting.md diff --git a/docs/General/FAQs/Mahuika_HPC3_Differences.md b/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md similarity index 100% rename from docs/General/FAQs/Mahuika_HPC3_Differences.md rename to docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md diff --git a/docs/General/FAQs/What_Is_A_Trusted_Device.md b/docs/Getting_Started/FAQs/What_Is_A_Trusted_Device.md similarity index 100% rename from docs/General/FAQs/What_Is_A_Trusted_Device.md rename to docs/Getting_Started/FAQs/What_Is_A_Trusted_Device.md diff --git a/docs/General/FAQs/What_are_my-bashrc_and-bash_profile_for.md b/docs/Getting_Started/FAQs/What_are_my-bashrc_and-bash_profile_for.md similarity index 100% rename from docs/General/FAQs/What_are_my-bashrc_and-bash_profile_for.md rename to docs/Getting_Started/FAQs/What_are_my-bashrc_and-bash_profile_for.md diff --git a/docs/General/FAQs/What_does_oom_kill_mean.md b/docs/Getting_Started/FAQs/What_does_oom_kill_mean.md similarity index 100% rename from docs/General/FAQs/What_does_oom_kill_mean.md rename to docs/Getting_Started/FAQs/What_does_oom_kill_mean.md diff --git a/docs/General/FAQs/What_is_Multiple_Factor_Authentication_MFA.md b/docs/Getting_Started/FAQs/What_is_Multiple_Factor_Authentication_MFA.md similarity index 100% rename from docs/General/FAQs/What_is_Multiple_Factor_Authentication_MFA.md rename to docs/Getting_Started/FAQs/What_is_Multiple_Factor_Authentication_MFA.md diff --git a/docs/General/FAQs/What_is_a_core_file.md b/docs/Getting_Started/FAQs/What_is_a_core_file.md similarity index 100% rename from docs/General/FAQs/What_is_a_core_file.md rename to 
docs/Getting_Started/FAQs/What_is_a_core_file.md diff --git a/docs/General/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md b/docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md similarity index 100% rename from docs/General/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md rename to docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md diff --git a/docs/General/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md b/docs/Getting_Started/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md similarity index 100% rename from docs/General/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md rename to docs/Getting_Started/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md diff --git a/docs/General/FAQs/Why_am_I_seeing_Account_is_not_ready.md b/docs/Getting_Started/FAQs/Why_am_I_seeing_Account_is_not_ready.md similarity index 100% rename from docs/General/FAQs/Why_am_I_seeing_Account_is_not_ready.md rename to docs/Getting_Started/FAQs/Why_am_I_seeing_Account_is_not_ready.md diff --git a/docs/General/FAQs/Why_does_my_program_crash.md b/docs/Getting_Started/FAQs/Why_does_my_program_crash.md similarity index 100% rename from docs/General/FAQs/Why_does_my_program_crash.md rename to docs/Getting_Started/FAQs/Why_does_my_program_crash.md diff --git a/docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md b/docs/Getting_Started/FAQs/Why_is_my_job_taking_a_long_time_to_start.md similarity index 100% rename from docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md rename to docs/Getting_Started/FAQs/Why_is_my_job_taking_a_long_time_to_start.md diff --git a/docs/Getting_Started/Next_Steps/.pages.yml b/docs/Getting_Started/Next_Steps/.pages.yml deleted file mode 100644 index 094b1e71c..000000000 --- a/docs/Getting_Started/Next_Steps/.pages.yml +++ /dev/null @@ -1,7 +0,0 @@ ---- 
-nav: - - Moving_files_to_and_from_the_cluster.md - - Submitting_your_first_job.md - - Parallel_Execution.md - - Finding_Job_Efficiency.md - - "*" diff --git a/docs/Getting_Started/Next_Steps/The_HPC_environment.md b/docs/Getting_Started/Next_Steps/The_HPC_environment.md deleted file mode 100644 index 9299a758e..000000000 --- a/docs/Getting_Started/Next_Steps/The_HPC_environment.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -created_at: '2019-08-16T01:22:03Z' -tags: [] -vote_count: 0 -vote_sum: 0 -zendesk_article_id: 360001113076 -zendesk_section_id: 360000189716 ---- - -## Environment Modules - -Modules are a convenient  way to provide access to applications  on the cluster. -They prepare the environment you need to run an application. - -For a full list of module commands run `man module` or visit the [lmod documentation](https://lmod.readthedocs.io/en/latest/010_user.html). - -| Command | Description | -|-------------------------------|---------------------------------------------------------------| -| `module spider` | Lists all available modules. (only Mahuika) | -| `module spider [module name]` | Searches available modules for \[module name\] (only Mahuika) | -| `module show [module name]` | Shows information about \[module name\] | -| `module load [module name]` | Loads \[module name\] | -| `module list [module name]` | Lists currently loaded modules. 
| diff --git a/docs/General/Policy/.pages.yml b/docs/Getting_Started/Policy/.pages.yml similarity index 100% rename from docs/General/Policy/.pages.yml rename to docs/Getting_Started/Policy/.pages.yml diff --git a/docs/General/Policy/Acceptable_Use_Policy.md b/docs/Getting_Started/Policy/Acceptable_Use_Policy.md similarity index 100% rename from docs/General/Policy/Acceptable_Use_Policy.md rename to docs/Getting_Started/Policy/Acceptable_Use_Policy.md diff --git a/docs/General/Policy/Access_Policy.md b/docs/Getting_Started/Policy/Access_Policy.md similarity index 100% rename from docs/General/Policy/Access_Policy.md rename to docs/Getting_Started/Policy/Access_Policy.md diff --git a/docs/General/Policy/Account_Requests_for_non_Tuakiri_Members.md b/docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md similarity index 100% rename from docs/General/Policy/Account_Requests_for_non_Tuakiri_Members.md rename to docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md diff --git a/docs/General/Policy/Acknowledgement-Citation_and_Publication.md b/docs/Getting_Started/Policy/Acknowledgement-Citation_and_Publication.md similarity index 100% rename from docs/General/Policy/Acknowledgement-Citation_and_Publication.md rename to docs/Getting_Started/Policy/Acknowledgement-Citation_and_Publication.md diff --git a/docs/General/Policy/Allocation_classes.md b/docs/Getting_Started/Policy/Allocation_classes.md similarity index 100% rename from docs/General/Policy/Allocation_classes.md rename to docs/Getting_Started/Policy/Allocation_classes.md diff --git a/docs/General/Policy/Application_Support_Model.md b/docs/Getting_Started/Policy/Application_Support_Model.md similarity index 100% rename from docs/General/Policy/Application_Support_Model.md rename to docs/Getting_Started/Policy/Application_Support_Model.md diff --git a/docs/General/Policy/How_we_review_applications.md b/docs/Getting_Started/Policy/How_we_review_applications.md similarity index 100% 
rename from docs/General/Policy/How_we_review_applications.md rename to docs/Getting_Started/Policy/How_we_review_applications.md diff --git a/docs/General/Policy/Institutional_allocations.md b/docs/Getting_Started/Policy/Institutional_allocations.md similarity index 100% rename from docs/General/Policy/Institutional_allocations.md rename to docs/Getting_Started/Policy/Institutional_allocations.md diff --git a/docs/General/Policy/Licence_Policy.md b/docs/Getting_Started/Policy/Licence_Policy.md similarity index 100% rename from docs/General/Policy/Licence_Policy.md rename to docs/Getting_Started/Policy/Licence_Policy.md diff --git a/docs/General/Policy/Merit_allocations.md b/docs/Getting_Started/Policy/Merit_allocations.md similarity index 100% rename from docs/General/Policy/Merit_allocations.md rename to docs/Getting_Started/Policy/Merit_allocations.md diff --git a/docs/General/Policy/Postgraduate_allocations.md b/docs/Getting_Started/Policy/Postgraduate_allocations.md similarity index 100% rename from docs/General/Policy/Postgraduate_allocations.md rename to docs/Getting_Started/Policy/Postgraduate_allocations.md diff --git a/docs/General/Policy/Privacy_Policy.md b/docs/Getting_Started/Policy/Privacy_Policy.md similarity index 100% rename from docs/General/Policy/Privacy_Policy.md rename to docs/Getting_Started/Policy/Privacy_Policy.md diff --git a/docs/General/Policy/Proposal_Development_allocations.md b/docs/Getting_Started/Policy/Proposal_Development_allocations.md similarity index 100% rename from docs/General/Policy/Proposal_Development_allocations.md rename to docs/Getting_Started/Policy/Proposal_Development_allocations.md diff --git a/docs/Scientific_Computing/Terminal_Setup/.pages.yml b/docs/Scientific_Computing/Terminal_Setup/.pages.yml deleted file mode 100644 index 961aa69d5..000000000 --- a/docs/Scientific_Computing/Terminal_Setup/.pages.yml +++ /dev/null @@ -1,4 +0,0 @@ -nav: - - Standard_Terminal_Setup.md - - "*" - - X11.md diff --git 
a/docs/Scientific_Computing/Research_Developer_Cloud/.pages.yml b/docs/Service_Subscriptions/Research_Developer_Cloud/.pages.yml similarity index 100% rename from docs/Scientific_Computing/Research_Developer_Cloud/.pages.yml rename to docs/Service_Subscriptions/Research_Developer_Cloud/.pages.yml diff --git a/docs/Scientific_Computing/Research_Developer_Cloud/User_Guides.md b/docs/Service_Subscriptions/Research_Developer_Cloud/User_Guides.md similarity index 99% rename from docs/Scientific_Computing/Research_Developer_Cloud/User_Guides.md rename to docs/Service_Subscriptions/Research_Developer_Cloud/User_Guides.md index 5aa34d7a2..119b5fa47 100644 --- a/docs/Scientific_Computing/Research_Developer_Cloud/User_Guides.md +++ b/docs/Service_Subscriptions/Research_Developer_Cloud/User_Guides.md @@ -15,7 +15,6 @@ Research teams can use this platform to develop novel solutions that enable rese - *Programmable infrastructure:* Applying DevOps practices enabled by Infrastructure as Code (IaC) to automate, measure, collaborate, and learn. - *Partnership-led approaches:* Collaborating with our DevOps specialists to build a platform or tools that can benefit your research community. - ## Features Our platform's cloud building blocks include: diff --git a/docs/Software/Available_Applications/ABAQUS.md b/docs/Software/Available_Applications/ABAQUS.md index 21e7a790f..7f02e42f1 100644 --- a/docs/Software/Available_Applications/ABAQUS.md +++ b/docs/Software/Available_Applications/ABAQUS.md @@ -75,7 +75,7 @@ Not all solvers are compatible with all types of parallelisation. 
=== "Serial" For when only one CPU is required, generally as part of - a [job array](../../Getting_Started/Next_Steps/Parallel_Execution.md#job-arrays) + a [job array](../Parallel_Computing/Parallel_Execution.md#job-arrays) ```sl #!/bin/bash -e @@ -186,8 +186,7 @@ loaded with `module load`, you may have to change the compile commands in your l ## Environment file -The [ABAQUS environment -file](http://media.3ds.com/support/simulia/public/v613/installation-and-licensing-guides/books/sgb/default.htm?startat=ch04s01.html) contains +The [ABAQUS environment file](http://media.3ds.com/support/simulia/public/v613/installation-and-licensing-guides/books/sgb/default.htm?startat=ch04s01.html) contains a number of parameters that define how the your job will run, some of these you may with to change. diff --git a/docs/Software/Available_Applications/ANSYS.md b/docs/Software/Available_Applications/ANSYS.md index 11069f6c6..f1b02469a 100644 --- a/docs/Software/Available_Applications/ANSYS.md +++ b/docs/Software/Available_Applications/ANSYS.md @@ -142,8 +142,7 @@ the use of variables in what might otherwise be a fixed input. ## Fluent -[Some great documentation on journal -files](https://docs.hpc.shef.ac.uk/en/latest/referenceinfo/ANSYS/fluent/writing-fluent-journal-files.html) +[Some great documentation on journal files](https://docs.hpc.shef.ac.uk/en/latest/referenceinfo/ANSYS/fluent/writing-fluent-journal-files.html) `fluent -help` for a list of commands. @@ -210,8 +209,7 @@ Must have one of these flags. While it will always be more time and resource efficient using a slurm script as shown above, there are occasions where the GUI is required. 
If you only require a few CPUs for a short while you may run the fluent on -the login node, otherwise use of an [slurm interactive -session](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md) +the login node, otherwise use of a [slurm interactive session](../../Interactive_Computing/Slurm_Interactive_Sessions.md) is recommended. For example. @@ -624,7 +622,7 @@ Progress can be tracked through the GUI as usual. ## ANSYS-Electromagnetic ANSYS-EM jobs can be submitted through a slurm script or by -[interactive session](../../Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md). +[interactive session](../../Interactive_Computing/Slurm_Interactive_Sessions.md). ### RSM diff --git a/docs/Software/Available_Applications/AlphaFold.md b/docs/Software/Available_Applications/AlphaFold.md index 4ffbd8e86..9d194cf0b 100644 --- a/docs/Software/Available_Applications/AlphaFold.md +++ b/docs/Software/Available_Applications/AlphaFold.md @@ -30,10 +30,10 @@ as AlphaFold throughout the rest of this document. Any publication that discloses findings arising from using this source code or the model parameters -should [cite](https://github.com/deepmind/alphafold#citing-this-work) the [AlphaFold -paper](https://doi.org/10.1038/s41586-021-03819-2). Please also refer to -the [Supplementary -Information](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM1_ESM.pdf) for +should [cite](https://github.com/deepmind/alphafold#citing-this-work) the +[AlphaFold paper](https://doi.org/10.1038/s41586-021-03819-2). +Please also refer to the [Supplementary +Information](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM1_ESM.pdf) for a detailed description of the method.
Home page is at diff --git a/docs/Software/Available_Applications/Apptainer.md b/docs/Software/Available_Applications/Apptainer.md index 8d454dba5..004ea31cd 100644 --- a/docs/Software/Available_Applications/Apptainer.md +++ b/docs/Software/Available_Applications/Apptainer.md @@ -1,8 +1,3 @@ -
- -![apptainer-icon](../../assets/images/apptainer_icon.png) - -
!!! circle-info "The latest version of Apptainer is installed directly on the host operating system of both the login and compute nodes. We recommend using this system-wide version and advise against attempting to load Apptainer via environment modules. Loading the Apptainer module will trigger a message stating: "*The Apptainer environment module has been removed since the system Apptainer is now just as recent*" !!! quote "" diff --git a/docs/Software/Available_Applications/CESM.md b/docs/Software/Available_Applications/CESM.md index 262279dac..2a4360e71 100644 --- a/docs/Software/Available_Applications/CESM.md +++ b/docs/Software/Available_Applications/CESM.md @@ -30,7 +30,7 @@ both Māui and Mahuika. On Mahuika only, load a module with a more recent version of git than the default one: -``` +```sl module load git ``` diff --git a/docs/Software/Available_Applications/Delft3D.md b/docs/Software/Available_Applications/Delft3D.md index 362cea043..6698b6002 100644 --- a/docs/Software/Available_Applications/Delft3D.md +++ b/docs/Software/Available_Applications/Delft3D.md @@ -16,7 +16,8 @@ tags: === "Serial" - For when only one CPU is required, generally as part of a [job array](../../Getting_Started/Next_Steps/Parallel_Execution.md#job-arrays). + For when only one CPU is required, generally as part of a + [job array](../../Getting_Started/Next_Steps/Parallel_Execution.md#job-arrays). ```sl #!/bin/bash -e diff --git a/docs/Software/Available_Applications/FDS.md b/docs/Software/Available_Applications/FDS.md index 919ebea2f..b7723ccdb 100644 --- a/docs/Software/Available_Applications/FDS.md +++ b/docs/Software/Available_Applications/FDS.md @@ -26,9 +26,9 @@ General documentation can be found [here](https://github.com/firemodels/fds/releases/download/FDS6.7.1/FDS_User_Guide.pdf). 
FDS can utilise both -[MPI](../../Getting_Started/Next_Steps/Parallel_Execution.md#mpi) +[MPI](../Parallel_Computing/Parallel_Execution.md#mpi) and -[OpenMP](../../Getting_Started/Next_Steps/Parallel_Execution.md#multi-threading) +[OpenMP](../Parallel_Computing/Parallel_Execution.md#multi-threading) ## Example Script diff --git a/docs/Software/Available_Applications/GROMACS.md b/docs/Software/Available_Applications/GROMACS.md index d713a738b..dc30dd288 100644 --- a/docs/Software/Available_Applications/GROMACS.md +++ b/docs/Software/Available_Applications/GROMACS.md @@ -34,7 +34,7 @@ obtained with the Software. === "Serial" For when only one CPU is required, generally as part of - a [job array](../../Getting_Started/Next_Steps/Parallel_Execution.md#job-arrays) + a [job array](../Parallel_Computing/Parallel_Execution.md#job-arrays) ```sl #!/bin/bash -e diff --git a/docs/Software/Available_Applications/MATLAB.md b/docs/Software/Available_Applications/MATLAB.md index a9f5c62e6..231b0a006 100644 --- a/docs/Software/Available_Applications/MATLAB.md +++ b/docs/Software/Available_Applications/MATLAB.md @@ -80,7 +80,7 @@ utilise more than a 4-8 CPUs this way. !!! tip If your code is explicitly parallel at a high level it is preferable to use - [SLURM job arrays](../../Getting_Started/Next_Steps/Parallel_Execution.md) + [SLURM job arrays](../Parallel_Computing/Parallel_Execution.md) as there is less computational overhead and the multiple smaller jobs will queue faster and therefore improve your throughput. diff --git a/docs/Software/Available_Applications/TensorFlow_on_CPUs.md b/docs/Software/Available_Applications/TensorFlow_on_CPUs.md index 05f60f2c1..32d83cd6a 100644 --- a/docs/Software/Available_Applications/TensorFlow_on_CPUs.md +++ b/docs/Software/Available_Applications/TensorFlow_on_CPUs.md @@ -109,7 +109,7 @@ threading behaviour of the Intel oneDNN library. 
While these settings should work well for a lot of applications, it is worth trying out different setups (e.g., longer blocktimes) and compare runtimes. Please see our article on [Thread Placement and Thread -Affinity](../Thread_Placement_and_Thread_Affinity.md) +Affinity](../Parallel_Computing/Thread_Placement_and_Thread_Affinity.md) as well as this [Intel article](https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture) for further information and tips for improving peformance on CPUs. diff --git a/docs/Software/Containers/NVIDIA_GPU_Containers.md b/docs/Software/Containers/NVIDIA_GPU_Containers.md index cd152daad..5e771ba22 100644 --- a/docs/Software/Containers/NVIDIA_GPU_Containers.md +++ b/docs/Software/Containers/NVIDIA_GPU_Containers.md @@ -46,8 +46,7 @@ running the NAMD image on NeSI, based on the NVIDIA instructions directly, which does not require root access: !!! note - Please do refer [Build Environment - Variables](../../Scientific_Computing/Supported_Applications/Apptainer.md) + Please do refer [Build Environment Variables](../../Software/Available_Applications/Apptainer.md) prior to running the following `pull` command. ```sh diff --git a/docs/Software/Configuring_Dask_MPI_jobs.md b/docs/Software/Parallel_Computing/Configuring_Dask_MPI_jobs.md similarity index 96% rename from docs/Software/Configuring_Dask_MPI_jobs.md rename to docs/Software/Parallel_Computing/Configuring_Dask_MPI_jobs.md index 5e6da78fe..a0845a30b 100644 --- a/docs/Software/Configuring_Dask_MPI_jobs.md +++ b/docs/Software/Parallel_Computing/Configuring_Dask_MPI_jobs.md @@ -78,14 +78,14 @@ dependencies: !!! info "See also" See the - [Miniforge3](../Scientific_Computing/Supported_Applications/Miniforge3.md) + [Miniforge3](../../Scientific_Computing/Supported_Applications/Miniforge3.md) page for more information on how to create and manage Miniconda environments on NeSI. 
## Configuring Slurm At runtime, Slurm will launch a number of Python processes as requested -in the [Slurm configuration script](../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md). +in the [Slurm configuration script](../../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md). Each process is given an ID (or "rank") starting at rank 0. Dask-MPI then assigns different roles to the different ranks: @@ -97,7 +97,7 @@ then assigns different roles to the different ranks: This implies that **Dask-MPI jobs must be launched on at least 3 MPI ranks!** Ranks 0 and 1 often perform much less work than the other ranks, it can therefore be beneficial to use -[Hyperthreading](../Scientific_Computing/Batch_Jobs/Hyperthreading.md) +[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) to place these two ranks onto a single physical core. Ensure that activating hyperthreading does not slow down the worker ranks by running a short test workload with and without hyperthreading. @@ -261,7 +261,7 @@ where the `%runscript` section ensures that the Python script passed to Conda environment inside the container. !!! 
note Tips - You can build this container on NeSI,following the instructions from the [dedicated supportpage](../Scientific_Computing/Supported_Applications/Apptainer.md) + You can build this container on NeSI, following the instructions from the [dedicated support page](../../Scientific_Computing/Supported_Applications/Apptainer.md) ### Slurm configuration diff --git a/docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md b/docs/Software/Parallel_Computing/MPI_Scaling_Example.md similarity index 100% rename from docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md rename to docs/Software/Parallel_Computing/MPI_Scaling_Example.md diff --git a/docs/Getting_Started/Next_Steps/Multithreading_Scaling_Example.md b/docs/Software/Parallel_Computing/Multithreading_Scaling_Example.md similarity index 99% rename from docs/Getting_Started/Next_Steps/Multithreading_Scaling_Example.md rename to docs/Software/Parallel_Computing/Multithreading_Scaling_Example.md index a82359307..09c1ea98d 100644 --- a/docs/Getting_Started/Next_Steps/Multithreading_Scaling_Example.md +++ b/docs/Software/Parallel_Computing/Multithreading_Scaling_Example.md @@ -274,4 +274,4 @@ memory as we may otherwise have run out. about 20% more wall time and memory than you think you are going to need to minimise the chance of your jobs failing due to a lack of resources. Your project's fair share score considers the time actually used by the - job, not the time requested by the job. \ No newline at end of file + job, not the time requested by the job.
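The Dask-MPI Slurm configuration patched above (one Python process per MPI rank, at least three ranks, with ranks 0 and 1 doing much less work than the workers) could be sketched as a minimal submission script. This is only an illustration: the project code, module name, Conda environment name, `myscript.py`, and all resource numbers below are placeholder assumptions, not the repository's actual example.

```sl
#!/bin/bash -e
#SBATCH --job-name=dask-mpi-sketch
#SBATCH --account=nesi99999   # placeholder project code
#SBATCH --ntasks=5            # at least 3: ranks 0 and 1 handle Dask-MPI bookkeeping, ranks 2+ are workers
#SBATCH --time=00:10:00

module purge
module load Miniforge3        # assumed environment module name
source activate dask-env      # hypothetical Conda environment providing dask-mpi

# Slurm launches one Python process per task; Dask-MPI then assigns
# roles by rank (the client script itself typically continues on rank 1).
srun python myscript.py
```

Submitted with `sbatch`, this requests five MPI ranks, leaving three worker ranks for the actual computation.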
diff --git a/docs/Software/OpenMP_settings.md b/docs/Software/Parallel_Computing/OpenMP_settings.md similarity index 94% rename from docs/Software/OpenMP_settings.md rename to docs/Software/Parallel_Computing/OpenMP_settings.md index f4f939244..0a98de5dc 100644 --- a/docs/Software/OpenMP_settings.md +++ b/docs/Software/Parallel_Computing/OpenMP_settings.md @@ -20,7 +20,7 @@ all that is necessary to get 16 OpenMP threads is: in your Slurm script - although this can sometimes be more complicated, e.g., with -[TensorFlow on CPUs](../Scientific_Computing/Supported_Applications/TensorFlow_on_CPUs.md). +[TensorFlow on CPUs](../../Scientific_Computing/Supported_Applications/TensorFlow_on_CPUs.md). In order to achieve good and consistent parallel scaling, additional settings may be required. This is particularly true on Mahuika where @@ -30,7 +30,7 @@ consistent, additional information can be found in our article [Thread Placement and Thread Affinity](./Thread_Placement_and_Thread_Affinity.md). 1. `--threads-per-core=2`. Use this option to tell srun or sbatch to -that you want to use [Hyperthreading](../Scientific_Computing/Batch_Jobs/Hyperthreading.md), +that you want to use [Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md), so use both of the virual CPUs available on each physical core, halving the number of physical cores you occupy. 
If you use hyperthreading, you will be charged for the number of physical cores that diff --git a/docs/Getting_Started/Next_Steps/Parallel_Execution.md b/docs/Software/Parallel_Computing/Parallel_Execution.md similarity index 100% rename from docs/Getting_Started/Next_Steps/Parallel_Execution.md rename to docs/Software/Parallel_Computing/Parallel_Execution.md diff --git a/docs/Software/Thread_Placement_and_Thread_Affinity.md b/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md similarity index 98% rename from docs/Software/Thread_Placement_and_Thread_Affinity.md rename to docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md index 3146c0343..2066a5bea 100644 --- a/docs/Software/Thread_Placement_and_Thread_Affinity.md +++ b/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md @@ -8,7 +8,7 @@ status: deprecated Multithreading with OpenMP and other threading libraries is an important way to parallelise scientific software for faster execution (see our article on [Parallel -Execution](./Getting_Started/Next_Steps/Parallel_Execution.md) for +Execution](../Getting_Started/Next_Steps/Parallel_Execution.md) for an introduction). Care needs to be taken when running multiple threads on the HPC to achieve best performance - getting it wrong can easily increase compute times by tens of percents, sometimes even more. This is @@ -34,7 +34,7 @@ performance, as a socket connects the processor to its RAM and other processors. A processor in each socket consists of multiple physical cores, and each physical core is split into two logical cores using a technology called -[Hyperthreading](./Scientific_Computing/Batch_Jobs/Hyperthreading.md)). +[Hyperthreading](../Scientific_Computing/Batch_Jobs/Hyperthreading.md)). A processor also includes caches - a [cache](https://en.wikipedia.org/wiki/CPU_cache) is very fast memory @@ -48,7 +48,7 @@ cores (our current HPCs have 18 to 20 cores). 
Each core can also be further divided into two logical cores (or hyperthreads, as mentioned before). -![NodeSocketCore.png](./assets/images/Thread_Placement_and_Thread_Affinity.png) +![NodeSocketCore.png](../assets/images/Thread_Placement_and_Thread_Affinity.png) It is very important to note the following: diff --git a/docs/Getting_Started/Next_Steps/Finding_Job_Efficiency.md b/docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md similarity index 100% rename from docs/Getting_Started/Next_Steps/Finding_Job_Efficiency.md rename to docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md diff --git a/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md b/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md similarity index 95% rename from docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md rename to docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md index 8cf4e5379..9ddf0ac20 100644 --- a/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md +++ b/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md @@ -74,8 +74,8 @@ will not have waited for hours or days in the queue beforehand. !!! example - - [Multithreading Scaling](../../Getting_Started/Next_Steps/Multithreading_Scaling_Example.md) - - [MPI Scaling](../../Getting_Started/Next_Steps/MPI_Scaling_Example.md) + - [Multithreading Scaling](../../Software/Parallel_Computing/Multithreading_Scaling_Example.md) + - [MPI Scaling](../../Software/Parallel_Computing/MPI_Scaling_Example.md) !!! 
tip "Webinar: How to estimate CPU, memory & time needs" diff --git a/docs/Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md b/docs/Storage/Moving_files_to_and_from_the_cluster.md similarity index 80% rename from docs/Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md rename to docs/Storage/Moving_files_to_and_from_the_cluster.md index feb602d83..1c2a32b2e 100644 --- a/docs/Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md +++ b/docs/Storage/Moving_files_to_and_from_the_cluster.md @@ -8,19 +8,19 @@ tags: --- !!! prerequisite - Have an [active account and project.](../Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md) + Have an [active account and project.](../Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md) -Find more information on [our filesystem](../../Storage/File_Systems_and_Quotas/Filesystems_and_Quotas.md). +Find more information on [our filesystem](./File_Systems_and_Quotas/Filesystems_and_Quotas.md). ## OnDemand Requiring only a web browser, the instructions are same whether your are connecting from a Windows, Mac or a Linux computer. -See [OnDemand how to guide](../../Scientific_Computing/Interactive_computing_with_OnDemand/how_to_guide.md) for more info. +See [OnDemand how to guide](../Scientific_Computing/Interactive_computing_with_OnDemand/how_to_guide.md) for more info. ## Standard Terminal !!! prerequisite - Have SSH setup as described in [Standard Terminal Setup](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md) + Have SSH setup as described in [Standard Terminal Setup](../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md) In a local terminal the following commands can be used to: @@ -38,7 +38,7 @@ scp mahuika: !!! note - This will only work if you have set up aliases as described in - [Terminal Setup](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md). 
+ [Terminal Setup](../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md). - As the term 'mahuika' is defined locally, the above commands *only works when using a local terminal* (i.e. not on Mahuika). - If you are using Windows subsystem, the root paths are different @@ -54,7 +54,7 @@ your password. ## File Managers !!! prerequisite - Have SSH setup as described in [Standard Terminal Setup](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md) + Have SSH setup as described in [Standard Terminal Setup](../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md) Most file managers can be used to connect to a remote directory simply by typing in the address bar provided your have an active connection to @@ -67,22 +67,22 @@ This **does not** work for File Explorer (Windows default) This **does not** work for Finder (Mac default) -![files](../../assets/images/Moving_files_to_and_from_the_cluster_1.png) +![files](../assets/images/Moving_files_to_and_from_the_cluster_1.png) If your default file manager does not support mounting over SFTP, see -[Can I use SSHFS to mount the cluster filesystem on my local machine?](../../General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md). +[Can I use SSHFS to mount the cluster filesystem on my local machine?](../General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md). ## MobaXterm !!! prerequisite - [MobaXterm Setup Windows](../../Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md) + [MobaXterm Setup Windows](../Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md) See [Standard Terminal]](Moving_files_to_and_from_the_cluster.md#standard-terminal), [Rclone]](Moving_files_to_and_from_the_cluster.md#rclone), or [Rsync]](Moving_files_to_and_from_the_cluster.md#rsync) for information on how to move files to and from the HPC in the terminal. ## WinSCP !!! 
prerequisite - [WinSCP-PuTTY Setup Windows](../../Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md) + [WinSCP-PuTTY Setup Windows](../Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md) As WinSCP uses multiple tunnels for file transfer you will be required to authenticate again on your first file operation of the session. The @@ -94,7 +94,7 @@ authentication. Globus is available for those with large amounts of data, security concerns, or connection consistency issues. You can find more details in -[Data_Transfer_using_Globus](../../Storage/Data_Transfer_Services/Data_Transfer_using_Globus.md). +[Data_Transfer_using_Globus](./Data_Transfer_Services/Data_Transfer_using_Globus.md). ## Rclone diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml index ab620bb7e..db0465b06 100644 --- a/docs/redirect_map.yml +++ b/docs/redirect_map.yml @@ -102,3 +102,59 @@ General/Announcements/December_holiday_support_restrictions.md : Announcements/D General/Announcements/Identity_Changes_for_Crown_Research_Institutes.md : Announcements/Identity_Changes_for_Crown_Research_Institutes.md General/Announcements/Known_Issues_HPC3.md : Announcements/Known_Issues_HPC3.md General/Release_Notes/index.md : Announcements/Release_Notes/index.md +Scientific_Computing/Terminal_Setup/Git_Bash_Windows.md : Getting_Started/Accessing_the_HPCs/Git_Bash_Windows.md +Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md : Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md +Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md : Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md +Scientific_Computing/Terminal_Setup/VSCode.md : Getting_Started/Accessing_the_HPCs/VSCode.md +Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md : Getting_Started/Accessing_the_HPCs/Windows_Subsystem_for_Linux_WSL.md +Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md : 
Getting_Started/Accessing_the_HPCs/WinSCP-PuTTY_Setup_Windows.md +Scientific_Computing/Terminal_Setup/X11.md : Getting_Started/Accessing_the_HPCs/X11.md +Getting_Started/Next_Steps/Parallel_Execution.md : Software/Parallel_Computing/Parallel_Execution.md +Getting_Started/Next_Steps/Multithreading_Scaling_Example.md : Software/Parallel_Computing/Multithreading_Scaling_Example.md +Getting_Started/Next_Steps/MPI_Scaling_Example.md : Software/Parallel_Computing/MPI_Scaling_Example.md +Getting_Started/Next_Steps/Finding_Job_Efficiency.md : Software/Profiling_and_Debugging/Finding_Job_Efficiency.md +Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md : Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md +Software/OpenMP_settings.md : Software/Parallel_Computing/OpenMP_settings.md +Software/Configuring_Dask_MPI_jobs.md : Software/Parallel_Computing/Configuring_Dask_MPI_jobs.md +Software/Thread_Placement_and_Thread_Affinity.md : Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md +General/Policy/Acceptable_Use_Policy.md : Getting_Started/Policy/Acceptable_Use_Policy.md +General/Policy/Access_Policy.md : Getting_Started/Policy/Access_Policy.md +General/Policy/Account_Requests_for_non_Tuakiri_Members.md : Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md +General/Policy/Acknowledgement-Citation_and_Publication.md : Getting_Started/Policy/Acknowledgement-Citation_and_Publication.md +General/Policy/Allocation_classes.md : Getting_Started/Policy/Allocation_classes.md +General/Policy/Application_Support_Model.md : Getting_Started/Policy/Application_Support_Model.md +General/Policy/How_we_review_applications.md : Getting_Started/Policy/How_we_review_applications.md +General/Policy/Institutional_allocations.md : Getting_Started/Policy/Institutional_allocations.md +General/Policy/Licence_Policy.md : Getting_Started/Policy/Licence_Policy.md +General/Policy/Merit_allocations.md : 
Getting_Started/Policy/Merit_allocations.md +General/Policy/Postgraduate_allocations.md : Getting_Started/Policy/Postgraduate_allocations.md +General/Policy/Privacy_Policy.md : Getting_Started/Policy/Privacy_Policy.md +General/Policy/Proposal_Development_allocations.md : Getting_Started/Policy/Proposal_Development_allocations.md +General/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md : Getting_Started/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md +General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md : Getting_Started/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md +General/FAQs/Common_questions_about_the_platform_refresh.md : Getting_Started/FAQs/Common_questions_about_the_platform_refresh.md +General/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md : Getting_Started/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md +General/FAQs/How_busy_is_the_cluster.md : Getting_Started/FAQs/How_busy_is_the_cluster.md +General/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md : Getting_Started/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md +General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md : Getting_Started/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md +General/FAQs/How_can_I_view_images_generated_on_the_cluster.md : Getting_Started/FAQs/How_can_I_view_images_generated_on_the_cluster.md +General/FAQs/How_do_I_find_out_the_size_of_a_directory.md : Getting_Started/FAQs/How_do_I_find_out_the_size_of_a_directory.md +General/FAQs/How_do_I_fix_my_locale_and_language_settings.md : Getting_Started/FAQs/How_do_I_fix_my_locale_and_language_settings.md +General/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md : Getting_Started/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md +General/FAQs/How_do_I_request_memory.md : 
Getting_Started/FAQs/How_do_I_request_memory.md +General/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md : Getting_Started/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md +General/FAQs/Ive_run_out_of_storage_space.md : Getting_Started/FAQs/Ive_run_out_of_storage_space.md +General/FAQs/Login_Troubleshooting.md : Getting_Started/FAQs/Login_Troubleshooting.md +General/FAQs/Mahuika_HPC3_Differences.md : Getting_Started/FAQs/Mahuika_HPC3_Differences.md +General/FAQs/What_are_my-bashrc_and-bash_profile_for.md : Getting_Started/FAQs/What_are_my-bashrc_and-bash_profile_for.md +General/FAQs/What_does_oom_kill_mean.md : Getting_Started/FAQs/What_does_oom_kill_mean.md +General/FAQs/What_is_a_core_file.md : Getting_Started/FAQs/What_is_a_core_file.md +General/FAQs/What_Is_A_Trusted_Device.md : Getting_Started/FAQs/What_Is_A_Trusted_Device.md +General/FAQs/What_is_Multiple_Factor_Authentication_MFA.md : Getting_Started/FAQs/What_is_Multiple_Factor_Authentication_MFA.md +General/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md : Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md +General/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md : Getting_Started/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md +General/FAQs/Why_am_I_seeing_Account_is_not_ready.md : Getting_Started/FAQs/Why_am_I_seeing_Account_is_not_ready.md +General/FAQs/Why_does_my_program_crash.md : Getting_Started/FAQs/Why_does_my_program_crash.md +General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md : Getting_Started/FAQs/Why_is_my_job_taking_a_long_time_to_start.md +Getting_Started/Next_Steps/Submitting_your_first_job.md : Batch_Computing/Submitting_your_first_job.md +Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md : Storage/Moving_files_to_and_from_the_cluster.md From 3759798f55947f9fd28fc8b30e7527f1422d2646 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Mon, 1 Dec 2025 
14:15:49 +1300 Subject: [PATCH 09/25] Delete sci comp --- .../Introduction_to_computing_on_the_NeSI_HPC.md | 0 ..._to_computing_on_the_NeSI_HPC_YouTube_Recordings.md | 0 .../Getting_Help}/Webinars.md | 0 .../Getting_Help}/Workshops.md | 0 docs/Scientific_Computing/.pages.yml | 10 ---------- docs/Scientific_Computing/Training/.pages.yml | 2 -- docs/redirect_map.yml | 4 ++++ 7 files changed, 4 insertions(+), 12 deletions(-) rename docs/{Scientific_Computing/Training => Getting_Started/Getting_Help}/Introduction_to_computing_on_the_NeSI_HPC.md (100%) rename docs/{Scientific_Computing/Training => Getting_Started/Getting_Help}/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md (100%) rename docs/{Scientific_Computing/Training => Getting_Started/Getting_Help}/Webinars.md (100%) rename docs/{Scientific_Computing/Training => Getting_Started/Getting_Help}/Workshops.md (100%) delete mode 100644 docs/Scientific_Computing/.pages.yml delete mode 100644 docs/Scientific_Computing/Training/.pages.yml diff --git a/docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md b/docs/Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC.md similarity index 100% rename from docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md rename to docs/Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC.md diff --git a/docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md b/docs/Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md similarity index 100% rename from docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md rename to docs/Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md diff --git a/docs/Scientific_Computing/Training/Webinars.md b/docs/Getting_Started/Getting_Help/Webinars.md similarity index 100% rename from 
docs/Scientific_Computing/Training/Webinars.md rename to docs/Getting_Started/Getting_Help/Webinars.md diff --git a/docs/Scientific_Computing/Training/Workshops.md b/docs/Getting_Started/Getting_Help/Workshops.md similarity index 100% rename from docs/Scientific_Computing/Training/Workshops.md rename to docs/Getting_Started/Getting_Help/Workshops.md diff --git a/docs/Scientific_Computing/.pages.yml b/docs/Scientific_Computing/.pages.yml deleted file mode 100644 index 9659ae0c1..000000000 --- a/docs/Scientific_Computing/.pages.yml +++ /dev/null @@ -1,10 +0,0 @@ -nav: - - Supported_Applications - - Training - - Interactive_computing_with_OnDemand - - Batch_Jobs - - Profiling_and_Debugging - - HPC_Software_Environment - - Terminal_Setup - - Research_Developer_Cloud - - "*" diff --git a/docs/Scientific_Computing/Training/.pages.yml b/docs/Scientific_Computing/Training/.pages.yml deleted file mode 100644 index 7a6c3d54c..000000000 --- a/docs/Scientific_Computing/Training/.pages.yml +++ /dev/null @@ -1,2 +0,0 @@ -nav: - - "*" diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml index db0465b06..ef8a5a9fa 100644 --- a/docs/redirect_map.yml +++ b/docs/redirect_map.yml @@ -158,3 +158,7 @@ General/FAQs/Why_does_my_program_crash.md : Getting_Started/FAQs/Why_does_my_pro General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md : Getting_Started/FAQs/Why_is_my_job_taking_a_long_time_to_start.md Getting_Started/Next_Steps/Submitting_your_first_job.md : Batch_Computing/Submitting_your_first_job.md Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md : Storage/Moving_files_to_and_from_the_cluster.md +Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md : Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC_YouTube_Recordings.md +Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md : Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC.md 
+Scientific_Computing/Training/Webinars.md : Getting_Started/Getting_Help/Webinars.md +Scientific_Computing/Training/Workshops.md : Getting_Started/Getting_Help/Workshops.md From b594dcc0feca3c6764525cfafbca252135cb8088 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Mon, 1 Dec 2025 14:19:27 +1300 Subject: [PATCH 10/25] fix redirects --- .../File_Systems_and_Quotas/Filesystems_and_Quotas.md | 6 ++++-- docs/redirect_map.yml | 9 --------- 2 files changed, 4 insertions(+), 11 deletions(-) diff --git a/docs/Storage/File_Systems_and_Quotas/Filesystems_and_Quotas.md b/docs/Storage/File_Systems_and_Quotas/Filesystems_and_Quotas.md index 1a16aa914..6fb756ad5 100644 --- a/docs/Storage/File_Systems_and_Quotas/Filesystems_and_Quotas.md +++ b/docs/Storage/File_Systems_and_Quotas/Filesystems_and_Quotas.md @@ -102,7 +102,8 @@ apply per-project disk space quotas to projects on this filesystem. The default per-project quotas are as described in the above table; if you require more temporary (scratch) space for your project than the default quota allows for, you can discuss your -requirements with us during [the project application process](../../General/Policy/How_we_review_applications.md), +requirements with us during +[the project application process](../../General/Policy/How_we_review_applications.md), or {% include "partials/support_request.html" %} at any time. To ensure this filesystem remains fit-for-purpose, we have a regular @@ -125,7 +126,8 @@ an Automatic Tape Library (ATL). Files will remain on Freezer temporarily, typically for hours to days, before being moved to tape. A catalogue of files on tape will remain on the disk for quick access. -See more information about the long term storage see our [documentation about the Freezer storage service](../Long_Term_Storage/Freezer_long_term_storage.md). +For more information about long-term storage, see our +[documentation about the Freezer storage service](../Long_Term_Storage/Freezer_long_term_storage.md).
## Snapshots diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml index ef8a5a9fa..a882a5f53 100644 --- a/docs/redirect_map.yml +++ b/docs/redirect_map.yml @@ -1,9 +1,4 @@ -General/FAQs/Two_Factor_Authentication_FAQ.md : General/FAQs/What_is_Multiple_Factor_Authentication_MFA.md -Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup_HPC3.md: Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md -General/FAQs/How_to_replace_my_2FA_token.md: General/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md -General/FAQs/How_to_replace_my_2FA.md: General/FAQs/How_do_I_replace_my_Additional_Authentication_Credentials.md Scientific_Computing/Terminal_Setup/Ubuntu_LTS_terminal_Windows.md: Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md -General/FAQs/How_can_I_see_how_busy_the_cluster_is.md: General/FAQs/How_busy_is_the_cluster.md hc.md: index.md hc/en-gb.md: index.md Storage/Freezer_long_term_storage.md : Storage/Long_Term_Storage/Freezer_long_term_storage.md @@ -78,10 +73,7 @@ Scientific_Computing/Batch_Jobs/Job_Checkpointing.md : Batch_Computing/Job_Check Scientific_Computing/Batch_Jobs/Job_Limits.md : Batch_Computing/Job_Limits.md Scientific_Computing/Batch_Jobs/Job_prioritisation.md : Batch_Computing/Job_prioritisation.md Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md : Batch_Computing/SLURM-Best_Practice.md -Scientific_Computing/Batch_Jobs/Using_GPUs.md : Batch_Computing/Using_GPUs.md -Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md : Software/Configuring_Dask_MPI_jobs.md Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md : Software/Thread_Placement_and_Thread_Affinity.md -Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md : Software/OpenMP_settings.md Scientific_Computing/HPC_Software_Environment/Temporary_directories.md : Batch_Computing/Temporary_directories.md 
Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md : Interactive_Computing/Slurm_Interactive_Sessions.md
Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md : Interactive_Computing/Slurm_Interactive_Sessions.md
@@ -116,7 +108,6 @@ Getting_Started/Next_Steps/Finding_Job_Efficiency.md : Software/Profiling_and_De
 Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md : Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md
 Software/OpenMP_settings.md : Software/Parallel_Computing/OpenMP_settings.md
 Software/Configuring_Dask_MPI_jobs.md : Software/Parallel_Computing/Configuring_Dask_MPI_jobs.md
-Software/Thread_Placement_and_Thread_Affinity.md : Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md
 General/Policy/Acceptable_Use_Policy.md : Getting_Started/Policy/Acceptable_Use_Policy.md
 General/Policy/Access_Policy.md : Getting_Started/Policy/Access_Policy.md
 General/Policy/Account_Requests_for_non_Tuakiri_Members.md : Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md

From 768d822a1c30ce0e52bccdf0b06de2583a527bef Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Mon, 1 Dec 2025 14:55:13 +1300
Subject: [PATCH 11/25] link fixes

---
 docs/Announcements/.pages.yml | 3 ++-
 docs/Batch_Computing/.pages.yml | 1 +
 docs/Getting_Started/.pages.yml | 1 -
 .../Parallel_Computing}/Hyperthreading.md | 4 ++--
 docs/redirect_map.yml | 1 +
 5 files changed, 6 insertions(+), 4 deletions(-)
 rename docs/{Batch_Computing => Software/Parallel_Computing}/Hyperthreading.md (99%)

diff --git a/docs/Announcements/.pages.yml b/docs/Announcements/.pages.yml
index 6164722a4..15ebe5fee 100644
--- a/docs/Announcements/.pages.yml
+++ b/docs/Announcements/.pages.yml
@@ -1,7 +1,8 @@
 ---
 nav:
-  - Release Notes: Release_Notes
   - Autodeletion_returning_for_scratch_filesystem.md
   - December_holiday_support_restrictions.md
   - Identity_Changes_for_Crown_Research_Institutes.md
+  - Known_Issues_HPC3
+  - Release_Notes
diff --git a/docs/Batch_Computing/.pages.yml b/docs/Batch_Computing/.pages.yml
index 6056cc39d..1490d5275 100644
--- a/docs/Batch_Computing/.pages.yml
+++ b/docs/Batch_Computing/.pages.yml
@@ -1,4 +1,5 @@
 nav:
+  - Submitting_your_first_job.md
   - Hardware.md
   - Job_prioritisation.md
   - SLURM-Best_Practice.md
diff --git a/docs/Getting_Started/.pages.yml b/docs/Getting_Started/.pages.yml
index 799298a11..60b008d44 100644
--- a/docs/Getting_Started/.pages.yml
+++ b/docs/Getting_Started/.pages.yml
@@ -2,7 +2,6 @@ nav:
   - Accounts, Projects and Allocations : Accounts-Projects_and_Allocations
   - Accessing_the_HPCs
-  - Next_Steps
   - Getting_Help
   - Cheat_Sheets
   - "*"
diff --git a/docs/Batch_Computing/Hyperthreading.md b/docs/Software/Parallel_Computing/Hyperthreading.md
similarity index 99%
rename from docs/Batch_Computing/Hyperthreading.md
rename to docs/Software/Parallel_Computing/Hyperthreading.md
index 52f7a2b9d..27886cfe0 100644
--- a/docs/Batch_Computing/Hyperthreading.md
+++ b/docs/Software/Parallel_Computing/Hyperthreading.md
@@ -34,7 +34,7 @@ once your job starts you will have twice the number of CPUs as `ntasks`.
 If you set `--cpus-per-task=n`, Slurm will request `n` logical CPUs per
 task, i.e., will set `n` threads for the job.
 Your code must be capable of running Hyperthreaded (for example using
-[OpenMP](../HPC_Software_Environment/OpenMP_settings.md))
+[OpenMP](../../HPC_Software_Environment/OpenMP_settings.md))
 if `--cpus-per-task > 1`.

 Setting `--hint=nomultithread` with `srun` or `sbatch` causes Slurm to
@@ -187,7 +187,7 @@ considered a bonus.
     for MPI jobs that request the same number of tasks on every node, we
     recommend to specify `--mem` (i.e. memory per node) instead. See
     [How to request memory
-    (RAM)](../../General/FAQs/How_do_I_request_memory.md) for more
+    (RAM)](../../../General/FAQs/How_do_I_request_memory.md) for more
     information.
 - Non-MPI jobs which specify `--cpus-per-task` and use **srun**
   should also set `--ntasks=1`, otherwise the program will be run twice in
diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml
index a882a5f53..9f3bedf32 100644
--- a/docs/redirect_map.yml
+++ b/docs/redirect_map.yml
@@ -153,3 +153,4 @@ Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_
 Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md : Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC.md
 Scientific_Computing/Training/Webinars.md : Getting_Started/Getting_Help/Webinars.md
 Scientific_Computing/Training/Workshops.md : Getting_Started/Getting_Help/Workshops.md
+Batch_Computing/Hyperthreading.md : Software/Parallel_Computing/Hyperthreading.md

From 7d871915cc803bc4c2ec17bbdbcd5678528e6f75 Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Mon, 1 Dec 2025 18:55:04 +1300
Subject: [PATCH 12/25] fix links

---
 docs/Announcements/Known_Issues_HPC3.md | 2 +-
 docs/Batch_Computing/Fair_Share.md | 2 +-
 docs/Batch_Computing/Job_prioritisation.md | 2 +-
 docs/Batch_Computing/SLURM-Best_Practice.md | 6 +-
 .../Slurm_Interactive_Sessions.md | 2 +-
 docs/Batch_Computing/Using_GPUs.md | 12 +-
 .../Connecting_to_the_Cluster.md | 16 +-
 .../MobaXterm_Setup_Windows.md | 2 +-
 .../Accessing_the_HPCs/Port_Forwarding.md | 8 +-
 .../Accessing_the_HPCs/VSCode.md | 2 +-
 .../Windows_Subsystem_for_Linux_WSL.md | 2 +-
 .../Applying_for_a_new_project.md | 6 +-
 .../Creating_an_Account_Profile.md | 2 +-
 .../What_is_an_allocation.md | 6 +-
 .../Cheat_Sheets/Slurm-Reference_Sheet.md | 8 +-
 ...change_my_time_zone_to_New_Zealand_time.md | 2 +-
 ..._I_view_images_generated_on_the_cluster.md | 2 +-
 .../FAQs/How_do_I_request_memory.md | 2 +-
 ..._I_run_my_Python_Notebook_through_SLURM.md | 2 +-
 .../FAQs/Mahuika_HPC3_Differences.md | 2 +-
 .../FAQs/What_is_a_core_file.md | 2 +-
 ...d_for_Machine_Learning_and_data_science.md | 12 +-
 .../FAQs/Why_does_my_program_crash.md | 2 +-
 ...y_is_my_job_taking_a_long_time_to_start.md | 2 +-
 ...ccount_Requests_for_non_Tuakiri_Members.md | 2 +-
 .../Policy/How_we_review_applications.md | 2 +-
 .../Policy/Institutional_allocations.md | 6 +-
 .../Policy/Merit_allocations.md | 2 +-
 .../Policy/Postgraduate_allocations.md | 2 +-
 .../Proposal_Development_allocations.md | 2 +-
 .../Logging_in_to_my-nesi-org-nz.md | 2 +-
 .../my-nesi-org-nz_release_notes_v2-21-0.md | 2 +-
 .../Jupyter_kernels_Manual_management.md | 6 +-
 .../Jupyter_kernels_Manual_management.md.bak | 286 ++++++++++++++++++
 ...er_kernels_Tool_assisted_management.md.bak | 160 ++++++++++
 .../OnDemand/Apps/JupyterLab/index.md | 2 +-
 .../OnDemand/Apps/JupyterLab/index.md.bak | 106 +++++++
 .../OnDemand/Apps/RStudio.md | 112 +++----
 .../Slurm_Interactive_Sessions.md | 2 +-
 .../Software/Available_Applications/ABAQUS.md | 2 +-
 .../Software/Available_Applications/COMSOL.md | 2 +-
 .../Available_Applications/Delft3D.md | 2 +-
 .../Available_Applications/GROMACS.md | 2 +-
 docs/Software/Available_Applications/Keras.md | 4 +-
 .../Available_Applications/Lambda_Stack.md | 4 +-
 .../Software/Available_Applications/MATLAB.md | 2 +-
 .../Available_Applications/Miniforge3.md | 4 +-
 .../Available_Applications/Supernova.md | 2 +-
 .../TensorFlow_on_CPUs.md | 4 +-
 .../TensorFlow_on_GPUs.md | 4 +-
 docs/Software/Available_Applications/VASP.md | 4 +-
 .../Available_Applications/fastStructure.md | 2 +-
 .../Installing_Applications_Yourself.md | 6 +-
 .../Configuring_Dask_MPI_jobs.md | 6 +-
 .../Parallel_Computing/Hyperthreading.md | 4 +-
 .../Parallel_Computing/MPI_Scaling_Example.md | 2 +-
 .../Parallel_Computing/OpenMP_settings.md | 4 +-
 .../Parallel_Computing/Parallel_Execution.md | 2 +-
 .../Thread_Placement_and_Thread_Affinity.md | 4 +-
 .../Finding_Job_Efficiency.md | 2 +-
 ...Job_Scaling_Ascertaining_job_dimensions.md | 2 +-
 .../File_permissions_and_groups.md | 6 +-
 .../Filesystems_and_Quotas.md | 2 +-
 .../Moving_files_to_and_from_the_cluster.md | 14 +-
 fixlinks.py | 88 ++++++
 65 files changed, 810 insertions(+), 170 deletions(-)
 create mode 100644 docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md.bak
 create mode 100644 docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md.bak
 create mode 100644 docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md.bak
 create mode 100644 fixlinks.py

diff --git a/docs/Announcements/Known_Issues_HPC3.md b/docs/Announcements/Known_Issues_HPC3.md
index d84a24bea..b65d58070 100644
--- a/docs/Announcements/Known_Issues_HPC3.md
+++ b/docs/Announcements/Known_Issues_HPC3.md
@@ -8,7 +8,7 @@ tags:
 Below is a list of issues that we're actively working on. We hope to have these
 resolved soon. This is intended to be a temporary page.

-For differences between the new platforms and Mahuika, see the more permanent [differences from Mahuika](../General/FAQs/Mahuika_HPC3_Differences.md).
+For differences between the new platforms and Mahuika, see the more permanent [differences from Mahuika](../Getting_Started/FAQs/Mahuika_HPC3_Differences.md).

 !!! info "Recently fixed"
diff --git a/docs/Batch_Computing/Fair_Share.md b/docs/Batch_Computing/Fair_Share.md
index 9d22c77e0..1430b05c1 100644
--- a/docs/Batch_Computing/Fair_Share.md
+++ b/docs/Batch_Computing/Fair_Share.md
@@ -18,7 +18,7 @@ Your *Fair Share score* is a number between **0** and **1**. Projects
 with a **larger** Fair Share score receive a **higher priority** in the
 queue.

-A project is given an [allocation of compute units](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
+A project is given an [allocation of compute units](../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
 over a given **period**. An institution also has a percentage **Fair
 Share entitlement** of each machine's deliverable capacity over that
 same period.
diff --git a/docs/Batch_Computing/Job_prioritisation.md b/docs/Batch_Computing/Job_prioritisation.md
index cac7da8c8..61c78149c 100644
--- a/docs/Batch_Computing/Job_prioritisation.md
+++ b/docs/Batch_Computing/Job_prioritisation.md
@@ -29,7 +29,7 @@ jobs, but is limited to one small job per user at a time: no more than
 Job priority decreases whenever the project uses more core-hours than
 expected, across all partitions.

-This [Fair Share](../../Scientific_Computing/Batch_Jobs/Fair_Share.md)
+This [Fair Share](Fair_Share.md)
 policy means that projects that have consumed many CPU core hours in
 the recent past compared to their expected rate of use (either by
 submitting and running many jobs, or by submitting and running large jobs) will
diff --git a/docs/Batch_Computing/SLURM-Best_Practice.md b/docs/Batch_Computing/SLURM-Best_Practice.md
index c7cae8414..c0dae4dab 100644
--- a/docs/Batch_Computing/SLURM-Best_Practice.md
+++ b/docs/Batch_Computing/SLURM-Best_Practice.md
@@ -44,7 +44,7 @@ etc).
 ### Memory (RAM)

 If you request more memory (RAM) than you need for your job, it
-[will wait longer in the queue and will be more expensive when it runs](../../General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md).
+[will wait longer in the queue and will be more expensive when it runs](../Getting_Started/FAQs/Why_is_my_job_taking_a_long_time_to_start.md).
 On the other hand, if you don't request enough memory, the job may be
 killed for attempting to exceed its allocated memory limits.

@@ -53,7 +53,7 @@ your program will need at peak memory usage. We also recommend using
 `--mem` instead of `--mem-per-cpu` in most cases. There are a few
 kinds of jobs for which `--mem-per-cpu` is more
-suitable. See [our article on how to request memory](../../General/FAQs/How_do_I_request_memory.md)
+suitable. See [our article on how to request memory](../Getting_Started/FAQs/How_do_I_request_memory.md)
 for more information.
 ## Parallelism

@@ -77,4 +77,4 @@ job array in a single command)
 A low fairshare score will affect your job's priority in the queue,
 learn more about how to effectively use your allocation,
-[Fair Share](../../Scientific_Computing/Batch_Jobs/Fair_Share.md).
+[Fair Share](Fair_Share.md).
diff --git a/docs/Batch_Computing/Slurm_Interactive_Sessions.md b/docs/Batch_Computing/Slurm_Interactive_Sessions.md
index 62cd02a92..e7a827795 100644
--- a/docs/Batch_Computing/Slurm_Interactive_Sessions.md
+++ b/docs/Batch_Computing/Slurm_Interactive_Sessions.md
@@ -12,7 +12,7 @@ you to use them interactively as you would the login node.
 There are two main commands that can be used to make a session, `srun`
 and `salloc`, both of which use most of the same options available to
 `sbatch` (see
-[our Slurm Reference Sheet](../../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md)).
+[our Slurm Reference Sheet](../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md)).

 !!! warning
     An interactive session will, once it starts, use the entire requested
diff --git a/docs/Batch_Computing/Using_GPUs.md b/docs/Batch_Computing/Using_GPUs.md
index 685ba7a42..d2aa6dc96 100644
--- a/docs/Batch_Computing/Using_GPUs.md
+++ b/docs/Batch_Computing/Using_GPUs.md
@@ -18,7 +18,7 @@ This page provides generic information about how to access GPUs through the Slur
 ## Request GPU resources using Slurm

-To request a GPU for your [Slurm job](../../Getting_Started/Next_Steps/Submitting_your_first_job.md), add
+To request a GPU for your [Slurm job](Submitting_your_first_job.md), add
 the following option in the header of your submission script:

 ```sl
@@ -229,12 +229,12 @@ CUDA_VISIBLE_DEVICES=0
 The following pages provide additional information for supported applications:

-- [ABAQUS](../../Software/Available_Applications/ABAQUS.md#examples)
-- [GROMACS](../../Software/Available_Applications/GROMACS.md)
-- [Lambda Stack](../../Software/Available_Applications/Lambda_Stack.md)
+- [ABAQUS](../Software/Available_Applications/ABAQUS.md#examples)
+- [GROMACS](../Software/Available_Applications/GROMACS.md)
+- [Lambda Stack](../Software/Available_Applications/Lambda_Stack.md)
 - [Matlab](../../Software/Available_Applications/MATLAB.md#using-gpus)
-- [TensorFlow on GPUs](../../Software/Available_Applications/TensorFlow_on_GPUs.md)
+- [TensorFlow on GPUs](../Software/Available_Applications/TensorFlow_on_GPUs.md)

 And programming toolkits:

-- [NVIDIA GPU Containers](../../Software/Containers/NVIDIA_GPU_Containers.md)
+- [NVIDIA GPU Containers](../Software/Containers/NVIDIA_GPU_Containers.md)
diff --git a/docs/Getting_Started/Accessing_the_HPCs/Connecting_to_the_Cluster.md b/docs/Getting_Started/Accessing_the_HPCs/Connecting_to_the_Cluster.md
index 33ada6fbf..b13320395 100644
--- a/docs/Getting_Started/Accessing_the_HPCs/Connecting_to_the_Cluster.md
+++ b/docs/Getting_Started/Accessing_the_HPCs/Connecting_to_the_Cluster.md
@@ -27,7 +27,7 @@ operating system and level of experience.
 !!! tip "What next?"
     - More info on
-      [NeSI OnDemand](../../Scientific_Computing/Interactive_computing_with_OnDemand/how_to_guide.md)
+      [NeSI OnDemand](../../Interactive_Computing/OnDemand/how_to_guide.md)
     - Visit [ondemand.nesi.org.nz](https://ondemand.nesi.org.nz/).

 ## Linux or Mac OS

@@ -40,12 +40,12 @@ installed, usually called, "Terminal." To find it, simply search for
 Congratulations! You are ready to move to the next step.

 !!! prerequisite "What next?"
     Setting up your [Default Terminal](Standard_Terminal_Setup.md)

 ### VSCode

 The inbuilt 'remotes' plugin allows connecting to remote hosts.
-If you have set up your `~/.ssh/config` as described in [Standard_Terminal_Setup](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md),
+If you have set up your `~/.ssh/config` as described in [Standard_Terminal_Setup](Standard_Terminal_Setup.md),
 VSCode will detect this and show configured hosts in the 'Remote Explorer' Tab.

 ## Windows

@@ -69,8 +69,8 @@ different options, listed in order of preference.
 !!! tip "What next?"
     - Enabling
-      [WSL](../../Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md)
-    - Setting up the [Ubuntu Terminal](../../Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md)
+      [WSL](Windows_Subsystem_for_Linux_WSL.md)
+    - Setting up the [Ubuntu Terminal](Windows_Subsystem_for_Linux_WSL.md)

 ### VSCode

@@ -91,7 +91,7 @@ VSCode can be used with WSL or without.
 institution's IT team supports MobaXTerm.

 !!! tip "What next?"
     - Setting up
-      [MobaXterm](../../Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md)
+      [MobaXterm](MobaXterm_Setup_Windows.md)

 ### Using a Virtual Machine

@@ -123,7 +123,7 @@ for new users.
 !!! tip "What next?"
     - Setting up
-      [WinSCP](../../Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md)
+      [WinSCP](WinSCP-PuTTY_Setup_Windows.md)

 ### Git Bash

@@ -141,7 +141,7 @@ primary terminal.
 All Windows computers have PowerShell installed, however it will only
 be useful to you if Windows Subsystem for Linux (WSL) is also enabled,
 instructions can be found at
-[Windows_Subsystem_for_Linux_WSL](../../Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md).
Like Git Bash, PowerShell is perfectly adequate for testing your login or setting up your password, but lacks many of the features of diff --git a/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md b/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md index bbf2ca52e..c2e98548e 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md +++ b/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md @@ -25,7 +25,7 @@ description: How to set up cluster access using MobaXterm - [Moving files to/from a cluster.](../../Storage/Moving_files_to_and_from_the_cluster.md) The interactive login configuration for MobaXterm is not compatable with the current web-based authentication method. If you wish to use MobaXterm as your SSH client you therefore need to use a non-interactive setup. -This can be done by following a modified version of the instructions for setting up the [the standard terminal setup described on this support page](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md). +This can be done by following a modified version of the instructions for setting up the [the standard terminal setup described on this support page](Standard_Terminal_Setup.md). ## First time setup diff --git a/docs/Getting_Started/Accessing_the_HPCs/Port_Forwarding.md b/docs/Getting_Started/Accessing_the_HPCs/Port_Forwarding.md index 0b2015271..1a73ef3e2 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/Port_Forwarding.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Port_Forwarding.md @@ -6,7 +6,7 @@ tags: --- !!! prerequisite - Have your [connection to the NeSI cluster](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md) configured + Have your [connection to the NeSI cluster](Standard_Terminal_Setup.md) configured Some applications only accept connections from internal ports (i.e a port on the same local network), if you are running one such application @@ -24,12 +24,12 @@ to `127.0.0.1`. 
 The alias `localhost` can also be used in most cases.

 **Host Alias:** An alias for the socket of your main connection to the
 cluster, `nesi` if you have set up your ssh config file as
-described in [Standard Terminal Setup](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md).
+described in [Standard Terminal Setup](Standard_Terminal_Setup.md).

 **Remote Port:** The port number you will use on the remote machine (in
 this case the NeSI cluster)

 !!! note
-    The following examples use aliases as set up in [standard terminal setup](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md).
+    The following examples use aliases as set up in [standard terminal setup](Standard_Terminal_Setup.md).
     This allows the forwarding from your local machine to the NeSI
     cluster, without having to re-tunnel through the lander node.
@@ -205,4 +205,4 @@ ssh -Nf -R 6676:localhost:6676 ${SLURM_SUBMIT_HOST}
 ```

 !!! tip "What Next?"
-    - [Paraview](../../Scientific_Computing/Supported_Applications/ParaView.md)
+    - [Paraview](../../Software/Available_Applications/ParaView.md)
diff --git a/docs/Getting_Started/Accessing_the_HPCs/VSCode.md b/docs/Getting_Started/Accessing_the_HPCs/VSCode.md
index 685e75a7a..b35fc7729 100644
--- a/docs/Getting_Started/Accessing_the_HPCs/VSCode.md
+++ b/docs/Getting_Started/Accessing_the_HPCs/VSCode.md
@@ -83,7 +83,7 @@ Clicking on these will open a connection to that machine, you will then be promp
 You may find that VSCode is not utilising your preferred versions of
 software (e.g. when debugging or linting your Python code).

-As the NeSI cluster utilises [Environment Modules](../../Getting_Started/Next_Steps/Submitting_your_first_job.md#environment-modules), changing the executable used is not just a matter of changing the path in VSCode configuration, as the libraries required will not be loaded.
+As the NeSI cluster utilises [Environment Modules](../../Batch_Computing/Submitting_your_first_job.md#environment-modules), changing the executable used is not just a matter of changing the path in VSCode configuration, as the libraries required will not be loaded.

 The only way to make sure that VSCode has access to a suitable
 environment is to load the required modules in your `~/.bashrc`
diff --git a/docs/Getting_Started/Accessing_the_HPCs/Windows_Subsystem_for_Linux_WSL.md b/docs/Getting_Started/Accessing_the_HPCs/Windows_Subsystem_for_Linux_WSL.md
index 140d7356b..de65c1f6b 100644
--- a/docs/Getting_Started/Accessing_the_HPCs/Windows_Subsystem_for_Linux_WSL.md
+++ b/docs/Getting_Started/Accessing_the_HPCs/Windows_Subsystem_for_Linux_WSL.md
@@ -85,4 +85,4 @@ ln -s /mnt/c/Users/YourWindowsUsername/ WinFS
 ```

 !!! prerequisite "What Next?"
-    - Set up your [SSH config file](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md).
+    - Set up your [SSH config file](Standard_Terminal_Setup.md).
diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md
index 57a47f1d6..f8de64064 100644
--- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md
+++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md
@@ -23,7 +23,7 @@ tags:
     introductory sessions (or watching the
     recording)](../../Getting_Started/Getting_Help/Introductory_Material.md),
     or having one or more of your project team members do so.
-  - Review our [allocation classes](../../General/Policy/Allocation_classes.md). If
+  - Review our [allocation classes](../Policy/Allocation_classes.md). If
    you don't think you currently qualify for any class other than
    Proposal Development, please {% include "partials/support_request.html" %}
    as soon as possible to discuss your options.
    Your institution may be in a
@@ -65,7 +65,7 @@ information:
   research programme's current or expected funding)
 - Details of how your project is funded (this will help determine
   whether you are eligible for an allocation from our
-  [Merit](../../General/Policy/Merit_allocations.md) class)
+  [Merit](../Policy/Merit_allocations.md) class)
 - Your previous HPC experience
 - Whether you would like expert scientific programming support on your
   project
@@ -77,7 +77,7 @@ is relevant.
 !!! prerequisite "What Next?"
     - Your NeSI Project proposal will be
-      [reviewed](../../General/Policy/How_we_review_applications.md),
+      [reviewed](../Policy/How_we_review_applications.md),
       after which you will be informed of the outcome.
     - We may contact you if further details are required.
     - When your project is approved you will be able to [login for the first time](../../Getting_Started/Accessing_the_HPCs/First_Time_Login.md).
diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md
index b6cdb54e1..bdb39ee10 100644
--- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md
+++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md
@@ -10,7 +10,7 @@ tags:
 !!! prerequisite
     Either an active login at a Tuakiri member institution, or
-    [a Tuakiri Virtual Home account in respect of your current place of work or study](../../General/Policy/Account_Requests_for_non_Tuakiri_Members.md).
+    [a Tuakiri Virtual Home account in respect of your current place of work or study](../Policy/Account_Requests_for_non_Tuakiri_Members.md).

 1. Access [my.nesi.org.nz](https://my.nesi.org.nz) via your browser
    and log in with either your institutional credentials, or your Tuakiri
diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md
index 705f217fb..254ed3359 100644
--- a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md
+++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md
@@ -16,14 +16,14 @@ different allocation criteria.
 An allocation will come from one of our allocation classes. We will
 decide what class of allocation is most suitable for you and your
 research programme, however you're welcome to review
-[our article on allocation classes](../../General/Policy/Allocation_classes.md)
+[our article on allocation classes](../Policy/Allocation_classes.md)
 to find out what class you're likely eligible for.

 ## An important note on CPU hour allocations

 You may continue to submit jobs even if you have used all your CPU-hour
 allocation. The effect of 0 remaining CPU hours allocation is a
-[lower fairshare](../../Scientific_Computing/Batch_Jobs/Fair_Share.md),
+[lower fairshare](../../Batch_Computing/Fair_Share.md),
 not the inability to use CPUs. Your ability to submit jobs will only be
 removed when your project's allocation expires, not when core-hours are
 exhausted.
@@ -38,7 +38,7 @@ plus one kind of compute allocation) in order to be valid and active.
 Compute allocations are expressed in terms of a number of units, to be
 consumed or reserved between a set start date and time and a set end
 date and time. For allocations of computing power, we use [Fair
-Share](../../Scientific_Computing/Batch_Jobs/Fair_Share.md)
+Share](../../Batch_Computing/Fair_Share.md)
 to balance work between different projects.
 NeSI allocations and the relative "prices" of resources used by those
 allocations should not be taken as any indicator of the real NZD costs
 of purchasing or running
diff --git a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
index 8826d69d3..56a6bb0ff 100644
--- a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
+++ b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md
@@ -8,7 +8,7 @@ description: Quick list of the most commonly used Slurm commands, flags, and env
 ---

 If you are unsure about using our job scheduler Slurm, more details can
-be found on [Submitting_your_first_job](../../Getting_Started/Next_Steps/Submitting_your_first_job.md).
+be found on [Submitting_your_first_job](../../Batch_Computing/Submitting_your_first_job.md).

 ## Slurm Commands

@@ -60,8 +60,8 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g.
 | `--nodes` | ``#SBATCH --nodes=2`` | Will request tasks be run across 2 nodes. |
 | `--ntasks` | ``#SBATCH --ntasks=2 `` | Will start 2 [MPI](../../Software/Parallel_Computing/Parallel_Execution.md) tasks. |
 | `--ntasks-per-node` | `#SBATCH --ntasks-per-node=1` | Will start 1 task per requested node. |
-| `--cpus-per-task` | `#SBATCH --cpus-per-task=10` | Will request 10 [*logical* CPUs](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) per task. |
-| `--mem-per-cpu` | `#SBATCH --mem-per-cpu=512MB` | Memory Per *logical* CPU. `--mem` Should be used if shared memory job. See [How do I request memory?](../../General/FAQs/How_do_I_request_memory.md) |
+| `--cpus-per-task` | `#SBATCH --cpus-per-task=10` | Will request 10 [*logical* CPUs](../../Software/Parallel_Computing/Hyperthreading.md) per task. |
+| `--mem-per-cpu` | `#SBATCH --mem-per-cpu=512MB` | Memory Per *logical* CPU. `--mem` Should be used if shared memory job. See [How do I request memory?](../FAQs/How_do_I_request_memory.md) |
 | `--array` | `#SBATCH --array=1-5` | Will submit job 5 times each with a different `$SLURM_ARRAY_TASK_ID` (1,2,3,4,5). |
 | | `#SBATCH --array=0-20:5` | Will submit job 5 times each with a different `$SLURM_ARRAY_TASK_ID` (0,5,10,15,20). |
 | | `#SBATCH --array=1-100%10` | Will submit 1 through to 100 jobs but no more than 10 at once. |
@@ -73,7 +73,7 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g.
 | `--qos` | `#SBATCH --qos=debug` | Adding this line gives your job a high priority. *Limited to one job at a time, max 15 minutes*. |
 | `--profile` | `#SBATCH --profile=ALL` | Allows generation of a .h5 file containing job profile information. See [Slurm Native Profiling](../../Software/Profiling_and_Debugging/Slurm_Native_Profiling.md) |
 | `--dependency` | `#SBATCH --dependency=afterok:123456789` | Will only start after the job 123456789 has completed. |
-| `--hint` | `#SBATCH --hint=nomultithread` | Disables [hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md), be aware that this will significantly change how your job is defined. |
+| `--hint` | `#SBATCH --hint=nomultithread` | Disables [hyperthreading](../../Software/Parallel_Computing/Hyperthreading.md), be aware that this will significantly change how your job is defined. |

 !!! tip
     Many options have a short (`-`) and long (`--`) form e.g.
diff --git a/docs/Getting_Started/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md b/docs/Getting_Started/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md
index d92bc0fbd..7503f49f2 100644
--- a/docs/Getting_Started/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md
+++ b/docs/Getting_Started/FAQs/Can_I_change_my_time_zone_to_New_Zealand_time.md
@@ -27,7 +27,7 @@ latter but not the former:
 test -r ~/.bashrc && . ~/.bashrc
 ```

-Please see the article, [.bashrc or .bash profile?](../../General/FAQs/What_are_my-bashrc_and-bash_profile_for.md)
+Please see the article, [.bashrc or .bash profile?](What_are_my-bashrc_and-bash_profile_for.md)
 for more information.

 ## What about cron jobs?
diff --git a/docs/Getting_Started/FAQs/How_can_I_view_images_generated_on_the_cluster.md b/docs/Getting_Started/FAQs/How_can_I_view_images_generated_on_the_cluster.md
index bc11ab1b8..a02a41041 100644
--- a/docs/Getting_Started/FAQs/How_can_I_view_images_generated_on_the_cluster.md
+++ b/docs/Getting_Started/FAQs/How_can_I_view_images_generated_on_the_cluster.md
@@ -16,4 +16,4 @@ gm display myImage.png
 ```

 This requires a [working X-11
-server](../../Scientific_Computing/Terminal_Setup/X11.md).
+server](../Accessing_the_HPCs/X11.md).
diff --git a/docs/Getting_Started/FAQs/How_do_I_request_memory.md b/docs/Getting_Started/FAQs/How_do_I_request_memory.md
index 2446ba7cf..c79e944a1 100644
--- a/docs/Getting_Started/FAQs/How_do_I_request_memory.md
+++ b/docs/Getting_Started/FAQs/How_do_I_request_memory.md
@@ -5,7 +5,7 @@ description: Instructions for requesting memory
 ---

 - `--mem`: Memory per node
-- `--mem-per-cpu`: Memory per [logical CPU](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)
+- `--mem-per-cpu`: Memory per [logical CPU](../../Software/Parallel_Computing/Hyperthreading.md)

 In most circumstances, you should request memory using `--mem`. The
 exception is if you are running an MPI job that could be placed on more
diff --git a/docs/Getting_Started/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md b/docs/Getting_Started/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md
index b5aa1af22..b69216bf5 100644
--- a/docs/Getting_Started/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md
+++ b/docs/Getting_Started/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md
@@ -31,5 +31,5 @@ the file explorer in Jupyter from your downloads folder.
 This script can then be run as a regular python script as described in
 our
-[Python](../../Scientific_Computing/Supported_Applications/Python.md)
+[Python](../../Software/Available_Applications/Python.md)
 documentation.
diff --git a/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md b/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md
index 177b2917e..c4ea994b8 100644
--- a/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md
+++ b/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md
@@ -15,7 +15,7 @@ This page should be read in conjunction with the [Known Issues](../../Announceme
 ## Login

-We are now using Tuakiri to provide second-factor authentication, and this changes the login experience. See [Standard Terminal Setup HPC3](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md) for the full details.
+We are now using Tuakiri to provide second-factor authentication, and this changes the login experience. See [Standard Terminal Setup HPC3](../Accessing_the_HPCs/Standard_Terminal_Setup.md) for the full details.

 ## Operating System

diff --git a/docs/Getting_Started/FAQs/What_is_a_core_file.md b/docs/Getting_Started/FAQs/What_is_a_core_file.md
index f22c6f4c9..0a2658e1b 100644
--- a/docs/Getting_Started/FAQs/What_is_a_core_file.md
+++ b/docs/Getting_Started/FAQs/What_is_a_core_file.md
@@ -14,7 +14,7 @@ Your application may crash with an error like, `Segmentation fault (core dumped)
 These failures are memory-related, such as the program asking for more
 memory than allocated or for memory it can't legally access.
 Your first step in troubleshooting should be checking if this is the case,
-see [Finding Job_Efficiency](../../Getting_Started/Next_Steps/Finding_Job_Efficiency.md)
+see [Finding Job_Efficiency](../../Software/Profiling_and_Debugging/Finding_Job_Efficiency.md)

 `.core` files are a record of the working memory at time of failure,
 and can be used for
diff --git a/docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md b/docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md
index 63f5be71a..a06864325 100644
--- a/docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md
+++ b/docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md
@@ -13,20 +13,20 @@ use.
 Examples of software environments on NeSI optimised for data science
 include:

-- [R](../../Scientific_Computing/Supported_Applications/R.md) and
-  [Python](../../Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md) users
+- [R](../../Software/Available_Applications/R.md) and
+  [Python](../../Software/Available_Applications/TensorFlow_on_GPUs.md) users
   can get right into using and exploring the several built-in packages
   or create custom code.
 - [Jupyter on NeSI](../../Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/index.md) is
   particularly well suited to artificial intelligence and machine
-  learning workloads. [RStudio](../../Scientific_Computing/Interactive_computing_with_OnDemand/Apps/RStudio.md)
+  learning workloads. [RStudio](../../Interactive_Computing/OnDemand/Apps/RStudio.md)
   and/or Conda can be accessed via Jupyter.
- Commonly used data science environments and libraries such as - [Keras](../../Scientific_Computing/Supported_Applications/Keras.md), - [LambdaStack](../../Scientific_Computing/Supported_Applications/Lambda_Stack.md), - [Tensorflow](../../Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md) + [Keras](../../Software/Available_Applications/Keras.md), + [Lambda Stack](../../Software/Available_Applications/Lambda_Stack.md), + [TensorFlow](../../Software/Available_Applications/TensorFlow_on_GPUs.md) and [Conda](https://docs.conda.io/en/latest/) are available to create comprehensive workflows. diff --git a/docs/Getting_Started/FAQs/Why_does_my_program_crash.md b/docs/Getting_Started/FAQs/Why_does_my_program_crash.md index 84a34a49e..72ec8a4ff 100644 --- a/docs/Getting_Started/FAQs/Why_does_my_program_crash.md +++ b/docs/Getting_Started/FAQs/Why_does_my_program_crash.md @@ -11,7 +11,7 @@ investigate. ## OOM One common reason is a limited amount of memory. Then the application -could crash with an [Out Of Memory exception](../../General/FAQs/What_does_oom_kill_mean.md). +could crash with an [Out Of Memory exception](What_does_oom_kill_mean.md). ## Debugger diff --git a/docs/Getting_Started/FAQs/Why_is_my_job_taking_a_long_time_to_start.md b/docs/Getting_Started/FAQs/Why_is_my_job_taking_a_long_time_to_start.md index 6377f0788..c371527a2 100644 --- a/docs/Getting_Started/FAQs/Why_is_my_job_taking_a_long_time_to_start.md +++ b/docs/Getting_Started/FAQs/Why_is_my_job_taking_a_long_time_to_start.md @@ -106,7 +106,7 @@ If, compared to other jobs in the queue, your job's priority (third column) and fair share score (fifth column) are both low, this usually means that your project team has recently been using CPU core hours faster than expected.
-See [Fair Share -- How jobs get prioritised](../../Scientific_Computing/Batch_Jobs/Fair_Share.md) for more +See [Fair Share -- How jobs get prioritised](../../Batch_Computing/Fair_Share.md) for more information on Fair Share, how you can check your project's fair share score, and what you can do about a low project fair share score. diff --git a/docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md b/docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md index 8dbf82595..a841c3f29 100644 --- a/docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md +++ b/docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md @@ -43,6 +43,6 @@ my.nesi.org.nz. If you still can't find the email, {% include "partials/support_request.html" %}. !!! note "What next?" - - [Project Eligibility](../../General/Policy/Allocation_classes.md) + - [Project Eligibility](Allocation_classes.md) - [Applying for a new project.](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md) - [Applying to join an existing project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md). diff --git a/docs/Getting_Started/Policy/How_we_review_applications.md b/docs/Getting_Started/Policy/How_we_review_applications.md index 6a233f1a0..d64b70e57 100644 --- a/docs/Getting_Started/Policy/How_we_review_applications.md +++ b/docs/Getting_Started/Policy/How_we_review_applications.md @@ -44,7 +44,7 @@ new projects is as follows: 5. **Decision and notification:** If we approve an initial allocation for your project, we will typically award the project an [allocation of compute units and also an online storage allocation](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md), - from one of [our allocation classes](../../General/Policy/Allocation_classes.md). + from one of [our allocation classes](Allocation_classes.md). 
In any case, we will send you an email telling you about our decision. Our review process for requests for new allocations on existing projects diff --git a/docs/Getting_Started/Policy/Institutional_allocations.md b/docs/Getting_Started/Policy/Institutional_allocations.md index 37f7f8225..17b4f577b 100644 --- a/docs/Getting_Started/Policy/Institutional_allocations.md +++ b/docs/Getting_Started/Policy/Institutional_allocations.md @@ -26,11 +26,11 @@ from your institution. If you are a postgraduate student at a NeSI collaborator, your project will likely be considered for an Institutional allocation rather than a -[Merit](../../General/Policy/Merit_allocations.md) or -[Postgraduate](../../General/Policy/Postgraduate_allocations.md) +[Merit](Merit_allocations.md) or +[Postgraduate](Postgraduate_allocations.md) allocation. -Read more about [how we review applications](../../General/Policy/How_we_review_applications.md). +Read more about [how we review applications](How_we_review_applications.md). To learn more about NeSI Projects or to apply for a new project, please read our article [Applying for a NeSI Project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md). diff --git a/docs/Getting_Started/Policy/Merit_allocations.md b/docs/Getting_Started/Policy/Merit_allocations.md index f4536a7dd..ded727cff 100644 --- a/docs/Getting_Started/Policy/Merit_allocations.md +++ b/docs/Getting_Started/Policy/Merit_allocations.md @@ -52,7 +52,7 @@ must meet the following criteria: supervisor is a named investigator. Read more about [how we review -applications](../../General/Policy/How_we_review_applications.md). +applications](How_we_review_applications.md). To learn more about REANNZ HPC Projects or to apply for a new project, please read our article [Applying for a REANNZ HPC Project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md).
diff --git a/docs/Getting_Started/Policy/Postgraduate_allocations.md b/docs/Getting_Started/Policy/Postgraduate_allocations.md index d4731733b..0c63508b1 100644 --- a/docs/Getting_Started/Policy/Postgraduate_allocations.md +++ b/docs/Getting_Started/Policy/Postgraduate_allocations.md @@ -37,7 +37,7 @@ project an allocation from the Postgraduate class: available to meet demand. Read more about [how we review -applications](../../General/Policy/How_we_review_applications.md). +applications](How_we_review_applications.md). To learn more about NeSI Projects, and to apply please review the content of the section entitled [Applying for a NeSI Project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md). diff --git a/docs/Getting_Started/Policy/Proposal_Development_allocations.md b/docs/Getting_Started/Policy/Proposal_Development_allocations.md index cb466f023..46406ff72 100644 --- a/docs/Getting_Started/Policy/Proposal_Development_allocations.md +++ b/docs/Getting_Started/Policy/Proposal_Development_allocations.md @@ -29,7 +29,7 @@ Proposal Development allocation. Once you have completed your Proposal Development allocation, you are welcome to apply for a further allocation. If you are successful, the project's next allocation will be from another of the -[allocation classes](../../General/Policy/Allocation_classes.md). +[allocation classes](Allocation_classes.md). The [How Applications are Reviewed](How_we_review_applications.md) section provides additional important information for applicants. 
diff --git a/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md b/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md index d61c081a7..7042f7ef3 100644 --- a/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md +++ b/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md @@ -34,7 +34,7 @@ profile.](https://my.nesi.org.nz/html/request_nesi_account) NeSI will (if approved) provision a so-called "virtual home account" on Tuakiri. See also [Account Requests for non-Tuakiri -Members](../../General/Policy/Account_Requests_for_non_Tuakiri_Members.md) +Members](../Policy/Account_Requests_for_non_Tuakiri_Members.md) ## Troubleshooting login issues diff --git a/docs/Getting_Started/my-nesi-org-nz/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-21-0.md b/docs/Getting_Started/my-nesi-org-nz/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-21-0.md index 3173a82f8..ec2f8d713 100644 --- a/docs/Getting_Started/my-nesi-org-nz/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-21-0.md +++ b/docs/Getting_Started/my-nesi-org-nz/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-21-0.md @@ -20,7 +20,7 @@ search: items under Accounts. - On the Project page and New Allocation Request page, tool tip text referring to - [nn\_corehour\_usage](../../../Scientific_Computing/Batch_Jobs/Checking_resource_usage.md) + [nn\_corehour\_usage](../../../Batch_Computing/Checking_resource_usage.md) will appear when you hover over the Mahuika Compute Units information. 
diff --git a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md index 48c6e4c60..3413118d2 100644 --- a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md +++ b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md @@ -21,8 +21,8 @@ Python and R kernels by default, which can be selected from the Launcher. Many packages are preinstalled in our default Python and R environments and these can be extended further as described on the -[Python](../../../../Scientific_Computing/Supported_Applications/Python.md) and -[R](../../../../Scientific_Computing/Supported_Applications/R.md) support +[Python](../../../../Software/Available_Applications/Python.md) and +[R](../../../../Software/Available_Applications/R.md) support pages. ## Adding a custom Python kernel @@ -211,7 +211,7 @@ Launcher as "Shared Virtual Env". ## Custom kernel in a Singularity container An example showing setting up a custom kernel running in a Singularity -container can be found on our [Lambda Stack](../../../../Scientific_Computing/Supported_Applications/Lambda_Stack.md#lambda-stack-via-jupyter) +container can be found on our [Lambda Stack](../../../../Software/Available_Applications/Lambda_Stack.md#lambda-stack-via-jupyter) support page. 
## Adding a custom R kernel diff --git a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md.bak b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md.bak new file mode 100644 index 000000000..48c6e4c60 --- /dev/null +++ b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md.bak @@ -0,0 +1,286 @@ +--- +created_at: 2025-01-24 +description: How to set up your own custom kernels for use on NeSI JupyterHub +tags: + - JupyterHub + - Python + - R +--- + +# Jupyter kernels - Manual management + +!!! warning + + NeSI OnDemand is in development and accessible to early access users only. + If you are interested in helping us test it please [contact us](mailto:support@nesi.org.nz). + +## Introduction + +Jupyter kernels execute the code that you write. NeSI provides a number of +Python and R kernels by default, which can be selected from the Launcher. + +Many packages are preinstalled in our default Python and R environments +and these can be extended further as described on the +[Python](../../../../Scientific_Computing/Supported_Applications/Python.md) and +[R](../../../../Scientific_Computing/Supported_Applications/R.md) support +pages. + +## Adding a custom Python kernel + +!!! note "see also" + See the [Jupyter kernels - Tool-assisted management](./Jupyter_kernels_Tool_assisted_management.md) + page for the **preferred** way to register kernels, which uses the + `nesi-add-kernel` command line tool to automate most of these manual + steps. + +You can configure custom Python kernels for running your Jupyter +notebooks.
This could be necessary and/or recommended in some +situations, including: + +- if you wish to load a different combination of environment modules + than those we load in our default kernels +- if you would like to activate a virtual environment or conda + environment before launching the kernel + +The following example will create a custom kernel based on the +Miniconda3 environment module (but applies to other environment modules +too). + +In a terminal run the following commands to load a Miniconda environment +module: + +``` sh +module purge +module load Miniconda3 +``` + +Now create a conda environment named "my-conda-env" using Python 3.11. +The *ipykernel* Python package is required but you can change the names +of the environment, version of Python and install other Python packages +as required. + +``` sh +conda create --name my-conda-env python=3.11 +source $(conda info --base)/etc/profile.d/conda.sh +conda activate my-conda-env +conda install ipykernel +# you can pip/conda install other packages here too +``` + +Now create a Jupyter kernel based on your new conda environment: + +``` sh +python -m ipykernel install --user --name my-conda-env --display-name="My Conda Env" +``` + +We must now edit the kernel to load the required NeSI environment +modules before the kernel is launched.
Change to the directory the +kernelspec was installed to +`~/.local/share/jupyter/kernels/my-conda-env`, (assuming you kept +`--name my-conda-env` in the above command): + +``` sh +cd ~/.local/share/jupyter/kernels/my-conda-env +``` + +Now create a wrapper script, called `wrapper.sh`, with the following +contents: + +``` sh +#!/usr/bin/env bash + +# load required modules here +module purge +module load Miniconda3 + +# activate conda environment +source $(conda info --base)/etc/profile.d/conda.sh +conda deactivate # workaround for https://github.com/conda/conda/issues/9392 +conda activate my-conda-env + +# run the kernel +exec python $@ +``` + +Make the wrapper script executable: + +``` sh +chmod +x wrapper.sh +``` + +Next edit the *kernel.json* to change the first element of the argv list +to point to the wrapper script we just created. The file should look +like this (change <username> to your NeSI username): + +```json +{ + "argv": [ + "/home//.local/share/jupyter/kernels/my-conda-env/wrapper.sh", + "-m", + "ipykernel_launcher", + "-f", + "{connection_file}" + ], + "display_name": "My Conda Env", + "language": "python" +} +``` + +After refreshing JupyterLab your new kernel should show up in the +Launcher as "My Conda Env". + +## Sharing a Python kernel with your project team members + +You can also configure a shared Python kernel that others with access to +the same NeSI project will be able to load. If this kernel is based on a +Python virtual environment, Conda environment or similar, you must make +sure it also exists in a shared location (other users cannot see your +home directory). + +The example below shows creating a shared Python kernel based on the +`Python/3.8.2-gimkl-2020a` module and also loads the +`ETE/3.1.1-gimkl-2020a-Python-3.8.2` module. 
+ +In a terminal run the following commands to load the Python and ETE +environment modules: + +``` sh +module purge +module load Python/3.8.2-gimkl-2020a +module load ETE/3.1.1-gimkl-2020a-Python-3.8.2 +``` + +Now create a Jupyter kernel within your project directory, based on the +environment modules you just loaded: + +``` sh +python -m ipykernel install --prefix=/nesi/project//.jupyter --name shared-ete-env --display-name="Shared ETE Env" +``` + +Next change to the kernel directory, which for the above command would +be: + +``` sh +cd /nesi/project//.jupyter/share/jupyter/kernels/shared-ete-env +``` + +Create a wrapper script, *wrapper.sh*, with the following contents: + +``` sh +#!/usr/bin/env bash + +# load necessary modules here +module purge +module load Python/3.8.2-gimkl-2020a +module load ETE/3.1.1-gimkl-2020a-Python-3.8.2 + +# run the kernel +exec python $@ +``` + +Note we also load the ETE module so that we can use that from our +kernel. + +Make the wrapper script executable: + +``` sh +chmod +x wrapper.sh +``` + +Next, edit the *kernel.json* to change the first element of the argv +list to point to the wrapper script we just created. The file should +look like this (change <project\_code> to your NeSI project code): + +```json +{ + "argv": [ + "/nesi/project//.jupyter/share/jupyter/kernels/shared-ete-env/wrapper.sh", + "-m", + "ipykernel_launcher", + "-f", + "{connection_file}" + ], + "display_name": "Shared ETE Env", + "language": "python" +} +``` + +After refreshing JupyterLab your new kernel should show up in the +Launcher as "Shared ETE Env". + +## Custom kernel in a Singularity container + +An example showing how to set up a custom kernel running in a Singularity +container can be found on our [Lambda Stack](../../../../Scientific_Computing/Supported_Applications/Lambda_Stack.md#lambda-stack-via-jupyter) +support page. + +## Adding a custom R kernel + +You can configure custom R kernels for running your Jupyter notebooks.
+The following example will create a custom kernel based on the +R/3.6.2-gimkl-2020a environment module and will additionally load an +MPFR environment module (e.g. if you wanted to load the Rmpfr package). + +In a terminal run the following commands to load the required +environment modules: + +``` sh +module purge +module load IRkernel/1.1.1-gimkl-2020a-R-3.6.2 +module load Python/3.8.2-gimkl-2020a +``` + +The IRkernel module loads the R module as a dependency and provides the +R kernel for Jupyter. Python is required to install the kernel (since +Jupyter is written in Python). + +Now create an R Jupyter kernel based on the modules you just loaded: + +``` sh +R -e "IRkernel::installspec(name='myrwithmpfr', displayname = 'R with MPFR', user = TRUE)" +``` + +We must now edit the kernel to load the required NeSI environment +modules when the kernel is launched. Change to the directory the +kernelspec was installed to +(~/.local/share/jupyter/kernels/myrwithmpfr, assuming you kept +`name='myrwithmpfr'` in the above command): + +``` sh +cd ~/.local/share/jupyter/kernels/myrwithmpfr +``` + +Now create a wrapper script in that directory, called *wrapper.sh*, with +the following contents: + +``` sh +#!/usr/bin/env bash + +# load required modules here +module purge +module load MPFR/4.0.2-GCCcore-9.2.0 +module load IRkernel/1.1.1-gimkl-2020a-R-3.6.2 + +# run the kernel +exec R $@ +``` + +Make the wrapper script executable: + +``` sh +chmod +x wrapper.sh +``` + +Next edit the *kernel.json* to change the first element of the argv list +to point to the wrapper script we just created. The file should look +like this (change <username> to your NeSI username): + +```json +{ + "argv": [ + "/home//.local/share/jupyter/kernels/myrwithmpfr/wrapper.sh", + "--slave", + "-e", + "IRkernel::main()", + "--args", + "{connection_file}" + ], + "display_name": "R with MPFR", + "language": "R" +} +``` + +After refreshing JupyterLab your new R kernel should show up in the +Launcher as "R with MPFR".
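The wrapper-plus-`kernel.json` pattern above is the same for Python and R kernels. Below is a self-contained sketch of that pattern; the kernel name `my-demo-kernel` and all paths are illustrative assumptions, not NeSI defaults.

``` sh
#!/usr/bin/env bash
# Sketch: build a kernelspec directory containing a wrapper script and a
# kernel.json whose argv points at the wrapper. Paths are illustrative.
KERNEL_DIR="${HOME}/.local/share/jupyter/kernels/my-demo-kernel"
mkdir -p "$KERNEL_DIR"

# The wrapper is where "module load ..." or "conda activate ..." lines
# would go before the real kernel process is exec'd.
cat > "$KERNEL_DIR/wrapper.sh" <<'EOF'
#!/usr/bin/env bash
# module purge && module load ...   # environment setup would go here
exec python "$@"
EOF
chmod +x "$KERNEL_DIR/wrapper.sh"

# kernel.json: the first argv element is the wrapper, not python itself.
cat > "$KERNEL_DIR/kernel.json" <<EOF
{
  "argv": [
    "$KERNEL_DIR/wrapper.sh",
    "-m", "ipykernel_launcher",
    "-f", "{connection_file}"
  ],
  "display_name": "My Demo Kernel",
  "language": "python"
}
EOF

# Sanity check: the generated file must be valid JSON.
python3 -m json.tool "$KERNEL_DIR/kernel.json" > /dev/null && echo "kernelspec OK"
```

The final check only validates the JSON; the kernel will only launch in JupyterLab if `ipykernel` is importable by the `python` that the wrapper execs, which this sketch does not install.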
diff --git a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md.bak b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md.bak new file mode 100644 index 000000000..21f48afe4 --- /dev/null +++ b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md.bak @@ -0,0 +1,160 @@ +--- +title: Jupyter kernels - Tool-assisted management +description: +tags: + - JupyterHub + - Python + - R +--- + +## Introduction + +Jupyter can execute code in different computing environments using +*kernels*. Some kernels are provided by default (Python, R, etc.) but +you may want to register your computing environment to use it in +notebooks. For example, you may want to load a specific environment +module in your kernel or use a Conda environment. + +To register a Jupyter kernel, you can follow the steps highlighted in +the [Jupyter kernels - Manual management](./Jupyter_kernels_Manual_management.md) page, +or use the `nesi-add-kernel` tool provided within the [Jupyter on NeSI service](https://jupyter.nesi.org.nz). +This page details the latter option, which we recommend. + +## Getting started + +First you need to open a terminal. It can be from a session on Jupyter +on NeSI or from a regular ssh connection to a Mahuika login node. If you +use the ssh option, make sure to load the JupyterLab module to have +access to the `nesi-add-kernel` tool: + +``` sh +module purge # remove all previously loaded modules +module load JupyterLab +``` + +Then, to list all available options, use the `-h` or `--help` options as +follows: + +``` sh +nesi-add-kernel --help +``` + +Here is an example to add a TensorFlow kernel, using NeSI’s module: + +``` sh +nesi-add-kernel tf_kernel TensorFlow/2.8.2-gimkl-2022a-Python-3.10.5 +``` + +!!! warning + The name given to your kernel in `nesi-add-kernel KERNEL_NAME MODULE` must only include lowercase letters, underscores, and dashes.
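To check a candidate name against that rule before running `nesi-add-kernel`, a small shell helper can be sketched; the function below is our own illustration of the stated constraint (lowercase letters, underscores, and dashes), not part of the tool.

``` sh
#!/usr/bin/env bash
# Sketch: validate a proposed kernel name against the naming rule above.
is_valid_kernel_name() {
    case "$1" in
        ''|*[!a-z_-]*) return 1 ;;  # empty, or contains a disallowed character
        *)             return 0 ;;
    esac
}

is_valid_kernel_name "tf_kernel" && echo "tf_kernel is valid"
is_valid_kernel_name "TF Kernel" || echo "TF Kernel is rejected"
```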
+ +and to share the kernel with other members of your NeSI project: + +``` sh +nesi-add-kernel --shared tf_kernel_shared TensorFlow/2.8.2-gimkl-2022a-Python-3.10.5 +``` + +To list all the installed kernels, use the following command: + +``` sh +jupyter-kernelspec list +``` + +and to delete a specific kernel: + +``` sh +jupyter-kernelspec remove <kernel_name> +``` + +where `<kernel_name>` stands for the name of the kernel to delete. + +## Conda environment + +First, make sure the `JupyterLab` module is loaded: + +``` sh +module purge +module load JupyterLab +``` + +To add a Conda environment created using +`conda create -p <env_path>`, use: + +``` sh +nesi-add-kernel my_conda_env -p <env_path> +``` + +otherwise if created using `conda create -n <env_name>`, use: + +``` sh +nesi-add-kernel my_conda_env -n <env_name> +``` + +## Virtual environment + +If you want to use a Python virtual environment, don’t forget to specify +which Python module you used to create it. + +For example, if we create a virtual environment named `my_test_venv` +using Python 3.10.5: + +``` sh +module purge +module load Python/3.10.5-gimkl-2022a +python -m venv my_test_venv +``` + +to create the corresponding `my_test_kernel` kernel, we need to use the +command: + +``` sh +module purge +module load JupyterLab +nesi-add-kernel my_test_kernel Python/3.10.5-gimkl-2022a --venv my_test_venv +``` + +## Singularity container + +!!! danger + + This section has not been tested on NeSI OnDemand + +To use a Singularity container, use the `-c` or `--container` options as +follows: + +``` sh +module purge +module load JupyterLab +nesi-add-kernel my_test_kernel -c <container_image> +``` + +where `<container_image>` is a path to your container image. + +Note that your container **must** have the `ipykernel` Python package +installed in it to be able to work as a Jupyter kernel. + +Additionally, you can use the `--container-args` option to pass more +arguments to the `singularity exec` command used to instantiate the +kernel. + +Here is an example instantiating an NVIDIA NGC container as a kernel.
+First, we need to pull the container: + +``` sh +module purge +module load Singularity/3.11.3 +singularity pull nvidia_tf.sif docker://nvcr.io/nvidia/tensorflow:21.07-tf2-py3 +``` + +then we can instantiate the kernel, using the `--nv` singularity flag to +ensure that the GPU will be found at runtime (assuming our Jupyter +session has access to a GPU): + +``` sh +module purge +module load JupyterLab +nesi-add-kernel nvidia_tf -c nvidia_tf.sif --container-args "'--nv'" +``` + +Note that the double-quoting of `--nv` is needed to properly pass the +options to `singularity exec`. diff --git a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md index a0e8b0682..f2052402e 100644 --- a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md +++ b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md @@ -8,7 +8,7 @@ Jupyter allows you to create notebooks that contain live code, equations, visualisations and explanatory text. There are many uses for Jupyter, including data cleaning, analytics and visualisation, machine learning, numerical simulation, managing -[Slurm job submissions](../../../../Getting_Started/Next_Steps/Submitting_your_first_job.md) +[Slurm job submissions](../../../../Batch_Computing/Submitting_your_first_job.md) and workflows and much more. ## Accessing Jupyter on NeSI diff --git a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md.bak b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md.bak new file mode 100644 index 000000000..a0e8b0682 --- /dev/null +++ b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md.bak @@ -0,0 +1,106 @@ +# JupyterLab via OnDemand + + +## Introduction + +NeSI supports the use of [Jupyter](https://jupyter.org/) for interactive computing. +Jupyter allows you to create notebooks that contain live code, +equations, visualisations and explanatory text. 
There are many uses for +Jupyter, including data cleaning, analytics and visualisation, machine +learning, numerical simulation, managing +[Slurm job submissions](../../../../Getting_Started/Next_Steps/Submitting_your_first_job.md) +and workflows and much more. + +## Accessing Jupyter on NeSI + + +Jupyter at NeSI can be accessed by logging in to [NeSI OnDemand](https://ondemand.nesi.org.nz/) and launching the JupyterLab application there. +For more details see the [how-to guide](../../how_to_guide.md). + +## Jupyter user interface + +### JupyterLab + +[JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) +is the next generation of the Jupyter user interface and provides a way +to use notebooks, text editor, terminals and custom components together. + +### Filesystems + +Your JupyterLab session will start in your home directory the first time you launch it. On subsequent launches it may remember your previous working directory and start there. + +NeSI will auto-generate a directory within your home folder called `00_nesi_projects`, in which you will find symbolic links to the project and nobackup directories of your active projects. We do not recommend that you store files in this initial directory because the next time you log into OnDemand the directory will be repopulated based on your user groups; instead, switch to your home, project or nobackup directories first. + +If you do not want this folder to be recreated upon login, place a file named `.00_nesi_projects.stop` in your home directory. + +### Jupyter kernels + +NeSI provides some default Python and R kernels that are available to all users and are based on some +of our environment modules. It's also possible to create additional kernels that are visible only to +you (they can optionally be made visible to other members of a specific NeSI project that you belong to).
See: + +- [Jupyter kernels - Tool-assisted management](./Jupyter_kernels_Tool_assisted_management.md) (recommended) +- [Jupyter kernels - Manual management](./Jupyter_kernels_Manual_management.md) + +### Jupyter terminal + +Some things to note about the JupyterLab terminal are: + +- when you launch the terminal application some environment modules + are already loaded, so you may want to run `module purge` +- processes launched directly in the JupyterLab terminal will probably + be killed when your Jupyter session times out + +## Installing JupyterLab extensions + +JupyterLab supports many extensions that enhance its functionality. At +NeSI we package some extensions into the default JupyterLab environment. +Keep reading if you need to install extensions yourself. + +Note, there were some changes related to extensions in JupyterLab 3.0 +and there are now multiple methods to install extensions. More details +about JupyterLab extensions can be found +[here](https://jupyterlab.readthedocs.io/en/stable/user/extensions.html). +Check the extension's documentation to find out the supported +installation method for that particular extension. + +On NeSI OnDemand we support installing prebuilt extensions (i.e. pip installable +packages) from the terminal application. +First ensure you have the latest JupyterLab module loaded: + +```sh +module purge +module load JupyterLab +``` + +Then install the extension by running (the upstream documentation for the package +you are installing should specify the "packagename" that you should use): + +``` sh +pip install --user <packagename> +``` + +For example, the [Dask extension](https://github.com/dask/dask-labextension#jupyterlab-4x) +can be installed with the following: + +``` sh +pip install --user dask-labextension +``` + +Note that we need to specify the `--user` option on the `pip install` command because you don't +have permission to install packages in the system directory.
Adding `--user` installs the package +into your home directory instead. + +## Log files + +The log file of a JupyterLab session is saved in the OnDemand session directory +(a subdirectory under the *ondemand* directory in your home directory). +You can reach the session directory in the OnDemand file browser by clicking +the link in the session card under "My Interactive Sessions" in the NeSI +OnDemand web interface. The log file is named *session.log* within the session +directory. + +## External documentation + +- [Jupyter](https://jupyter.readthedocs.io/en/latest/) +- [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) diff --git a/docs/Interactive_Computing/OnDemand/Apps/RStudio.md b/docs/Interactive_Computing/OnDemand/Apps/RStudio.md index a1be62296..f6152159f 100644 --- a/docs/Interactive_Computing/OnDemand/Apps/RStudio.md +++ b/docs/Interactive_Computing/OnDemand/Apps/RStudio.md @@ -1,56 +1,56 @@ -# RStudio via OnDemand - - -## Logging in -![UPDATE WITH PROJECT](../../../assets/images/RStudio_via_OOD_on_NeSI_0.png){width=35%} ![](../../../assets/images/RStudio_via_OOD_on_NeSI_1.png){fig.align="right" width=62%} - -## Settings -Recommendation to set *Save Workspace to Never* to avoid saving large files to the workspace. This can be done by going to `Tools` -> `Global Options` -> `General` and setting the `Save workspace to .RData on exit` to `Never`. This will prevent the workspace from being unable to load due to not enough memory in the selected session. - -## Bugs - -### Plots not showing -The current R modules on NeSI OnDemand do not support the default graphics device due to a missing depedency, `cairo`. There is a one off fix for this by changing the backend graphics device from `Default` to `AGG` (Anti-Grain Geometry) in the RStudio settings. - -This can be done by going to `Tools` -> `Global Options` -> `Graphics` and switch `Default` to `AGG`. This will allow the plots to be displayed in the RStudio interface. 
You do not need to restart the RStudio session for this to take effect. - -![](../../../assets/images/RStudio_via_OOD_on_NeSI_2.png) - -Modules from 4.4 onwards will have this issue fixed. - -### Libraries not showing -There is a bug with the R-Geo and R-bundle-Biocondutor libraries not showing up in the RStudio interface. This is a known issue and is being worked on. There are two workarounds for this issue: - -1. Manually add the library to `.libPaths()` in the R console as shown below: - -```R -myPaths <- .libPaths() -myPaths <- c(myPaths, "/opt/nesi/CS400_centos7_bdw/R-Geo/4.3.2-foss-2023a") - -# reorder paths -myPaths <- c(myPaths[1], myPaths[3], myPaths[2]) - -# reasign the library paths -.libPaths(myPaths) - -# confirm the library paths -.libPaths() -[1] "/nesi/home/$USER/R/foss-2023a/4.3" -[2] "/opt/nesi/CS400_centos7_bdw/R-Geo/4.3.2-foss-2023a" -[3] "/opt/nesi/CS400_centos7_bdw/R/4.3.2-foss-2023a/lib64/R/library" -``` -2. Permanent fix by adding the library path(s) to the `.Rprofile` file in your home directory. This will automatically add the library path to the R console when it starts up. Copy and Paste the following lines to the file: - -``` -# CHECK LIBRARY PATHS -myPaths <- .libPaths() -newPaths <- c("/opt/nesi/CS400_centos7_bdw/R-Geo/4.3.1-gimkl-2022a", - "/opt/nesi/CS400_centos7_bdw/R-bundle-Bioconductor/3.17-gimkl-2022a-R-4.3.1> - -# join the two lists -myPaths <- c(myPaths, newPaths) - -# reassign the library paths -.libPaths(myPaths) -``` -NOTE: Replace the paths with the correct paths for the libraries you want to add. +# RStudio via OnDemand + + +## Logging in +![UPDATE WITH PROJECT](../../../assets/images/RStudio_via_OOD_on_NeSI_0.png){width=35%} ![](../../../assets/images/RStudio_via_OOD_on_NeSI_1.png){fig.align="right" width=62%} + +## Settings +Recommendation to set *Save Workspace to Never* to avoid saving large files to the workspace. 
This can be done by going to `Tools` -> `Global Options` -> `General` and setting `Save workspace to .RData on exit` to `Never`. This prevents the workspace from failing to load when the selected session does not have enough memory.
+
+## Bugs
+
+### Plots not showing
+The current R modules on NeSI OnDemand do not support the default graphics device due to a missing dependency, `cairo`. There is a one-off fix for this: change the backend graphics device from `Default` to `AGG` (Anti-Grain Geometry) in the RStudio settings.
+
+This can be done by going to `Tools` -> `Global Options` -> `Graphics` and switching `Default` to `AGG`. This will allow plots to be displayed in the RStudio interface. You do not need to restart the RStudio session for this to take effect.
+
+![](../../../assets/images/RStudio_via_OOD_on_NeSI_2.png)
+
+Modules from 4.4 onwards will have this issue fixed.
+
+### Libraries not showing
+There is a bug where the R-Geo and R-bundle-Bioconductor libraries do not show up in the RStudio interface. This is a known issue and is being worked on. There are two workarounds:
+
+1. Manually add the library to `.libPaths()` in the R console as shown below:
+
+```R
+myPaths <- .libPaths()
+myPaths <- c(myPaths, "/opt/nesi/CS400_centos7_bdw/R-Geo/4.3.2-foss-2023a")
+
+# reorder paths
+myPaths <- c(myPaths[1], myPaths[3], myPaths[2])
+
+# reassign the library paths
+.libPaths(myPaths)
+
+# confirm the library paths
+.libPaths()
+[1] "/nesi/home/$USER/R/foss-2023a/4.3"
+[2] "/opt/nesi/CS400_centos7_bdw/R-Geo/4.3.2-foss-2023a"
+[3] "/opt/nesi/CS400_centos7_bdw/R/4.3.2-foss-2023a/lib64/R/library"
+```
+2. Permanent fix: add the library path(s) to the `.Rprofile` file in your home directory. This will automatically add the library paths to the R session when it starts up. 
Copy and paste the following lines into the file:
+
+```R
+# CHECK LIBRARY PATHS
+myPaths <- .libPaths()
+newPaths <- c("/opt/nesi/CS400_centos7_bdw/R-Geo/4.3.1-gimkl-2022a",
+              "/opt/nesi/CS400_centos7_bdw/R-bundle-Bioconductor/3.17-gimkl-2022a-R-4.3.1")
+
+# join the two lists
+myPaths <- c(myPaths, newPaths)
+
+# reassign the library paths
+.libPaths(myPaths)
+```
+NOTE: Replace the paths with the correct paths for the libraries you want to add.
diff --git a/docs/Interactive_Computing/Slurm_Interactive_Sessions.md b/docs/Interactive_Computing/Slurm_Interactive_Sessions.md
index 62cd02a92..e7a827795 100644
--- a/docs/Interactive_Computing/Slurm_Interactive_Sessions.md
+++ b/docs/Interactive_Computing/Slurm_Interactive_Sessions.md
@@ -12,7 +12,7 @@ you to use them interactively as you would the login node.
 
 There are two main commands that can be used to make a session, `srun`
 and `salloc`, both of which use most of the same options available to `sbatch` (see
-[our Slurm Reference Sheet](../../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md)).
+[our Slurm Reference Sheet](../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md)).
 
 !!! warning
     An interactive session will, once it starts, use the entire requested
diff --git a/docs/Software/Available_Applications/ABAQUS.md b/docs/Software/Available_Applications/ABAQUS.md
index 7f02e42f1..841e76397 100644
--- a/docs/Software/Available_Applications/ABAQUS.md
+++ b/docs/Software/Available_Applications/ABAQUS.md
@@ -44,7 +44,7 @@ parameter `academic=TEACHING` or `academic=RESEARCH` in a relevant
 intuitive formula ⌊5 × N^0.422⌋ where `N` is the number of CPUs.
 
-[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)
+[Hyperthreading](../Parallel_Computing/Hyperthreading.md)
 can provide significant speedup to your computations, however
 hyperthreaded CPUs will use twice the number of licence tokens.
It may be worth adding `#SBATCH --hint nomultithread` to your slurm script if diff --git a/docs/Software/Available_Applications/COMSOL.md b/docs/Software/Available_Applications/COMSOL.md index c5c34f0f2..4b7a9fc49 100644 --- a/docs/Software/Available_Applications/COMSOL.md +++ b/docs/Software/Available_Applications/COMSOL.md @@ -128,7 +128,7 @@ distribution. ## Interactive Use -Providing you have [set up X11](../Terminal_Setup/X11.md), you can +Providing you have [set up X11](../../Getting_Started/Accessing_the_HPCs/X11.md), you can open the COMSOL GUI by running the command `comsol`. Large jobs should not be run on the login node. diff --git a/docs/Software/Available_Applications/Delft3D.md b/docs/Software/Available_Applications/Delft3D.md index 6698b6002..598c1f718 100644 --- a/docs/Software/Available_Applications/Delft3D.md +++ b/docs/Software/Available_Applications/Delft3D.md @@ -17,7 +17,7 @@ tags: === "Serial" For when only one CPU is required, generally as part of a - [job array](../../Getting_Started/Next_Steps/Parallel_Execution.md#job-arrays). + [job array](../Parallel_Computing/Parallel_Execution.md#job-arrays). ```sl #!/bin/bash -e diff --git a/docs/Software/Available_Applications/GROMACS.md b/docs/Software/Available_Applications/GROMACS.md index dc30dd288..85f0f86f8 100644 --- a/docs/Software/Available_Applications/GROMACS.md +++ b/docs/Software/Available_Applications/GROMACS.md @@ -83,7 +83,7 @@ obtained with the Software. 
    srun gmx-mpi mdrun-mpi -ntomp ${SLURM_CPUS_PER_TASK} -nomp ${SLURM_NNODES} -s input.tpr -o trajectory.trr -c struct.gro -e energies.edr
     ```
 
 === "GPU"
-    For more information on using GPUs see [GPU use on NeSI](../Batch_Jobs/Using_GPUs.md)
+    For more information on using GPUs see [GPU use on NeSI](../../Batch_Computing/Using_GPUs.md)
 
     ```sl
     #!/bin/bash -e
diff --git a/docs/Software/Available_Applications/Keras.md b/docs/Software/Available_Applications/Keras.md
index f90c18b66..51d1f54bd 100644
--- a/docs/Software/Available_Applications/Keras.md
+++ b/docs/Software/Available_Applications/Keras.md
@@ -10,8 +10,8 @@ zendesk_section_id: 360000040076
 
 Keras is a modular and extendable API for building neural networks in
 Python. Keras is included with TensorFlow. Note that there are
-[CPU and](../../Scientific_Computing/Supported_Applications/TensorFlow_on_CPUs.md)
-[GPU versions](../../Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md) of
+[CPU and](TensorFlow_on_CPUs.md)
+[GPU versions](TensorFlow_on_GPUs.md) of
 TensorFlow, here we'll use TensorFlow 1.10 for GPUs, which is
 available as an environment module.
diff --git a/docs/Software/Available_Applications/Lambda_Stack.md b/docs/Software/Available_Applications/Lambda_Stack.md
index 8ba695132..9215550f1 100644
--- a/docs/Software/Available_Applications/Lambda_Stack.md
+++ b/docs/Software/Available_Applications/Lambda_Stack.md
@@ -14,7 +14,7 @@ status: []
 Stack](https://lambdalabs.com/lambda-stack-deep-learning-software) is
 an AI software stack from Lambda containing PyTorch, TensorFlow, CUDA,
 cuDNN and more. On the HPC you can run Lambda Stack via
-[Apptainer](../../Scientific_Computing/Supported_Applications/Apptainer.md) (based on the
+[Apptainer](Apptainer.md) (based on the
 official [Dockerfiles](https://github.com/lambdal/lambda-stack-dockerfiles/)).
We have provided some pre-built container images (under @@ -70,7 +70,7 @@ ${CONTAINER} echo "Hello World" The following steps will create a custom Lambda Stack kernel that can be accessed via NeSI's Jupyter service (based on the instructions at -[Jupyter_on_NeSI](../../Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md)). +[Jupyter_on_NeSI](../../Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md)). First, we need to create a kernel definition and wrapper that will launch the container image. Run the following commands on the Mahuika diff --git a/docs/Software/Available_Applications/MATLAB.md b/docs/Software/Available_Applications/MATLAB.md index 231b0a006..6a09f25ce 100644 --- a/docs/Software/Available_Applications/MATLAB.md +++ b/docs/Software/Available_Applications/MATLAB.md @@ -176,7 +176,7 @@ CUDA modules and select the appropriate one. For example, for MATLAB R2021a, use `module load CUDA/11.0.2` before launching MATLAB. If you want to know more about how to access the different type of -available GPUs on NeSI, check the [GPU use on NeSI](../Batch_Jobs/Using_GPUs.md) +available GPUs on NeSI, check the [GPU use on NeSI](../../Batch_Computing/Using_GPUs.md) support page. !!! tip "Support for A100 GPUs" diff --git a/docs/Software/Available_Applications/Miniforge3.md b/docs/Software/Available_Applications/Miniforge3.md index 055c47b3b..739c46778 100644 --- a/docs/Software/Available_Applications/Miniforge3.md +++ b/docs/Software/Available_Applications/Miniforge3.md @@ -7,10 +7,10 @@ tags: !!! note "Preferred Alternatives" - If you want a more reproducible and isolated environment, we - recommend using the [Apptainer containers](../../Scientific_Computing/Supported_Applications/Apptainer.md). + recommend using the [Apptainer containers](Apptainer.md). 
- If you only need access to Python and standard numerical libraries (numpy, scipy, matplotlib, etc.), you can use the - [Python environment module](../../Scientific_Computing/Supported_Applications/Python.md). + [Python environment module](Python.md). {% set app_name = page.title | trim %} {% set app = applications[app_name] %} diff --git a/docs/Software/Available_Applications/Supernova.md b/docs/Software/Available_Applications/Supernova.md index 79c3f19fa..0dd579a3e 100644 --- a/docs/Software/Available_Applications/Supernova.md +++ b/docs/Software/Available_Applications/Supernova.md @@ -126,7 +126,7 @@ takes the following general form `ssh -L :: -N ` - <d> An integer -- <server> see: [Standard Terminal Setup](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md) +- <server> see: [Standard Terminal Setup](../../Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md) When details are added to the general form from the specifics in the snippet above, the following could be run.. diff --git a/docs/Software/Available_Applications/TensorFlow_on_CPUs.md b/docs/Software/Available_Applications/TensorFlow_on_CPUs.md index 32d83cd6a..401967121 100644 --- a/docs/Software/Available_Applications/TensorFlow_on_CPUs.md +++ b/docs/Software/Available_Applications/TensorFlow_on_CPUs.md @@ -10,7 +10,7 @@ status: deprecated TensorFlow is a popular software library for machine learning applications, see our -[TensorFlow](../../Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md) +[TensorFlow](TensorFlow_on_GPUs.md) article for further information. It is often used with GPUs, as runtimes of the computationally demanding training and inference steps are often shorter compared to multicore CPUs. 
However, running TensorFlow on CPUs @@ -100,7 +100,7 @@ srun python my_tensorflow_program.py If you are unsure about setting up the memory and runtime parameters, have a look at our article [Ascertaining job -dimensions](../../Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md). +dimensions](../Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md). Please also read the section on operator parallelisation below before you choose a number of CPUs. diff --git a/docs/Software/Available_Applications/TensorFlow_on_GPUs.md b/docs/Software/Available_Applications/TensorFlow_on_GPUs.md index 5548e41be..61ae60c2e 100644 --- a/docs/Software/Available_Applications/TensorFlow_on_GPUs.md +++ b/docs/Software/Available_Applications/TensorFlow_on_GPUs.md @@ -21,7 +21,7 @@ running TensorFlow with GPU support. !!! tip "See also" - To request GPU resources using `--gpus-per-node` option of Slurm, see the [GPU use on - NeSI](../Batch_Jobs/Using_GPUs.md) + NeSI](../../Batch_Computing/Using_GPUs.md) documentation page. - To run TensorFlow on CPUs instead, have a look at our article [TensorFlow on @@ -174,7 +174,7 @@ take into consideration the following: You can use containers to run your application on the NeSI platform. We provide support for -[Apptainer](../../Scientific_Computing/Supported_Applications/Apptainer.md) +[Apptainer](Apptainer.md) containers, that can be run by users without requiring additional privileges. Note that Docker containers can be converted into Apptainer containers. diff --git a/docs/Software/Available_Applications/VASP.md b/docs/Software/Available_Applications/VASP.md index 5ef77d082..5fbc9c3c4 100644 --- a/docs/Software/Available_Applications/VASP.md +++ b/docs/Software/Available_Applications/VASP.md @@ -152,8 +152,8 @@ production simulations. When considering which configuration to use for production you should take into account performance and compute unit cost. 
-See [Using GPUs](../Batch_Jobs/Using_GPUs.md), for further instructions, and
-[Hardware](../Batch_Jobs/Hardware.md#gpgpus) for full GPU specifications.
+See [Using GPUs](../../Batch_Computing/Using_GPUs.md) for further instructions, and
+[Hardware](../../Batch_Computing/Hardware.md#gpgpus) for full GPU specifications.
 
 Some additional notes specific to running VASP on GPUs:
diff --git a/docs/Software/Available_Applications/fastStructure.md b/docs/Software/Available_Applications/fastStructure.md
index db3991f92..89d4ef9a1 100644
--- a/docs/Software/Available_Applications/fastStructure.md
+++ b/docs/Software/Available_Applications/fastStructure.md
@@ -55,4 +55,4 @@ To use this, use the `--format=str` flag and include the file **without** the fi
 Shout out to a [rather old blog post](https://flowersoftheocean.wordpress.com/2018/04/15/running-faststructure-and-associated-difficulties/) for solving this issue!
 
-The `.str` files output by [ipyrad](../../Scientific_Computing/Supported_Applications/ipyrad.md) should work without issue. Otherwise you may want to convert `.vcf` files to `.bed` files using another tool and proceed with fastStructure using the `.bed` files.
+The `.str` files output by [ipyrad](ipyrad.md) should work without issue. Otherwise you may want to convert `.vcf` files to `.bed` files using another tool and proceed with fastStructure using the `.bed` files.
diff --git a/docs/Software/Installing_Applications_Yourself.md b/docs/Software/Installing_Applications_Yourself.md
index c4060ee18..a7ed372d0 100644
--- a/docs/Software/Installing_Applications_Yourself.md
+++ b/docs/Software/Installing_Applications_Yourself.md
@@ -22,9 +22,9 @@ See [Software Installation Request](Software_Installation_Request.md) for guidel
 How to add a package to an existing module will vary based on the language in question.
 
-- [Python](../Scientific_Computing/Supported_Applications/Python.md#python-packages) -- [R](../Scientific_Computing/Supported_Applications/R.md#dealing-with-packages) -- [Julia](../Scientific_Computing/Supported_Applications/Julia.md#installing-julia-packages) +- [Python](Available_Applications/Python.md#python-packages) +- [R](Available_Applications/R.md#dealing-with-packages) +- [Julia](Available_Applications/Julia.md#installing-julia-packages) - [MATLAB](../Scientific_Computing/Supported_Applications/MATLAB.md#adding-support-packages) For other languages check if we have additional documentation for it diff --git a/docs/Software/Parallel_Computing/Configuring_Dask_MPI_jobs.md b/docs/Software/Parallel_Computing/Configuring_Dask_MPI_jobs.md index a0845a30b..2e64ca6ee 100644 --- a/docs/Software/Parallel_Computing/Configuring_Dask_MPI_jobs.md +++ b/docs/Software/Parallel_Computing/Configuring_Dask_MPI_jobs.md @@ -78,7 +78,7 @@ dependencies: !!! info "See also" See the - [Miniforge3](../../Scientific_Computing/Supported_Applications/Miniforge3.md) + [Miniforge3](../Available_Applications/Miniforge3.md) page for more information on how to create and manage Miniconda environments on NeSI. @@ -97,7 +97,7 @@ then assigns different roles to the different ranks: This implies that **Dask-MPI jobs must be launched on at least 3 MPI ranks!** Ranks 0 and 1 often perform much less work than the other ranks, it can therefore be beneficial to use -[Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) +[Hyperthreading](Hyperthreading.md) to place these two ranks onto a single physical core. Ensure that activating hyperthreading does not slow down the worker ranks by running a short test workload with and without hyperthreading. @@ -261,7 +261,7 @@ where the `%runscript` section ensures that the Python script passed to Conda environment inside the container. !!! 
note Tips
-    You can build this container on NeSI,following the instructions from the [dedicated supportpage](../../Scientific_Computing/Supported_Applications/Apptainer.md)
+    You can build this container on NeSI, following the instructions on the [dedicated support page](../Available_Applications/Apptainer.md)
 
 ### Slurm configuration
diff --git a/docs/Software/Parallel_Computing/Hyperthreading.md b/docs/Software/Parallel_Computing/Hyperthreading.md
index 27886cfe0..a3a1032e3 100644
--- a/docs/Software/Parallel_Computing/Hyperthreading.md
+++ b/docs/Software/Parallel_Computing/Hyperthreading.md
@@ -34,7 +34,7 @@ once your job starts you will have twice the number of CPUs as `ntasks`.
 If you set `--cpus-per-task=n`, Slurm will request `n` logical CPUs
 per task, i.e., will set `n` threads for the job. Your code must be
 capable of running Hyperthreaded (for example using
-[OpenMP](../../HPC_Software_Environment/OpenMP_settings.md))
+[OpenMP](OpenMP_settings.md))
 if `--cpus-per-task > 1`.
 
 Setting `--hint=nomultithread` with `srun` or `sbatch` causes Slurm to
@@ -187,7 +187,7 @@ considered a bonus.
     for MPI jobs that request the same number of tasks on every node, we
     recommend to specify `--mem` (i.e. memory per node) instead. See
     [How to request memory
-    (RAM)](../../../General/FAQs/How_do_I_request_memory.md) for more
+    (RAM)](../../Getting_Started/FAQs/How_do_I_request_memory.md) for more
     information.
- Non-MPI jobs which specify `--cpus-per-task` and use **srun** should also set `--ntasks=1`, otherwise the program will be run twice in diff --git a/docs/Software/Parallel_Computing/MPI_Scaling_Example.md b/docs/Software/Parallel_Computing/MPI_Scaling_Example.md index 7181b25c0..79a36b83b 100644 --- a/docs/Software/Parallel_Computing/MPI_Scaling_Example.md +++ b/docs/Software/Parallel_Computing/MPI_Scaling_Example.md @@ -174,7 +174,7 @@ Let's run our Slurm script with sbatch and look at our output from Our job performed 5,000 seeds using 2 physical CPU cores (each MPI task will always receive 2 logical CPUs which is equal to 1 physical CPUs. For a more in depth explanation about logical and physical CPU cores see -our [Hyperthreading article](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md)) +our [Hyperthreading article](Hyperthreading.md)) and a maximum memory of 166,744KB (0.16 GB). In total, the job ran for 18 minutes and 51 seconds. diff --git a/docs/Software/Parallel_Computing/OpenMP_settings.md b/docs/Software/Parallel_Computing/OpenMP_settings.md index 0a98de5dc..5ebd21330 100644 --- a/docs/Software/Parallel_Computing/OpenMP_settings.md +++ b/docs/Software/Parallel_Computing/OpenMP_settings.md @@ -20,7 +20,7 @@ all that is necessary to get 16 OpenMP threads is: in your Slurm script - although this can sometimes be more complicated, e.g., with -[TensorFlow on CPUs](../../Scientific_Computing/Supported_Applications/TensorFlow_on_CPUs.md). +[TensorFlow on CPUs](../Available_Applications/TensorFlow_on_CPUs.md). In order to achieve good and consistent parallel scaling, additional settings may be required. This is particularly true on Mahuika where @@ -30,7 +30,7 @@ consistent, additional information can be found in our article [Thread Placement and Thread Affinity](./Thread_Placement_and_Thread_Affinity.md). 1. `--threads-per-core=2`. 
Use this option to tell srun or sbatch
-that you want to use [Hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md),
+that you want to use [Hyperthreading](Hyperthreading.md),
 so use both of the virtual CPUs available on each physical core,
 halving the number of physical cores you occupy. If you use
 hyperthreading, you will be charged for the number of physical cores that
diff --git a/docs/Software/Parallel_Computing/Parallel_Execution.md b/docs/Software/Parallel_Computing/Parallel_Execution.md
index a76d16e71..f9fe0cb63 100644
--- a/docs/Software/Parallel_Computing/Parallel_Execution.md
+++ b/docs/Software/Parallel_Computing/Parallel_Execution.md
@@ -15,7 +15,7 @@ The are three types of parallel execution we will cover are [Multi-Threading](#
 
 - `--mem-per-cpu=512MB` will give 512 MB of RAM per *logical* core.
 - If `--hint=nomultithread` is used then `--cpus-per-task` will now refer to physical cores, but `--mem-per-cpu=512MB` still refers to logical cores.
 
-See [our article on hyperthreading](../../Scientific_Computing/Batch_Jobs/Hyperthreading.md) for more information.
+See [our article on hyperthreading](Hyperthreading.md) for more information.
 
 ## Multi-threading
diff --git a/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md b/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md
index 2066a5bea..4f3bd7146 100644
--- a/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md
+++ b/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md
@@ -8,7 +8,7 @@ status: deprecated
 Multithreading with OpenMP and other threading libraries is an
 important way to parallelise scientific software for faster execution
 (see our article on [Parallel
-Execution](../Getting_Started/Next_Steps/Parallel_Execution.md) for
+Execution](Parallel_Execution.md) for
Care needs to be taken when running multiple threads on the HPC to achieve best performance - getting it wrong can easily increase compute times by tens of percents, sometimes even more. This is @@ -34,7 +34,7 @@ performance, as a socket connects the processor to its RAM and other processors. A processor in each socket consists of multiple physical cores, and each physical core is split into two logical cores using a technology called -[Hyperthreading](../Scientific_Computing/Batch_Jobs/Hyperthreading.md)). +[Hyperthreading](Hyperthreading.md)). A processor also includes caches - a [cache](https://en.wikipedia.org/wiki/CPU_cache) is very fast memory diff --git a/docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md b/docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md index fae8acd70..5ffa6a34e 100644 --- a/docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md +++ b/docs/Software/Profiling_and_Debugging/Finding_Job_Efficiency.md @@ -182,7 +182,7 @@ time* the CPUs are in use. This is not enough to get a picture of overall job efficiency, as required CPU time *may vary by number of CPU*s. -The only way to get the full context, is to compare walltime performance between jobs at different scale. See [Job Scaling](../../Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md) for more details. +The only way to get the full context, is to compare walltime performance between jobs at different scale. See [Job Scaling](Job_Scaling_Ascertaining_job_dimensions.md) for more details. 
### Example diff --git a/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md b/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md index 9ddf0ac20..07ffe87ab 100644 --- a/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md +++ b/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md @@ -31,7 +31,7 @@ ascertain how much of each of these resources you will need. Asking for too little or too much, however, can both cause problems: your jobs will be at increased risk of taking a long time in the queue or failing, and -your project's [fair share score](../../Scientific_Computing/Batch_Jobs/Fair_Share.md) +your project's [fair share score](../../Batch_Computing/Fair_Share.md) is likely to suffer. Your project's fair share score will be reduced in view of compute time spent regardless of whether you obtain a result or diff --git a/docs/Storage/File_Systems_and_Quotas/File_permissions_and_groups.md b/docs/Storage/File_Systems_and_Quotas/File_permissions_and_groups.md index 531076bdf..0c59234e7 100644 --- a/docs/Storage/File_Systems_and_Quotas/File_permissions_and_groups.md +++ b/docs/Storage/File_Systems_and_Quotas/File_permissions_and_groups.md @@ -84,7 +84,7 @@ project group. directory will inherit neither the group nor the setgid bit. You probably don't want this to happen. For instructions on how to prevent it, please see our article: - [How can I let my fellow project team members read or write my files?](../../General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md) + [How can I let my fellow project team members read or write my files?](../../Getting_Started/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md) By default, the world, i.e. people not in the project team, have no privileges in respect of a project directory, with certain exceptions. 
@@ -139,6 +139,6 @@ If we agree to set up a special-purpose directory for you, we will discuss and a suitable permissions model. !!! prerequisite "See also" - - [How can I let my fellow project team members read or write my files?](../../General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md) - - [How can I give read-only team members access to my files?](../../General/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md) + - [How can I let my fellow project team members read or write my files?](../../Getting_Started/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md) + - [How can I give read-only team members access to my files?](../../Getting_Started/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md) - [filesystems and quotas](./Filesystems_and_Quotas.md) diff --git a/docs/Storage/File_Systems_and_Quotas/Filesystems_and_Quotas.md b/docs/Storage/File_Systems_and_Quotas/Filesystems_and_Quotas.md index 6fb756ad5..376e9003f 100644 --- a/docs/Storage/File_Systems_and_Quotas/Filesystems_and_Quotas.md +++ b/docs/Storage/File_Systems_and_Quotas/Filesystems_and_Quotas.md @@ -103,7 +103,7 @@ filesystem. The default per-project quotas are as described in the above table; if you require more temporary (scratch) space for your project than the default quota allows for, you can discuss your requirements with us during -[the project application process](../../General/Policy/How_we_review_applications.md), +[the project application process](../../Getting_Started/Policy/How_we_review_applications.md), or {% include "partials/support_request.html" %} at any time. 
To ensure this filesystem remains fit-for-purpose, we have a regular diff --git a/docs/Storage/Moving_files_to_and_from_the_cluster.md b/docs/Storage/Moving_files_to_and_from_the_cluster.md index 1c2a32b2e..c220445ba 100644 --- a/docs/Storage/Moving_files_to_and_from_the_cluster.md +++ b/docs/Storage/Moving_files_to_and_from_the_cluster.md @@ -15,12 +15,12 @@ Find more information on [our filesystem](./File_Systems_and_Quotas/Filesystems_ ## OnDemand Requiring only a web browser, the instructions are same whether your are connecting from a Windows, Mac or a Linux computer. -See [OnDemand how to guide](../Scientific_Computing/Interactive_computing_with_OnDemand/how_to_guide.md) for more info. +See [OnDemand how to guide](../Interactive_Computing/OnDemand/how_to_guide.md) for more info. ## Standard Terminal !!! prerequisite - Have SSH setup as described in [Standard Terminal Setup](../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md) + Have SSH setup as described in [Standard Terminal Setup](../Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md) In a local terminal the following commands can be used to: @@ -38,7 +38,7 @@ scp mahuika: !!! note - This will only work if you have set up aliases as described in - [Terminal Setup](../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md). + [Terminal Setup](../Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md). - As the term 'mahuika' is defined locally, the above commands *only works when using a local terminal* (i.e. not on Mahuika). - If you are using Windows subsystem, the root paths are different @@ -54,7 +54,7 @@ your password. ## File Managers !!! 
prerequisite
-    Have SSH setup as described in [Standard Terminal Setup](../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md)
+    Have SSH setup as described in [Standard Terminal Setup](../Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md)
 
 Most file managers can be used to connect to a remote directory simply
 by typing in the address bar provided you have an active connection to
@@ -70,19 +70,19 @@ This **does not** work for Finder (Mac default)
 ![files](../assets/images/Moving_files_to_and_from_the_cluster_1.png)
 
 If your default file manager does not support mounting over SFTP, see
-[Can I use SSHFS to mount the cluster filesystem on my local machine?](../General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md).
+[Can I use SSHFS to mount the cluster filesystem on my local machine?](../Getting_Started/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md).
 
 ## MobaXterm
 
 !!! prerequisite
-    [MobaXterm Setup Windows](../Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md)
+    [MobaXterm Setup Windows](../Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md)
 
 See [Standard Terminal](Moving_files_to_and_from_the_cluster.md#standard-terminal), [Rclone](Moving_files_to_and_from_the_cluster.md#rclone), or [Rsync](Moving_files_to_and_from_the_cluster.md#rsync) for information on how to move files to and from the HPC in the terminal.
 
 ## WinSCP
 
 !!! prerequisite
-    [WinSCP-PuTTY Setup Windows](../Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md)
+    [WinSCP-PuTTY Setup Windows](../Getting_Started/Accessing_the_HPCs/WinSCP-PuTTY_Setup_Windows.md)
 
 As WinSCP uses multiple tunnels for file transfer you will be required
 to authenticate again on your first file operation of the session. 
The diff --git a/fixlinks.py b/fixlinks.py new file mode 100644 index 000000000..c365f0a29 --- /dev/null +++ b/fixlinks.py @@ -0,0 +1,88 @@ +# python +import argparse, os, re, sys +from pathlib import Path + +MD_ROOT = Path("docs") +LINK_RE = re.compile(r'\[([^\]]+)\]\(([^)]+)\)') + +def all_md_files(root): + return [p for p in root.rglob("*.md")] + +def resolve_target(base_md, target): + # separate anchor + target_path, *anchor = target.split('#',1) + anchor = ('#' + anchor[0]) if anchor else '' + if not target_path: + return target, False # anchor-only + # if target is directory index usually index.md? + cand = (base_md.parent / target_path).resolve() + # try direct existence + if cand.exists(): + return os.path.relpath(cand, base_md.parent) + anchor, True + # try adding .md + if not target_path.endswith(".md"): + cand2 = (base_md.parent / (target_path + ".md")).resolve() + if cand2.exists(): + return os.path.relpath(cand2, base_md.parent) + anchor, True + return None, False + +def find_candidates(basename, root): + return [p for p in root.rglob("*.md") if p.name == basename] + +def main(dry_run): + md_files = all_md_files(MD_ROOT) + fixes = [] + for md in md_files: + text = md.read_text(encoding="utf8") + changed = text + for m in LINK_RE.finditer(text): + link_text = m.group(1) + target = m.group(2).strip() + if target.startswith(("http://","https://","mailto:")): + continue + if target.startswith("/"): + # absolute path inside site — leave for manual review + continue + # try to resolve relative target + newrel, ok = resolve_target(md, target) + if ok: + # target exists as given relative path - nothing to do + continue + # not found: try to find file by basename + base = os.path.basename(target.split('#',1)[0]) + if not base: + continue + candidates = find_candidates(base, MD_ROOT) + if len(candidates) == 1: + cand = candidates[0] + rel = os.path.relpath(cand, md.parent) + # preserve anchor + anchor = '' + if '#' in target: + anchor = '#' + 
target.split('#',1)[1] + new_target = rel.replace(os.path.sep, "/") + anchor + fixes.append((md, target, new_target)) + changed = changed.replace("(%s)" % target, "(%s)" % new_target) + elif len(candidates) > 1: + print("MULTIPLE CANDIDATES:", md, target, "=>", [str(p) for p in candidates]) + else: + print("NO CANDIDATE:", md, target) + if fixes and not dry_run: + # backup then write + bak = md.with_suffix(md.suffix + ".bak") + if not bak.exists(): + bak.write_bytes(text.encode("utf8")) + md.write_text(changed, encoding="utf8") + # report + if fixes: + print("\nProposed / Applied fixes:") + for f in fixes: + print(f[0], ":", f[1], "=>", f[2]) + else: + print("No fixes proposed.") + +if __name__ == "__main__": + parser = argparse.ArgumentParser() + parser.add_argument("--apply", action="store_true", help="apply fixes") + args = parser.parse_args() + main(dry_run=not args.apply) From 676c915e67d05f53d2c81ff50ce376ba9bfb50aa Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Mon, 1 Dec 2025 19:15:20 +1300 Subject: [PATCH 13/25] fix some stragler links --- docs/Announcements/.pages.yml | 2 +- docs/Announcements/Release_Notes/index.md | 2 +- ...nts_are_optimised_for_Machine_Learning_and_data_science.md | 4 ++-- docs/redirect_map.yml | 3 --- fixlinks.py | 3 +++ 5 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/Announcements/.pages.yml b/docs/Announcements/.pages.yml index 15ebe5fee..4ad570eab 100644 --- a/docs/Announcements/.pages.yml +++ b/docs/Announcements/.pages.yml @@ -3,6 +3,6 @@ nav: - Autodeletion_returning_for_scratch_filesystem.md - December_holiday_support_restrictions.md - Identity_Changes_for_Crown_Research_Institutes.md - - Known_Issues_HPC3 + - Known_Issues_HPC3.md - Release_Notes diff --git a/docs/Announcements/Release_Notes/index.md b/docs/Announcements/Release_Notes/index.md index c0f0ed2b9..b6c97fc85 100644 --- a/docs/Announcements/Release_Notes/index.md +++ b/docs/Announcements/Release_Notes/index.md @@ -19,7 +19,7 @@ be 
located under Storage, Long-Term Storage ## 3rd party applications -3rd party applications listed under [Supported Applications](../../Scientific_Computing/Supported_Applications/index.md) +3rd party applications listed under [Supported Applications](../../Software/Available_Applications/index.md) have child pages with details about the available versions on NeSI, and a reference to the vendor release notes or documentation. diff --git a/docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md b/docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md index a06864325..435500b87 100644 --- a/docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md +++ b/docs/Getting_Started/FAQs/What_software_environments_are_optimised_for_Machine_Learning_and_data_science.md @@ -18,7 +18,7 @@ include: can get right into using and exploring the several built-in packages or create custom code. -- [Jupyter on NeSI](../../Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/index.md)is +- [Jupyter on NeSI](../../Interactive_Computing/OnDemand/Apps/JupyterLab/index.md) is particularly well suited to artificial intelligence and machine learning workloads. [RStudio](../../Interactive_Computing/OnDemand/Apps/RStudio.md) and/or Conda can be accessed via Jupyter. @@ -31,7 +31,7 @@ include: create comprehensive workflows. For more information about available software and applications, you -can [browse our catalogue](../../Scientific_Computing/Supported_Applications/index.md). +can [browse our catalogue](../../Software/Available_Applications/index.md). 
As pictured in the screenshot below, you can type keywords into the catalogue's search field to browse by a specific software name or using diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml index 9f3bedf32..957021330 100644 --- a/docs/redirect_map.yml +++ b/docs/redirect_map.yml @@ -1,4 +1,3 @@ -Scientific_Computing/Terminal_Setup/Ubuntu_LTS_terminal_Windows.md: Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md hc.md: index.md hc/en-gb.md: index.md Storage/Freezer_long_term_storage.md : Storage/Long_Term_Storage/Freezer_long_term_storage.md @@ -68,12 +67,10 @@ Scientific_Computing/Batch_Jobs/Checking_resource_usage.md : Batch_Computing/Che Scientific_Computing/Batch_Jobs/Checksums.md : Batch_Computing/Checksums.md Scientific_Computing/Batch_Jobs/Fair_Share.md : Batch_Computing/Fair_Share.md Scientific_Computing/Batch_Jobs/Hardware.md : Batch_Computing/Hardware.md -Scientific_Computing/Batch_Jobs/Hyperthreading.md : Batch_Computing/Hyperthreading.md Scientific_Computing/Batch_Jobs/Job_Checkpointing.md : Batch_Computing/Job_Checkpointing.md Scientific_Computing/Batch_Jobs/Job_Limits.md : Batch_Computing/Job_Limits.md Scientific_Computing/Batch_Jobs/Job_prioritisation.md : Batch_Computing/Job_prioritisation.md Scientific_Computing/Batch_Jobs/SLURM-Best_Practice.md : Batch_Computing/SLURM-Best_Practice.md -Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md : Software/Thread_Placement_and_Thread_Affinity.md Scientific_Computing/HPC_Software_Environment/Temporary_directories.md : Batch_Computing/Temporary_directories.md Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md: Interactive_Computing/Slurm_Interactive_Sessions.md Scientific_Computing/Batch_Jobs/Slurm_Interactive_Sessions.md : Interactive_Computing/Slurm_Interactive_Sessions.md diff --git a/fixlinks.py b/fixlinks.py index c365f0a29..98da5cac1 100644 --- a/fixlinks.py +++ b/fixlinks.py @@ -1,4 +1,7 @@ # python + + +# 
Note: Partially AI generated. Not to be trusted. import argparse, os, re, sys from pathlib import Path From b0f62da251c3636f39aad82b322c85d162fdbbe8 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Mon, 1 Dec 2025 19:47:26 +1300 Subject: [PATCH 14/25] last broken links --- .../Slurm_Interactive_Sessions.md | 457 ------------------ docs/Batch_Computing/Using_GPUs.md | 8 +- .../Accessing_the_HPCs/First_Time_Login.md | 4 +- .../FAQs/Mahuika_HPC3_Differences.md | 2 +- .../Available_Applications/Lambda_Stack.md | 2 +- .../TensorFlow_on_GPUs.md | 15 +- .../Installing_Applications_Yourself.md | 4 +- docs/Software/Software_Version_Management.md | 2 +- fixlinks.py | 34 +- 9 files changed, 25 insertions(+), 503 deletions(-) delete mode 100644 docs/Batch_Computing/Slurm_Interactive_Sessions.md diff --git a/docs/Batch_Computing/Slurm_Interactive_Sessions.md b/docs/Batch_Computing/Slurm_Interactive_Sessions.md deleted file mode 100644 index e7a827795..000000000 --- a/docs/Batch_Computing/Slurm_Interactive_Sessions.md +++ /dev/null @@ -1,457 +0,0 @@ ---- -created_at: '2020-01-05T21:43:18Z' -tags: - - interactive - - scheduling -description: How to run an interactive session on the NeSI cluster. ---- - -A SLURM interactive session reserves resources on compute nodes allowing -you to use them interactively as you would the login node. - -There are two main commands that can be used to make a session, `srun` -and `salloc`, both of which use most of the same options available to -`sbatch` (see -[our Slurm Reference Sheet](../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md)). - -!!! warning - An interactive session will, once it starts, use the entire requested - block of CPU time and other resources unless earlier exited, even - if unused. To avoid unnecessary charges to your project, don't forget - to exit an interactive session once finished. - -## Using `srun --pty bash` - -`srun` will add your resource request to the queue. 
When the allocation -starts, a new bash session will start up on **one of the granted -nodes.** - -For example; - -```sh -srun --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 8 --mem-per-cpu 1500 --time 24:00:00 --pty bash -``` - -You will receive a message. - -```out -srun: job 10256812 queued and waiting for resources -``` - -And when the job starts: - -```out -srun: job 10256812 has been allocated resources -[wbn079 ~ SUCCESS ]$ -``` - -Note the host name in the prompt has changed to the compute node -`wbn079`. - -For a full description of `srun` and its options, see the -[schedmd documentation](https://slurm.schedmd.com/archive/{{config.extra.slurm}}/srun.html). - -## Using `salloc` - -`salloc` functions similarly `srun --pty bash` in that it will add your -resource request to the queue. However the allocation starts, a new bash -session will start up on **the login node.** This is useful for running -a GUI on the login node, but your processes on the compute nodes. - -For example: - -```sh -salloc --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 8 --mem-per-cpu 1500 --time 24:00:00 -``` - -You will receive a message. - -```out -salloc: Pending job allocation 10256925 -salloc: job 10256925 queued and waiting for resources -``` - -And when the job starts; - -```out -salloc: job 10256925 has been allocated resources -salloc: Granted job allocation 10256925 -[mahuika01~ SUCCESS ]$ -``` - -Note the that you are still on the login node `mahuika01`, however you -will now have permission to `ssh` to any node you have a session on . - -For a full description of `salloc` and its options, see -[here](https://slurm.schedmd.com/archive/{{config.extra.slurm}}/salloc.html). - -### Requesting a postponed start - -`salloc` lets you specify that a job is not to start before a specified -time, however the job may still be delayed if requested resources are -not available. You can request a start time using the `--begin` flag. 
- -The `--begin` flag takes either absolute or relative times as values. - -!!! warning - If you specify absolute dates and/or times, Slurm will interpret those - according to your environment's current time zone. Ensure that you - know what time zone your environment is using, for example by running - `date` in the same terminal session. - -- `--begin=16:00` means start the job no earlier than 4 p.m. today. - (Seconds are optional, but the time must be given in 24-hour - format.) -- `--begin=11/05/20` means start the job on (or after) 5 - November 2020. Note that Slurm uses American date formats. - `--begin=2020-11-05` is another Slurm-acceptable way of saying the - same thing, and possibly easier for a New Zealander. -- `--begin=2020-11-05T16:00:00` means start the job on (or after) 4 - p.m. on 5 November 2020. -- `--begin=now+1hour` means wait at least one hour before starting the - job. -- `--begin=now+60` means wait at least one minute before starting the - job. - -If no `--begin` argument is given, the default behaviour is to start as -soon as possible. - -### While you wait - -It's quite common to have to wait for some time before your interactive -session starts, even if you specified, expressly or by implication, that -the job is to start as soon as possible. - -While you're waiting, you will not have use of that shell prompt. **Do -not use `Ctrl`-`C` to get the prompt back, as doing so will cancel the -job.** If you need a shell prompt, detach your `tmux` or `screen` -session, or switch to (or open) another terminal session to the same -cluster's login node. - -In the same way, before logging out (for example, if you choose to shut -down your workstation at the end of the working day), be sure to detach -the `tmux` or `screen` session. In fact, we recommend detaching whenever -you leave your workstation unattended for a while, in case your computer -turns off or goes to sleep or its connection to the internet is -disrupted while you're away. 
- -## Running Python+JupyterLab in Interactive Mode - -!!! warning - If you are using a windows computer, this method has currently - been tested in VSCode, WSL powershell, and WSL Ubuntu. We have not - tested it yet in Putty or Mobaxterm - -To run Python+JupyterLab in interactive mode, first we need to load -your interactive session: - -```sh -srun --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 2 --mem 8G --time 24:00:00 --pty bash -``` - -Then, we need to start up Python, install JupyterLab if you dont have it -yet, and obtain the hostname and the port: - -```sh -# Load Python -module load Python - -# Install and activate a python virtual environment (or activate your -# current virtual environment). -python3 -m venv venv -source venv/bin/activate - -# Install JupyterLab -pip3 install JupyterLab - -# Select a random port -PORT=$(shuf -i8000-9999 -n1) - -# Check the hostname and port - we will need this later, you can also -# see it at the start of your prompt -hostname | cut -d'.' -f1 # <-- This is the hostname -echo $PORT # <-- This is the port -``` - -Make a note of the hostname and the port, given by the `hostname | cut -d'.' -f1` -and `echo $PORT` commands. Then, we need to start up JupyterLab: - -```sh -# Start Jupyter. This might take a minute -jupyter lab --no-browser --ip=0.0.0.0 --port=$PORT -``` - -Make a note of the second URL given by JupyterLab once it launches. 
-For instance: - -```sh -[C 2025-11-03 14:34:31.797 ServerApp] - - To access the server, open this file in a browser: - file:///home/john.doe/.local/share/jupyter/runtime/jpserver-2965439-open.html - Or copy and paste one of these URLs: - http://c003.hpc.nesi.org.nz:9339/lab?token=e6ff816a27867d88311bcc9f04141402590af48c2fd5f117 - http://127.0.0.1:9339/lab?token=e6ff816a27867d88311bcc9f04141402590af48c2fd5f117 -``` - -The `http://127.0.0.1:9339/lab?token=e6ff816a27867d88311bcc9f04141402590af48c2fd5f117` -address in this case will be our url that we will use to launch JupyterLabs - -In a second terminal on your local machine (or a second screen in tmux or screen), -type the following: - -```sh -ssh -L PORT:HOSTNAME:PORT mahuika - -#For example: -#ssh -L 9339:c003:9339 mahuika -``` - -Then, in your browser, type in the URL from before - -```sh -http://127.0.0.1:PORT/lab?token=TOKEN - -# For example: -# http://127.0.0.1:9339/lab?token=e6ff816a27867d88311bcc9f04141402590af48c2fd5f117 -``` - -You will now be able to see and work wih Python+JupyterLab in your web browser. - - -## Running Julia+Pluto.ji in Interactive Mode - -!!! warning - If you are using a windows computer, this method has currently - been tested in VSCode, WSL powershell, and WSL Ubuntu. We have not - tested it yet in Putty or Mobaxterm - -To run Julia+Pluto.ji in interactive mode, first we need to load -your interactive session: - -```sh -srun --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 2 --mem 8G --time 24:00:00 --pty bash -``` - -Then, we need to start up Julia and obtain the hostname and the port: - -```sh -# Load Julia -module load Julia - -# Select a random port -PORT=$(shuf -i8000-9999 -n1) - -# Check the hostname and port - we will need this later, you can also -# see it at the start of your prompt -hostname | cut -d'.' 
-f1 # <-- This is the hostname -echo $PORT # <-- This is the port - -# Export port to a variable name -export pluto_port=${PORT} -``` - -Make a note of the hostname and the port, given by the `hostname | cut -d'.' -f1` -and `echo $PORT` commands. Then, we need to start up Julia, install and -run Pluto.ji: - -```sh -#Start Julia -julia - -# Install Pluto.ji. This might take a minute -import Pkg; Pkg.add("Pluto") - -# Start Pluto. This might take a minute -using Pluto -Pluto.run(host="0.0.0.0",port=parse(Int, ENV["pluto_port"]),launch_browser=false) -``` - -Take a note of the information given for the URL - -```sh -[ Info: Loading... -┌ Info: -│ Go to http://0.0.0.0:9627/?secret=mXmq6659 in your browser to start writing ~ have fun! -└ -``` - -Here, we will be using `http://0.0.0.0:9627/?secret=mXmq6659` to access -Pluto. - -Next, open up a second terminal on your local machine (or a second screen -in tmux or screen), and type the following: - -```sh -ssh -L PORT:HOSTNAME:PORT mahuika - -#For example: -#ssh -L 9627:mc081:9627 mahuika -``` - -Then, in your browser, type in the URL from before - -```sh -http://0.0.0.0:PORT/?secret=SECRET - -# For example: -# http://0.0.0.0:9627/?secret=mXmq6659 -``` - -You will now be able to see and work wih Julia+Pluto in your web browser. - - -## Setting up a detachable terminal - -!!! warning - If you don't request your interactive session from within a detachable - terminal, any interruption to the controlling terminal, for example by - your computer going to sleep or losing its connection to the internet, - will permanently cancel that interactive session and remove it from - the queue, whether it has started or not. - -1. Connect to a login node. -2. Start up `tmux` or `screen`. - -## Modifying an existing interactive session - -Whether your interactive session is already running or is still waiting -in the queue, you can make a range of changes to it using the `scontrol` -command. 
Some changes are off limits for ordinary users, such as -increasing the maximum permitted wall time, or unsafe, like decreasing -the memory request. But many other changes are allowed. - -### Postponing the start of an interactive job - -Suppose you submitted an interactive job just after lunch, and it's -already 4 p.m. and you're leaving in an hour. You decide that even if -the job starts now, you won't have time to do everything you need to do -before the office shuts and you have to leave. Even worse, the job might -start at 11 p.m. after you've gone to bed, and you'll get to work at -9:00 the next morning and find that it has wasted ten wall-hours of -time. - -Slurm offers an easy solution: Identify the job, and use `scontrol` to -postpone its start time. - -!!! note - Job IDs are unique to each cluster but not across the whole of NeSI. - Therefore, `scontrol` must be run on a node belonging to the cluster - where the job is queued. - -The following command will delay the start of the job with numeric ID -12345678 until (at the earliest) 9:30 a.m. the next day: - -```sh -scontrol update jobid=12345678 StartTime=tomorrowT09:30:00 -``` - -This variation, if run on a Friday, will delay the start of the same job -until (at the earliest) 9:30 a.m. on Monday: - -```sh -scontrol update jobid=12345678 StartTime=now+3daysT09:30:00 -``` - -!!! warning - Don't just set `StartTime=tomorrow` with no time specification unless - you like the idea of your interactive session starting at midnight or - in the wee hours of the morning. - -### Bringing forward the start of an interactive job - -In the same way, you can use scontrol to set a job's start time to -earlier than its current value. A likely application is to allow a job -to start immediately even though it stood postponed to a later time: - -```sh -scontrol update jobid=12345678 StartTime=now -``` - -### Other changes using `scontrol` - -There are many other changes you can make by means of `scontrol`. 
For -further information, please see -[the `scontrol` documentation](https://slurm.schedmd.com/archive/{{config.extra.slurm}}/scontrol.html). - -## Modifying multiple interactive sessions at once - -In the same way, if you have several interactive sessions waiting to -start on the same cluster, you might want to postpone them all using a -single command. To do so, you will first need to identify them, hence -the earlier suggestion to something specific to interactive jobs in the -job name. - -For example, if all your interactive job names start with the text "InteractiveJob", -you could do this: - -```sh -# -u $(whoami) restricts the search to my jobs only. -# The --states=PD option restricts the search to pending jobs only. -# -squeue -u $(whoami) --states=PD -o "%A %j" | grep "InteractiveJob" -``` - -The above command will return a list of your jobs whose names *start* -with the text "InteractiveJob". In this respect, it's more flexible than the `-n` -option to `squeue`, which requires the entire job name string in order -to identify a match. - -In order to use `scontrol`, we need to throw away all of the line except -for the job ID, so let's use `awk` to do this, and send the output to -`scontrol` via `xargs`: - -```sh -squeue -u $(whoami) --states=PD -o "%A %j" | grep "InteractiveJob" | \ -awk '{print $1}' | \ -xargs -I {} scontrol update jobid={} StartTime=tomorrowT09:30:00 -``` - - - -## Cancelling an interactive session - -You can cancel a pending interactive session by attaching the relevant -session, putting the job in the foreground (if necessary) and pressing -`Ctrl`-`C` on your keyboard. 
- -To cancel all your queued interactive sessions on a cluster in one fell -swoop, a command like the following should do the trick: - -```sh -squeue -u $(whoami) --states=PD -o "%A %j" | grep "InteractiveJob" | \ -awk '{print $1}' | \ -xargs -I {} scancel {} -``` - -To cancel all your running interactive sessions on a cluster in one fell -swoop, a command like the following should do the trick: - -```sh -squeue -u $(whoami) --states=R -o "%A %j" | grep "InteractiveJob" | \ -awk '{print $1}' | \ -xargs -I {} scancel {} -``` - -If you frequently use interactive jobs, we recommend doing this before -you go away on leave or fieldwork or other lengthy absence. diff --git a/docs/Batch_Computing/Using_GPUs.md b/docs/Batch_Computing/Using_GPUs.md index d2aa6dc96..0d78cc67d 100644 --- a/docs/Batch_Computing/Using_GPUs.md +++ b/docs/Batch_Computing/Using_GPUs.md @@ -28,7 +28,7 @@ the following option in the header of your submission script: You can specify the type and number of GPU you need using the following syntax -``` sl +```sl #SBATCH --gpus-per-node=: ``` @@ -81,7 +81,7 @@ It is recommended to specify the exact GPU type required; otherwise, the job may You can also use the `--gpus-per-node`option in -[Slurm interactive sessions](./Slurm_Interactive_Sessions.md), +[Slurm interactive sessions](../Interactive_Computing/Slurm_Interactive_Sessions.md), with the `srun` and `salloc` commands. For example: ``` sh @@ -111,7 +111,7 @@ duration of 30 minutes. 
## Load CUDA and cuDNN modules To use an Nvidia GPU card with your application, you need to load the -driver and the CUDA toolkit via the [environment modules](../../Software/Available_Applications/index.md) +driver and the CUDA toolkit via the [environment modules](../Software/Available_Applications/index.md) mechanism: ``` sh @@ -232,7 +232,7 @@ applications: - [ABAQUS](../Software/Available_Applications/ABAQUS.md#examples) - [GROMACS](../Software/Available_Applications/GROMACS.md) - [Lambda Stack](../Software/Available_Applications/Lambda_Stack.md) -- [Matlab](../../Software/Available_Applications/MATLAB.md#using-gpus) +- [Matlab](../Software/Available_Applications/MATLAB.md#using-gpus) - [TensorFlow on GPUs](../Software/Available_Applications/TensorFlow_on_GPUs.md) And programming toolkits: diff --git a/docs/Getting_Started/Accessing_the_HPCs/First_Time_Login.md b/docs/Getting_Started/Accessing_the_HPCs/First_Time_Login.md index 9ef529346..5c1b7ff52 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/First_Time_Login.md +++ b/docs/Getting_Started/Accessing_the_HPCs/First_Time_Login.md @@ -20,7 +20,7 @@ tags: NeSI’s services and technologies are now hosted by REANNZ as a national eResearch Infrastructure Platform. Some of our tools (as pictured in the screenshot below) will retain a ‘NeSI’ brand as we transition our services and develop a longer-term strategy for this integrated platform. -1. Log into [my.nesi](my.nesi.org.nz) +1. Log into [my.nesi](http://my.nesi.org.nz) 2. Go to [**OnDemand**](https://ondemand.nesi.org.nz/). It will automatically take you to the Tuakiri login screen. ![alt text](../../assets/images/ondemand_login_0.png) @@ -28,7 +28,7 @@ tags: 3. Select your affiliated institution, and log in using your institutional account. Example below shows the University of Auckland login screen. ![alt text](../../assets/images/ondemand_login_1.png) -4. 
If you haven't logged into OnDemand or our HPC platforms before, you wil need to set up new authentication credentials. This is in addition to your institutional MFA process. +4. If you haven't logged into OnDemand or our HPC platforms before, you will need to set up new authentication credentials. This is in addition to your institutional MFA process. ![alt text](../../assets/images/ondemand_login_2.png) !!! note diff --git a/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md b/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md index c4ea994b8..ef65896a7 100644 --- a/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md +++ b/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md @@ -47,7 +47,7 @@ There are snapshots for short-term recovery of deleted files, in `/home/.snapsho ## Access via Web browser -[OnDemand](../../Scientific_Computing/Interactive_computing_with_OnDemand/index.md) has replaced JupyterHub. +[OnDemand](../../Interactive_Computing/OnDemand/index.md) has replaced JupyterHub. OnDemand is more flexible and can deliver more GUI based apps. ## Software diff --git a/docs/Software/Available_Applications/Lambda_Stack.md b/docs/Software/Available_Applications/Lambda_Stack.md index 9215550f1..b23db19b5 100644 --- a/docs/Software/Available_Applications/Lambda_Stack.md +++ b/docs/Software/Available_Applications/Lambda_Stack.md @@ -20,7 +20,7 @@ official have provided some pre-built container images (under */opt/nesi/containers/lambda-stack/*) or you can build your own. In the following sections, we will show you how to run Lambda Stack in a Slurm job or interactively via -[JupyterLab](../../Scientific_Computing/Interactive_computing_with_OnDemand/Apps/JupyterLab/index.md). +[JupyterLab](../../Interactive_Computing/OnDemand/Apps/JupyterLab/index.md). 
You can list the available Lambda Stack version on NeSI by running: diff --git a/docs/Software/Available_Applications/TensorFlow_on_GPUs.md b/docs/Software/Available_Applications/TensorFlow_on_GPUs.md index 61ae60c2e..58932e578 100644 --- a/docs/Software/Available_Applications/TensorFlow_on_GPUs.md +++ b/docs/Software/Available_Applications/TensorFlow_on_GPUs.md @@ -20,19 +20,17 @@ running TensorFlow with GPU support. !!! tip "See also" - To request GPU resources using `--gpus-per-node` option of Slurm, - see the [GPU use on - NeSI](../../Batch_Computing/Using_GPUs.md) + see the [GPU use on NeSI](../../Batch_Computing/Using_GPUs.md) documentation page. - To run TensorFlow on CPUs instead, have a look at our article - [TensorFlow on - CPUs](TensorFlow_on_CPUs.md) + [TensorFlow on CPUs](TensorFlow_on_CPUs.md) for tips on how to configure TensorFlow and Slurm for optimal performance. ## Use NeSI modules TensorFlow is available on Mahuika as an -[environment module](../../Getting_Started/Next_Steps/The_HPC_environment.md) +[environment module](index.md) ``` sh module load TensorFlow/2.4.1-gimkl-2020a-Python-3.8.2 @@ -182,16 +180,15 @@ Apptainer containers. For TensorFlow, we recommend using the [official container provided by NVIDIA](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow). More information about using Apptainer with GPU enabled containers is -available on the [NVIDIA GPU -Containers](../Containers/NVIDIA_GPU_Containers.md) +available on the [NVIDIA GPU Containers](../Containers/NVIDIA_GPU_Containers.md) support page. 
## Specific versions for A100 Here are the recommended options to run TensorFlow on the A100 GPUs: -- If you use TensorFlow 1, use the TF1 [container provided by - NVIDIA](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow), +- If you use TensorFlow 1, use the TF1 + [container provided by NVIDIA](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow), which comes with a version of TensorFlow 1.15 compiled specifically to support the A100 GPUs (Ampere architecture). Other official Python packages won't support the A100, triggering various crashes diff --git a/docs/Software/Installing_Applications_Yourself.md b/docs/Software/Installing_Applications_Yourself.md index a7ed372d0..af351d939 100644 --- a/docs/Software/Installing_Applications_Yourself.md +++ b/docs/Software/Installing_Applications_Yourself.md @@ -8,7 +8,7 @@ tags: Before installing your own applications, first check; - The software you want is not already installed. `module spider ` can be used to search software, -or see [Supported Applications](../Scientific_Computing/Supported_Applications/index.md). +or see [Supported Applications](index.md). - If you are looking for a new version of existing software, {% include "partials/support_request.html" %} and we will install the new version. - If you would like us to install something for you or help you install something yourself {% include "partials/support_request.html" %}. If the software is popular, We may decide to install it centrally, in which case there will be no additional steps for you. Otherwise the software will be installed in your project directory, in which case it is your responsibility to maintain. 
@@ -25,7 +25,7 @@ How to add package to an existing module will vary based on the language in ques - [Python](Available_Applications/Python.md#python-packages) - [R](Available_Applications/R.md#dealing-with-packages) - [Julia](Available_Applications/Julia.md#installing-julia-packages) -- [MATLAB](../Scientific_Computing/Supported_Applications/MATLAB.md#adding-support-packages) +- [MATLAB](Available_Applications/MATLAB.md#adding-support-packages) For other languages check if we have additional documentation for it in our [application documentation](../Scientific_Computing/Supported_Applications/index.md). diff --git a/docs/Software/Software_Version_Management.md b/docs/Software/Software_Version_Management.md index 978a5289f..f08cac2c0 100644 --- a/docs/Software/Software_Version_Management.md +++ b/docs/Software/Software_Version_Management.md @@ -12,7 +12,7 @@ zendesk_section_id: 360000040056 Much of the software installed on the NeSI cluster have multiple versions available as shown on the -[supported applications page](../Scientific_Computing/Supported_Applications/index.md) +[supported applications page](index.md) or by using the `module avail` or `module spider` commands. If only the application name is given a default version will be chosen, diff --git a/fixlinks.py b/fixlinks.py index 98da5cac1..db6490afe 100644 --- a/fixlinks.py +++ b/fixlinks.py @@ -1,6 +1,4 @@ -# python - - +# Tries to replace all internal broken links with less broken ones. # Note: Partially AI generated. Not to be trusted. import argparse, os, re, sys from pathlib import Path @@ -11,23 +9,18 @@ def all_md_files(root): return [p for p in root.rglob("*.md")] -def resolve_target(base_md, target): +def resolve_path(base_md, target): # separate anchor target_path, *anchor = target.split('#',1) anchor = ('#' + anchor[0]) if anchor else '' if not target_path: - return target, False # anchor-only + return False # anchor-only # if target is directory index usually index.md? 
cand = (base_md.parent / target_path).resolve() # try direct existence if cand.exists(): - return os.path.relpath(cand, base_md.parent) + anchor, True - # try adding .md - if not target_path.endswith(".md"): - cand2 = (base_md.parent / (target_path + ".md")).resolve() - if cand2.exists(): - return os.path.relpath(cand2, base_md.parent) + anchor, True - return None, False + return True + return False def find_candidates(basename, root): return [p for p in root.rglob("*.md") if p.name == basename] @@ -39,17 +32,10 @@ def main(dry_run): text = md.read_text(encoding="utf8") changed = text for m in LINK_RE.finditer(text): - link_text = m.group(1) target = m.group(2).strip() - if target.startswith(("http://","https://","mailto:")): - continue - if target.startswith("/"): - # absolute path inside site — leave for manual review + if target.startswith(("http://","https://","mailto:","/")): continue - # try to resolve relative target - newrel, ok = resolve_target(md, target) - if ok: - # target exists as given relative path - nothing to do + if resolve_path(md, target): continue # not found: try to find file by basename base = os.path.basename(target.split('#',1)[0]) @@ -71,10 +57,6 @@ def main(dry_run): else: print("NO CANDIDATE:", md, target) if fixes and not dry_run: - # backup then write - bak = md.with_suffix(md.suffix + ".bak") - if not bak.exists(): - bak.write_bytes(text.encode("utf8")) md.write_text(changed, encoding="utf8") # report if fixes: @@ -86,6 +68,6 @@ def main(dry_run): if __name__ == "__main__": parser = argparse.ArgumentParser() - parser.add_argument("--apply", action="store_true", help="apply fixes") + parser.add_argument("--apply", action="store_true", help="apply fixes to disk (otherwise do a dry run)") args = parser.parse_args() main(dry_run=not args.apply) From 4ce9b5962d6d6443b50f8fc53554d2cf62473e70 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Wed, 3 Dec 2025 11:58:49 +1300 Subject: [PATCH 15/25] Fix contracts and billing --- .../.pages.yml | 0 
.../Billing_process.md | 0 .../Types_of_contracts.md | 0 docs/redirect_map.yml | 17 ++++++++++++++++- 4 files changed, 16 insertions(+), 1 deletion(-) rename docs/Service_Subscriptions/{Contracts_and_billing_processes => Contracts_&_Billing}/.pages.yml (100%) rename docs/Service_Subscriptions/{Contracts_and_billing_processes => Contracts_&_Billing}/Billing_process.md (100%) rename docs/Service_Subscriptions/{Contracts_and_billing_processes => Contracts_&_Billing}/Types_of_contracts.md (100%) diff --git a/docs/Service_Subscriptions/Contracts_and_billing_processes/.pages.yml b/docs/Service_Subscriptions/Contracts_&_Billing/.pages.yml similarity index 100% rename from docs/Service_Subscriptions/Contracts_and_billing_processes/.pages.yml rename to docs/Service_Subscriptions/Contracts_&_Billing/.pages.yml diff --git a/docs/Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md b/docs/Service_Subscriptions/Contracts_&_Billing/Billing_process.md similarity index 100% rename from docs/Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md rename to docs/Service_Subscriptions/Contracts_&_Billing/Billing_process.md diff --git a/docs/Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md b/docs/Service_Subscriptions/Contracts_&_Billing/Types_of_contracts.md similarity index 100% rename from docs/Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md rename to docs/Service_Subscriptions/Contracts_&_Billing/Types_of_contracts.md diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml index 957021330..0e340d8ef 100644 --- a/docs/redirect_map.yml +++ b/docs/redirect_map.yml @@ -150,4 +150,19 @@ Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_ Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md : Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC.md Scientific_Computing/Training/Webinars.md : 
Getting_Started/Getting_Help/Webinars.md Scientific_Computing/Training/Workshops.md : Getting_Started/Getting_Help/Workshops.md -Batch_Computing/Hyperthreading.md : Software/Parallel_Computing/Hyperthreading.md +Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project: Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project +General/NeSI_Policies/Proposal_Development_allocations +Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project +General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members +General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members +Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project +General/Announcements/HPC3/ +Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project/ +General/Announcements/HPC3/ + + + + + +Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md : Service_Subscriptions/Contracts_&_Billing/Billing_process.md +Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md : Service_Subscriptions/Contracts_&_Billing/Types_of_contracts.md From b8a97344a99f2c6a4f539ab44924396d13ca4431 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Wed, 3 Dec 2025 12:01:56 +1300 Subject: [PATCH 16/25] delete.bak --- .../Jupyter_kernels_Manual_management.md.bak | 286 ------------------ ...er_kernels_Tool_assisted_management.md.bak | 160 ---------- 2 files changed, 446 deletions(-) delete mode 100644 docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md.bak delete mode 100644 docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md.bak diff --git a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md.bak b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md.bak deleted file mode 100644 index 48c6e4c60..000000000 --- 
a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Manual_management.md.bak +++ /dev/null @@ -1,286 +0,0 @@ ---- -created_at: 2025-01-24 -description: How to set up your own custom kernals for use on NeSI JupyterHub -tags: - - JupyterHub - - Python - - R ---- - -# Jupyter kernels - Manual management - -!!! warning - - NeSI OnDemand is in development and accessible to early access users only. - If you are interested in helping us test it please [contact us](mailto:support@nesi.org.nz). - -## Introduction - -Jupyter kernels execute the code that you write. NeSI provides a number of -Python and R kernels by default, which can be selected from the Launcher. - -Many packages are preinstalled in our default Python and R environments -and these can be extended further as described on the -[Python](../../../../Scientific_Computing/Supported_Applications/Python.md) and -[R](../../../../Scientific_Computing/Supported_Applications/R.md) support -pages. - -## Adding a custom Python kernel - -!!! note "see also" - See the [Jupyter kernels - Tool-assisted management](./Jupyter_kernels_Tool_assisted_management.md) - page for the **preferred** way to register kernels, which uses the - `nesi-add-kernel` command line tool to automate most of these manual - steps. - -You can configure custom Python kernels for running your Jupyter -notebooks. This could be necessary and/or recommended in some -situations, including: - -- if you wish to load a different combination of environment modules - than those we load in our default kernels -- if you would like to activate a virtual environment or conda - environment before launching the kernel - -The following example will create a custom kernel based on the -Miniconda3 environment module (but applies to other environment modules -too). 
- -In a terminal run the following commands to load a Miniconda environment -module: - -``` sh -module purge -module load Miniconda3 -``` - -Now create a conda environment named "my-conda-env" using Python 3.6. -The *ipykernel* Python package is required but you can change the names -of the environment, version of Python and install other Python packages -as required. - -``` sh -conda create --name my-conda-env python=3.11 -source $(conda info --base)/etc/profile.d/conda.sh -conda activate my-conda-env -conda install ipykernel -# you can pip/conda install other packages here too -``` - -Now create a Jupyter kernel based on your new conda environment: - -``` sh -python -m ipykernel install --user --name my-conda-env --display-name="My Conda Env" -``` - -We must now edit the kernel to load the required NeSI environment -modules before the kernel is launched. Change to the directory the -kernelspec was installed to -`~/.local/share/jupyter/kernels/my-conda-env`, (assuming you kept -`--name my-conda-env` in the above command): - -``` sh -cd ~/.local/share/jupyter/kernels/my-conda-env -``` - -Now create a wrapper script, called `wrapper.sh`, with the following -contents: - -``` sh -#!/usr/bin/env bash - -# load required modules here -module purge -module load Miniconda3 - -# activate conda environment -source $(conda info --base)/etc/profile.d/conda.sh -conda deactivate # workaround for https://github.com/conda/conda/issues/9392 -conda activate my-conda-env - -# run the kernel -exec python $@ -``` - -Make the wrapper script executable: - -``` sh -chmod +x wrapper.sh -``` - -Next edit the *kernel.json* to change the first element of the argv list -to point to the wrapper script we just created. 
The file should look -like this (change <username> to your NeSI username): - -```json -{ - "argv": [ - "/home//.local/share/jupyter/kernels/my-conda-env/wrapper.sh", - "-m", - "ipykernel_launcher", - "-f", - "{connection_file}" - ], - "display_name": "My Conda Env", - "language": "python" -} -``` - -After refreshing JupyterLab your new kernel should show up in the -Launcher as "My Conda Env". - -## Sharing a Python kernel with your project team members - -You can also configure a shared Python kernel that others with access to -the same NeSI project will be able to load. If this kernel is based on a -Python virtual environment, Conda environment or similar, you must make -sure it also exists in a shared location (other users cannot see your -home directory). - -The example below shows creating a shared Python kernel based on the -`Python/3.8.2-gimkl-2020a` module and also loads the -`ETE/3.1.1-gimkl-2020a-Python-3.8.2` module. - -In a terminal run the following commands to load the Python and ETE -environment modules: - -``` sh -module purge -module load Python/3.8.2-gimkl-2020a -module load ETE/3.1.1-gimkl-2020a-Python-3.8.2 -``` - -Now create a Jupyter kernel within your project directory, based on your -new virtual environment: - -``` sh -python -m ipykernel install --prefix=/nesi/project//.jupyter --name shared-ete-env --display-name="Shared ETE Env" -``` - -Next change to the kernel directory, which for the above command would -be: - -``` sh -cd /nesi/project//.jupyter/share/jupyter/kernels/shared-ete-env -``` - -Create a wrapper script, *wrapper.sh*, with the following contents: - -``` sh -#!/usr/bin/env bash - -# load necessary modules here -module purge -module load Python/3.8.2-gimkl-2020a -module load ETE/3.1.1-gimkl-2020a-Python-3.8.2 - -# run the kernel -exec python $@ -``` - -Note we also load the ETE module so that we can use that from our -kernel. 
- -Make the wrapper script executable: - -``` sh -chmod +x wrapper.sh -``` - -Next, edit the *kernel.json* to change the first element of the argv -list to point to the wrapper script we just created. The file should -look like this (change <project\_code> to your NeSI project code): - -```json -{ - "argv": [ - "/nesi/project//.jupyter/share/jupyter/kernels/shared-ete-env/wrapper.sh", - "-m", - "ipykernel_launcher", - "-f", - "{connection_file}" - ], - "display_name": "Shared Conda Env", - "language": "python" -} -``` - -After refreshing JupyterLab your new kernel should show up in the -Launcher as "Shared Virtual Env". - -## Custom kernel in a Singularity container - -An example showing setting up a custom kernel running in a Singularity -container can be found on our [Lambda Stack](../../../../Scientific_Computing/Supported_Applications/Lambda_Stack.md#lambda-stack-via-jupyter) -support page. - -## Adding a custom R kernel - -You can configure custom R kernels for running your Jupyter notebooks. -The following example will create a custom kernel based on the -R/3.6.2-gimkl-2020a environment module and will additionally load an -MPFR environment module (e.g. if you wanted to load the Rmpfr package). - -In a terminal run the following commands to load the required -environment modules: - -``` sh -module purge -module load IRkernel/1.1.1-gimkl-2020a-R-3.6.2 -module load Python/3.8.2-gimkl-2020a -``` - -The IRkernel module loads the R module as a dependency and provides the -R kernel for Jupyter. Python is required to install the kernel (since -Jupyter is written in Python). - -Now create an R Jupyter kernel based on your new conda environment: - -``` sh -R -e "IRkernel::installspec(name='myrwithmpfr', displayname = 'R with MPFR', user = TRUE)" -``` - -We must now to edit the kernel to load the required NeSI environment -modules when the kernel is launched. 
Change to the directory the -kernelspec was installed to -(~/.local/share/jupyter/kernels/myrwithmpfr, assuming you kept `--name -myrwithmpfr` in the above command): - -``` sh -cd ~/.local/share/jupyter/kernels/myrwithmpfr -``` - -Now create a wrapper script in that directory, called *wrapper.sh*, with -the following contents: - -``` sh -#!/usr/bin/env bash - -# load required modules here -module purge -module load MPFR/4.0.2-GCCcore-9.2.0 -module load IRkernel/1.1.1-gimkl-2020a-R-3.6.2 - -# run the kernel -exec R $@ -``` - -Make the wrapper script executable: - -``` sh -chmod +x wrapper.sh_ - "argv": [ - "/home//.local/share/jupyter/kernels/myrwithmpfr/wrapper.sh", - "--slave", - "-e", - "IRkernel::main()", - "--args", - "{connection_file}" - ], - "display_name": "R with MPFR", - "language": "R" -} -``` - -After refreshing JupyterLab your new R kernel should show up in the -Launcher as "R with MPFR". diff --git a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md.bak b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md.bak deleted file mode 100644 index 21f48afe4..000000000 --- a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/Jupyter_kernels_Tool_assisted_management.md.bak +++ /dev/null @@ -1,160 +0,0 @@ ---- -title: Jupyter kernels - Tool-assisted management -description: -tags: - - JupyterHub - - Python - - R ---- - -## Introduction - -Jupyter can execute code in different computing environments using -*kernels*. Some kernels are provided by default (Python, R, etc.) but -you may want to register your computing environment to use it in -notebooks. For example, you may want to load a specific environment -module in your kernel or use a Conda environment. 
- -To register a Jupyter kernel, you can follow the steps highlighted in -the [Jupyter kernels - Manual management](./Jupyter_kernels_Manual_management.md) -or use the `nesi-add-kernel` tool provided within the [Jupyter on NeSI service](https://jupyter.nesi.org.nz). -This page details the latter option, which we recommend. - -## Getting started - -First you need to open a terminal. It can be from a session on Jupyter -on NeSI or from a regular ssh connection on Mahuika login node. If you -use the ssh option, make sure to load the JupyterLab module to have -access to the `nesi-add-kernel` tool: - -``` sh -module purge # remove all previously loaded modules -module load JupyterLab -``` - -Then, to list all available options, use the `-h` or `--help` options as -follows: - -``` sh -nesi-add-kernel --help -``` - -Here is an example to add a TensorFlow kernel, using NeSI’s module: - -``` sh -nesi-add-kernel tf_kernel TensorFlow/2.8.2-gimkl-2022a-Python-3.10.5 -``` - -!!! warning - The name given to your kernel in `nesi-add-kernel KERNEL_NAME MODULE` must only include lowercase letters, underscores, and dashes. - -and to share the kernel with other members of your NeSI project: - -``` sh -nesi-add-kernel --shared tf_kernel_shared TensorFlow/2.8.2-gimkl-2022a-Python-3.10.5 -``` - -To list all the installed kernels, use the following command: - -``` sh -jupyter-kernelspec list -``` - -and to delete a specific kernel: - -``` sh -jupyter-kernelspec remove -``` - -where `` stands for the name of the kernel to delete. 
- -## Conda environment - -First, make sure the `JupyterLab` module is loaded: - -``` sh -module purge -module load JupyterLab -``` - -To add a Conda environment created using -`conda create -p `, use: - -``` sh -nesi-add-kernel my_conda_env -p -``` - -otherwise if created using `conda create -n `, use: - -``` sh -nesi-add-kernel my_conda_env -n -``` - -## Virtual environment - -If you want to use a Python virtual environment, don’t forget to specify -which Python module you used to create it. - -For example, if we create a virtual environment named `my_test_venv` -using Python 3.10.5: - -``` sh -module purge -module load Python/3.10.5-gimkl-2022a -python -m venv my_test_venv -``` - -to create the corresponding `my_test_kernel` kernel, we need to use the -command: - -``` sh -module purge -module load JupyterLab -nesi-add-kernel my_test_kernel Python/3.10.5-gimkl-2022a --venv my_test_venv -``` - -## Singularity container - -!!! danger - - This section has not been tested on NeSI OnDemand - -To use a Singularity container, use the `-c` or `--container` options as -follows: - -``` sh -module purge -module load JupyterLab -nesi-add-kernel my_test_kernel -c -``` - -where `` is a path to your container image. - -Note that your container **must** have the `ipykernel` Python package -installed in it to be able to work as a Jupyter kernel. - -Additionally, you can use the `--container-args` option to pass more -arguments to the `singularity exec` command used to instantiate the -kernel. - -Here is an example instantiating a NVIDIA NGC container as a kernel. 
-First, we need to pull the container: - -``` sh -module purge -module load Singularity/3.11.3 -singularity pull nvidia_tf.sif docker://nvcr.io/nvidia/tensorflow:21.07-tf2-py3 -``` - -then we can instantiate the kernel, using the `--nv` singularity flag to -ensure that the GPU will be found at runtime (assuming our Jupyter -session has access to a GPU): - -``` sh -module purge -module load JupyterLab -nesi-add-kernel nvidia_tf -c nvidia_tf.sif --container-args "'--nv'" -``` - -Note that the double-quoting of `--nv` is needed to properly pass the -options to `singularity exec`. From 8f0e43010d8447d806b8c025913d7a15d8da6877 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Wed, 3 Dec 2025 12:02:32 +1300 Subject: [PATCH 17/25] 1 more bak --- .../OnDemand/Apps/JupyterLab/index.md.bak | 106 ------------------ 1 file changed, 106 deletions(-) delete mode 100644 docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md.bak diff --git a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md.bak b/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md.bak deleted file mode 100644 index a0e8b0682..000000000 --- a/docs/Interactive_Computing/OnDemand/Apps/JupyterLab/index.md.bak +++ /dev/null @@ -1,106 +0,0 @@ -# JupyterLab via OnDemand - - -## Introduction - -NeSI supports the use of [Jupyter](https://jupyter.org/) for interactive computing. -Jupyter allows you to create notebooks that contain live code, -equations, visualisations and explanatory text. There are many uses for -Jupyter, including data cleaning, analytics and visualisation, machine -learning, numerical simulation, managing -[Slurm job submissions](../../../../Getting_Started/Next_Steps/Submitting_your_first_job.md) -and workflows and much more. - -## Accessing Jupyter on NeSI - - -Jupyter at NeSI can be accessed via [NeSI OnDemand](https://ondemand.nesi.org.nz/) and launching the JupyterLab application there. -For more details see the [how-to guide](../../how_to_guide.md). 
- -## Jupyter user interface - -### JupyterLab - -[JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) -is the next generation of the Jupyter user interface and provides a way -to use notebooks, text editor, terminals and custom components together. - -### filesystems - -Your JupyterLab session will start in your home directory the first time you launch it. On subsequent launches it may remember your previous working directory and start there. - -NeSI will auto generate a directory within your home folder called `00_nesi_projects`, you will find symbolic links to projects and nobackup directories of your active projects. We do not recommend that you store files in this initial directory because next time you log into OnDemand the directory will be repopulated based on your user groups, instead switch to your home, project or nobackup directories first. - -If you wish to not have this folder recreated upon login then please place the following file in your HOME directory `.00_nesi_projects.stop` and this will stop the folder from being recreated upon login. - -### Jupyter kernels - -NeSI provides some default Python and R kernels that are available to all users and are based on some -of environment modules. It's also possible to create additional kernels that are visible only to -you (they can optionally be made visible to other members of a specific NeSI project that you belong to). 
See: - -- [Jupyter kernels - Tool-assisted management](./Jupyter_kernels_Tool_assisted_management.md) (recommended) -- [Jupyter kernels - Manual management](./Jupyter_kernels_Manual_management.md) - -### Jupyter terminal - -Some things to note about the JupyterLab terminal are: - -- when you launch the terminal application some environment modules - are already loaded, so you may want to run `module purge` -- processes launched directly in the JupyterLab terminal will probably - be killed when you Jupyter session times out - -## Installing JupyterLab extensions - -JupyterLab supports many extensions that enhance its functionality. At -NeSI we package some extensions into the default JupyterLab environment. -Keep reading if you need to install extensions yourself. - -Note, there were some changes related to extensions in JupyterLab 3.0 -and there are now multiple methods to install extensions. More details -about JupyterLab extensions can be found -[here](https://jupyterlab.readthedocs.io/en/stable/user/extensions.html). -Check the extension's documentation to find out the supported -installation method for that particular extension. - -On NeSI OnDemand we support installing prebuilt extensions (i.e. pip installable -packages) from the terminal application. -First ensure you have the latest JupyterLab module loaded: - -```sh -module purge -module load JupyterLab -``` - -Then install the extension by running (the upstream documentation for the package -you are installing should specify the "packagename" that you should use): - -``` sh -pip install --user -``` - -For example, the [Dask extension](https://github.com/dask/dask-labextension#jupyterlab-4x) -can be installed with the following: - -``` sh -pip install --user dask-labextension -``` - -Note that we need to specify the `--user` option on the `pip install` command because you don't -have permission to install packages in the system directory. 
Adding `--user` installs the package -into your home directory instead. - -## Log files - -The log file of a JupyterLab session is saved in the OnDemand session directory -(a subdirectory under the *ondemand* directory in your home directory). -You can reach the session directory in the OnDemand file browser by clicking -the link in the session card under "My Interactive Sessions" in the NeSI -OnDemand web interface. The log file is named *session.log* within the session -directory. - -## External documentation - -- [Jupyter](https://jupyter.readthedocs.io/en/latest/) -- [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) From d08c88b593793240096536209f3642432c647e83 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Wed, 3 Dec 2025 12:50:23 +1300 Subject: [PATCH 18/25] Added check for capitalisation and line in style guide. @janamakar --- checks/run_meta_check.py | 12 +++++++++++- docs/NEWPAGE.md | 2 ++ requirements.in | 1 + requirements.txt | 24 ++++++++++++------------ 4 files changed, 26 insertions(+), 13 deletions(-) diff --git a/checks/run_meta_check.py b/checks/run_meta_check.py index 20ac10cee..af83556ce 100755 --- a/checks/run_meta_check.py +++ b/checks/run_meta_check.py @@ -11,6 +11,7 @@ import yaml import os import time +from titlecase import titlecase from pathlib import Path # Ignore files if they match this regex @@ -27,7 +28,7 @@ MAX_TITLE_LENGTH = 28 # As font isn't monospace, this is only approx MAX_HEADER_LENGTH = 32 # minus 2 per extra header level -MIN_TAGS = 2 +MIN_TAGS = 1 RANGE_SIBLING = [4, 8] DOC_ROOT = "docs" @@ -324,6 +325,14 @@ def title_length(): Try to keep it under {MAX_TITLE_LENGTH} characters to avoid word wrapping in the nav.", } +def title_capitalisation(): + correct_title = titlecase(title) + if title != correct_title: + yield { + "line": _get_lineno(r"^title:.*$"), + "message": f"Title '{title}' uses incorrect capitalisation. 
\ '{correct_title}' is preferred", } def minimum_tags(): if "tags" not in meta or not isinstance(meta["tags"], list): @@ -407,6 +416,7 @@ def dynamic_slurm_link(): ENDCHECKS = [ title_redundant, title_length, + title_capitalisation, meta_missing_description, meta_unexpected_key, minimum_tags, diff --git a/docs/NEWPAGE.md b/docs/NEWPAGE.md index 84ec8668d..f5d22050a 100644 --- a/docs/NEWPAGE.md +++ b/docs/NEWPAGE.md @@ -92,6 +92,8 @@ By default, the filename will be use as title of the article/category. Try to keep your title short enough that it does not 'wrap' (become more than one line) in the nav, this usually happens around 24-ish characters however this will vary depending on the letters being used. +Use [Title Case](https://apastyle.apa.org/style-grammar-guidelines/capitalization/title-case): capitalise every word except minor words such as articles, short prepositions, and conjunctions. + !!! tip "File Name hygiene" Regular 'snake_case' naming conventions should be used for articles/categories, i.e. no non-alphanumeric characters (except `_` and `-`). 
diff --git a/requirements.in b/requirements.in index c44594ca3..38b04a3cb 100644 --- a/requirements.in +++ b/requirements.in @@ -22,6 +22,7 @@ linkcheckmd symspellpy pyspelling flashtext +titlecase # additional tools pip-tools diff --git a/requirements.txt b/requirements.txt index 1a28ddf51..777f07143 100644 --- a/requirements.txt +++ b/requirements.txt @@ -12,7 +12,7 @@ aiosignal==1.4.0 # via aiohttp annotated-types==0.7.0 # via pydantic -anyio==4.11.0 +anyio==4.12.0 # via httpx attrs==25.4.0 # via aiohttp @@ -22,7 +22,7 @@ babel==2.17.0 # mkdocs-material backrefs==6.1 # via mkdocs-material -beautifulsoup4==4.14.2 +beautifulsoup4==4.14.3 # via pyspelling bracex==2.6 # via wcmatch @@ -56,9 +56,9 @@ cryptography==46.0.3 # via pyjwt editdistpy==0.1.6 # via symspellpy -essentials==1.1.8 +essentials==1.1.9 # via essentials-openapi -essentials-openapi==1.2.1 +essentials-openapi==1.3.0 # via neoteroi-mkdocs filelock==3.20.0 # via cachecontrol @@ -146,7 +146,7 @@ mkdocs==1.6.1 # mkdocs-simple-hooks # mkdocs-spellcheck # neoteroi-mkdocs -mkdocs-awesome-nav==3.2.0 +mkdocs-awesome-nav==3.3.0 # via -r requirements.in mkdocs-bootstrap4==0.1.5 # via -r requirements.in @@ -174,7 +174,7 @@ mkdocs-section-index==0.3.10 # via -r requirements.in mkdocs-simple-hooks==0.1.5 # via -r requirements.in -mkdocs-spellcheck==1.1.2 +mkdocs-spellcheck==1.2.0 # via -r requirements.in msgpack==1.1.2 # via cachecontrol @@ -184,7 +184,7 @@ multidict==6.7.0 # yarl natsort==8.4.0 # via mkdocs-awesome-nav -neoteroi-mkdocs==1.1.3 +neoteroi-mkdocs==1.2.0 # via -r requirements.in packaging==25.0 # via @@ -209,7 +209,7 @@ proselint==0.16.0 # via -r requirements.in pycparser==2.23 # via cffi -pydantic==2.12.4 +pydantic==2.12.5 # via mkdocs-awesome-nav pydantic-core==2.41.5 # via pydantic @@ -221,7 +221,7 @@ pygments==2.19.2 # rich pyjwt[crypto]==2.10.1 # via pygithub -pymdown-extensions==10.17.1 +pymdown-extensions==10.17.2 # via mkdocs-material pynacl==1.6.1 # via pygithub @@ -229,7 +229,7 @@ 
pyproject-hooks==1.2.0 # via # build # pip-tools -pyspelling==2.12 +pyspelling==2.12.1 # via -r requirements.in python-dateutil==2.9.0.post0 # via @@ -262,8 +262,6 @@ six==1.17.0 # python-dateutil smmap==5.0.2 # via gitdb -sniffio==1.3.1 - # via anyio soupsieve==2.8 # via # beautifulsoup4 @@ -274,6 +272,8 @@ symspellpy==6.9.0 # via -r requirements.in termcolor==3.2.0 # via mkdocs-macros-plugin +titlecase==2.4.1 + # via -r requirements.in typing-extensions==4.15.0 # via # aiosignal From 22cd59549dd7a7dfcf2adea990a8764f15c88f81 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Wed, 3 Dec 2025 13:02:23 +1300 Subject: [PATCH 19/25] Break up accounts, projects etc --- docs/Batch_Computing/Fair_Share.md | 2 +- docs/Getting_Started/.pages.yml | 4 ++- .../Connecting_to_the_Cluster.md | 2 +- .../Accessing_the_HPCs/First_Time_Login.md | 4 +-- .../.pages.yml | 4 +-- .../Allocations_and_Extensions.md | 10 +++--- .../Quarterly_allocation_periods.md | 11 +++---- .../What_is_an_allocation.md | 3 +- ...ount_Profile.md => Creating_an_Account.md} | 6 ++-- ...ccount_Requests_for_non_Tuakiri_Members.md | 4 +-- .../Policy/How_we_review_applications.md | 2 +- .../Policy/Institutional_allocations.md | 2 +- .../Policy/Merit_allocations.md | 2 +- .../Policy/Postgraduate_allocations.md | 2 +- .../Proposal_Development_allocations.md | 2 +- .../Adding_Members_to_your_Project.md} | 4 +-- .../Applying_for_a_New_Project.md} | 6 ++-- .../Applying_to_Join_a_Project.md} | 2 +- .../Logging_in_to_my-nesi-org-nz.md | 4 +-- ..._renew_an_allocation_via_my-nesi-org-nz.md | 2 +- .../The_NeSI_Project_Request_Form.md | 2 +- .../Software/Available_Applications/MATLAB.md | 2 +- ...Job_Scaling_Ascertaining_job_dimensions.md | 2 +- .../Data_Transfer_using_Globus.md | 2 +- .../Globus_Quick_Start_Guide.md | 2 +- docs/redirect_map.yml | 32 +++++++++++-------- 26 files changed, 60 insertions(+), 60 deletions(-) rename docs/Getting_Started/{Accounts-Projects_and_Allocations => Allocations}/.pages.yml (81%) 
rename docs/Getting_Started/{Accounts-Projects_and_Allocations => Allocations}/Allocations_and_Extensions.md (96%) rename docs/Getting_Started/{Accounts-Projects_and_Allocations => Allocations}/Quarterly_allocation_periods.md (93%) rename docs/Getting_Started/{Accounts-Projects_and_Allocations => Allocations}/What_is_an_allocation.md (98%) rename docs/Getting_Started/{Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md => Creating_an_Account.md} (79%) rename docs/Getting_Started/{Accounts-Projects_and_Allocations/Adding_members_to_your_project.md => Projects/Adding_Members_to_your_Project.md} (86%) rename docs/Getting_Started/{Accounts-Projects_and_Allocations/Applying_for_a_new_project.md => Projects/Applying_for_a_New_Project.md} (94%) rename docs/Getting_Started/{Accounts-Projects_and_Allocations/Applying_to_join_a_project.md => Projects/Applying_to_Join_a_Project.md} (91%) diff --git a/docs/Batch_Computing/Fair_Share.md b/docs/Batch_Computing/Fair_Share.md index 1430b05c1..39c04f38b 100644 --- a/docs/Batch_Computing/Fair_Share.md +++ b/docs/Batch_Computing/Fair_Share.md @@ -18,7 +18,7 @@ Your *Fair Share score* is a number between **0** and **1**. Projects with a **larger** Fair Share score receive a **higher priority** in the queue. -A project is given an [allocation of compute units](../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md) +A project is given an [allocation of compute units](../Getting_Started/Allocations/What_is_an_allocation.md) over a given **period**. An institution also has a percentage **Fair Share entitlement** of each machine's deliverable capacity over that same period. 
diff --git a/docs/Getting_Started/.pages.yml b/docs/Getting_Started/.pages.yml index 60b008d44..0b63d7169 100644 --- a/docs/Getting_Started/.pages.yml +++ b/docs/Getting_Started/.pages.yml @@ -1,6 +1,8 @@ --- nav: - - Accounts, Projects and Allocations : Accounts-Projects_and_Allocations + - Creating_an_Account.md + - Projects + - Allocations - Accessing_the_HPCs - Getting_Help - Cheat_Sheets diff --git a/docs/Getting_Started/Accessing_the_HPCs/Connecting_to_the_Cluster.md b/docs/Getting_Started/Accessing_the_HPCs/Connecting_to_the_Cluster.md index b13320395..f373b7fee 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/Connecting_to_the_Cluster.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Connecting_to_the_Cluster.md @@ -8,7 +8,7 @@ tags: --- !!! prerequisite - - Have an [active account and project](../Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md). + - Have an [active account and project](../Creating_an_Account.md). Before you can start submitting work you will need some way of connecting to the NeSI clusters. diff --git a/docs/Getting_Started/Accessing_the_HPCs/First_Time_Login.md b/docs/Getting_Started/Accessing_the_HPCs/First_Time_Login.md index 5c1b7ff52..2d2be4f14 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/First_Time_Login.md +++ b/docs/Getting_Started/Accessing_the_HPCs/First_Time_Login.md @@ -11,8 +11,8 @@ tags: --- !!! prerequisite - - Have an [account](../Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md). - - Be a member of an [active project](../Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md). + - Have an [account](../Creating_an_Account.md). - Be a member of an [active project](../Creating_an_Account.md). - Have a device with an authentication app. !!! 
note diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/.pages.yml b/docs/Getting_Started/Allocations/.pages.yml similarity index 81% rename from docs/Getting_Started/Accounts-Projects_and_Allocations/.pages.yml rename to docs/Getting_Started/Allocations/.pages.yml index f9c63598b..d018988ec 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/.pages.yml +++ b/docs/Getting_Started/Allocations/.pages.yml @@ -1,8 +1,8 @@ --- nav: - - Creating_an_Account_Profile.md + - What_is_an_allocation.md + - Allocations_and_Extensions.md - Applying_for_a_new_project.md - Applying_to_join_a_project.md - - What_is_an_allocation.md - Quarterly_allocation_periods.md - "*" diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Allocations_and_Extensions.md b/docs/Getting_Started/Allocations/Allocations_and_Extensions.md similarity index 96% rename from docs/Getting_Started/Accounts-Projects_and_Allocations/Allocations_and_Extensions.md rename to docs/Getting_Started/Allocations/Allocations_and_Extensions.md index 3d02afa79..2408dda59 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Allocations_and_Extensions.md +++ b/docs/Getting_Started/Allocations/Allocations_and_Extensions.md @@ -1,11 +1,9 @@ --- created_at: '2018-05-18T02:34:03Z' -tags: [] -title: Project Extensions and New Allocations on Existing Projects -vote_count: 1 -vote_sum: 1 -zendesk_article_id: 360000202196 -zendesk_section_id: 360000196195 +tags: +- projects +- allocations +title: Allocations & Extensions --- NeSI recognises that research programmes often continue over several diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Quarterly_allocation_periods.md b/docs/Getting_Started/Allocations/Quarterly_allocation_periods.md similarity index 93% rename from docs/Getting_Started/Accounts-Projects_and_Allocations/Quarterly_allocation_periods.md rename to docs/Getting_Started/Allocations/Quarterly_allocation_periods.md index 
d24eb366b..c744f4316 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Quarterly_allocation_periods.md +++ b/docs/Getting_Started/Allocations/Quarterly_allocation_periods.md @@ -1,11 +1,8 @@ --- created_at: '2021-09-14T03:20:56Z' -tags: [] -title: Quarterly allocation periods -vote_count: 0 -vote_sum: 0 -zendesk_article_id: 4406437522703 -zendesk_section_id: 360000196195 +tags: +- allocations +title: Quarterly Allocation Periods --- Applications for new allocations on existing projects are accepted and @@ -54,4 +51,4 @@ month. wait for the following call before your request is considered. If you have questions about the review cycles or other steps involved -with getting access to NeSI, {% include "partials/support_request.html" %} \ No newline at end of file +with getting access to NeSI, {% include "partials/support_request.html" %} diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md b/docs/Getting_Started/Allocations/What_is_an_allocation.md similarity index 98% rename from docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md rename to docs/Getting_Started/Allocations/What_is_an_allocation.md index 254ed3359..3bd8535b1 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md +++ b/docs/Getting_Started/Allocations/What_is_an_allocation.md @@ -1,10 +1,9 @@ --- created_at: '2020-02-25T02:35:13Z' tags: - - Allocation - Allocations - Compute -title: What is an allocation? +title: What is an Allocation? 
--- Because NeSI's resources are limited, we manage access to our resources diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md b/docs/Getting_Started/Creating_an_Account.md similarity index 79% rename from docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md rename to docs/Getting_Started/Creating_an_Account.md index bdb39ee10..0c38ff4e1 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md +++ b/docs/Getting_Started/Creating_an_Account.md @@ -10,7 +10,7 @@ tags: !!! prerequisite Either an active login at a Tuakiri member institution, or - [a Tuakiri Virtual Home account in respect of your current place of work or study](../Policy/Account_Requests_for_non_Tuakiri_Members.md). + [a Tuakiri Virtual Home account in respect of your current place of work or study](./Policy/Account_Requests_for_non_Tuakiri_Members.md). 1. Access [my.nesi.org.nz](https://my.nesi.org.nz) via your browser and log in with either your institutional credentials, or your Tuakiri @@ -24,6 +24,6 @@ tags: our records. !!! prerequisite "What next?" - - [Apply for Access](./Applying_for_a_new_project.md), + - [Apply for Access](./Projects/Applying_for_a_New_Project.md), either submit an application for a new project or - [join an existing project](./Applying_to_join_a_project.md). + [join an existing project](./Projects/Applying_to_Join_a_Project.md). diff --git a/docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md b/docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md index a841c3f29..a1b99d8dc 100644 --- a/docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md +++ b/docs/Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md @@ -44,5 +44,5 @@ my.nesi.org.nz. !!! note "What next?" 
- [Project Eligibility](Allocation_classes.md) - - [Applying for a new project.](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md) - - [Applying to join an existing project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md). + - [Applying for a new project.](../Projects/Applying_for_a_New_Project.md) + - [Applying to join an existing project](../Projects/Applying_to_Join_a_Project.md). diff --git a/docs/Getting_Started/Policy/How_we_review_applications.md b/docs/Getting_Started/Policy/How_we_review_applications.md index d64b70e57..a943226bf 100644 --- a/docs/Getting_Started/Policy/How_we_review_applications.md +++ b/docs/Getting_Started/Policy/How_we_review_applications.md @@ -43,7 +43,7 @@ new projects is as follows: research teams. 5. **Decision and notification:** If we approve an initial allocation for your project, we will typically award the project an - [allocation of compute units and also an online storage allocation](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md), + [allocation of compute units and also an online storage allocation](../Allocations/What_is_an_allocation.md), from one of [our allocation classes](Allocation_classes.md). In an case, we will send you an email telling you about our decision. diff --git a/docs/Getting_Started/Policy/Institutional_allocations.md b/docs/Getting_Started/Policy/Institutional_allocations.md index 17b4f577b..80fee228f 100644 --- a/docs/Getting_Started/Policy/Institutional_allocations.md +++ b/docs/Getting_Started/Policy/Institutional_allocations.md @@ -33,4 +33,4 @@ allocation. Read more about [how we review applications](How_we_review_applications.md). To learn more about NeSI Projects or to apply for a new project, please -read our article [Applying for a NeSI Project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md). 
+read our article [Applying for a NeSI Project](../Projects/Applying_for_a_New_Project.md). diff --git a/docs/Getting_Started/Policy/Merit_allocations.md b/docs/Getting_Started/Policy/Merit_allocations.md index ded727cff..f8f7304e1 100644 --- a/docs/Getting_Started/Policy/Merit_allocations.md +++ b/docs/Getting_Started/Policy/Merit_allocations.md @@ -55,4 +55,4 @@ Read more about [how we review applications](How_we_review_applications.md). To learn more about REANNZ HPC Projects or to apply for a new project, please -read our article [Applying for a REANNZ HPC Project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md). +read our article [Applying for a REANNZ HPC Project](../Projects/Applying_for_a_New_Project.md). diff --git a/docs/Getting_Started/Policy/Postgraduate_allocations.md b/docs/Getting_Started/Policy/Postgraduate_allocations.md index 0c63508b1..fea155ae8 100644 --- a/docs/Getting_Started/Policy/Postgraduate_allocations.md +++ b/docs/Getting_Started/Policy/Postgraduate_allocations.md @@ -40,4 +40,4 @@ Read more about [how we review applications](How_we_review_applications.md). To learn more about NeSI Projects, and to apply please review the -content of the section entitled [Applying for a NeSI Project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md). +content of the section entitled [Applying for a NeSI Project](../Projects/Applying_for_a_New_Project.md). diff --git a/docs/Getting_Started/Policy/Proposal_Development_allocations.md b/docs/Getting_Started/Policy/Proposal_Development_allocations.md index 46406ff72..d9d78db72 100644 --- a/docs/Getting_Started/Policy/Proposal_Development_allocations.md +++ b/docs/Getting_Started/Policy/Proposal_Development_allocations.md @@ -35,4 +35,4 @@ The [How Applications are Reviewed](How_we_review_applications.md) section provides additional important information for applicants. 
To learn more about NeSI Projects, and to apply please review the -content of the section entitled [Applying for a NeSI Project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md). +content of the section entitled [Applying for a NeSI Project](../Projects/Applying_for_a_New_Project.md). diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Adding_members_to_your_project.md b/docs/Getting_Started/Projects/Adding_Members_to_your_Project.md similarity index 86% rename from docs/Getting_Started/Accounts-Projects_and_Allocations/Adding_members_to_your_project.md rename to docs/Getting_Started/Projects/Adding_Members_to_your_Project.md index 9b556e603..81e0a52f4 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Adding_members_to_your_project.md +++ b/docs/Getting_Started/Projects/Adding_Members_to_your_Project.md @@ -9,8 +9,8 @@ description: How to add a new member to your project. --- !!! prerequisite - - Have a [Account profile](./Creating_an_Account_Profile.md). - - Be the **owner** of a [project](./Applying_for_a_new_project.md). + - Have a [Account profile](../Creating_an_Account.md). + - Be the **owner** of a [project](./Applying_for_a_New_Project.md). 1. Log in to [my.nesi.org.nz](https://my.nesi.org.nz/) via your browser. 2. Under **List Projects**, click on the project you want to add members to. diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md b/docs/Getting_Started/Projects/Applying_for_a_New_Project.md similarity index 94% rename from docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md rename to docs/Getting_Started/Projects/Applying_for_a_New_Project.md index f8de64064..932697a39 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md +++ b/docs/Getting_Started/Projects/Applying_for_a_New_Project.md @@ -7,7 +7,7 @@ tags: --- !!! 
prerequisite - - Have a [Account profile](./Creating_an_Account_Profile.md). + - Have a [Account profile](../Creating_an_Account.md). - NIWA researchers only: read and follow the [NIWA internal documentation for gaining access to the HPCs](https://one.niwa.co.nz/display/ONE/High+Performance+Computing+Facility+Services) (this link is only valid from within the NIWA network or VPN). @@ -21,7 +21,7 @@ tags: - Become familiar with foundational HPC skills, for example by attending a NeSI introductory workshop, one of our [weekly introductory sessions (or watching the - recording)](../../Getting_Started/Getting_Help/Introductory_Material.md), + recording)](../Getting_Help/Introductory_Material.md), or having one or more of your project team members do so. - Review our [allocation classes](../Policy/Allocation_classes.md). If you don't think you currently qualify for any class other than @@ -80,4 +80,4 @@ is relevant. [reviewed](../Policy/How_we_review_applications.md), after which you will be informed of the outcome. - We may contact you if further details are required. - - When your project is approved you will be able to [login for the first time](../../Getting_Started/Accessing_the_HPCs/First_Time_Login.md). + - When your project is approved you will be able to [login for the first time](../Accessing_the_HPCs/First_Time_Login.md). diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md b/docs/Getting_Started/Projects/Applying_to_Join_a_Project.md similarity index 91% rename from docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md rename to docs/Getting_Started/Projects/Applying_to_Join_a_Project.md index 53f4ec4fb..950008182 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md +++ b/docs/Getting_Started/Projects/Applying_to_Join_a_Project.md @@ -8,7 +8,7 @@ tags: --- !!! prerequisite - - You must have an [account](./Creating_an_Account_Profile.md). 
+ - You must have an [account](../Creating_an_Account.md). ## How to join a project diff --git a/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md b/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md index 7042f7ef3..fefd33b94 100644 --- a/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md +++ b/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md @@ -12,7 +12,7 @@ zendesk_section_id: 360001059296 We allow students, academics, alumni and researchers to securely login and create a [NeSI account -profile](../Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md) +profile](../Creating_an_Account.md) using the credentials granted by their home organisation via Tuakiri. ### Tuakiri - federated identity and access management @@ -24,7 +24,7 @@ but many other institutions, including private sector organisations and most central and local government agencies, are not. See also [Creating a NeSI Account -Profile](../Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md) +Profile](../Creating_an_Account.md) ### Support for users outside the Tuakiri federation diff --git a/docs/Getting_Started/my-nesi-org-nz/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.md b/docs/Getting_Started/my-nesi-org-nz/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.md index 3714cb8b0..b35a4539f 100644 --- a/docs/Getting_Started/my-nesi-org-nz/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.md +++ b/docs/Getting_Started/my-nesi-org-nz/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.md @@ -70,5 +70,5 @@ Please be aware that: - An allocation from an institution's entitlement is subject to approval by that institution. -See [Project Extensions and New Allocations on Existing Projects](../Accounts-Projects_and_Allocations/Allocations_and_Extensions.md) +See [Project Extensions and New Allocations on Existing Projects](../Allocations/Allocations_and_Extensions.md) for more details. 
diff --git a/docs/Getting_Started/my-nesi-org-nz/The_NeSI_Project_Request_Form.md b/docs/Getting_Started/my-nesi-org-nz/The_NeSI_Project_Request_Form.md index 0c63fb7c3..44f11d7fa 100644 --- a/docs/Getting_Started/my-nesi-org-nz/The_NeSI_Project_Request_Form.md +++ b/docs/Getting_Started/my-nesi-org-nz/The_NeSI_Project_Request_Form.md @@ -8,7 +8,7 @@ zendesk_article_id: 360003648716 zendesk_section_id: 360001059296 --- -See [Applying for a NeSI project](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md)  +See [Applying for a NeSI project](../Projects/Applying_for_a_New_Project.md)  for how to access the form. ## Preparing a request to use NeSI resources diff --git a/docs/Software/Available_Applications/MATLAB.md b/docs/Software/Available_Applications/MATLAB.md index 6a09f25ce..8895ed87d 100644 --- a/docs/Software/Available_Applications/MATLAB.md +++ b/docs/Software/Available_Applications/MATLAB.md @@ -186,7 +186,7 @@ support page. !!! tip "GPU cost" A GPU device-hour costs more than a core-hour, depending on the type - of GPU. You can find a comparison table in our [What is an allocation?](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md) + of GPU. You can find a comparison table in our [What is an allocation?](../../Getting_Started/Allocations/What_is_an_allocation.md) support page. ### GPU Example diff --git a/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md b/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md index 07ffe87ab..9852b7fdd 100644 --- a/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md +++ b/docs/Software/Profiling_and_Debugging/Job_Scaling_Ascertaining_job_dimensions.md @@ -43,7 +43,7 @@ not. | Memory | The job may wait in the queue for longer. Your fair share score will fall more than necessary. | Your job will fail, probably with an 'OUT OF MEMORY' error, segmentation fault or bus error. 
This may not happen immediately. | | Wall time | The job may wait in the queue for longer than necessary | The job will run out of time and get killed. | -***See [What is an allocation?](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md) for more details on how each resource effects your compute usage.*** +***See [What is an allocation?](../../Getting_Started/Allocations/What_is_an_allocation.md) for more details on how each resource effects your compute usage.*** It is therefore important to try and make your jobs resource requests reasonably accurate. In this article we will discuss how you can scale diff --git a/docs/Storage/Data_Transfer_Services/Data_Transfer_using_Globus.md b/docs/Storage/Data_Transfer_Services/Data_Transfer_using_Globus.md index 7f7f6780e..249f36052 100644 --- a/docs/Storage/Data_Transfer_Services/Data_Transfer_using_Globus.md +++ b/docs/Storage/Data_Transfer_Services/Data_Transfer_using_Globus.md @@ -23,7 +23,7 @@ To use Globus to transfer data to/from NeSI platforms, you need: 1. A Globus account (see [Initial Globus Sign-Up and Globus ID](../../Storage/Data_Transfer_Services/Initial_Globus_Sign_Up-and_your_Globus_Identities.md)) 2. An active NeSI account (see - [Creating a NeSI Account](../../Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md)) + [Creating a NeSI Account](../../Getting_Started/Creating_an_Account.md)) 3. Access privileges to the Globus endpoint/collection you plan on transferring data from or to. This endpoint/collection could be a personal one on your workstation, or it could be managed diff --git a/docs/Storage/Data_Transfer_Services/Globus_Quick_Start_Guide.md b/docs/Storage/Data_Transfer_Services/Globus_Quick_Start_Guide.md index 810bd5c5b..0b17cf951 100644 --- a/docs/Storage/Data_Transfer_Services/Globus_Quick_Start_Guide.md +++ b/docs/Storage/Data_Transfer_Services/Globus_Quick_Start_Guide.md @@ -11,7 +11,7 @@ between two Globus Data Transfer Nodes (DTNs). 
To use Globus to transfer data to or from NeSI, you need: 1. An active NeSI account (see - [Creating a NeSI Account](../../Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md)) + [Creating a NeSI Account](../../Getting_Started/Creating_an_Account.md)) 2. A Globus account (see [Initial Globus Sign-Up and Globus ID](../../Storage/Data_Transfer_Services/Initial_Globus_Sign_Up-and_your_Globus_Identities.md)) 3. Access to Globus DTNs or endpoints diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml index 0e340d8ef..b9bfe4ee9 100644 --- a/docs/redirect_map.yml +++ b/docs/redirect_map.yml @@ -150,19 +150,23 @@ Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_ Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md : Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC.md Scientific_Computing/Training/Webinars.md : Getting_Started/Getting_Help/Webinars.md Scientific_Computing/Training/Workshops.md : Getting_Started/Getting_Help/Workshops.md -Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project: Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project -General/NeSI_Policies/Proposal_Development_allocations -Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project -General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members -General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members -Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project -General/Announcements/HPC3/ -Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project/ -General/Announcements/HPC3/ - - - - - +Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project: Getting_Started/Projects/Applying_for_a_new_project +General/NeSI_Policies/Proposal_Development_allocations.md : General/Policy/Proposal_Development_allocations.md 
+Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md: Getting_Started/Projects/Applying_to_join_a_project.md +General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members: Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md +General/Announcements/HPC3/: Getting_Started/Creating_an_Account.md Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md : Service_Subscriptions/Contracts_&_Billing/Billing_process.md Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md : Service_Subscriptions/Contracts_&_Billing/Types_of_contracts.md +Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md : Getting_Started/Creating_an_Account.md +Getting_Started/Accounts-Projects_and_Allocations/Adding_members_to_your_project.md : Getting_Started/Projects/Adding_members_to_your_project.md +Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md : Getting_Started/Projects/Applying_for_a_new_project.md +Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md : Getting_Started/Projects/Applying_to_join_a_project.md +Getting_Started/Projects/Adding_members_to_your_project.md : Getting_Started/Projects/Adding_Members_to_your_Project.md +Getting_Started/Projects/Applying_to_join_a_project.md : Getting_Started/Projects/Applying_to_Join_a_Project.md +Getting_Started/Projects/Applying_for_a_new_project.md : Getting_Started/Projects/Applying_for_a_New_Project.md +Getting_Started/Accounts-Projects_and_Allocations/Allocations_and_Extensions.md : Getting_Started/Allocations/Allocations_and_Extensions.md +Getting_Started/Accounts-Projects_and_Allocations/Quarterly_allocation_periods.md : Getting_Started/Allocations/Quarterly_allocation_periods.md +Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md : Getting_Started/Allocations/What_is_an_allocation.md +Getting_Started/Allocations/Allocations_and_Extensions.md 
: Getting_Started/Allocations/Allocations_&_Extensions.md +Getting_Started/Allocations/Allocations_&_Extensions.md : Getting_Started/Allocations/Allocations_and_Extensions.md +Getting_Started/Creating_an_Account_Profile.md : Getting_Started/Creating_an_Account.md From 84763134b617f42c7c1120aca4ae1fdc054afdc6 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Wed, 3 Dec 2025 14:10:16 +1300 Subject: [PATCH 20/25] Make fixlink fuzzy --- fixlinks.py | 24 +++++++++++++++++++----- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/fixlinks.py b/fixlinks.py index db6490afe..685a351f6 100644 --- a/fixlinks.py +++ b/fixlinks.py @@ -7,7 +7,7 @@ LINK_RE = re.compile(r'\[([^\]]+)\]\(([^)]+)\)') def all_md_files(root): - return [p for p in root.rglob("*.md")] + return [p for p in root.rglob("**/*.md")] def resolve_path(base_md, target): # separate anchor @@ -22,9 +22,22 @@ def resolve_path(base_md, target): return True return False -def find_candidates(basename, root): +def find_exact_candidates(basename, root): return [p for p in root.rglob("*.md") if p.name == basename] +def find_similar_candidates(basename, root): + return [p for p in root.rglob("*.md") if jaccard_similarity(p.name, basename) > 0.5] + + +def jaccard_similarity(s1, s2): + set1 = set(s1.split(".")[0].lower().split("_"))  # Split into words + set2 = set(s2.split(".")[0].lower().split("_")) + intersection = len(set1.intersection(set2)) + union = len(set1.union(set2)) + if (intersection / union) > 0: + print(f"{s2}:{s1} {intersection / union}") + return intersection / union + def main(dry_run): md_files = all_md_files(MD_ROOT) fixes = [] @@ -41,10 +54,11 @@ def main(dry_run): base = os.path.basename(target.split('#',1)[0]) if not base: continue - candidates = find_candidates(base, MD_ROOT) + candidates = find_exact_candidates(base, MD_ROOT) + if not candidates: + candidates = find_similar_candidates(base, MD_ROOT) if len(candidates) == 1: - cand = candidates[0] - rel = os.path.relpath(cand, 
md.parent) + rel = os.path.relpath(candidates[0], md.parent) # preserve anchor anchor = '' if '#' in target: From 5413c4cc4573a20fa72ed0c689401dabf467e190 Mon Sep 17 00:00:00 2001 From: "callumnmw@gmail.com" Date: Wed, 3 Dec 2025 14:11:49 +1300 Subject: [PATCH 21/25] fix links --- docs/Getting_Started/.pages.yml | 2 +- .../Accessing_the_HPCs/Git_Bash_Windows.md | 4 ++-- .../MobaXterm_Setup_Windows.md | 2 +- .../Standard_Terminal_Setup.md | 2 +- .../WinSCP-PuTTY_Setup_Windows.md | 2 +- docs/Getting_Started/Allocations/.pages.yml | 3 --- .../FAQs/Mahuika_HPC3_Differences.md | 2 +- docs/Getting_Started/Projects/.pages.yml | 4 ++++ .../OnDemand/Apps/.pages.yml | 3 --- docs/Service_Subscriptions/.pages.yml | 2 +- .../.pages.yml | 0 .../Billing_process.md | 0 .../Types_of_contracts.md | 0 .../Thread_Placement_and_Thread_Affinity.md | 2 +- docs/Software/Software_Version_Management.md | 2 +- docs/Storage/Long_Term_Storage/.pages.yml | 1 - .../Moving_files_to_and_from_the_cluster.md | 2 +- docs/redirect_map.yml | 16 +++++++--------- 18 files changed, 22 insertions(+), 27 deletions(-) create mode 100644 docs/Getting_Started/Projects/.pages.yml rename docs/Service_Subscriptions/{Contracts_&_Billing => Contracts_and_Billing}/.pages.yml (100%) rename docs/Service_Subscriptions/{Contracts_&_Billing => Contracts_and_Billing}/Billing_process.md (100%) rename docs/Service_Subscriptions/{Contracts_&_Billing => Contracts_and_Billing}/Types_of_contracts.md (100%) diff --git a/docs/Getting_Started/.pages.yml b/docs/Getting_Started/.pages.yml index 0b63d7169..f8792971b 100644 --- a/docs/Getting_Started/.pages.yml +++ b/docs/Getting_Started/.pages.yml @@ -1,7 +1,7 @@ --- nav: - Creating_an_Account.md - - Projects and + - Projects - Allocations - Accessing_the_HPCs - Getting_Help diff --git a/docs/Getting_Started/Accessing_the_HPCs/Git_Bash_Windows.md b/docs/Getting_Started/Accessing_the_HPCs/Git_Bash_Windows.md index 6d518f6fb..167320218 100644 --- 
a/docs/Getting_Started/Accessing_the_HPCs/Git_Bash_Windows.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Git_Bash_Windows.md @@ -9,8 +9,8 @@ title: Git Bash (Windows) --- !!! prerequisite - - Have a [NeSI account.](../../Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md). - - Be a member of an [active project.](../../Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md) + - Have a [NeSI account.](../Creating_an_Account.md). + - Be a member of an [active project.](../Projects/Applying_to_Join_a_Project.md) ## First time setup diff --git a/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md b/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md index c2e98548e..5d0485597 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md +++ b/docs/Getting_Started/Accessing_the_HPCs/MobaXterm_Setup_Windows.md @@ -12,7 +12,7 @@ description: How to set up cluster access using MobaXterm It is recommended to use [OnDemand](https://ondemand.nesi.org.nz/) for file browsing, up and downloading and terminal access if you would normally have used MobaXterm. !!! prerequisite - - Have an [active account and project.](../../Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md) + - Have an [active account and project.](../Creating_an_Account.md) - [Download MobaXterm](https://mobaxterm.mobatek.net/download-home-edition.html) - Followed the steps in [Standard Terminal](Standard_Terminal_Setup.md). diff --git a/docs/Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md b/docs/Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md index e76537e87..bc6ee448e 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Standard_Terminal_Setup.md @@ -7,7 +7,7 @@ description: How to setup your ssh config file in order to connect to the HPC cl --- !!! 
prerequisite - - Have an [active account and project.](../../Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md) + - Have an [active account and project.](../Creating_an_Account.md) - Have one of: - Built in Linux/Mac terminal - [Windows Subsystem for Linux](Windows_Subsystem_for_Linux_WSL.md) diff --git a/docs/Getting_Started/Accessing_the_HPCs/WinSCP-PuTTY_Setup_Windows.md b/docs/Getting_Started/Accessing_the_HPCs/WinSCP-PuTTY_Setup_Windows.md index 041bc8695..f09671ecb 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/WinSCP-PuTTY_Setup_Windows.md +++ b/docs/Getting_Started/Accessing_the_HPCs/WinSCP-PuTTY_Setup_Windows.md @@ -8,7 +8,7 @@ title: WinSCP/PuTTY Setup (Windows) --- !!! prerequisite - - Have an [active account and project.](../../Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md) + - Have an [active account and project.](../Creating_an_Account.md) - Be using the Windows operating system. WinSCP is an SCP client for windows implementing the SSH protocol from diff --git a/docs/Getting_Started/Allocations/.pages.yml b/docs/Getting_Started/Allocations/.pages.yml index d018988ec..d21d35fba 100644 --- a/docs/Getting_Started/Allocations/.pages.yml +++ b/docs/Getting_Started/Allocations/.pages.yml @@ -2,7 +2,4 @@ nav: - What_is_an_allocation.md - Allocations_and_Extensions.md - - Applying_for_a_new_project.md - - Applying_to_join_a_project.md - - Quarterly_allocation_periods.md - "*" diff --git a/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md b/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md index ef65896a7..70b6fc620 100644 --- a/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md +++ b/docs/Getting_Started/FAQs/Mahuika_HPC3_Differences.md @@ -47,7 +47,7 @@ There are snapshots for short-term recovery of deleted files, in `/home/.snapsho ## Access via Web browser -[OnDemand](../../Interactive_Computing/Ondemand/index.md) has replaced JupyterHub. 
+[OnDemand](../../Interactive_Computing/OnDemand/index.md) has replaced JupyterHub. OnDemand is more flexible and can deliver more GUI based apps. ## Software diff --git a/docs/Getting_Started/Projects/.pages.yml b/docs/Getting_Started/Projects/.pages.yml new file mode 100644 index 000000000..0bff99eea --- /dev/null +++ b/docs/Getting_Started/Projects/.pages.yml @@ -0,0 +1,4 @@ +--- +nav: + - Applying_for_a_New_Project.md + - "*" diff --git a/docs/Interactive_Computing/OnDemand/Apps/.pages.yml b/docs/Interactive_Computing/OnDemand/Apps/.pages.yml index 18a3265ac..62129ab64 100644 --- a/docs/Interactive_Computing/OnDemand/Apps/.pages.yml +++ b/docs/Interactive_Computing/OnDemand/Apps/.pages.yml @@ -1,8 +1,5 @@ --- nav: - - JupyterLab: JupyterLab - - RStudio: RStudio.md - - MATLAB: MATLAB.md - VS Code: VSCode.md - Virtual desktop: virtual_desktop.md - "*" diff --git a/docs/Service_Subscriptions/.pages.yml b/docs/Service_Subscriptions/.pages.yml index cbe183922..74f7dd255 100644 --- a/docs/Service_Subscriptions/.pages.yml +++ b/docs/Service_Subscriptions/.pages.yml @@ -1,5 +1,5 @@ --- nav: - - Contracts_and_billing_processes + - Contracts & Billing: Contracts_and_Billing - Service_Governance - "*" diff --git a/docs/Service_Subscriptions/Contracts_&_Billing/.pages.yml b/docs/Service_Subscriptions/Contracts_and_Billing/.pages.yml similarity index 100% rename from docs/Service_Subscriptions/Contracts_&_Billing/.pages.yml rename to docs/Service_Subscriptions/Contracts_and_Billing/.pages.yml diff --git a/docs/Service_Subscriptions/Contracts_&_Billing/Billing_process.md b/docs/Service_Subscriptions/Contracts_and_Billing/Billing_process.md similarity index 100% rename from docs/Service_Subscriptions/Contracts_&_Billing/Billing_process.md rename to docs/Service_Subscriptions/Contracts_and_Billing/Billing_process.md diff --git a/docs/Service_Subscriptions/Contracts_&_Billing/Types_of_contracts.md b/docs/Service_Subscriptions/Contracts_and_Billing/Types_of_contracts.md 
similarity index 100% rename from docs/Service_Subscriptions/Contracts_&_Billing/Types_of_contracts.md rename to docs/Service_Subscriptions/Contracts_and_Billing/Types_of_contracts.md diff --git a/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md b/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md index 4f3bd7146..f4a45c0c5 100644 --- a/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md +++ b/docs/Software/Parallel_Computing/Thread_Placement_and_Thread_Affinity.md @@ -48,7 +48,7 @@ cores (our current HPCs have 18 to 20 cores). Each core can also be further divided into two logical cores (or hyperthreads, as mentioned before). -![NodeSocketCore.png](../assets/images/Thread_Placement_and_Thread_Affinity.png) +![NodeSocketCore.png](../../assets/images/Thread_Placement_and_Thread_Affinity.png) It is very important to note the following: diff --git a/docs/Software/Software_Version_Management.md b/docs/Software/Software_Version_Management.md index f08cac2c0..b8aca28b8 100644 --- a/docs/Software/Software_Version_Management.md +++ b/docs/Software/Software_Version_Management.md @@ -12,7 +12,7 @@ zendesk_section_id: 360000040056 Much of the software installed on the NeSI cluster have multiple versions available as shown on the -[supported applications page](index.md) +[supported applications page](./index.md) or by using the `module avail` or `module spider` commands. 
 If only the application name is given a default version will be chosen,
diff --git a/docs/Storage/Long_Term_Storage/.pages.yml b/docs/Storage/Long_Term_Storage/.pages.yml
index 57308d3dc..1b9a2fb8e 100644
--- a/docs/Storage/Long_Term_Storage/.pages.yml
+++ b/docs/Storage/Long_Term_Storage/.pages.yml
@@ -3,6 +3,5 @@ nav:
   - Configuring_s3cmd.md
   - Freezer_Guide.md
   - Other_Useful_Commands.md
-  - Troubleshooting.md
   - "*"
   - Release Notes freezer.nesi.org.nz: Release_Notes_freezer-nesi-org-nz
diff --git a/docs/Storage/Moving_files_to_and_from_the_cluster.md b/docs/Storage/Moving_files_to_and_from_the_cluster.md
index c220445ba..e91edbbb2 100644
--- a/docs/Storage/Moving_files_to_and_from_the_cluster.md
+++ b/docs/Storage/Moving_files_to_and_from_the_cluster.md
@@ -8,7 +8,7 @@ tags:
 ---
 
 !!! prerequisite
-    Have an [active account and project.](../Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md)
+    Have an [active account and project.](../Getting_Started/Creating_an_Account.md)
 
 Find more information on [our filesystem](./File_Systems_and_Quotas/Filesystems_and_Quotas.md).
diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml
index b9bfe4ee9..307d82a4d 100644
--- a/docs/redirect_map.yml
+++ b/docs/redirect_map.yml
@@ -150,23 +150,21 @@ Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_
 Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md : Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC.md
 Scientific_Computing/Training/Webinars.md : Getting_Started/Getting_Help/Webinars.md
 Scientific_Computing/Training/Workshops.md : Getting_Started/Getting_Help/Workshops.md
-Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project: Getting_Started/Projects/Applying_for_a_new_project
-General/NeSI_Policies/Proposal_Development_allocations.md : General/Policy/Proposal_Development_allocations.md
-Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md: Getting_Started/Projects/Applying_to_join_a_project.md
-General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members: Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md
-General/Announcements/HPC3/: Getting_Started/Creating_an_Account.md
+Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project.md: Getting_Started/Projects/Applying_for_a_New_Project
+General/NeSI_Policies/Proposal_Development_allocations.md : Getting_Started/Policy/Proposal_Development_allocations.md
+Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md: Getting_Started/Projects/Applying_to_Join_a_Project.md
+General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members.md: Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md
+General/Announcements/HPC3.md: Getting_Started/Creating_an_Account.md
 Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md : Service_Subscriptions/Contracts_&_Billing/Billing_process.md
 Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md : Service_Subscriptions/Contracts_&_Billing/Types_of_contracts.md
 Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md : Getting_Started/Creating_an_Account.md
 Getting_Started/Accounts-Projects_and_Allocations/Adding_members_to_your_project.md : Getting_Started/Projects/Adding_members_to_your_project.md
-Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md : Getting_Started/Projects/Applying_for_a_new_project.md
-Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md : Getting_Started/Projects/Applying_to_join_a_project.md
+Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md : Getting_Started/Projects/Applying_for_a_New_Project.md
+Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md : Getting_Started/Projects/Applying_to_Join_a_Project.md
 Getting_Started/Projects/Adding_members_to_your_project.md : Getting_Started/Projects/Adding_Members_to_your_Project.md
 Getting_Started/Projects/Applying_to_join_a_project.md : Getting_Started/Projects/Applying_to_Join_a_Project.md
 Getting_Started/Projects/Applying_for_a_new_project.md : Getting_Started/Projects/Applying_for_a_New_Project.md
 Getting_Started/Accounts-Projects_and_Allocations/Allocations_and_Extensions.md : Getting_Started/Allocations/Allocations_and_Extensions.md
 Getting_Started/Accounts-Projects_and_Allocations/Quarterly_allocation_periods.md : Getting_Started/Allocations/Quarterly_allocation_periods.md
 Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md : Getting_Started/Allocations/What_is_an_allocation.md
-Getting_Started/Allocations/Allocations_and_Extensions.md : Getting_Started/Allocations/Allocations_&_Extensions.md
-Getting_Started/Allocations/Allocations_&_Extensions.md : Getting_Started/Allocations/Allocations_and_Extensions.md
 Getting_Started/Creating_an_Account_Profile.md : Getting_Started/Creating_an_Account.md

From b8dfc1dc378da7af82446240570bcee3a3f99b3f Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Wed, 3 Dec 2025 14:15:02 +1300
Subject: [PATCH 22/25] fix map

---
 docs/Software/Installing_Applications_Yourself.md | 2 +-
 docs/redirect_map.yml                             | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/Software/Installing_Applications_Yourself.md b/docs/Software/Installing_Applications_Yourself.md
index af351d939..085ebdb0b 100644
--- a/docs/Software/Installing_Applications_Yourself.md
+++ b/docs/Software/Installing_Applications_Yourself.md
@@ -8,7 +8,7 @@ tags:
 Before installing your own applications, first check;
 
 - The software you want is not already installed. `module spider ` can be used to search software,
-or see [Supported Applications](index.md).
+or see [Supported Applications](Available_Applications/index.md).
 - If you are looking for a new version of existing software, {% include "partials/support_request.html" %} and we will install the new version.
 - If you would like us to install something for you or help you install something yourself {% include "partials/support_request.html" %}. If the software is popular, We may decide to install it centrally, in which case there will be no additional steps for you.
 Otherwise the software will be installed in your project directory, in which case it is your responsibility to maintain.
diff --git a/docs/redirect_map.yml b/docs/redirect_map.yml
index 307d82a4d..a91061939 100644
--- a/docs/redirect_map.yml
+++ b/docs/redirect_map.yml
@@ -150,15 +150,15 @@ Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC_YouTube_
 Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md : Getting_Started/Getting_Help/Introduction_to_computing_on_the_NeSI_HPC.md
 Scientific_Computing/Training/Webinars.md : Getting_Started/Getting_Help/Webinars.md
 Scientific_Computing/Training/Workshops.md : Getting_Started/Getting_Help/Workshops.md
-Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project.md: Getting_Started/Projects/Applying_for_a_New_Project
+Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project.md: Getting_Started/Projects/Applying_for_a_New_Project.md
 General/NeSI_Policies/Proposal_Development_allocations.md : Getting_Started/Policy/Proposal_Development_allocations.md
 Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md: Getting_Started/Projects/Applying_to_Join_a_Project.md
 General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members.md: Getting_Started/Policy/Account_Requests_for_non_Tuakiri_Members.md
 General/Announcements/HPC3.md: Getting_Started/Creating_an_Account.md
-Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md : Service_Subscriptions/Contracts_&_Billing/Billing_process.md
-Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md : Service_Subscriptions/Contracts_&_Billing/Types_of_contracts.md
+Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md : Service_Subscriptions/Contracts_and_Billing/Billing_process.md
+Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md : Service_Subscriptions/Contracts_and_Billing/Types_of_contracts.md
 Getting_Started/Accounts-Projects_and_Allocations/Creating_an_Account_Profile.md : Getting_Started/Creating_an_Account.md
-Getting_Started/Accounts-Projects_and_Allocations/Adding_members_to_your_project.md : Getting_Started/Projects/Adding_members_to_your_project.md
+Getting_Started/Accounts-Projects_and_Allocations/Adding_members_to_your_project.md : Getting_Started/Projects/Adding_members_to_your_Project.md
 Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_project.md : Getting_Started/Projects/Applying_for_a_New_Project.md
 Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_a_project.md : Getting_Started/Projects/Applying_to_Join_a_Project.md
 Getting_Started/Projects/Adding_members_to_your_project.md : Getting_Started/Projects/Adding_Members_to_your_Project.md

From aa6c292e449a35e7c570c2972b9e43d79cb19f01 Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Wed, 3 Dec 2025 14:17:07 +1300
Subject: [PATCH 23/25] fix link

---
 docs/Software/Software_Version_Management.md | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/docs/Software/Software_Version_Management.md b/docs/Software/Software_Version_Management.md
index b8aca28b8..302be2c50 100644
--- a/docs/Software/Software_Version_Management.md
+++ b/docs/Software/Software_Version_Management.md
@@ -4,15 +4,11 @@ tags:
   - software
   - versions
 title: Software Version Management
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 360001045096
-zendesk_section_id: 360000040056
 ---
 
 Much of the software installed on the NeSI cluster
 have multiple versions available as shown on the
-[supported applications page](./index.md)
+[supported applications page](Available_Applications/index.md)
 or by using the `module avail` or `module spider` commands.
 If only the application name is given a default version will be chosen,

From 18fcbd64f54c70fb7ff08c4f6cc81f3b0be1eb19 Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Wed, 3 Dec 2025 14:20:12 +1300
Subject: [PATCH 24/25] temporary bad fix

---
 .github/workflows/checks.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.github/workflows/checks.yml b/.github/workflows/checks.yml
index 09c171110..7be2b25cf 100644
--- a/.github/workflows/checks.yml
+++ b/.github/workflows/checks.yml
@@ -139,6 +139,7 @@ jobs:
     - if: ${{needs.get.outputs.filelist}}
       name: Check markdown meta.
      run: |
+        pip3 install titlecase
        shopt -s globstar extglob
        python3 checks/run_meta_check.py ${{needs.get.outputs.filelist}}
  slurmcheck:

From 4a9154ca8345763f043398d62b9d20dc76383173 Mon Sep 17 00:00:00 2001
From: "callumnmw@gmail.com"
Date: Wed, 3 Dec 2025 14:23:38 +1300
Subject: [PATCH 25/25] Available_Applications

---
 docs/Software/Installing_Applications_Yourself.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/Software/Installing_Applications_Yourself.md b/docs/Software/Installing_Applications_Yourself.md
index 085ebdb0b..3842b6cec 100644
--- a/docs/Software/Installing_Applications_Yourself.md
+++ b/docs/Software/Installing_Applications_Yourself.md
@@ -28,7 +28,7 @@ How to add package to an existing module will vary based on the language in ques
 - [MATLAB](Available_Applications/MATLAB.md#adding-support-packages)
 
 For other languages check if we have additional documentation for it
-in our [application documentation](../Scientific_Computing/Supported_Applications/index.md).
+in our [application documentation](Available_Applications/index.md).
 
 ## Other Applications