Note
- It is possible to run R directly on the login nodes (including the ThinLinc nodes), but this should only be done for short and small jobs; otherwise the login nodes become slow for all users.
- If you want to work interactively with your code or data, you should start an interactive session.
- If you would rather run a script that needs no interactive user input while running, you should start a batch job; see the previous session.
- How do I work interactively on a compute node?
- Show how to reach the calculation nodes on UPPMAX and HPC2N
- Test some commands on the calculation nodes
Compute allocations in this workshop
- Rackham: naiss2024-22-107
- Kebnekaise: hpc2n2024-025
Diagrams: overviews of the UPPMAX and Kebnekaise clusters.
In order to run interactively, you need to have compute nodes allocated to run on, and this is done through the batch system.
Warning
(HPC2N) Do note that this is not real interactivity in the usual sense, as you will have to run your code as an R script instead of starting R and giving commands inside it. The reason is that you are not actually logged in to the compute node; you only see the output of the commands you run.
To run interactively, you need to allocate resources on the cluster first. You can use the command interactive (UPPMAX) or salloc (HPC2N) to allocate resources for interactive use. When the resources are allocated, you need to preface commands with srun in order to run on the allocated nodes instead of on the login node.
- First, you make a request for resources with interactive/salloc, like this:
UPPMAX (interactive)
$ interactive -n <tasks> --time=HHH:MM:SS -A naiss2024-22-107
HPC2N (salloc)
$ salloc -n <tasks> --time=HHH:MM:SS -A hpc2n2024-025
- where <tasks> is the number of tasks (or cores; the default is 1 task per core), time is given as hours:minutes:seconds (maximum 168 hours), and then you give the ID of your project (naiss2024-22-107 on Rackham for this course)
- Your request enters the job queue just like any other job, and interactive/salloc will tell you that it is waiting for the requested resources. When salloc tells you that your job has been allocated resources, you can interactively run programs on those resources with srun. The commands you run with srun will then be executed on the resources your job has been allocated. If you do not preface a command with srun, it is run on the login node!
- You can now run R scripts on the allocated resources directly instead of waiting for your batch job to return a result. This is an advantage if you want to test your R script or perhaps figure out which parameters are best.
Warning
Let us use ThinLinc
- ThinLinc app: <user>@rackham-gui.uppmax.uu.se
- ThinLinc in web browser: https://rackham-gui.uppmax.uu.se (this requires 2FA!)
Using a terminal
- Remember to have an X11 server installed!
- On Mac: install XQuartz.
- On Windows: use MobaXterm, or install Xming and use it with PuTTY or PowerShell.
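For example, to connect with X11 forwarding enabled from a terminal (the login address shown is the standard Rackham one; adjust for your cluster):

$ ssh -X <user>@rackham.uppmax.uu.se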
Requesting 4 cores for 10 minutes, then running R
UPPMAX
[bjornc@rackham2 ~]$ interactive -A naiss2024-22-107 -p devcore -n 4 -t 10:00
You receive the high interactive priority.
There are free cores, so your job is expected to start at once.
Please, use no more than 6.4 GB of RAM.
Waiting for job 29556505 to start...
Starting job now -- you waited for 1 second.
Let us check that we actually run on the compute node:
[bjornc@r483 ~]$ srun hostname
r483.uppmax.uu.se
r483.uppmax.uu.se
r483.uppmax.uu.se
r483.uppmax.uu.se
We are! Notice that we got a response from all four cores we have allocated.
HPC2N
[~]$ salloc -n 4 --time=00:30:00 -A hpc2n2024-025
salloc: Pending job allocation 20174806
salloc: job 20174806 queued and waiting for resources
salloc: job 20174806 has been allocated resources
salloc: Granted job allocation 20174806
salloc: Waiting for resource configuration
salloc: Nodes b-cn0241 are ready for job
[~]$ module load GCC/10.3.0 OpenMPI/4.1.1 R/4.0.4
[~]$
Let us check that we actually run on the compute node:
[~]$ srun hostname
b-cn0241.hpc2n.umu.se
b-cn0241.hpc2n.umu.se
b-cn0241.hpc2n.umu.se
b-cn0241.hpc2n.umu.se
We are! Notice that we got a response from all four cores we have allocated.
Warning
- You need to reload all modules you used on the login node!
[bjornc@r484 ~]$ module load R/4.1.1
The script: adding two numbers from user input (serial_sum.R)
- You will find it in the exercise directory exercises/r/, so go there with cd.
- Otherwise, use your favourite editor, add the text below, and save it as serial_sum.R.
# This program will add two numbers that are provided by the user
args = commandArgs(trailingOnly = TRUE)
res = as.numeric(args[1]) + as.numeric(args[2])
print(paste("The sum of the two numbers is", res))
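The script reads its two addends from the command line with commandArgs. If you want to experiment, a slightly more defensive variant (our own sketch, not part of the exercise files) validates the input first:

# Defensive variant of serial_sum.R: check the arguments before using them
args = commandArgs(trailingOnly = TRUE)
if (length(args) != 2) {
  stop("Usage: Rscript serial_sum.R <number1> <number2>")
}
res = as.numeric(args[1]) + as.numeric(args[2])
print(paste("The sum of the two numbers is", res))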
Running the script
- Note that the commands are the same for both HPC2N and UPPMAX!
Running an R script in the allocation we made further up. Notice that since we asked for 4 cores, the script is run 4 times, since it is a serial script:
$ srun Rscript serial_sum.R 3 4
[1] "The sum of the two numbers is 7"
[1] "The sum of the two numbers is 7"
[1] "The sum of the two numbers is 7"
[1] "The sum of the two numbers is 7"
Without the srun command, only a single task is launched; a serial R script will not use the extra cores by itself, so the program is run only once.
$ Rscript serial_sum.R 3 4
[1] "The sum of the two numbers is 7"
Running the R interpreter (UPPMAX)
- First start R and check the available workers with the future package:
> library(future)
> availableWorkers()
[1] "r483" "r483" "r483" "r483"
> availableCores()
nproc
4
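Once the workers are visible, you can let future spread work over them. A minimal sketch (the multisession plan and the toy computation are our own illustration, not part of the course material):

# Use one background R process per allocated core
library(future)
plan(multisession, workers = availableCores())
f <- future(sum(rnorm(1e7)))  # starts evaluating on a worker right away
value(f)                      # blocks until the worker is done, then returns the result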
When you have finished using the allocation, either wait for it to end or close it with exit.
Don't do it now!
We shall test RStudio first in the next session!
UPPMAX
[bjornc@r483 ~]$ exit
exit
[screen is terminating]
Connection to r483 closed.
[bjornc@rackham2 ~]$
HPC2N
[~]$ exit
exit
salloc: Relinquishing job allocation 20174806
salloc: Job allocation 20174806 has been revoked.
[~]$
Start an interactive session on a calculation node through a SLURM allocation:
- At HPC2N: salloc ...
- At UPPMAX: interactive ...
Then follow the same procedure as usual by loading the R module and possible prerequisites.