If you just want to configure the queue setup, have a look at the documentation. The following details the code flow for job submission to the queue.
Every time pyiron submits a job to the queue (which has to be reachable from the current location; for a remote setup this is run on the remote machine), it runs:
`pyiron_base/pyiron_base/jobs/job/runfunction.py` (lines 406 to 420 at commit `b1e1884`)
The job submission is handled by the queue adapter, which populates the SLURM run template:
```
#!/bin/bash
#SBATCH --output=time.out
#SBATCH --job-name={{job_name}}
#SBATCH --workdir={{working_directory}}
#SBATCH --get-user-env=L
#SBATCH --partition=slurm
{%- if run_time_max %}
#SBATCH --time={{ [1, run_time_max // 60]|max }}
{%- endif %}
{%- if memory_max %}
#SBATCH --mem={{memory_max}}G
{%- endif %}
#SBATCH --cpus-per-task={{cores}}
```
(copied from here)
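To see how the conditional fields of the template resolve, its logic can be mimicked in plain Python (a sketch for illustration only: pyiron's queue adapter actually renders the Jinja2 template above, the function name and example values here are made up, and `run_time_max` is presumably given in seconds):

```python
def render_sbatch_header(job_name, working_directory, cores,
                         run_time_max=None, memory_max=None):
    # Plain-Python mimic of the Jinja2 queue template (illustrative only).
    lines = [
        "#!/bin/bash",
        "#SBATCH --output=time.out",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --workdir={working_directory}",
        "#SBATCH --get-user-env=L",
        "#SBATCH --partition=slurm",
    ]
    if run_time_max:
        # run_time_max in seconds; SLURM --time takes minutes,
        # clamped to at least 1 -- this is the [1, run_time_max // 60]|max filter
        lines.append(f"#SBATCH --time={max(1, run_time_max // 60)}")
    if memory_max:
        lines.append(f"#SBATCH --mem={memory_max}G")
    lines.append(f"#SBATCH --cpus-per-task={cores}")
    return "\n".join(lines)

# Example values (made up):
header = render_sbatch_header("my_job", "/scratch/my_job", cores=4,
                              run_time_max=90, memory_max=8)
print(header)
```

Note the clamp on `--time`: even a very short requested run time still reserves at least one minute in the queue.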
pyiron then submits this script to the queue. The command that ends up running is:
```python
command = (
    "python -m pyiron_base.cli wrapper -p "
    + job.working_directory
    + " -j "
    + str(job.job_id)
)
```
which essentially performs a `job.load()` followed by a `job.run()` on the compute node.
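The command-line side of the wrapper can be sketched with a minimal argument parser (hypothetical and simplified; the real entry point lives in `pyiron_base.cli` and ends with the load/run pair described above):

```python
import argparse

def parse_wrapper_args(argv):
    # Simplified sketch of the wrapper's command-line interface
    # (illustrative only; not the actual pyiron_base.cli implementation).
    parser = argparse.ArgumentParser(prog="pyiron-wrapper")
    parser.add_argument("-p", "--project", help="working directory of the job")
    parser.add_argument("-j", "--job-id", type=int, help="database id of the job")
    args = parser.parse_args(argv)
    # On the compute node, pyiron would now load the job from the
    # database and execute it: roughly job.load() followed by job.run().
    return args.project, args.job_id

project, job_id = parse_wrapper_args(["-p", "/scratch/my_job", "-j", "42"])
```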
Finally, `job.run()` calls
`pyiron_base/pyiron_base/jobs/job/runfunction.py` (lines 488 to 513 at commit `b1e1884`)
where `str(executable)` or `executable.executable_path` points to the shell script for the chosen version, as defined in the resources.
For example, running multi-core LAMMPS 2020.03.03 (`run_lammps_2020.03.03_mpi.sh`):
```bash
#!/bin/bash
mpiexec -n $1 --oversubscribe lmp_mpi -in control.inp;
```
(copied from here)
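The script receives the core count as its first positional argument (`$1`) and forwards it to `mpiexec -n`. How an invocation could be assembled can be sketched as follows (the helper name is made up; pyiron's actual dispatch sits in `runfunction.py` as linked above):

```python
def build_run_command(executable_path, cores):
    # The version-specific shell script takes the core count as $1,
    # which it passes on to mpiexec -n.
    return ["bash", executable_path, str(cores)]

cmd = build_run_command("run_lammps_2020.03.03_mpi.sh", 4)
# e.g. subprocess.run(cmd, cwd=job.working_directory) would then start
#   mpiexec -n 4 --oversubscribe lmp_mpi -in control.inp
```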