Add cpus_per_task parameter and adapt docstrings.
Co-authored-by: Lucas <lucas.galerykaeser@gmail.com>
jendrikseipp and galerykaeser committed Jul 5, 2021
1 parent b293199 commit 31e9fa2
Showing 3 changed files with 18 additions and 6 deletions.
3 changes: 2 additions & 1 deletion docs/news.rst
@@ -8,7 +8,8 @@ Lab
 ^^^
 * Automatically group multiple runs into one Slurm task when the number
   of runs exceeds the maximum number of Slurm tasks (Jendrik Seipp).
-* Add ``time_limit_per_task`` parameter to ``SlurmEnvironment``.
+* Add ``time_limit_per_task`` parameter to ``SlurmEnvironment`` (Jendrik Seipp).
+* Add ``cpus_per_task`` parameter to ``SlurmEnvironment`` (Lucas Galery Käser).

 Downward Lab
 ^^^^^^^^^^^^
2 changes: 2 additions & 0 deletions lab/data/slurm-job-header.template
@@ -14,6 +14,8 @@
 #SBATCH --time=%(time_limit_per_task)s
 ### Set memory limit.
 #SBATCH --mem-per-cpu=%(memory_per_cpu)s
+### Set number of cores per task.
+#SBATCH --cpus-per-task=%(cpus_per_task)s
 ### Number of tasks in array job.
 #SBATCH --array=1-%(num_tasks)d
 ### Adjustment to priority ([-2147483645, 2147483645]).
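The template header uses Python percent-style placeholders such as `%(cpus_per_task)s` and `%(num_tasks)d`. A minimal sketch of how such a header fragment could be rendered; the parameter values below are illustrative, not taken from the commit, and in Lab the real dictionary is built by `_get_job_params()`:

```python
# Fragment mirroring lab/data/slurm-job-header.template, with the
# new --cpus-per-task line included.
template = """\
#SBATCH --time=%(time_limit_per_task)s
#SBATCH --mem-per-cpu=%(memory_per_cpu)s
#SBATCH --cpus-per-task=%(cpus_per_task)s
#SBATCH --array=1-%(num_tasks)d"""

# Illustrative job parameters (hypothetical values).
job_params = {
    "time_limit_per_task": "0:05:00",
    "memory_per_cpu": "3872M",
    "cpus_per_task": 2,
    "num_tasks": 10,
}

# %-formatting substitutes each %(key)s / %(key)d from the dict.
header = template % job_params
print(header)
```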
19 changes: 14 additions & 5 deletions lab/environments.py
@@ -135,9 +135,10 @@ def __init__(self, email=None, extra_options=None, **kwargs):
 Use *extra_options* to pass additional options. The
 *extra_options* string may contain newlines. Slurm example that
-reserves two cores per run::
+uses a given set of nodes (additional nodes will be used if the
+given ones don't satisfy the resource constraints)::

-    extra_options='#SBATCH --cpus-per-task=2'
+    extra_options='#SBATCH --nodelist=ase[1-5,7,10]'

 See :py:class:`~lab.environments.Environment` for inherited
 parameters.
@@ -285,6 +286,7 @@ def __init__(
     qos=None,
     time_limit_per_task=None,
     memory_per_cpu=None,
+    cpus_per_task=1,
     export=None,
     setup=None,
     **kwargs,
@@ -327,6 +329,9 @@ def __init__(
 slack). We use a soft instead of a hard limit so that child
 processes can raise the limit.

+*cpus_per_task* sets the number of cores to be allocated per Slurm
+task (default: 1).
+
 Examples that reserve the maximum amount of memory available per core:

 >>> env1 = BaselSlurmEnvironment(partition="infai_1", memory_per_cpu="3872M")
@@ -339,7 +344,7 @@ def __init__(
 >>> env = BaselSlurmEnvironment(
 ...     partition="infai_1",
 ...     memory_per_cpu="3G",
-...     extra_options="#SBATCH --cpus-per-task=4",
+...     cpus_per_task=4,
 ... )

 Example that reserves 12 GiB of memory on infai_2:
@@ -349,7 +354,7 @@ def __init__(
 >>> env = BaselSlurmEnvironment(
 ...     partition="infai_2",
 ...     memory_per_cpu="6G",
-...     extra_options="#SBATCH --cpus-per-task=2",
+...     cpus_per_task=2,
 ... )

 Use *export* to specify a list of environment variables that
@@ -391,6 +396,7 @@ def __init__(
     self.qos = qos
     self.time_limit_per_task = time_limit_per_task
     self.memory_per_cpu = memory_per_cpu
+    self.cpus_per_task = cpus_per_task
     self.export = export
     self.setup = setup

@@ -424,8 +430,11 @@ def _get_job_params(self, step, is_last):
     job_params["qos"] = self.qos
     job_params["time_limit_per_task"] = self.time_limit_per_task
     job_params["memory_per_cpu"] = self.memory_per_cpu
+    job_params["cpus_per_task"] = self.cpus_per_task
     memory_per_cpu_kb = SlurmEnvironment._get_memory_in_kb(self.memory_per_cpu)
-    job_params["soft_memory_limit"] = int(memory_per_cpu_kb * 0.98)
+    job_params["soft_memory_limit"] = int(
+        self.cpus_per_task * memory_per_cpu_kb * 0.98
+    )
     job_params["nice"] = self.NICE_VALUE if is_run_step(step) else 0
     job_params["environment_setup"] = self.setup

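The change above scales the soft memory limit by the number of reserved cores: Slurm grants `--mem-per-cpu` for each core of a task, so the task's total budget is the product of the two, with 2% slack kept below the hard limit. A standalone sketch of that arithmetic, where `memory_in_kb` is a simplified stand-in for `SlurmEnvironment._get_memory_in_kb` (the real method may accept more suffixes):

```python
def memory_in_kb(limit):
    """Convert a Slurm memory string like '3872M' or '3G' to KiB.

    Simplified stand-in for SlurmEnvironment._get_memory_in_kb.
    """
    units = {"K": 1, "M": 1024, "G": 1024 ** 2}
    return int(limit[:-1]) * units[limit[-1].upper()]

def soft_memory_limit(memory_per_cpu, cpus_per_task):
    # Slurm grants memory_per_cpu for each reserved core, so the task's
    # budget scales with cpus_per_task; the factor 0.98 leaves 2% slack
    # below the hard Slurm limit (soft limits can be raised by children).
    return int(cpus_per_task * memory_in_kb(memory_per_cpu) * 0.98)

print(soft_memory_limit("3872M", 1))  # one core: 98% of 3872 MiB in KiB
print(soft_memory_limit("3G", 2))     # two cores double the budget
```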
