add scales 1_cpn_2_nodes and 1_cpn_4_nodes#94

Merged
satishskamath merged 4 commits intoEESSI:mainfrom
smoors:more_scales
Nov 16, 2023

Conversation

@smoors
Collaborator

@smoors smoors commented Oct 14, 2023

No description provided.

@smoors
Collaborator Author

smoors commented Oct 14, 2023

I ran a few GROMACS tests with the new scales; everything seems to work fine, and no changes are needed in the hooks:

  • 1_core_per_node_2_nodes cpu-only
#!/bin/bash
#SBATCH --job-name="rfm_GROMACS_EESSI_12c54a39"
#SBATCH --ntasks=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --output=rfm_job.out
#SBATCH --error=rfm_job.err
#SBATCH --time=0:30:0
#SBATCH --nodes=2
#SBATCH --partition=skylake,skylake_mpi
somecommandthatfails
module load GROMACS/2021.3-foss-2021a
export OMP_NUM_THREADS=1
curl -LJO https://github.com/victorusu/GROMACS_Benchmark_Suite/raw/1.0.0/HECBioSim/Crambin/benchmark.tpr
srun gmx_mpi mdrun -nb cpu -s benchmark.tpr -dlb yes -npme -1 -ntomp 1

  • 1_core_per_node_4_nodes cpu-only
#!/bin/bash
#SBATCH --job-name="rfm_GROMACS_EESSI_c4a6e54f"
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --output=rfm_job.out
#SBATCH --error=rfm_job.err
#SBATCH --time=0:30:0
#SBATCH --nodes=4
#SBATCH --partition=skylake,skylake_mpi
somecommandthatfails
module load GROMACS/2021.3-foss-2021a
export OMP_NUM_THREADS=1
curl -LJO https://github.com/victorusu/GROMACS_Benchmark_Suite/raw/1.0.0/HECBioSim/Crambin/benchmark.tpr
srun gmx_mpi mdrun -nb cpu -s benchmark.tpr -dlb yes -npme -1 -ntomp 1

  • 1_core_per_node_4_nodes gpu
#!/bin/bash
#SBATCH --job-name="rfm_GROMACS_EESSI_271aca6c"
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --output=rfm_job.out
#SBATCH --error=rfm_job.err
#SBATCH --time=0:30:0
#SBATCH --partition=ampere_gpu
#SBATCH --gpus-per-node=1
module load GROMACS/2021.3-foss-2021a-CUDA-11.3.1
export OMP_NUM_THREADS=1
curl -LJO https://github.com/victorusu/GROMACS_Benchmark_Suite/raw/1.0.0/HECBioSim/Crambin/benchmark.tpr
srun gmx_mpi mdrun -nb gpu -s benchmark.tpr -dlb yes -npme -1 -ntomp 1
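
The three generated job scripts above follow directly from the scale definition: a scale fixes the node count and the number of tasks per node, and the rest of the script stays the same. A minimal sketch of that mapping (the function name and dict keys below are illustrative assumptions, not the EESSI test suite's actual API):

```python
# Hypothetical sketch: how a scale spec could translate into the Slurm
# directives seen in the generated job scripts above.
def slurm_directives(scale):
    """Build the core Slurm directives for a given scale spec."""
    n_nodes = scale['num_nodes']
    cpn = scale['num_cpus_per_node']  # cores (tasks) per node
    return [
        f'#SBATCH --ntasks={n_nodes * cpn}',
        f'#SBATCH --ntasks-per-node={cpn}',
        '#SBATCH --cpus-per-task=1',
        f'#SBATCH --nodes={n_nodes}',
    ]

# 1_cpn_2_nodes: one core per node, on two nodes
for line in slurm_directives({'num_nodes': 2, 'num_cpus_per_node': 1}):
    print(line)
```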

@boegel
Contributor

boegel commented Oct 14, 2023

The benefit of 1_core_per_node_2_nodes is that it's crystal clear, but it is looooooong.

Would cpn make sense, so 1_cpn_2_nodes?
Or maybe both if we think that cpn may be too cryptic for some, with 1_cpn_2_nodes as alias for 1_core_per_node_2_nodes?

@smoors
Collaborator Author

smoors commented Oct 14, 2023

1_cpn_2_nodes sounds good to me, and it's easy enough to remember once you know it.
That reminds me that we should add the new scales to the docs as well.

@smoors
Collaborator Author

smoors commented Oct 19, 2023

Hm, adding aliases like this is annoying when filtering a subset of scales.
The alternative would be to use something other than strings for the keys, but I don't think that's worth it,
so I'll just remove the aliases and keep the _cpn_ names.
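
The filtering concern can be seen with a small sketch: if scales live in a dict keyed by name, an alias adds a duplicate entry that every name- or size-based filter has to deduplicate. (The dict below is illustrative only; the real SCALES constant in the test suite has more entries and fields.)

```python
# Illustrative scales dict, keyed by scale name; the real SCALES
# constant in the EESSI test suite differs.
SCALES = {
    '1_core': {'num_nodes': 1, 'num_cpus_per_node': 1},
    '2_nodes': {'num_nodes': 2},
    '1_cpn_2_nodes': {'num_nodes': 2, 'num_cpus_per_node': 1},
    '1_cpn_4_nodes': {'num_nodes': 4, 'num_cpus_per_node': 1},
}

def multi_node_scales(scales):
    """Return the names of all scales spanning more than one node."""
    return [name for name, spec in scales.items()
            if spec.get('num_nodes', 1) > 1]

print(multi_node_scales(SCALES))
```

If `1_core_per_node_2_nodes` were kept as an alias for `1_cpn_2_nodes`, both names would show up in the result, and every such filter would need alias-aware deduplication.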

@smoors smoors changed the title add scales 1_core_per_node_2_nodes and 1_core_per_node_4_nodes add scales 1_cpn_2_nodes and 1_cpn_4_nodes Oct 19, 2023
@smoors smoors mentioned this pull request Oct 22, 2023
@satishskamath
Collaborator

satishskamath commented Nov 16, 2023

CPU test on the GPU node 1_cpn_4_nodes

#!/bin/bash
#SBATCH --job-name="rfm_job"
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --output=rfm_job.out
#SBATCH --error=rfm_job.err
#SBATCH --time=0:30:0
#SBATCH -p gpu
#SBATCH --export=None
module load 2022
module load GROMACS/2021.6-foss-2022a-CUDA-11.7.0
export OMP_NUM_THREADS=1
curl -LJO https://github.com/victorusu/GROMACS_Benchmark_Suite/raw/1.0.0/HECBioSim/Crambin/benchmark.tpr
mpirun -np 4 gmx_mpi mdrun -nb cpu -s benchmark.tpr -dlb yes -npme -1 -ntomp 1

CPU test on the CPU node 1_cpn_2_nodes

#!/bin/bash
#SBATCH --job-name="rfm_job"
#SBATCH --ntasks=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --output=rfm_job.out
#SBATCH --error=rfm_job.err
#SBATCH --time=0:30:0
#SBATCH -p gpu
#SBATCH --export=None
module load 2022
module load GROMACS/2021.6-foss-2022a
export OMP_NUM_THREADS=1
curl -LJO https://github.com/victorusu/GROMACS_Benchmark_Suite/raw/1.0.0/HECBioSim/Crambin/benchmark.tpr
mpirun -np 2 gmx_mpi mdrun -nb cpu -s benchmark.tpr -dlb yes -npme -1 -ntomp 1

CPU test on the CPU node 1_cpn_4_nodes

#!/bin/bash
#SBATCH --job-name="rfm_job"
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --output=rfm_job.out
#SBATCH --error=rfm_job.err
#SBATCH --time=0:30:0
#SBATCH -p thin
#SBATCH --export=None
module load 2022
module load GROMACS/2021.6-foss-2022a-CUDA-11.7.0
export OMP_NUM_THREADS=1
curl -LJO https://github.com/victorusu/GROMACS_Benchmark_Suite/raw/1.0.0/HECBioSim/Crambin/benchmark.tpr
mpirun -np 4 gmx_mpi mdrun -nb cpu -s benchmark.tpr -dlb yes -npme -1 -ntomp 1

Collaborator

@satishskamath satishskamath left a comment

Tested this on Snellius and it works.

@satishskamath satishskamath merged commit 1ba51dc into EESSI:main Nov 16, 2023