Container for (spack-manager) CUDA GPU Build of Exawind for NERSC Science Platform #575

Open: wants to merge 27 commits into main

Conversation

ajpowelsnl (Contributor) commented:

The proposed changes add a spack-manager-based, CUDA GPU-capable container build of Exawind for NERSC science platforms.

Build

podman-hpc build --no-cache -t <TAG_NAME> -f Dockerfile-containergpucuda .

Run

podman-hpc run --rm --gpu -it <TAG_NAME>

Expected Output

root@8953e3033a99:/exawind-entry/spack-manager# which exawind
/exawind-entry/spack-manager/snapshots/exawind/containergpucuda/2023-11-03/opt/linux-ubuntu22.04-zen3/gcc-11.4.0/exawind-git.d3c1aa4656fc3c6eccaec8c684671c82a3895172=multiphase-fyborjcfgnruxcks2vwzrc4guyw5toew/bin/exawind
root@8953e3033a99:/exawind-entry/spack-manager# exawind --help
usage: exawind [--awind NPROCS] [--nwind NPROCS] input_file
	-h,--help		Show this help message
	--awind NPROCS		Number of ranks for AMR-Wind (default = all ranks)
	--nwind NPROCS		Number of ranks for Nalu-Wind (default = all ranks)
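
For context, a hypothetical launch from inside the container might look like the line below. The input file name (hybrid.yaml), the rank counts, and the use of mpiexec are illustrative assumptions, not part of this PR's output:

mpiexec -np 8 exawind --awind 4 --nwind 4 hybrid.yaml    # 4 ranks for AMR-Wind, 4 for Nalu-Wind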

Helpful Hints

  • After exiting a container, you must rebuild it with a new tag name.
  • On non-NERSC machines, Docker can be used instead of podman-hpc (see the example commands after this list).
  • Be aware that spack-manager depends on a pinned version of Spack, which is a potential point of configure, build, and runtime fragility.
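
For example, on a non-NERSC machine with Docker and GPU support configured (e.g. the NVIDIA Container Toolkit), the analogous commands would be roughly the following; this is a sketch under those assumptions, not a tested recipe:

docker build --no-cache -t <TAG_NAME> -f Dockerfile-containergpucuda .
docker run --rm --gpus all -it <TAG_NAME>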

@psakievich (Collaborator) left a comment:

@ajpowelsnl thanks for doing this and sorry it has taken so long to get a review going.

Collaborator review comment on spack (Outdated):

This looks like it was an update to the submodule file, and not the spack commit. Is that right? We have a mirror-only policy on spack changes, so these changes would need to go into mainline spack.
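
One way to check whether only the submodule pointer moved, sketched here assuming the Spack submodule is checked out at ./spack in this repository:

git submodule status spack                      # commit currently recorded for the submodule
git diff --submodule=log main...HEAD -- spack   # old/new submodule commits introduced by this branch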

@@ -97,6 +97,9 @@ def is_e4s():
     "perlmutter": MachineData(
         lambda: os.environ["NERSC_HOST"] == "perlmutter", "perlmutter-p1.nersc.gov"
     ),
+    "containergpucuda": MachineData(
Collaborator review comment on the containergpucuda machine entry:

I am not sure I like this name. Would we expect this to build any container using CUDA, or specifically containers on Perlmutter? I would prefer to start with a more precise name and relax it rather than vice versa.
