Per @eugeneswalker, E4S has useful build caches and container base images that users can leverage, and so we should document that appropriately:
I've set up a public repo for running nightly Spack development builds of exago@develop for the following GPUs on UO Frank systems: https://gitlab.e4s.io/uo-public/exago

AMD MI210
NVIDIA A100

The last workflow run can be seen from the pipelines page:

https://gitlab.e4s.io/uo-public/exago/-/pipelines
https://gitlab.e4s.io/uo-public/exago/-/pipelines/9711
https://gitlab.e4s.io/uo-public/exago/-/jobs/223788 (AMD MI210 Job)
https://gitlab.e4s.io/uo-public/exago/-/jobs/223787 (NVIDIA A100 Job)

The workflow linked above uses the following CI container images:

esw123/exago-rocm90a:2023.10.23 (for AMD MI210)
esw123/exago-cuda80:2023.10.23 (for NVIDIA A100)

These container images are built using recipes from the following public repository: https://gitlab.e4s.io/uo-public/exago-images

CUDA Image:
https://gitlab.e4s.io/uo-public/exago-images/-/tree/main/images/cuda
https://gitlab.e4s.io/uo-public/exago-images/-/blob/main/images/cuda/Dockerfile
https://gitlab.e4s.io/uo-public/exago-images/-/blob/main/images/cuda/spack.yaml

ROCm Image:
https://gitlab.e4s.io/uo-public/exago-images/-/tree/main/images/rocm
https://gitlab.e4s.io/uo-public/exago-images/-/blob/main/images/rocm/Dockerfile
https://gitlab.e4s.io/uo-public/exago-images/-/blob/main/images/rocm/spack.yaml

The binaries used to generate the images above are publicly exposed via a standard Spack mirror:

$ spack mirror add exago-ci https://cache.e4s.io/exago-ci
$ spack buildcache keys -it
$ spack buildcache list -al
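For example, once that mirror is added and its keys are trusted with the commands above, an install of a matching spec should be able to pull binaries from the cache instead of compiling everything from source. A minimal sketch; the spec and variants below are illustrative assumptions, not necessarily the exact specs built by the CI:

# Reuse binaries from the exago-ci cache where available, falling back to source builds otherwise
$ spack install exago@develop+cuda cuda_arch=80

# Or require that everything come from the build cache (fails if a matching binary is missing)
$ spack install --cache-only exago@develop+cuda cuda_arch=80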
The idea is that the CI images contain all the dependencies needed to build ExaGO from source, already installed.
This is what I've been working on. Right now it just runs the basic ctest once the spack dev-build completes.
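As a rough sketch of how a user might leverage one of those images locally (the runtime flags, clone URL, and in-container steps below are assumptions, not the actual CI workflow):

# Pull the CUDA CI image named above, which has ExaGO's dependencies pre-installed via Spack
$ docker pull esw123/exago-cuda80:2023.10.23

# Start an interactive shell; --gpus all assumes the NVIDIA container toolkit is available on the host
$ docker run --rm -it --gpus all esw123/exago-cuda80:2023.10.23 bash

# Inside the container, build ExaGO from source against the pre-installed dependencies
# (the repository URL and dev-build spec are assumptions)
$ git clone https://github.com/pnnl/ExaGO.git && cd ExaGO
$ spack dev-build exago@develop+cuda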
We also have a ghcr build cache from #85, so both the E4S cache and the ghcr cache should be documented. For authenticating to the GitHub Container Registry with a personal access token (classic), see https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic
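A minimal sketch of that authentication step, plus how the ghcr cache might be registered as a Spack mirror (the token, username, and ORG/REPO path are placeholders; the actual ghcr location from #85 is not filled in here):

# Log in to ghcr.io with a personal access token (classic), following the GitHub docs linked above
$ export CR_PAT=YOUR_TOKEN
$ echo $CR_PAT | docker login ghcr.io -u YOUR_USERNAME --password-stdin

# Assuming a Spack version with OCI build cache support, the ghcr cache could then be added as a mirror
$ spack mirror add exago-ghcr oci://ghcr.io/ORG/REPO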
hiop+cuda ^cuda@12