Telcon: 2022 02 16
- Peter Scheibel
- Greg Becker
- Mark Krentel
- Massimiliano Culpo
- Nick Sly (LANL)
- Tammy Dahlgren
- Timothy Brown
- Wileam Phan
- (Nick) Upgrading a Cray system (SLES SP1 -> SP2 update, which changes .so versions on the system)
- This broke many build cache packages: they were linked against the older .so versions (and against the versioned .so.x files rather than the plain .so files)
- Furthermore, that detail wasn't recorded in the hash
- Note: if this underlying .so was represented by an external, and that external version were updated alongside the .so version, then the binary cache could be invalidated
- libabigail could be used to compare the two .so versions to check whether they're ABI compatible (see the sketch after this list)
- In this case the package, 'libwebp', is needed by 'libjpeg-turbo', so one could record libwebp as a dependency of libjpeg-turbo
- Another option: record the minor version update in the OS, and allow the user to force Spack to consider it different
- This is probably more straightforward
- Generally, though, since the user has to resolve this manually, it would be similar to just removing the cache (although if the cache is removed and the system is later rolled back, the cache removal would have to be undone)
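A minimal sketch of the libabigail idea above, assuming hypothetical library paths; `abidiff` is libabigail's comparison tool and exits 0 only when it finds no ABI difference:

```python
# Minimal sketch: use libabigail's abidiff to decide whether cached
# binaries linked against the old library can survive the OS update.
# The .so paths below are hypothetical.
import subprocess

def abi_compatible(old_so, new_so):
    # abidiff returns exit code 0 only when the two ABIs match
    result = subprocess.run(["abidiff", old_so, new_so])
    return result.returncode == 0

if abi_compatible("/usr/lib64/libwebp.so.6", "/usr/lib64/libwebp.so.7"):
    print("ABI compatible: keep the build cache entries")
else:
    print("ABI changed: invalidate the affected build cache entries")
```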
- (Wileam) Continuation from https://github.com/spack/spack/wiki/Telcon%3A-2022-01-26: vendored dependencies
- (Andrew) nvhpc installs CUDA, so which CUDA is being used if I install nvhpc with Spack?
- (Wileam) Should nvhpc be modularized like oneAPI? This would partially solve the embedded CUDA issue, I think
- nvhpc actually bundles more components, like openmpi, openblas, etc.
- https://docs.nvidia.com/hpc-sdk/hpc-sdk-release-notes/index.html#release-components
- (Greg) should we be manually removing these packages?
- It turns out that generally, if a package depends on CUDA and is built with nvhpc, we will be linking the Spack-built CUDA (see the sketch after this list)
- So this is more of a question of "I have nvhpc and it installs CUDA; how do I use that CUDA?" vs. "nvhpc adds a CUDA and needs it, but I don't want it"
- The latter problem is harder to solve, but doesn't seem to actually be the issue in this case (it would only be an issue if nvhpc needed that CUDA)
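A hedged illustration of the point above: recipes locate CUDA through the concretized spec, so even an nvhpc-compiled package links Spack's CUDA. The package below is made up for illustration, not a builtin recipe:

```python
# Illustrative fragment (hypothetical package, not a builtin recipe):
# even when compiled with nvhpc, the recipe resolves CUDA through the
# spec, so it links the Spack-built CUDA, not nvhpc's bundled copy.
from spack import *  # Spack package API

class MyCudaApp(CMakePackage):
    """Hypothetical CUDA-dependent application."""
    depends_on("cuda")

    def cmake_args(self):
        return [
            # self.spec["cuda"].prefix points at Spack's cuda package
            "-DCUDA_TOOLKIT_ROOT_DIR={0}".format(self.spec["cuda"].prefix),
        ]
```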
- (Peter) Externals cut off dependencies: https://github.com/spack/spack/issues/9149#issuecomment-1020740273
- (Peter) Automating nvcc's -Xcompiler flag for options that ought to be forwarded to the underlying host compiler; a sketch follows:
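A small sketch of the automation idea; `wrap_for_nvcc` is a hypothetical helper, not existing Spack code, and `-Xcompiler` is nvcc's flag for passing options through to the host compiler:

```python
# Hypothetical helper (not existing Spack code): nvcc only accepts its
# own options, so host-compiler flags have to be forwarded with
# -Xcompiler. This wraps each flag accordingly.
def wrap_for_nvcc(host_flags):
    wrapped = []
    for flag in host_flags:
        wrapped += ["-Xcompiler", flag]
    return wrapped

# ["-fPIC", "-Wall"] -> ["-Xcompiler", "-fPIC", "-Xcompiler", "-Wall"]
print(wrap_for_nvcc(["-fPIC", "-Wall"]))
```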
- Possible topic: multiple build systems
- e.g. a package changes build systems for some version
- See also https://github.com/spack/spack/pull/27021/files#diff-a69c213bdd36ddd464aa29f039985532107a6527f68c253fb5f0d204d7b462db
- i.e. the build system is often different on Windows
- IMO this is as simple as:
- Have a when-style clause for activating a build_system
- When a build system is active, look for e.g. `cmake_install` vs. `install`
- Users can just define `install` if they only use one build system (a sketch of this idea follows)
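A self-contained sketch of that proposal; the `build_system` directive below is the hypothetical when-style clause from the discussion, not current Spack API:

```python
# Sketch of the proposal only; build_system is a hypothetical directive.
def build_system(name, when=None):
    """Record that build system `name` is active when the spec
    matches the `when` clause (stubbed out here)."""
    pass

class ExamplePackage:
    build_system("autotools", when="@:1.9")  # older releases
    build_system("cmake", when="@2.0:")      # switched at 2.0

    def install(self, spec, prefix):
        # users with a single build system just define this
        pass

    def cmake_install(self, spec, prefix):
        # looked up in preference to `install` while cmake is active
        pass
```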
- Possible topic: Separating package repository from core
- There are some larger planned changes that prevent us from doing this immediately
- But we could record what is in the way and how to manage this transition
- Possible topic: new concretizer and handling of merged package repositories