
Attendees

  • Peter Scheibel
  • Tammy Dahlgren
  • Massimiliano Culpo
  • Richarda Butler
  • Mark Krentel
  • Nils Fischer
  • Asher Mancinelli
  • Todd Gamblin

Agenda

  • (Nils) Are there plans for CI testing on individual packages?
    • (Todd) see share/spack/gitlab/cloud_pipelines/stacks/
      • Each stack encodes a specific set of packages
      • If you submit a PR for a package it will rebuild...
        • That package (if it's mentioned in a "stack")
        • Everything that depends on that package (assuming the package itself was rebuilt)
    • (Nils) Has PRs for ParaView on macOS
      • It wasn't working on macOS for weeks/months
      • For now we don't test builds on macOS (note that ParaView builds are tested on Linux: see the data-vis-sdk stack)
  • (Mark) I want reuse for a subset of packages, e.g. if I build package X which depends on Y and Z, then I may want to reuse Y/Z but build a new X (this is a partial reuse case)
  • (Asher) Need buildcaches on Frontier, running into problems
  • (Asher) In order to use external ROCm, the comments in the ROCm build system were essential
    • This should be promoted to full documentation
      • A docstring (a rough sketch of what this could look like follows the agenda list)
      • And later probably the readthedocs
  • (Todd) The concretizer has a --reuse option (which prefers using already-built packages to building new ones where possible). If we make that the default, then what do we call the option that turns off reuse?
  • (Peter) Is there a need for use_variant: https://github.com/spack/spack/issues/28442#issuecomment-1014455111
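As a rough illustration of the docstring suggestion above: the class, dependencies, and wording below are hypothetical, not taken from Spack's actual ROCm packages; the point is only where such guidance could live so that `spack info` and the generated docs can surface it.

```python
# Hypothetical sketch: guidance that currently lives as comments in the
# ROCm-related build code, promoted into a package docstring. Class name,
# dependencies, and wording are illustrative only.
from spack.package import *  # assumes a recent Spack package API


class RocmAwareExample(CMakePackage):
    """Example package built against ROCm.

    To use a system (external) ROCm install, register the relevant ROCm
    components as externals in packages.yaml, with ``prefix`` pointing at
    the vendor install location, so Spack reuses them instead of
    rebuilding the whole ROCm stack.
    """

    depends_on("hip")
    depends_on("rocblas")
```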

Next week

  • (Andrew) nvhpc installs CUDA, so which CUDA is being used if I install nvhpc with Spack?
    • We also have this problem with mvapich2 (it installs its own hwloc)
    • This came up while discussing use_variants on 1/19/22 (https://github.com/spack/spack/issues/28442)
      • (Peter) I don't think this will be sufficient (or in other words, I think use_variants solve a separate problem)
    • One suggestion was to make CUDA a virtual package
      • (Peter) I'm concerned about that, since making CUDA virtual imposes additional complexity on the CUDA package
    • (Peter) IMO there should be a way for a package to claim it "supplies" additional packages (a hypothetical sketch follows this list)
      • i.e. nvhpc can say it supplies CUDA, and could add CUDA to the DAG
        • It would specify the versions etc. that come with it
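A purely hypothetical sketch of what such a "supplies" declaration might look like in a package recipe; Spack has no `supplies` directive today, and the name, signature, and versions below are invented for illustration.

```python
# Hypothetical sketch only: Spack has no "supplies" directive. The idea is
# that nvhpc could declare the CUDA it bundles, so the concretizer can add
# a matching cuda node to the DAG (rooted in the nvhpc prefix) instead of
# building or fetching a second copy. Names and versions are illustrative.
from spack.package import *  # assumes a recent Spack package API


class Nvhpc(Package):
    """NVIDIA HPC SDK (sketch; the real recipe is omitted)."""

    version("22.1")

    # Invented syntax: "installing nvhpc@22.1 also makes cuda@11.5
    # available", with the version pinned to whatever actually ships
    # in that nvhpc release.
    supplies("cuda@11.5", when="@22.1")
```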

Possible agenda items

  • Multiple providers of same virtual package
    • Case 1: we want blas from X and lapack from Y
    • Case 2: we actually want several instances of the same virtual (e.g. to run different instances of MPI for the same root)
  • Possibly revisit: https://github.com/spack/spack/discussions/24966
    • Concerning improvements to spack develop
  • Harmen (not sure if I can join): there are packages like libblastrampoline / libmpitrampoline which provide a BLAS / LAPACK / MPI interface to link against and forward calls to an actual BLAS / LAPACK / MPI provider library. E.g. Julia uses it to link its binary deps against a BLAS interface, allowing the user to switch BLAS providers at runtime and avoiding ABI issues. The problem is that Spack only allows one provider per DAG, but these packages both provide the virtual and depend on it. How do we deal with this "composition" pattern? I was thinking: maybe we can relax "unique provider per DAG" to "unique provider per subgraph connected through link-type deps"? Then libblastrampoline could provide blas and depend on blas as a run-type dependency. (A rough sketch of the pattern follows below.)
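A sketch of Harmen's composition pattern using existing directives; note that the concretizer does not currently accept two providers of the same virtual in one DAG, so this only illustrates the proposed "unique provider per link-connected subgraph" relaxation, not something that concretizes today.

```python
# Sketch of the "composition" pattern: the trampoline both provides the
# blas/lapack virtuals (the interface consumers link against) and needs a
# real BLAS/LAPACK implementation at run time to forward calls to. Under
# the proposed relaxation, the run-type provider could differ from the
# link-graph provider. Build details are omitted; this is illustrative.
from spack.package import *  # assumes a recent Spack package API


class Libblastrampoline(MakefilePackage):
    """BLAS/LAPACK demuxing library (sketch)."""

    provides("blas")
    provides("lapack")

    # Run-type dependency on a concrete provider to forward calls to at
    # runtime; with today's "one provider per DAG" rule this conflicts
    # with the provides() declarations above.
    depends_on("blas", type="run")
    depends_on("lapack", type="run")
```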