External packages #120
Conversation
…lers, variants and dependencies.
…d by those tests.
@mplegendre Would it be possible to have Spack read in multiple packages.yaml files -- like a conf.d directory? I'd like packages to be able to install their own package configuration yaml.
@mplegendre @becker33: I agree with John here. If we had a directory where we could drop per-package configs, then it would be easier to have distros (TOSS, OpenHPC) tell Spack about themselves.
Sure. This makes sense. The current yaml config has all config files repeating their config type on the first line, so the packages.yaml file begins with the line 'packages:'. We could just have Spack read *.yaml from a directory and merge the configs based on their identifying line.
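The merge Matt describes could be sketched roughly like this (illustrative Python only, not the actual Spack implementation; `merge_configs` is an invented name, and in practice each document would come from `yaml.safe_load()` on one file of the conf.d-style directory, read in sorted order):

```python
# Hypothetical sketch (not actual Spack code) of the conf.d-style merge
# described above: each *.yaml file identifies itself by its top-level
# key ('packages:', 'compilers:', ...), and documents sharing a key are
# merged, with later files winning on conflicting entries.
def merge_configs(documents):
    """Merge parsed YAML documents (dicts) by their top-level section key."""
    merged = {}
    for doc in documents:
        for section, entries in (doc or {}).items():
            # Later documents override earlier ones entry-by-entry.
            merged.setdefault(section, {}).update(entries)
    return merged
```

The entry-by-entry `update` means a per-package drop-in file only has to mention the packages it cares about; everything else falls through to earlier files.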
Conflicts:
	lib/spack/spack/cmd/mirror.py
	lib/spack/spack/concretize.py
	lib/spack/spack/config.py
	lib/spack/spack/spec.py
	lib/spack/spack/stage.py
	var/spack/packages/mvapich2/package.py
doc: minor typos fixed
This looks like a really nice and well thought-out idea. Some thoughts:
Suppose I'm trying to build BigSystemA. As I build it, I make decisions about which versions of MPI, NetCDF, etc. I wish to use. I set up a configuration file BigSystemA.yaml describing these choices as concretization preferences. When I run Spack with this yaml file, it builds things the way I need for BigSystemA.

A week later, I'm going to build BigSystemB on the same compute cluster. For technical reasons, BigSystemB needs different choices for MPI, NetCDF, etc., so I make a new configuration file BigSystemB.yaml describing those choices and build BigSystemB. Since Spack has no problem compiling multiple variations of packages, everything works seamlessly: I can execute BigSystemA/bin/runme or BigSystemB/bin/runme just fine, and everyone is happy.

Now I want to see if I can compile BigSystemA with the Intel compiler instead of the GCC I used last week. I copy BigSystemA.yaml to BigSystemA-Intel.yaml, change the compiler, and see whether things still build. So it's clear that I want per-project concretization preferences.

BUT... I also want site-specific concretization preferences, which would be overridden by the per-project preferences. For example, maybe there's a policy on this compute cluster that users do NOT compile their own SSH. The site-specific yaml configuration would specify the (hypothetical) "+system" variant of openssh, which would hypothetically just do a dummy install of SSH pointing to the system SSH.
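A per-project preference file of the kind described above might look something like the following. This is a hypothetical sketch only: the package names, versions, and the exact keys (version, compiler, variants) are invented for illustration; the real schema is whatever the PR's documentation defines.

```yaml
# BigSystemA.yaml -- hypothetical per-project concretization preferences
packages:
  mpich:
    version: [3.0.4]        # pin the MPI implementation for this project
  netcdf:
    compiler: [gcc@4.9.3]   # preferred compiler for NetCDF
  openssh:
    variants: +system       # site policy: use the system SSH, don't build it
```

Swapping GCC for Intel would then be a one-line change in a copied BigSystemA-Intel.yaml, and the site-wide openssh policy would live in a lower-precedence site file rather than being repeated per project.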
The old docs describe how to modify concretization by modifying the Spack code and extending an internal class. It's hoped that the new concretization preferences will allow you to do any interesting modifications via config files, so I removed the old docs. If we decide there are other concretization modifications that a user wants to control, I think we should look into extending the config options before revisiting the ability to write custom concretization classes.
Conflicts: lib/spack/spack/package.py
….g, 2 instead of 2.7.3)
…ack into features/external-packages Conflicts: lib/spack/docs/site_configuration.rst
I've remerged with develop and updated the documentation on this PR.

Finally merged. Thanks Matt!
* Add first version of bluepymm/bluepyopt/efel packages and dependencies
Update esma_cmake for MAPL
Spack support for external packages, including tests and documentation.
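For readers landing here, an external-package entry in packages.yaml looks roughly like the sketch below. This is from memory of Spack's external-packages documentation, not quoted from this PR; the spec strings and install prefix are invented, so check the merged docs for the exact schema.

```yaml
# Hypothetical packages.yaml fragment declaring an external install
packages:
  openmpi:
    paths:
      # Map a spec to the prefix of an existing installation.
      openmpi@1.10.3%gcc@4.9.3: /opt/openmpi-1.10.3
    buildable: False   # never build openmpi from source; use the external
```

With `buildable: False`, concretization must satisfy any openmpi dependency from the declared external rather than building its own copy.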