
enhancement proposal: variant forwarding #391

Closed

Conversation

alalazo (Member) commented Jan 22, 2016

Rationale

This PR adds a forward keyword to the depends_on and extends directives to enable forwarding a variant from a dependent package to a direct dependency. Forwarding is taken into consideration only if nothing was explicitly specified for the forwarded variant on the dependency.
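As a rough sketch of that rule (using plain dicts as a stand-in for variant state; the helper below is hypothetical, the real feature lives in Spack's Spec machinery), forwarding only fills in a variant that the dependency leaves unspecified:

```python
# Minimal sketch of the forwarding rule with plain dicts as variant state
# (hypothetical helper, not Spack's actual implementation): a forwarded
# variant is copied to the dependency only when the dependency says
# nothing explicit about it.
def forward_variants(parent_variants, dep_variants, forwarded):
    result = dict(dep_variants)
    for name in forwarded:
        if name not in result and name in parent_variants:
            result[name] = parent_variants[name]
    return result

# 'spack install netlib-lapack+shared': +shared is forwarded to the blas dep
print(forward_variants({'shared': True}, {}, ['shared']))
# An explicit ~shared on the dependency wins over forwarding
print(forward_variants({'shared': True}, {'shared': False}, ['shared']))
```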

Modifications
  • added a forward keyword to depends_on and extends
  • added unit tests for the new feature
  • added documentation for the new feature
  • modified the netlib stack to forward the variants shared and debug
  • fixed the failing unit tests (Package vs. Spec creation order)
Examples
$ spack install netlib-lapack+shared
==> Forwarding variant "shared" from package netlib-lapack to package netlib-blas : +shared
==> Installing netlib-lapack
...
$ spack install netlib-scalapack
==> Forwarding variant "shared" from package netlib-scalapack to package netlib-lapack : +shared
==> Forwarding variant "shared" from package netlib-lapack to package netlib-blas : +shared
==> Installing netlib-scalapack
...
$ spack install netlib-scalapack+shared ^ netlib-lapack~shared
==> Forwarding variant "shared" from package netlib-lapack to package netlib-blas : ~shared

Implementation notes

I originally had some issues with the regression tests, because I was directly modifying a dictionary in spec.package that is shared among multiple specs. As I couldn't find a way to copy the dictionary during __init__, I made it a property that is computed lazily the first time it's needed. I am not sure this is the clearest approach. Hints on alternative ways to accomplish the same thing are welcome.
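The lazy-property workaround described above might look roughly like this (simplified mock classes, not the real Spec/Package; the attribute names are assumptions):

```python
import copy

class MockPackage:
    # Class-level dict shared by every spec of this package
    # (hypothetical stand-in for spec.package.forwarded_variants)
    forwarded_variants = {'lapack': ['shared']}

class MockSpec:
    package = MockPackage()

    @property
    def forwarded_variants(self):
        # Copy the package's dict lazily on first access, so each spec can
        # substitute virtual keys with concrete ones independently, without
        # recursing during __init__.
        if not hasattr(self, '_forwarded_variants'):
            self._forwarded_variants = copy.copy(self.package.forwarded_variants)
        return self._forwarded_variants

a, b = MockSpec(), MockSpec()
a.forwarded_variants['lapack'] = ['shared', 'debug']
print(b.forwarded_variants)  # b's copy is unaffected: {'lapack': ['shared']}
```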

alalazo (Member, Author) commented Jan 22, 2016

@tgamblin : if you find this PR interesting, I think I need some help with one thing. There are currently 12 tests failing because I modify the dictionary spec.package.forwarded_variants, basically to substitute keys representing virtual dependencies with keys representing concrete ones. As far as I can see, the package object is shared among multiple specs of the same package, so I should do something like:

spec.forwarded_variants = copy.copy(spec.package.forwarded_variants)

and then modify spec.forwarded_variants. However, I can't do this copy in Spec.__init__ because it leads to infinite recursion, so I was wondering what the correct way to deal with this is.

alalazo (Member, Author) commented Jan 25, 2016

@tgamblin : I'll wait for your feedback on this PR before writing a few lines in packaging_guide.rst to document the feature.

tgamblin (Member)

@alalazo: I really like this PR. The capability is obviously really useful, AND it goes through vdeps, so it will solve a lot of consistency problems with lapack and blas.

There are a few things I would think about here:

  1. Currently, this only forwards variants to direct dependencies, not transitive ones. I suspect it would be useful to forward settings to an entire sub-DAG, skipping packages that don't support the variant. This could potentially be implemented the way we currently propagate compilers: when concretizing a compiler spec, Spack looks up and grabs the compiler from the nearest ancestor that has its compiler set. I think this would be useful for things like cuda, where a large application might want all its libraries built with cuda support enabled, but might not depend directly on them.

    Would it make sense to allow forward to be a property of the variant directive rather than of the depends_on directive? I could see uses for either.

  2. I think there are cases where simply forwarding the same variant value isn't enough. OpenMP comes to mind, as there are applications that both use OpenMP and call dependencies that can use OpenMP. Depending on the app, you might want the dependency and the app to be built with OpenMP, to use the most parallelism, or you might want the dependency compiled without OpenMP, to avoid nested parallelism. In that case you would want to forward ~openmp to dependencies when the parent is +openmp. Is it possible to invert the sense of a forwarded variant?

  3. We're soon going to be converting variants to a more general parameter system -- i.e. they won't just be flags; they might be things like dims=2 for codes that build only in 2d or 3d. So they could have bool, integer, or string values. I wouldn't worry about this too much -- I think we can merge this code first and integrate the other stuff later, but I wonder if that also changes some of the ways you'd want to forward parameters.
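The nearest-ancestor lookup mentioned in point 1 (the way compilers are propagated today) could be sketched like this; the Node class is a hypothetical stand-in, not Spack's actual Spec:

```python
# Sketch of nearest-ancestor propagation (point 1): walk up the DAG and take
# the setting from the closest ancestor that defines it, skipping packages
# that don't. Hypothetical Node class, not Spack's real implementation.
class Node:
    def __init__(self, name, parent=None, variants=None):
        self.name = name
        self.parent = parent
        self.variants = variants or {}

def nearest_setting(node, variant):
    while node is not None:
        if variant in node.variants:
            return node.variants[variant]
        node = node.parent
    return None  # no ancestor sets it

app = Node('app', variants={'cuda': True})   # application built +cuda
lib = Node('lib', parent=app)                # doesn't mention cuda itself
leaf = Node('leaf', parent=lib)
print(nearest_setting(leaf, 'cuda'))         # inherits the setting from 'app'
```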

Thoughts?

@fpruvost has been thinking about this for a while and has a bunch of packages that use threading, MPI, and cuda. I wonder if he has any thoughts on the implementation. This would at least solve the combinatorial dep specification problem he reported on the mailing list, so he should look at this PR.

tgamblin (Member)

@alalazo: w.r.t. tests, I think it's hard to avoid brittleness for things that get defined at package definition time. I have thought about modifying MockPackagesTest so that the package modules get imported new each time through, which would solve the problem. Right now the tests either save and restore class data, or they could do something like what you've done here. I think this is fine for the moment; it's no worse than some of the other tests.

fpruvost (Contributor)

Hi,

thanks @tgamblin for keeping me in the loop.
I don't know if I can propose an implementation; I don't know Spack well enough!

Concerning @tgamblin thoughts 1, 2:

  1. Yes, you're absolutely right: restricting propagation to a direct dependency is really limiting. What would be very convenient is to propagate a variant through the whole DAG by default whenever packages share the same variant name.
  2. I think that in the general case, propagating the variant (i.e. enabling/disabling it if it is enabled/disabled at upper levels) is a good default behaviour. Of course, if we want to disable this propagation in some specific cases, we have to keep the ability to say so at "spack install" time, or via a default behaviour set in a configuration file. Anyway, what you propose here with the OpenMP example is a holy grail ;)

alalazo (Member, Author) commented Jan 27, 2016

Quick answers first:

@tgamblin (answers to 1 and 2)
I went for local propagation of variants mainly because it seemed the best choice in terms of both functionality and the number of lines that needed to change. Besides, I like the simplicity of the fact that a package can directly influence only things it is aware of (its direct dependencies), with fine control.

The downside is that, to obtain global propagation of something, all the packages in the relevant portion of the DAG need to be modified, and you can't have a "hole" in the chain (i.e. you can't propagate a variant from A to C via B if B does not declare that variant). The latter may be a major issue with this PR right now.

I didn't think of making forward a property of variants, but if we decide to go for globally propagated variants, that makes much more sense than having forward in depends_on. In either case it should be easy to add the ability to forward the opposite value of a given flag.

@fpruvost (answers to 1 and 2)

OpenMP at point 2 is a nice counterexample showing why finer control than just propagating the same variant across the whole DAG may be needed.

It's true that being more specific about a spec at spack install time will always work, but I found it rather inconvenient to have to specify:

spack install A+openmp ^ B~openmp ^ C~openmp

if it is known at package definition time that +openmp on A will require ~openmp on all its dependencies. I would rather say

spack install A+openmp

and let A take care of what is a "consequence" of +openmp.
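Inverted forwarding of this kind (a parent's +openmp implying ~openmp on its dependencies) reduces to flipping the forwarded value; a minimal sketch with a hypothetical helper, not a Spack API:

```python
# Sketch of inverted forwarding (hypothetical helper, not a Spack API):
# when the parent is +openmp, forward ~openmp to dependencies to avoid
# nested parallelism.
def forwarded_value(parent_value, invert=False):
    return (not parent_value) if invert else parent_value

parent_openmp = True
dep_openmp = forwarded_value(parent_openmp, invert=True)
print(dep_openmp)  # the dependency gets ~openmp
```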

On point 3, I'll post a separate comment with some thoughts that may also be relevant for #234.

gauthier12 (Contributor)

Hi,
I am digging up an old post, but I can't find the answer. Was a "variant forwarding" mechanism merged in the current version?
Is there a clean way to deal with combinatorial variant dependencies?

alalazo deleted the features/variant_forwarding branch April 28, 2020
alalazo (Member, Author) commented Apr 28, 2020

> Was a "variant forward" mechanism merged in the current version?

@gauthier12 Nothing similar to this PR was ever merged. The good news is that something much better will be landing shortly: a more powerful concretizer that does not rely on a greedy algorithm and takes constraints into account wherever they come from. I suspect that most of the use cases that led to this PR will be covered by the new concretization algorithm. If you want to follow the final stages of development, the feature branch where this is being coded is https://github.com/spack/spack/tree/features/solver

alalazo (Member, Author) commented Apr 28, 2020

> Is there a clean way to deal with combinatorial variant dependencies?

What do you mean by that? Can you give a concrete example?

gauthier12 (Contributor) commented Apr 28, 2020

> Is there a clean way to deal with combinatorial variant dependencies?
>
> What do you mean by that? Can you make a concrete example?

Thanks for your response.
I am trying to write Spack packages for Dune (https://www.dune-project.org/). Dune is a set of more or less independent modules with common options.
For each relevant CMake option, I have defined a Spack variant: say, for example, support for lapack, vc, and mkl.
In order to work properly, all the Dune modules need to be compiled with the same options.
So I have written a Spack package for each Dune module and defined the common variants in all of them.
But ensuring variant coherence across all the Dune modules is a pain; what I've found on the forums is to write out the combinations explicitly.
For my 3-variant example, that means writing:

	depends_on('A+lapack+vc+mkl', when='+lapack+vc+mkl')
	depends_on('A+lapack+vc~mkl', when='+lapack+vc~mkl')
	depends_on('A+lapack~vc+mkl', when='+lapack~vc+mkl')
	depends_on('A+lapack~vc~mkl', when='+lapack~vc~mkl')
	depends_on('A~lapack+vc+mkl', when='~lapack+vc+mkl')
	depends_on('A~lapack+vc~mkl', when='~lapack+vc~mkl')
	depends_on('A~lapack~vc+mkl', when='~lapack~vc+mkl')
	depends_on('A~lapack~vc~mkl', when='~lapack~vc~mkl')

That is 8 combinations for 3 variants; I have 14 common variants, and I may have forgotten some.
So I wanted to "propagate" the variants to the dependent Dune modules. I didn't need transitive propagation, only direct propagation, as these are Dune-specific variants.
Maybe I am not considering it from the right angle, I don't know.

alalazo (Member, Author) commented Apr 29, 2020

@gauthier12 I see two options here. The first one is to use for loops within the class definition:

import itertools

def v_str(name, active):
    # Render one variant as '+name' (enabled) or '~name' (disabled)
    return ('+' if active else '~') + name

class B(Package):
    names = ['lapack', 'vc', 'mkl']
    for flags in itertools.product((True, False), repeat=3):
        variant_str = ''.join(v_str(name, flag) for flag, name in zip(flags, names))
        depends_on('A' + variant_str, when=variant_str)

This is quite involved and certainly not recommended in general, but it is used in a few packages that present similar issues.

Maybe a better solution, if it makes sense for Dune, is to treat the Dune modules as resources. This means having a single "Dune" package with multiple resource directives. Each resource could be activated by a corresponding variant, and Dune should be able to install it on request.

gauthier12 (Contributor)

@alalazo Thank you for your answer. The first solution you mention probably works, but it doesn't seem practical, especially as the number of variants increases; the second solution is a good idea!
I considered each module as a package, but I hadn't thought of treating them as resources of one package, and that can make sense.
There are dependencies between the modules, but they can be implemented inside the package, as they are simple and internal to the Dune modules.
Thank you for your advice.

climbfuji added a commit to climbfuji/spack that referenced this pull request Jan 16, 2024:
…edi_neptune_env_passive: Add fftw and netlib-lapack as passive dependencies for jedi-neptune-env