intel-oneapi-compilers/mpi: added some paths to intel-oneapi-mpi to make compilers and libs visible #20808

Merged
merged 19 commits into spack:develop from impi on Feb 4, 2021

Conversation

frankwillmore (Member)

added some paths to intel-oneapi-mpi to make compilers and libs visible with module

adamjstewart (Member) previously approved these changes Jan 19, 2021

@rscohn2 can you review?

frankwillmore and others added 6 commits January 19, 2021 10:12
glob not used

Co-authored-by: Robert Cohn <rscohn2@gmail.com>
_library_path does the _join_prefix

Co-authored-by: Robert Cohn <rscohn2@gmail.com>
_library_path does the _join_prefix

Co-authored-by: Robert Cohn <rscohn2@gmail.com>
@frankwillmore (Member, Author)

I added some environment vars to the generated module so that the compiler wrappers will be able to find things. Also, I'm patching all versions of libmpi.so to add an rpath for libfabric so that built executables will always be able to find libfabric. This introduces a build dependency on patchelf.

@rscohn2 when you have a chance, please look it over. It seems to be working as expected, and I can run wrapper-built executables with a purged module environment.
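
A minimal sketch of the rpath patching described above, using Spack's `Executable` wrapper; the function and argument names are illustrative, not the PR's actual code:

```python
from llnl.util.filesystem import find
from spack.util.executable import Executable


def add_libfabric_rpath(prefix, libfabric_lib_dir):
    """Append libfabric's lib directory to the rpath of each libmpi.so."""
    patchelf = Executable("patchelf")
    for lib in find(prefix, "libmpi.so*", recursive=True):
        # patchelf --set-rpath replaces the rpath wholesale, so read the
        # existing rpath first and append to it.
        old = patchelf("--print-rpath", lib, output=str).strip()
        new = "{0}:{1}".format(old, libfabric_lib_dir) if old else libfabric_lib_dir
        patchelf("--set-rpath", new, lib)
```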

@scheibelp scheibelp self-assigned this Jan 21, 2021
@frankwillmore frankwillmore reopened this Jan 25, 2021
@tldahlgren tldahlgren changed the title WIP: added some paths to intel-oneapi-mpi to make compilers and libs visible WIP: intel-oneapi-compilers/mpi: added some paths to intel-oneapi-mpi to make compilers and libs visible Jan 28, 2021
@becker33 becker33 added this to In progress in Spack v0.16.1 release via automation Jan 29, 2021
@tldahlgren (Contributor)

@rscohn2 @scheibelp Is this PR ready for merging? Todd would like to make a point release and my impression is this PR should be included.

@tldahlgren tldahlgren changed the title WIP: intel-oneapi-compilers/mpi: added some paths to intel-oneapi-mpi to make compilers and libs visible intel-oneapi-compilers/mpi: added some paths to intel-oneapi-mpi to make compilers and libs visible Feb 3, 2021
@scheibelp (Member)

I think there is one minor thing remaining: #20808 (comment)

Setting I_MPI_CC etc. seems like a good idea to me after thinking about it more.
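
For reference, a hedged sketch of what setting those variables might look like in the package's `setup_run_environment`; the exact variable list is an assumption based on this thread:

```python
def setup_run_environment(self, env):
    # Tell Intel MPI's compiler wrappers which underlying oneAPI
    # compilers to invoke when the module is loaded.
    env.set("I_MPI_CC", "icx")
    env.set("I_MPI_CXX", "icpx")
    env.set("I_MPI_FC", "ifx")
```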

rscohn2
rscohn2 previously approved these changes Feb 3, 2021
@tldahlgren (Contributor)

@frankwillmore Can you please resolve the conflict?

@scheibelp This PR has 2 approvals. Do you still have changes you want before it gets merged?

@scheibelp (Member) commented Feb 3, 2021

> This PR has 2 approvals. Do you still have changes you want before it gets merged?

Yes: #20808 (comment) (it sounds like this will be resolved in a day or two). However, I'm not aware of the timeline for the release (e.g. if it can be pushed to next week). Since there is a conflict I think it will be relatively easy to resolve the request and the sync issue at the same time.

Spack v0.16.1 release automation moved this from In progress to Review in progress Feb 3, 2021
@scheibelp (Member)

Hi @frankwillmore FYI there is also a merge conflict that will have to be resolved before this is merged.

@frankwillmore (Member, Author)

Ok, I fixed the merge conflict and re-tested; looks OK.

@scheibelp scheibelp merged commit beb4d96 into spack:develop Feb 4, 2021
Spack v0.16.1 release automation moved this from Review in progress to Done Feb 4, 2021
@frankwillmore frankwillmore deleted the impi branch February 4, 2021 23:28
@tldahlgren tldahlgren moved this from Done to Merged in Spack v0.16.1 release Feb 5, 2021
tldahlgren pushed a commit that referenced this pull request Feb 10, 2021
Facilitate running intel-oneapi-mpi outside of Spack (set PATH,
LD_LIBRARY_PATH, etc. appropriately).

Co-authored-by: Robert Cohn <rscohn2@gmail.com>
tldahlgren pushed a commit to tldahlgren/spack that referenced this pull request Feb 11, 2021
Facilitate running intel-oneapi-mpi outside of Spack (set PATH,
LD_LIBRARY_PATH, etc. appropriately).

Co-authored-by: Robert Cohn <rscohn2@gmail.com>
tldahlgren pushed a commit to tldahlgren/spack that referenced this pull request Feb 18, 2021
Facilitate running intel-oneapi-mpi outside of Spack (set PATH,
LD_LIBRARY_PATH, etc. appropriately).

Co-authored-by: Robert Cohn <rscohn2@gmail.com>
likask added a commit to likask/spack that referenced this pull request Feb 27, 2021
…spack_v0.16.1

* commit '8dd2d740b1fbd4335209240fcc42826d0a143f57': (79 commits)
  Update CHANGELOG and release version
  Resolve (post-cherry-picking) flake8 errors
  apple-clang: add correct path to compiler wrappers (spack#21662)
  intel-oneapi-compilers/mpi: add module support (spack#20808)
  intel-oneapi-compilers: add  to LD_LIBRARY_PATH so that it finds libimf.so (spack#20717)
  adding environment to OneMKL packages so that examples will build (spack#21377)
  add intel oneapi to compiler/pkg translations (spack#21448)
  llvm: "master" branch is now "main" branch (spack#21411)
  Print groups properly for spack find -d (spack#20028)
  store sbang_install_path in buildinfo, use for subsequent relocation (spack#20768)
  [WIP] relocate.py: parallelize test replacement logic (spack#19690)
  py-hovorod: fix typo on variant name in conflicts directive (spack#20906)
  concretizer: require at least a dependency type to say the dependency holds
  concretizer: dependency conditions cannot hold if package is external
  libyogrt: remove conflicts triggered by an invalid value (spack#20794)
  restore ability of dev-build to skip patches (spack#20351)
  intel-oneapi-mpi: virtual provider support (spack#20732)
  intel-oneapi-compilers package: correct module file (spack#20686)
  fix mpi lib paths, add virtual provides (spack#20693)
  Remove hard-coded standard C++ library selection and add more releases in llvm package (spack#19933)
  ...
matz-e added a commit to BlueBrain/spack that referenced this pull request Jul 30, 2021
* py-ipykernel: fix install (#19617)

There is a post-install routine in `ipykernel` that needs to be
called for proper registration with jupyter.
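
A sketch of how such a hook can be expressed in a Spack `package.py`; `@run_after` is standard Spack, while the exact invocation here is an assumption:

```python
# Inside the py-ipykernel package class (run_after comes from the
# `from spack import *` preamble of a package.py):
@run_after("install")
def register_kernel(self):
    # ipykernel ships a kernelspec installer; point it at this
    # package's prefix so jupyter can discover the kernel.
    python = self.spec["python"].command
    python("-m", "ipykernel", "install", "--prefix=" + self.prefix)
```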

* hip support for umpire, chai, raja, camp (#19715)

* create HipPackage base class and do some refactoring

* comments and added conflict to raja for openmp with hip

* fix error handling for spack test results command (#19987)

* py-ipykernel: fix bug in phase method (#19986)

* py-ipykernel: fix bug in phase method

* Fix bug in executable calling

* recognize macOS 11.1 as big sur (#20038)

Big Sur versions go 11.0, 11.0.1, 11.1 (vs. prior versions that
only used the minor component)

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
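
An illustrative mapping (not Spack's actual code) that captures the versioning change:

```python
def macos_release_name(product_version):
    """Map a macOS product version like '10.15.7' or '11.1' to a name."""
    major, _, rest = product_version.partition(".")
    if major == "11":
        # Big Sur varies all components: 11.0, 11.0.1, 11.1, ... are
        # all the same release.
        return "bigsur"
    # Pre-11 releases were distinguished by the minor component alone.
    names = {"13": "highsierra", "14": "mojave", "15": "catalina"}
    return names.get(rest.partition(".")[0], "unknown")
```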

* Docs: remove duplication in Command Reference (#20021)

* concretizer: treat conditional providers correctly (#20086)

refers #20040

This modification emits rules like:

provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").

for packages that provide virtual dependencies conditionally instead
of a fact that doesn't account for the condition.

* concretizer: allow a bool to be passed as argument for tests dependencies (#20082)

refers #20079

Added docstrings to 'concretize' and 'concretized' to
document the format for tests.

Added tests for the activation of test dependencies.

* concretizer: prioritize matching compilers over newer versions (#20020)

fixes #20019

Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:

conflicts('%gcc@X.Y:', when='@:A.B')

where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.

* concretizer: treat target ranges in directives correctly (#19988)

fixes #19981

This commit adds support for target ranges in directives,
for instance:

conflicts('+foo', when='target=x86_64:,aarch64:')

If any target in a spec body is not a known target the
following clause will be emitted:

node_target_satisfies(Package, TargetConstraint)

when traversing the spec and a definition of
the clause will then be printed at the end similarly
to what is done for package and compiler versions.

* Typos: add missing closing parens (#20174)

* concretizer: swap priority of selecting provider and default variant (#20182)

refers #20040

Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in depends_on clause) the same as
if they were with a default value.

This prevents the solver from avoiding expected configurations
just because they contain directives like:

depends_on('pkg+foo')

and `+foo` is not the default variant value for pkg.

* concretizer: remove ad-hoc rule for external packages (#20193)

fixes #20040

Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.

* spec: return early from concretization if a spec is already concrete (#20196)

* Fixes compile time errors (#20006)

Co-authored-by: michael laufer <michael.laufer@toganetworks.com>

* concretizer: don't optimize emitting version_satisfies() (#20128)

When all versions were allowed a version_satisfies rule was not emitted,
and this caused conditional directives to fail.

* boost: disable find_package's config mode for boost prior to v1.70.0 (#20198)

* Fix hipcc once more (#20095)

* concretizer: try hard to infer the real version of compilers (#20099)

fixes #20055

Compilers with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of spec doesn't match the "real" version of the
compiler.

This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.

* concretizer: call inject_patches_variants() on the roots of the specs (#20203)

As was done in the old concretizer. Fixes an issue where conditionally
patched dependencies did not show up in spec (gdal+jasper)

* avoid circular import (#20236)

* environment installs: fix reporting. (#20004)

PR #15702 changed the invocation of the report context when installing
specs, do the same when building environments.

* concretizer: restrict maximizing variant values to MV variants (#20194)

* concretizer: each external version is allowed by definition (#20247)

Registering external versions among the lists of allowed ones
generates the correct rules for `version_satisfies`

* VTK-m: update to specify correct requirements to kokkos (#20097)

* concretizer: refactor handling of special variants dev_build and patches

Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
lists at the very end.

The code for this for variant values from spec literals was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.

- [x] move ad-hoc value handling code into spec_clauses so we do it in
  one place for CLI and packages

- [x] move handling of `variant_possible_value`, etc. into
  `concretize.lp`, where we can automatically infer variant existence
  more concisely.

- [x] simplify/clarify some of the code for variants in `spec_clauses()`

* bugfix: work around issue handling packages not in any repo

* concretizer: try hard to obtain all needed variant_possible_value()'s (#20102)

Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.

This modification ensures that open-ended variants (variants accepting any string 
or any integer) are projected to the finite set of values that are relevant for this 
concretization.

* Tests: enable re-use of post-install tests in smoke tests (#20298)

* concretizer: remove clingo command-line driver (#20362)

I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.

So far, the command line is faster than running through Python, but I'm
working on fixing that.  I found that if I do this:

```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp")       # code from spack solve --show asp hdf5
control.load("display.lp")

control.ground([("base", [])])
control.solve(...)
```

It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.

So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.

* package sanity: ensure all variant defaults are allowed values (#20373)

* concretizer: don't use one_of_iff for range constraints (#20383)

Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`.  The rules look like this:

```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1. :- version_satisfies("python", "2.6:").
```

So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.

We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.

We can replace rules like the ones above with facts like this:

```
version_satisfies("python", "2.6:", "2.4")
```

And ground them in `concretize.lp` with rules like this:

```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
  :- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
  :- version(Package, Version), version_satisfies(Package, Constraint, Version).
```

The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer `version_satisfies/3` from `version/2`.
This form is also safe for grounding -- if we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.

- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed

* Fix comparisons for abstract specs (#20341)

bug only relevant for python3

* unit-tests: ensure that installed packages can be reused (#20307)

refers #20292

Added a unit test that ensures we can reuse installed
packages even if in the repository variants have been
removed or added.

* ci: fixes for compiler bootstrapping (#17563)

This PR addresses a number of issues related to compiler bootstrapping.

Specifically:
1. Collect compilers to be bootstrapped while queueing in installer
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install as they think they have not all their
dependencies installed. This PR collects the dependents and sets them on
compiler tasks.

2. allow bootstrapped compilers to back off target
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec.  Allow them to build with less
aggressive target optimization settings.

3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly.  Notably, this adds an intersection method for target
ranges and fixes the way ranges are satisfied and constrained on Spec objects.

This PR also:
- adds testing
- improves concretizer handling of target ranges

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* asp: memoize the list of all target_specs to speed-up setup phase (#20473)

* asp: memoize the list of all target_specs to speed-up setup phase

* asp: memoize using a cache per solver object

* concretizer: add #defined statements to avoid warnings.

`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives to avoid "info: atom does not
occur in any rule head" warnings.

* concretizer: pull _develop_specs_from_env out of main setup loop

* concretizer: spec_clauses should traverse dependencies

There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` with `spec_clauses()`.

* concretizer: move conditional dependency logic into `concretize.lp`

Continuing to convert everything in `asp.py` into facts, make the
generation of ground rules for conditional dependencies use facts, and
move the semantics into `concretize.lp`.

This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.

The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.

This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.

* concretizer: avoid redundant grounding on dependency types

* concretizer: emit facts for constraints on imposed dependencies

* concretizer: emit facts for integrity constraints

* concretizer: fix failing unit tests

* concretizer: optimized loop on node platforms

We can speed-up the computation by avoiding a
double loop in a cardinality constraint and
enforcing the rule instead as an integrity
constraint.

* concretizer: optimize loop on compiler version

Similar to the optimization on platform

* concretizer: refactor conditional rules to be less repetitious (#20507)

We have to repeat all the spec attributes in a number of places in
`concretize.lp`, and Spack has a fair number of spec attributes. If we
instead add some rules up front that establish equivalencies like this:

```
    node(Package) :- attr("node", Package).
    attr("node", Package) :- node(Package).

    version(Package, Version) :- attr("version", Package, Version).
    attr("version", Package, Version) :- version(Package, Version).
```

We can rewrite most of the repetitive conditions with `attr` and repeat
only for each arity (there are only 3 arities for spec attributes so far)
as opposed to each spec attribute. This makes the logic easier to read
and the rules easier to follow.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* Add Intel oneAPI packages (#20411)

This creates a set of packages which all use the same script to install
components of Intel oneAPI. This includes:

* An inheritable IntelOneApiPackage which knows how to invoke the
  installation script based on which components are requested
* For components which include headers/libraries, an inheritable
  IntelOneApiLibraryPackage is provided to locate them
* Individual packages for DAL, DNN, TBB, etc.
* A package for the Intel oneAPI compilers (icx/ifx). This also includes
  icc/ifortran but these are not currently detected in this PR

* bugfix: do not write empty default dicts/lists in envs (#20526)

Environment yaml files should not have default values written to them.

To accomplish this, we change the validator to not add the default values to yaml. We rely on the code to set defaults for all values (and use defaulting getters like dict.get(key, default)).

Includes regression test.

* concretizer: generate facts for externals

Generate only facts for external specs. Substitute the
use of already grounded rules with non-grounded rules
in concretize.lp

* bugfix: infinite loop when building a set from incomplete specs (#20649)

This code in `SpecBuilder.build_specs()` introduced in #20203, can loop
seemingly interminably for very large specs:

```python
set([spec.root for spec in self._specs.values()])
```

It's deceptive, because it seems like there must be an issue with
`spec.root`, but that works fine. It's building the set afterwards that
takes forever, at least on `r-rminer`. Currently if you try running
`spack solve r-rminer`, it loops infinitely and spins up your fan.

The issue (I think) is that the spec is not yet complete when this is
run, and something is going wrong when constructing and comparing so many
values produced by `_cmp_key()`. We can investigate the efficiency of
`_cmp_key()` separately, but for now, the fix is:

```python
roots = [spec.root for spec in self._specs.values()]
roots = dict((id(r), r) for r in roots)
```

We know the specs in `self._specs` are distinct (they just came out of
the solver), so we can just use their `id()` to unique them here. This
gets rid of the infinite loop.

* concretizer: more detailed section headers in concretize.lp

* concretizer: make _condtion_id_counter an iterator

* concretizer: consolidate handling of virtuals into spec_clauses

* concretizer: convert virtuals to facts; move all rules to `concretize.lp`

This converts the virtual handling in the new concretizer from
already-ground rules to facts. This is the last thing that needs to be
refactored, and it converts the entire concretizer to just use facts.

The previous way of handling virtuals hinged on rules involving
`single_provider_for` facts that were tied to the virtual and a version
range. The new method uses the condition pattern we've been using for
dependencies, externals, and conflicts.

To handle virtuals as conditions, we impose constraints on "fake" virtual
specs in the logic program; i.e., `version_satisfies("mpi", "2.0:",
"2.0")` is legal whereas before we wouldn't have seen something like
this. Currently, constraints are only handled on versions -- we don't
handle variants or anything else yet, but the key change here is that we
*could*. For a long time, virtual handling in Spack has only dealt with
versions, and we'd like to be able to handle variants as well. We could
easily add an integrity constraint to handle variants like the one we use
for versions.

One issue with the implementation here is that virtual packages don't
actually declare possible versions like regular packages do. To get
around that, we implement an integrity constraint like this:

    :- virtual_node(Virtual),
       version_satisfies(Virtual, V1), version_satisfies(Virtual, V2),
       not version_constraint_satisfies(Virtual, V1, V2).

This requires us to compare every version constraint to every other, both
in program generation and within the concretizer -- so there's a
potentially quadratic evaluation time on virtual constraints because we
don't have a real version to "anchor" things to. We just say that all the
constraints need to agree for the virtual constraint to hold.

We can investigate adding synthetic versions for virtuals in the future,
to speed this up.

* concretizer: remove rule generation code from concretizer

Our program only generates facts now, so remove all unused code related
to generating cardinality constraints and rules.

* concretizer: simplify handling of virtual version constraints

Previously, the concretizer handled version constraints by comparing all
pairs of constraints and ensuring they satisfied each other. This led to
inconsistent results from clingo, due to ambiguous semantics like:

    version_constraint_satisfies("mpi", ":1", ":3")
    version_constraint_satisfies("mpi", ":3", ":1")

To get around this, we introduce possible (fake) versions for virtuals,
based on their constraints. Essentially, we add any Versions,
VersionRange endpoints, and all such Versions and endpoints from
VersionLists to the constraint. Virtuals will have one of these synthetic
versions "picked" by the solver. This also allows us to remove a special
case from handling of `version_satisfies/3` -- virtuals now work just
like regular packages.

* concretizer: use consistent naming for compiler predicates (#20677)

Every other predicate in the concretizer uses a `_set` suffix to
implement user- or package-supplied settings, but compiler settings use a
`_hard` suffix for this. There's no difference in how they're used, so
make the names the same.

- [x] change `node_compiler_hard` to `node_compiler_set`
- [x] change `node_compiler_version_hard` to `node_compiler_version_set`

* concretizer: make rules on virtual packages more linear

fixes #20679

In this refactor we have a single cardinality rule on the
provider, which triggers a rule transforming a dependency
on a virtual package into a dependency on the provider of
the virtual.

* Remove hard-coded standard C++ library selection and add more releases in llvm package (#19933)

* Restore OS based Clang default choice of C++ standard library.

* Add LLVM 11.0.1 release

* fix mpi lib paths, add virtual provides (#20693)

* intel-oneapi-compilers package: correct module file (#20686)

This properly sets PATH/CPATH/LIBRARY_PATH etc. to make the
Spack-generated module file for intel-oneapi-compilers useful
(without this, 'icx' would not be found after loading the module
file for intel-oneapi-compilers).

* intel-oneapi-mpi: virtual provider support (#20732)

Set up environment and dependent packages properly when building
with intel-oneapi-mpi as a dependency MPI provider (e.g. point to
mpicc compiler wrapper).
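
A sketch of the Spack convention for this; the `spec.mpicc`-style attributes are the standard MPI-provider interface, but the wrapper paths are simplified relative to oneAPI's real versioned layout:

```python
import os


def setup_dependent_package(self, module, dependent_spec):
    # Packages that depend on the virtual `mpi` conventionally read
    # these attributes off the provider's spec.
    wrappers = self.prefix.bin  # simplified; oneAPI nests a versioned dir
    self.spec.mpicc = os.path.join(wrappers, "mpicc")
    self.spec.mpicxx = os.path.join(wrappers, "mpicxx")
    self.spec.mpifc = os.path.join(wrappers, "mpifc")
```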

* restore ability of dev-build to skip patches (#20351)

At some point in the past, the skip_patch argument was removed
from the call to package.do_install(); this broke the --skip-patch
flag on the dev-build command.

* libyogrt: remove conflicts triggered by an invalid value (#20794)

fixes #20611

The conflict was triggered by an invalid value of the
'scheduler' variant, which caused Spack to error when libyogrt
facts were validated by the ASP-based concretizer.

* concretizer: dependency conditions cannot hold if package is external

fixes #20736

Before this one line fix we were erroneously deducing
that dependency conditions hold even if a package
was external.

This may result in answer sets that contain imposed
conditions on a node without the node being present
in the DAG, hence #20736.

* concretizer: require at least a dependency type to say the dependency holds

fixes #20784

Similarly to the previous bug, here we were deducing
conditions to be imposed on nodes that were not part
of the DAG.

* py-hovorod: fix typo on variant name in conflicts directive (#20906)

* [WIP] relocate.py: parallelize test replacement logic (#19690)

* sbang pushed back to callers;
star moved to util.lang

* updated unit test

* sbang test moved; local tests pass

Co-authored-by: Nathan Hanford <hanford1@llnl.gov>

* store sbang_install_path in buildinfo, use for subsequent relocation (#20768)

* Print groups properly for spack find -d (#20028)

* llvm: "master" branch is now "main" branch (#21411)

* add intel oneapi to compiler/pkg translations (#21448)

* adding environment to OneMKL packages so that examples will build (#21377)

* intel-oneapi-compilers: add the compiler's library directory to LD_LIBRARY_PATH so that it finds libimf.so (#20717)

* add the compiler's library directory to LD_LIBRARY_PATH so that it finds libimf.so

* amrex: fix handling of CUDA arch (#20786)

* amrex: fix handling of CUDA arch
* amrex: fix style
* amrex: fix bug
* Update var/spack/repos/builtin/packages/amrex/package.py
* Update var/spack/repos/builtin/packages/amrex/package.py

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* ecp-data-vis-sdk: Combine the vis and io SDK packages (#20737)

This better enables the collective set to be deployed together,
satisfying each other's dependencies.

* r-sf: fix dependency error (#20898)

* improve documentation for Rocm (hip amd builds) (#20812)

* improve documentation

* astyle: Fix makefile for install parameter (#20899)

* llvm-doe: added new package (#20719)

The package contains duplicated code from llvm/package.py,
which a follow-up change will supersede.

* r-e1071: added v1.7-4 (#20891)

* r-diffusionmap: added v1.2.0 (#20881)

* r-covr: added v3.5.1 (#20868)

* r-class: added v7.3-17 (#20856)

* py-h5py: HDF5_DIR is needed for ~mpi too (#20905)

For the `~mpi` variant, the environment variable `HDF5_DIR` is still required.  I moved this command out of the `+mpi` conditional.
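
The shape of that change, as a hedged sketch using Spack's standard build-environment hook (the original diff is not reproduced here):

```python
def setup_build_environment(self, env):
    # HDF5_DIR is required for both +mpi and ~mpi builds, so set it
    # unconditionally rather than inside the +mpi branch.
    env.set("HDF5_DIR", self.spec["hdf5"].prefix)
```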

* py-hovorod: fix typo on variant name in conflicts directive (#20906)

* fujitsu-fftw: Add new package (#20824)

* pocl: added v1.6 (#20932)

Made versions 1.5 and lower conflict with a64fx.

* PCL: add new package (#20933)

* r-rle: new package (#20916)

Common 'base' and 'stats' methods for 'rle' objects, aiming to make it
possible to treat them transparently as vectors.

* r-ellipsis: added v0.3.1 (#20913)

* libconfig: add build dependency on texinfo (#20930)

* r-flexmix: add v2.3-17 (#20924)

* r-fitdistrplus: add v1.1-3 (#20923)

* r-fit-models: add v0.64 (#20922)

* r-fields: add v11.6 (#20921)

* r-fftwtools: add v0.9-9 (#20920)

* r-farver: add v2.0.3 (#20919)

* r-expm: add v0.999-6 (#20918)

* cln: add build dependency on texinfo (#20928)

* r-expint: add v0.1-6 (#20917)

* r-envstats: add v2.4.0 (#20915)

* r-energy: add v1.7-7 (#20914)

* r-ellipse: add v0.4.2 (#20912)

* py-fiscalyear: add v0.3.0 (#20911)

* r-ecp: add v3.1.3 (#20910)

* r-plotmo: add v3.6.0 (#20909)

* Improve gcc detection in llvm. (#20189)

Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>

* hatchet: updated urls (#20908)

* py-anuga: add new package (#20782)

* libvips: added v8.10.5 (#20902)

* libzmq: add platform conditions to libbsd dependency (#20893)

* r-dtw: add v1.22-3 (#20890)

* r-dt: add v0.17 (#20889)

* r-dosnow: add v1.0.19 (#20888)

* add version 1.0.16 to r-doparallel (#20886)

* add version 1.3.7 to r-domc (#20885)

* add version 0.9-15 to r-diversitree (#20884)

* add version 1.3-3 to r-dismo (#20883)

* add version 0.6.27 to r-digest (#20882)

* add version 1.5 to r-rngtools (#20887)

* add version 1.5.8 to r-dicekriging (#20877)

* add version 1.4.2 to r-httr (#20876)

* add version 1.28 to r-desolve (#20875)

* add version 2.2-5 to r-deoptim (#20874)

* add version 0.2-3 to r-deldir (#20873)

* add version 1.0.0 to r-crul (#20870)

* add version 1.1.0.1 to r-crosstalk (#20869)

* add version 1.0-1 to r-copula (#20867)

* add version 5.0.2 to r-rcppparallel (#20866)

* add version 2.0-1 to r-compositions (#20865)

* add version 0.4.10 to r-rlang (#20796)

* add version 0.3.6 to r-vctrs (#20878)

* amrex: add ROCm support (#20809)

* add version 2.0-0 to r-colorspace (#20864)

* add version 1.3-1 to r-coin (#20863)

* add version 0.19-4 to r-coda (#20862)

* add version 1.3.7 to r-clustergeneration (#20861)

* add version 0.3-58 to r-clue (#20860)

* add version 0.7.1 to r-clipr (#20859)

* add version 2.2.0 to r-cli (#20858)

* add version 0.4-3 to r-classint (#20857)

* add version 0.1.2 to r-globaloptions (#20855)

* add version 2.3-56 to r-chron (#20854)

* add version 0.4.10 to r-checkpoint (#20853)

* add version 2.0.0 to r-checkmate (#20852)

* add version 1.18.1 to r-catools (#20850)

* add version 1.2.2.2 to r-modelmetrics (#20849)

* add version 3.0-4 to r-cardata (#20847)

* add version 1.0.1 to r-caracas (#20846)

* r-lifecycle: new package at v0.2.0 (#20845)

* add version 3.0-10 to r-car (#20844)

* add version 3.4.5 to r-processx (#20843)

* add version 1.5-12.2 to r-cairo (#20842)

* add version 0.2.3 to r-cubist (#20841)

* add version 2.6 to r-rmarkdown (#20838)

* add version 1.2.1 to r-blob (#20819)

* add version 4.0.4 to r-bit (#20818)

* add version 2.4-1 to r-bio3d (#20816)

* add version 0.4.2.3 to r-bibtex (#20815)

* add version 3.1-4 to r-bayesm (#20807)

* add version 1.2.1 to r-backports (#20806)

* add version 2.0.3 to r-argparse (#20805)

* add version 5.4-1 to r-ape (#20804)

* add version 0.8-18 to r-amap (#20803)

* r-pixmap: added new package (#20795)

* zoltan: source code location change (#20787)

* refactor path logic

* added some paths to make compilers and libs discoverable

* add the compiler's library directory to LD_LIBRARY_PATH so that it finds libimf.so,
and clean up PEP8

* refactor path logic

* adding paths to LIBRARY_PATH so compiler wrappers will find -lmpi

* added vals for CC=icx, CXX=icpx, FC=ifx to generated module

* back out changes to intel-oneapi-mpi, save for separate PR

* Update var/spack/repos/builtin/packages/intel-oneapi-compilers/package.py

path is joined in _ld_library_path()

Co-authored-by: Robert Cohn <rscohn2@gmail.com>

* set absolute paths to icx,icpx,ifx

* dang close parenthesis

Co-authored-by: Robert Cohn <rscohn2@gmail.com>
Co-authored-by: mic84 <mrosso@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
Co-authored-by: darmac <xiaojun2@hisilicon.com>
Co-authored-by: Danny Taller <66029857+dtaller@users.noreply.github.com>
Co-authored-by: Tomoyasu Nojiri <68096132+t-nojiri@users.noreply.github.com>
Co-authored-by: Shintaro Iwasaki <siwasaki@anl.gov>
Co-authored-by: Glenn Johnson <glenn-johnson@uiowa.edu>
Co-authored-by: Kelly (KT) Thompson <KineticTheory@users.noreply.github.com>
Co-authored-by: Henrique Mendonça <henrique@users.noreply.github.com>
Co-authored-by: h-denpo <57649496+h-denpo@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Thomas Green <tomgreen66@hotmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
Co-authored-by: Abhinav Bhatele <bhatele@cs.umd.edu>
Co-authored-by: a-saitoh-fj <63334055+a-saitoh-fj@users.noreply.github.com>
Co-authored-by: QuellynSnead <quellyn@lanl.gov>

* intel-oneapi-compilers/mpi: add module support (#20808)

Facilitate running intel-oneapi-mpi outside of Spack (set PATH,
LD_LIBRARY_PATH, etc. appropriately).

Co-authored-by: Robert Cohn <rscohn2@gmail.com>

* apple-clang: add correct path to compiler wrappers (#21662)

Follow-up to #17110

### Before
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/apple-clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```

### After
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```

`CC` and `SPACK_CC` were being set correctly, but `PATH` was using the name of the compiler `apple-clang` instead of `clang`. For most packages, since `CC` was set correctly, nothing broke. But for packages using `Makefiles` that set `CC` based on `which clang`, it was using the system compilers instead of the compiler wrappers. Discovered when working on `py-xgboost@0.90`.

An alternative fix would be to copy the symlinks in `env/clang` to `env/apple-clang`. Let me know if you think there's a better way to do this, or to test this.

* Resolve (post-cherry-picking) flake8 errors

* Update CHANGELOG and release version

* updates for new tutorial

update s3 bucket
update tutorial branch

* update tutorial public key

* respect -k/verify-ssl-false in _existing_url method (#21864)

* use package supplied autogen.sh (#20319)

* Python 3.10 support: collections.abc (#20441)

(cherry picked from commit 40a40e0265d6704a7836aeb30a776d66da8f7fe6)

* concretizer: simplify "fact" method (#21148)

The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.

Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.

(cherry picked from commit ba42c36f00fe40c047121a32117018eb93e0c4b1)

* Improve error message for inconsistencies in package.py (#21811)

* Improve error message for inconsistencies in package.py

Sometimes directives refer to variants that do not exist.
Make it such that:

1. The name of the variant
2. The name of the package which is supposed to have
   such variant
3. The name of the package making this assumption

are all printed in the error message for easier debugging.

* Add unit tests

(cherry picked from commit 7226bd64dc3b46a1ed361f1e9d7fb4a2a5b65200)

* Updates to support clingo-cffi (#20657)

* Support clingo when used with cffi

Clingo recently merged in a new Python module option based on cffi.

Compatibility with this module requires a few changes to spack - it does not automatically convert strings/ints/etc to Symbol and clingo.Symbol.string throws on failure.

- manually convert str/int to clingo.Symbol types (sketched below)
- catch stringify exceptions
- add job for clingo-cffi to Spack CI
- switch to potassco-vendored wheel for clingo-cffi CI
- on_unsat argument when cffi

(cherry picked from commit 93ed1a410c4a202eab3a68769fd8c0d4ff8b1c8e)
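
An illustrative helper for the first item above; `to_symbol` is a hypothetical name, while `clingo.Number` and `clingo.String` are the real constructors the cffi module expects:

```python
import clingo


def to_symbol(value):
    """Explicitly convert a plain Python value to a clingo Symbol."""
    # bool is a subclass of int, so check it first.
    if isinstance(value, bool):
        return clingo.String(str(value))
    if isinstance(value, int):
        return clingo.Number(value)
    return clingo.String(str(value))
```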

* Run clingo-cffi tests in a container (#21913)

The clingo-cffi job has two issues to be solved:

1. It uses the default concretizer
2. It requires a package from https://test.pypi.org/simple/

The former can be fixed by setting the SPACK_TEST_SOLVER
environment variable to "clingo".

The latter though requires clingo-cffi to be pushed to a
more stable package index (since https://test.pypi.org/simple/
is meant as a scratch version of PyPI that can be wiped at
any time).

For the time being run the tests in a container. Switch back to
PyPI whenever a new official version of clingo is released.

* repo: generalize "swap" context manager to also accept paths

The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.

Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.

Make a few adjustments to MockPackageMultiRepo, since it was
stating in the docstring that it was supposed to mock
spack.repo.Repo and was instead mocking spack.repo.RepoPath.

(cherry picked from commit 1a8963b0f4c11c4b7ddd347e6cd95cdc68ddcbe0)

* Move context manager to swap the current store into spack.store

The context manager can be used to swap the current
store temporarily, for any use case that may need it.

(cherry picked from commit cb2c233a97073f8c5d89581ee2a2401fef5f878d)
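
The underlying pattern, as a minimal self-contained sketch (the global here is a stand-in for the state spack.store actually swaps):

```python
import contextlib

_store = None  # stand-in for the module-level store


@contextlib.contextmanager
def use_store(store):
    """Temporarily swap the global store, restoring it on exit."""
    global _store
    original, _store = _store, store
    try:
        yield store
    finally:
        _store = original
```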

* Move context manager to swap the current configuration into spack.config

The context manager can be used to swap the current
configuration temporarily, for any use case that may need it.

(cherry picked from commit 553d37a6d62b05f15986a702394f67486fa44e0e)

* bugfix for target adjustments on target ranges (#20537)

(cherry picked from commit 61c1b71d38e62a5af81b3b7b8a8d12b954d99f0a)

* Added a context manager to swap architectures

This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.

This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated

(cherry picked from commit 4558dc06e21e01ab07a43737b8cb99d1d69abb5d)

* make `spack fetch` work with environments (#19166)

* make `spack fetch` work with environments
* previously: `spack fetch` required the specs to be fetched to be stated
  explicitly, even when in an environment
* now: if no specs are provided to `spack fetch`, we check whether an
  environment is active and, if so, fetch all uninstalled specs.

* clingo: prefer master branch

Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.

- [x] make `master` the default so we don't have to keep telling people
  to install `clingo@master`. We'll update the preferred version when
  there's a new release.

* Clingo: fix missing import (#21364)

* clingo: added a package with option for bootstrapping clingo (#20652)

* clingo/clingo-bootstrap: added a package with option for bootstrapping clingo

- package builds in Release mode
- uses GCC options to link libstdc++ and libgcc statically

* clingo-bootstrap: apple-clang options to bootstrap statically on darwin

* clingo: fix the path of the Python interpreter

In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.

This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake.
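
A hedged sketch of passing that hint from the package's `cmake_args` (a standard Spack hook; the clingo recipe's exact lines may differ):

```python
def cmake_args(self):
    return [
        # Pin CMake to the interpreter from the spec, so it cannot pick
        # a different Python that shares the same installation prefix.
        "-DPython_EXECUTABLE={0}".format(self.spec["python"].command.path),
    ]
```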

* clingo: the commit for "spack" version has been updated.

* clingo: fix typo (#22444)

* clingo-bootstrap: account for cray platform (#22460)

(cherry picked from commit 138312efabd534fa42d1a16e172e859f0d2b5842)

* Bootstrap clingo from sources (#21446)

* Allow the bootstrapping of clingo from sources

Allow python builds with system python as external
for MacOS

* Ensure consistent configuration when bootstrapping clingo

This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.

* Github actions: test clingo with bootstrapping from sources

* Add command to inspect and clean the bootstrap store

 Prevent users from setting the install tree root to the bootstrap store

* clingo: documented how to bootstrap from sources

Co-authored-by: Gregory Becker <becker33@llnl.gov>
(cherry picked from commit 10e9e142b75c6ca8bc61f688260c002201cc1b22)

* bootstrap: account for platform specific configuration scopes (#22489)

This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.

(cherry picked from commit 413c422e53843a9e33d7b77a8c44dcfd4bf701be)

* concretizer: unify logic for spec conditionals

This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.

Given some directives in a package like:

```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```

We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:

```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
  attr(Name, Arg1)             : required_dependency_condition(ID, Name, Arg1);
  attr(Name, Arg1, Arg2)       : required_dependency_condition(ID, Name, Arg1, Arg2);
  attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
  dependency_condition(ID, Parent, Dependency);
  node(Parent).
```

And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:

```prolog
attr(Name, Arg1, Arg2) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2).

attr(Name, Arg1, Arg2, Arg3) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```

These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.

Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:

```prolog
condition_holds(ID) :-
  condition(ID);
  attr(Name, A1)         : condition_requirement(ID, Name, A1);
  attr(Name, A1, A2)     : condition_requirement(ID, Name, A1, A2);
  attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).

attr(Name, A1)         :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2)     :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```

this allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:

```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```

The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:

```prolog
dependency_holds(Package, Dependency, Type) :-
  dependency_condition(ID, Package, Dependency),
  dependency_type(ID, Type),
  condition_holds(ID).
```

Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:

```python
    def package_dependencies_rules(self, pkg, tests):
        """Translate 'depends_on' directives into ASP logic."""
        for _, conditions in sorted(pkg.dependencies.items()):
            for cond, dep in sorted(conditions.items()):
                condition_id = self.condition(cond, dep.spec, pkg.name)  # create a condition and get its id
                self.gen.fact(fn.dependency_condition(  # associate specifics about the dependency w/the id
                    condition_id, pkg.name, dep.spec.name
                ))
        # etc.
```

- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals

* bugfix: do not generate dep conditions when no dependency

We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.

- [x] change asp.py so that this can't happen -- we now only generate
      dependency types for possible dependencies.

* bugfix: allow imposed constraints to be overridden in special cases

In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.

This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.

We add one special case for this: dependencies of externals.

* spack location: bugfix for out of source build dirs (#22348)

* Channelflow: Fix the package. (#22483)

A search and replace went wrong in 2264e30d99d8b9fbdec8fa69b594e53d8ced15a1.

Thanks to @wadudmiah who reported this issue.

* Make SingleFileScope able to repopulate the cache after clearing it (#22559)

fixes #22547

SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.

* ASP-based solver: model disjoint sets for multivalued variants (#22534)

* ASP-based solver: avoid adding values to variants when they're set

fixes #22533
fixes #21911

Added a rule that prevents any value from slipping into a variant when
the variant is set explicitly. This is relevant for multi-valued
variants, in particular for those that have disjoint sets of values.

* Ensure disjoint sets have a clear semantics for external packages

* clingo: modify recipe for bootstrapping (#22354)

* clingo: modify recipe for bootstrapping

Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping

* Remove option that breaks on linux

* Give more hints for the current Python

* Disable CLINGO_BUILD_PY_SHARED for bootstrapping

* bootstrapping: try to detect the current python from std library

This is much faster than calling external executables

* Fix compatibility with Python 2.6

* Give hints on which compiler and OS to use when bootstrapping

This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.

* Use spec_for_current_python to constrain module requirement

(cherry picked from commit d5fa509b072f0e58f00eaf81c60f32958a9f1e1d)

* Externals are preferred even when they have non-default variant values

fixes #22596

Variants which are specified in an external spec are not
scored negatively if they encode a non-default value.

* Enforce uniqueness of the version_weight atom per node

fixes #22565

This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.

Before this change it may have happened that for the same
node two different version_weight/2 were in the answer set,
each of which referred to a different spec with the same
version, and their weights would sum up.

This led to unexpected results, like preferring to build a
new version of an external if the external version was
older.

* bugfix for active when pkg is already active error (#22587)

* bugfix for active when pkg is already active error

Co-authored-by: Greg Becker <becker33@llnl.gov>

* Fix clearing cache of InternalConfigScope (#22609)

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* Bootstrap: add _builtin config scope (#22610)

(cherry picked from commit a37c916dff5a5c6e72f939433931ab69dfd731bd)

* Bootstrapping: swap store before configuration (#22631)

fixes #22294

A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
custom install tree not being taken into account if clingo
had to be bootstrapped.

This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.

* Remove erroneous warnings about quotes for from_source_file (#22767)

* "spack build-env" searches env for relevant spec (#21642)

If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretize: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.

This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.

This makes a similar change to spack cd.

If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).

Co-authored-by: Gregory Becker <becker33@llnl.gov>

* ASP-based solver: assign OS correctly with inheritance from parent (#22896)

fixes #22871

When in presence of multiple choices for the operating system
we were lacking a rule to derive the node OS if it was
inherited.

* Externals with merged prefixes (#22653)

We remove system paths from search variables like PATH and 
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages 
may be installed to prefixes that are not actually system paths 
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.

* ASP-based solver: suppress warnings when constructing facts (#23090)

fixes #22786

Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.

* Use Python's built-in machinery to import compilers (#23290)

* Add "spack [cd|location] --source-dir" (#22321)

* spack location: fix usage without args (#22755)

* Import hooks using Python's built-in machinery (#23288)

The function we coded in Spack to load Python modules with arbitrary
names from a file seem to have issues with local imports. For
loading hooks though it is unnecessary to use such functions, since
we don't care to bind a custom name to a module nor we have to load
it from an unknown location.

This PR thus modifies spack.hook in the following ways:

- Use __import__ instead of spack.util.imp.load_source (this
  addresses #20005; see the sketch after this list)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Marked with a leading underscore all the names that are supposed
  to stay local
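
A minimal sketch of that first item; the commit uses `__import__`, and `importlib.import_module` shown here is the equivalent built-in machinery (the module path is illustrative):

```python
import importlib


def _load_hook_module(name):
    # Import the hook by its real dotted name instead of loading the
    # source file under a synthetic module name.
    return importlib.import_module("spack.hooks." + name)
```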

* ASP-based solver: no intermediate package for concretizing together (#23307)

The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).

Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.

* ASP-based solve: minimize compiler mismatches (#23016)

fixes #22718

Instead of trying to maximize the number of
matches (preferred behavior), try to minimize
the number of mismatches (unwanted behavior).

* performance: speed up existence checks in packages (#23661)

Spack doesn't require users to manually index their repos; it reindexes the indexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.

When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`).  That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations.  It's a win now to make `Repo.exists()` check files directly.

**Fix:**

This PR does a number of things to speed up `spack load`, `spack info`, and other commands:

- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
      (avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
      a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
      exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
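
A simplified sketch of the direct check, assuming the conventional `<repo>/packages/<name>/package.py` layout (the real method lives on `Repo` and handles more details):

```python
import os

def exists(self, pkg_name):
    # One stat() on a well-known path instead of consulting the full index.
    filename = os.path.join(self.root, "packages", pkg_name, "package.py")
    return os.path.exists(filename)
```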

* Style fixes for v0.16.2 release

* Update CHANGELOG and release version for v0.16.2

* Update command to setup tutorial (#24488)

* Flake8, bash completion

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Danny Taller <66029857+dtaller@users.noreply.github.com>
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Martin Aumüller <aumuell@reserv.at>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: George Hartzell <hartzell@alerce.com>
Co-authored-by: MichaelLaufer <70094649+MichaelLaufer@users.noreply.github.com>
Co-authored-by: michael laufer <michael.laufer@toganetworks.com>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Robert Maynard <robert.maynard@kitware.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@googlemail.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
Co-authored-by: Ye Luo <xw111luoye@gmail.com>
Co-authored-by: Frank Willmore <frankwillmore@gmail.com>
Co-authored-by: Robert Underwood <robertu94@users.noreply.github.com>
Co-authored-by: Henrique Mendonça <henrique@users.noreply.github.com>
Co-authored-by: Nathan Hanford <8302958+nhanford@users.noreply.github.com>
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
Co-authored-by: Yang Zongze <yangzongze@gmail.com>
Co-authored-by: mic84 <mrosso@lbl.gov>
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
Co-authored-by: darmac <xiaojun2@hisilicon.com>
Co-authored-by: Tomoyasu Nojiri <68096132+t-nojiri@users.noreply.github.com>
Co-authored-by: Shintaro Iwasaki <siwasaki@anl.gov>
Co-authored-by: Glenn Johnson <glenn-johnson@uiowa.edu>
Co-authored-by: Kelly (KT) Thompson <KineticTheory@users.noreply.github.com>
Co-authored-by: h-denpo <57649496+h-denpo@users.noreply.github.com>
Co-authored-by: Thomas Green <tomgreen66@hotmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
Co-authored-by: Abhinav Bhatele <bhatele@cs.umd.edu>
Co-authored-by: a-saitoh-fj <63334055+a-saitoh-fj@users.noreply.github.com>
Co-authored-by: QuellynSnead <quellyn@lanl.gov>
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: Phil Tooley <32297355+ptooley@users.noreply.github.com>
Co-authored-by: Josh Essman <68349992+joshessman-llnl@users.noreply.github.com>
Co-authored-by: Andreas Baumbach <healther@users.noreply.github.com>
Co-authored-by: Maxim Belkin <maxim.belkin@gmail.com>
Co-authored-by: Rémi Lacroix <remi.lacroix@idris.fr>
Co-authored-by: Cyrus Harrison <cyrush@llnl.gov>
Co-authored-by: Peter Scheibel <scheibel1@llnl.gov>
Co-authored-by: Todd Gamblin <gamblin2@llnl.gov>
matz-e added a commit to BlueBrain/spack that referenced this pull request Sep 23, 2021
* py-ipykernel: fix install (#19617)

There is a post-install routine in `ipykernel` that needs to be
called for proper registration with jupyter.

* hip support for umpire, chai, raja, camp (#19715)

* create HipPackage base class and do some refactoring

* comments and added conflict to raja for openmp with hip

* fix error handling for spack test results command (#19987)

* py-ipykernel: fix bug in phase method (#19986)

* py-ipykernel: fix bug in phase method

* Fix bug in executable calling

* recognize macOS 11.1 as big sur (#20038)

Big Sur versions go 11.0, 11.0.1, 11.1 (vs. prior versions that
only used the minor component)

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>

* Docs: remove duplication in Command Reference (#20021)

* concretizer: treat conditional providers correctly (#20086)

refers #20040

This modification emits rules like:

provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").

for packages that provide virtual dependencies conditionally instead
of a fact that doesn't account for the condition.

* concretizer: allow a bool to be passed as argument for tests dependencies (#20082)

refers #20079

Added docstrings to 'concretize' and 'concretized' to
document the format for tests.

Added tests for the activation of test dependencies.

* concretizer: prioritize matching compilers over newer versions (#20020)

fixes #20019

Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:

conflicts('%gcc@X.Y:', when='@:A.B')

where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.

* concretizer: treat target ranges in directives correctly (#19988)

fixes #19981

This commit adds support for target ranges in directives,
for instance:

conflicts('+foo', when='target=x86_64:,aarch64:')

If any target in a spec body is not a known target, the
following clause will be emitted while traversing the spec:

node_target_satisfies(Package, TargetConstraint)

A definition of the clause will then be printed at the end,
similarly to what is done for package and compiler versions.

* Typos: add missing closing parens (#20174)

* concretizer: swap priority of selecting provider and default variant (#20182)

refers #20040

Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in depends_on clause) the same as
if they had a default value.

This prevents the solver from avoiding expected configurations
just because they contain directives like:

depends_on('pkg+foo')

and `+foo` is not the default variant value for pkg.

* concretizer: remove ad-hoc rule for external packages (#20193)

fixes #20040

Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.

* spec: return early from concretization if a spec is already concrete (#20196)

* Fixes compile time errors (#20006)

Co-authored-by: michael laufer <michael.laufer@toganetworks.com>

* concretizer: don't optimize emitting version_satisfies() (#20128)

When all versions were allowed a version_satisfies rule was not emitted,
and this caused conditional directives to fail.

* boost: disable find_package's config mode for boost prior to v1.70.0 (#20198)

* Fix hipcc once more (#20095)

* concretizer: try hard to infer the real version of compilers (#20099)

fixes #20055

Compilers with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of the spec doesn't match the "real" version of the
compiler.

This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.

* concretizer: call inject_patches_variants() on the roots of the specs (#20203)

As was done in the old concretizer. Fixes an issue where conditionally
patched dependencies did not show up in spec (gdal+jasper)

* avoid circular import (#20236)

* environment installs: fix reporting. (#20004)

PR #15702 changed the invocation of the report context when installing
specs; do the same when building environments.

* concretizer: restrict maximizing variant values to MV variants (#20194)

* concretizer: each external version is allowed by definition (#20247)

Registering external versions among the lists of allowed ones
generates the correct rules for `version_satisfies`

* VTK-m: update to specify correct requirements to kokkos (#20097)

* concretizer: refactor handling of special variants dev_build and patches

Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
lists at the very end.

The code for this for variant values from spec literals was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.

- [x] move ad-hoc value handling code into spec_clauses so we do it in
  one place for CLI and packages

- [x] move handling of `variant_possible_value`, etc. into
  `concretize.lp`, where we can automatically infer variant existence
  more concisely.

- [x] simplify/clarify some of the code for variants in `spec_clauses()`

* bugfix: work around issue handling packages not in any repo

* concretizer: try hard to obtain all needed variant_possible_value()'s (#20102)

Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.

This modification ensures that open-ended variants (variants accepting any string 
or any integer) are projected to the finite set of values that are relevant for this 
concretization.

* Tests: enable re-use of post-install tests in smoke tests (#20298)

* concretizer: remove clingo command-line driver (#20362)

I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.

So far, the command line is faster than running through Python, but I'm
working on fixing that.  I found that if I do this:

```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp")       # code from spack solve --show asp hdf5
control.load("display.lp")

control.ground([("base", [])])
control.solve(...)
```

It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.

So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.

* package sanity: ensure all variant defaults are allowed values (#20373)

* concretizer: don't use one_of_iff for range constraints (#20383)

Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`.  The rules look like this:

```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1. :- version_satisfies("python", "2.6:").
```

So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.

We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.

We can replace rules like the ones above with facts like this:

```
version_satisfies("python", "2.6:", "2.4")
```

And ground them in `concretize.lp` with rules like this:

```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
  :- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
  :- version(Package, Version), version_satisfies(Package, Constraint, Version).
```

The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer `version_satisfies/3` from `version/2`.
This form is also safe for grounding -- if we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.

- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed

* Fix comparisons for abstract specs (#20341)

bug only relevant for python3

* unit-tests: ensure that installed packages can be reused (#20307)

refers #20292

Added a unit test that ensures we can reuse installed
packages even if variants have been removed from or
added to the repository.

* ci: fixes for compiler bootstrapping (#17563)

This PR addresses a number of issues related to compiler bootstrapping.

Specifically:
1. Collect compilers to be bootstrapped while queueing in installer
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install as they think they do not have all their
dependencies installed. This PR collects the dependents and sets them on
compiler tasks.

2. Allow bootstrapped compilers to back off target
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec.  Allow them to build with less
aggressive target optimization settings.

3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly.  Notably, this adds an intersection method for target
ranges and fixes the way ranges are satisfied and constrained on Spec objects.

This PR also:
- adds testing
- improves concretizer handling of target ranges

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* asp: memoize the list of all target_specs to speed-up setup phase (#20473)

* asp: memoize the list of all target_specs to speed-up setup phase

* asp: memoize using a cache per solver object

* concretizer: add #defined statements to avoid warnings.

`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives to avoid "info: atom does not
occur in any rule head:" warnings.

* concretizer: pull _develop_specs_from_env out of main setup loop

* concretizer: spec_clauses should traverse dependencies

There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` with `spec_clauses()`.

* concretizer: move conditional dependency logic into `concretize.lp`

Continuing to convert everything in `asp.py` into facts, make the
generation of ground rules for conditional dependencies use facts, and
move the semantics into `concretize.lp`.

This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.

The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.

This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.

* concretizer: avoid redundant grounding on dependency types

* concretizer: emit facts for constraints on imposed dependencies

* concretizer: emit facts for integrity constraints

* concretizer: fix failing unit tests

* concretizer: optimized loop on node platforms

We can speed-up the computation by avoiding a
double loop in a cardinality constraint and
enforcing the rule instead as an integrity
constraint.

* concretizer: optimize loop on compiler version

Similar to the optimization on platform

* concretizer: refactor conditional rules to be less repetitious (#20507)

We have to repeat all the spec attributes in a number of places in
`concretize.lp`, and Spack has a fair number of spec attributes. If we
instead add some rules up front that establish equivalencies like this:

```
    node(Package) :- attr("node", Package).
    attr("node", Package) :- node(Package).

    version(Package, Version) :- attr("version", Package, Version).
    attr("version", Package, Version) :- version(Package, Version).
```

We can rewrite most of the repetitive conditions with `attr` and repeat
only for each arity (there are only 3 arities for spec attributes so far)
as opposed to each spec attribute. This makes the logic easier to read
and the rules easier to follow.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* Add Intel oneAPI packages (#20411)

This creates a set of packages which all use the same script to install
components of Intel oneAPI. This includes:

* An inheritable IntelOneApiPackage which knows how to invoke the
  installation script based on which components are requested
* For components which include headers/libraries, an inheritable
  IntelOneApiLibraryPackage is provided to locate them
* Individual packages for DAL, DNN, TBB, etc.
* A package for the Intel oneAPI compilers (icx/ifx). This also includes
  icc/ifortran but these are not currently detected in this PR

* bugfix: do not write empty default dicts/lists in envs (#20526)

Environment yaml files should not have default values written to them.

To accomplish this, we change the validator to not add the default values to yaml. We rely on the code to set defaults for all values (and use defaulting getters like dict.get(key, default)).

Includes regression test.
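
The defaulting-getter pattern mentioned above, in a minimal form (`"view"` is just an example key, not necessarily one the validator touches):

```python
# Instead of writing defaults back into spack.yaml, leave the file untouched
# and fall back to the default at read time:
env_config = {}                      # contents parsed from spack.yaml
view = env_config.get("view", True)  # default applied only in code
```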

* concretizer: generate facts for externals

Generate only facts for external specs. Substitute the
use of already grounded rules with non-grounded rules
in concretize.lp

* bugfix: infinite loop when building a set from incomplete specs (#20649)

This code in `SpecBuilder.build_specs()`, introduced in #20203, can loop
seemingly interminably for very large specs:

```python
set([spec.root for spec in self._specs.values()])
```

It's deceptive, because it seems like there must be an issue with
`spec.root`, but that works fine. It's building the set afterwards that
takes forever, at least on `r-rminer`. Currently if you try running
`spack solve r-rminer`, it loops infinitely and spins up your fan.

The issue (I think) is that the spec is not yet complete when this is
run, and something is going wrong when constructing and comparing so many
values produced by `_cmp_key()`. We can investigate the efficiency of
`_cmp_key()` separately, but for now, the fix is:

```python
roots = [spec.root for spec in self._specs.values()]
roots = dict((id(r), r) for r in roots)
```

We know the specs in `self._specs` are distinct (they just came out of
the solver), so we can just use their `id()` to unique them here. This
gets rid of the infinite loop.

* concretizer: more detailed section headers in concretize.lp

* concretizer: make _condition_id_counter an iterator

* concretizer: consolidate handling of virtuals into spec_clauses

* concretizer: convert virtuals to facts; move all rules to `concretize.lp`

This converts the virtual handling in the new concretizer from
already-ground rules to facts. This is the last thing that needs to be
refactored, and it converts the entire concretizer to just use facts.

The previous way of handling virtuals hinged on rules involving
`single_provider_for` facts that were tied to the virtual and a version
range. The new method uses the condition pattern we've been using for
dependencies, externals, and conflicts.

To handle virtuals as conditions, we impose constraints on "fake" virtual
specs in the logic program. i.e., `version_satisfies("mpi", "2.0:",
"2.0")` is legal whereas before we wouldn't have seen something like
this. Currently, constraints are only handled on versions -- we don't
handle variants or anything else yet, but the key change here is that we
*could*. For a long time, virtual handling in Spack has only dealt with
versions, and we'd like to be able to handle variants as well. We could
easily add an integrity constraint to handle variants like the one we use
for versions.

One issue with the implementation here is that virtual packages don't
actually declare possible versions like regular packages do. To get
around that, we implement an integrity constraint like this:

    :- virtual_node(Virtual),
       version_satisfies(Virtual, V1), version_satisfies(Virtual, V2),
       not version_constraint_satisfies(Virtual, V1, V2).

This requires us to compare every version constraint to every other, both
in program generation and within the concretizer -- so there's a
potentially quadratic evaluation time on virtual constraints because we
don't have a real version to "anchor" things to. We just say that all the
constraints need to agree for the virtual constraint to hold.

We can investigate adding synthetic versions for virtuals in the future,
to speed this up.

* concretizer: remove rule generation code from concretizer

Our program only generates facts now, so remove all unused code related
to generating cardinality constraints and rules.

* concretizer: simplify handling of virtual version constraints

Previously, the concretizer handled version constraints by comparing all
pairs of constraints and ensuring they satisfied each other. This led to
INCONSISTENT results from clingo, due to ambiguous semantics like:

    version_constraint_satisfies("mpi", ":1", ":3")
    version_constraint_satisfies("mpi", ":3", ":1")

To get around this, we introduce possible (fake) versions for virtuals,
based on their constraints. Essentially, we add any Versions,
VersionRange endpoints, and all such Versions and endpoints from
VersionLists to the constraint. Virtuals will have one of these synthetic
versions "picked" by the solver. This also allows us to remove a special
case from handling of `version_satisfies/3` -- virtuals now work just
like regular packages.
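
A rough sketch of collecting those synthetic versions (the helper is hypothetical, and the `start`/`end` attribute names on ranges are assumptions):

```python
def synthetic_versions(version_constraints):
    # Gather plain Versions plus the endpoints of any ranges, so the solver
    # has a finite set of versions it can "pick" for a virtual.
    versions = set()
    for v in version_constraints:
        start = getattr(v, "start", None)
        end = getattr(v, "end", None)
        if start is None and end is None:
            versions.add(v)  # a concrete Version
        else:
            versions.update(x for x in (start, end) if x is not None)
    return versions
```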

* concretizer: use consistent naming for compiler predicates (#20677)

Every other predicate in the concretizer uses a `_set` suffix to
implement user- or package-supplied settings, but compiler settings use a
`_hard` suffix for this. There's no difference in how they're used, so
make the names the same.

- [x] change `node_compiler_hard` to `node_compiler_set`
- [x] change `node_compiler_version_hard` to `node_compiler_version_set`

* concretizer: make rules on virtual packages more linear

fixes #20679

In this refactor we have a single cardinality rule on the
provider, which triggers a rule transforming a dependency
on a virtual package into a dependency on the provider of
the virtual.

* Remove hard-coded standard C++ library selection and add more releases in llvm package (#19933)

* Restore OS based Clang default choice of C++ standard library.

* Add LLVM 11.0.1 release

* fix mpi lib paths, add virtual provides (#20693)

* intel-oneapi-compilers package: correct module file (#20686)

This properly sets PATH/CPATH/LIBRARY_PATH etc. to make the
Spack-generated module file for intel-oneapi-compilers useful
(without this, 'icx' would not be found after loading the module
file for intel-oneapi-compilers).
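
For illustration, this is roughly what such a hook looks like in a Spack package (a sketch rather than the exact diff; the specific subdirectories under the oneAPI prefix are assumptions):

```python
def setup_run_environment(self, env):
    # Make icx/icpx and their runtime libraries visible when the
    # generated module file is loaded.
    env.prepend_path("PATH", self.prefix.bin)
    env.prepend_path("CPATH", self.prefix.include)
    env.prepend_path("LIBRARY_PATH", self.prefix.lib)
    env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib)
```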

* intel-oneapi-mpi: virtual provider support (#20732)

Set up environment and dependent packages properly when building
with intel-oneapi-mpi as a dependency MPI provider (e.g. point to
mpicc compiler wrapper).

* restore ability of dev-build to skip patches (#20351)

At some point in the past, the skip_patch argument was removed
from the call to package.do_install(); this broke the --skip-patch
flag on the dev-build command.

* libyogrt: remove conflicts triggered by an invalid value (#20794)

fixes #20611

The conflict was triggered by an invalid value of the
'scheduler' variant. This causes Spack to error when libyogrt
facts are validated by the ASP-based concretizer.

* concretizer: dependency conditions cannot hold if package is external

fixes #20736

Before this one line fix we were erroneously deducing
that dependency conditions hold even if a package
was external.

This may result in answer sets that contain imposed
conditions on a node without the node being present
in the DAG, hence #20736.

* concretizer: require at least a dependency type to say the dependency holds

fixes #20784

Similarly to the previous bug, here we were deducing
conditions to be imposed on nodes that were not part
of the DAG.

* py-horovod: fix typo on variant name in conflicts directive (#20906)

* [WIP] relocate.py: parallelize test replacement logic (#19690)

* sbang pushed back to callers;
star moved to util.lang

* updated unit test

* sbang test moved; local tests pass

Co-authored-by: Nathan Hanford <hanford1@llnl.gov>

* store sbang_install_path in buildinfo, use for subsequent relocation (#20768)

* Print groups properly for spack find -d (#20028)

* llvm: "master" branch is now "main" branch (#21411)

* add intel oneapi to compiler/pkg translations (#21448)

* adding environment to OneMKL packages so that examples will build (#21377)

* intel-oneapi-compilers: add  to LD_LIBRARY_PATH so that it finds libimf.so (#20717)

* add  to LD_LIBRARY_PATH so that it finds libimf.so

* amrex: fix handling of CUDA arch (#20786)

* amrex: fix handling of CUDA arch
* amrex: fix style
* amrex: fix bug
* Update var/spack/repos/builtin/packages/amrex/package.py
* Update var/spack/repos/builtin/packages/amrex/package.py

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>

* ecp-data-vis-sdk: Combine the vis and io SDK packages (#20737)

This better enables the collective set to be deployed together,
satisfying each other's dependencies.

* r-sf: fix dependency error (#20898)

* improve documentation for Rocm (hip amd builds) (#20812)

* improve documentation

* astyle: Fix makefile for install parameter (#20899)

* llvm-doe: added new package (#20719)

The package contains duplicated code from llvm/package.py,
which it will eventually supersede.

* r-e1071: added v1.7-4 (#20891)

* r-diffusionmap: added v1.2.0 (#20881)

* r-covr: added v3.5.1 (#20868)

* r-class: added v7.3-17 (#20856)

* py-h5py: HDF5_DIR is needed for ~mpi too (#20905)

For the `~mpi` variant, the environment variable `HDF5_DIR` is still required.  I moved this command out of the `+mpi` conditional.

* py-horovod: fix typo on variant name in conflicts directive (#20906)

* fujitsu-fftw: Add new package (#20824)

* pocl: added v1.6 (#20932)

Made version 1.5 or lower conflict with a64fx.

* PCL: add new package (#20933)

* r-rle: new package (#20916)

Common 'base' and 'stats' methods for 'rle' objects, aiming to make it
possible to treat them transparently as vectors.

* r-ellipsis: added v0.3.1 (#20913)

* libconfig: add build dependency on texinfo (#20930)

* r-flexmix: add v2.3-17 (#20924)

* r-fitdistrplus: add v1.1-3 (#20923)

* r-fit-models: add v0.64 (#20922)

* r-fields: add v11.6 (#20921)

* r-fftwtools: add v0.9-9 (#20920)

* r-farver: add v2.0.3 (#20919)

* r-expm: add v0.999-6 (#20918)

* cln: add build dependency on texinfo (#20928)

* r-expint: add v0.1-6 (#20917)

* r-envstats: add v2.4.0 (#20915)

* r-energy: add v1.7-7 (#20914)

* r-ellipse: add v0.4.2 (#20912)

* py-fiscalyear: add v0.3.0 (#20911)

* r-ecp: add v3.1.3 (#20910)

* r-plotmo: add v3.6.0 (#20909)

* Improve gcc detection in llvm. (#20189)

Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>

* hatchet: updated urls (#20908)

* py-anuga: add new package (#20782)

* libvips: added v8.10.5 (#20902)

* libzmq: add platform conditions to libbsd dependency (#20893)

* r-dtw: add v1.22-3 (#20890)

* r-dt: add v0.17 (#20889)

* r-dosnow: add v1.0.19 (#20888)

* add version 1.0.16 to r-doparallel (#20886)

* add version 1.3.7 to r-domc (#20885)

* add version 0.9-15 to r-diversitree (#20884)

* add version 1.3-3 to r-dismo (#20883)

* add version 0.6.27 to r-digest (#20882)

* add version 1.5 to r-rngtools (#20887)

* add version 1.5.8 to r-dicekriging (#20877)

* add version 1.4.2 to r-httr (#20876)

* add version 1.28 to r-desolve (#20875)

* add version 2.2-5 to r-deoptim (#20874)

* add version 0.2-3 to r-deldir (#20873)

* add version 1.0.0 to r-crul (#20870)

* add version 1.1.0.1 to r-crosstalk (#20869)

* add version 1.0-1 to r-copula (#20867)

* add version 5.0.2 to r-rcppparallel (#20866)

* add version 2.0-1 to r-compositions (#20865)

* add version 0.4.10 to r-rlang (#20796)

* add version 0.3.6 to r-vctrs (#20878)

* amrex: add ROCm support (#20809)

* add version 2.0-0 to r-colorspace (#20864)

* add version 1.3-1 to r-coin (#20863)

* add version 0.19-4 to r-coda (#20862)

* add version 1.3.7 to r-clustergeneration (#20861)

* add version 0.3-58 to r-clue (#20860)

* add version 0.7.1 to r-clipr (#20859)

* add version 2.2.0 to r-cli (#20858)

* add version 0.4-3 to r-classint (#20857)

* add version 0.1.2 to r-globaloptions (#20855)

* add version 2.3-56 to r-chron (#20854)

* add version 0.4.10 to r-checkpoint (#20853)

* add version 2.0.0 to r-checkmate (#20852)

* add version 1.18.1 to r-catools (#20850)

* add version 1.2.2.2 to r-modelmetrics (#20849)

* add version 3.0-4 to r-cardata (#20847)

* add version 1.0.1 to r-caracas (#20846)

* r-lifecycle: new package at v0.2.0 (#20845)

* add version 3.0-10 to r-car (#20844)

* add version 3.4.5 to r-processx (#20843)

* add version 1.5-12.2 to r-cairo (#20842)

* add version 0.2.3 to r-cubist (#20841)

* add version 2.6 to r-rmarkdown (#20838)

* add version 1.2.1 to r-blob (#20819)

* add version 4.0.4 to r-bit (#20818)

* add version 2.4-1 to r-bio3d (#20816)

* add version 0.4.2.3 to r-bibtex (#20815)

* add version 3.1-4 to r-bayesm (#20807)

* add version 1.2.1 to r-backports (#20806)

* add version 2.0.3 to r-argparse (#20805)

* add version 5.4-1 to r-ape (#20804)

* add version 0.8-18 to r-amap (#20803)

* r-pixmap: added new package (#20795)

* zoltan: source code location change (#20787)

* refactor path logic

* added some paths to make compilers and libs discoverable

* add  to LD_LIBRARY_PATH so that it finds libimf.so
and cleanup PEP8

* refactor path logic

* adding paths to LIBRARY_PATH so compiler wrappers will find -lmpi

* added vals for CC=icx, CXX=icpx, FC=ifx to generated module

* back out changes to intel-oneapi-mpi, save for separate PR

* Update var/spack/repos/builtin/packages/intel-oneapi-compilers/package.py

path is joined in _ld_library_path()

Co-authored-by: Robert Cohn <rscohn2@gmail.com>

* set absolute paths to icx,icpx,ifx

* dang close parenthesis

Co-authored-by: Robert Cohn <rscohn2@gmail.com>
Co-authored-by: mic84 <mrosso@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
Co-authored-by: darmac <xiaojun2@hisilicon.com>
Co-authored-by: Danny Taller <66029857+dtaller@users.noreply.github.com>
Co-authored-by: Tomoyasu Nojiri <68096132+t-nojiri@users.noreply.github.com>
Co-authored-by: Shintaro Iwasaki <siwasaki@anl.gov>
Co-authored-by: Glenn Johnson <glenn-johnson@uiowa.edu>
Co-authored-by: Kelly (KT) Thompson <KineticTheory@users.noreply.github.com>
Co-authored-by: Henrique Mendonça <henrique@users.noreply.github.com>
Co-authored-by: h-denpo <57649496+h-denpo@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Thomas Green <tomgreen66@hotmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
Co-authored-by: Abhinav Bhatele <bhatele@cs.umd.edu>
Co-authored-by: a-saitoh-fj <63334055+a-saitoh-fj@users.noreply.github.com>
Co-authored-by: QuellynSnead <quellyn@lanl.gov>

* intel-oneapi-compilers/mpi: add module support (#20808)

Facilitate running intel-oneapi-mpi outside of Spack (set PATH,
LD_LIBRARY_PATH, etc. appropriately).

Co-authored-by: Robert Cohn <rscohn2@gmail.com>

* apple-clang: add correct path to compiler wrappers (#21662)

Follow-up to #17110

### Before
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/apple-clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```

### After
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```

`CC` and `SPACK_CC` were being set correctly, but `PATH` was using the name of the compiler `apple-clang` instead of `clang`. For most packages, since `CC` was set correctly, nothing broke. But for packages using `Makefiles` that set `CC` based on `which clang`, it was using the system compilers instead of the compiler wrappers. Discovered when working on `py-xgboost@0.90`.

An alternative fix would be to copy the symlinks in `env/clang` to `env/apple-clang`. Let me know if you think there's a better way to do this, or to test this.

* Resolve (post-cherry-picking) flake8 errors

* Update CHANGELOG and release version

* updates for new tutorial

update s3 bucket
update tutorial branch

* update tutorial public key

* respect -k/verify-ssl-false in _existing_url method (#21864)

* use package supplied autogen.sh (#20319)

* Python 3.10 support: collections.abc (#20441)

(cherry picked from commit 40a40e0265d6704a7836aeb30a776d66da8f7fe6)

* concretizer: simplify "fact" method (#21148)

The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.

Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.

(cherry picked from commit ba42c36f00fe40c047121a32117018eb93e0c4b1)

* Improve error message for inconsistencies in package.py (#21811)

* Improve error message for inconsistencies in package.py

Sometimes directives refer to variants that do not exist.
Make it such that:

1. The name of the variant
2. The name of the package which is supposed to have
   such variant
3. The name of the package making this assumption

are all printed in the error message for easier debugging.

* Add unit tests

(cherry picked from commit 7226bd64dc3b46a1ed361f1e9d7fb4a2a5b65200)

* Updates to support clingo-cffi (#20657)

* Support clingo when used with cffi

Clingo recently merged in a new Python module option based on cffi.

Compatibility with this module requires a few changes to spack - it does not automatically convert strings/ints/etc. to Symbol, and clingo.Symbol.string throws on failure.

- manually convert str/int to clingo.Symbol types
- catch stringify exceptions
- add job for clingo-cffi to Spack CI
- switch to potassco-vendored wheel for clingo-cffi CI
- on_unsat argument when cffi

(cherry picked from commit 93ed1a410c4a202eab3a68769fd8c0d4ff8b1c8e)

* Run clingo-cffi tests in a container (#21913)

The clingo-cffi job has two issues to be solved:

1. It uses the default concretizer
2. It requires a package from https://test.pypi.org/simple/

The former can be fixed by setting the SPACK_TEST_SOLVER
environment variable to "clingo".

The latter though requires clingo-cffi to be pushed to a
more stable package index (since https://test.pypi.org/simple/
is meant as a scratch version of PyPI that can be wiped at
any time).

For the time being run the tests in a container. Switch back to
PyPI whenever a new official version of clingo is released.

* repo: generalize "swap" context manager to also accept paths

The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.

Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.

Make a few adjustments to MockPackageMultiRepo, since it was
stating in the docstring that it was supposed to mock
spack.repo.Repo and was instead mocking spack.repo.RepoPath.

(cherry picked from commit 1a8963b0f4c11c4b7ddd347e6cd95cdc68ddcbe0)

* Move context manager to swap the current store into spack.store

The context manager can be used to swap the current
store temporarily, for any use case that may need it.

(cherry picked from commit cb2c233a97073f8c5d89581ee2a2401fef5f878d)

* Move context manager to swap the current configuration into spack.config

The context manager can be used to swap the current
configuration temporarily, for any use case that may need it.

(cherry picked from commit 553d37a6d62b05f15986a702394f67486fa44e0e)

* bugfix for target adjustments on target ranges (#20537)

(cherry picked from commit 61c1b71d38e62a5af81b3b7b8a8d12b954d99f0a)

* Added a context manager to swap architectures

This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.

This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated

(cherry picked from commit 4558dc06e21e01ab07a43737b8cb99d1d69abb5d)

* make `spack fetch` work with environments (#19166)

* make `spack fetch` work with environments
* previously: `spack fetch` required the explicit statement of
              the specs to be fetched, even when in an environment
* now: if there is no spec(s) provided to `spack fetch` we check
       if an environment is active and if yes we fetch all
       uninstalled specs.
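
A sketch of the described fallback (command wiring and the environment lookup are illustrative, not the exact patch):

```python
import llnl.util.tty as tty
import spack.cmd
import spack.environment

def fetch(parser, args):
    specs = spack.cmd.parse_specs(args.specs, concretize=True)
    if not specs:
        env = spack.environment.get_env(args, "fetch")  # assumed lookup
        if env is None:
            tty.die("no specs given and no active environment to fetch for")
        specs = [s for s in env.specs_by_hash.values()
                 if not s.package.installed]
    for spec in specs:
        spec.package.do_fetch()
```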

* clingo: prefer master branch

Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.

- [x] make `master` the default so we don't have to keep telling people
  to install `clingo@master`. We'll update the preferred version when
  there's a new release.

* Clingo: fix missing import (#21364)

* clingo: added a package with option for bootstrapping clingo (#20652)

* clingo/clingo-bootstrap: added a package with option for bootstrapping clingo

package builds in Release mode
uses GCC options to link libstdc++ and libgcc statically

* clingo-bootstrap: apple-clang options to bootstrap statically on darwin

* clingo: fix the path of the Python interpreter

In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.

This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake.

* clingo: the commit for "spack" version has been updated.

* clingo: fix typo (#22444)

* clingo-bootstrap: account for cray platform (#22460)

(cherry picked from commit 138312efabd534fa42d1a16e172e859f0d2b5842)

* Bootstrap clingo from sources (#21446)

* Allow the bootstrapping of clingo from sources

Allow python builds with system python as external
for MacOS

* Ensure consistent configuration when bootstrapping clingo

This commit uses context managers to ensure we can
bootstrap clingo using a consistent configuration
regardless of the use case being managed.

* Github actions: test clingo with bootstrapping from sources

* Add command to inspect and clean the bootstrap store

 Prevent users from setting the install tree root to the bootstrap store

* clingo: documented how to bootstrap from sources

Co-authored-by: Gregory Becker <becker33@llnl.gov>
(cherry picked from commit 10e9e142b75c6ca8bc61f688260c002201cc1b22)

* bootstrap: account for platform specific configuration scopes (#22489)

This change accounts for platform specific configuration scopes,
like ~/.spack/linux, during bootstrapping. These scopes were
previously not accounted for and that was causing issues e.g.
when searching for compilers.

(cherry picked from commit 413c422e53843a9e33d7b77a8c44dcfd4bf701be)

* concretizer: unify logic for spec conditionals

This builds on #20638 by unifying all the places in the concretizer where
things are conditional on specs. Previously, we duplicated a common spec
conditional pattern for dependencies, virtual providers, conflicts, and
externals. That was introduced in #20423 and refined in #20507, and
roughly looked as follows.

Given some directives in a package like:

```python
depends_on("foo@1.0+bar", when="@2.0+variant")
provides("mpi@2:", when="@1.9:")
```

We handled the `@2.0+variant` and `@1.9:` parts by generating
`dependency_condition()`, `required_dependency_condition()`, and
`imposed_dependency_condition()` facts to trigger rules like this:

```prolog
dependency_conditions_hold(ID, Parent, Dependency) :-
  attr(Name, Arg1)             : required_dependency_condition(ID, Name, Arg1);
  attr(Name, Arg1, Arg2)       : required_dependency_condition(ID, Name, Arg1, Arg2);
  attr(Name, Arg1, Arg2, Arg3) : required_dependency_condition(ID, Name, Arg1, Arg2, Arg3);
  dependency_condition(ID, Parent, Dependency);
  node(Parent).
```

And we handled `foo@1.0+bar` and `mpi@2:` parts ("imposed constraints")
like this:

```prolog
attr(Name, Arg1, Arg2) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2).

attr(Name, Arg1, Arg2, Arg3) :-
  dependency_conditions_hold(ID, Package, Dependency),
  imposed_dependency_condition(ID, Name, Arg1, Arg2, Arg3).
```

These rules were repeated with different input predicates for
requirements (e.g., `required_dependency_condition`) and imposed
constraints (e.g., `imposed_dependency_condition`) throughout
`concretize.lp`. In #20638 it got to be a bit confusing, because we used
the same `dependency_condition_holds` predicate to impose constraints on
conditional dependencies and virtual providers. So, even though the
pattern was repeated, some of the conditional rules were conjoined in a
weird way.

Instead of repeating this pattern everywhere, we now have *one* set of
consolidated rules for conditions:

```prolog
condition_holds(ID) :-
  condition(ID);
  attr(Name, A1)         : condition_requirement(ID, Name, A1);
  attr(Name, A1, A2)     : condition_requirement(ID, Name, A1, A2);
  attr(Name, A1, A2, A3) : condition_requirement(ID, Name, A1, A2, A3).

attr(Name, A1)         :- condition_holds(ID), imposed_constraint(ID, Name, A1).
attr(Name, A1, A2)     :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2).
attr(Name, A1, A2, A3) :- condition_holds(ID), imposed_constraint(ID, Name, A1, A2, A3).
```

this allows us to use `condition(ID)` and `condition_holds(ID)` to
encapsulate the conditional logic on specs in all the scenarios where we
need it. Instead of defining predicates for the requirements and imposed
constraints, we generate the condition inputs with generic facts, and
define predicates to associate the condition ID with a particular
scenario. So, now, the generated facts for a condition look like this:

```prolog
condition(121).
condition_requirement(121,"node","cairo").
condition_requirement(121,"variant_value","cairo","fc","True").
imposed_constraint(121,"version_satisfies","fontconfig","2.10.91:").
dependency_condition(121,"cairo","fontconfig").
dependency_type(121,"build").
dependency_type(121,"link").
```

The requirements and imposed constraints are generic, and we associate
them with their meaning via the id. Here, `dependency_condition(121,
"cairo", "fontconfig")` tells us that condition 121 has to do with the
dependency of `cairo` on `fontconfig`, and the conditional dependency
rules just become:

```prolog
dependency_holds(Package, Dependency, Type) :-
  dependency_condition(ID, Package, Dependency),
  dependency_type(ID, Type),
  condition_holds(ID).
```

Dependencies, virtuals, conflicts, and externals all now use similar
patterns, and the logic for generating condition facts is common to all
of them on the python side, as well. The more specific routines like
`package_dependencies_rules` just call `self.condition(...)` to get an id
and generate requirements and imposed constraints, then they generate
their extra facts with the returned id, like this:

```python
    def package_dependencies_rules(self, pkg, tests):
        """Translate 'depends_on' directives into ASP logic."""
        for _, conditions in sorted(pkg.dependencies.items()):
            for cond, dep in sorted(conditions.items()):
                condition_id = self.condition(cond, dep.spec, pkg.name)  # create a condition and get its id
                self.gen.fact(fn.dependency_condition(  # associate specifics about the dependency w/the id
                    condition_id, pkg.name, dep.spec.name
                ))
        # etc.
```

- [x] unify generation and logic for conditions
- [x] use unified logic for dependencies
- [x] use unified logic for virtuals
- [x] use unified logic for conflicts
- [x] use unified logic for externals

* bugfix: do not generate dep conditions when no dependency

We only consider test dependencies some of the time. Some packages are
*only* test dependencies. Spack's algorithm was previously generating
dependency conditions that could hold, *even* if there was no potential
dependency type.

- [x] change asp.py so that this can't happen -- we now only generate
      dependency types for possible dependencies.

* bugfix: allow imposed constraints to be overridden in special cases

In most cases, we want condition_holds(ID) to imply any imposed
constraints associated with the ID. However, the dependency relationship
in Spack is special because it's "extra" conditional -- a dependency
*condition* may hold, but we have decided that externals will not have
dependencies, so we need a way to avoid having imposed constraints appear
for nodes that don't exist.

This introduces a new rule that says that constraints are imposed
*unless* we define `do_not_impose(ID)`. This allows rules like
dependencies, which rely on more than just spec conditions, to cancel
imposed constraints.

We add one special case for this: dependencies of externals.

* spack location: bugfix for out of source build dirs (#22348)

* Channelflow: Fix the package. (#22483)

A search and replace went wrong in 2264e30d99d8b9fbdec8fa69b594e53d8ced15a1.

Thanks to @wadudmiah who reported this issue.

* Make SingleFileScope able to repopulate the cache after clearing it (#22559)

fixes #22547

SingleFileScope was not able to repopulate its cache before this
change. This was affecting the configuration seen by environments
using clingo bootstrapped from sources, since the bootstrapping
operation involved a few cache invalidations for config files.

* ASP-based solver: model disjoint sets for multivalued variants (#22534)

* ASP-based solver: avoid adding values to variants when they're set

fixes #22533
fixes #21911

Added a rule that prevents any value from slipping into a variant when the
variant is set explicitly. This is relevant for multi-valued variants,
in particular for those that have disjoint sets of values.

* Ensure disjoint sets have a clear semantics for external packages

* clingo: modify recipe for bootstrapping (#22354)

* clingo: modify recipe for bootstrapping

Modifications:
- clingo builds with shared Python only if ^python+shared
- avoid building the clingo app for bootstrapping
- don't link to libpython when bootstrapping

* Remove option that breaks on linux

* Give more hints for the current Python

* Disable CLINGO_BUILD_PY_SHARED for bootstrapping

* bootstrapping: try to detect the current python from std library

This is much faster than calling external executables

* Fix compatibility with Python 2.6

* Give hints on which compiler and OS to use when bootstrapping

This change hints which compiler to use for bootstrapping clingo
(either GCC or Apple Clang on MacOS). On Cray platforms it also
hints to build for the frontend system, where software is meant
to be installed.

* Use spec_for_current_python to constrain module requirement

(cherry picked from commit d5fa509b072f0e58f00eaf81c60f32958a9f1e1d)

* Externals are preferred even when they have non-default variant values

fixes #22596

Variants which are specified in an external spec are not
scored negatively if they encode a non-default value.

* Enforce uniqueness of the version_weight atom per node

fixes #22565

This change enforces the uniqueness of the version_weight
atom per node(Package) in the DAG. It does so by applying
FTSE and adding an extra layer of indirection with the
possible_version_weight/2 atom.

Before this change it may have happened that for the same
node two different version_weight/2 were in the answer set,
each of which referred to a different spec with the same
version, and their weights would sum up.

This led to unexpected results like preferring to build a
new version of an external if the external version was
older.

* bugfix for active when pkg is already active error (#22587)

* bugfix for active when pkg is already active error

Co-authored-by: Greg Becker <becker33@llnl.gov>

* Fix clearing cache of InternalConfigScope (#22609)

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* Bootstrap: add _builtin config scope (#22610)

(cherry picked from commit a37c916dff5a5c6e72f939433931ab69dfd731bd)

* Bootstrapping: swap store before configuration (#22631)

fixes #22294

A combination of the swapping order for global variables and
the fact that most of them are lazily evaluated resulted in
the custom install tree not being taken into account if clingo
had to be bootstrapped.

This commit fixes that particular issue, but a broader refactor
may be needed to ensure that similar situations won't affect us
in the future.

* Remove erroneous warnings about quotes for from_source_file (#22767)

* "spack build-env" searches env for relevant spec (#21642)

If you install packages using spack install in an environment with
complex spec constraints, and the install fails, you may want to
test out the build using spack build-env; one issue (particularly
if you use concretize: together) is that it may be hard to pass
the appropriate spec that matches what the environment is
attempting to install.

This updates the build-env command to default to pulling a matching
spec from the environment rather than concretizing what the user
provides on the command line independently.

This makes a similar change to spack cd.

If the user-provided spec matches multiple specs in the environment,
then these commands will now report an error and display all
matching specs (to help the user specify).

Co-authored-by: Gregory Becker <becker33@llnl.gov>

* ASP-based solver: assign OS correctly with inheritance from parent (#22896)

fixes #22871

When in presence of multiple choices for the operating system
we were lacking a rule to derive the node OS if it was
inherited.

* Externals with merged prefixes (#22653)

We remove system paths from search variables like PATH and 
from -L options because they may contain many packages and
could interfere with Spack-built packages. External packages 
may be installed to prefixes that are not actually system paths 
but are still "merged" in the sense that many other packages are
installed there. To avoid conflicts, this PR places all external
packages at the end of search paths.

* ASP-based solver: suppress warnings when constructing facts (#23090)

fixes #22786

Trying to get optimization flags for a specific target from
a compiler may trigger warnings. In the context of constructing
facts for the ASP-based solver we don't want to show these
warnings to the user, so here we simply ignore them.

* Use Python's built-in machinery to import compilers (#23290)

* Add "spack [cd|location] --source-dir" (#22321)

* spack location: fix usage without args (#22755)

* Import hooks using Python's built-in machinery (#23288)

The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, it is unnecessary to use such functions, since
we don't care to bind a custom name to a module, nor do we have to load
it from an unknown location.

This PR thus modifies spack.hook in the following ways:

- Use __import__ instead of spack.util.imp.load_source (this
  addresses #20005)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Mark with a leading underscore all the names that are supposed
  to stay local

* ASP-based solver: no intermediate package for concretizing together (#23307)

The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).

Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.

* ASP-based solve: minimize compiler mismatches (#23016)

fixes #22718

Instead of trying to maximize the number of
matches (preferred behavior), try to minimize
the number of mismatches (unwanted behavior).

* performance: speed up existence checks in packages (#23661)

Spack doesn't require users to manually index their repos; it reindexes the indexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that indexes are up to date with packages. We currently index virtual providers, patches by sha256, and tags on packages.

When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see #7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`).  That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations.  It's a win now to make `Repo.exists()` check files directly.

**Fix:**

This PR does a number of things to speed up `spack load`, `spack info`, and other commands:

- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
      (avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
      a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
      exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
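
A minimal sketch of the direct check (the repository layout shown is
illustrative):

```
import os

def package_exists(repo_root, pkg_name):
    # One stat() on the expected package.py path, instead of walking
    # every package in the repository to refresh an index.
    pkg_file = os.path.join(repo_root, "packages", pkg_name, "package.py")
    return os.path.exists(pkg_file)
```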

* Style fixes for v0.16.2 release

* Update CHANGELOG and release version for v0.16.2

* Update command to set up tutorial (#24488)

* Fix fetching for Python 3.9.6 (#24686)

When using Python 3.9.6, Spack is no longer able to fetch anything. Commands like `spack fetch` and `spack install` all break.

Python 3.9.6 includes a [new change](https://github.com/python/cpython/pull/25853/files#diff-b3712475a413ec972134c0260c8f1eb1deefb66184f740ef00c37b4487ef873eR462) that means that `scheme` must be a string; it cannot be None. The solution is to use an empty string, like the method's default.
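
A minimal demonstration of the breaking behavior (standalone, not Spack's
actual call site):

```
from urllib.parse import urlparse

# Fine on every version: '' is the documented default for scheme.
urlparse("mirror.spack.io/releases", scheme="")

# On Python 3.9.6+ this raises, because the new sanitizing code calls
# scheme.replace(...) and None has no such method:
# urlparse("mirror.spack.io/releases", scheme=None)
```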

Fixes #24644. Also see https://github.com/Homebrew/homebrew-core/pull/80175 where this issue was discovered by CI. Thanks @branchvincent for reporting such a serious issue before any actual users encountered it!

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>

* clang/llvm: fix version detection (#19978)

This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:

```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```

This caused clang's version to be misdetected as:

```
clang@11.0.0
Target:
```

This resulted in errors when trying to actually use it as a compiler.

When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:

```
==> Warning: "llvm@11.0.0+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for llvm@11.0.0+clang+lld+lldb]
```

Changing the regex to only match until the end of the line fixes these
problems.
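
Illustrative of the bug class (not necessarily Spack's exact patterns):

```
import re

output = "clang version 11.0.0\nTarget: x86_64-unknown-linux-gnu\n"

# A character class that forgets to exclude newlines runs onto the
# next line:
re.search(r"clang version ([^ )]+)", output).group(1)
# -> '11.0.0\nTarget:'

# Excluding the newline stops the match at the end of the line:
re.search(r"clang version ([^ )\n]+)", output).group(1)
# -> '11.0.0'
```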

Fixes: #19473

* Fix use of quotes in Python build system (#22279)

* Cray: fix extracting paths from module files (#23472)

Co-authored-by: Tiziano Müller <tm@dev-zero.ch>

* Use AWS CloudFront for source mirror (#23978)

Spack's source mirror was previously in a plain old S3 bucket. That will still
work, but we can do better. This switches to AWS's CloudFront CDN for hosting
the mirror.

CloudFront is 16x faster (or more) than the old bucket.

- [x] change mirror to https://mirror.spack.io

* locks: only open lockfiles once instead of for every lock held (#24794)

This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.

The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.

Because of this, we need to track open lock files so that we only close them when
a process no longer needs them.  We do this by tracking each lockfile by its
inode and process id.  This has several nice properties:

1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
   process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
   just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
   inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
   number of times necessary for the locks we have.

Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.

Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
      needed (this avoids inadvertently releasing locks that should not be released).
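
A simplified sketch of the reference-counting idea (the real code also handles
permissions, races, and missing files):

```
import os

class OpenFileTracker:
    """Track open lockfiles by (pid, inode) and reference-count them,
    so each file is opened once per process and closed only when the
    last lock on its inode is released."""

    def __init__(self):
        self._refs = {}  # (pid, inode) -> [file object, refcount]

    def get(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._refs.setdefault(key, [None, 0])
        if entry[0] is None:
            entry[0] = open(path, "r+")
        entry[1] += 1
        return entry[0]

    def release(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._refs[key]
        entry[1] -= 1
        if entry[1] == 0:
            entry[0].close()  # safe: no other locks on this inode remain
            del self._refs[key]
```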

* Ensure all roots of an installed environment are marked explicit in db (#24277)

* docker: Fix CentOS 6 build on Docker Hub (#24804)

This change makes yum usable again on CentOS 6.

* docker: remove boto3 from CentOS 6 since it requires an updated pip (#24813)

* Remove centos:6 image references

This was EOL November 30th, 2020. I believe the "builds" are failing on
develop because of it.

* Fix style tests

* Bump version and update changelog

Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Danny Taller <66029857+dtaller@users.noreply.github.com>
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Martin Aumüller <aumuell@reserv.at>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: George Hartzell <hartzell@alerce.com>
Co-authored-by: MichaelLaufer <70094649+MichaelLaufer@users.noreply.github.com>
Co-authored-by: michael laufer <michael.laufer@toganetworks.com>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Robert Maynard <robert.maynard@kitware.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@googlemail.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
Co-authored-by: Ye Luo <xw111luoye@gmail.com>
Co-authored-by: Frank Willmore <frankwillmore@gmail.com>
Co-authored-by: Robert Underwood <robertu94@users.noreply.github.com>
Co-authored-by: Henrique Mendonça <henrique@users.noreply.github.com>
Co-authored-by: Nathan Hanford <8302958+nhanford@users.noreply.github.com>
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
Co-authored-by: Yang Zongze <yangzongze@gmail.com>
Co-authored-by: mic84 <mrosso@lbl.gov>
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
Co-authored-by: darmac <xiaojun2@hisilicon.com>
Co-authored-by: Tomoyasu Nojiri <68096132+t-nojiri@users.noreply.github.com>
Co-authored-by: Shintaro Iwasaki <siwasaki@anl.gov>
Co-authored-by: Glenn Johnson <glenn-johnson@uiowa.edu>
Co-authored-by: Kelly (KT) Thompson <KineticTheory@users.noreply.github.com>
Co-authored-by: h-denpo <57649496+h-denpo@users.noreply.github.com>
Co-authored-by: Thomas Green <tomgreen66@hotmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
Co-authored-by: Abhinav Bhatele <bhatele@cs.umd.edu>
Co-authored-by: a-saitoh-fj <63334055+a-saitoh-fj@users.noreply.github.com>
Co-authored-by: QuellynSnead <quellyn@lanl.gov>
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: Phil Tooley <32297355+ptooley@users.noreply.github.com>
Co-authored-by: Josh Essman <68349992+joshessman-llnl@users.noreply.github.com>
Co-authored-by: Andreas Baumbach <healther@users.noreply.github.com>
Co-authored-by: Maxim Belkin <maxim.belkin@gmail.com>
Co-authored-by: Rémi Lacroix <remi.lacroix@idris.fr>
Co-authored-by: Cyrus Harrison <cyrush@llnl.gov>
Co-authored-by: Peter Scheibel <scheibel1@llnl.gov>
Co-authored-by: Todd Gamblin <gamblin2@llnl.gov>
Co-authored-by: Michael Kuhn <michael.kuhn@ovgu.de>
Co-authored-by: Tiziano Müller <tm@dev-zero.ch>