deploy: merge upstream 0.16.3 #1305

Merged: 143 commits from merge-da-upstream-release into develop on Sep 23, 2021

Conversation


@matz-e matz-e commented Sep 23, 2021

Has some interesting locking changes.

ax3l and others added 30 commits February 17, 2021 17:07
There is a post-install routine in `ipykernel` that needs to be
called for proper registration with Jupyter.
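
A minimal sketch of that idea, assuming Spack's `run_after` hook and the standard `python -m ipykernel install` registration command (illustrative, not the actual `py-ipykernel` package code):

```python
from spack import *


class PyIpykernel(PythonPackage):
    """Illustrative excerpt only."""

    @run_after('install')
    def register_with_jupyter(self):
        # Call ipykernel's own kernelspec installer so Jupyter can find
        # the kernel installed into this prefix.
        python = self.spec['python'].command
        python('-m', 'ipykernel', 'install', '--prefix={0}'.format(self.prefix))
```
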
* Create HipPackage base class and do some refactoring

* Add comments and a conflict to raja for OpenMP with HIP
* py-ipykernel: fix bug in phase method

* Fix bug in executable calling
Big Sur versions go 11.0, 11.0.1, 11.1 (vs. prior versions that
only used the minor component)

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
refers to spack#20040

This modification emits rules like:

provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").

for packages that provide virtual dependencies conditionally, instead
of emitting a fact that doesn't account for the condition.
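
For reference, a hedged sketch of the kind of package declaration that produces such a rule (it mirrors the netlib-lapack example above, but the class body is illustrative, not the real package file):

```python
from spack import *


class NetlibLapack(Package):
    """Illustrative excerpt only."""

    variant('external-blas', default=False,
            description='Build against an external BLAS library')

    # The 'blas' virtual is only provided when we build our own BLAS,
    # i.e. conditionally on the variant value referenced by the rule above.
    provides('blas', when='~external-blas')
```
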
…cies (spack#20082)

refers to spack#20079

Added docstrings to 'concretize' and 'concretized' to
document the format for tests.

Added tests for the activation of test dependencies.
…#20020)

fixes spack#20019

Before this modification, having a newer version of a node had higher
priority in the optimization than having matching compilers. This could
result in unexpected configurations for packages with conflict directives
on compilers of the type:

conflicts('%gcc@X.Y:', when='@:A.B')

where changing the compiler for just that node was preferred over lowering
the node version to less than 'A.B'. Now the priority has been switched so
the solver will try to lower the version of the nodes in question before
changing their compiler.
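
A hedged sketch of a package carrying a directive of that form (the package and the versions are hypothetical placeholders for X.Y and A.B):

```python
from spack import *


class Example(Package):
    """Illustrative package only."""

    version('2.0', sha256='f' * 64)  # hypothetical
    version('1.5', sha256='0' * 64)  # hypothetical

    # Releases up to and including 1.x are known not to build with GCC 10 or
    # newer, so the solver must resolve this by picking a different version
    # or by changing this node's compiler.
    conflicts('%gcc@10:', when='@:1')
```
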
fixes spack#19981

This commit adds support for target ranges in directives,
for instance:

conflicts('+foo', when='target=x86_64:,aarch64:')

If any target in a spec body is not a known target, the
following clause will be emitted when traversing the spec:

node_target_satisfies(Package, TargetConstraint)

and a definition of the clause will then be printed at the end,
similarly to what is done for package and compiler versions.
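
A hedged sketch of a package carrying such a target-range constraint (package and variant are hypothetical):

```python
from spack import *


class Example(Package):
    """Illustrative package only."""

    variant('foo', default=False, description='Hypothetical feature')

    # '+foo' cannot be used on x86_64 (or any newer x86_64 microarchitecture)
    # or on aarch64 (or newer); the open-ended ranges are what the
    # target-range support described above enables.
    conflicts('+foo', when='target=x86_64:,aarch64:')
```
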
…pack#20182)

refers to spack#20040

Before this PR, optimization rules selected default providers at a
higher priority than default variants. Here we swap this priority,
and we consider variants that are forced by any means (root spec or
spec in a depends_on clause) the same as if they had their default
value.

This prevents the solver from avoiding expected configurations
just because they contain directives like:

depends_on('pkg+foo')

where `+foo` is not the default variant value for pkg.
fixes spack#20040

Matching compilers among nodes was prioritized in spack#20020, and
selection of default variants was tuned in spack#20182. With this
setup there is no need for an ad-hoc rule for external packages; on
the contrary, it should be removed so that default variant values
are preferred over having more external nodes in the DAG.
Co-authored-by: michael laufer <michael.laufer@toganetworks.com>
When all versions were allowed, a version_satisfies rule was not emitted,
and this caused conditional directives to fail.
…0099)

fixes spack#20055

Compilers with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of the spec doesn't match the "real" version of the
compiler.

This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.
…spack#20203)

As was done in the old concretizer. Fixes an issue where conditionally
patched dependencies did not show up in the spec (gdal+jasper).
PR spack#15702 changed the invocation of the report context when installing
specs; do the same when building environments.
)

Registering external versions among the lists of allowed ones
generates the correct rules for `version_satisfies`
Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
lists at the very end.

The code that did this for variant values from spec literals was intertwined
with the code for traversing the input specs, so it only covered the input
specs and missed variant values that might come from directives in
packages.

- [x] move ad-hoc value handling code into spec_clauses so we do it in
  one place for CLI and packages

- [x] move handling of `variant_possible_value`, etc. into
  `concretize.lp`, where we can automatically infer variant existence
  more concisely.

- [x] simplify/clarify some of the code for variants in `spec_clauses()`
…spack#20102)

Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.

This modification ensures that open-ended variants (variants accepting any string 
or any integer) are projected to the finite set of values that are relevant for this 
concretization.
I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.

So far, the command line is faster than running through Python, but I'm
working on fixing that.  I found that if I do this:

```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp")       # code from spack solve --show asp hdf5
control.load("display.lp")

control.ground([("base", [])])
control.solve(...)
```

It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.

So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.
haampie and others added 22 commits May 22, 2021 11:51
The function we coded in Spack to load Python modules with arbitrary
names from a file seems to have issues with local imports. For
loading hooks, though, it is unnecessary to use such a function, since
we don't need to bind a custom name to a module, nor do we have to load
it from an unknown location.

This PR thus modifies spack.hook in the following ways:

- Use __import__ instead of spack.util.imp.load_source (this
  addresses spack#20005)
- Sync module docstring with all the hooks we have
- Avoid using memoization in a module function
- Mark all the names that are supposed to stay local with a leading
  underscore
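
A minimal sketch of the loading approach from the first bullet above (the dotted module path and helper name are assumptions, not Spack's exact code):

```python
import sys


def _load_hook_module(name):
    """Import spack.hooks.<name> under its real dotted name instead of
    loading the file under an arbitrary name via spack.util.imp.load_source."""
    module_name = 'spack.hooks.' + name
    __import__(module_name)          # binds the submodule in sys.modules
    return sys.modules[module_name]
```
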
…pack#23307)

The ASP-based solver can natively manage cases where more than one root spec is given, and is able to concretize all the roots together (ensuring one spec per package at most).

Modifications:
- [x] When concretizing an environment together, the ASP-based solver calls its `solve` method directly rather than constructing a temporary fake root package.
fixes spack#22718

Instead of trying to maximize the number of
matches (preferred behavior), try to minimize
the number of mismatches (unwanted behavior).
Spack doesn't require users to manually index their repos; it rebuilds the indexes automatically when things change. To determine when to do this, it has to `stat()` all package files in each repository to make sure that the indexes are up to date with the packages. We currently index virtual providers, patches by sha256, and tags on packages.

When this was originally implemented, we ran the checker all the time, at startup, but that was slow (see spack#7587). But we didn't go far enough -- it still consults the checker and does all the stat operations just to see if a package exists (`Repo.exists()`).  That might've been a wash in 2018, but as the number of packages has grown, it's gotten slower -- checking 5k packages is expensive and users see this for small operations.  It's a win now to make `Repo.exists()` check files directly.

**Fix:**

This PR does a number of things to speed up `spack load`, `spack info`, and other commands:

- [x] Make `Repo.exists()` check files directly again with `os.path.exists()` (this is the big one)
- [x] Refactor `Spec.satisfies()` so that checking for virtual packages only happens if needed
      (avoids some calls to exists())
- [x] Avoid calling `Repo.exists(spec)` in `Repo.get()`. `Repo.get()` will ultimately try to load
      a `package.py` file anyway; we can let the failure to load it indicate that the package doesn't
      exist, and avoid another call to exists().
- [x] Fix up some comments in spec parsing
- [x] Call `UnknownPackageError` more consistently in `repo.py`
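
A hedged sketch of the "check files directly" idea from the first item above (paths and method names are assumptions, not the real `Repo` class):

```python
import os


class Repo(object):
    def __init__(self, packages_path):
        # e.g. '.../var/spack/repos/builtin/packages'
        self.packages_path = packages_path

    def exists(self, pkg_name):
        # One existence check on <repo>/<pkg_name>/package.py instead of
        # consulting the index checker and stat()-ing every package file.
        filename = os.path.join(self.packages_path, pkg_name, 'package.py')
        return os.path.exists(filename)
```
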
When using Python 3.9.6, Spack is no longer able to fetch anything. Commands like `spack fetch` and `spack install` all break.

Python 3.9.6 includes a [new change](https://github.com/python/cpython/pull/25853/files#diff-b3712475a413ec972134c0260c8f1eb1deefb66184f740ef00c37b4487ef873eR462) that means that `scheme` must be a string; it cannot be None. The solution is to use an empty string, like the method's default.

Fixes spack#24644. Also see Homebrew/homebrew-core#80175 where this issue was discovered by CI. Thanks @branchvincent for reporting such a serious issue before any actual users encountered it!

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
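
A hedged illustration of the failure mode and the fix (the exact call site in Spack differs; this just shows the `scheme` argument behavior):

```python
from urllib.parse import urlsplit

url = 'mirror.spack.io/some/archive.tar.gz'

# Passing scheme=None here raises on Python 3.9.6, because the new URL
# sanitization expects `scheme` to be a string; '' (the documented default)
# works on every Python version.
parts = urlsplit(url, scheme='')
print(parts.scheme or '<no scheme>', parts.path)
```
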
This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:

```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```

This caused clang's version to be misdetected as:

```
clang@11.0.0
Target:
```

This resulted in errors when trying to actually use it as a compiler.

When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:

```
==> Warning: "llvm@11.0.0+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for llvm@11.0.0+clang+lld+lldb]
```

Changing the regex to only match until the end of the line fixes these
problems.

Fixes: spack#19473
Co-authored-by: Tiziano Müller <tm@dev-zero.ch>
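
A hedged illustration of the regex change (the pattern here is illustrative, not Spack's exact one): anchoring the capture at the end of the line keeps the `Target:` line out of the detected version.

```python
import re

output = 'clang version 11.0.0\nTarget: x86_64-unknown-linux-gnu\n'

# Match only up to the end of the line containing the version string, so the
# following 'Target:' line cannot leak into the detected version.
match = re.search(r'clang version (\S+)$', output, re.MULTILINE)
assert match and match.group(1) == '11.0.0'
```
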
Spack's source mirror was previously in a plain old S3 bucket. That will still
work, but we can do better. This switches to AWS's CloudFront CDN for hosting
the mirror.

CloudFront is 16x faster (or more) than the old bucket.

- [x] change mirror to https://mirror.spack.io
…#24794)

This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.

The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.

Because of this, we need to track open lock files so that we only close them when
a process no longer needs them.  We do this by tracking each lockfile by its
inode and process id.  This has several nice properties:

1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
   process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
   just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
   inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
   number of times necessary for the locks we have.

Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.

Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
      needed (this avoids inadvertently releasing locks that should not be released).
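
A hedged, heavily simplified sketch of the reference counting described above (class and method names are assumptions, not the actual implementation):

```python
import os


class OpenFileTracker(object):
    """Reference-count open lock files by (inode, pid)."""

    def __init__(self):
        self._descriptors = {}  # (inode, pid) -> [file object, refcount]

    def get_fh(self, path):
        key = (os.stat(path).st_ino, os.getpid())
        entry = self._descriptors.get(key)
        if entry is None:
            entry = [open(path, 'r+'), 0]
            self._descriptors[key] = entry
        entry[1] += 1
        return entry[0]

    def release_fh(self, path):
        key = (os.stat(path).st_ino, os.getpid())
        entry = self._descriptors[key]
        entry[1] -= 1
        if entry[1] == 0:
            # Only now is it safe to close: no other lock in this process
            # still relies on this inode's descriptor.
            entry[0].close()
            del self._descriptors[key]
```
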
This change makes yum usable again on CentOS 6.
CentOS 6 reached EOL on November 30th, 2020. I believe the "builds" are failing on
develop because of it.
@matz-e matz-e changed the title from "merge da upstream release" to "deploy: merge upstream 0.16.3" on Sep 23, 2021

@alexsavulescu alexsavulescu left a comment

LGTM

@@ -29,4 +29,4 @@
#
[flake8]
ignore = E129,E221,E241,E272,E731,W503,W504,F999,N801,N813,N814
-max-line-length = 79
+max-line-length = 88


🚀

@matz-e (Member, Author) replied:

🚀 🌔

💎 🙌

@@ -126,9 +126,6 @@ are currently supported are summarized in the table below:
* - Ubuntu 18.04
- ``ubuntu:18.04``
- ``spack/ubuntu-bionic``
* - CentOS 6


🚀

@matz-e matz-e merged commit b044dcb into develop Sep 23, 2021
@matz-e matz-e deleted the merge-da-upstream-release branch September 23, 2021 11:15

pramodk commented Sep 28, 2021

👍 So this was the only diff with 0.16.3?

Edit: OK, looking at the number of commits and files changed, I am a bit confused.

Member Author

matz-e commented Sep 28, 2021

Commits are due to our obsession with squashing things… the actual diff is smaller.
