Pin hdf5 and openblas #2
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
recipe/meta.yaml (Outdated)

```diff
 skip: true  # [win]

 requirements:
   build:
-    - openblas
+    - openblas 0.2.20|0.2.20.*
```
---
Can you try to remove the pins in the meta.yaml and re-render again? Latest conda-smithy should be able to apply the pins automatically via the ci_support/<platform>_.yaml files.
---
I was told yesterday by @jakirkham to do it this way in conda-forge/staged-recipes#5726.
---
We still need run_exports on the dependencies for this to work 100% properly, but I wonder what would happen to the ci_support/<platform>_.yaml files if you drop the pins here.
With that said, we are close to rebuilding everything and dropping the pinning from the recipes, so the change you are applying in this PR will be outdated soon. (I'm not saying don't do it now, but watch out for future changes ;-)
---
@jakirkham is mostly correct in conda-forge/staged-recipes#5726 (comment)
I'd advise an alternate pattern. In the build deps, remove all pins. Conda-build 3 will do the right thing here.
In the run deps, you won't get automatic pinning until the upstreams are edited to include run_exports and rebuilt. So, for those, you should do:
```yaml
- {{ pin_compatible('openblas', max_pin='x.x.x') }}
- {{ pin_compatible('hdf5', max_pin='x.x.x') }}
```
These can be completely deleted in run deps once the upstreams have run_exports.
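Taken together, the suggested pattern for this recipe's requirements section might look like the following sketch (illustrative, not the exact recipe; the unpinned build deps rely on conda-build 3 applying the global pins):

```yaml
requirements:
  build:
    # no version pins here; conda-build 3 applies the
    # global pins from the ci_support files
    - openblas
    - hdf5
  run:
    # explicit run pins, needed only until the upstream
    # packages ship run_exports
    - {{ pin_compatible('openblas', max_pin='x.x.x') }}
    - {{ pin_compatible('hdf5', max_pin='x.x.x') }}
```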
---
Note that the number of x's is the number of places to pin. Since both hdf5 and openblas pin 3 places in @jakirkham's request, I've reflected that in my suggestion here.
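As a rough illustration of how the pin width changes the rendered spec (version numbers from this PR's ci_support files; the exact rendered output may differ slightly, e.g. conda-build may append a pre-release suffix to the upper bound):

```yaml
# with hdf5 1.10.1 present at build time:
- {{ pin_compatible('hdf5', max_pin='x.x.x') }}  # roughly: hdf5 >=1.10.1,<1.10.2
- {{ pin_compatible('hdf5', max_pin='x.x') }}    # roughly: hdf5 >=1.10.1,<1.11
```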
---
PS: either way, you'll want to edit the recipe to remove the run dependencies after run_exports are added upstream. If you don't remove the run deps with your approach, your compatibility bounds may differ from the run_exports setting, and you will end up with multiple bounds. Conda will use the tightest merged constraint from all of the bounds provided.
For example, if you pinned to 'x.x.x', but the upstream author knows that 'x.x' is sufficient and puts that in run_exports, your 'x.x.x' pin takes priority because it is more restrictive, and all of your packages will be pinned too tightly.
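To illustrate the merging behavior (version numbers hypothetical):

```yaml
# your recipe's run dep, from pin_compatible(..., max_pin='x.x.x'):
#   hdf5 >=1.10.1,<1.10.2
# later also injected by upstream run_exports (maintainer chose 'x.x'):
#   hdf5 >=1.10.1,<1.11
# conda intersects both specs, so the tighter <1.10.2 bound wins,
# and hdf5 1.10.2+ stays excluded even though upstream allows it
```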
---
> either way, you'll want to edit the recipe to remove the run dependencies after run_exports are added upstream.

I guess there is no escaping this, and we should focus our efforts on rebuilding the recipes with run_exports ASAP.
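For context, opting an upstream recipe into this mechanism would look roughly like the following (illustrative sketch; the pin width is the maintainer's call):

```yaml
# in the upstream hdf5 recipe's meta.yaml
build:
  run_exports:
    # every package built against this hdf5 automatically
    # gets a matching run dependency
    - {{ pin_subpackage('hdf5', max_pin='x.x') }}
```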
---
OK, I'm fine with doing it in conda-forge-pinning instead and advising people to remove pins from the build and run deps.
You'll just put the dep name in build and run.
I'm still not OK with doing this for boost or numpy, for the record. I'm fine with it as an interim solution for packages that we'll eventually have run_exports for.
For numpy and boost, I think @mingwandroid was correct when he recommended that we have dedicated libboost/numpy-(dev something) packages that have run_exports; it is then not ambiguous how the packages are being used.
---
Yeah, this sounds like less work in the long run. Thanks for mentioning this alternative, @isuruf.
What's the issue with boost?
Is the issue with numpy just the pure Python packages that depend on it? Honestly, I don't think that is as much of a problem as people would imagine. Most pure Python packages that use numpy are adding it to run only (not build, soon to be host). If conda-build is still pinning things that are run-only dependencies, we should discuss that in a conda-build issue.
---
More discussion in conda-forge/conda-forge-pinning-feedstock#4 and conda-forge/conda-forge-pinning-feedstock#44 (comment)
If a single package can be used multiple ways, run_exports and the global pinning are not appropriate. For boost, there should be header-only uses and linked uses, with only the latter requiring run_exports. In conda-forge/conda-forge-pinning-feedstock#44 (comment), @isuruf wants to use the global pinnings for them, but leave it up to the recipe author to ignore it where it isn't necessary.
The real solution isn't to do either pin_compatible (local) or pin_run_as_build (global), though either may be interim solutions. The real solution is to make it so that a single package is not ambiguous for how its run_exports should be set. This means splitting up or creating metapackages for numpy and libboost that allow us to indicate specific usage that implies linking (and thus binary compatibility concerns/precautions).
@conda-forge-admin, please rerender
.ci_support/linux_.yaml (Outdated)

```yaml
hdf5:
  - 1.10.1
openblas:
  - 0.2.20
```
---
Awesome! This is what I wanted to see 😄
Thanks @msarahan for explaining the {{ pin_compatible('hdf5', max_pin='x.x.x') }} pattern.
@ChristopherHogan this is all new and we are in the process of writing docs, sorry for making your PR the guinea pig 😬
@conda-forge-admin, please rerender