Add recipe for libmetatensor-torch #26316
Conversation
Hi! This is the friendly automated conda-forge-linting service. I wanted to let you know that I linted all conda-recipes in your PR. Here's what I've got...

For recipes/libmetatensor-torch:
Just to check: the version of libtorch is pinned across the ecosystem, right? And any bump in the libtorch version will trigger a rebuild of all feedstocks depending on it?
Force-pushed from 35810fa to f13a2ca.
Hi! This is the friendly automated conda-forge-linting service. I wanted to let you know that I linted all conda-recipes in your PR. Here's what I've got...

For recipes/libmetatensor-torch:
For recipes/python-metatensor:
Force-pushed from f13a2ca to 7b2d8ac.
Force-pushed from 7b2d8ac to 482d599.
Force-pushed from 482d599 to cc76f6f.
Force-pushed from d302b48 to 6685747.
Force-pushed from 6685747 to c418b28.
@conda-forge/help-c-cpp ready for review!
url: {{ url_base }}/metatensor-torch-v{{ version }}/metatensor-torch-cxx-{{ version }}.tar.gz
sha256: 904cf858d8f98b67b948e8a453d8a6da56111e022050d6c8c3d32a9a2cc83464
build:
Shared libraries also need a run_exports.
It's not very clear to me what run_exports does. Does it only add a run dependency to all packages that were built against the library and had it as a host dependency? Or is there something else I need to be aware of here?
run_exports exports runtime requirements to downstream packages that dynamically link against your package. It's a statement that says: "hey, if you link to my package, then this is the version that you will need at runtime."
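For concreteness, a run_exports entry lives in the build section of meta.yaml. This is only a sketch — the pin below is hypothetical, not necessarily the one used in the actual libmetatensor recipe:

```yaml
# Hypothetical fragment of recipes/libmetatensor/meta.yaml.
# Any package that lists libmetatensor in its host requirements
# will automatically get this pin injected into its run requirements.
build:
  number: 0
  run_exports:
    - {{ pin_subpackage('libmetatensor', max_pin='x.x') }}
```

With max_pin='x.x', downstream packages built against, say, 0.5.1 would get a run constraint compatible with any 0.5.* release.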
Thanks! I've added run_exports to libmetatensor and removed it from the run requirements here.

I'd guess libtorch also has a run_exports, but is there a way to express "build against the cpu variant but run with any variant, cpu or cuda"? For now I use two separate host & run requirements.
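The "build against cpu, run with any variant" split mentioned above could be sketched like this. This is a hypothetical fragment — the exact build-string globs depend on how the libtorch feedstock labels its variants:

```yaml
requirements:
  host:
    # Link against the cpu variant so CI doesn't need CUDA.
    - libtorch {{ libtorch }} *cpu*
  run:
    # No build-string constraint: any variant (cpu or cuda) works at runtime.
    - libtorch {{ libtorch }}
```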
> I'd guess libtorch also has a run_exports

Looks like the run_exports for libtorch are variant-agnostic, so if you build against any variant of libtorch you can get any variant at runtime. It looks like this package doesn't link against libtorch_cuda, so you don't need to worry about constraining the libtorch build variant.
Thanks for the work @Luthaf. Of course I am willing to help maintain this recipe.
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
@conda-forge/help-c-cpp ready for another round of review!
The logs show that the build system is also looking for zlib. Is that a missing host dependency?
We are not trying to load ZLIB ourselves. Digging around the CMake files, this is what seems to happen: we load torch with find_package(Torch), which then does https://github.com/pytorch/pytorch/blob/4ed93d6e0c5deb543ba5a3bd103728f00d39b1a6/cmake/Caffe2Config.cmake.in#L48, which finally tries to find ZLIB: https://github.com/protocolbuffers/protobuf/blob/2f3242c576504459621fd7d78bbccf39bfaa49c5/cmake/protobuf-config.cmake.in#L5
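The chain of find_package calls described above can be reproduced with a minimal CMakeLists.txt (a sketch; the project name is made up):

```cmake
cmake_minimum_required(VERSION 3.16)
project(zlib_chain_demo CXX)

# We only ask for Torch here...
find_package(Torch REQUIRED)
# ...but TorchConfig.cmake includes Caffe2Config.cmake, which in turn
# calls find_package for protobuf; protobuf's own config file then runs
# find_package(ZLIB), which is why zlib shows up in the build logs even
# though this project never mentions it.
```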
Force-pushed from 55f61b7 to 1ad4dca.
This is the continuation of conda-forge/metatensor-feedstock#2, now packaging the second native library. This one is pure C++, and links to libtorch.
Checklist
- A tarball (url) rather than a repo (e.g. git_url) is used in your recipe (see here for more details).