
Specifying package version depending on system options #2145

Closed
buriy opened this issue Mar 7, 2020 · 6 comments
Labels: kind/question (User questions; candidates for conversion to discussion), status/wontfix (Will not be implemented)

Comments

buriy commented Mar 7, 2020

  • I have searched the issues of this repo and believe that this is not a duplicate.
  • I have searched the documentation and believe that my question is not covered.

Feature Request

I would like to install
cupy-cuda101 when I have system CUDA 10.1,
and likewise cupy-cuda100, cupy-cuda91, ... for other CUDA versions.
How do I do that?
I can introspect which CUDA version I need, but how do I specify that for Poetry? Or at least, how do I integrate this with my Poetry usage?
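
A minimal sketch of the introspection step described here, assuming nvcc is on PATH and the cupy-cudaXYZ wheel naming convention (the script name is hypothetical):

```python
# detect_cuda.py (hypothetical) -- parse the CUDA toolkit version from
# `nvcc --version` and install the matching CuPy wheel via pip, since
# Poetry cannot select the variant by itself.
import re
import subprocess
import sys

def cuda_wheel() -> str:
    out = subprocess.run(
        ["nvcc", "--version"], capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"release (\d+)\.(\d+)", out)
    if match is None:
        raise RuntimeError("could not parse the CUDA version from nvcc output")
    major, minor = match.groups()
    return f"cupy-cuda{major}{minor}"  # e.g. CUDA 10.1 -> cupy-cuda101

if __name__ == "__main__":
    subprocess.run([sys.executable, "-m", "pip", "install", cuda_wheel()], check=True)
```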

neersighted (Member) commented
Closing as wontfix for now -- there is simply no way for Poetry to gather and reason about this information, as it is not standardized in the packaging ecosystem the way ABI, architecture, or Python version are. Poetry's hands are tied here (and this is common with other tools), as ML package variants are typically distributed using multiple indexes and local versions, which it cannot reason about.

neersighted added the kind/question and status/wontfix labels and removed the kind/feature label Sep 28, 2022
neersighted closed this as not planned Sep 28, 2022
buriy (Author) commented Sep 28, 2022

@neersighted that's OK, but maybe we could run an extra module on our own and then tell Poetry about it, without editing the Poetry project file? Right now we can have a setup.sh:
python get-cuda-version.py | xargs pip install
and simply not add that entry to pyproject.toml. But what is the best way to do that with Poetry?
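
One way to integrate this with Poetry today is to wrap both steps in a small driver script; a sketch, with a hypothetical setup_env.py and the CUDA detection elided:

```python
# setup_env.py (hypothetical) -- run `poetry install` for the locked
# dependencies, then add the CUDA-specific wheel into the same virtualenv
# via `poetry run pip`, outside the lock file.
import subprocess

def cuda_wheel() -> str:
    # see the nvcc-parsing sketch above; hard-coded here only for brevity
    return "cupy-cuda101"

subprocess.run(["poetry", "install"], check=True)
subprocess.run(["poetry", "run", "pip", "install", cuda_wheel()], check=True)
```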

neersighted (Member) commented
To do what you are describing -- which would presumably work for poetry install but would either not work with distfiles or generate non-portable distfiles -- a plugin could work. There's currently no hook/API for this, though it is possible if you're fine coupling to Poetry internals.

In the longer term, the ability to add some sort of external code that is treated like a marker has been discussed at times. This would have the same problem of being non-portable and producing broken distfiles, but it's possible that it could be combined with a plugin that teaches Poetry to build different local version variant dists (similar to many existing ML ecosystem packages) for a solution that is very painful and borderline broken, but at least marginally compatible with existing tooling (like the rest of the ML ecosystem currently is).

Ultimately, the real solution is to capture this information in the wheel specification as it's essentially an ABI for an ecosystem of packages -- if this was something that was part of the language/interpreter/packaging specs, Poetry could support it easily and with no compromises.
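
For illustration only, "coupling to Poetry internals" could look roughly like the sketch below. Plugin.activate() is the documented plugin entry point, but poetry.package and add_dependency() are internal APIs that may change between releases, and nothing here makes the resulting lock file or dists portable:

```python
# Hypothetical plugin sketch: inject the CUDA-matched dependency when the
# plugin is activated. Relies on Poetry internals; not a supported hook.
import re
import subprocess

from poetry.core.packages.dependency import Dependency
from poetry.plugins.plugin import Plugin


class CudaVariantPlugin(Plugin):
    def activate(self, poetry, io):
        out = subprocess.run(
            ["nvcc", "--version"], capture_output=True, text=True, check=True
        ).stdout
        major, minor = re.search(r"release (\d+)\.(\d+)", out).groups()
        # Mutate the in-memory project package; anything written to disk
        # (lock file, sdist/wheel) will be specific to this machine.
        poetry.package.add_dependency(Dependency(f"cupy-cuda{major}{minor}", "*"))
```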

buriy (Author) commented Sep 28, 2022

@neersighted I think there are more solutions, e.g. a kind of template approach: we could generate the pyproject.toml file by filling in some variables with the output of external tools (python get-cuda-version.py | xargs -I{} sed s/CUDAVER/{}/ pybase.toml > pyproject.toml), commit the source template pybase.toml to git instead, and add pyproject.toml to .gitignore (see the sketch after this comment).
But maybe we could write another static file which can be imported from pyproject.toml?
Or maybe some environment variables could be added to change what Poetry does?
CUDA=10.1 poetry install would then replace ${CUDA} in pyproject.toml with that value.
Or:
[pypoetry.extends machine-conf.toml]
or

[pypoetry.include]
files=machine-conf.toml

Any kind of static extension would also be fine; it's not necessary to provide dynamic ones (the poetry tool is already quite slow).
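
A sketch of the templating variant described above (pybase.toml and render_pyproject.py are hypothetical names; ${CUDA} is taken from the environment):

```python
# render_pyproject.py -- fill ${CUDA} in the committed template pybase.toml
# and write the generated, gitignored pyproject.toml.
import os
from string import Template

with open("pybase.toml") as src:
    rendered = Template(src.read()).substitute(CUDA=os.environ["CUDA"])

with open("pyproject.toml", "w") as dst:
    dst.write(rendered)
```

Usage would be something like CUDA=101 python render_pyproject.py && poetry install; as the next comment notes, the generated pyproject.toml is machine-specific.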

neersighted (Member) commented Sep 28, 2022

pyproject.toml is meant (at least in the Poetry project itself, excluding plugins) to be declarative. The mechanisms you propose will prevent consuming your project as a dependency, and will result in broken sdists (in addition to the non-portable bdists).

It's certainly not an impossible problem to solve, but Poetry is first and foremost designed to be a first-class participant in the 'mainstream' packaging ecosystem, which currently has no concept of ML/GPGPU APIs. Any design accepted in Poetry itself would have to be backwards-compatible, avoid degrading compatibility with the existing ecosystem (or increasing the burden of maintaining that compatibility), and be generalized, since the current status quo of ML-related packaging is ad hoc and of unknown stability/longevity; tightly coupling to it makes little sense for a general-purpose tool.

github-actions bot commented Mar 1, 2024

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

github-actions bot locked as resolved and limited conversation to collaborators Mar 1, 2024