
Build dependencies don't use the correct pinned version, installing numpy twice during build time #9542

Open
xmatthias opened this issue Jan 31, 2021 · 35 comments
Labels: C: build logic (Stuff related to metadata generation / wheel generation), C: constraint (Dealing with "constraints" (the -c option)), type: feature request (Request for a new feature)

Comments


xmatthias commented Jan 31, 2021

Environment

  • pip version: 21.0.1
  • Python version: 3.9.0
  • OS: linux

Description

Using pyproject.toml build dependencies installs the latest version of a library, even if the same pip command installs a pinned version.
In some cases (binary compilation) this can lead to errors like the one below when trying to import the dependency.

RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "xxx/.venv/lib/python3.9/site-packages/utils_find_1st/__init__.py", line 3, in <module>
    from .find_1st import find_1st 
ImportError: numpy.core.multiarray failed to import

Expected behavior

The build process should use the pinned version of numpy (1.19.5) instead of the latest version (1.20.0 at the time of writing). This way the installation process is coherent, and problems like this cannot occur.

How to Reproduce

  • create new environment
  • install numpy and py_find_1st (both with pinned dependencies)
python -m venv .venv
. .venv/bin/activate
pip install -U pip
pip install --no-cache numpy==1.19.5 py_find_1st==1.1.4
python -c "import utils_find_1st"

# To make the above work, upgrade numpy to the latest version (which is the one py_find_1st is compiled against).
pip install -U numpy

Output

$ python -m venv .venv
$ . .venv/bin/activate
$ pip install -U pip
Collecting pip
  Using cached pip-21.0.1-py3-none-any.whl (1.5 MB)
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 20.2.3
    Uninstalling pip-20.2.3:
      Successfully uninstalled pip-20.2.3
Successfully installed pip-21.0.1
$ pip install --no-cache numpy==1.19.5 py_find_1st==1.1.4
Collecting numpy==1.19.5
  Downloading numpy-1.19.5-cp39-cp39-manylinux2010_x86_64.whl (14.9 MB)
     |████████████████████████████████| 14.9 MB 10.4 MB/s 
Collecting py_find_1st==1.1.4
  Downloading py_find_1st-1.1.4.tar.gz (8.7 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Building wheels for collected packages: py-find-1st
  Building wheel for py-find-1st (PEP 517) ... done
  Created wheel for py-find-1st: filename=py_find_1st-1.1.4-cp39-cp39-linux_x86_64.whl size=30989 sha256=c1fa1330f733111b2b8edc447bec0c54abf3caf79cd5f386f5cbef310d41885c
  Stored in directory: /tmp/pip-ephem-wheel-cache-94uzfkql/wheels/1e/11/33/aa4db0927a22de4d0edde2a401e1cc1f307bc209d1fdf5b104
Successfully built py-find-1st
Installing collected packages: numpy, py-find-1st
Successfully installed numpy-1.19.5 py-find-1st-1.1.4
$ python -c "import utils_find_1st"
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/xmatt/development/cryptos/freqtrade_copy/.venv/lib/python3.9/site-packages/utils_find_1st/__init__.py", line 3, in <module>
    from .find_1st import find_1st 
ImportError: numpy.core.multiarray failed to import

In verbose mode, the installation of numpy 1.20.0 can be observed; however, even with "-v", the output is very verbose.

....
  changing mode of /tmp/pip-build-env-js9tatya/overlay/bin/f2py3.9 to 755
  Successfully installed numpy-1.20.0 setuptools-52.0.0 wheel-0.36.2
  Removed build tracker: '/tmp/pip-req-tracker-9anxsz9d'
  Installing build dependencies ... done

....

The full verbose log is attached below (created with pip install --no-cache numpy==1.19.5 py_find_1st==1.1.4 -v &> numpy_install.txt).

numpy_install.txt

@pradyunsg (Member)

Please use numpy's oldest-supported-numpy helper for declaring a dependency on numpy in pyproject.toml.
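
As a reference point, a minimal pyproject.toml [build-system] table using that helper could look like the sketch below (the setuptools/wheel entries and version bound are assumptions about a typical setuptools backend, not taken from py_find_1st itself):

[build-system]
requires = [
    "setuptools>=40.8.0",
    "wheel",
    # resolves to the oldest numpy that ships wheels for the target Python/platform,
    # so extensions built against it stay ABI-compatible with newer numpy at runtime
    "oldest-supported-numpy",
]
build-backend = "setuptools.build_meta"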

xmatthias changed the title from "Build dependencies doesn't use correct pinned version" to "Build dependencies doesn't use correct pinned version, installs numpy twice during build-time" on Jan 31, 2021
@xmatthias (Author)

I don't think you can pin this on how the pyproject.toml is written.

It's pip that's installing numpy twice (1.20.0 for building, and 1.19.5 as final version), so this can also happen with any other package combination in theory.

It works fine if you install numpy FIRST, and then the package depending on numpy, as then pip recognizes that a compatible version is available, and doesn't install it again.

If it wasn't with numpy but with another random package, you couldn't point to "oldest-supported-numpy" either.

The build dependency is specified as "numpy>=1.13.0" - which allows every numpy version from 1.13.0 upwards.
Using oldest-supported-numpy might even make it worse: according to its documentation, that would pin the build version to numpy==1.13.0 - which would break the install completely.

In short, it's pip that should resolve the build-dependency, detect that it's a dependency that's going to be installed anyway, and install numpy first (using this numpy installation for the build of the other package).


pfmoore commented Jan 31, 2021

pip builds in an isolated environment, so numpy isn't going "to be installed anyway" in the sense that you mean. You can use --no-build-isolation to make pip do the build in the current environment, but that has its own issues (not least, you have to manually install the build dependencies). IMO it's better to correctly tell pip what's needed for the build, and what's needed at runtime, and then it's sorted once and for all. Your particular situation may make that more awkward, in which case you need to explore the trade-offs in choices like disabling build isolation.
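
For anyone who does go the --no-build-isolation route, a rough sketch for the example in this issue (assuming setuptools, wheel and numpy are the only build dependencies py_find_1st declares - check its pyproject.toml before relying on this):

# install the build dependencies into the current environment yourself
pip install setuptools wheel numpy==1.19.5
# then build against what is already installed instead of an isolated build env
pip install --no-build-isolation py_find_1st==1.1.4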


xmatthias commented Jan 31, 2021

Even with build isolation, it's at least downloaded twice (so it's a bug in pip), which wouldn't be necessary.

Both 1.19.5 and 1.20.0 are perfectly valid numpy versions to satisfy the build dependencies, so if I instruct pip to download 1.19.5 - why download 1.20.0 too (and, on top of that, cause a potential build-compatibility issue alongside it)?

edit:
I think there should be the following behaviour:

  • if the build dependency is specifically pinned (numpy==1.20.0), the build installation should use that version, and install whatever is specified otherwise in the "regular" environment.
  • when it's loosely pinned (numpy>=1.13.0), it should use as the build dependency whatever is being installed in the same command, and ONLY fall back to the latest version if that dependency is not being installed to begin with.


pradyunsg commented Feb 1, 2021

Neither of those suggestions works super cleanly, and both are actually more difficult to understand and explain than "isolated builds are isolated". As of today, you have two options: carefully pin the build dependencies, or tell pip not to do build isolation (i.e. you'll manage the build dependencies in the environment).

Beyond that, I'm not excited by the idea of additional complexity in the dependency resolution process that makes isolated builds depend on existing environment details -- both of your suggestions require adding complexity to the already NP-complete problem of dependency resolution, and that code is already complex enough. And they're "solutions" operating with incomplete information, which will certainly miss certain use cases (e.g. a custom-compiled package that wasn't installed via a wheel).

At the end of the day, pip isn't going to be solving every use case perfectly, and this is one of those imperfect cases at the moment. For now, that means additional work on the user's side, and I'm fine with that because we don't have a good way to have the user communicate the complete complexity of build dependencies to pip.


paulmueller commented Feb 1, 2021

I had the same issue today. Since the release of numpy 1.20.0 yesterday, there is a new dimension to this problem.

For instance, I (mostly my users and CI services) usually install the package dclab with

python3 -m venv env
source env/bin/activate
pip install --upgrade pip wheel
pip install dclab[all]

dclab comes with a few cython extensions that need to be built during installation, which is a perfectly normal use-case. This is not one of those imperfect cases.

Now, the problem is that during installation of dclab, pip downloads numpy 1.20.0 and builds the extensions. But in the environment env, pip installs numpy 1.19.5 (pinned by tensorflow). When I then try to import dclab, I get this error (GH Actions):

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/paul/repos/dclab/dclab/__init__.py", line 6, in <module>
    from . import definitions as dfn  # noqa: F401
  File "/home/paul/repos/dclab/dclab/definitions.py", line 4, in <module>
    from .rtdc_dataset.ancillaries import AncillaryFeature
  File "/home/paul/repos/dclab/dclab/rtdc_dataset/__init__.py", line 4, in <module>
    from .check import check_dataset  # noqa: F401
  File "/home/paul/repos/dclab/dclab/rtdc_dataset/check.py", line 10, in <module>
    from .core import RTDCBase
  File "/home/paul/repos/dclab/dclab/rtdc_dataset/core.py", line 12, in <module>
    from ..polygon_filter import PolygonFilter
  File "/home/paul/repos/dclab/dclab/polygon_filter.py", line 8, in <module>
    from .external.skimage.measure import points_in_poly
  File "/home/paul/repos/dclab/dclab/external/__init__.py", line 3, in <module>
    from . import skimage
  File "/home/paul/repos/dclab/dclab/external/skimage/__init__.py", line 2, in <module>
    from . import measure  # noqa: F401
  File "/home/paul/repos/dclab/dclab/external/skimage/measure.py", line 7, in <module>
    from .pnpoly import points_in_poly  # noqa: F401
  File "/home/paul/repos/dclab/dclab/external/skimage/pnpoly.py", line 1, in <module>
    from ._pnpoly import _grid_points_in_poly, _points_in_poly
  File "dclab/external/skimage/_pnpoly.pyx", line 1, in init dclab.external.skimage._pnpoly
    #cython: cdivision=True
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

As far as I can see, I have only three choices:

  • use oldest-supported-numpy in pyproject.toml (which actually works for dclab)
  • install dclab with --no-build-isolation, which I cannot really expect from my users.
  • switch from tensorflow to pytorch, which is anyway used more in research as I learned today

The best solution to this problem, as far as I can see, would be for pip to be smart about choosing which version of the build dependency in pyproject.toml to install:

  • if there is a numpy already installed in the environment, install the same version to build the extension
  • otherwise, if the pip install command already came up with a certain version range for numpy, use the highest available version (e.g. pip install dclab[all] tensorflow would tell pip that tensorflow needs numpy 1.19.5 and so it makes sense to use that when building the extensions)

I know that pinning versions is not good, but tensorflow is doing it apparently, and many people use tensorflow.

[EDIT: found out that oldest-supported-numpy works for me]

@xmatthias (Author)

use oldest-supported-numpy in pyproject.toml

Which version of numpy does that install?
As far as I could tell from looking at that package, it seemed to use the lowest possible numpy version - so your environment would have 1.19.5, but the build dependency would be 1.13.x.

While it may not cause a problem in this constellation, it might cause a problem once your environment updates to numpy 1.20.0 (which apparently changed the ndarray size, while prior versions didn't).

@paulmueller

For me it installs numpy 1.17.3; the oldest-supported-numpy package on PyPI states:

install_requires =
    numpy==1.16.0; python_version=='3.5' and platform_system=='AIX'
    numpy==1.16.0; python_version=='3.6' and platform_system=='AIX'
    numpy==1.16.0; python_version=='3.7' and platform_system=='AIX'

    numpy==1.18.5; python_version=='3.5' and platform_machine=='aarch64'
    numpy==1.19.2; python_version=='3.6' and platform_machine=='aarch64'
    numpy==1.19.2; python_version=='3.7' and platform_machine=='aarch64'
    numpy==1.19.2; python_version=='3.8' and platform_machine=='aarch64'

    numpy==1.13.3; python_version=='3.5' and platform_machine!='aarch64' and platform_system!='AIX'
    numpy==1.13.3; python_version=='3.6' and platform_machine!='aarch64' and platform_system!='AIX' and platform_python_implementation != 'PyPy'
    numpy==1.14.5; python_version=='3.7' and platform_machine!='aarch64' and platform_system!='AIX' and platform_python_implementation != 'PyPy'
    numpy==1.17.3; python_version=='3.8' and platform_machine!='aarch64' and platform_python_implementation != 'PyPy'
    numpy==1.19.3; python_version=='3.9' and platform_python_implementation != 'PyPy'

    numpy==1.19.0; python_version=='3.6' and platform_python_implementation=='PyPy'
    numpy==1.19.0; python_version=='3.7' and platform_python_implementation=='PyPy'

    numpy; python_version>='3.10'
    numpy; python_version>='3.8' and platform_python_implementation=='PyPy'

I just checked with pip install numpy==1.20.0 in my environment. Pip complains about tensorflow being incompatible with it, but dclab imports and the tests run just fine. I assume that is because of the backwards compatibility (https://pypi.org/project/oldest-supported-numpy/):

The reason to use the oldest available Numpy version as a build-time dependency is because of ABI compatibility. Binaries compiled with old Numpy versions are binary compatible with newer Numpy versions, but not vice versa.


1fish2 commented Feb 12, 2021

Here's another test case to help clarify the issue.

Steps to reproduce

This problem happens on Linux, in Docker for Mac, but not on Mac outside of Docker.

pyenv virtualenv 3.8.5 test-cvxpy && pyenv shell test-cvxpy  # a fresh virtualenv
pip install -U pip  # install the latest pip
pip install numpy==1.19.5  # install the project's numpy version; not yet using 1.20.*
pip install cvxpy==1.1.7  # install cvxpy
pip list  # it has pip==21.0.1, numpy==1.19.5, cvxpy==1.1.7
python -c "import cvxpy"

Result

RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/groups/mcovert/pyenv/versions/test-cvxpy/lib/python3.8/site-packages/cvxpy/__init__.py", line 18, in <module>
    from cvxpy.atoms import *
  File "/home/groups/mcovert/pyenv/versions/test-cvxpy/lib/python3.8/site-packages/cvxpy/atoms/__init__.py", line 20, in <module>
    from cvxpy.atoms.geo_mean import geo_mean
  File "/home/groups/mcovert/pyenv/versions/test-cvxpy/lib/python3.8/site-packages/cvxpy/atoms/geo_mean.py", line 20, in <module>
    from cvxpy.utilities.power_tools import (fracify, decompose, approx_error, lower_bound,
  File "/home/groups/mcovert/pyenv/versions/test-cvxpy/lib/python3.8/site-packages/cvxpy/utilities/power_tools.py", line 18, in <module>
    from cvxpy.atoms.affine.reshape import reshape
  File "/home/groups/mcovert/pyenv/versions/test-cvxpy/lib/python3.8/site-packages/cvxpy/atoms/affine/reshape.py", line 18, in <module>
    from cvxpy.atoms.affine.hstack import hstack
  File "/home/groups/mcovert/pyenv/versions/test-cvxpy/lib/python3.8/site-packages/cvxpy/atoms/affine/hstack.py", line 18, in <module>
    from cvxpy.atoms.affine.affine_atom import AffAtom
  File "/home/groups/mcovert/pyenv/versions/test-cvxpy/lib/python3.8/site-packages/cvxpy/atoms/affine/affine_atom.py", line 22, in <module>
    from cvxpy.cvxcore.python import canonInterface
  File "/home/groups/mcovert/pyenv/versions/test-cvxpy/lib/python3.8/site-packages/cvxpy/cvxcore/python/__init__.py", line 3, in <module>
    import _cvxcore
ImportError: numpy.core.multiarray failed to import

Notes

  • There's no visible indication that numpy 1.20 came into play.
  • The exception messages aren't very helpful except that a Google search on cvxpy module compiled against API version 0xe but this version of numpy is 0xd will find https://github.com/cvxgrp/cvxpy/issues/1229 which links here.
  • This worked until recently. Was a pip change involved? https://github.com/cvxgrp/cvxpy/issues/1229 says it started happening when numpy 1.20 was released.

Workaround 1: Update to cvxpy==1.1.10 which adds "oldest-supported-numpy" although 1.1.10 is not listed as the "latest release" and its description is simply Bump version: 1.1.9 → 1.1.10.

Workaround 2: Update the project to numpy==1.20.*. It's a big release with some deprecations and maybe API changes.

@RonnyPfannschmidt (Contributor)

Structurally this needs better tooling: either the build-time numpy needs to be pinned low, or the build process needs to generate wheels with updated requirements.

However, none of that tooling belongs in pip; this is a topic for numpy, setuptools and the build backends as far as I can tell.


1fish2 commented Feb 12, 2021

Interesting. There's a leaky abstraction or two in there somewhere. A Dockerfile aims to be a repeatable build but these steps inside it:

pip install numpy==1.19.5
pip install numpy==1.19.5 cvxpy==1.1.7

now quietly build a broken Python environment just because numpy==1.20 was released.

  • Why is --no-build-isolation not the default? Does the build process need to install other packages temporarily?
  • Could it at least build cvxpy using the same release of numpy that's installed and named in the current pip install command?
  • If the above commands included --no-binary=numpy (which compiles numpy from source, e.g. in order to link to a specific OpenBLAS) would the cvxpy temporarily install numpy the same way? If not, could that also break the cvxpy installation?


pfmoore commented Feb 12, 2021

  • Why is --no-build-isolation not the default? Does the build process need to install other packages temporarily?

Yes, precisely that. There's no reason to assume that if a user runs pip install cvxpy, they want (or have) numpy installed in their environment. Pip has no reason to assume that cvxpy needs numpy at runtime, just because it needs it at build time (cython is an obvious example of why that's the case).

  • Could it at least build cvxpy using the same release of numpy that's installed and named in the current pip install command?

Why should pip assume that's the right thing to do? It might be for numpy, but we don't want to special-case numpy here, and there's no reason why it would be true in the general case. You might need to build with a particular version of setuptools, but have a runtime dependency on any version, because all you need at runtime is some simple function from pkg_resources.

  • If the above commands included --no-binary=numpy (which compiles numpy from source, e.g. in order to link to a specific OpenBLAS) would the cvxpy temporarily install numpy the same way? If not, could that also break the cvxpy installation?

Honestly, I have no idea. (I don't know without checking the source whether --no-binary is copied into the build environment invocation of pip, although I suspect it isn't). And I don't even know whether it should be (again, remember that we need to be thinking about "in general" here, not basing the answer on numpy-specific details).

It's quite possible that there are additional bits of metadata, or additional mechanisms, that would make it easier to specify cases like this. But designing something like that is hard, and most people who need that sort of thing are extremely focused on their particular use cases, and don't have a good feel for the more general case (nor should they, it's not relevant to them). So it's hard to find anyone with both the motivation and the knowledge to look at the problem. Which is also why non-pip domain specific solutions like the "oldest supported numpy" thing are probably a better approach...

@cburca-resilient

I agree with pfmoore in that I don't think numpy should be special-cased.

If you want to have reproducible builds with pip, it looks like you need to define two files:

  • pyproject.toml for build dependencies
  • requirements.txt for runtime dependencies

If you're using numpy both to build extensions and at runtime, you'll want to specify the exact same version of numpy in both.
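
A rough sketch of that setup, with an illustrative pin (not a recommendation for a specific numpy version):

# pyproject.toml
[build-system]
requires = ["setuptools", "wheel", "numpy==1.19.5"]
build-backend = "setuptools.build_meta"

# requirements.txt
numpy==1.19.5

Keeping the two pins in sync is then a manual step (or one handled by whatever tooling generates your requirements.txt).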

I wasn't too familiar with pyproject.toml before being bitten by this bug, but the following blog post does a good job of explaining the rationale: https://snarky.ca/what-the-heck-is-pyproject-toml/


1fish2 commented Feb 12, 2021

Indeed, I was starting to wonder whether cvxpy should specify building and running with the same version of numpy - but as a library, it should leave the choice of numpy version to the application.

cvxpy must specify a runtime requirement on numpy directly or indirectly, since installing cvxpy==1.1.7 into a fresh virtualenv gets this pip list:

Package    Version
---------- -----------
cvxpy      1.1.7
ecos       2.0.7.post1
numpy      1.20.1
osqp       0.6.2.post0
pip        21.0.1
qdldl      0.1.5.post0
scipy      1.6.0
scs        2.1.2
setuptools 49.2.1

I'm not saying pip should special-case numpy, just that the combination of tools is now failing subtly - fragile to a new release of one library, even in builds that tried to freeze all library versions - and this is probably puzzling lots of developers after they rebuild their Python environments.

(pyproject.toml and built-time dependencies are news to me.)

@jdavies-st

Here's a similar, but slightly different case:

  • Package A
    • doesn't have numpy in its pyproject.toml, as it's only a runtime dependency
    • specifies a specific version of numpy==1.19.4 in a requirements.txt
    • depends on Package B, also specified in the same requirements.txt
    • maintainer of Package A does not maintain Package B, only depends on it
  • Package B
    • has a pyproject.toml listing numpy as a build dependency (has C code in the package).
    • is installed from PyPI via an sdist, so needs to be built/compiled when installed
    • can be built/compiled with a wide range of numpy versions

The above scenario produces the same result: the pinned version numpy==1.19.4 from Package A is not used to build the dependency Package B, which does need numpy at build time. The same error results.

@jdavies-st

As a follow-up to my comment above: as a consumer of Package B, which I do not maintain but do depend on, is there a way to control what version of a build dependency pip uses in the isolated build? Concretely, is there actually any way to control which version of numpy is used in the isolated build env for a package that lists numpy as a build dependency in its pyproject.toml?

lrvdijk added a commit to lrvdijk/hdmedians that referenced this issue Feb 25, 2021
Fixes incompatible numpy versions build vs runtime, as NumPy
v1.20 is binary incompatible with older versions.

See pypa/pip#9542

d1saster commented Sep 8, 2022

There hasn't been much discussion in this issue lately, but for future reference I want to add that this is not only an issue for numpy and its ecosystem of dependent packages, but also for other packages. In helmholtz-analytics/mpi4torch#7 we face a similar issue with pytorch and I don't think that the purported solution of creating a meta package like oldest-supported-numpy would rectify the situation in our case, simply since pytorch is much more lenient regarding API/ABI compatibility across versions. So for me this issue mostly reads like "current build isolation implementation in pip breaks C/C++ ABI dependencies across different packages."

To be fair, pip's behavior probably is fully PEP517/518 compliant, since these PEPs only specify "minimal build dependencies" and how to proceed with building a single package. What we are asking for is more: We want pip to install "minimal build dependencies compatible with the to-be installed set of other packages".

This got me thinking: given that pip calls itself to install the build dependencies in build_env.py, couldn't one add something like "weak constraints" (weak in the sense that build dependencies according to PEP 517/518 always take precedence) that contain the selected, version-pinned set of the other to-be-installed packages?

However, and that is probably where the snake bites its tail, the build environments AFAIK already need to be prepared for potential candidates of to-be-installed packages. As such we would not have the final set of packages available, and even for simply iterating over candidate sets one can anticipate that this could become expensive, and there are probably some nasty corner cases. @pradyunsg Is this the issue you are referring to in your comment? If so, do you have an idea on how to fix this?


sbidoul commented Sep 8, 2022

I'm wondering if some sort of --build-constraints option (similar to --constraints) would help? I don't know if that has been proposed yet.


pradyunsg commented Sep 9, 2022

FWIW, it's already possible to use the PIP_CONSTRAINT environment variable to do the "careful pinning" I mentioned in an earlier comment.

pradyunsg added the "C: build logic" label and removed the "S: needs triage" label on Sep 9, 2022

pradyunsg commented Sep 9, 2022

@pradyunsg Is this the issue you are refering to in your comment? If so, do you have an idea on how to fix this?

Precisely.

When building a package, pip does not know what exact set of dependencies it will end up with because it has not seen the entire set of dependencies yet. For how to "fix" this on pip's end -- it's not something that pip has enough information to fix.

We do have one mechanism to provide this additional information to pip though, and that's what the PIP_CONSTRAINT comment I made earlier today is about. Specifically, that is using https://pip.pypa.io/en/stable/user_guide/#constraints-files to constrain what packages get used by pip. The environment variable is derived from https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-c, as described in https://pip.pypa.io/en/stable/topics/configuration/. The environment variable is also seen by all pip subprocesses (thanks to the way the OS manages them), so the constraint file affects the build environment's subprocess as well.

To provide a concrete example... I'll use this usecase:

# constraints.txt
numpy==1.19.5
cvxpy==1.1.7

$ PIP_CONSTRAINT=constraints.txt pip install numpy cvxpy

This will install the pinned versions and use them for the build as well.
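
Applied to the reproduction from the issue description, the same mechanism would presumably look like the following untested sketch - the point being that the constraint also reaches the isolated build environment:

# constraints.txt
numpy==1.19.5

$ PIP_CONSTRAINT=constraints.txt pip install numpy==1.19.5 py_find_1st==1.1.4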

I'm wondering if some sort of --build-constraints option (similar to --constraints) would help?

Well, for the use case here, I reckon that passing the --constraints down to the subprocess call might actually be the right behaviour instead. I'm not quite sure what implications doing that has, but I think it might actually be the right behaviour to have here.

pradyunsg added the "C: constraint" and "type: feature request" labels on Sep 10, 2022

rgommers commented Feb 2, 2023

There is a significant issue here with dependencies with ABI constraints, and with NumPy in particular because it's so widely used (and because its runtime deps will be wrong if you build against a too-new version of numpy). As the maintainer of most NumPy build & packaging tools and docs, let me try to give an answer.

Regarding what to do right now if you run into this problem:

  1. Short answer: use oldest-supported-numpy
  2. Longer answer: if you need different numpy versions than the ones oldest-supported-numpy pins to, you are kinda on your own and have to maintain similar pins to the ones oldest-supported-numpy gives you (see SciPy's pyproject.toml for an example of this, and the sketch below). In that case, please read https://numpy.org/devdocs/dev/depending_on_numpy.html#adding-a-dependency-on-numpy. If anything is missing there, I'd be happy to update the guidance.
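
For the second option, the shape of such hand-maintained pins (modelled on what oldest-supported-numpy does; the per-Python versions below are illustrative, not a vetted recommendation) would be roughly:

[build-system]
requires = [
    "setuptools",
    "wheel",
    # oldest numpy you intend to support on each Python version, so wheels
    # built against it remain ABI-compatible with newer numpy at runtime
    "numpy==1.19.3; python_version=='3.9'",
    "numpy==1.21.6; python_version=='3.10'",
    "numpy; python_version>='3.11'",
]
build-backend = "setuptools.build_meta"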

There will still be an issue here: it's possible to trigger build isolation and end up with a wheel built against numpy 1.X1.Y1 which contains a runtime dependency numpy >= 1.X2.Y2, with X2 smaller than X1. This is unavoidable right now unless you use oldest-supported-numpy - but typically that is a very minor issue, and you can always fix it by upgrading numpy so the runtime and build-time versions match.

To work towards better solutions for this issue:

For more context on depending on packages with an ABI (uses NumPy and PyTorch as qualitatively different examples), see https://pypackaging-native.github.io/key-issues/abi/.

The answer is not to do anything in pip here imho. The problem is that wheels are being produced where the runtime dependencies are simply incorrect. So that must be fixed, rather than having pip work around it. It is build backends that produce such wheels, so the first place for improvements is there. Here is what needs to be implemented: mesonbuild/meson-python#29.

Once the build backend specific solutions have been established, it may make sense to look at standardizing that solution so it can be expressed in a single way in the build-system and dependencies sections of pyproject.toml, rather than it being slightly different for every build backend (we're going to try to at least keep meson-python and scikit-build-core in sync here).

The --build-constraints kind of UX for pip to override dependencies is certainly helpful, and for many cases other than dealing with ABI versions - so +1 for that. But it's a "work around wrong metadata" last resort that we should try to avoid needing for the average user who only wants to express something like numpy >= 1.21.3.


d1saster commented Feb 2, 2023

Thanks for the additional pointers. Having a resource like pypackaging-native to gather info is certainly a good idea.

However I disagree with one of your conclusions:

The problem is that wheels are being produced where the runtime dependencies are simply incorrect.

setuptools is flexible enough to add this runtime dependency to the generated wheel (which I already use). Hence just uploading the sdist to PyPI and having the build system pin the runtime requirement of the produced wheel is not a solution IMHO.

I agree that the PIP_CONSTRAINT fix should not be the preferred and permanent solution.


rgommers commented Feb 2, 2023

setuptools is flexible enough to add this runtime dependency to the generated wheel (which I already use).

Fair enough - when you run custom code in setup.py you can always make this work. There's a couple of very similar but different issues described in this thread I think. Yours is because of the == pin in the runtime environment I believe. I'm interested in making this work well with only pyproject.toml metadata, rather than dynamic dependencies and running custom code in setup.py. Which is possible in principle, but not today.

It's also fair to say I think that when the runtime dependencies are correct, then pip install mypkg numpy==1.19.5 should error out because of incompatible runtime constraints if the mypkg wheel contains numpy>=1.20.0. The example in the issue description here seems to happily install two packages with incompatible constraints.

  • Why is --no-build-isolation not the default? Does the build process need to install other packages temporarily?

That's perhaps another angle of looking at this indeed - if --no-build-isolation were the default, there would be no problem here.

Yes, precisely that. There's no reason to assume that if a user runs pip install cvxpy, they want (or have) numpy installed in their environment. Pip has no reason to assume that cvxpy needs numpy at runtime, just because it needs it at build time (cython is an obvious example of why that's the case).

That's not quite the reason as I remember it. If it's unused, it also wouldn't do any harm. The kind of thing this issue is about - building from source on an end user machine - works better without build isolation. Build isolation was chosen as the default to make builds more repeatable, in particular for building wheels to upload to PyPI. Which was a valid choice - but it came at the cost of introducing issues like this one. When a user has numpy==1.19.5 in their runtime env, they are best served by also using that as the version to build against.


pfmoore commented Feb 2, 2023

The kind of thing this issue is about - building from source on an end user machine - works better without build isolation.

I'd strongly disagree. I don't want building a package (as part of pip install X) to fail because I don't have setuptools (or flit, or poetry, or...) installed. But I also don't want pip to install the build backend into my environment. Build isolation fixes all of this.

Maybe you meant "on a developer machine"? Or maybe you meant "works better with build isolation"? Or maybe you're assuming that no end user ever needs to install a package that's available only in sdist form?


rgommers commented Feb 3, 2023

@pfmoore no typo, this really does work better without build isolation. Build isolation is a tradeoff, some things get better, some things get worse. Dealing with numpy-like ABI constraints is certainly worse (as this issue shows). There are other cases, for example when using pip install . in a conda/spack/nix env, you never want build isolation if you deal with native dependencies. Same for editable installs, that arguably should disable build isolation.

No worries, I am not planning to propose any changes to how things work today. You just have to be aware that it's not clearcut and there are some conceptual issues with the current design.

Or maybe you're assuming that no end user ever needs to install a package that's available only in sdist form?

On the contrary - I do it all the time, and so do the many users whose bug reports on NumPy and SciPy I deal with.


pfmoore commented Feb 3, 2023

@rgommers OK, fair enough. But I still think it's right for build isolation to be the default, and where non-isolated builds are better, people should opt in. That's the comment I was uncomfortable with. I agree 100% that we need better protection for users who don't have the knowledge to set things up for non-isolated builds so that they don't get dumped with a complex build they weren't expecting. But I don't want anyone to have to install setuptools before they can install some simple pure-python package that the author just hasn't uploaded a wheel for.


sbidoul commented Feb 3, 2023

Same for editable installs, that arguably should disable build isolation.

I disagree with that. I often do editable installs in production environments (in Dockerfiles, for instance), or in CI jobs, and I don't want the build dependencies in the runtime environment. So there are tradeoffs there too.

d1saster added a commit to d1saster/pip that referenced this issue Feb 3, 2023
This commit provides a simple test that demonstrates the issues
a resolver-unaware build isolation imposes on packages with C/C++
ABI dependencies.

Cf. pypa#9542 for the corresponding
discussion.

d1saster commented Feb 3, 2023

setuptools is flexible enough to add this runtime dependency to the generated wheel (which I already use).

Fair enough - when you run custom code in setup.py you can always make this work. There's a couple of very similar but different issues described in this thread I think. Yours is because of the == pin in the runtime environment I believe. I'm interested in making this work well with only pyproject.toml metadata, rather than dynamic dependencies and running custom code in setup.py. Which is possible in principle, but not today.

It's also fair to say I think that when the runtime dependencies are correct, then pip install mypkg numpy==1.19.5 should error out because of incompatible runtime constraints if the mypkg wheel contains numpy>=1.20.0. The example in the issue description here seems to happily install two packages with incompatible constraints.

Ok, now I understand what you meant. Sorry, I was getting ahead of myself there, and I fully agree with you that many people/packages facing this issue need to pin the version in the built wheel files. This is certainly something that is a non-issue for pip, but rather needs fixing in the build backends, as you suggested e.g. for meson-python.

However, and this is the point in my opinion: even when people have fixed their packages or build backends, the issue persists. It is no longer a runtime issue, as the original reporter in this thread experienced; it becomes an install-time issue. And this might very well be something pip could (and maybe even should) address.

To highlight the install-time issue I created a draft PR #11778 that adds a (so far failing) test to the pip test collection. Maybe somebody has a good idea on how to proceed from there.

Regarding your idea about the metadata, that is probably the big question in my opinion: is it possible to fix this issue, maybe by implementing a good-enough heuristic in pip that works for most cases, or does it need additional metadata to find feasible solutions?


pfmoore commented Feb 3, 2023

To highlight the install-time issue I created a draft PR #11778 that adds a (so far failing) test to the pip test collection. Maybe sb. has a good idea on how to proceed from there.

To be honest, I've lost the thread of what's going on here. And a PR including just a test that claims to demonstrate "the problem", without clearly explaining what the problem is in isolation (i.e., without expecting the reader to have followed this whole discussion) isn't of much help here.

If someone can add a comment to the PR describing a way to reproduce the issue it's trying to demonstrate, in terms of how to manually write a package that shows the problem, with step by step explanations, that would help a lot. I tried and failed to reverse engineer the logic of the test (the unquoted_string business lost me).


rgommers commented Feb 3, 2023

Regarding your idea about the metadata, that is probably the big question in my opinion: Is it possible to fix this issue, maybe by implementing a good-enough heuristic in pip, that works for most cases, or does it need additional metadata to find feasible solutions.

No additional metadata is needed I believe. Right now, this example from the issue description:

pip install --no-cache numpy==1.19.5 py_find_1st==1.1.4

should error out if the runtime dependencies are correct in the py_find_1st wheel - with an understandable error message. Something like "numpy==1.19.5 and numpy>=1.24.1 (coming from py_find_1st==1.1.4) constraints are incompatible". Bonus points for pointing to the two possible solutions: using PIP_CONSTRAINT or removing the explicit ==1.19.5 pin.

There is no way to "fix" this in pip by automatically changing something - the two constraints are actually incompatible.


uranusjr commented Feb 8, 2023

Has anyone mentioned #4582? It already discusses the same topic in considerable depth, and many of the people who responded above are involved there.
