How to bootstrap setuptools from source #980
Comments
An easy solution to this problem would be to replace:

    from setuptools import setup

with:

    try:
        from setuptools import setup
    except ImportError:
        from distutils.core import setup

in the |
Ok, I hacked our package manager so that it can still install setuptools. spack/spack#3198 contains all of the changes necessary to build setuptools from source now that the dependencies are no longer vendored. I understand the frustration that comes with having to maintain vendored copies of all of your dependencies; I just hope that you don't add any dependencies that cannot be built with distutils. |
Can you use pip to install the dependencies from wheels (which don't require setuptools to install)? That's the recommended procedure. Or can your package manager install the wheels? |
Unfortunately, a package manager that requires another package manager kind of defeats the purpose, don't you think? It looks like other developers have already contributed patches to fix appdirs and pyparsing so that they can be built without setuptools. Assuming the developers approve those patches, we should be good for now. |
Yes, I sort of see that. I worry this approach may conflict with the long-term strategy for setuptools, which is that it might require arbitrary dependencies, some of which require setuptools. In fact, an early goal is to move pkg_resources into its own package, and that package would probably require setuptools to install. Is there a reason your package manager can't have pre-built versions of these packages available or vendor them itself (into the setuptools build recipe)? |
In general, yes, there is a reason. Our package manager is designed to install software in the world of HPC, where you may need to use exotic compilers like xlc to install something on a Blue Gene/Q or Cray supercomputer. Since we need to support hundreds of operating systems and dozens of compilers, we haven't spent much time on supporting binary package installation like Python's wheels or Homebrew's bottles. We have plans to do so in the future, but it has been low priority. Obviously, the compiler/OS doesn't matter much for non-compiled packages like setuptools, but the mechanism would have to be the same.

We have dealt with circular dependencies before. For example, pkg-config depends on glib and vice versa. Luckily the developers realized this, and pkg-config comes with its own internal copy of glib, much like setuptools used to come with its dependencies. This can be annoying since we end up having to add the same patches for glib to both packages, but it prevents a circular dependency, which is nice.

We could theoretically vendor the dependencies ourselves. That seems like the easiest solution to me if we ever run into a setuptools dependency where |
I'd say it's fairly common. And some packagers might be depending on those features and not realizing it, because pip will pre-import setuptools and setuptools monkey-patches distutils, so even a package that only imports distutils can still be relying on setuptools behaviour. |
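A minimal sketch of that situation (the project metadata below is invented): a setup.py that imports only distutils still picks up setuptools features when run under pip, because pip has already imported setuptools and setuptools has patched distutils by the time setup() runs.

```python
# setup.py -- imports only distutils, yet quietly relies on setuptools.
from distutils.core import setup

setup(
    name="example",        # placeholder metadata for illustration
    version="0.1",
    py_modules=["example"],
    # install_requires is a setuptools feature: plain distutils warns about
    # the unknown option and ignores it, but under pip the setuptools-patched
    # Distribution class honours it, so the package appears to work -- until
    # someone tries to build it without setuptools available.
    install_requires=["requests"],
)
```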
The below may not be appropriate for this thread, and some of it gets convoluted quickly; please disregard anything you feel is not appropriate here. Apologies in advance! @jaraco, is it correct that the general expectation will be
Is this correct, or should

Edit: I was browsing related issues, and things seem clearer now. When PEP 518 is standardized, the
|
@jaraco: how do the plans for

Even if you have build dependencies and can install the build deps from wheels, that is still a binary in the chain that someone has to trust. So I worry a bit about this direction. I suppose that we could implement some bootstrapping logic that goes back to older versions of setuptools if we need to, or we could rely on a baseline set of "trusted" wheels for setuptools (@adamjstewart is right that we don't have binary installs yet, but they're not that hard to add). But I'd rather reproduce from source, and I suspect other distros would too.

@svenevs: other than reproducibility, the main issue with relying on pip in Spack is that we need to be able to mirror builds on air-gapped networks. Spack traverses a dependency DAG, downloads all the sources, and lets you build them on networks that aren't connected to the internet. If we rely on pip, we can't do that. |
In general, requiring yourself as a dependency to bootstrap yourself leads to a lot of headaches. For example, we are having a lot of nightmares trying to package |
I wouldn't recommend using older versions of setuptools. That's an unsustainable approach in the long run. You could rely on trusted wheels, or you could even vendor your own bootstrapping logic. That is, you could write your own routine that resolves the setuptools build dependencies (or hard-codes them), builds them into a viable form for import, and injects them onto sys.path before building setuptools.

Hmm. This makes me wonder if setuptools should bundle its build dependencies. Rather than vendor its dependencies in general, it could bundle its build dependencies. I'll give that a go. |
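A rough sketch of that kind of bootstrapping routine, purely as an illustration (the paths, versions, and sdist layout below are assumptions, not anything setuptools or Spack ships):

```python
# bootstrap_setuptools.py -- hypothetical bootstrapping helper; all paths and
# version numbers are placeholders.
import os
import subprocess
import sys

# Unpacked sdists of the build dependencies. These happen to be pure Python
# and importable straight from their source trees.
BUNDLED_DEPS = [
    "/tmp/bootstrap/six-1.10.0",
    "/tmp/bootstrap/appdirs-1.4.3",
    "/tmp/bootstrap/packaging-16.8",
    "/tmp/bootstrap/pyparsing-2.2.0",
]

# Make the dependencies importable in this process and in the child build.
sys.path[:0] = BUNDLED_DEPS
env = dict(os.environ)
existing = env.get("PYTHONPATH")
env["PYTHONPATH"] = os.pathsep.join(BUNDLED_DEPS + ([existing] if existing else []))

# The setuptools build can now import six, appdirs, packaging, and pyparsing
# even though none of them are installed yet.
subprocess.check_call(
    [sys.executable, "setup.py", "install", "--user"],
    cwd="/tmp/bootstrap/setuptools-34.0.0",
    env=env,
)
```

The trick only works because all of the build dependencies are pure Python and can be imported directly from their unpacked sdists.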
I've drafted a new release in the feature/bundle-build-deps branch and released b1. Install it with

I notice that this change won't affect the most common use case, that of pip installing setuptools from source when dependencies aren't met. It fails because the setuptools shim is invoked before setuptools has a chance to inject its build dependencies. |
Can you elaborate on the distinction between the two options? |
Prior to Setuptools 34, setuptools would vendor its dependencies, including them as hacked-in copies inside the package itself.

With this new proposed technique, the dependencies are bundled into the sdist only to facilitate building and installing from source... but the dependencies are still declared as |
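One way to see the distinction in practice (assuming a Setuptools 34/35-era installation; the exact pins shown are from memory and may differ): the dependencies now appear in setuptools' declared requirements, whereas the vendored releases declared none.

```python
import pkg_resources

# Inspect what the installed setuptools declares as its install requirements.
dist = pkg_resources.get_distribution("setuptools")
print(dist.version)
print([str(req) for req in dist.requires()])
# On a 34.x/35.x install this prints something like
# ['packaging>=16.8', 'six>=1.6.0', 'appdirs>=1.4.0'];
# on the older vendored releases it prints [].
```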
To summarize, this would allow us to build |
@jaraco: Ok, so if I understand this correctly, we could then declare

One question: are any of the build dependencies also run dependencies? If they are, it seems like we'll still need to make sure we install the bundled run dependencies along with |
Yes. At the moment, the build dependencies and install dependencies are identical. So you would need those installed to invoke setuptools... which I can see is problematic if you then want to install the setuptools dependencies from source and they require setuptools. I guess if you're using

I guess what it's coming down to is that whatever installs from source needs a way to install setuptools and its dependencies in one operation (as pip does with wheels, or as setuptools did when the packages were vendored). |
Won't work on air-gapped clusters that have no connection to the outside world. We've been very careful about making sure that all of our packages can be installed without an internet connection. |
Ok, so I guess I have two questions:
|
It's different from just vendoring because it only takes place at build time, so it allows people to, say, upgrade the |
@dstufft: ok, so if we can get the bundled dependencies installed from source initially, so that we can use |
This (depending on six, appdirs, pyparsing, etc) really complicates packaging on Heroku, FWIW. |
@kennethreitz: Is that because Heroku builds these packages from source? Why not install from wheels using pip? What is the complication? |
@jaraco Heroku installs setuptools (and pip) by default into those environments, but people tend to also depend on libraries like six, appdirs, etc. in their own projects hosted on Heroku. The unvendoring means that setuptools and the project code are now competing over which versions of these libraries are acceptable to install. |
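A contrived illustration of that competition (the version pins here are invented): once setuptools declares its own requirement on appdirs, an application that pins a version outside that range can no longer satisfy both requirement sets in one environment.

```python
import pkg_resources

# Hypothetical pins: setuptools 34+ wants a newer appdirs than the app allows.
# Resolving both against the same environment typically raises an error.
try:
    pkg_resources.require("setuptools>=34", "appdirs==1.3.0")
except (pkg_resources.VersionConflict, pkg_resources.DistributionNotFound) as exc:
    print("cannot satisfy both requirement sets:", exc)
```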
This is an issue every other distribution is going to face eventually as well. Therefore, I recommend that each build tool vendor its dependencies. At least |
@jaraco people do

Here's what I'm doing about it: heroku/heroku-buildpack-python#397 |
Sorry if this has already been said; this discussion is quite long and I unfortunately haven't had time to read it fully. That said, I am still very much struggling with this issue on many fronts.

What if setuptools provided a mechanism to vendor in place and then used that functionality to manage these dependencies? Given the discussion here, it seems that vendoring needs to happen for setuptools, but it also needs to be done in a way where dependencies can be tracked and easily updated. This could also be a feature for those who need vendoring for other reasons. Admittedly, vendoring is not a great thing to use, but some cases (like setuptools) seem to need it. Might as well make it less painful. |
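Something like the following, perhaps; a minimal sketch of such a vendor-in-place helper (the function, paths, and layout are hypothetical, not an existing setuptools API): copy pure-Python dependencies into a private _vendor package so they can be tracked and refreshed in one step.

```python
# vendorize.py -- hypothetical "vendor in place" helper, not a setuptools API.
import shutil
from pathlib import Path


def vendor(dependencies, target):
    """Copy importable sources of pure-Python deps into <target>/_vendor."""
    vendor_dir = Path(target) / "_vendor"
    vendor_dir.mkdir(parents=True, exist_ok=True)
    (vendor_dir / "__init__.py").touch()
    for source in map(Path, dependencies):
        dest = vendor_dir / source.name
        # Refresh any existing copy, so updating a dependency is just a re-run.
        if dest.is_dir():
            shutil.rmtree(dest)
        elif dest.exists():
            dest.unlink()
        if source.is_dir():
            shutil.copytree(source, dest)
        else:
            shutil.copy2(source, dest)


# Example with placeholder paths: vendor six and packaging into setuptools.
vendor(["/tmp/deps/six.py", "/tmp/deps/packaging"], target="setuptools")
```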
With Setuptools 36, it once again vendors its dependencies. |
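For what it's worth, my reading of how the re-vendored copies are exposed (not something spelled out in this thread) is that the dependencies resolve through setuptools' extern shim, which falls back to the bundled private copies when the packages aren't installed separately.

```python
# With a Setuptools 36-era install, these imports succeed even when six and
# packaging are not installed as separate distributions, because the extern
# shim redirects them to setuptools' bundled copies.
from setuptools.extern import packaging, six

print(six.__name__)
print(packaging.__version__)
```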
\o/ |
If I'm not too overenthusiastic... "when will it be released?" (I don't see it on PyPI yet). |
Well, it should have been released automatically when I tagged the commit. I'll have to investigate why. |
Aah, there was an error in the deploy step due to the removal of requirements.txt. |
It should now be on PyPI. |
Seems to not have been deployed as that change was not on a tag. Would it make sense to add a 36.0.1 tag? ref: https://travis-ci.org/pypa/setuptools/jobs/237980870#L381 |
IMHO it would make sense to just upload the release manually using |
That's weird. I definitely ran the commands to cut a release. Oh, I ran |
Using twine, as recommended, I've uploaded the dists; they are now (verifiably) on PyPI. |
Great, thanks for doing this. |
Hi, I'm a developer for the Spack package manager. We've had a setuptools package that has worked fine for a while now, and the vast majority of our Python packages depend on it. However, I went to update to the latest version of setuptools and noticed that it now depends on six, packaging, and appdirs. Unfortunately, all 3 of these packages also depend on setuptools. six and packaging can fall back on distutils.core if necessary, but the setup.py for appdirs has a hard setuptools import.

It seems to me like it is no longer possible to build setuptools from source. Is this correct??