mach-nix fails to find scipy dependency #5
Hey, thanks for reporting the issue. It appears that the pypi crawler which maintains the dependency graph fails on extracting the dependencies for scipy, with the following traceback:

[traceback omitted]

The database containing detailed extraction errors (like this one) is currently non-public. I will try to build a little website where people can access status information about all crawled libraries ASAP. Scipy tries to import numpy at the top of its setup.py; I already noticed that scipy and some other related libraries do some very hacky stuff in their setup.py. The fix for this has to be done in the pypi-crawlers project.
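For context on why a module-level import breaks extraction: crawlers of this kind typically capture install_requires by monkey-patching setuptools.setup and executing the package's setup.py. The following is only a minimal sketch of that idea, not the actual pypi-crawlers code (a real extractor also patches other entry points such as distutils.core.setup and sandboxes the execution):

```python
import setuptools

captured = {}

def fake_setup(**kwargs):
    # Record the metadata instead of building or installing anything.
    captured.update(kwargs)

# Patch setup() before executing the package's setup.py, so that a later
# "from setuptools import setup" inside it picks up the fake.
setuptools.setup = fake_setup

try:
    # Executing setup.py runs ALL of its module-level code, so a bare
    # "import numpy" at the top (as in scipy's setup.py) raises
    # ImportError unless numpy is already present in the extraction env.
    with open("setup.py") as f:
        code = compile(f.read(), "setup.py", "exec")
    exec(code, {"__name__": "__main__", "__file__": "setup.py"})
    print("install_requires:", captured.get("install_requires"))
except ImportError as exc:
    print("extraction failed:", exc)
```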
Maybe that could be fixed in SciPy's code, too. This is super great work so far, and I am so thankful for what you have done. I would love to give you a hand on adding support for wheels or whatever. I can also pester upstream about their way of doing things; that's better than nothing.
It makes me super happy to hear that my work is useful to you. Getting some help would be amazing!
@DavHau I'll try to jump on Matrix later today. I think it would be super helpful to know how you envision moving forward (your future milestones, what you think must be done to achieve wheel support, known limitations, nice-to-haves, etc.), so that people can jump in and try to contribute. A classical "contributing" headline in the README would already help, for example.

Also, I don't know how hard it would be to debug the PyPI crawler on only one package like SciPy. I don't want to run it on the whole universe (as it looks like it takes a certain amount of time :D), so it would be great if there were a bit of documentation on debugging. I can take care of it if you could just show me some pointers on how to do it!
Well, it sounds like they're doing strange stuff; I just read their setup.py.
Thanks for your suggestions. That's helpful. I'm going to open some new issues with my thoughts soon and let you know. Debugging the crawler's extraction on a single package shouldn't be difficult. I will add some documentation on that ASAP.
I added some instructions on how to debug dependency extraction to pypi-crawlers. To test my instructions, I debugged scipy's setup.py already.
Makes sense. I think that Poetry2Nix already supports PEP 518, so we can reuse their work.
> It should be fairly simple since it's just about parsing the pyproject.toml file. Nothing must be fake-installed.

I need to correct this. PEP 518 is just for specifying the minimum build system requirements, which don't necessarily include all the requirements we are interested in. Therefore we would actually need to build an environment with these minimum requirements and then execute the usual extraction inside this environment. I described it in more detail in the corresponding issue for pypi-crawlers.
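To illustrate the distinction: all that PEP 518 standardizes is the [build-system] table of pyproject.toml, i.e. what is needed to build the package, not what it needs at runtime. A minimal sketch (assuming Python 3.11+ for the stdlib tomllib module; older versions need the third-party tomli package):

```python
import tomllib

with open("pyproject.toml", "rb") as f:
    pyproject = tomllib.load(f)

# PEP 518 only guarantees the build requirements, e.g. for a scipy-like
# package something like ["setuptools", "wheel", "numpy", ...].
print(pyproject["build-system"]["requires"])

# The runtime dependencies (install_requires / Requires-Dist) are not part
# of PEP 518, so an environment satisfying the build requirements still has
# to be created before the usual dependency extraction can run inside it.
```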
We could also use stuff like this: https://pypi.org/simple/tensorflow/ in order to generate an index of wheel URLs. That's something that Poetry2Nix cannot do at the moment, for example. I'm trying to see how URLs to PyPI are generated, but I can't find the proper PEP for this.
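For what it's worth, the page linked above is the HTML "simple" repository index, whose format was standardized in PEP 503. A naive sketch of collecting wheel URLs from it (illustration only; a real crawler should use a proper HTML parser and keep the #sha256= fragments for verification):

```python
import re
import urllib.request

# Each release file of a package is exposed as one <a href="..."> entry
# on its simple-index page.
url = "https://pypi.org/simple/tensorflow/"
with urllib.request.urlopen(url) as resp:
    html = resp.read().decode()

# Keep only wheel files; strip the hash fragment to get the bare URL.
hrefs = re.findall(r'href="([^"]+)"', html)
wheel_urls = [h.split("#")[0] for h in hrefs if ".whl" in h]
print(len(wheel_urls), "wheels found")
print(wheel_urls[:3])
```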
Crawling all packages' URL names is not a problem at all and is done in under 30 minutes. We need to do that anyway, since we need to learn which kinds of releases are available and so on. But storing the URLs actually takes a lot of space currently, which could be reduced. I opened an issue concerning this here: DavHau/pypi-crawlers#2. Let's collect our ideas there. I also didn't find a proper spec so far. I also opened #7 for adding wheel support.
Mach-nix version 2.0.0 has two new providers.
(Original issue description)

Hello, thank you for your project, it looks super promising. Just got this error [output not preserved] by trying mach-nix on the github.com/mangaki/zero dependencies. I'm pretty sure that scipy exists on PyPI and was released before mach-nix :P, but I don't know what's going on.