[bug] conan upload -r fails to resolve python_requires from other remotes #6034

Closed
DoDoENT opened this issue Nov 6, 2019 · 7 comments · Fixed by #5804

DoDoENT (Contributor) commented Nov 6, 2019

Environment Details (include every applicable attribute)

  • Conan version: 1.20.2

Steps to reproduce (Include if Applicable)

  • create two conan repositories in your local Artifactory instance (conan-first and conan-second)
  • create a python conan package and upload it to conan-first (let's call it PyPack)
  • create a conan package that python_requires the PyPack package (let's call it MyPack)
  • delete PyPack from your local ~/.conan/data cache
  • upload MyPack to conan-second
  • the upload fails with the following message:
$ conan upload MyPack/5.1.0@user/test -r conan-second
PyPack/3.0.2@microblink/stable: Retrieving from server 'conan-second' 
PyPack/3.0.2@microblink/stable: Trying with 'conan-second'...
ERROR: Error loading conanfile at '/Users/dodo/.conan/data/MyPack/5.1.0/user/test/export/conanfile.py': Unable to find python_requires("PyPack/3.0.2@microblink/stable") in remotes
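
For reference, a minimal sketch of the two recipes involved, using the legacy module-level python_requires syntax that was current in Conan 1.20 (names are taken from the output above; the recipe bodies are illustrative):

# PyPack/conanfile.py -- the python_requires package (sketch)
from conans import ConanFile

class PyPack(ConanFile):
    name = "PyPack"
    version = "3.0.2"
    # shared python helpers used by consumer recipes live here


# MyPack/conanfile.py -- consumes PyPack via python_requires (sketch)
from conans import ConanFile, python_requires

base = python_requires("PyPack/3.0.2@microblink/stable")

class MyPack(ConanFile):
    name = "MyPack"
    version = "5.1.0"

Once PyPack is removed from the local cache, loading MyPack's exported conanfile.py forces Conan to re-resolve the python_requires, which is where the single-remote restriction bites.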

The problem is that -r conan-second is interpreted both as the upload target and as the only remote from which python_requires dependencies may be resolved, so PyPack, which exists only on conan-first, cannot be found.

In my use case, conan-first is the primary conan repository, containing stable packages for production use. I want my CI to be able to build testing/staging packages and upload them to conan-second, which is a kind of temporary conan repository for testing code against unreleased packages. That repository has far more administrators who can delete or change already-built packages.

danimtb self-assigned this Nov 6, 2019
danimtb (Member) commented Nov 6, 2019

I understand what you mean, and this could be considered a bug. Do you experience the same behavior with older Conan versions? Maybe @memsharded has some clue about this.

memsharded (Member) commented

This is mostly by design. Because of the evil inversion of control, it is really difficult to handle, and upload doesn't have the logic to install from other remotes.

The new python_requires approach we are working on, with delayed loading of python_requires as attributes, should probably remove this issue. Let's add tests to #5804.

DoDoENT (Contributor, Author) commented Nov 6, 2019

I didn't try with older conan versions. I've worked around the problem on the Jenkins side by manipulating the order of remotes and then invoking the upload without the -r parameter (a sketch is below). So far it behaves according to the documentation, i.e. it uploads to the first remote in the remotes list.
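
For anyone hitting the same problem, a sketch of that workaround with Conan 1.x commands (the Artifactory URL is a placeholder for your own instance):

# Move conan-second to the front of the remote list so that a plain
# "conan upload" targets it, while python_requires can still be
# resolved from conan-first further down the list.
$ conan remote update conan-second https://artifactory.example.com/api/conan/conan-second --insert 0
$ conan upload MyPack/5.1.0@user/test --all --confirm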

danimtb (Member) commented Nov 7, 2019

Thanks for the info about the workaround; that is probably the best solution right now. Could you explain why the python_requires package is deleted from the cache before doing the upload? I guess it matters in your current CI flow, and I would like to understand it.

DoDoENT (Contributor, Author) commented Nov 7, 2019

It's not deleted - it never existed in the first place :)

The reason for this is that I have a separate package-build stage in CI which builds conan packages in parallel on different CI nodes for different platforms. However, I do not want any of the builders to upload their packages to the Artifactory server until all builders have successfully built their packages; without that, I would end up with broken packages on my Artifactory server. So each builder, after it finishes building packages, first calls a dry run of the upload to make sure the tgz files appear (sketched below), and then stashes the built packages from the conan cache. Note that only packages that have a build folder are stashed, to save server resources.

When all builders finish, it's safe to upload the built packages. For that, Jenkins spawns N new parallel tasks, each of which unstashes the conan cache stashed in the previous stage and simply uploads everything to Artifactory. Thus, when building packages, our CI makes sure to upload all built packages, including packages built as dependencies. However, the python_requires package is not built as a dependency and thus not stashed on the builder, and it is not easy to determine all the python_requires packages that would need to be stashed. The current workaround is much easier, since the Jenkins node always works with an empty conan client and a temporary conan cache (using the Artifactory plugin for Jenkins).
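
A sketch of that dry-run step, assuming Conan 1.x flags (--skip-upload runs the checks and compresses the tgz files without transferring anything):

# Dry run: create the recipe/package tgz files in the cache without
# uploading, so the stashed cache already contains them for the later stage.
$ conan upload "*" --all --confirm --skip-upload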

Here is a screenshot of our Jenkins process (this job is triggered on every repository after merging a PR to the stable branch).

[Screenshot: Jenkins pipeline stages]

memsharded (Member) commented

> However, I do not want any of the builders to upload their packages to the Artifactory server until all builders have successfully built their packages; without that, I would end up with broken packages on my Artifactory server. So each builder, after it finishes building packages, first calls a dry run of the upload to make sure the tgz files appear, and then stashes the built packages from the conan cache. Note that only packages that have a build folder are stashed, to save server resources.

The current recommended practice is to upload them to a temporary or "builds" Artifactory repository. You can use properties like the build number. When all the jobs finish, you can run a "promotion" and move/copy the artifacts labelled with that property to the "production" Artifactory repository. That is, use Artifactory as the place to put packages instead of staging them in Jenkins. This has a few advantages, like much easier inspection of the packages of a build directly from Artifactory. Or, something we do a lot: if a pipeline fails because some build server failed (memory, network..), it is not necessary to re-build everything, only the failed job, saving a lot of resources. We use Artifactory as the database of the CI. I'd recommend considering this approach.
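
As a rough sketch of that flow using the JFrog CLI (the repository names, property key, and build number here are illustrative; the exact flags depend on your Artifactory setup and CLI version):

# Each CI job uploads to a temporary "builds" repository and tags the
# artifacts with the build number:
$ conan upload "*" --all --confirm -r conan-builds
$ jfrog rt set-props "conan-builds/*" "build.number=123"
# Once every job succeeds, promote: copy everything carrying that
# property into the production repository:
$ jfrog rt copy --props "build.number=123" "conan-builds/" "conan-first/"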

Also, from your comments, it seems that you could leverage the lockfiles or the JSON output of the commands. They will tell you what has been modified/built, so there is no need to manually inspect the cache looking for "build" folders.

To gather the python_requires that are involved, you might also leverage the lockfiles, which contain the information of the dependency graph.
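
A sketch of both ideas with Conan 1.x commands (lockfiles were still experimental around 1.20, so flag spellings may differ between minor versions):

# Capture the full dependency graph, python_requires included, in conan.lock:
$ conan graph lock MyPack/5.1.0@user/test
# Build against the lockfile; the --json output records, per binary,
# whether it was built or fetched, replacing the "build folder" check:
$ conan create . user/test --lockfile=conan.lock --json build.json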

So, at the moment, I think your workaround is good enough, but please consider these hints. Cheers!

DoDoENT (Contributor, Author) commented Dec 5, 2019

Thank you for the hints. I will consider them and see whether they fit our workflow (e.g. I will need a way to delete temporary artefacts from the "builds" repository if some branch fails, since I definitely do not want its storage to grow indefinitely, nor to clean it manually; I will also need a way to automatically promote packages when all build branches are OK).

I like the idea of using Artifactory as the database for Jenkins. It is also valuable for non-C++/Conan-related stuff. Thank you for that!
