
[RFC] Binary Dependencies on Linked Repos #427

Closed
lasley opened this issue Jan 21, 2017 · 23 comments

@lasley
Contributor

lasley commented Jan 21, 2017

We have an issue that's about to start popping up in repos due to OCA/server-tools#659 & OCA/server-tools#642.

In these PRs, I had to add some binary dependencies in order to properly test the implementations. I didn't think through the implications of this, and one of my builds in product-attribute just failed because of the merged 10.0 PR: pip could not build the dependency without the SQL headers.

For the moment, I just added the binary dependencies in the Travis file to 🍏 my build. I might be able to come up with a solution that completely mocks out the external Python dependencies, but that seems kind of hacky & really just covers up a problem we're going to start seeing as repos become interdependent.

As I see it, we have two options:

  1. Ignore Pip install failures when recursing into other repos
  2. Create an apt_dependencies.txt & install them with a script

Honestly, I think the former is the better approach. But if we do that, we also need a way to exclude the modules that depend on the packages that won't be there, so they don't fail the build.

The latter approach would work, but I'm pretty sure it would require sudo: true on all our Travis files in order for a script to be able to install binaries. I haven't tested though, so maybe I'm wrong.
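
For concreteness, here is a minimal sketch of what option 2 could look like. The apt_dependencies.txt name and the script layout are assumptions for illustration, not an existing MQT convention:

```python
#!/usr/bin/env python
"""Sketch of option 2: install apt packages listed (one per line) in a
hypothetical apt_dependencies.txt at the repo root. Illustration only."""
import subprocess
import sys


def install_apt_dependencies(path='apt_dependencies.txt'):
    try:
        with open(path) as dep_file:
            # One package per line; skip blanks and comment lines.
            packages = [line.strip() for line in dep_file
                        if line.strip() and not line.startswith('#')]
    except IOError:
        return  # Repo declares no binary dependencies; nothing to do.
    if packages:
        # This is the sudo call that would force `sudo: true` on Travis.
        subprocess.check_call(['sudo', 'apt-get', 'install', '-y'] + packages)


if __name__ == '__main__':
    install_apt_dependencies(*sys.argv[1:])
```

Recursing into linked repos would then just mean running the same script in each checkout.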

Anyone have thoughts?

@pedrobaeza
Member

The second solution seems best to me, but sudo is a problem, because with it Travis won't run in a container and will thus be slower. Maybe we can pass packages to Travis some other way than through the packages section of the .travis.yml file?

@lasley
Contributor Author

lasley commented Jan 23, 2017

Yeah, the second solution is definitely less of a square-peg-in-a-round-hole fix, for sure.

I did some preliminary research when making this RFC & I was not able to find a Travis-supported apt install solution outside of the config file.

@lasley
Contributor Author

lasley commented Jan 23, 2017

From a theoretical perspective, it may be possible to edit an environment variable in a before_install script and then use that variable in the apt section. Whether that actually works obviously depends on the order in which the keys are run.

Does anyone know offhand the order that the config keys are run in, specifically the before_install and apt sections?

@pedrobaeza
Member

Maybe @moylop260 can help.

eLBati added a commit to eLBati/l10n-italy that referenced this issue Jan 24, 2017
that would fail with
src/pyodbc.h:56:17: fatal error: sql.h: No such file or directory

See
https://odoo-community.org/groups/contributors-15/contributors-49577
and
OCA/maintainer-quality-tools#427
tafaRU pushed a commit to OCA/l10n-italy that referenced this issue Jan 24, 2017
@pedrobaeza
Member

I'm thinking that the problem is that the Python library pyodbc is in requirements.txt while the required headers are in .travis.yml. If you just put the library installation in .travis.yml as well, we won't have all these problems.

@lasley
Contributor Author

lasley commented Jan 24, 2017

There is no way to install pyodbc without the binary dependency: the headers (sql.h) are required in order to build it via pip.

@lasley
Contributor Author

lasley commented Jan 24, 2017

Ohhhh I see your point after rereading @pedrobaeza - yeah this would totally work I think. I'll submit a PR.

Technically this just masks the problem, so it's closer to my original point 1 than to our preferred point 2. It will fix the build while we get our ducks in a row, though!

@pedrobaeza
Member

Yeah, it's more like point 1, but it avoids changing the .travis.yml files of all the repos while we work out solution 2. There's only one problem: this can screw up runbot via Docker. Let's see...

@lasley
Contributor Author

lasley commented Jan 24, 2017

We'll be good with Runbot - T2D honors the install steps of the Travis file. The only thing it will screw up, I think, is if a repo becomes interdependent with one of these database modules: the assumption would be that the library is installed, but that would not be the case with the recursion.

@moylop260
Contributor

Am I coming in late here?
Sorry.

Was the issue fixed?

@lasley
Contributor Author

lasley commented Jan 24, 2017

Only partially, we still need a solution for recursive binary installation.

Do you happen to know if before_install is run before the apt section in Travis?

@moylop260
Contributor

The apt section of Travis is not accessible from anywhere else, and we can't use apt-get install because we are using container-based builds (a fast-boot-time environment in which sudo commands are not available).

We have fixed some similar cases by downloading the binary and adding it directly to the PATH environment variable.
You can see webkit-patched-pr

I don't know if it is the same case for unixodbc-dev.
Otherwise, we can ask Travis to add that library to the main image, but that is not the fastest solution.
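
For reference, that PATH workaround could look roughly like the following sketch; the URL and file names are placeholders, not the actual webkit-patched artifact:

```python
"""Rough sketch of the no-sudo workaround described above: download a
prebuilt binary and put its directory on PATH. URL/paths are placeholders."""
import os
import subprocess

PREBUILT_URL = 'https://example.com/prebuilt/some-tool.tar.gz'  # placeholder
DEST = os.path.expanduser('~/prebuilt-bin')

if not os.path.isdir(DEST):
    os.makedirs(DEST)
subprocess.check_call(['curl', '-sL', PREBUILT_URL,
                       '-o', '/tmp/some-tool.tar.gz'])
subprocess.check_call(['tar', '-xzf', '/tmp/some-tool.tar.gz', '-C', DEST])
# Only affects this process and its children; in a real Travis build you
# would export PATH in before_install instead.
os.environ['PATH'] = DEST + os.pathsep + os.environ['PATH']
```

This works for standalone binaries, though it is less obvious how to apply it to development headers like sql.h, which pip needs at compile time.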

@lasley
Contributor Author

lasley commented Jan 24, 2017

Dang, and it looks like environment variables aren't honored in the apt section either. Hrrmmmmmm, this is a stumper. I think we're going to run into this problem outside of the context of this specific set of binaries, though.

OK, so then maybe back to the original point 1, which would be to simply ignore pip install failures for recursive repos. The one catch is that we'd still need to install the packages in the requirements file that don't fail. Too bad pip doesn't have an ignore-errors flag or something.

@moylop260
Contributor

If we ignore pip install failures, then we need to install each package in isolation.
I mean:
`pip install package1; pip install package2; pip install package3` (notice the semicolons: they mean that even if one command fails, the installation of the remaining packages continues)

@lasley
Contributor Author

lasley commented Jan 27, 2017

Hmmm, so that would also mean parsing the requirements file instead of doing a `pip install -r`. Not a big deal in the grand scheme of things though; file reads are pretty simple.

Sounds like a decent enough plan to me, and while not the best solution, it would provide a more scalable fix & keep all the Python packages in the requirements file instead of moving some to the Travis file.

Assuming this sounds plausible to you as well @moylop260, I'll look into the implementation.
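
As a rough illustration of that implementation (parsing the requirements file and installing each entry in isolation), under the assumption of a plain requirements.txt with no -r includes or pip options:

```python
"""Sketch of the isolated-install idea: parse requirements.txt ourselves
instead of `pip install -r`, install each package separately, and keep
going when one fails. Names and error handling are illustrative only."""
import subprocess


def install_ignoring_failures(path='requirements.txt'):
    failed = []
    with open(path) as req_file:
        for line in req_file:
            requirement = line.split('#', 1)[0].strip()
            if not requirement:
                continue  # Blank line or comment.
            # Isolated call: one unbuildable package must not abort the rest.
            if subprocess.call(['pip', 'install', requirement]) != 0:
                failed.append(requirement)
    return failed  # Modules depending on these would need to be excluded.


if __name__ == '__main__':
    skipped = install_ignoring_failures()
    if skipped:
        print('Skipped unbuildable requirements: %s' % ', '.join(skipped))
```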

@nhomar
Member

nhomar commented Oct 30, 2017

@lasley can you give us an update on whether you worked on this, in order to point a backlog task at it after #500?

@lasley
Contributor Author

lasley commented Oct 30, 2017

I have not performed this work for MQT. We did, however, implement dependency management in Doodba.

@pedrobaeza
Member

And that's the main reason for the whole runbot discussion: not pip vs. YAML vs. whatever, but having a Docker base image that handles all of these dependencies. I haven't had much time to write this up on the issue, all the more since it got very polarized with the flame war, but this (not having MQT installable via pip) is the most important thing to achieve for having good CI/staging.

@lasley
Contributor Author

lasley commented Oct 30, 2017

I actually think the key point here is that we need to move it out of Docker. These scripts are built in Python and should benefit the whole community, not just those who use Docker.

In this case, the library itself is abstract, and new package managers can easily be added to our implementation. The real question IMO is where we put it. Really it's just dependency management for any project, based on a defined file hierarchy. It's not Odoo-specific, nor Linux/Docker-specific - it just requires Python and a user with proper permissions.

I think we're all just being a bit too specific in our definitions of things really. We're all talking the same language, it's just a different dialect. Once we find our common tongue, we can begin to strip out the slang/dialect.

@lasley
Contributor Author

lasley commented Oct 30, 2017

There's even existing tooling like BinDep.

@pedrobaeza
Member

But we can't abstract libraries away from the host OS and so on, so the solution, I think, is to use Docker as an abstraction layer.

Anyway, I understand this is a bit off-topic for the specific issue, but I needed to throw the whole Docker question out there, even if not in the proper place 😜

@lasley
Contributor Author

lasley commented Oct 30, 2017

You're totally right that there is a point where we must become technology-aware. In this case we're just executing arbitrary commands though - only the implementation needs to be aware. The most we need to know at the library level is which package managers we support, which is currently:

  • apt
  • gem
  • npm
  • pip

In that, we cover basically everything Debian-based - with or without Docker. I would probably add yum if we broke it out, because I typically favor CentOS. Not sure what Windows users would do, but that would be up to one of them to figure out. On Mac, we'd probably have a brew adapter.
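
To make that concrete, here is a minimal sketch of the kind of adapter registry being described; the dependency file names and commands are assumptions for illustration, not Doodba's actual API:

```python
"""Minimal sketch of a package-manager abstraction: one adapter per
manager, keyed by the dependency file that declares the packages.
File names and commands are illustrative, not Doodba's actual API."""
import subprocess

# Map a dependency file to the command that installs the packages it lists.
ADAPTERS = {
    'apt_dependencies.txt': ['apt-get', 'install', '-y'],
    'gem_dependencies.txt': ['gem', 'install'],
    'npm_dependencies.txt': ['npm', 'install', '-g'],
    'requirements.txt': ['pip', 'install'],
}


def install(dep_file, packages):
    """Install packages using the adapter registered for dep_file."""
    subprocess.check_call(ADAPTERS[dep_file] + list(packages))

# Adding yum or brew support would just mean registering another adapter.
```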

@pedrobaeza
Member

Closing this without a clear solution, but the scope is very limited, so we will continue manually adding the needed binary dependencies in the repos that require them.
