Using Docker for Travis #1701

Closed
e1528532 opened this Issue Nov 13, 2017 · 9 comments

Contributor

e1528532 commented Nov 13, 2017

I am currently trying to integrate Haskell into our builds. While this works very well in general, building some of the Haskell dependencies takes quite some time, which can easily lead to timeouts for Travis builds.

I tried to implement caching, but I slowly get the impression that the directories are only cached after a successful build. That is bad, as my builds always time out because they keep rebuilding the Haskell dependencies.
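To illustrate what I mean, roughly the kind of cache configuration I am trying in `.travis.yml` (the directories are just placeholders, the real sandbox location may differ):

```yaml
# Rough sketch only, the directories are placeholders: cache the cabal/GHC
# directories and the sandbox so the compiled Haskell dependencies survive
# between Travis builds.
cache:
  directories:
    - $HOME/.cabal
    - $HOME/.ghc
    - build/src/bindings/haskell/.cabal-sandbox   # placeholder sandbox path
```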

Therefore my idea to fight off timeout issues in the future would be to create an appropriate Docker image (hosted by ourselves?), as described in https://docs.travis-ci.com/user/docker/, that already includes the dependencies for our builds, so they don't have to be downloaded and installed over and over again. I am not very familiar with Docker apart from the basics, so I am not sure how much effort that would be.

@markus2330 @sanssecours do you think that would be a useful idea? I will keep trying caching first; I think there is some room for improvement there, so it might already be enough. Still, we might or might not benefit from using Docker in the long run (e.g. for other additions that have dependencies which need to be built).


Contributor

markus2330 commented Nov 13, 2017

If the bottleneck is downloading and installing, Docker might not improve the situation: the download might take even longer.

Isn't Docker Linux-only? (In the link they say they do not support Mac OS X with Docker.) The initial idea of Travis was to have Mac OS X builds; for Linux we have our own build server anyway. So maybe installing the dependencies on the agents and using our own build server is the better solution anyway (at least for Linux)?


Contributor

sanssecours commented Nov 13, 2017

Thank you for looking into the timeout issues.

> @sanssecours do you think that would be a useful idea?

Yes, I think it would make sense to provide Docker support for macOS too. However, as far as I can tell from the link you posted:

> We do not currently support use of Docker on OS X.

Travis does not support Docker on macOS. Since the timeout issues are only a problem on macOS, using Docker images would not help us 😢. At least as far as I can tell.


Contributor

markus2330 commented Nov 19, 2017

@manuelm suggested in #730 to write a script that detects which kind of rebuild is necessary. This way we would only add the Haskell bindings (and their deps) to the build when there actually was a change within the Haskell bindings. (And avoid building anything else, which might help us complete the build within the timeout.)

I was skeptical whether it is worth the effort. The alternative proposal was to have a build job for every combination that is relevant for us (basically a build job per person). Because the build-job-per-person approach never got realized, I thought it might be a good idea to throw in @manuelm's idea from #730. It is certainly more generic and might greatly reduce the number of build jobs (for both Travis and our build server).
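To make the idea a bit more concrete, a rough sketch of how such a detection could look in .travis.yml (the path and the variable are placeholders, a real script would have to cover all bindings and plugins):

```yaml
# Rough sketch only, path and variable are placeholders: enable the Haskell
# binding solely when files below it were touched by the commit range.
before_script:
  - |
    if git diff --name-only "$TRAVIS_COMMIT_RANGE" | grep -q '^src/bindings/haskell/'; then
      export EXTRA_BINDINGS="haskell"
    else
      export EXTRA_BINDINGS=""
    fi
script:
  - cmake -DBINDINGS="$EXTRA_BINDINGS" .. && make
```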

What do you think?


Contributor

e1528532 commented Nov 22, 2017

I am not sure whether this would help a lot for this specific Haskell-related use case, though it would surely help in general. The issue is that apart from GHC and cabal, no precompiled Haskell libraries are available for macOS, so all the dependencies are compiled from scratch first, which takes a while on Travis. On Debian, for example, there seem to be precompiled versions of commonly used Haskell libraries, for instance hspec, which I am using as the test framework. This should be solved by the cache: after the first successful build those dependencies should be cached, and from then on only my Haskell bindings and plugins have to be rebuilt, which should be fairly fast.

I tried to compile the bindings without optimizations and so on, but it still takes too long. So my next idea is to make use of the build stages feature and have a first warm-up stage that only configures a minimal version of Elektra, builds the dependencies into their sandboxes and caches them for the main build (basically I only need to execute CMake to fill in the placeholders in the cabal files for the compilation). They even mention this use case in their documentation, so it might be the easiest solution to this issue:

> You can, for example, also use build stages to warm up dependency caches in a single job on a first stage, then use the cache on several jobs on a second stage.

I think each job has the same 45 min timeout, so this extra build job can hopefully absorb the time for building the Haskell dependencies on macOS.
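Roughly what I have in mind, as a sketch (stage names, paths and commands are placeholders, not a finished configuration):

```yaml
# Rough sketch only, names and commands are placeholders: the first stage
# fills the cabal sandbox and the cache, the second stage reuses the cached
# sandbox for the actual build and tests.
cache:
  directories:
    - .cabal-sandbox                      # shared between the stages via the cache
jobs:
  include:
    - stage: haskell warmup
      script:
        - cmake -DBINDINGS=haskell ..     # only fill in the cabal placeholders
        - cabal sandbox init
        - cabal install --only-dependencies
    - stage: build and test
      script:
        - cmake -DBINDINGS=haskell .. && make && ctest --output-on-failure
```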


Contributor

markus2330 commented Nov 24, 2017

> all the dependencies are compiled from scratch first, which takes a while on Travis.

The question is whether this can be done within the allowed timeout. If compiling the Haskell deps alone already exceeds the timeout, it obviously does not help. If only the combination of compiling the Haskell deps and other deps like qt-gui is the problem, it might help.

> I think each job has the same 45 min timeout, so this extra build job can hopefully absorb the time for building the Haskell dependencies on macOS.

If possible we should avoid non-generic complications in the build system.

What about trying to get the Haskell binaries into a Homebrew bottle or something similar?
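Just to illustrate what I mean, usage could look roughly like this (the tap and the formula name are made up):

```yaml
# Purely illustrative, tap and formula names are made up: install pre-built
# Haskell deps from a bottle in our own tap instead of compiling them on
# every macOS build.
before_install:
  - brew tap elektrainitiative/elektra    # hypothetical tap
  - brew install elektra-haskell-deps     # hypothetical bottled formula
```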


Contributor

e1528532 commented Nov 24, 2017

> If compiling the Haskell deps alone already exceeds the timeout, it obviously does not help.

Compilation always finishes, but the timeouts usually happen in the final testing step, so this would help.

> What about trying to get the Haskell binaries into a Homebrew bottle or something similar?

I already had the same thought: compile the dependencies ourselves and pack them into a custom bottle for our testing use case. While this approach seems fine to me and would solve the issue as well, is it legally fine to just compile and pack some library? But since, for instance, the Debian folks do the same, I guess it is, as long as I keep the license around and don't claim it as my own work.

I think it would not end up being portable for everyone but rather tailored to our specific use case. I guess that does not really matter though, since we don't need to offer it to everyone; it is just a workaround to solve our timeout issue.


Contributor

markus2330 commented Nov 25, 2017

> Compilation always finishes, but the timeouts usually happen in the final testing step, so this would help.

Yes, the approach would not only reduce the deps but also the number of tests that run, and so on. So for most PRs the build times should be significantly shorter.

You could check in a PR whether building only your binding+plugin (and no others) and only your deps allows the build to complete in time. (Simply reduce the .travis.yml to the absolute minimum for your case. From this specific variant we can then think of a way to automatically derive such a configuration.)
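Roughly what I mean by reducing it to the minimum (the CMake variable values are placeholders, please adapt them to your binding and plugins):

```yaml
# Rough sketch of a reduced job, the variable values are placeholders:
# build only the Haskell binding and the plugins you need, nothing else.
os: osx
script:
  - mkdir build && cd build
  - cmake -DBINDINGS="haskell" -DPLUGINS="haskell" ..   # placeholder selection
  - make -j2
  - ctest --output-on-failure
```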

> While this approach seems fine to me and would solve the issue as well, is it legally fine to just compile and pack some library?

It would not be free software if there were such restrictions. For example, Java from Oracle has some restrictions, thus it is not free software (and therefore also not part of Debian main). If something is in Debian main you can be sure that there are no such restrictions, including on distributing modified variants for any purpose. (Understanding licenses is one of the qualifications Debian developers need to have before they are accepted.)

> it is just a workaround to solve our timeout issue.

That is the problem: it is too much effort for such an issue, because we basically get all the troubles distributions have (packaging, finding a working configuration with other constantly changing packages and so on). So I would prefer the more generic variant and only pull in the deps if needed. With this mechanism all our builds would profit.

This would only add the burden that we need to document the external deps, but that is something we should do anyway (also for many other reasons, like helping the maintainers of distributions), see #1016.


Contributor

markus2330 commented Mar 25, 2018

Is this proposal still relevant?


Contributor

e1528532 commented Jun 11, 2018

The current solution on Travis seems fine combined with the cached sandbox.

e1528532 closed this Jun 11, 2018
