Single after_success callback after *all* builds #929

Closed

rkistner opened this issue Feb 14, 2013 · 21 comments

Labels
feature-request, locked, travis-worker

Comments

@rkistner

It would be useful to be able to execute some script only if all builds in the build matrix passed. The use case I have in mind is to perform some release process (such as publishing a gem) if all of the builds passed (also see my post in the mailing list here).

Currently the best (simplest) solution is to configure the script to run in only a single build configuration, regardless of the results of the other configurations.
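
For reference, that workaround typically looks something like the sketch below (`./release.sh` is a placeholder for the actual release step):

```yaml
# .travis.yml: sketch of the single-configuration workaround
rvm:
  - 1.9.3
  - 2.0.0
after_success:
  # Release only from one designated matrix cell; note that the results
  # of the other cells are not taken into account, which is the problem.
  - if [ "$TRAVIS_RUBY_VERSION" = "2.0.0" ]; then ./release.sh; fi
```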

If something like this is implemented, there will be some complexity regarding the environment that the release script runs in. For example, does one of the build workers wait until the rest have finished running, or does a new worker get started just for the release process?

@roidrage
Contributor

For the time being, you could utilize webhooks to achieve this, as they will only be triggered once the entire build has completed.
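
A minimal sketch of such a configuration (the URL is a placeholder for an endpoint you control, which would perform the release when it receives a passing build notification):

```yaml
# .travis.yml: notify an external endpoint once the entire build finishes
notifications:
  webhooks:
    urls:
      - https://example.com/travis-release-hook  # placeholder endpoint
    on_success: always
    on_failure: never
```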

@roidrage
Contributor

I should probably add that I'm all for this feature. But it does indeed require a certain amount of coordination. We'll look into it!

@sarahhodne
Contributor

What would a sane environment be for the command in after_all_{success,failure} (or whatever we end up calling the callback)? I.e., how many commands should have been run before it? If all commands up to after_{failure,success} are expected to have run, this probably needs to run on one of the workers that ran a build, before that worker is killed (unless we want to rerun all of the tests, which could in theory make the build result different).

In that case, designating a worker at the start of the build as the one that should wait for all others to finish could create a deadlock, where all workers are waiting for jobs that haven't even started yet. So that's not really an option. A better solution would be to have each worker check at the end whether all the other workers are finished, but that could hit a race condition and never run the after_all_{success,failure} script (it shouldn't run it twice, though).

So, making one of the VMs that ran a test also run the script is tricky because of concurrency. It would be easier if we could spin up a new worker at the end, but then the question is how many commands we need to run. Obviously we should at least check out the git repository, and I think before_install and install should probably be run, but do the before_script and script scripts have to be run too? Also, any of those scripts could have parts determined by which job in the matrix is being run; how do we decide which one to pick?

I'm 👍 on the idea in general, but I think we need to answer the questions above before we can have a good implementation.

@rkistner
Author

For predictability you'd probably want the release script to run in a specific environment, not in the one that finished last.

I like the idea of a new worker being started up for the release process. In this case, an environment (Ruby version, environment variables, etc.) should be specified explicitly for this worker. I agree that the before_install and install steps should be run, but I don't think we need more than that.
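
Purely to illustrate the proposal, a hypothetical configuration could look like this (none of these keys exist in Travis; the syntax is invented):

```yaml
# HYPOTHETICAL sketch of the proposed feature, not valid .travis.yml
after_all_success:
  rvm: 2.0.0            # explicit, predictable environment
  env: RELEASE=true     # explicit environment variables
  script: ./release.sh  # runs on a fresh worker after before_install/install
```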

@laurentpetit

I'm also facing the same problem. Here are my €0.02.

In a matrix build, each worker generally creates the artifact under test. The artifact may or may not differ from matrix cell to matrix cell. It is the responsibility of each matrix cell's job to publish its particular artifact outside Travis CI (be it in snapshot state, release state, whatever).

For the case where each worker re-creates the same artifact over and over (i.e. the matrix coordinates do not affect the built artifact), which is my case for instance, I can easily manage to deploy from only a single matrix cell worker.

So much for publishing the build result.

Now, for the "promotion", whatever that means in each case (for me, publishing into an official Eclipse p2 repository; for somebody else, pushing to Heroku; for another, promoting an artifact from a private repository to a central repository; etc.).

This "promotion", which must run only once if all workers succeed, could indeed run in a new environment. Simplest thing that may work. I'm not sure if this should be one of the test environment (as far as matrix env variables are concerned), but I guess that would do no harm to be able to specify one.
Having before_install and install steps run should indeed be sufficient.

@danielchatfield

+1

@sarahhodne
Contributor

We have discussed different ways of doing this, and we have a feature in the planning phase that should address it. Basically, we're working on a way to create build matrices with dependencies, i.e. "run this job after these jobs are done". I believe something like that would solve this issue too.

@rufoa

rufoa commented Aug 24, 2013

Looking forward to that; it would be very useful.

@md5

md5 commented Dec 16, 2013

Since nobody else seems to have mentioned it, I'll just add that something like this would be useful not just after all the matrix builds have run, but also before any of them have run.

The use case I have in mind is running WAD in a Rails project to pre-publish any needed gems to S3 before the actual matrix builds run. This avoids a race condition where all of the builds in a matrix could re-publish the same gems to S3, and it saves on overall build time, S3 bandwidth, and worker bandwidth.

@jhilden

jhilden commented Jan 17, 2014

We are facing the same issue when trying to parallelize our test suite into smaller sub-builds. We have an after_success callback that deploys our app, and it now runs as soon as the first sub-build has passed (even though another one may still fail).

deboss pushed a commit to deboss/mybatis that referenced this issue Feb 1, 2014
@dmakhno

dmakhno commented Feb 13, 2014

I found this issue, which was created a long time ago. Until this is done, someone may find travis_after_all useful.
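
For reference, usage per the tool's README looked roughly like the sketch below (variable names are taken from the README and may have changed since; `./release.sh` is a placeholder):

```yaml
# .travis.yml: sketch of travis_after_all usage, per its README at the time
after_success:
  - curl -o travis_after_all.py https://raw.github.com/dmakhno/travis_after_all/master/travis_after_all.py
  - python travis_after_all.py
  - export $(cat .to_export_back)  # exports BUILD_LEADER and BUILD_AGGREGATE_STATUS
  - if [ "$BUILD_LEADER" = "YES" ] && [ "$BUILD_AGGREGATE_STATUS" = "others_succeeded" ]; then ./release.sh; fi
```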

@fawkesley

We're also affected by this - we're trying to push a release_<build> tag after all successful builds.

fawkesley pushed a commit to alphagov/stagecraft that referenced this issue Feb 26, 2014
Create the release tag if the Python 2.7 job succeeds. Don't do anything
in the Python 3 job.

This is a bit of a workaround for the fact that there's no way of
running scripts after *all* jobs complete on Travis. Without it, we
would end up trying to create the release tag in every successful job.

See travis-ci/travis-ci#929
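
That guard amounts to something like the following sketch (the real script lives in the linked repo):

```yaml
# .travis.yml: push the release tag only from the Python 2.7 job (sketch)
python:
  - "2.7"
  - "3.3"
after_success:
  - if [ "$TRAVIS_PYTHON_VERSION" = "2.7" ]; then git tag "release_$TRAVIS_BUILD_NUMBER" && git push origin --tags; fi
```
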
@RReverser

@dmakhno, thanks; I'm using it as a temporary replacement until @travis-ci implements this natively.

@BanzaiMan
Contributor

If you decide to use travis_after_all, be sure to take a look at dmakhno/travis_after_all#1 and have curl follow the HTTP redirect.
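
I.e. something like (note the added `-L`):

```yaml
# fetch the script while following HTTP redirects (-L), per the issue above
- curl -sL -o travis_after_all.py https://raw.github.com/dmakhno/travis_after_all/master/travis_after_all.py
```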

@aviau

aviau commented Jul 10, 2014

I would really like a feature like this!

@cosmosgenius

+1

@tjmcewan

another +1

@joscha

joscha commented Oct 29, 2014

👍 here, too

@travis-ci travis-ci locked and limited conversation to collaborators Oct 29, 2014
@joshk
Contributor

joshk commented Oct 29, 2014

I'm locking this issue for the time being as we are aware of the request but don't have an ETA on when it will be ready.

@joshk
Contributor

joshk commented Sep 12, 2016

Hi All

It's been far too long since my last comment.

We are in the planning stages for what we have termed 'Build Stages'. This will allow for pipelined-ish job groups.

We are putting together the feature plans for this year, with this work likely to fall in Q1/Q2 next year.

I will also be adding this to http://next.travis-ci.com in the coming weeks.

Thanks

Josh

@BanzaiMan
Contributor

To an extent, this issue would be resolved by a recent beta feature, Build Stages. You can find more about this feature in https://docs.travis-ci.com/user/build-stages, and the ongoing discussion in travis-ci/beta-features#11. (Be sure to read the entire issue before adding your comments; some of your concerns are probably already discussed in the issue.)
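
From the linked docs, a minimal sketch (the stage contents are placeholders):

```yaml
# .travis.yml: the deploy stage runs only if every test-stage job passed
jobs:
  include:
    - stage: test
      script: ./test-suite-1.sh  # placeholder
    - stage: test
      script: ./test-suite-2.sh  # placeholder
    - stage: deploy
      script: ./release.sh       # placeholder; runs once, after all test jobs succeed
```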

@DrTorte added the locked label Apr 2, 2018
@DrTorte closed this as completed Sep 19, 2018