Build Stages: Flexible and practical Continuous Delivery pipelines #11

Closed
joshk opened this Issue Mar 30, 2017 · 218 comments

@joshk
Member

joshk commented Mar 30, 2017

From simple deployment pipelines, to complex testing groups, the world is your CI and CD oyster with Build Stages.

Build Stages allows you and your team to compose groups of jobs that are only started once the previous stage has finished.

You can mix Linux and Mac VMs together, or split them into different Stages. Since each Stage is configurable, there are endless Build pipeline possibilities!
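For readers who want to see the shape of the config, here is a minimal sketch (the stage names and scripts are placeholders, not from the announcement): jobs within a stage run in parallel, and the next stage starts only once the previous one has finished.

```yaml
# Minimal sketch; stage names and script paths are illustrative.
jobs:
  include:
    - stage: test              # both "test" jobs run in parallel
      script: ./run-unit-tests.sh
    - stage: test
      script: ./run-lint.sh
    - stage: deploy            # starts only after the whole "test" stage passes
      script: ./deploy.sh
```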

This feature will be available for general beta testing soon... watch this space 😄

We love to hear feedback; it's the best way for us to improve and shape Travis CI. Please leave all thoughts, comments, and ideas related to this feature here.

Happy Testing!

@joshk joshk self-assigned this Mar 30, 2017

@bsipocz


bsipocz commented Mar 30, 2017

@joshk - I have a somewhat related question. Are you considering introducing a new tagging system that would trigger only part of the build pipeline? The use case I have in mind is very simple: run the unit tests, and if they pass, run the docs build. But for pure docs PRs there is no need to do the first step, so a [docs only] or [skip test1] in the commit message would jump straight to that step in the build process. In this example, a group of jobs would then be tagged either as test1 or docs.
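This isn't a Travis feature, but a rough stopgap is possible inside the jobs themselves by inspecting the commit message (a hedged sketch; the tag string and script name are made up for illustration):

```yaml
# Hypothetical workaround, not a built-in Travis feature: each job
# checks the latest commit message and exits early for docs-only changes.
script:
  - |
    if git log -1 --format=%B | grep -q '\[docs only\]'; then
      echo "docs-only change, skipping unit tests"
    else
      ./run-unit-tests.sh
    fi
```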

@joshk


Member

joshk commented Mar 30, 2017

Hi @bsipocz

Hmmmm, that is an interesting idea. Not at the moment, but it might be something for us to consider later. I think this might be a bit of an edge use case, although we might be surprised by what people need and want. 😄

@larsxschneider


larsxschneider Mar 31, 2017

That sounds interesting! The following feature would be very useful to me:

My build has a number of jobs (7 total). I want to run all of them on my release branch. However, I only want to run a subset of them on my feature branches to speed up the Travis run and to save resources. According to support (thanks @joepvd !) this is not possible right now but might be in the future 😉

Would that be useful for other people, too?


@joshk


Member

joshk commented Mar 31, 2017

@larsxschneider I love this idea, and definitely think it is a valid use case!

@fabriziocucci


fabriziocucci Apr 20, 2017

...it was about time! 😅

All kidding aside, in my opinion this is THE missing feature of Travis which will also tempt all Jenkins lovers to give Travis another try.

I would strongly suggest having a look at the great job the GitLab folks have done with pipelines and environments (no, I'm not part of the GitLab team).


@JoshCheek


JoshCheek Apr 21, 2017

Hi, it's looking good so far! In the example, the unit tests are bash scripts. For us, though, the unit tests are in multiple services, each with their own GH repo, and they currently trigger CI builds. The issue we have with this is that it doesn't report CI failures back to the GH issue that triggered it. I'm thinking about replacing the CI step with a pipeline repo, but I still don't see how to get around this issue.

So let's say I set it up like this:

  • the repo service-1 has unit tests
  • the repo service-2 has unit tests
  • the repo integration has integration tests
  • the repo deploy has deploy scripts
  • the repo pipeline uses this feature to test service-1, then service-2, then integration, and then runs the scripts to deploy

Then when someone submits a PR in service-1, that PR should cause Travis to run pipeline's build instead of its own. But the interface from the PR should feel the same, meaning it should report failures back to the PR that triggered it. Metaphorically, I'm thinking about it like a file-system soft link, or a C++ reference, where service-1's .travis.yml has some configuration to say "I don't have my own CI; instead go run pipeline's with a parameter telling it to build my repo against this commit"

I'm expecting that this is how almost everyone is going to want to use it: multiple repos that act as event triggers for the pipeline, and the pipeline reports back to them with its result. E.g. even if you're not deploying, once you split your project into multiple repos and use the pipeline to coordinate across those repos, they'll see their unit tests as just the first stage in their pipeline repo.


Also, shout out to y'all for working on this, I'm a huge Travis fan, and was worried I'd have to find a different CI or write a lot of wrapper code in order to get this kind of feature. Also, thx to @BanzaiMan for pointing me at it ❤️


@siebertm


siebertm commented May 3, 2017

Nice one! Is it possible to somehow "name" the jobs so that the job's intent is also revealed in the UI?

@MariadeAnton


Member

MariadeAnton commented May 11, 2017

Hi everyone!

Build Stages are now in Public Beta 🚀 https://blog.travis-ci.com/2017-05-11-introducing-build-stages

Looking forward to hearing what you all think!

@pimterry


pimterry May 11, 2017

This looks really nice! The one thing I'd love though is conditional stages.

The same on structure as deploy uses would work fine. In our case, I'd like to have a deploy stage that runs for tagged commits (using a specific regex tag format), but I don't want the stage to appear at all on other builds, since none of them should be deploying. I think something like this also solves quite a few of the use cases above (docs-only builds, unit/integration test stages depending on the branch, etc.).
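For context, the existing deploy-level condition looks roughly like this (a sketch; the script provider and tag regex are illustrative). The on: block gates only the deploy phase, so the job and its stage still appear in every build, which is exactly what this request would change.

```yaml
# Sketch of today's behavior: `on:` gates the deploy *phase*, not the
# stage, so the job still shows up on non-tag builds.
deploy:
  provider: script
  script: ./release.sh
  on:
    tags: true
    condition: $TRAVIS_TAG =~ ^v[0-9]+\.[0-9]+$
```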


@hawkrives


hawkrives commented May 11, 2017

First: Wow! This looks really really cool.

With that said, I think I found a bug? Maybe?

My build stages aren't respected correctly if I specify a ruby version at the top level (config, build log), only if I specify it inside the job itself (config, build log).

That is to say,

language: ruby
rvm: '2.4'
cache: 
  bundler: true

jobs:
  include:
    - stage: prepare cache
      script: true
    - stage: test
      script: bundle show
    - stage: test
      script: bundle show
    - stage: test
      script: bundle show

gives me four "Test" jobs and one "Prepare Cache" job, in that order, while inlining the rvm key as below gives me the proper one "Prepare Cache" and three "Test" jobs.

language: ruby
cache: 
  bundler: true

jobs:
  include:
    - stage: prepare cache
      script: true
      rvm: '2.4'
    - stage: test
      script: bundle show
      rvm: '2.4'
    - stage: test
      script: bundle show
      rvm: '2.4'
    - stage: test
      script: bundle show
      rvm: '2.4'

I would have expected them to be equivalent?

@bmuschko


bmuschko May 12, 2017

Deployment is a central piece of every Continuous Delivery pipeline. Some organizations or projects do not want to go with the Continuous Deployment model, as it doesn't fit their workflow. That means they'd rather decide when to deploy on demand instead of deploying with every change. Are you planning to support the definition of a stage that can be triggered manually through the UI?


@soaxelbrooke


soaxelbrooke commented May 12, 2017

Python docker test/build/deploy fails for unknown reasons when converted to build stages. Should a separate issue be created?

When debugged and each step run in the tmate shell, everything works as expected.

@svenfuchs


Member

svenfuchs commented May 12, 2017

Thanks for the feedback, everyone! We are collecting all your input, and we will conduct another planning phase after a certain amount of time, and evaluate your ideas, concerns, and feature requests. So, your input is very valuable to us.

@pimterry This makes sense. The on condition logic is currently evaluated only after the job has already been scheduled for execution, and it only applies to the deploy phase that is part of the job. We'd want to make this a first-class condition on the job itself. You're right, this would make sense in other scenarios, too. I'll add this to our list.

@hawkrives I see how this is confusing and looks as if both configs should be equivalent. The reason they're not is that rvm is a "matrix expansion key" (see our docs here), and it will generate one job per value (in your case, just one). The jobs defined in jobs.include are added to that set of jobs. This makes a lot more sense in other scenarios, e.g. when you have a huge matrix and then want to run a single deploy job after it, e.g. https://docs.travis-ci.com/user/build-stages/matrix-expansion/. Evaluating this further is on our list, as we've gotten the same feedback from others, and we'll look into how to make it less confusing.
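A sketch of the distinction described above (the Ruby versions and deploy script are illustrative):

```yaml
# `rvm` is a matrix expansion key: this yields two "Test" jobs, one per
# Ruby version. The job under jobs.include is *appended* to that set,
# in its own "deploy" stage.
language: ruby
rvm:
  - '2.3'
  - '2.4'
jobs:
  include:
    - stage: deploy
      script: ./deploy.sh
```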

@bmuschko Yes, that is one of the additions on our list. In fact, it was mentioned in the original, very first whiteboarding session, and it has had an impact on the specific config format that we have chosen for build stages.

@soaxelbrooke Yes, it would make sense to either open a separate issue, or email support@travis-ci.org with details.

Again, thank you all!

@popravich


popravich May 12, 2017

Hi guys! Great feature!

I've just started playing with it and have hit an issue with the build matrix:
I have several Python versions in my build matrix, generating multiple test-stage jobs.
Adding another stage without an explicitly set python key generates a single job with all the python version values collapsed into a single value.
Here's the build — https://travis-ci.org/aio-libs/aioredis/builds/231530766


@BanzaiMan


Member

BanzaiMan commented May 12, 2017

@popravich Hello. For individual issues, please open a separate issue at https://github.com/travis-ci/travis-ci/issues/new, or send email to support@travis-ci.com. Thanks.

@jeffbyrnes


jeffbyrnes May 12, 2017

I’d love it if the test job/stage were not always the first one, and if the main script were not included when it is skip.


@BanzaiMan


Member

BanzaiMan commented May 12, 2017

@jeffbyrnes If you do not want test to be the first stage, please override the stage names.

the main script was not included if it is skip.

Do you mean that you don't want to see the message at all?

@cspotcode


cspotcode May 12, 2017

I'm seeing the same behavior as @hawkrives: there is no way to declare stages that execute before the build matrix. Any top-level keys that trigger any sort of build matrix (rvm, env, node_js, etc) even if it's a single-job matrix, cause the test stage to be declared first, so it always executes first. Any test jobs declared within jobs: include: are merely appended to the build matrix jobs.

The only solution I've found is to avoid the build matrix, manually enumerating each job of the matrix within my jobs.include section. This is fine -- I have total control -- but it means for big matrices I might write a script to generate my .travis.yml. The documentation could also describe this solution for newcomers.


Is there a way to share build cache between jobs with different environments? For example, can I populate the yarn cache in a stage using node_js: 7, and use that cache in both of my "test" jobs: both node_js: 7 and node_js: 6?
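The workaround described above, skipping the top-level matrix and enumerating every job explicitly, looks roughly like this (the Node versions and stage names are illustrative):

```yaml
# No top-level matrix keys, so stage order follows declaration order:
# "warm cache" runs before the two explicitly listed "test" jobs.
language: node_js
jobs:
  include:
    - stage: warm cache
      node_js: '7'
      script: yarn install
    - stage: test
      node_js: '6'
      script: yarn test
    - stage: test
      node_js: '7'
      script: yarn test
```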


@glensc


glensc commented May 12, 2017

IMHO the syntax is rather complex and hard to grasp compared to gitlab-ci. I already had a serious headache trying to understand how to use the matrix, and to make it even worse, stages and the matrix can also be combined!

The Travis syntax forces all my scripts to be nested several levels deep in indentation.

for example, let's take deploy-github-releases example:

  • in my previous .travis.yml deploy: was root level
  • with stages it's third level

perhaps some syntax add-on to reference root-level sections instead of typing in the actual script?

jobs:
  include:
    - script: &.test1
    - script: &.test2
    - stage: GitHub Release
      script: echo "Deploying to npm ..."
      deploy: &.deploy

.test1: |
  echo "Running unit tests (1)"

.test2: |
  echo "Running unit tests (2)"

.deploy:
  provider: releases
  api_key: $GITHUB_OAUTH_TOKEN
  skip_cleanup: true
  on:
    tags: true

PS: [deploy-github-releases] lacks the echo keyword in its script examples.
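Standard YAML anchors and aliases already get part of the way there without any Travis add-on (a sketch; the _deploy key name is made up, and whether Travis tolerates unknown root-level keys is an assumption to verify): &name defines an anchor and *name references it, which appears to be what the proposed syntax is reaching for.

```yaml
# Plain YAML: define the deploy config once at the root with an anchor
# (&github_release), reference it with an alias (*github_release).
_deploy: &github_release
  provider: releases
  api_key: $GITHUB_OAUTH_TOKEN
  skip_cleanup: true
  on:
    tags: true

jobs:
  include:
    - script: echo "Running unit tests (1)"
    - stage: GitHub Release
      script: echo "Deploying to GitHub ..."
      deploy: *github_release
```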

@BanzaiMan


Member

BanzaiMan commented May 12, 2017

@cspotcode Could you elaborate on how you would like to mix the build matrix and the stages, where you might want to execute some of it before the matrix?

As for the cache question, what is "an environment" when you say:

Is there a way to share build cache between jobs with different environments?

The answer to your question, I believe, is "no", because node_js: 6 and node_js: 7 jobs will have different cache names, as explained in https://docs.travis-ci.com/user/caching/#Caches-and-build-matrices. They could contain binary-incompatible files and may not work in general. If you want to share things between them, an external storage (such as S3 or GCS) would have to be configured.

@cspotcode


cspotcode commented May 12, 2017

@BanzaiMan
To mix the matrix with stages, I am imagining a situation like this example: https://docs.travis-ci.com/user/build-stages/share-docker-image/
However, in that example, all jobs in the "test" stage are declared explicitly within jobs.include. Suppose a developer wanted to use the build matrix to declare those "test" jobs but wanted the "build docker image" stage to execute first. Will that be possible, or will we be required to avoid the build matrix like in the linked example?
I now see that this is the same as what @jeffbyrnes asked about here: #11 (comment)

"An environment" means all of the things that make the cache names different: node version, ruby version, environment variables, OS, etc. I agree that sharing cache between node 6 and 7 may not work in general, which is why the default behavior is to have different caches. I'm asking if there is a way to override that behavior in situations where a developer knows that sharing cache will safely afford them a performance benefit without causing problems.

EDIT fixed a typo

@BanzaiMan


Member

BanzaiMan commented May 12, 2017

@flovilmart As mentioned before, for particular use case issues, please open a separate issue at https://github.com/travis-ci/travis-ci/issues/new. Thanks.

@webknjaz


webknjaz May 12, 2017

Hi,

Is there a list of YAML keys I have to remove from config (and move to jobs.include), so that travis would detect it as pipeline-enabled?
I had to move them one by one until only notifications and cache were left at the root level of nesting.


@BanzaiMan


Member

BanzaiMan commented May 13, 2017

@webknjaz I am not sure if such a list should exist. This feature is meant to be compatible with the existing configuration, and if you had to do extra work, then there might be a bug. In travis-ci/travis-ci#7754 (comment), I identified matrix.fast_finish to be a potential culprit. Did you have this? If not, where can we see how you worked through the troubles?

@ljharb


ljharb commented May 13, 2017

Is there any way I can make certain parts of the matrix be in one stage, and others in another? Kind of like how allow_failure can be used with env vars to target multiple disparate jobs.

@bmuschko


bmuschko May 13, 2017

If I am seeing this correctly, each stage uses its own "workspace", meaning a new clone of the same repository. Under certain conditions you'll want to reuse the workspace (and the produced outputs) of an earlier stage and continue work based on that result.

Example:

  • stage 1 - compilation
  • stage 2 - run unit tests based on the compiled source code.

Is this going to be a supported feature? It's an essential feature for modeling many build pipelines out there.


@shepmaster


shepmaster commented May 13, 2017

you'll want to reuse the workspace (and the produced outputs)

I believe you are looking for the Build Stages: Share files via S3 example or this earlier comment in this thread.

@webknjaz

webknjaz May 13, 2017

@BanzaiMan Yea, I think moving matrix.fast_finish has fixed it: GDG-Ukraine/gdg.org.ua@5af2ffc...49f1b02 (JFYI)


@shepmaster

shepmaster May 14, 2017

Is there a way to ignore failures of a specific job in a stage? I tried

    # this is the second job in a stage
    - env:
        - TOOLS_TO_BUILD="clippy"
      script: ./.travis/build-containers.sh
      allow_failures:
        - TOOLS_TO_BUILD="clippy"

But that seemed to have no effect.


@BanzaiMan

Member

BanzaiMan commented May 14, 2017

@glensc

glensc commented May 14, 2017

It seems allow_failures is not possible, as enabling the matrix keyword disables the whole stages concept.

jobs:
  include:
    - php: "5.5"
    - php: "5.6"
    - php: "7.0"
    - php: "7.1"
    - php: "hhvm"
    - php: "nightly"

matrix:
  allow_failures:
    - php: hhvm

glensc added a commit to glensc/eventum that referenced this issue May 14, 2017


@svenfuchs

Member

svenfuchs commented May 14, 2017

@jeffbyrnes With regards to the test stage being the first one, we have a change on our list that will make this possible. Essentially, you'll be able to specify a list of stages and, in doing so, modify their order. I'm not 100% sure I understand your second comment. The main script will not be run if it is skip. Do you mean you'd rather not have any message output at all?

@cspotcode That's true. At the moment, one cannot run a stage before the jobs that are expanded out of matrix expansion keys (such as rvm, env, etc.). This will be possible with the change I have mentioned before. For large matrices that have a lot of repetition it can make sense to use YAML aliases, see https://docs.travis-ci.com/user/build-stages/using-yaml-aliases/. About your question regarding the yarn cache, I'm not 100% sure. I believe the runtime version is always included in the cache key in our cache integration. You could, of course, always manage this manually. E.g. see https://docs.travis-ci.com/user/build-stages/share-files-s3/.

@glensc You're right that the YAML syntax has an additional level of indentation (jobs.include vs jobs), which we intend to get rid of in the future. Other than that, our syntax is very similar to the one GitLab supports. Except that, of course, we also support matrix expansion, which can be confusing to use when combined with stages. You can always just list all jobs in jobs.include though, and disregard the matrix expansion. We are considering allowing a stage key on the root deploy section, but we're not decided on that, yet. If you need any help with getting your .travis.yml file right please always feel free to email us at support@travis-ci.org.

@webknjaz At the moment there's no official, public list of matrix expansion keys, although they're listed on the documentation for various languages. I've made a gist for you here: https://gist.github.com/svenfuchs/66c8b627dca35561ee1f0912d54dfd0d.

@ljharb At the moment there's no such way, and I'm not sure that would be beneficial. Looking at the confusion people sometimes go through, I'm afraid this might add to it. Do you have an example use case though? I'd like to understand what you are trying to do.

@bmuschko You're right that every job runs on a fresh/clean VM using a new clone of the repository. In order to share build artifacts (for example a binary compiled in an earlier stage) you can do so using our artifacts feature (see https://docs.travis-ci.com/user/uploading-artifacts/), or manage the process manually, e.g. https://docs.travis-ci.com/user/build-stages/share-files-s3/. We do intend to improve on this in the future.

@shepmaster, @glensc allow_failures should continue to work as before. I am surprised our docs on stages do not mention allow_failures though. I thought we had added that, and I'll make sure we fix that. It should mention jobs.allow_failures, not matrix.allow_failures. jobs is an alias key for matrix. So you want to specify jobs.include and jobs.allow_failures. Does that help? Here's an example build: https://travis-ci.org/backspace/travixperiments-redux/builds/226522374 and here's the respective .travis.yml file: https://github.com/backspace/travixperiments-redux/blob/primary/.travis.yml
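Putting svenfuchs's answer together, here is a minimal sketch of stages combined with allowed failures. The PHP versions, scripts, and deploy provider are illustrative only, not taken from any project in this thread:

```yaml
# Sketch only: `jobs` is an alias for `matrix`, so `allow_failures`
# belongs at the same level as `include`, not inside a job entry.
language: php

jobs:
  include:
    - stage: test
      php: "7.1"
    - stage: test
      php: "nightly"
    - stage: deploy
      php: "7.1"
      script: skip
      deploy:
        provider: script
        script: ./deploy.sh
  allow_failures:
    - php: "nightly"
```

As before stages existed, `allow_failures` matches jobs by their attributes — here any job with `php: "nightly"` may fail without failing the build.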

@alexfmpe

alexfmpe commented Jul 3, 2017

That's unfortunate, I was hoping to avoid a combinatorial explosion.

Say, is it currently possible to specify the Linux dist inside stages?
Trying it directly yielded multi-OS stages again.
I had to resort to matrix defaults like this to get the intended behavior, but if one wanted to specify two distros this way, it couldn't be done, given that only one would be inherited, right?

EDIT: turns out it's not using trusty as requested, but precise. Any way to choose the distro in a stages build?

@webknjaz

webknjaz commented Jul 4, 2017

I think specifying a root-level dist: trusty worked for me, but under certain conditions it gets reset to precise.
Matrix expansion does not work inside stages; it only generates the test stage's job list.
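For reference, the root-level approach webknjaz describes would look roughly like this. This is a hedged sketch — the language, stage names, and scripts are invented for illustration:

```yaml
# Sketch: dist declared at the root so that jobs defined under
# jobs.include inherit Trusty instead of the default image.
language: python
dist: trusty

jobs:
  include:
    - stage: test
      script: ./run-tests.sh
    - stage: deploy
      script: ./deploy.sh
```

Per the reports above, this did not always stick at the time, so it's worth verifying the actual image in the job log.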

@backspace

Member

backspace commented Jul 5, 2017

@jsmaniac, thanks for letting us know about the rendering bug on the root route. It has since been fixed and deployed, so build stages should properly render regardless of what route you’re on.

@stianst

stianst commented Jul 6, 2017

I'm trying out build stages for our project now and it's solving a lot of problems for us. It's really nice and I especially love how simple it is.

I've got two requests:

  • Ability to share between stages seems like it should be a core feature. At the moment I'm simply using caching to solve this, but not sure if that's the best idea. In my 'prepare cache' stage I run a Maven install (which will download all external artifacts as well as build all project artifacts). That's then cached and used by the test stages. This is how I'm doing it now https://github.com/stianst/keycloak/blob/TRAVIS/.travis.yml#L15

  • Ability to name a job within a stage. At the moment I'm simply setting a dummy environment variable for this, but I think it should be a core part of build stages. Otherwise you could have 10 jobs within a stage, but no clue about what is what. This is how I'm doing it now https://github.com/stianst/keycloak/blob/TRAVIS/.travis.yml#L20

@vb216

vb216 commented Jul 6, 2017

@stianst is that approach to shared caches working for you for sure? I need to produce something similar, for a compiled job output that others are dependent on and populating the .m2 cache, would save a lot of time and complexity

I thought cache locations were built from a unique key derived from job variables, so since the env var you have is different, they wouldn't share a cache?

@stianst

stianst commented Jul 6, 2017

@vb216 you're right, it's not working. It would have been really nice if it did, but for now I'm going back to building the whole thing for each job, which is annoying.
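Since per-job caches are keyed by environment and don't reliably carry artifacts across stages, the S3-based sharing described in the docs linked earlier is the usual workaround. A rough sketch — the bucket name, paths, and Maven commands are hypothetical, and the AWS credentials would come from encrypted environment variables:

```yaml
# Sketch of passing build output between stages via an S3 bucket.
jobs:
  include:
    - stage: build
      script:
        - mvn -q -DskipTests install
        - tar czf artifacts.tar.gz target/
        # Upload the packaged output, namespaced by build number.
        - aws s3 cp artifacts.tar.gz s3://my-bucket/$TRAVIS_BUILD_NUMBER/
    - stage: test
      script:
        # Fetch what the build stage produced instead of rebuilding.
        - aws s3 cp s3://my-bucket/$TRAVIS_BUILD_NUMBER/artifacts.tar.gz .
        - tar xzf artifacts.tar.gz
        - mvn -q test
```

In practice you would also point the test command at the restored output so Maven doesn't simply recompile everything.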

@bkimminich

bkimminich Jul 6, 2017

To get a build setup like this
[screenshot of the desired build stages setup]
it seems I need to have a .travis.yml like this:

language: node_js
node_js:
- 4
- 6
- 7
before_install:
- rm -rf node_modules
script:
- npm test
jobs:
  include:
    - stage: test e2e
      script: npm run e2e
      node_js: 4
    - stage: test e2e
      script: npm run e2e
      node_js: 6
    - stage: test e2e
      script: npm run e2e
      node_js: 7
    - stage: coverage
      script: npm run coverage
      node_js: 6
    - stage: deploy
      script: skip
      node_js: 6
      provider: npm
      email: XXXXXX
      api_key:
        secure: XXXXXX=
      on:
        tags: true
        repo: bkimminich/juice-shop-ctf
sudo: false

where it would be way nicer to have the test e2e stage declared like this:

    - stage: test e2e
      script: npm run e2e
      node_js:
      - 4
      - 6
      - 7

@keradus

keradus commented Jul 6, 2017

@bkimminich, this was already raised at #11 (comment); check out the answer there.

@rmehner

rmehner commented Jul 7, 2017

Hey there,

As tweeted here, it would be super nice if it were possible to skip stages on certain branches. Something like this:

jobs:
  include:
    - stage: test
      rvm: 2.3.1
      script: bundle exec rspec
    - stage: test
      rvm: 2.4.1
      script: bundle exec rspec
    - stage: deploy
      rvm: 2.3.1
      branches:
        - master
        - production

My use case is, that I want to test if something breaks in the latest version of Ruby, while still keeping my main test suite in line with the version that is run in the respective production environment and only deploy with that version. However, the deploy stage takes a while to run & install and I don't need it to be run on any other branch than master or production.

I know there are workarounds, but I'd like the stages features to support that natively (I only want to deploy if all test stages are green)

@keradus

keradus commented Jul 7, 2017

This functionality was already requested and approved, but there's no ETA yet:
#11 (comment)

@peshay

peshay commented Jul 9, 2017

I have an issue with my Travis syntax when I try to integrate this new feature:

language: python
python:
- '2.7'
- '3.3'
- '3.4'
- '3.5'
- '3.6'
- 3.7-dev
- nightly
install:
- pip install -r requirements.txt
- python setup.py -q install

jobs:
  include:
    - stage: Tests
      script: nosetests -v --with-coverage
      after_success: codecov
    - stage: Releases
      before_deploy: tar -czf tpmstore-$TRAVIS_TAG.tar.gz tpmstore/*.py
      deploy:
        provider: releases
        api_key:
          secure: <long string>
        file: tpmstore-$TRAVIS_TAG.tar.gz
      on:
        repo: peshay/tpmstore
        branch: master
        tags: true
    - 
      deploy:
        - provider: pypi
          distributions: sdist
          user: peshay
          password:
            secure: <long string>
          on:
            branch: master
            tags: true
            condition: $TRAVIS_PYTHON_VERSION = "2.7"
        - provider: pypi
          distributions: sdist
          server: https://test.pypi.org/legacy/
          user: peshay
          password:
            secure: <long string>
          on:
            branch: master
            tags: false
            condition: $TRAVIS_PYTHON_VERSION = "2.7"
@keradus

keradus commented Jul 9, 2017

Perhaps describing the issue you are facing would help.

@peshay

peshay commented Jul 9, 2017

The Travis linter simply fails, and I don't see why:
`unexpected key jobs, dropping`

@maciejtreder

maciejtreder Jul 9, 2017

Sharing files between stages should definitely be done differently than via external systems. In GitLab CI it is done really simply, via an 'artifacts' property in the YAML. It should be the same here.


@BanzaiMan

Member

BanzaiMan commented Jul 9, 2017

The linter is sadly out of date at the moment. Many of the recent keys are not recognized. We have plans to improve this aspect of our services, but it will be a little while.

Sharing storage has been raised as a missing feature many times before, and we recognize that it is critical.

@thedrow

thedrow commented Jul 10, 2017

@BanzaiMan CircleCI provides artifact storage with their own S3 bucket. It reduces the burden on open source users of paying for and maintaining their own S3 bucket.

@thedrow

thedrow commented Jul 10, 2017

How do I mark all the jobs with a certain environment variable as another stage?
The documentation is not clear about whether it is possible to set stages to jobs generated from the build matrix.
We have the following .travis.yml https://github.com/celery/celery/blob/master/.travis.yml#L17
We want to mark every integration suite to be in the integration stage. Most of it is automatically generated from the matrix.
Do we have to specify each build explicitly? It's a lot of work to do so that way.

@peshay

peshay commented Jul 10, 2017

@BanzaiMan Thanks, it works now. At first I had real syntax issues and tried to fix them with the linter, but then got stuck on that jobs key. It works like a charm now. :)

@asifdxtreme

asifdxtreme Jul 14, 2017

This seems to be a very interesting feature.
I have already started using this feature in my projects and I am happy to see all my Build, UT and IT in different stages in Travis CI.

It would be very nice if I could see the status of these individual stages on my GitHub pull request page.


@weitjong

weitjong Jul 15, 2017

Just had time to test the new feature and I'm loving it so far! Kudos to the Travis team.

From my tests I have found an undocumented feature. I could name the stage from the build matrix by adding a top-level stage in my .travis.yml, like so:

language: cpp
compiler:
  - gcc
  - clang
dist: trusty
sudo: false
addons: {apt: {packages: &default_packages [doxygen, graphviz]}}
env:
  global:
    - numjobs=4
  matrix:
    - FOO=foo1
    - FOO=foo2
    - FOO=barNone
stage: build test
before_script: echo "before script"
script: echo $FOO
after_script: echo "after script"
matrix:
  fast_finish: true
  exclude:
    - env: FOO=barNone
  include:
    - stage: more build test
      env: FOO=bar
      addons: {apt: {packages: [*default_packages, rpm]}}
    - stage: deploy
      env: FOO=foobar
      addons: null
      before_script: null
      script: echo $FOO
      after_script: null


webknjaz added a commit to aio-libs/multidict that referenced this issue Jul 15, 2017

@mpkorstanje

mpkorstanje commented Jul 16, 2017

By canceling all but one job in the pipeline I've gotten a job stuck in the yellow "created" state.

https://travis-ci.org/cucumber/cucumber-jvm/builds/254134986?utm_source=github_status

Steps to reproduce:

  1. Start the job.
  2. Cancel all deploy and all but one test job.
  3. Fail the test job.

I would expect the build to be either marked canceled or failed.

@Griffon26

Griffon26 Jul 17, 2017

I've put the coverity_scan plugin in a stage. See here: https://github.com/Griffon26/vhde/blob/master/.travis.yml#L61

Should that work? I assumed so, because I can also override the global apt plugin settings in a stage.

When the job runs the coverity scan is skipped and I see no logging whatsoever related to the coverity scan plugin: https://travis-ci.org/Griffon26/vhde/jobs/254260753


@svenfuchs

svenfuchs Jul 19, 2017

Member

@shepmaster Thanks for the additional input. I've added your example to our internal tracking issue.

@roperto Thanks for the suggestion. It seems to me that your use case would be covered by adding more filtering/condition capabilities, which is something that is on our list.

@colby-swandale Sorry for the late response. If you still have this issue/question it might be best to get in touch with support via email support@travis-ci.com

svenfuchs (Member) commented Jul 19, 2017

@shepmaster Thanks for the additional input. I've added your example to our internal tracking issue.

@roperto Thanks for the suggestion. It seems to me that your use case would be covered by adding more filtering/condition capabilities, which is something that is on our list.

@colby-swandale Sorry for the late response. If you still have this issue/question it might be best to get in touch with support via email support@travis-ci.com

@EmilDafinov @alorma @seivan Thanks for the suggestion. Yes, allowing a name and/or description for jobs has been suggested a few times, and it's on our list of things to consider.

@webknjaz Thanks for the suggestion on improving the message for allowed failures on our UI. I'll forward this to our UI/web team.

@ghedamat @webknjaz Yes, you are right. Jobs in later stages end up in a canceled state, and restarting one job currently does not touch the state of any other jobs. That is sort of intended, even though in the context of build stages it might seem to make sense. I'll add reconsidering this behaviour to our list, but for now it seems unlikely for us to change this.

@aontas @ELD It sounds like your setup should be very possible. If you're still having this issue could you please get in touch with support via email? support@travis-ci.com

@ELD Thanks for the suggestion of a .travis.yml web tool/editor. We have that on our list.

@23Skidoo Thanks for the suggestions. There are no plans to introduce a more complicated pipeline setup at this time. However, we're collecting use cases and thoughts to be considered in a future iteration. If you could outline your case in more detail, that would be valuable input. Making cache slugs more customizable is on our list.

@envygeeks Thanks for the suggestions. Listing stage names, and specifying their order is one improvement pretty high on our list. I'm not sure I understand what you mean by "global matrixes were respected when it comes to env" ... could you elaborate? Also, the example linked by @skeggse should work. If you still have this issue, could you get in touch with support via email? support@travis-ci.com

@timofurrer Development on this has not started, yet, no, sorry. We're still in the planning phase for the most part. Also, thanks for the pointer about the missing message. I'll forward that to our UI/web team.

@SkySkimmer Thanks for the detailed writeup on your case. Do I understand this correctly that you'd need either more flexibility in using our built-in cache feature (i.e. customize cache slugs according to your needs), or need a different way of sharing build artifacts between jobs?

@jsmaniac Thanks for documenting this. I'll open a ticket to consider adding this to our documentation.

@pawamoy Thanks for the report, and for the suggestion. I'll open a ticket about the bug, and add your suggestion for a separate color to our list of things to consider.

@bsideup Thanks for the suggestion. This has come up a few times, and it's high on our list of things to consider for the next iteration.

@leighmcculloch @asifdxtreme Thanks for the suggestion. I understand integrating with the GitHub UI in more detail would be desirable. However, the GitHub commit status API has a couple issues for us, and we're essentially hitting their limits, and as far as I understand they're trying to figure out improvements. So this might not be the best time for us to make such a huge change. I'll still add your suggestion to our list of things to re-evaluate this later.

@szpak Thanks for the report about the stage not being canceled. From your report this clearly seems to be a bug; I suspect a race condition between the component that cancels jobs and the component that schedules them. I'll open a ticket for this.

@stianst Thanks for the feedback! The ability to share artifacts between jobs on different stages more easily is definitely on our list. The ability to name jobs also is on the list of things to be considered.

@bkimminich Thanks for the suggestion. I can see how this makes sense in your use case. I've added it to our list, and we'll consider it in a future iteration. I'm not sure about the outcome of that evaluation yet, but I'm certain we won't prioritize it before work on a new travis.yml parser (and format specification) has been completed, so in any case this might not happen for a while.

@maciejtreder Improved support for sharing artifacts between stages is pretty high on our list.

@thedrow If I understand your case correctly then, yes, at the moment you'd need to specify these jobs individually. If you still have this issue, could you please email support? support@travis-ci.com

@weitjong Hah, this is interesting, thanks for documenting this here. I would not have expected this to work, but I can now see how this works accidentally. I'd recommend not relying on it too much for the time being, even though it might actually make sense to add it as an official feature.

@mpkorstanje Thanks a lot for the report. This clearly is a bug. I'll open a ticket for this and look into it.

@Griffon26 From what I can tell I would guess this should work, but it might be something specific to coverity_scan. I'd recommend getting in touch with support via email support@travis-ci.com.
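Picking up on the reply to @weitjong above: a hedged sketch of how that accidental artifact sharing presumably works. The assumption (not confirmed above) is that jobs with identical language/env settings resolve to the same cache slug, so a directory cached in one stage reappears in the next; directory and script names here are illustrative, not from any real config.

```yaml
# Relies on both jobs resolving to the same cache slug —
# accidental behaviour, not an official feature.
cache:
  directories:
    - shared   # cached after the build job, restored for the deploy job
jobs:
  include:
    - stage: build
      script: make && cp -r out/ shared/
    - stage: deploy
      script: ./deploy.sh shared/   # hypothetical deploy script
```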

SkySkimmer commented Jul 19, 2017

@SkySkimmer Thanks for the detailed writeup on your case. Do I understand this correctly that you'd need either more flexibility in using our built-in cache feature (i.e. customize cache slugs according to your needs), or need a different way of sharing build artifacts between jobs?

To use the cache to share artifacts I would need a way to share cache between arbitrary jobs (maybe this is what you mean by customize cache slugs) and also some idea of what happens when the cache is modified by parallel jobs.
(unforeseen issues might come up after that of course)

@dg's comment has been minimized.
keradus commented Jul 24, 2017

as you included only 2 jobs in the autogenerated matrix and let a nonexistent job fail

webknjaz commented Jul 25, 2017

@dg allow_failures lets you declare that if a running job (defined via include or the global matrix) matches one of the attributes listed in allow_failures, it shouldn't fail the whole build.
Just move its content to jobs.include and add only stage: Code Coverage to allow_failures, so that the job is matched by its stage name. It is also common to use the language version or an env var for this purpose.

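A minimal sketch of the layout webknjaz describes; the stage name matters for the match, while the script is a hypothetical placeholder:

```yaml
jobs:
  include:
    # the coverage job is defined like any other job
    - stage: Code Coverage
      script: ./collect-coverage.sh   # hypothetical script
  allow_failures:
    # matched purely by stage name: any job in this stage may fail
    # without failing the whole build
    - stage: Code Coverage
```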

BY-jk commented Jul 26, 2017

Update: I withdraw my question for now - I just saw that I had mixed up matrix, python versions, and jobs. Still, it is quite confusing how these features can be combined.

This is a great feature. However, I am struggling to get it to work. The way jobs and matrix expand into stages is a mystery to me. The documentation does not really help to this end.

I am able to produce this build: https://travis-ci.org/blue-yonder/sqlalchemy_exasol/builds/257765684
with this travis config: https://github.com/blue-yonder/sqlalchemy_exasol/blob/rootless_travis/.travis.yml

What I want are stages with two build jobs each (one Exasol5 one Exasol6 - see matrix), and one Python version used in each stage (either 2 or 3).

Ignore for a second that the build is failing - I have a version in the commit history where the builds worked. First, I'd like to see the stages well aligned.

Where am I holding it wrong?

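For reference, one way the layout asked for above could be written explicitly. This is a hedged sketch: the stage names, Python versions, and the EXASOL_VERSION variable are illustrative guesses, not taken from the linked config.

```yaml
jobs:
  include:
    # first stage: both database versions under Python 2
    - stage: python 2 tests
      python: "2.7"
      env: EXASOL_VERSION=5
    - stage: python 2 tests
      python: "2.7"
      env: EXASOL_VERSION=6
    # second stage: the same two jobs under Python 3
    - stage: python 3 tests
      python: "3.6"
      env: EXASOL_VERSION=5
    - stage: python 3 tests
      python: "3.6"
      env: EXASOL_VERSION=6
```

Listing each job under jobs.include avoids relying on how the global matrix expands into stages, which is the confusing part.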

dholdren commented Jul 26, 2017

I'd like it if later stages ran once failed jobs from earlier stages are restarted and pass.

e.g. If I have two stages, "Test" and "Deploy", and "Test" is composed of 4 jobs, and one of those 4 fail, I can restart it. But if it then passes, the "Deploy" stage doesn't automatically run, and I have to manually start it.


BanzaiMan (Member) commented Jul 26, 2017

I'm locking this issue for the time being. Most of the bug reports and feature requests are now understood.

Thanks.


travis-ci locked and limited conversation to collaborators Jul 26, 2017

svenfuchs (Member) commented Aug 29, 2017

Accidentally closed this by referring to one comment here in a pull request. Thanks @ljharb for the pointer! ❤️

In this case, however, since @BanzaiMan already locked it, I guess it's fine. My tentative plan is to open a new "beta feature feedback issue" once I ship iteration 2 (which should happen soonish), and then also include something like an FAQ at the very top (so we don't get the already answered questions all over again).


svenfuchs (Member) commented Sep 13, 2017

We have shipped "Build Stages Iteration 2", fixing several bugs, and introducing "Build Stages Order" as well as "Conditional Builds, Stages, and Jobs".

See our blog post for details: https://blog.travis-ci.com/2017-09-12-build-stages-order-and-conditions

We have also opened a new beta feature feedback issue here: #28

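The "Build Stages Order" and conditions features announced above allow stages to be listed explicitly and gated by conditions. A brief sketch of what that can look like; treat the blog post above as the authoritative reference for the exact syntax:

```yaml
stages:
  - test
  # the deploy stage is skipped entirely except on the master branch
  - name: deploy
    if: branch = master
```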
