
RFC: `set_pipeline` step #31

Conversation

vito (Member) commented Jul 11, 2019

Rendered

Please comment on individual lines, not at the top-level.

Signed-off-by: Alex Suraci <suraci.alex@gmail.com>

The pipeline would be configured within whichever team the build execution belongs to.

The pipeline would be automatically unpaused, as opposed to `fly set-pipeline`, which pauses newly created pipelines by default. The assumption here is that if you're automating `set_pipeline`, you're not just kicking the tires and can probably trust that the pipelines you're configuring are correct, at least enough to have made it into version control.
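For illustration, here's a minimal sketch of how the step might appear in a job plan (the names, the `file:` field, and the directory layout are assumptions for the sake of example, not settled schema):

```yaml
plan:
- get: ci                          # fetch the repo holding the pipeline config
- set_pipeline: my-app             # assumed pipeline name; set within the build's team
  file: ci/pipelines/my-app.yml    # assumed path within the fetched artifact
```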

Rukenshia commented Jul 17, 2019

We store our CI pipeline along with our code in the source code repository, in a `ci/` directory. Would I integrate the `set_pipeline` step as the first step after the `get:` of my source code? And would changing the pipeline mid-run do anything to the active execution (the version of my source code wouldn't change, but I am doing a set-pipeline)?

vito (Author, Member) commented Jul 21, 2019

Yep, today it would typically follow a get step, as in the sketch below. The semantics of changing the pipeline mid-run are the same as with `fly set-pipeline` today.
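For the `ci/` directory layout described above, that might look something like this (a sketch; the resource name and file paths are assumptions):

```yaml
plan:
- get: source-code                    # the repo containing both the code and ci/
  trigger: true
- set_pipeline: my-pipeline           # same semantics as fly set-pipeline
  file: source-code/ci/pipeline.yml   # assumed path to the pipeline config
- task: unit                          # later steps still run the current build's plan
  file: source-code/ci/tasks/unit.yml
```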

Once Projects come along (#32) we can eliminate that get step and start providing better guarantees about keeping task code in sync with pipeline versions and such.

hstenzel commented Aug 1, 2019

I think this is a bit less flexible than the pipeline resource.

Today, we generate the pipelines.yaml and each pipeline itself from a Concourse job that scans a GitHub repository. We then create appropriate pipelines (one per included :) and let the Concourse pipeline resource deploy the whole thing.

With this proposed system, how could we deploy pipelines that are unknown at pipeline-writing time, as we can with the pipeline resource?

Additionally, this would have strange behavior across branches. If I describe my pipeline in a repository and then create a release branch or a feature branch, then by definition the same pipeline YAML will be present in each branch. This would be wrong and bad (tm), because now I'd have each branch updating the same artifact -- clearly not what should happen!

Could you elaborate a bit more on how such circumstances could be dealt with?

vito (Author, Member) commented Aug 1, 2019

> I think this is a bit less flexible than the pipeline resource.

It is indeed less flexible, at least in its first iteration. This is just the first step along our roadmap. 🙂 This RFC is not intended to replicate the pipeline resource's entire functionality; my goal is just to provide a teeny tiny primitive with basic functionality that we can build on.

For example, one thing this step won't do, as opposed to the concourse-pipeline resource, is allow you to configure multiple pipelines all at once. Instead, just like all the other steps, it only deals with one thing at a time. To set a bunch of pipelines at once, you would run multiple set_pipeline steps, just as you would run many tasks (see the sketch below). To set a dynamic set of pipelines, you'll have to wait until the rest of our roadmap is done, namely the across step (#29). Until then I would recommend that you continue to use the concourse-pipeline resource for that specific use case. Theoretically you could generate a pipeline with many set_pipeline steps, but that may be too many moving parts. 🤔
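Concretely, setting a handful of known pipelines would just mean repeating the step (a sketch with made-up names and paths):

```yaml
plan:
- get: ci                        # repo containing all the pipeline configs
- set_pipeline: app
  file: ci/pipelines/app.yml
- set_pipeline: infra
  file: ci/pipelines/infra.yml
- set_pipeline: docs
  file: ci/pipelines/docs.yml
```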

If you haven't yet, I would recommend checking out the v10 roadmap post and accompanying slides, which should paint a fuller picture of how pipelines will be automated, potentially across branches, by composing the set_pipeline step with the across step (#29) and instanced pipelines (#34). Finally, 'projects' (#32) will tie it all together.

Here's a slide showing a project that sets pipelines across all branches.
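In spirit, that slide's example might translate to something like the following once the across step (#29) and instanced pipelines (#34) exist (purely speculative; none of this syntax is settled):

```yaml
plan:
- get: ci
- across:
  - var: branch
    values: [master, release-5.0, feature-x]  # in practice, discovered dynamically
  set_pipeline: branch-ci                     # one pipeline instance per branch
  file: ci/pipelines/branch.yml
  instance_vars: {branch: ((.:branch))}       # hypothetical per-instance var
```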

Maybe it would help if I had a complete example somewhere showing how this would be used in the future. 🤔 I can add something under 'new implications' since that's where it claims to replace the concourse-pipeline resource.

> Additionally, this would have strange behavior across branches. If I describe my pipeline in a repository and then create a release branch or a feature branch, then by definition the same pipeline YAML will be present in each branch. This would be wrong and bad (tm), because now I'd have each branch updating the same artifact -- clearly not what should happen!

I'm not sure what you mean here - this sounds like a workflow problem, not a problem with the set_pipeline step, since the same challenges affect the concourse-pipeline resource. We faced the same challenge and solved it by extracting our pipeline configs into a separate ci repo. This repo is what we would convert into a 'project' which configures its branch pipeline template across all of our branches.

Does this all help?

hstenzel commented Aug 1, 2019

Thanks, I have checked out the roadmap post. I'm excited, but admittedly don't yet fully grok the whole thing.

The workflow problem is exactly what I'm trying to think about. Consider a whole bunch of repositories each of which might have several "interesting" branches all producing and consuming artifacts of different kinds.

What should a pipeline correspond to? Right now, we map a GitHub : to a Concourse :, and we have homegrown tools so that there is just one logical pipeline described in GitHub that is deployed several times over, in different ways, to produce the correct results. In this case, the "correct" results mainly mean getting inputs from the correct places and producing artifacts & loading them into the correct place.

The pipelines have to deploy properly to different teams and work correctly across branches and forks. So we have to be very clear about what goes into the repo itself.

Different variables need to be set based on credentials used, accounts used, administration zone used, etc. None of these belong in source control where they can be forked or branched.

I know these questions are bigger than this feature and I'll stay in the loop on the roadmap, but I also want to bring up these larger issues just to be sure that there will be a natural way to work in these more complicated environments with Concourse.

Thanks!

evanchaoli commented Oct 25, 2019

@vito Please consider supporting a pipeline setting itself. For example, in the set_pipeline step, you could configure pipeline-name: self, where self is a reserved word, and the step would simply set the pipeline it is running in. With self-set, the pipeline name and team are known from the current pipeline.
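Hypothetically, using the step syntax from this RFC, that would let a pipeline keep itself up to date (a sketch of the idea, not settled syntax):

```yaml
jobs:
- name: reconfigure
  plan:
  - get: repo
    trigger: true
  - set_pipeline: self            # hypothetical reserved word: update the current pipeline
    file: repo/ci/pipeline.yml    # name and team are taken from the running pipeline
```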

I see you have created issue #4254 to track the implementation of set_pipeline. Once this step is in place, it will simplify a lot of our use cases. I think the set_pipeline step could be implemented entirely on the ATC, calling the same function as the current set-pipeline API, with no need to start any container. If that's true and you're OK with it, I can contribute a PR, as we're eager for this feature.
