
Environment mapping broken when using multiple odsComponentPipelines #394

Closed
segfault16 opened this issue Jul 2, 2020 · 14 comments · Fixed by #408
Labels
bug Something isn't working

Comments

@segfault16
Contributor

In one of my Jenkinsfiles I'm using two consecutive ODS pipelines to:

  1. Build an image for some codegen functionality
  2. Generate code using the image built in step 1 and build the images of the components

The Jenkinsfile looks something like this:

// First pipeline to build the image that is needed as a container for the agent pod in the second pipeline
odsComponentPipeline(
  imageStreamTag: "cd/jenkins-slave-base:2.x",
  branchToEnvironmentMapping: [
    '*': 'cd'
  ]
) { context ->
  def res = odsComponentStageBuildOpenShiftImage(context, [resourceName: 'proto-codegen', dockerDir: 'proto-codegen'])
  stageTagLatest(context, res.image)
}

odsComponentPipeline(
  podContainers: [
    containerTemplate(
      name: 'jnlp',
      image: "${dockerRegistry}/cd/jenkins-slave-base:2.x",
      workingDir: '/tmp',
      alwaysPullImage: true,
      args: '${computer.jnlpmac} ${computer.name}',
      serviceAccount: 'jenkins'
    ),
    containerTemplate(
      name: 'protoc',
      image: "docker-registry.default.svc:5000/biob-cd/proto-codegen:latest",
      workingDir: '/tmp',
      alwaysPullImage: true,
      ttyEnabled: true,
      command: 'cat'
    ),
    containerTemplate(
      name: 'golang',
      image: "golang:1.14",
      workingDir: '/tmp',
      ttyEnabled: true,
      command: 'cat'
    ),
    containerTemplate(
      name: 'helm',
      image: "alpine/helm:3.1.2",
      workingDir: '/tmp',
      ttyEnabled: true,
      command: 'cat'
    ),
  ],
  branchToEnvironmentMapping: [
    'master': 'test',
    'staging': 'test',
    'develop': 'dev',
    '*': 'dev',
  ]
) { context ->
  stageDecrypt(context)
  stageProtoCodegen(context)
  stageBackend(context)
  stageFrontend(context)
  stageDeploy(context)
}

In the second pipeline, odsComponentStageBuildOpenShiftImage (called from stageBackend and stageFrontend) executes with target project cd instead of the test or dev specified in the branchToEnvironmentMapping of the odsComponentPipeline.

This is due to the OpenShiftService being registered only once:
https://github.com/opendevstack/ods-jenkins-shared-library/blob/ea285ce/src/org/ods/component/Pipeline.groovy#L124-L132
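
That guard means the service, and the targetProject baked into it, survives from the first odsComponentPipeline into the second. A minimal sketch of the pattern (paraphrased; names approximated from the linked source, not verbatim):

if (!ServiceRegistry.instance.get(OpenShiftService)) {
  // Only the FIRST odsComponentPipeline invocation in a Jenkins run
  // creates the service; later invocations silently reuse it,
  // including the targetProject it was constructed with.
  ServiceRegistry.instance.add(OpenShiftService,
    new OpenShiftService(steps, logger, context.targetProject))
}
def openShiftService = ServiceRegistry.instance.get(OpenShiftService)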

segfault16 added the bug label Jul 2, 2020
@clemensutschig
Member

@segfault16 hmmmm - this was never really supported - sort of luck that it worked... it's a pretty big change to make this fly on master again. Is this critical, or do you have a way to split this up?

The above will also not work in the context of the release manager... @michaelsauter FYI

@michaelsauter
Member

@clemensutschig I guess we could attempt to reset the registry at the start in the component pipeline if we are not in an orchestration context?

@clemensutschig
Member

clemensutschig commented Jul 2, 2020

@michaelsauter - that's an idea :) Do we have this problem somewhere else? ... The fix is very surgical - just remove the if ... but again, in the above case, this is not working with the release manager...
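
A sketch of that surgical fix, building on the guard shown earlier (the inOrchestrationPipeline flag is hypothetical, and this assumes add() overwrites an existing registry entry):

def svc = new OpenShiftService(steps, logger, context.targetProject)
if (inOrchestrationPipeline) {
  // Orchestration (MRO) case: keep the register-once guard.
  if (!ServiceRegistry.instance.get(OpenShiftService)) {
    ServiceRegistry.instance.add(OpenShiftService, svc)
  }
} else {
  // Plain component pipeline: always re-register, so each
  // odsComponentPipeline binds the service to its own targetProject.
  ServiceRegistry.instance.add(OpenShiftService, svc)
}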

@michaelsauter
Member

michaelsauter commented Jul 2, 2020

Yes, no idea how to make the above work with the release manager :) So I'm not sure we actually want to open that box and support multiple pipelines officially ... Would there be a way for you to achieve the same result within one pipeline? One question toward that end: why are you building the builder image in cd? It looks like you generate it on every pipeline run anyway, so you might as well build it in the target project, no? @segfault16

@clemensutschig
Member

@michaelsauter - yes - we will not open the can

@clemensutschig
Member

clemensutschig commented Jul 2, 2020

I assume this will fail in https://github.com/opendevstack/ods-jenkins-shared-library/blob/master/src/org/ods/orchestration/util/MROPipelineUtil.groovy#L353 - or only deliver the last artifacts ... no idea ... I'll try it

Correction: this works :) (at least on my not-quite-latest master)

I copied the odsPipelineStage and the last build / deployments were taken...

@michaelsauter - can you fix this as part of your open PR ... :)

clemensutschig added this to To Do in OpenDevStack 3.0 via automation Jul 2, 2020
@segfault16
Contributor Author

I'm building it every run since I want the tooling to be up to date with what we have in our git repository. We're using the same Dockerfile for generating code on local dev and thus can make sure that every developer has the exact same environment (as with multi-stage Dockerfiles for the separate components). This is also why I want to keep this image in the same repo as the components.

Building on each run doesn't add too much overhead in general, since Docker's caching mechanisms work very well. And with frontend build times for the optimised prod code being long anyway, the time to build this image is negligible.

@michaelsauter
I cannot build the image in the same pipeline as I'm using it; it's a chicken-and-egg problem... That's why I'm using two pipelines. I also tried adding a separate Jenkinsfile (e.g. Jenkinsfile_codegen) with a manually added Jenkins job, without any success. I could try using the same environment mapping as a workaround, or have a parameter to switch between the different odsComponentPipelines or something like that, but I'd rather not.

FYI, I'm not using the release manager. My requirements are pretty basic:

  • Build Docker images in a dedicated namespace, tagged with the git short rev (without the build number, btw...)
  • Publish build information to Bitbucket to ensure git flow
  • Be able to keep the build environment in the same repo so there are no "works on my machine" discussions

@clemensutschig
Member

@michaelsauter - that would lend itself towards re-initialising the OpenShift service in the non-MRO case ...

@michaelsauter
Member

@segfault16 Can you explain the "chicken-and-egg problem"? I am not sure I understand why you cannot put the following into the main pipeline, before you run the other stuff?

def res = odsComponentStageBuildOpenShiftImage(context, [resourceName: 'proto-codegen', dockerDir: 'proto-codegen'])
stageTagLatest(context, res.image)

@clemensutschig Yes, we can re-init OpenShiftService; that probably won't hurt. But you still might run into more problems down the road ...

@segfault16
Contributor Author

segfault16 commented Jul 3, 2020

@michaelsauter the snippet you mentioned builds docker-registry.default.svc:5000/biob-cd/proto-codegen:latest, which is used as a container in the agent pod for this pipeline.
So a) the image needs to be built by some other mechanism for the first run of the pipeline, otherwise the agent pod won't start, and, more severely, b) the pipeline agent will never have the current image but the one from the previous build - on whatever branch that might have been.

@segfault16
Contributor Author

segfault16 commented Jul 3, 2020

@michaelsauter @clemensutschig
Maybe as a suggestion, although I'm not familiar with the orchestrationPipeline:

  • Initialise services in the global serviceRegistry singleton on orchestrationPipeline start
  • In the component pipeline, inject the serviceRegistry singleton instance into the context if set; otherwise provide a scoped serviceRegistry to the context and use that

In general I think it's a good idea to use singletons only where necessary (as may be the case for the advanced orchestration) and to rely on dependency injection otherwise.
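
A rough Groovy sketch of that suggestion (all names hypothetical, and it assumes ServiceRegistry has a public constructor):

// Reuse the global registry only when the orchestration pipeline has
// already populated it; otherwise hand the context a registry that is
// scoped to this component pipeline run.
def registry = orchestrationContextActive ? ServiceRegistry.instance : new ServiceRegistry()
context.serviceRegistry = registry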

@michaelsauter
Member

@segfault16 Ah, now I got it. The only way around it would be to do the build steps you are running to generate biob-cd/proto-codegen in the agent image itself. But I understand that this is not ideal.

Injecting the service registry is not possible, as the only means to communicate between orchestration pipeline and component pipeline is via env vars (as the orchestration pipeline just executes the Jenkinsfile of the component).

Resetting as mentioned above in the non-MRO case would unblock you, so maybe we just do that - even though that does not mean using multiple pipelines is a supported use case ... sorry :(

@clemensutschig
Member

@michaelsauter - did you fix this along the way now?

@michaelsauter
Member

No, this isn't fixed yet. I have been thinking about this a little. As you know, my original approach was to put the project into the OpenShiftService, under the assumption that it won't change during one pipeline. I am no longer sure that this isn't too restrictive. For example, we recently discussed a use case (#405) that would benefit if one could override the target project in each stage.

With that in mind, I can either "just get it to work" by resetting the OpenShiftService in the non-MRO case, or I can change the approach to always pass the project ... hmm :) I want to work on the Jenkins plugins issue today, so I'll leave this open for now.
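
For illustration, the two options differ roughly like this (method names hypothetical):

// Option A: reset. The project stays fixed at construction time, and
// the service is simply re-created per pipeline in the non-MRO case.
def openShiftA = new OpenShiftService(steps, logger, context.targetProject)
openShiftA.startBuild('backend')

// Option B: always pass the project. The service is project-agnostic,
// so any stage (or a second pipeline) can target a different project.
def openShiftB = new OpenShiftService(steps, logger)
openShiftB.startBuild(context.targetProject, 'backend')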

OpenDevStack 3.0 automation moved this from To Do to Done Jul 14, 2020