
WIP: Rebase BuildStrategy #656

Open · imjasonh opened this issue Mar 10, 2021 · 13 comments
@imjasonh (Contributor)

(This is a precursor to a full EP on the topic, to get ideas out and start discussion)

In a "rebase" operation, an original image's base image layers are swapped out with another set of layers from another image.

Users might want to do this for a couple of reasons:

  • if a vulnerability is found in a base image, you can replace those layers in an application image using only registry API operations, without having to rebuild the image from source on top of the new base. This can be especially useful if you need to quickly produce lots of images with the vulnerability fixed, or if you no longer have access to the original source.
  • if you'd prefer a different base image; for example, an image is built on a large-ish base, and you'd prefer to have it based on a slimmer image like ubi or openjdk-slim, etc.

(the second case, producing slimmer built images, seems to be a motivating use case for runtime config)

In either case, rebasing is not guaranteed to be safe by default, and care must be taken when building to ensure your application code can handle being rebased onto other base images. Rebasing onto a base image with different versions of system libraries or installed runtimes might produce an image that can't execute. Rebased images should still be subjected to tests before being deployed to production. Buildpacks, for example, takes care to separate application layers from base image layers, so that rebasing is more likely to be successful.

To enable rebasing in Shipwright, we could define a new rebase BuildStrategy, which would expect an input image, parameters for the old and new base images (or could expect these to be stored in OCI image annotations on the image), and an output location where the newly rebased image should be published. (As an alternative, we could define a new Rebase CRD alongside Build, since they're fairly separate operations; I'm not sure that's necessary though, and a BuildStrategy is easier to iterate on.)
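A minimal sketch of what such a strategy could look like, assuming parameterized BuildStrategies land and using crane rebase as the step (the strategy name, step image, and flag spellings are illustrative assumptions, not a settled API):

```yaml
# Hypothetical sketch of a rebase strategy; not a settled API.
apiVersion: shipwright.io/v1alpha1
kind: ClusterBuildStrategy
metadata:
  name: rebase
spec:
  buildSteps:
    - name: rebase
      # Illustrative image name; any image with the crane binary works.
      image: gcr.io/go-containerregistry/crane:latest
      command: ["crane", "rebase"]
      args:
        # Flag spellings may differ across crane versions.
        - --old_base=$(params.old_base)
        - --new_base=$(params.new_base)
        - $(build.output.image)
```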

Once BuildRun triggering is supported, rebase builds could watch for updates to a base image and automatically produce rebased images built on that new base image.

References:

cc @sbose78

@zhangtbj (Contributor)

Good topic, Jason!

Actually, we also care about this, because we run the Build service in our official environment. And we care about the first case very much.

My two cents:

  • I would like to think about this scenario from the end-user experience. In our UI, or even in the OpenShift operator UI, the user creates a Build and then submits a BuildRun to execute it. The Build is a template that stores the initial settings, while a rebase is an execution-level request rather than a definition-level one. So if the end user wants to rebase an image, I wonder whether the BuildRun is the better place for a new rebase section, like:
```yaml
apiVersion: shipwright.io/v1alpha1
kind: BuildRun
metadata:
  name: buildpack-nodejs-buildrun
spec:
  buildRef:
    name: buildpack-nodejs-build
  serviceAccount:
    generate: true
  rebase: registry.access.redhat.com/ubi8-minimal
```
  • The second thing is about Kaniko and other build strategies. I think buildpacks can be handled easily, but it is not easy to define rebase for a Dockerfile build, because the base image is defined and controlled in the Dockerfile, and without the Kaniko build process I am not sure how to ensure the image is rebased correctly.

So when we were thinking about this requirement before, we only thought of buildpacks at first. But I think it is a good topic, and we can think about it together :)

@imjasonh (Author)

Thanks for your response!

I had imagined this being phrased as another type of parameterized BuildStrategy, but one that didn't require any source input, only an image to rebase.

```yaml
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: rebase
spec:
  inputs:
  - name: image
    type: Image
    value: gcr.io/my/image
  strategy:
    name: rebase
    params:
      old_base: ubuntu
      new_base: registry.access.redhat.com/ubi8-minimal
```

(This assumes parameterized strategies and Adam's inputs EP land)

Then you could run this any number of times as new gcr.io/my/image images get built, or as new ubi8-minimal images get published. If neither of those images has changed, you could run it over and over and it would just be a no-op.

The real reason I want this as a Build instead of a configuration of a BuildRun is that I want to be able to trigger these automatically when base images change.

I think you're describing another feature where Builds (or Runs?) can immediately rewrite the base image of images they produce, sort of like how runtime images work today, but purposefully scoped down (which I support!). I think we could live with both features, but for this issue I'm more interested in the first.

> The second thing is about Kaniko and other build strategies. I think buildpacks can be handled easily, but it is not easy to define rebase for a Dockerfile build, because the base image is defined and controlled in the Dockerfile, and without the Kaniko build process I am not sure how to ensure the image is rebased correctly.

Yeah, Dockerfile builds are less likely to be safe in every case, but if someone knows it's safe there's no reason to disallow it. Once the standard annotations are in, it shouldn't be hard to make Kaniko write those at least, so you won't need params in most cases.
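For reference, the annotations being standardized for this (the OCI base-image annotations) would carry the old-base information; rendered here as YAML for readability, with invented values:

```yaml
# The keys are the proposed OCI base-image annotations; values invented.
annotations:
  org.opencontainers.image.base.name: docker.io/library/ubuntu:20.04
  org.opencontainers.image.base.digest: sha256:<digest-of-the-old-base>
```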

@zhangtbj (Contributor) commented Mar 11, 2021

Thanks Jason,

> I had imagined this being phrased as another type of parameterized BuildStrategy, but one that didn't require any source input, only an image to rebase.

If it is a new Build based on the rebase strategy, that is why I was thinking about the user experience: the user can list their Builds from the UI or CLI and know what each one is, but if some of them are rebase Builds, people may be confused and not know the relationship between the original Build and the rebase Build. And I don't know whether that can be shown and used well from a UX perspective.

And I agree that auto-trigger and one-time base-image rebase are two different features :)

I thought this issue was about a one-time base-image rebase, but if it is also about auto-triggering, I think it is more complex and we should design it together with triggers. :)

When I compare with other build services, like OpenShift Build v1 or Google/AWS build services (I forget which one...), the trigger-related config is integrated into the build config:

This is the OpenShift Build v1 page:
[Screenshot: OpenShift Build v1 trigger configuration]

So for triggers, I am wondering whether we could introduce a new trigger-related CRD and attach it to the original Build, or add a new property inside the Build directly? I think that would be a more consistent user experience. :)

> Yeah, Dockerfile builds are less likely to be safe in every case, but if someone knows it's safe there's no reason to disallow it. Once the standard annotations are in, it shouldn't be hard to make Kaniko write those at least, so you won't need params in most cases.

Sure thing, it would be better if we support this across multiple build strategies.

@imjasonh (Author)

I'd like to be really careful not to expand the scope of this to include triggering. That's a separate EP entirely, which should be designed in isolation in such a way that rebuilds and rebases both work equally well. I'd like us to design that too, but separately from supporting rebasing.

Rebasing will be supported as a one-time operation for now (just as one-time build runs are supported now), but one which can be triggered in the future. I like to think of it as just another kind of BuildStrategy which can be requested and run, or (in the future) triggered.

It's possible a user would want to request a build from source, then as a follow-up operation, request a rebase of the resulting image on top of another image. That use case is supported today using runtime images, but I think that's a bit overly complex for this use case (it supports adding copy, run, etc., directives), and it can possibly lose information about the original base image. I'd like to understand that use case a bit more as part of producing an EP for this work.

@zhangtbj (Contributor) commented Mar 11, 2021

> I'd like to be really careful not to expand the scope of this to include triggering.

Agree.

> Rebasing will be supported as a one-time operation for now (just as one-time build runs are supported now), but one which can be triggered in the future. I like to think of it as just another kind of BuildStrategy which can be requested and run, or (in the future) triggered.

If it is supported as a one-time operation for now but can be triggered in the future, then it is still related to triggering, so I think we should be careful and design it with future feature integration, like triggers, in mind.

I totally agree this is a valid scenario we should support. I am just wondering whether, for this kind of requirement, a new rebase BuildStrategy plus a new rebase Build is good from a UX perspective.

@imjasonh (Author)

> I totally agree this is a valid scenario we should support. I am just wondering whether, for this kind of requirement, a new rebase BuildStrategy plus a new rebase Build is good from a UX perspective.

The reason a rebase BuildStrategy is attractive to me is that it doesn't require any (more) changes to the API to support it. It's just another BuildStrategy with sort of odd semantics. We can modify the definition until we're happy with it, and users can provide their own or uninstall rebase support if they want to. Then if we decide it's a common enough and well-understood-enough concept, we can decide whether we want to promote it as a first-class concept alongside Build.

@qu1queee (Contributor) commented Mar 15, 2021

My two cents so far are:

  • We need a good understanding of how such a rebase strategy would look. For example:
    • For Buildpacks, we need to check whether their binaries already provide pack rebase, as mentioned in their docs. If so, users might only need to choose a strategy name without any further new parameters.
    • For Kaniko, we need to check whether Kaniko supports this, or whether we need to develop some glue code. For Kaniko I do see the need for new parameters in our API.
  • API-wise, I like both ideas. I think Jason's is related to this thread: https://kubernetes.slack.com/archives/C019ZRGUEJC/p1615387780007200. I think Jordan's proposal is good in the sense that you do not need a new Build only for rebasing; you could use an existing one and just trigger a new BuildRun for the purpose of rebasing.
  • I'm confused about the overlap with the runtime feature. I see this as a replacement for the runtime feature, which I think would be for the better. Is this the case? To be more precise, rebase applies to both Dockerfile-less and Dockerfile-based strategies, while runtime is really only for Dockerfile-based ones.

@gabemontero (Member)

I would agree with @qu1queee that while both the @imjasonh and @zhangtbj ideas introduced here share a common goal of helping to "trim images" or "make them leaner", as cited in https://github.com/shipwright-io/build/blob/master/docs/proposals/runtime-image.md, they in fact do not overlap much in the scenarios they address. I'm also not 100% sure @imjasonh was suggesting that, but I think it helps to explicitly confirm that the commonality most likely stops at the shared goal.

I'm possibly oversimplifying, but from what I've read, rebase seems to == replace layers, whereas https://github.com/shipwright-io/build/blob/master/docs/proposals/runtime-image.md == minimize the size of the layers you add on top of the existing layers.

I also think of https://github.com/shipwright-io/build/blob/master/docs/proposals/runtime-image.md as having some sub-goals of

But I'm of course curious as to what @sbose78 and @otaviof as the authors of the EP think, and if they share my recollections / inferences.

https://github.com/shipwright-io/build/blob/master/docs/proposals/runtime-image.md also bleeds into what could be included in sources or inputs, or however we want to organize the "multiple content inputs" to building an image. Some of that is captured in @adambkaplan's #544.

With all that ^^, I'm however not sure the @imjasonh and/or @zhangtbj ideas here are a replacement for https://github.com/shipwright-io/build/blob/master/docs/proposals/runtime-image.md.

However, I think I could see some form of both proving complementary and useful to users.

@imjasonh (Author)

> I'm also not 100% sure @imjasonh was suggesting that, but I think it helps to explicitly confirm that the commonality most likely stops at the shared goal.

I am explicitly not interested in adding image-slimming functionality to the core Shipwright API; that's something that can already be expressed better in a Dockerfile or buildpacks, and those features and communities are far more established and moving faster, so we'll never catch up.

I'd like to deprecate and phase out runtime-image, and I believe it should be relatively straightforward to do today since (we think) it has relatively little use.

> [runtime image has a sub-goal of]

Multi-stage docker builds do exist today though, and I think there's far more understanding and support for them now. I don't think this is a feature of OpenShift Builds we need to carry over to Shipwright.

> • For Buildpacks, we need to check whether their binaries already provide pack rebase, as mentioned in their docs. If so, users might only need to choose a strategy name without any further new parameters.
> • For Kaniko, we need to check whether Kaniko supports this, or whether we need to develop some glue code. For Kaniko I do see the need for new parameters in our API.

Rebasing doesn't require pack rebase, and doesn't require (much, if any) new glue code to be written. pack rebase uses code in go-containerregistry's mutate.Rebase (that I wrote, incidentally) to reassemble layers efficiently. We can reuse that logic, either by repackaging it in our own code, or by having a BuildStrategy that calls crane rebase (that I also wrote, and am improving now).

mutate.Rebase and crane rebase both work against any image, however it was built. If it was built with a tool that's rebase-aware, it can rely on hints baked into the image to make rebasing even easier.
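As a sketch of the rebase-aware case for a user, reusing the hypothetical field names from the earlier example, the old base could be omitted entirely:

```yaml
# Hypothetical: old_base is omitted because a rebase-aware tool can
# read it from hints (e.g. base-image annotations) baked into the image.
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: rebase-onto-ubi
spec:
  inputs:
  - name: image
    type: Image
    value: gcr.io/my/image
  strategy:
    name: rebase
    params:
      new_base: registry.access.redhat.com/ubi8-minimal
```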

> • I'm confused about the overlap with the runtime feature. I see this as a replacement for the runtime feature, which I think would be for the better. Is this the case? To be more precise, rebase applies to both Dockerfile-less and Dockerfile-based strategies, while runtime is really only for Dockerfile-based ones.

I'd like to deprecate and phase out runtime regardless of whether or how rebase is included, full stop. Some of the functionality can be simulated with rebase, but I think they solve different problems.

@gabemontero (Member)

> > I'm also not 100% sure @imjasonh was suggesting that, but I think it helps to explicitly confirm that the commonality most likely stops at the shared goal.
>
> I am explicitly not interested in adding image-slimming functionality to the core Shipwright API; that's something that can already be expressed better in a Dockerfile or buildpacks, and those features and communities are far more established and moving faster, so we'll never catch up.
>
> I'd like to deprecate and phase out runtime-image, and I believe it should be relatively straightforward to do today since (we think) it has relatively little use.
>
> > [runtime image has a sub-goal of]
>
> Multi-stage docker builds do exist today though, and I think there's far more understanding and support for them now. I don't think this is a feature of OpenShift Builds we need to carry over to Shipwright.

Yep, I'm OK with that. I was mostly trying to convey background and context. And in this particular case, explaining how to move from our old thing to the new thing should not be difficult. And I have not heard of, nor can I think of, a reason a user would prefer to express copying a subset of content from an image via k8s types and YAML instead of Dockerfile commands, or via any analogous functionality across the spectrum of image-building tools (present or future).


@qu1queee (Contributor)

Yes, I think the alternative to the runtime feature is a user Dockerfile that follows multi-stage best practices.

@imjasonh for the rebase strategy, and speaking only about buildpacks: one needs to rebase the run image (see the illustration in the docs). If you translate this to the crane rebase command, the old/new base needs to be the run image, which is an existing layer of your app image if you previously built with the creator. So I'm wondering whether, for buildpacks, instead of using crane we should stick to what the community buildpacks/lifecycle already provides. What do you think?

Or are you saying that, in general, any rebase strategy should have a single step that uses a standard tool/mechanism, like crane?

@imjasonh (Author)

> Or are you saying that, in general, any rebase strategy should have a single step that uses a standard tool/mechanism, like crane?

Using pack rebase would only work to rebase images produced by buildpacks (AFAIK), since it relies on base image information in image annotations.

crane rebase would be capable of rebasing any image, either by requiring users to explicitly specify base image information, or using a standard annotation, which buildpacks should also set.

@qu1queee (Contributor)

> Using pack rebase would only work to rebase images produced by buildpacks (AFAIK), since it relies on base image information in image annotations.

Yes, that's my understanding. I still think we should also have a pack rebase strategy, since we already have a pack build strategy for building. Maybe something to consider among the alternatives in your upcoming EP. I think having pack rebase is convenient, given that our default Dockerfile-less strategy is Paketo (a Cloud Native Buildpacks implementation).
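For completeness, a buildpacks-native rebase strategy could wrap the lifecycle's rebaser binary rather than crane; a rough sketch, assuming the lifecycle image and flags as documented by the Cloud Native Buildpacks project (all names illustrative):

```yaml
# Hypothetical buildpacks-specific rebase strategy sketch.
apiVersion: shipwright.io/v1alpha1
kind: ClusterBuildStrategy
metadata:
  name: buildpacks-rebase
spec:
  buildSteps:
    - name: rebase
      image: buildpacksio/lifecycle:latest  # tag illustrative
      command: ["/cnb/lifecycle/rebaser"]
      args:
        # -run-image selects the new run image to rebase onto.
        - -run-image=$(params.run_image)
        - $(build.output.image)
```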
