
Runtime Image Support #263

Merged
5 commits merged into master on Jul 14, 2020
Conversation

otaviof (Member) commented Jun 9, 2020

This pull request addresses the discussion in #183 as an initial approach to providing a runtime image. The runtime image is produced by a multi-stage build, based on the container image created by the BuildStrategy steps.

Enhancement Proposal

Please consider the design document added in this PR, describing the feature in detail.

Runtime Resource

The following example contains the modifications introduced by this PR. Please consider the .spec.runtime section.

---
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: nodejs-ex
spec:
  strategy:
    name: buildpacks-v3
    kind: ClusterBuildStrategy
  source:
    url: https://github.com/sclorg/nodejs-ex.git
  output:
    image: quay.io/otaviof/nodejs-ex:runtime
    credentials:
      name: quayio-nodejs-ex
  runtime:
    base:
      image: docker.io/node:latest
    workDir: /home/node/app
    run:
      - echo "Before copying data..."
    user:
      name: node
      group: "1000"
    paths:
      - $(workspace):/home/node/app
    entrypoint:
      - npm
      - start

In the .spec.runtime section the user can modify the runtime base-image attributes; the new attributes have been added to the Build resource. Please see the build_types.go file.

Generated Dockerfile

The runtime Dockerfile is generated from a Go Template, defined in the runtime_image.go file. Using the example Build above, it produces the following Dockerfile:

FROM quay.io/otaviof/nodejs-ex:runtime as builder

FROM docker.io/node:latest
RUN echo "Before copying data..."
COPY --chown="node:1000" --from=builder "/workspace/source" "/home/node/app"
WORKDIR "/home/node/app"
USER node:1000
ENTRYPOINT [ "npm", "start" ]

@openshift-ci-robot openshift-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 9, 2020
@openshift-ci-robot:

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci-robot openshift-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 9, 2020
sbose78 (Member) commented Jun 9, 2020

Thanks @otaviof , did you happen to try any Java use cases?

otaviof (Member, Author) commented Jun 11, 2020

did you happen to try any Java use cases?

@sbose78 yes, I have, and it works as expected. Let me share the examples here as well.

While testing this scenario I noticed the ENTRYPOINT was not being rendered correctly, and fixed it in the latest commit. :-)

@openshift-ci-robot openshift-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 15, 2020
otaviof (Member, Author) commented Jun 15, 2020

Community meeting notes:

  • Use the common Image output credentials for the runtime as well;
  • Get a review from other teams to have buildah run unprivileged;
    • Let's start with kaniko;
  • Let's write an enhancement proposal for this feature (better late than never);

@otaviof otaviof marked this pull request as ready for review June 18, 2020 15:57
@openshift-ci-robot openshift-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 18, 2020
@otaviof otaviof force-pushed the BUILD-27 branch 2 times, most recently from faed4c4 to 230ab84 Compare June 19, 2020 10:39
@otaviof otaviof requested review from zhangtbj and sbose78 June 22, 2020 10:28
otaviof (Member, Author) commented Jun 22, 2020

@qu1queee, @SaschaSchwarze0, @zhangtbj Would you please consider this PR when you have some time? I've also added an EP document to describe it: https://github.com/redhat-developer/build/blob/3590fff206333b62b8b5e5b2a527908190210791/docs/proposals/runtime-image.md

qu1queee (Contributor):

@otaviof yes, let me provide my comments between today and tomorrow.

gabemontero (Member) left a comment

Nothing jumped out at me from a code review perspective, and I think the builder/runtime abstractions easily resonate for any user familiar with multi-stage builds, which I think we can assume is pretty ubiquitous at this point.

But as you'll see below, the concern I've relayed in recent cabal meetings is "revitalized" if you will when I look at the BuildStep machinations needed.

Review comments on pkg/controller/buildrun/runtime_image.go (resolved)
SaschaSchwarze0 (Member) left a comment

Adding comments to the written proposal. Will go through the code changes later.

Review comments on docs/build.md and docs/proposals/runtime-image.md (resolved)
qu1queee (Contributor):

Hi @otaviof, thanks a lot for this PR. Let me add my comments on it; I have not yet reviewed the whole code in detail, because I would like to discuss the proposal first.

  • I find this advanced use case to be very much an edge case. My view of this project is that end-users are actually abstracted from the contents of a strategy (cluster or namespaced scope). A strategy is defined and managed by what @sbose78 recently called a strategy admin. The only case where an end-user and a strategy admin are the same person is mainly when using this project in development environments, e.g. kind, a personal k8s cluster, etc., where one must install the build operator themselves. From our side, at this point in time I really doubt that I, as a strategy admin, would spend time tweaking an existing strategy to introduce a build runtime image. But I'm open to this, don't get me wrong. All I'm saying is that this use case seems complex due to the number of dependencies a particular runtime (e.g. golang, ruby, nodejs, php, static files, etc.) can have, and trying to maintain a runtime image for each of them would be a maintenance nightmare.

  • If I understand correctly, in what you call the runtime Dockerfile we could have N stages, basically N FROM instructions. Due to the way Docker deals with multiple stages (as a DAG) and the lack of parallelism in Kaniko (which seems to be the binary you used in this PR to build the final image), this will make builds slower. Add to that the fact that we will most probably need N more containers (more expensive in terms of resources), as seen in https://gist.github.com/otaviof/edede792ad11aa4e6a8b8cf7a7eb001e#file-buildstrategy_buildpacks-v3_cr-lean-yaml-L83-L152 (this gist uses buildah bud, which I believe is obsolete).

  • It would be very helpful to get examples for the Dockerfile strategies we support, e.g. Kaniko and Buildah: how would this runtime image look with them? For example, Kaniko currently consists of a single step to build and push; how would the runtime image fit there? For Buildpacks I don't see value in using this runtime feature; I don't think it should be used with a Buildpacks strategy, especially because we want the strategy to detect the runtime for us and fetch all related dependencies for free.

  • @gabemontero brought up some good points in terms of security. We should not expose the whole container spec, but rather restrict this to particular fields under that object.

Sorry for the long message; I finally got the whole feature picture with this PR, so thanks a lot for this.

sbose78 (Member) commented Jun 25, 2020

From our side, at this point in time I really doubt that I, as a strategy admin, would spend time tweaking an existing strategy to introduce a build runtime image.

Note, the admin doesn't have to do anything. In the proposal, the strategy is tweaked on the fly in-memory and then consumed - which @SaschaSchwarze0 mentioned could be done at the 'generate taskrun' level directly.

qu1queee (Contributor):

@sbose78 thanks, yes, you are correct. There is still some particular knowledge the user needs to have prior to the runtime definition of the Build (similar to what @SaschaSchwarze0 mentioned around which directory to use from the base image). I will try to summarize the whole discussion in this message, to avoid extending it further:

  • I think the way the new step is implemented needs to be modified; it should use generate_taskrun.go to embed the new step with the Dockerfile content, as @SaschaSchwarze0 suggested.
  • The concern around pushing the base image first and then pulling it as the base layer for the multi-stage image is valid. I'm not sure if there is a better way of doing this, ideas? Also, keep in mind we use the same image URL, so we basically overwrite the same image twice (once when building via the strategy, a second time when we push the lean image), is this a concern?
  • Security concerns from @gabemontero (ensure we don't expose too much of the container spec) and @SaschaSchwarze0 (allow modifying the user mode in the runtime Dockerfile).

Review comments on pkg/controller/build/build_controller.go, pkg/controller/buildrun/buildrun_controller.go, pkg/controller/buildrun/runtime_image.go, and docs/proposals/runtime-image.md (resolved)
sbose78 (Member) commented Jul 13, 2020

In my opinion it should be a hard-coded image, controllable through an environment variable and defaulting to something like busybox or registry.access.redhat.com/ubi8/ubi-minimal.

I agree, we should use the absolute minimal image. Builder images can be 'heavy'.

To be a good community citizen, we should probably use the busybox-ish image to achieve this, since spec.builderImage is optional?

Recommendations on making this configurable:
Read this image from an env var, and potentially align the name of the env var with https://docs.openshift.com/container-platform/4.3/operators/operator_sdk/osdk-generating-csvs.html#olm-enabling-operator-for-restricted-network_osdk-generating-csvs so that we can work with restricted environments with ease.

otaviof (Member, Author) commented Jul 14, 2020

To be a good community citizen, we should probably use the busybox-ish image to achieve this, since spec.builderImage is optional?

@sbose78 yep, I think that's a good path ahead; after all, busybox:latest is quite small (1.22 MB).

otaviof (Member, Author) commented Jul 14, 2020

Community Meeting Notes:

* Review `build.spec.builderImage` usage during `build.spec.runtime`, it must not be empty when using `buildpacks-v3` where the builder image is optional;

@SaschaSchwarze0 thanks for pointing that out; indeed, when .spec.BuilderImage is not defined it was causing an NPE. I fixed this by defaulting to busybox:latest when the builder image is not defined. Please see the latest commit. (cc @adambkaplan, @gabemontero)

SaschaSchwarze0 (Member):

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jul 14, 2020
gabemontero (Member):

it seems like it is time to squash commits where appropriate and approve/merge

any last thoughts @sbose78 while @otaviof updates us on that?

sbose78 (Member) commented Jul 14, 2020

I'm good @gabemontero . Thank you @SaschaSchwarze0 and @otaviof !

sbose78 (Member) commented Jul 14, 2020

/lgtm

otaviof (Member, Author) commented Jul 14, 2020

it seems like it is time to squash commits where appropriate and approve/merge

any last thoughts @sbose78 while @otaviof updates us on that?

@gabemontero I've squashed the commits this morning; now it's only 5 commits, organized per section of the project. Do you mean squashing into a single commit?

gabemontero (Member):

it seems like it is time to squash commits where appropriate and approve/merge
any last thoughts @sbose78 while @otaviof updates us on that?

@gabemontero I've squashed the commits this morning; now it's only 5 commits, organized per section of the project. Do you mean squashing into a single commit?

yeah, OCP generally likes to minimize commits, but I don't think we need to strictly enforce that in this repo

if you have already squashed some and are happy with the commit structure ...

/approve

gabemontero (Member):

[APPROVALNOTIFIER] This PR is NOT APPROVED

oh well ... I don't have the authority :-)

try adding /approve yourself @otaviof

This pull-request has been approved by: gabemontero
To complete the pull request process, please assign otaviof
You can assign the PR to them by writing /assign @otaviof in a comment when ready.

The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:

* **[OWNERS](https://github.com/redhat-developer/build/blob/master/OWNERS)**

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

otaviof (Member, Author) commented Jul 14, 2020

/approve

@openshift-ci-robot:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: gabemontero, otaviof

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 14, 2020
@openshift-merge-robot openshift-merge-robot merged commit 1d51fd6 into shipwright-io:master Jul 14, 2020
sbose78 (Member) commented Jul 14, 2020

I typically try and ensure we have a meaningful single commit message, but upon auto-squash that doesn't quite happen :)
