Support an OCI Image Builder other than Docker #564

Closed
jorgemoralespou opened this issue Apr 6, 2020 · 36 comments
Labels
status/ready Issue ready to be worked on. type/enhancement Issue that requests a new feature or improvement. type/research Issue intended to be exploratory.

Comments

@jorgemoralespou

Description

Many users no longer have Docker installed on their systems, because there are other alternatives that let them create containers in a secure way, as they typically run these containers on remote systems (e.g. Kubernetes clusters).
Some of these alternatives are:

Pack, although not depending on docker build (per this comment), does require Docker to be running on your machine.

When you want to run pack as part of your CI/CD process, or for any other purpose (e.g. learning), you might run it in a container on a Kubernetes platform; in order to run, it needs the Docker socket of the host machine exposed to it, making the whole platform insecure.

Building containers should be a secure process that does not compromise your system in any possible way.

Proposed solution

Provide a mechanism to replace, or offer an alternative to, using Docker to build images.

Describe alternatives you've considered

Using kpack on the platform could be an alternative, although AFAIK it has the same security considerations (or lack of security).

@jorgemoralespou jorgemoralespou added status/triage Issue or PR that requires contributor attention. type/enhancement Issue that requests a new feature or improvement. labels Apr 6, 2020
@abitrolly
Contributor

It is possible to add support for podman by running it as a service with podman system service and adding detection of the podman socket (containers/podman#4499 (comment)). The socket is meant to be Docker-compatible.

However, for other builders, such as LXD, a more generalized interface could be used. Canonical maintains https://snapcraft.io/multipass as a generalized interface for running containers and VMs.
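
For what it's worth, a minimal sketch of what that detection could look like from a client's perspective (this assumes podman's Docker-compatible API answers the standard /_ping endpoint on the rootless socket path from the linked issue):

# start podman's Docker-compatible API service on its default rootless socket
podman system service &

# probe the socket the way any Docker client would; an "OK" response means a
# Docker-compatible daemon is listening there
curl --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://localhost/_ping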

@sclevine
Member

sclevine commented Apr 6, 2020

Pack, although not depending on docker build (per this comment), does require Docker to be running on your machine.

When you want to run pack as part of your CI/CD process, or for any other purpose (e.g. learning), you might run it in a container on a Kubernetes platform; in order to run, it needs the Docker socket of the host machine exposed to it, making the whole platform insecure.

Building containers should be a secure process that does not compromise your system in any possible way.

The pack CLI is intended to be a tool for running Cloud Native Buildpack builds on a local workstation that doesn't natively support containers (often Windows or macOS). While it seems reasonable to support other local container runtimes besides Docker, CI platforms that support running container images are probably better off running the lifecycle directly, without needing a nested container runtime. (This doesn't require any privileges or capabilities.)

Here's a complex example of this for Tekton:
https://github.com/buildpacks/tekton-catalog/blob/master/buildpacks/buildpacks-v3.yaml

Relevantly, we've recently introduced a single lifecycle command that threads all of those steps together:
https://github.com/buildpacks/rfcs/blob/master/text/0026-lifecycle-all.md

When that functionality is documented, it should make running CNB builds on CI platforms much easier.
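
For context, a rough sketch of what running the lifecycle directly can look like once that single command (the creator) is available — run inside a CNB builder image on the CI platform; the registry and image name below are placeholders, and the exact flags should be checked against the lifecycle documentation:

# inside a builder container (it ships the lifecycle under /cnb/lifecycle);
# registry credentials come from ~/.docker/config.json, no Docker socket is needed
/cnb/lifecycle/creator -app=. registry.example.com/myteam/myapp:latest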

Using kpack on the platform could be an alternative, although AFAIK it has the same security considerations (or lack of security).

Kpack uses the lifecycle directly, and doesn't depend on a Docker daemon or expose a Docker socket. Builds run in unprivileged containers that are fully isolated from registry credentials.

@sclevine
Member

sclevine commented Apr 6, 2020

Another way of describing this is: the lifecycle is comparable to kaniko or other unprivileged image building tools. The pack CLI is glue code that makes it easy to use lifecycle with the Docker daemon. We could expand the functionality of the pack CLI so that it acts as glue code for other container runtimes, but that glue code is only necessary when containers are not natively accessible already. Maybe that's a good idea, but I'd like to hear concrete use cases first.

@abitrolly
Contributor

@sclevine building random projects on a local workstation without containerization has a risk of killing the system or build environment. If pack does this, it is insecure by design.

@sclevine
Member

sclevine commented Apr 6, 2020

building random projects on a local workstation without containerization has a risk of killing the system or build environment. If pack does this, it is insecure by design.

This is not what I'm suggesting (or permitted by the CNB specification). Running the lifecycle directly is only supported on CI platforms that support running container images (such as a CNB builder image with the lifecycle binary).

I'm suggesting that supporting another container runtime would only benefit desktop Linux users and users of non-container CI systems. That doesn't match the requested use case:

When you want to run pack as part of your CI/CD process, or for any other purpose (e.g. learning), you might run it in a container on a Kubernetes platform; in order to run, it needs the Docker socket of the host machine exposed to it, making the whole platform insecure.

@abitrolly
Contributor

@sclevine sorry, but your assumption that supporting another container runtime would benefit users of non-container CI systems contains a logical error, to me. I also don't see the connection to the Linux desktop, which is about having Gnome or another WM.

If you want to say that, as a DevOps engineer, I should not have the ability to use buildpacks on my Linux machine, and should only do this in a self-hosted or vendor cloud, then I disagree. The system should be simple enough to troubleshoot in parts, gradually.

The last part of the requested use case mentions my system explicitly.

Building containers should be a secure process that does not compromise your system in any possible way.

@sclevine
Member

sclevine commented Apr 7, 2020

There are currently two ways to use the tooling provided by the Cloud Native Buildpacks project:

  1. With the pack CLI. This uses Docker (purely as a container runtime, without using docker build) to run a builder image (which contains the lifecycle). Docker is available and easy to install on macOS, Windows, and Linux.

  2. Without the pack CLI, by executing a builder image directly on a platform that can already run containers (like k8s). Tekton, kpack, and Concourse use this strategy. It does not require Docker or privileged containers.

While I imagine that we would welcome contributions to the pack CLI to add support for alternative container runtimes (like podman), those alternative container runtimes aren't easy to use on macOS or Windows. Additionally, platforms that support running container images natively (like k8s) wouldn't benefit from it, because they can already do what pack does (run builder images). Running pack inside of a container (which creates nested containers) is unnecessary and decreases performance. The lifecycle can run directly in that container instead.

Therefore, as far as I can tell, only Linux users who don't want to build using Docker or K8s would benefit from support for additional runtimes in the pack CLI. I'm not opposed to it, but I'm also not about to implement it myself 😄

@abitrolly
Contributor

Yes, I am interested in having the pack CLI in option 1 allow more secure alternatives to Docker.

With the pack CLI I can reuse CI/CD container pipelines that are simpler to maintain than verbose k8s configs for every step.

@jorgemoralespou
Author

While I imagine that we would welcome contributions to the pack CLI to add support for alternative container runtimes (like podman), those alternative container runtimes aren't easy to use on macOS or Windows.
@sclevine Well, VMware is working on a replacement for Docker Desktop (Windows/Mac) called Nautilus (https://vmwarefusion.github.io/). In essence, what you're saying is that any user that decides to adopt VMware's technology for containers/VMs on the Desktop will not be able to work with pack or develop a CNB locally? I hope that Nautilus provides a docker.socket and transparent connection to it from your local laptop :-(

@sclevine
Member

sclevine commented Apr 7, 2020

Well, VMware is working on a replacement for Docker Desktop (Windows/Mac) called Nautilus (https://vmwarefusion.github.io/). In essence, what you're saying is that any user that decides to adopt VMware's technology for containers/VMs on the Desktop will not be able to work with pack or develop a CNB locally?

While I can't speak for the other core team members, I imagine that we would welcome contributions to make the pack CLI compatible with Nautilus (or, as I mentioned, other alternative container runtimes).

To be clear, given that pack's only job is to interface with the container runtime and run the lifecycle, there is no way to implement it generically to support any container runtime. The lifecycle is the generic component. So support for, e.g., Nautilus would need to be added to pack explicitly. Are you interested in submitting a PR for it?

With the pack CLI I can reuse CI/CD container pipelines that are simpler to maintain than verbose k8s configs for every step.

I don't believe that setting up a CI/CD pipeline that uses pack to keep containers up-to-date is easier than using kpack. You would need to monitor for changes to a number of upstream resources (buildpacks, stack run images, stack build images, source code). A simple pipeline that uses the pack CLI might beat most Dockerfile-based pipelines, but you'd lose the stronger security guarantee that, e.g., kpack provides.

@zmackie
Contributor

zmackie commented Apr 7, 2020

@zmackie podman provides the Docker API over the /run/user/$UID/podman/podman.sock socket. But I cannot find where in the pack source this path can be detected, or set explicitly through a config.

$ pack build myapp --builder heroku/buildpacks:18 -v
Pulling image index.docker.io/heroku/buildpacks:18
ERROR: failed to fetch builder image 'index.docker.io/heroku/buildpacks:18': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

#413 (comment)

@zmackie zmackie closed this as completed Apr 7, 2020
@zmackie
Contributor

zmackie commented Apr 7, 2020

whoops!

@zmackie zmackie reopened this Apr 7, 2020
@sclevine
Member

sclevine commented Apr 7, 2020

@abitrolly I think setting DOCKER_HOST to the podman socket location should just work, assuming podman provides the same daemon API as Docker. I don't think anyone has tested this though.

@jorgemoralespou
Author

We're in the process of testing this. cc @GrahamDumpleton
Will update here with our findings.

@GrahamDumpleton

It failed is all I can say:

# pack build sample-java-app --path sample-java-app
44cc64492fb6a6d78d3e6d087f380ae6e479aa1b2c79823b32cdacfcc2f3d715: pulling image () fERROR: invalid builder 'cloudfoundry/cnb:bionic': builder index.docker.io/cloudfoundry/cnb:bionic missing label io.buildpacks.builder.metadata -- try recreating builder

It is hard for me to take it any further at this point since I don't understand enough about either the podman socket support or the process by which pack works.

The fact that I am doing this from inside of a container may also be complicating things. It really should be tested directly on a full Fedora operating system initially.

@jromero
Member

jromero commented Apr 7, 2020

@GrahamDumpleton can you try specifying a builder?

pack build sample-java-app -B cnbs/sample-builder:alpine --path sample-java-app

@GrahamDumpleton

The builder was already set previously using:

# pack set-default-builder cloudfoundry/cnb:bionic
Builder cloudfoundry/cnb:bionic is now the default builder

Using a different builder on the command line makes no difference.

# pack build sample-java-app -B cnbs/sample-builder:alpine --path sample-java-app
93b31bfcf2537f44dad74107cd5ad9beae36e1e769f653c30847bb045bb85e12: pulling image () fERROR: invalid builder 'cnbs/sample-builder:alpine': builder index.docker.io/cnbs/sample-builder:alpine missing label io.buildpacks.builder.metadata -- try recreating builder

@abitrolly
Contributor

@sclevine specifying DOCKER_HOST works for connecting to podman.

podman system service &
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
pack build myapp --builder heroku/heroku-buildpack-ruby -v

However, pack then fails with the same error @GrahamDumpleton mentioned.

$ pack build myapp --builder heroku/buildpacks:18 -v
Pulling image index.docker.io/heroku/buildpacks:18
8d10618c5b3b5b560c75e0353572b843e3a0d958eb3c6ff452519a7f7be5ea55: pulling image () from docker.io/heroku/buildpacks:18 
ERROR: invalid builder 'heroku/buildpacks:18': builder index.docker.io/heroku/buildpacks:18 missing label io.buildpacks.builder.metadata -- try recreating builder

@GrahamDumpleton

I haven't got the environment set up to check again myself, but if you run podman images, is the builder image actually there? If yes, can you inspect it to see what labels it does have set?

@natalieparellano natalieparellano added type/research Issue intended to be exploratory. status/ready Issue ready to be worked on. and removed status/triage Issue or PR that requires contributor attention. labels Apr 10, 2020
@abitrolly
Contributor

@GrahamDumpleton the image is there. Here are the labels.

$ podman inspect docker.io/heroku/buildpacks:18
...
        "Labels": {
            "io.buildpacks.builder.metadata": "{\"description\":\"\",\"buildpacks\":[{\"id\":\"heroku/maven\",\"version\":\"0.1\"},{\"id\":\"heroku/jvm\",\"version\":\"0.1\"},{\"id\":\"heroku/ruby\",\"version\":\"0.0.1\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\"},{\"id\":\"heroku/python\",\"version\":\"0.1.2\"},{\"id\":\"heroku/gradle\",\"version\":\"0.1.2\"},{\"id\":\"heroku/scala\",\"version\":\"0.1.2\"},{\"id\":\"heroku/php\",\"version\":\"0.1.2\"},{\"id\":\"heroku/go\",\"version\":\"0.1.2\"},{\"id\":\"heroku/nodejs-engine\",\"version\":\"0.4.3\"},{\"id\":\"heroku/nodejs-npm\",\"version\":\"0.1.4\"},{\"id\":\"heroku/nodejs-yarn\",\"version\":\"0.0.1\"}],\"stack\":{\"runImage\":{\"image\":\"heroku/pack:18\",\"mirrors\":null}},\"lifecycle\":{\"version\":\"0.6.1\",\"api\":{\"buildpack\":\"0.2\",\"platform\":\"0.2\"}},\"createdBy\":{\"name\":\"Pack CLI\",\"version\":\"v0.9.0 (git sha: d42c384a39f367588f2653f2a99702db910e5ad7)\"}}",
            "io.buildpacks.buildpack.layers": "{\"heroku/go\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:8728779d674e06126ecede7af51a209d9d2c72577e71225acceda18e49c5515d\"}},\"heroku/gradle\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:342156a961934502ac4881585091c1538f1b9f0ad4d1df1ff8e2b76ddb62c4ce\"}},\"heroku/jvm\":{\"0.1\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"},{\"id\":\"io.buildpacks.stacks.bionic\"}],\"layerDiffID\":\"sha256:510b0e4d3fe6d3d68fc862ed098eed2c1042cc1e3348c81393acd4119f1ed381\"}},\"heroku/maven\":{\"0.1\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:fa0249cc869733cbf2ecbc9a266ed818c7b440f59f2e1cabef8a4f514a819126\"}},\"heroku/nodejs-engine\":{\"0.4.3\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"},{\"id\":\"io.buildpacks.stacks.bionic\"}],\"layerDiffID\":\"sha256:445c10f941efec2279767c320afb497e7ef85dd5c919a40ecb9d8bcac9826009\"}},\"heroku/nodejs-npm\":{\"0.1.4\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"},{\"id\":\"io.buildpacks.stacks.bionic\"}],\"layerDiffID\":\"sha256:6db38654c28768fd61d817dc4c5dd0843390d3995b8a80da761c5de7d86fc2e9\"}},\"heroku/nodejs-yarn\":{\"0.0.1\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:a526227220f571466890bbfc2fb7720587251339b50a574fe9b2f1e43b99a6e8\"}},\"heroku/php\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:3682dbc263721ff0594905ac825447dcc4df98835f87182ad8670677340c8f04\"}},\"heroku/procfile\":{\"0.5\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"},{\"id\":\"io.buildpacks.stacks.bionic\"}],\"layerDiffID\":\"sha256:630571793248869ed92fe7d0b6afc055204fd634cd627f3318f4bdbc9627ceb7\"}},\"heroku/python\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:6a5e32362fc1d3c4a493fcd0ec2ad09256bebdf667d1d8147ee806c7a522112a\"}},\"heroku/ruby\":{\"0.0.1\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:f4220c9fb652014fde63510985106325d4cffc74a9355d2af162fbde9c6da4a2\"}},\"heroku/scala\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:233d963fddf3390e58ed55220d6f5420976f7e39ba2a29b1f759171855544d80\"}}}",
            "io.buildpacks.buildpack.order": "[{\"group\":[{\"id\":\"heroku/ruby\",\"version\":\"0.0.1\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/python\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/jvm\",\"version\":\"0.1\"},{\"id\":\"heroku/maven\",\"version\":\"0.1\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/gradle\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/scala\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/php\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/go\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/nodejs-engine\",\"version\":\"0.4.3\"},{\"id\":\"heroku/nodejs-yarn\",\"version\":\"0.0.1\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/nodejs-engine\",\"version\":\"0.4.3\"},{\"id\":\"heroku/nodejs-npm\",\"version\":\"0.1.4\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]}]",
            "io.buildpacks.stack.id": "heroku-18",
            "io.buildpacks.stack.mixins": "null"
        },
...

@abitrolly
Contributor

I was able to find out which requests are being sent to podman and repeat them with curl. The bug is most likely in podman 1.8.2, whose Docker API doesn't return labels the way the podman inspect command does.

$ podman system service --log-level debug
...
DEBU[0015] APIHandler -- Method: POST URL: /v1.38/images/create?fromImage=heroku%2Fbuildpacks&tag=18 (conn 0/0) 
DEBU[0015] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/heroku/buildpacks:18" 
DEBU[0015] APIHandler -- Method: GET URL: /v1.38/images/index.docker.io/heroku/buildpacks:18/json (conn 0/1) 
DEBU[0015] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/heroku/buildpacks:18" 
DEBU[0015] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]@c533962c38b1b71b08ff03d07119d9d63f82d03192076016743cdde9d79fbd70" 
DEBU[0015] exporting opaque data as blob "sha256:c533962c38b1b71b08ff03d07119d9d63f82d03192076016743cdde9d79fbd70" 
DEBU[0020] APIServer.Shutdown called 2020-04-19 07:46:17.331984794 +0300 +03 m=+20.612378751, conn 0/2 
$ curl -sS --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http:/v1.38/images/index.docker.io/heroku/buildpacks:18/json | jq . | grep Labels
    "Labels": null
    "Labels": null

@ShadowJonathan

+1 to this, I was already thinking about how pack could be integrated into my soon-to-be Kubernetes-run CI system, and other Kubernetes-based CD systems.

@jspawar

jspawar commented May 28, 2020

@sclevine I have also been hoping to run pack in contexts without Docker daemons, specifically CI, because spinning up an entire cluster with kpack and configuring that entire workflow seems relatively heavy-handed compared to just running a Concourse task that does pack build.

Without the pack CLI, by executing a builder image directly on platform that can already run containers (like k8s). Tekton, kpack, and concourse use this strategy. It does not require Docker or privileged containers.

I have been poking around with trying to do this with the cloudfoundry/cnb:bionic image; however, I noticed there is one significant omission: the exporter from the buildpack lifecycle only allows exporting to a Docker daemon or directly uploading to a remote registry: https://github.com/buildpacks/lifecycle/blob/5be3695ca4f67a7b512b1962407dd283146abce3/cmd/lifecycle/exporter.go#L176-L191.

The latter is the desired end result of course; however, this would break a lot of Concourse flows since we lose the ability to track resource versioning via explicit resources that we put to perform the upload.

I would imagine other [CI] users would like to have the option to also just simply export to a tarball. Do you have any suggestions for handling this then? Or should I try something else that is to the same effect as "executing a builder image directly"?

I can also open an issue about this potential feature request in the lifecycle repo if you'd like.

@jorgemoralespou
Author

Not only that, but eventually not requiring Docker could also help improve the lifecycle in terms of build speed, artifact caching, rebase, etc.

jspawar pushed a commit to cloudfoundry/capi-dockerfiles that referenced this issue May 28, 2020
- copied this from the BOSH team:
  - https://github.com/cloudfoundry/bosh/blob/master/ci/old-docker/main-bosh-docker
- ideally should be able to remove everything besides downloading the
`pack` CLI after the following issue in the `pack` repo is resolved:
  - buildpacks/pack#564

[#172847711]
jspawar pushed a commit to cloudfoundry/capi-ci that referenced this issue May 28, 2020
- copied this from the BOSH team:
  - https://github.com/cloudfoundry/bosh/blob/master/ci/old-docker/main-bosh-docker
- ideally should be able to remove everything besides downloading the
`pack` CLI after the following issue in the `pack` repo is resolved:
  - buildpacks/pack#564

[#172847711]
@sclevine
Member

@jspawar I think we would welcome a contribution to the lifecycle that allows exporting an OCI image to tar format on disk. You could simulate this right now by spinning up a local registry in the container and pulling the image to disk, but I agree that it would be a nice feature when you're using the builder directly in concourse / other CI.

Just FYI, we've made the workflow you're describing much easier recently with the lifecycle creator binary, which runs through all the steps automatically without needing the ephemeral data files.

@jspawar kpack is a Docker-less CNB platform for k8s.

@jorgemoralespou The lifecycle already runs efficiently without Docker on platforms that natively provide a container runtime. But like I said, I think we'd be happy to merge support for podman, etc. to support VM-based CI / Linux workstation use cases. 😄
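
Combining the local-registry workaround with the creator, a rough sketch of what exporting to a tarball could look like today (assuming a registry:2 container is reachable on localhost:5000 from the build step and skopeo is available; whether the exporter accepts a plain-HTTP registry may depend on the lifecycle version):

# push the built image to a throwaway local registry instead of a real one
/cnb/lifecycle/creator -app=. localhost:5000/myapp:latest

# then pull it back out of the registry as an archive on disk
skopeo copy --src-tls-verify=false docker://localhost:5000/myapp:latest oci-archive:myapp.tar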

@abitrolly
Contributor

@sclevine a sequence diagram with the API calls employed in building an image would help to estimate the effort required to add podman support, instead of waiting for a full Docker API compatibility layer to land in podman.

@sclevine
Member

CC: @jromero

@fatherlinux

All, FYI, we have an issue open on the Podman side. I just tested with the latest version of Podman in Fedora (podman 2.1.1) and the lack of an archive method is still blocking us. But I wanted to say that this is on our radar, and building up and stabilizing the Docker-compatible interface is high on our priority list. I can't commit to a timeline, but I'm investigating adding the Pack CLI to RHEL 8/9, so we'll be doing more research over the coming months. @jorgemoralespou thanks for submitting this issue. We are interested from our side.

containers/podman#6050

@jromero
Member

jromero commented Aug 10, 2021

Given that this issue was a little broad to begin with, I'm going to close it in favor of what did come out of it. Pack now supports podman via the docker socket interface. Any alternative to Docker that supports the docker socket interface should also work.
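
For anyone landing here later, the supported path looks roughly like this (a sketch; the builder name just mirrors the one used earlier in this thread, and the exact steps for your distribution are in the pack documentation):

# expose podman's Docker-compatible socket (rootless, via systemd socket activation)
systemctl --user enable --now podman.socket

# point pack (or any other Docker API client) at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

pack build myapp --builder heroku/buildpacks:18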

@jromero jromero closed this as completed Aug 10, 2021
@jonashackt

jonashackt commented Oct 14, 2021

Maybe others also come along here looking for a solution to the initially mentioned problem:

Many users no longer have Docker installed on their systems, because there are other alternatives that let them create containers in a secure way, as they typically run these containers on remote systems (e.g. Kubernetes clusters). [...] Pack, although not depending on docker build [...] does require Docker to be running on your machine.

We have a GitLab CI connected to an EKS / K8s cluster with Kubernetes executors/runners, where we don't have Docker inside the build pods/containers - nor do we want to mount the Docker socket /var/run/docker.sock or use the Docker-in-Docker (dind) approach, for security reasons. We desperately searched for a solution, but only ever had the quote from this comment in mind:

If you're looking to build images in CI (not locally), I'd encourage you to use the lifecycle directly for that, so that you don't need Docker. Here's an example: https://github.com/tektoncd/catalog/blob/master/buildpacks/buildpacks-v3.yaml

So here's our interpretation/solution to the problem, simply using the "lifecycle directly" (here's the full story on Stack Overflow), in our .gitlab-ci.yml (it should work quite similarly on other CI systems):

image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest

Hope this is of help 😃

@abitrolly
Contributor

An awesome writeup at SO. Deserves to be a blog post.

@jonashackt

jonashackt commented Oct 14, 2021

Great idea, will write one 😉 Done: https://blog.codecentric.de/en/2021/10/gitlab-ci-paketo-buildpacks/

@eimarfandino

eimarfandino commented Apr 22, 2023

Great blog @jonashackt! I implemented it exactly as you describe, but sadly I am getting this with my Spring Boot application:

ERROR: failed to launch: determine start command: process type web was not found

@eimarfandino

Your solution @jonashackt works really nicely. It gets a bit trickier when you need to pass Maven build arguments. I managed to add the Maven arguments like this:

`- echo "-Dmaven.test.skip=true --no-transfer-progress package spring-boot:repackage" >> platform/env/BP_MAVEN_BUILD_ARGUMENTS`
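
More generally, the lifecycle reads build-time environment variables from files under the platform directory's env/ folder, so other BP_* variables can be passed the same way (a sketch, assuming the creator's -platform flag pointing at that directory; the variable name below is purely illustrative):

# one file per variable; the file name is the variable name, the file content is its value
mkdir -p platform/env
printf '%s' 'illustrative-value' > platform/env/BP_EXAMPLE_VARIABLE
/cnb/lifecycle/creator -app=. -platform=platform $CI_REGISTRY_IMAGE:latest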

@nezygis

nezygis commented Jul 13, 2023

@jonashackt hey, thanks a lot for the solution. is there a way to pass BP env variables to the build?

@eimarfandino

@jonashackt hey, thanks a lot for the solution. is there a way to pass BP env variables to the build?

Did you see my reply? I posted how to pass an env var, but I have to tell you I am afraid it doesn't work with all the variables.
