Figure out how to support docker build on Kubernetes #1806

Closed
bgrant0607 opened this issue Oct 15, 2014 · 42 comments

Comments

@bgrant0607 (Member) commented Oct 15, 2014

Touched upon in #503, #1294, #1567, and elsewhere.

Privileges are currently required to do docker build. Docker-in-Docker doesn't really work, so to do this on Kubernetes one has to mount the host Docker socket into their container, which is a big security and abstraction problem.
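
For concreteness, a minimal sketch of the host-socket workaround being described (the pod name, image, and build command here are illustrative, not a recommendation):

```yaml
# Illustrative only: this is the pattern under discussion, not a recommendation.
# Mounting the host's Docker socket effectively hands the pod root on the node.
apiVersion: v1
kind: Pod
metadata:
  name: docker-build-example   # hypothetical name
spec:
  containers:
  - name: builder
    image: docker:latest       # any image containing a docker CLI
    command: ["docker", "build", "-t", "example:latest", "/workspace"]
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
```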

@smarterclayton (Contributor) commented Oct 20, 2014

  • There has to be some way to control the elevated privileges that a docker socket gives you (which means a proxy on the API today or some integration with Docker authz in the future).
  • It should be possible to grant access to the socket to a non-root user instead of requiring a privileged container - that user has to be preallocated
  • It's likely that the core system should not be too coupled to this mechanism, because different ways of doing builds may require different capabilities (what if you want to push at the end of the build? etc).

Given those constraints, to me one way to implement this may be as a special extension volume type - mount Docker with xyz conditions on use.

@csrwng @ncdc fyi on this umbrella issue

@erictune (Member) commented Oct 20, 2014

Consolidated use cases:

  1. Pods that build docker images and push them to some repository. Source: #503
  2. Use a Dockerfile in place of a docker image identifier in the container spec of a Pod. Source: #1294
  3. Building should be a service that runs with a role separate from any real user, and which can somehow identify whether images were built according to certain (possibly deployment-specific) rules (e.g. all commits reviewed, built with known compiler version, etc). Source: @erictune

@erictune (Member) commented Oct 20, 2014

An alternative to what @smarterclayton said above is to treat all builds as happening via a build service, which is itself trusted at the same level as root-on-kubelet. A build service is basically a pod which listens for build requests from some source, executes them using docker build, and pushes the result to some repository, or writes error messages somewhere else accessible.

@erictune (Member) commented Oct 20, 2014

Many organizations have the requirement that a single user cannot author code and build it and push it to production. I believe this is an SOx requirement, or a common interpretation of SOx.

So that implies that there has to be a different user who does either the build or the push step. The push step needs to rely on the integrity of the build step. So the build step has to be done by a trusted user (service account).

So then the questions are:

  • given that there is a build service for "production builds", should this service also be used for all builds?
    • faster response from having build pods always ready to go
    • better utilization of standing build pods shared by multiple users
    • remote concern about privilege escalation through compiler bugs or ad-hoc build steps
  • is it okay to run the build role account in a privileged pod?
    • building production binaries is root-equivalent, for practical deployments.

@erictune (Member) commented Oct 20, 2014

A service might seem heavyweight, though, for smaller organizations. It could also make the build process harder to debug, given that there is a proxy and that the user may not be able to ssh into the build user's pod and poke around.

@smarterclayton (Contributor) commented Oct 20, 2014

The benefits of per pod builds are that you can do resource and security isolation (in theory, eventually docker build will be secure, although it's a ways out from that) the same way that you run users. It would be ideal if docker build could be run via a single process exec (instead of the daemon) that has higher level privileges.

Today, acknowledging that the docker daemon is essentially a build service, it should still be possible to run docker builds in such a way that the daemon can nest / inherit the security / resource constraints of the invoker so that builds can be contained.

@smarterclayton (Contributor) commented Oct 20, 2014

is it okay to run the build role account in a privileged pod

It is today; it should be an avenue of improvement in the future (a container execution and a build execution should be equivalently securable).

@drocsid commented Nov 17, 2015

+1 This issue is more than a year old. I'm trying to work around this today by mounting the docker binary and required libraries into my pod. I'm not sure if it's going to work, and I'm pretty certain I'm breaking security principles. However, I'm not sure Docker's suggestions are any better:

http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/

@smarterclayton (Contributor) commented Nov 17, 2015

Today we run locked down pods that have access to hostPath bind mounted docker sockets, where the build controller (that creates the build pod) has elevated privileges to do so but the user does not. When PodSecurityPolicy lands, that'll allow a similar distinction. I don't think there's a better way today - docker-in-docker can be painful, but is more secure than accessing the host docker.
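
When PodSecurityPolicy did land, the kind of distinction described here could be expressed roughly as follows. This is a hedged sketch only: the policy name is illustrative, and the exact API group and fields depend on the cluster version.

```yaml
# Sketch: a policy that permits mounting only the Docker socket hostPath.
# Bind it (via the RBAC "use" verb on this policy) to the build controller's
# service account only; ordinary users get a more restrictive policy.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: docker-socket-builder   # hypothetical name
spec:
  privileged: false
  allowPrivilegeEscalation: false
  volumes:
  - hostPath
  - emptyDir
  - secret
  allowedHostPaths:
  - pathPrefix: /var/run/docker.sock
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```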


@deepak7093 commented Mar 3, 2017

Hey,

Is there any workaround to pass a Dockerfile to kubectl instead of a Docker image?

@bytesandwich commented Mar 17, 2017

It seems like mounting the docker socket makes the most sense as the solution to run builds in k8s. If I mount the docker socket and run docker build and then docker push from a pod, I end up with images taking up disk and inodes, and I want to make sure these images won't cause capacity problems. I'm trying to resolve two questions:

  1. Will kubelet include these images in the 'all unused images' set described in the docs when it tries to free up space under disk pressure?

    Looking at image_gc_manager.go#L190, it seems like all images the docker daemon knows about, regardless of their origin (k8s or otherwise), are garbage-collectible. If this is correct, I don't think I need to do anything special to clean up these images.

  2. I'm also wondering whether I need to do something to make kubelet aware that an image has been built but not yet pushed, so kubelet won't try to delete it yet.

    My understanding is that deleting an image while it's being pushed may cause a "docker tag does not exist" error and an exit 1 from docker push. Looking at image_gc_manager.go#L202, it seems like the current kubelet would delete the image while it's being pushed. If this understanding is correct, my builds may fail because of scheduling in k8s, which may happen for other reasons anyway (see the kubelet GC sketch after this list).
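
For reference, the kubelet's image-GC thresholds and minimum unused-image age are tunable. A minimal sketch using the KubeletConfiguration file format (in older clusters the equivalent --image-gc-high-threshold, --image-gc-low-threshold, and --minimum-image-ttl-duration kubelet flags apply); the values shown are the documented defaults:

```yaml
# Sketch of the kubelet image-GC knobs relevant to the questions above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85   # start image GC when disk usage exceeds this
imageGCLowThresholdPercent: 80    # collect until disk usage is back below this
imageMinimumGCAge: 2m0s           # unused images younger than this are not collected
```

Note that imageMinimumGCAge only delays collection of unused images; it does not pin a specific freshly built image, so it only partially addresses question 2.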

@smarterclayton (Contributor) commented Mar 23, 2017

We've discussed adding that in a few places - a way to ask for both a container and image to be "preserved" for a reasonable period of time to allow commit and push to have time to work. Ideally that could be done without having to define an API on the master.

@mwuertinger commented May 15, 2017

Beware that mounting the docker socket completely bypasses Kubernetes. That means that the scheduler is not aware of the stuff that is running there which might degrade your cluster reliability or performance. Also note that mounting the docker socket gives your pods full root privileges on the nodes, thus eliminating all of the security benefits of containers.

I recommend taking a look at acbuild, which is a stand-alone tool for creating container images. Unfortunately there are still a lot of issues regarding docker compatibility; see this issue. I built a proof of concept that works with an older version of skopeo and demonstrates that it can work. Once the OCI spec is final, everything will get a lot easier in that regard.

@abhinavdas commented Oct 11, 2017

To isolate the container that needs to do docker build/run from the dind daemon, we use a sidecar container for the dind daemon. Only the sidecar container is privileged. Since the docker API for the dind daemon is exposed via a TCP socket, we have considered implementing a proxy layer that forbids the main container from running a container that asks for privilege escalation. Another aspect that must be considered is preventing the main container from execing into the privileged sidecar container. I am not sure if this is possible with RBAC/webhook since main and sidecar are in the same namespace.

Thoughts?
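
For readers unfamiliar with the pattern, a minimal sketch of the privileged dind sidecar described above. Image tags, the build command, and the registry are illustrative; note that newer docker:dind images enable TLS by default unless DOCKER_TLS_CERTDIR is set to an empty value:

```yaml
# Sketch: only the dind sidecar is privileged; the main container talks to it
# over localhost TCP and never mounts the host's Docker socket.
apiVersion: v1
kind: Pod
metadata:
  name: dind-build-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: builder                 # unprivileged main container, docker CLI only
    image: docker:latest
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375
    command: ["sh", "-c", "docker build -t example:latest /workspace && docker push example:latest"]
  - name: dind                    # privileged sidecar running its own dockerd
    image: docker:dind
    securityContext:
      privileged: true
    env:
    - name: DOCKER_TLS_CERTDIR    # disable TLS for this localhost-only sketch
      value: ""
```

The proxy layer described in the comment would sit between the builder container and tcp://localhost:2375.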

@discordianfish (Contributor) commented Nov 12, 2017

I don't think Kubernetes should wrap or abstract the Docker build API/concepts. Instead I would prefer to extend Kubernetes to allow committing containers and storing them in shared storage (Volumes?). This could be used to build a Kubernetes-native builder (which could support Dockerfiles) that can support any container runtime.

I'm sure others had similar thoughts about that. Was this discussed somewhere or is there even an issue for that?

@zoobab commented Dec 8, 2017

I rewrote DIND with a simple SSHD+Dockerd on Alpine:

https://github.com/zoobab/dind

I am still struggling to get it running in privileged mode inside Kubernetes (minikube for now), so any tips would be appreciated.

I am currently adding a change to entry.sh to quit if SSHD or Dockerd is not running.

@dminkovsky commented Dec 19, 2017

I wrote a tool that lets you specify Google Cloud Container Builder build steps directly inside your Kubernetes manifests: https://github.com/dminkovsky/kube-cloud-build. It's been a great help to me!

@mitar (Contributor) commented Jan 7, 2018

Could the --no-new-privileges flag of dockerd help? https://www.projectatomic.io/blog/2016/03/no-new-privs-docker/

@krzyzacy (Member) commented Feb 5, 2018

@PeterKneale commented Feb 19, 2018

Those looking to use new docker features such as multi-stage builds in CI/CD want a faster upgrade cycle for docker than those administering a k8s cluster. If I understand the situation correctly, binding to the host docker socket brings these two differing priorities (features vs. stability) further into conflict.

@dlorenc (Contributor) commented Apr 16, 2018

FYI, we just open sourced a tool today that was designed to help address this issue: github.com/GoogleCloudPlatform/kaniko
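
For anyone evaluating it, a hedged sketch of running a kaniko build as a plain pod. The executor image and the --dockerfile/--context/--destination flags are from the kaniko project; the registry, secret name, and paths are illustrative:

```yaml
# Sketch: kaniko builds from a Dockerfile entirely in userspace, so the pod
# needs neither privileged mode nor access to the host Docker socket.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=/workspace/Dockerfile
    - --context=dir:///workspace
    - --destination=registry.example.com/app:latest   # illustrative registry
    volumeMounts:
    - name: workspace                 # build context, however it is populated
      mountPath: /workspace
    - name: registry-creds            # docker config.json used for the push
      mountPath: /kaniko/.docker
  volumes:
  - name: workspace
    emptyDir: {}
  - name: registry-creds
    secret:
      secretName: regcred             # hypothetical docker-registry secret
      items:
      - key: .dockerconfigjson
        path: config.json
```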

@BenTheElder (Member) commented Apr 16, 2018

It's probably worth mentioning that kubernetes test-infra uses a lot of privileged pods running docker-in-docker to do CI for k8s on k8s. This works pretty well for us, though it isn't quite ideal.

@bgrant0607 (Member, Author) commented Apr 17, 2018

Yes, there are solutions such as docker-in-docker and accessing the host docker socket, but in a world where we would like most containers to run with narrower privileges and where there are multiple container runtimes, those solutions are less than ideal.

The trend is towards standalone unprivileged build tools. Somewhere we should document alternatives, but that would be a website issue at this point.

So I'm (finally) closing this.

@bgrant0607 bgrant0607 closed this Apr 17, 2018
Workloads automation moved this from Backlog to Done Apr 17, 2018

@centerorbit commented Apr 17, 2018

The Google Cloud Platform blog linked me here, but for those following this thread, there was no mention of Kaniko.
For those still interested (or those that find this issue), here's a solution:

@zoobab commented Apr 18, 2018

For the record, for those who are OK with DinD and running a privileged container inside a k8s cluster, I wrote this simple project:

https://github.com/zoobab/kubebuild/

@AkihiroSuda (Member) commented Apr 29, 2018

I'm working on "CBI", a vendor-neutral CRD interface for building container images on top of a Kubernetes cluster: https://github.com/containerbuilding/cbi

Currently, plugins for Docker, BuildKit, Buildah, and kaniko are implemented.
More plugins are to come.

@jeremych1000 commented Apr 4, 2019

Been investigating this as well for production deployments - is Kaniko proven/production grade? Is there any downside to using something other than docker build?

@BenHizak commented Sep 10, 2019

@jeremych1000 Kaniko is certainly not production grade. I'm having multiple build issues (and reporting them as I find them).

@andrewhowdencom commented Sep 11, 2019

I have used the following project with some success:

https://github.com/genuinetools/img

It requires a "privileged" container, but does not mount the docker socket in.

@AkihiroSuda (Member) commented Sep 11, 2019

BuildKit doesn't need privileged: https://github.com/moby/buildkit/blob/master/docs/rootless.md#without-securitycontext-but-with---oci-worker-no-process-sandbox

Recent versions also support a daemonless mode, as in kaniko and img.
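
A rough sketch of the rootless mode referenced above, adapted from the shape of the linked document. The image tag, user ID, and annotations here are illustrative; consult the linked rootless.md for the exact, version-specific manifest:

```yaml
# Sketch: buildkitd running rootless. The --oci-worker-no-process-sandbox flag
# (from the linked doc) avoids the need for a privileged securityContext, while
# the apparmor/seccomp annotations relax the default profiles for the
# unprivileged user-namespace setup.
apiVersion: v1
kind: Pod
metadata:
  name: buildkitd-rootless-example    # hypothetical name
  annotations:
    container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
    container.seccomp.security.alpha.kubernetes.io/buildkitd: unconfined
spec:
  containers:
  - name: buildkitd
    image: moby/buildkit:rootless     # rootless variant of the BuildKit image
    args:
    - --oci-worker-no-process-sandbox
    securityContext:
      runAsUser: 1000                 # the non-root user baked into the image
```

A client such as buildctl would then connect to the buildkitd address to run builds.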

@machavenu commented Oct 8, 2019

@jeremych1000 Kaniko is certainly not production grade. I'm having multiple build issues (and reporting them as I find them).

Hi, are you still having issues with Kaniko? I am trying to explore Kaniko. Are there any other alternatives that are proven to be production grade?
