Figure out how to support docker build on Kubernetes #1806
Related issues on docker/docker:
Given those constraints, one way to implement this may be as a special extension volume type to me - mount Docker with xyz conditions on use.
Consolidated use cases:
An alternative to what @smarterclayton said above is to treat all builds as happening via a build service, which itself is trusted at the same level as root-on-kubelet. A build service is basically a pod which listens for build requests from some source, and then executes them using docker build, and pushes the result to some repository, or error messages somewhere else accessible.
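The build-service flow described above (listen for a request, run `docker build`, push the result, report errors) can be sketched roughly as follows. The names `BuildRequest`, `build_commands`, and `handle` are illustrative, not an existing API:

```python
import subprocess
from dataclasses import dataclass

@dataclass
class BuildRequest:
    context_dir: str  # directory containing the Dockerfile
    image: str        # destination tag, e.g. "registry.example.com/app:v1"

def build_commands(req: BuildRequest):
    """The docker CLI invocations the service would run for one request."""
    return [
        ["docker", "build", "-t", req.image, req.context_dir],
        ["docker", "push", req.image],
    ]

def handle(req: BuildRequest) -> bool:
    """Execute the build, then push; report success back to the requester."""
    for cmd in build_commands(req):
        if subprocess.run(cmd).returncode != 0:
            return False  # surface the error instead of pushing a bad image
    return True
```

Because only this service pod talks to the Docker daemon, it can be trusted at root-on-kubelet level while the pods submitting requests stay unprivileged.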
Many organizations have the requirement that a single user cannot author code and build it and push it to production. I believe this is a SOX requirement, or a common interpretation of SOX. So that implies that there has to be a different user who does either the build or the push step. The push step needs to rely on the integrity of the build step. So the build step has to be done by a trusted user (service account). So then the questions are:
A service might seem heavyweight, though, for smaller organizations. It could make it harder to debug the build process, given that there is a proxy, and that a user may not be able to ssh into the build-user's pod and poke around.
The benefits of per pod builds are that you can do resource and security isolation (in theory, eventually docker build will be secure, although it's a ways out from that) the same way that you run users. It would be ideal if docker build could be run via a single process exec (instead of the daemon) that has higher level privileges. Today, acknowledging that the docker daemon is essentially a build service, it should still be possible to run docker builds in such a way that the daemon can nest / inherit the security / resource constraints of the invoker so that builds can be contained.
It is today; it should be an avenue of improvement in the future (a container execution and a build execution should be equivalently securable).
+1 This issue is more than a year old. I'm trying to work around this today by mounting the docker binary and required libraries into my pod. Not sure if it's going to work, and I'm pretty certain I'm breaking security principles. However I'm not sure docker suggestions are any better: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
Today we run locked down pods that have access to hostPath bind mounted. On Tue, Nov 17, 2015 at 1:08 PM, drocsid notifications@github.com wrote:
Hey, is there any workaround to pass a Dockerfile to kubectl instead of Docker images?
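For reference, a locked-down pod of the kind mentioned a couple of comments up, with the host Docker socket bind-mounted through a hostPath volume, might look roughly like the following. All names and the container image here are illustrative:

```python
import json

# Illustrative pod spec; the hostPath volume exposes the node's Docker
# socket to the container, with all the security caveats raised in this
# thread (effectively root on the node).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "docker-build"},
    "spec": {
        "containers": [{
            "name": "builder",
            "image": "docker:stable",  # any image shipping the docker CLI
            "volumeMounts": [{
                "name": "docker-sock",
                "mountPath": "/var/run/docker.sock",
            }],
        }],
        "volumes": [{
            "name": "docker-sock",
            "hostPath": {"path": "/var/run/docker.sock"},
        }],
    },
}
print(json.dumps(pod, indent=2))
```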
It seems like mounting the docker socket makes the most sense as the solution to run builds in k8s. If I mount the docker socket and run docker build and then docker push from a pod, I end up with images taking up disk and inodes, and I want to make sure these images won't cause capacity problems. I'm trying to resolve two questions:
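One way to keep leftover build images from eating disk and inodes is to periodically find and remove dangling images from inside the pod. A small, hypothetical helper could parse the output of `docker images --format '{{.Repository}}\t{{.ID}}'` (the `--format` flag is real docker CLI) and return the IDs to pass to `docker rmi`:

```python
def dangling_image_ids(images_output: str) -> list:
    """Given lines of "repository<TAB>id" as produced by
    `docker images --format '{{.Repository}}\t{{.ID}}'`, return the IDs
    of dangling (<none>) images so they can be removed with `docker rmi`."""
    ids = []
    for line in images_output.splitlines():
        if not line.strip():
            continue
        repo, image_id = line.split("\t")
        if repo == "<none>":
            ids.append(image_id)
    return ids
```

This only addresses the disk-usage half of the question; whether the images survive long enough to be pushed is the scheduling question discussed next.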
We've discussed adding that in a few places - a way to ask for both a container and image to be "preserved" for a reasonable period of time to allow commit and push to have time to work. Ideally that could be done without having to define an API on the master. |
Beware that mounting the docker socket completely bypasses Kubernetes. That means that the scheduler is not aware of the stuff that is running there which might degrade your cluster reliability or performance. Also note that mounting the docker socket gives your pods full root privileges on the nodes, thus eliminating all of the security benefits of containers. I recommend taking a look at acbuild which is a stand-alone tool for creating container images. Unfortunately there are still a lot of issues regarding docker compatibility, see this issue. I built a proof of concept that works with an older version of skopeo and demonstrates that it can work. Once the OCI spec is final everything will get a lot easier in that regard.
To isolate the container that needs to do docker build/run from the dind daemon, we use a sidecar container for the dind daemon. Only the sidecar container is privileged. Since the docker api for the dind daemon is exposed via a tcp socket, we have considered implementing a proxy layer that forbids the main container from running a container that asks for privilege escalation. Another aspect that must be considered is how to prevent the main container from … Thoughts?
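The policy layer of such a proxy could be as simple as inspecting the JSON body of `POST /containers/create` requests before forwarding them to the dind daemon. The Docker Engine API fields checked below (`HostConfig.Privileged`, `CapAdd`, `PidMode`) are real, but the proxy itself is a hypothetical sketch, not an existing tool:

```python
import json

def allow_container_create(body_json: str) -> bool:
    """Return True only if the create request asks for no privilege
    escalation; the proxy would reject anything else (e.g. with a 403)."""
    host_config = json.loads(body_json).get("HostConfig", {})
    if host_config.get("Privileged"):
        return False  # no privileged containers
    if host_config.get("CapAdd"):
        return False  # no added Linux capabilities
    if host_config.get("PidMode") == "host":
        return False  # no sharing the host PID namespace
    return True
```

A real deployment would need to cover more of the API surface (binds, devices, network mode), which is part of why this approach gets complicated.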
I don't think Kubernetes should wrap or abstract the Docker build API/concepts. Instead I would prefer to extend Kubernetes to allow committing containers and storing them in shared storage (Volumes?). This could be used to build a Kubernetes native builder (which could support Dockerfiles) that works with any container runtime. I'm sure others had similar thoughts about that. Was this discussed somewhere or is there even an issue for that?
I rewrote DinD as a simple SSHD+Dockerd on Alpine: https://github.com/zoobab/dind I am still struggling to get it running in privileged mode inside Kubernetes (minikube for now), so let me know if you have any tips. At the moment I am adding a change to entry.sh to quit if SSHD or Dockerd is not running.
I wrote a tool that lets you specify Google Cloud Container Builder build steps directly inside your Kubernetes manifests: https://github.com/dminkovsky/kube-cloud-build. It's been a great help to me! |
Those looking to use new docker features such as multistage builds in CI/CD are looking for a faster upgrade cycle of docker than those administering a k8s cluster. If I understand the situation correctly, then I feel that binding to the host docker socket causes these two differing priorities (features/stability) to come more into conflict.
FYI, we just open sourced a tool today that was designed to help address this issue: github.com/GoogleCloudPlatform/kaniko |
It's probably worth mentioning that kubernetes test-infra uses a lot of privileged pods running docker-in-docker to do CI for k8s on k8s. This works pretty well for us, though isn't quite ideal. |
Yes, there are solutions such as docker-in-docker and accessing the host docker socket, but in a world where we would like most containers to run with narrower privileges and where there are multiple container runtimes, those solutions are less than ideal. The trend is towards standalone unprivileged build tools. Somewhere we should document alternatives, but that would be a website issue at this point. So I'm (finally) closing this. |
The Google Cloud Platform blog linked me here, but for those following this thread, there was no mention of Kaniko. |
For the record, those who are OK with DinD and running a privileged container inside a k8s cluster, I wrote this simple project: |
I'm working on "CBI", a vendor-neutral CRD interface for building container images on top of a Kubernetes cluster: https://github.com/containerbuilding/cbi Currently, plugins for Docker, BuildKit, Buildah, and kaniko are implemented. |
Been investigating this as well for production deployments - is Kaniko proven/production grade? Is there any downside to using something other than |
@jeremych1000 Kaniko is certainly not production grade. I'm having multiple build issues (and reporting them as I find them). |
I have used the following project with some success: https://github.com/genuinetools/img It requires a "privileged" container, but does not mount the docker socket in. |
BuildKit doesn't need privileged mode: https://github.com/moby/buildkit/blob/master/docs/rootless.md#without-securitycontext-but-with---oci-worker-no-process-sandbox Recent versions also support a daemonless mode, as in kaniko and img.
Hi, are you still having issues with Kaniko? I am trying to explore Kaniko. Are there any other alternatives which are proven to be production grade?
Touched upon in #503, #1294, #1567, and elsewhere.
Privileges are currently required to do `docker build`. Docker in Docker doesn't really work, so in order to do this on Kubernetes, one has to mount the host Docker socket into their container, which is a big security and abstraction problem.