
fix(worker): mount docker socket when privileged=true #117

Closed

Conversation

technosophos
Contributor

This fixes the privileged mode to mount the Docker socket.

@technosophos
Contributor Author

To be clear, this should allow you to use the docker image from inside Kubernetes. This is useful for running in-cluster docker build operations and the like.
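For readers unfamiliar with the mechanism: mounting the host's Docker socket into a pod is done with a hostPath volume. A minimal sketch of the relevant pod-spec fragment (names here are illustrative, not taken from the worker's actual templates):

```yaml
# Illustrative pod-spec fragment: expose the node's Docker daemon to the
# job container via a hostPath volume. Only safe for trusted workloads.
containers:
  - name: build-job            # hypothetical container name
    image: docker:stable
    securityContext:
      privileged: true
    volumeMounts:
      - name: docker-socket
        mountPath: /var/run/docker.sock
volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock
```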

@u2takey

u2takey commented Nov 10, 2017

It may be better to add a new flag for mounting the docker socket? (Or move all volume actions out for the user to control?)

Users running docker-in-docker may not need the docker socket from the host (and exposing it is not safe). For example, people can use https://github.com/jpetazzo/dind for docker builds and save the docker build cache in brigadeCache for subsequent builds.

For now, I think the simplification of mounting the host docker socket is OK.

@technosophos
Contributor Author

I think I agree with @u2takey:

  • Option 1: Add a Job.mountDockerSocket=<boolean> option on Job, and require privileged == true before doing that.
  • Option 2: Mount the docker socket by default, and add a job.noDockerSocket=<boolean> option (in other words, enable the mount by default).
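The two options differ only in the default; either way the worker ends up with a small predicate deciding whether to mount the socket. A minimal sketch of Option 1's semantics in plain JavaScript (Job.mountDockerSocket is the hypothetical flag proposed here, not an existing brigadier property):

```javascript
// Hypothetical Option 1 semantics: the socket is mounted only when the
// job opts in AND is privileged; opting in without privileged is an error.
function shouldMountDockerSocket(job) {
  if (job.mountDockerSocket && !job.privileged) {
    throw new Error("mountDockerSocket requires privileged = true");
  }
  return Boolean(job.mountDockerSocket && job.privileged);
}

console.log(shouldMountDockerSocket({ privileged: true, mountDockerSocket: true }));  // true
console.log(shouldMountDockerSocket({ privileged: true, mountDockerSocket: false })); // false
```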

BTW -- has anyone tested DinD with Brigade? I know we do it in Draft.

Are there other features we would need to expose in order to enable DIND?

@bacongobbler
Contributor

You’ll want to expose the docker engine flags. Some clusters need to use a specific storage backend, such as overlayfs, as the backing filesystem driver. The default storage driver for docker fails to work on Azure/minikube, for example.
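As an illustration (assuming the stock dockerd-entrypoint.sh, which forwards its arguments to dockerd), selecting a storage driver explicitly would look like:

```shell
# Hypothetical: start the daemon with overlay2 instead of the default
# storage driver, which fails on some Azure/minikube hosts.
dockerd-entrypoint.sh --storage-driver=overlay2 &
```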

@bacongobbler
Contributor

You can see the current list of issues with using dind on kubernetes for Draft here: https://github.com/Azure/draft/issues?q=is%3Aissue+overlay

@technosophos
Contributor Author

I have been testing this out:

const { events, Job } = require("brigadier")

events.on("exec", () => {
  // Run a privileged Docker-in-Docker job and verify the daemon comes up.
  const dind = new Job("dind", "docker:edge-dind")
  dind.privileged = true
  dind.tasks = [
    "dockerd-entrypoint.sh &",   // start the Docker daemon in the background
    "echo waiting && sleep 20",  // give the daemon time to come up
    "ps -ef",
    "docker version",            // confirms the client can reach the daemon
    "killall dockerd"
  ]
  dind.run().then(() => {
    console.log("==== DONE ====")
  })
})

So it's a start.

@technosophos
Contributor Author

I'm hearing that the DinD method is "working, but slowish" for people. Does it make sense for me to go ahead and try another PR that will expose the host docker socket, or should we just call it good with DinD?

@bacongobbler
Contributor

bacongobbler commented Nov 17, 2017

I think it'd be better to make a PR to mount the host docker socket instead. It's easier to maintain: the host's docker engine has already been optimized for the cloud provider, and we won't need to constantly update/upgrade the docker daemon container image (or add new support for engine flags). It's overall a significantly lighter maintenance burden.

As long as users have a way to run a dind container themselves, that should be more than OK.

FYI see https://github.com/Azure/draft/pull/434, I plan on pulling out dind from Draft due to maintenance hassle and other points I mentioned here.

@technosophos
Contributor Author

Closing in favor of #154

@bacongobbler bacongobbler mentioned this pull request Nov 28, 2017