Support for swarm mode in Docker 1.12 #141

Closed
alantrrs opened this Issue Jul 18, 2016 · 23 comments

@alantrrs

alantrrs commented Jul 18, 2016

I'm trying to use nvidia-docker with the swarm functionality introduced in the new Docker 1.12.

I tried to create a service (docker service create ..) with nvidia-docker (nvidia-docker service create ..) and it didn't work. I haven't seen any way to pass devices to docker service create, so I'm wondering if it's even supported on Docker's side.
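
Roughly what I ran (the service name and image here are just examples):

# Passed through the nvidia-docker wrapper, but the tasks never get the GPU devices
nvidia-docker service create --name gpu-test nvidia/cuda nvidia-smi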

Any thoughts?

@flx42

Member

flx42 commented Jul 18, 2016

This question was asked on the docker GitHub a few hours ago:
moby/moby#24750

Currently, nvidia-docker doesn't support Docker Swarm, so service create is simply passed through to the docker CLI.
You are right that there doesn't seem to be a way to pass devices to service create; so far we can only mount the volume:

docker service create --mount type=volume,source=nvidia_driver_367.35,target=/usr/local/nvidia,volume-driver=nvidia-docker [...]

But that's not enough; we can't get around the device cgroup.
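
For comparison, this is roughly the manual docker run equivalent that service create cannot express today (a sketch; device paths and the driver volume name vary per machine, and this exposes GPU 0 only):

docker run --rm \
  --volume-driver=nvidia-docker --volume=nvidia_driver_367.35:/usr/local/nvidia:ro \
  --device=/dev/nvidiactl --device=/dev/nvidia-uvm --device=/dev/nvidia0 \
  nvidia/cuda nvidia-smi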

Even if we could, in a cluster environment with Swarm there will also be a problem if different machines have a different number of GPUs.

@3XX0 thoughts?

@Josca

Josca commented Dec 6, 2016

+1 to add nvidia-docker support for docker swarm.

@davad

davad commented Jan 29, 2017

@flx42 @3XX0 any movement on this? I looked over the related issues and don't see anything recent. I'm itching to orchestrate CUDA jobs via docker across multiple machines 😄

@el3ment

el3ment commented Mar 31, 2017

Any progress on this?

@anaderi

anaderi commented Mar 31, 2017

would be cool, no?

@3XX0

Member

3XX0 commented Apr 4, 2017

This is basically what we need for basic GPU support:

docker/swarmkit#2090

@0fork

0fork commented May 27, 2017

@3XX0 It got merged today! I've been managing nvidia-docker containers manually via docker-compose, so having swarmkit and all the v3 deploy things work would be absolutely fantastic.

Are there any potential merge conflicts that have to be dealt with before this can be merged into nvidia-docker?

@cheyang

cheyang commented May 30, 2017

@0fork, can you share any docs about how you played with it, so we can also try this cool feature? Thanks.

@3XX0

Member

3XX0 commented May 30, 2017

Yes, this is a big step forward toward getting GPUs working within Swarm. However, we're not quite there yet: we still need to add support in Docker itself, and we are still missing some pieces which should come with nvidia-docker 2.0.

Stay tuned ;)

@luiscarlosborbon

luiscarlosborbon commented May 31, 2017

+1

@erbas

erbas commented Jun 9, 2017

@3XX0 What's the timeframe for this to come together?

@thommiano

thommiano commented Jun 14, 2017

My team is working on a machine with several GPUs, and we're using Docker to containerize all of our projects. I'm trying to figure out the best way to schedule GPU jobs that are running in Docker containers so that users don't accidentally interfere with existing jobs or have to sit around until one of the other team members frees up a GPU. Would swarm functionality solve this problem? Our current approach is to use NV_GPU=n in our nvidia-docker run command to isolate a GPU to that container, as referenced here, and I'm hoping that we can do away with this with job scheduling.
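
For reference, the pinning we do today is just along these lines (the GPU index and image are placeholders):

# Restrict this container to GPU 0 via the nvidia-docker 1.x wrapper
NV_GPU=0 nvidia-docker run --rm nvidia/cuda nvidia-smi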

@omerh

omerh commented Jun 19, 2017

This is a great feature and a must for Docker swarm.
I am going to solve it with a pre-baked AMI and an autoscaling group, but only because that fits my use case.
Waiting for updates on both the moby project and nvidia-docker.

@fvillarr

fvillarr commented Jun 25, 2017

@0fork, can I also get some information about how you played with it?
Thanks.

@0fork

0fork commented Jun 25, 2017

@fvillarr & @cheyang I'm sorry, I don't understand what you want to know :) We've been using nvidia-docker via nvidia-docker-compose, not the swarmkit feature we're all anxiously waiting for. Word of caution: using nvidia-docker at scale is a PITA right now. You have to manage each server separately, because nvidia-docker-compose needs to generate machine-specific mount points for the NVIDIA drivers to work via compose. There's nothing available to automate this, and I don't think we can scale much further with the current setup of scripts and manual effort. I don't have any docs because it's all in the nvidia-docker-compose repo; we just took it to scale.

@88plug

88plug commented Aug 26, 2017

+1

@3XX0

Member

3XX0 commented Nov 14, 2017

Closing; most of the remaining issues are on the Docker side. You can track our progress here:
moby/moby#33439

@3XX0 3XX0 closed this Nov 14, 2017

@nikoargo

nikoargo commented Jan 9, 2018

Any update on this now that all the PRs in moby/moby#33439 have been merged? They allow placing services according to generic resources, but I'm not sure how to actually mount the GPU inside the service's container.

@3XX0

Member

3XX0 commented Jan 10, 2018

@nikoargo with 17.12.0-ce you can configure the docker daemon to expose your GPUs to swarm:

1. Create an override for the dockerd configuration, changing your default runtime and adding GPU resources. You can generate the resource flags like this:

       nvidia-smi -a | grep UUID | awk '{print "--node-generic-resource gpu="substr($4,0,12)}' | paste -d' ' -s

   Then edit the unit with sudo systemctl edit docker and add:

       [Service]
       ExecStart=
       ExecStart=/usr/bin/dockerd -H fd:// --default-runtime=nvidia <resource output from the above>

2. Uncomment swarm-resource under /etc/nvidia-container-runtime/config.toml (see the sketch at the end of this comment).

3. Restart the docker daemon, create your swarm, and create a new service requesting GPUs:

       docker service create -t --generic-resource "gpu=1" ubuntu bash

Note: there is currently a bug (moby/moby#35970); the flag should normally be --node-generic-resources. This will be fixed in the future.
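
For step 2, the edit is roughly the following; the key name comes from the default config shipped with nvidia-container-runtime, so double-check it against your installed file:

# Uncomment the swarm-resource line in the runtime config
sudo sed -i 's/^#swarm-resource/swarm-resource/' /etc/nvidia-container-runtime/config.toml
# The file should then contain a line like:
#   swarm-resource = "DOCKER_RESOURCE_GPU"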

@nikoargo

nikoargo commented Jan 10, 2018

This is incredible. Thank you so much!

@romilbhardwaj

romilbhardwaj commented Feb 19, 2018

@3XX0 This is fantastic, thanks a lot!

An observation: this also seems to enforce exclusive allocation of the GPUs at the orchestration layer. For example, if I have a machine with two physical GPUs, I cannot create more than two services (each of which requests one GPU). Adding a third service results in a no suitable node (insufficient resources) message and docker swarm waits for a running service to end before scheduling the new one.

Is there any way to overcome this and allow sharing of GPUs across services while maintaining isolation? For instance, adding a third service in the above example should create a service and have it share a GPU with one of the existing services.

This can be achieved with node labels (give each node a GPU-count label and deploy any service that requires fewer GPUs than the count onto that node), but this approach is unaware of the service's resource requirements and does not enforce isolation: all GPUs on the machine will be visible to a service that may need only one. A sketch of what I mean follows.
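
For reference, the node-label workaround is roughly this (the label, node, and image names are placeholders, and swarm constraints only support equality checks, so "at least N GPUs" can't be expressed directly):

# Record the GPU count as a node label
docker node update --label-add gpu_count=2 my-gpu-node
# Constrain the service to labelled nodes; all GPUs on the node remain visible to it
docker service create --constraint 'node.labels.gpu_count==2' my-image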

@3XX0

Member

3XX0 commented Feb 20, 2018

Unfortunately we do not support sharing GPUs. We have the same limitation with Kubernetes and we're looking into relaxing this constraint.
Having said that, the hardware doesn't support true multitenancy, so doing this can be quite costly. We usually recommend writing your application with this in mind instead, and implementing your own scheduling/batching to take full advantage of the whole GPU.

@CharlesJQuarra

CharlesJQuarra commented Apr 21, 2018

Must one change the default runtime for a given node in order to use the GPU for swarm services? Can the GPU generic resource be added to a swarm node while leaving runc as the default runtime?
