
Support for executing into a task #1895

Open
aluzzardi opened this issue Jan 24, 2017 · 19 comments

Comments

@aluzzardi
Member

Similar to docker exec, swarmkit should provide a way to execute commands inside a task, mostly for debugging purposes.

@stevvooe
Contributor

I'm not sure I understand the use case. I could see the point of attach, but exec'ing other processes seems ripe for abuse. Is it just because docker exec exists?

@hairyhenderson

@stevvooe for me, the use case is exactly the same as docker exec - for debugging and troubleshooting. It can be difficult to access a swarm node where a specific task is running to do a local docker exec, and downright impossible if the container only runs for a few seconds and then dies, only to be rescheduled to a different worker. Imagine doing this on a swarm with dozens or hundreds of workers...

IMO it'd be very useful to be able to docker service exec myservice.1 foo or something similar.

@stevvooe
Contributor

stevvooe commented Feb 1, 2017

@hairyhenderson You can already docker exec into a task container, even after exiting. The only challenge is getting to that node. Short-lived containers will be around until the manager instructs nodes to clean them up (I think we keep five per service instance).

I am not sure if this works, but it would be good if we could get to something like this:

ssh $(docker service ps <task-id> | awk '{print $4}' | grep -v NODE) docker exec <task-container-id> 

Ideally, we'd like to automate this, but if you need to debug, this may help as a workaround, as long as the node names will resolve with ssh.
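stevvooe's one-liner above can be fleshed out into a small script under the same assumptions (node names resolve over SSH, and docker is runnable on the worker). A sketch only: the helper name extract_node and the name-prefix container lookup are illustrative, not part of any existing tool.

```shell
#!/bin/sh
# Sketch of the discover-node-then-ssh workaround. SERVICE is the
# service (or task) name passed on the command line.
SERVICE="$1"

# `docker service ps` prints a header row; NODE is the 4th column.
# Isolated as a function so the parsing can be checked offline.
extract_node() {
  awk 'NR > 1 { print $4; exit }'
}

NODE=$(docker service ps "$SERVICE" --filter desired-state=running | extract_node)

# On the worker, find the task's container by name prefix and exec into it.
ssh -t "$NODE" "docker exec -it \$(docker ps -q --filter name=$SERVICE) sh"
```

The awk step replaces the original's `awk '{print $4}' | grep -v NODE`, which would print one node per task rather than a single target.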

@hairyhenderson

The only challenge is getting to that node.

That's exactly my point 🙂

The problem with your discover-node-then-ssh approach (which I do use on occasion) is that it presumes the host is reachable via SSH. There's no strict requirement for SSH to be available on swarm workers otherwise, so that's where some sort of docker service exec command would be useful.

I have no pressing requirement for this right now, but I hope this helps convince you that this is a legitimate use-case 🙂

@stevvooe
Contributor

stevvooe commented Feb 2, 2017

The problem with your discover-node-then-ssh approach (which I do use on occasion) is that it presumes the host is reachable via SSH. There's no strict requirement for SSH to be available on swarm workers otherwise, so that's where some sort of docker service exec command would be useful.

This was just a suggested workaround, and basically what would need to be built.

@sylvainmouquet

+1 docker service exec

@pnickolov

+1 docker service exec
Not sure if this is one or two different features: one is to be able to run the command on a specific task; the other one is to run it on all tasks of a service. Having at least the first one will be great!

@pnickolov

I would love to see this working, especially now that 17.05.0-ce has service logs (I also think service signal, pause/unpause and top may provide a good closure, too).

In the meantime, check out this swarm-exec tool on github or use the ready container.

Essentially, it provides the equivalent of the desired docker service exec <task> <cmd> [<args>...] using the following command executed on the swarm manager node:

docker run -v /var/run/docker.sock:/var/run/docker.sock \
    datagridsys/skopos-plugin-swarm-exec \
    task-exec <taskID> <command> [<arguments>...]

For details on how it works and its current limitations, see the project on GitHub.

@duro

duro commented Jul 21, 2017

I too have a use case for this. I have a swarm (via Docker Cloud) that I deploy feature branches to using a CI server. I need a way to trigger some cleanup tasks that can only be run from inside the container when a branch is deleted.

My flow looks like this:

Delete Branch -> Github Webhook -> Cleanup CI Job Executed -> docker service exec '.bin/perform-cleanup.sh'

This is just my use case, but I'm certain there will be more solid use cases for this feature.

fntlnz added a commit to fntlnz/swarmkit that referenced this issue Aug 28, 2017
I want to address moby#1895 - Please be aware that this is just an experiment
to get my hands dirty with the codebase and to understand how things
work.

At the current stage this is basically a gRPC service on the managers
that connects the client to the container executing the
task of a specific service.

It does not have TTYs.

Feedback and what I need to go on:
- I need some advice in designing this and, if possible, code review.
- Since this is just a PoC I'm not yet writing tests; this is just
something I need to understand a codebase I'm not familiar enough
with to do TDD.
- There are probably contexts that are not closed; I already see some
of them.

Try it:
  swarmctl service exec <service-id>

(Yes, I know one expects to have `sh` or something like that at the
end. It's too early.)

Signed-off-by: Lorenzo Fontana <lo@linux.com>
fntlnz added a commit to fntlnz/swarmkit that referenced this issue Sep 9, 2017
@rdxmb

rdxmb commented Oct 12, 2017

As multiple tasks/containers can belong to one service, it would be more explicit to have a
docker task exec

With a
docker service exec
which task/container would be used?

Does that not matter? A random one? Okay, if it is defined and designed like that, no problem.

//edited:
When we talk about use cases, this can also be backups (like database dumps) triggered by cron on the Docker hosts, as there is usually no cron in the official Docker images.
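The cron-triggered backup mentioned above can be sketched as a host-side script. A sketch only: the service name, database, credentials, and paths are hypothetical, and it assumes the image ships pg_dump and that a replica of the task runs on this host.

```shell
#!/bin/sh
# Host-side cron job for the database-dump use case: since official
# images ship no cron, the host locates the local task container and
# execs the dump inside it.
SERVICE=mystack_db
BACKUP_DIR=/var/backups/db

# Date-stamped target file, e.g. /var/backups/db/mystack_db-2024-01-31.sql
outfile="$BACKUP_DIR/$SERVICE-$(date +%F).sql"

# Swarm names task containers "<service>.<slot>.<task-id>", so a name
# filter on the service prefix finds the local replica, if any.
cid=$(docker ps -q --filter "name=$SERVICE." | head -n1)

if [ -n "$cid" ]; then
  docker exec "$cid" pg_dump -U postgres mydb > "$outfile"
fi
```

A crontab entry such as `0 3 * * * /usr/local/bin/db-backup.sh` would then run it nightly on each host.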

@RehanSaeed

I'd be happy with docker service exec entering a random container. Getting into a short-lived container for troubleshooting is a nightmare at the moment.

@duro

duro commented Jan 18, 2018

I tend to agree that a random container would be fine. I need this to run things like crons and cleanup scripts before teardown, so any of the containers would suffice.

Alternatively, you could pass one of the IDs of the running processes that show up when running docker service ps <service_name>

@rdxmb

rdxmb commented Jan 19, 2018

Just wrote a workaround with SSH and bash for that. Maybe it is helpful for somebody.

https://github.com/rdxmb/docker_swarm_helpers

@cypx

cypx commented Mar 1, 2018

Thanks @rdxmb

I made a less sophisticated but very similar one a few days ago ;-)
https://gist.github.com/cypx/a2b4765fc1ccb497c99f77c78d0d4933

@westfood

westfood commented Oct 8, 2019

Hmm... I would like to see

docker task exec to run in a defined task, ideally with multiple task IDs (if it does not break expectations regarding CLI behavior)

and
docker service exec to run across all service tasks
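A rough approximation of the run-across-all-tasks semantics is a loop from a manager with SSH access to every node. A sketch only: the column positions are assumed from docker service ps default output, and the `<name>.<id>` container naming follows the pattern used elsewhere in this thread.

```shell
#!/bin/sh
# Sketch of "run a command on every task of a service", assuming SSH
# access from the manager to each worker node.
SERVICE="$1"; shift
CMD="$*"

# Emit one "node container-name" pair per running task. With --no-trunc
# the full task ID is shown, and the container is named <name>.<id>
# (columns: ID=$1, NAME=$2, NODE=$4).
docker service ps "$SERVICE" --filter desired-state=running --no-trunc |
  awk 'NR > 1 { print $4, $2 "." $1 }' |
  while read -r node name; do
    ssh "$node" "docker exec $name $CMD"
  done
```

Dropping the loop's final `| while ... done` and keeping only the first line of awk output would give the single-random-task variant discussed above.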

@anthony-o

As I said here, the simplest command I found to docker exec into a swarm node (with a swarm manager at $SWARM_MANAGER_HOST) running the service $SERVICE_NAME (for example mystack_myservice) is the following:

SERVICE_JSON=$(ssh $SWARM_MANAGER_HOST "docker service ps $SERVICE_NAME --no-trunc --format '{{ json . }}' -f desired-state=running")
ssh -t $(echo $SERVICE_JSON | jq -r '.Node') "docker exec -it $(echo $SERVICE_JSON | jq -r '.Name').$(echo $SERVICE_JSON | jq -r '.ID') bash"

This assumes that you have SSH access to $SWARM_MANAGER_HOST as well as to the swarm node currently running the service task.

This also assumes that you have jq installed (apt install jq); if you can't or don't want to install it and you have Python installed, you can create the following alias (based on this answer):

alias jq="python3 -c 'import sys, json; print(json.load(sys.stdin)[sys.argv[2].partition(\".\")[-1]])'"
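That alias can be checked offline. A quick sketch (assuming python3 is on the PATH), wrapping the same one-liner in a shell function to show that it extracts a top-level key much like jq -r '.Node' would:

```shell
# Minimal check of the python-based jq substitute: it reads JSON from
# stdin and prints the top-level key named by the '.key' argument
# (sys.argv[2] is '.Node' when called as `jq -r '.Node'`).
jq() {
  python3 -c 'import sys, json; print(json.load(sys.stdin)[sys.argv[2].partition(".")[-1]])' "$@"
}

echo '{"Node":"worker1","Name":"web.1"}' | jq -r '.Node'
# prints: worker1
```

Note that, unlike real jq, this only handles a single top-level key, so it works for the .Node/.Name/.ID lookups above but not for nested paths.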

@aduzsardi

Nothing simple about that, @anthony-o. Maybe from a sysadmin's perspective it's simple enough, but for developers not so much.

It would be great to have docker task|service exec commands, for the same reasons we have docker container exec.
This would work great with Docker contexts, giving developers the option to connect to remote Docker Swarm services/tasks for debugging without hopping from host to host.

@s4ke
Contributor

s4ke commented Jul 10, 2023

Just to add my 2 cents here.

Right now I think arguably the best way to achieve this behaviour without changing moby/swarmkit is described here.

I think if this were to be added to moby/swarmkit, the barrier between moby/moby and moby/swarmkit needs to be torn down to some extent. I feel like a swarmkit native way to access the docker api of other nodes in the cluster would allow for quite a lot of useful features added in either docker cli plugins or the docker cli itself.

So rather than adding a specific use case to moby/swarmkit to exec into a service, I think adding a more generic way could be a better solution? WDYT @dperny @neersighted @thaJeztah ?

@snth

snth commented Aug 4, 2023

Hi,

I just started working on creating a bash script that would make working with services in stacks on Docker Swarm easier. You can find it here: https://github.com/snth/docker-stack

At the moment it makes exec'ing into a container and fetching the logs easier. I also want to add easier deployments that take environment variable substitutions into account like in docker-compose.

LMK if it works for you or if you encounter any problems.
