Add ability to restart running tasks/jobs #698

Open
supernomad opened this issue Jan 22, 2016 · 54 comments

Comments


@supernomad supernomad commented Jan 22, 2016

So I would love the ability to restart tasks, at the very least restart an entire job, but preferably single allocations. This is very useful when a particular allocation or job happens to get into a bad state.

I am thinking something like nomad restart <job> or nomad alloc-restart <alloc-id>.

One of my specific use cases: I have a cluster of RabbitMQ nodes, and at some point one of the nodes gets partitioned from the rest of the cluster. I would like to restart that specific node (an allocation in Nomad parlance), or be able to perform a rolling restart of the entire cluster (a job in Nomad parlance).

Does this sound useful?


@dadgar dadgar commented Jan 22, 2016

It's not a bad idea! In the meantime, if you just want to restart the job you can stop it and then run it again.
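
A minimal sketch of that stop-and-rerun workaround (the job name and file are only examples); note that, as discussed further down the thread, this causes an interruption and does not honor the update stanza:

# stop the running job, then submit it again from its job file
nomad stop my-job
nomad run my-job.nomad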


@mkabischev mkabischev commented Feb 6, 2016

I think it would be a good feature. Right now I can stop and then run the job, but it won't be graceful.


@gpaggi gpaggi commented Apr 19, 2016

+1
Another use case: most of our services read their configuration either from static files or Consul, and when any of the properties change the services need to be rolling-restarted.
Stopping and starting the job would cause a service interruption, and a blue/green deployment for a configuration change is a bit overkill.

@supernomad did you get a chance to look into it?


@jtuthehien jtuthehien commented May 24, 2016

+1 for this feature


@c4milo c4milo commented Jun 14, 2016

This is much needed in order to effectively reload configurations without incurring downtime. As mentioned above, blue/green doesn't really scale well when you have too many tasks, and it is somewhat unpredictable since it depends on the specific app being deployed playing well with multiple versions of itself running at the same time.


@liclac liclac commented Jul 14, 2016

I'd very much like to see this, for a slightly different use case:

I have something running as a system job (in this case, a wrapper script that essentially does docker pull ... && docker run ...; it needs to mount a host directory to work, which is a workaround for #150). To roll out an update, I currently need to change a dummy environment variable, or Nomad won't know anything changed.


@mohitarora mohitarora commented Aug 22, 2016

+1


@dennybaa dennybaa commented Sep 15, 2016

Why not? Guys, please add it; it should be trivial.


@jippi jippi commented Sep 27, 2016

👍 on this feature as well :)


@xyzjace xyzjace commented Jan 16, 2017

👍 For us, too.


@ashald ashald commented Jan 26, 2017

We would be happy to see this feature as well. Sometimes... services just need a manual restart. :( Would be nice if it was possible to restart individual tasks or task groups.


@rokka-n rokka-n commented Jan 26, 2017

Having a rolling restart option is a very valid use case for tasks/jobs.


@jippi jippi commented Jan 26, 2017

What I've done as a hack is to have a keyOrDefault inline template{} stanza in the task stanza for each of these keys, simply writing them to some random temp file:

  • apps/${NOMAD_JOB_NAME}
  • apps/${NOMAD_JOB_NAME}/${NOMAD_TASK_NAME}
  • apps/${NOMAD_JOB_NAME}/${NOMAD_TASK_NAME}/${NOMAD_ALLOC_INDEX}
  • apps/${NOMAD_ALLOC_NAME}

that each get change_mode = "restart" (or "signal" with the appropriate change_signal value)

so I can do a manual rolling restart of any Nomad task by simply changing or creating one of those Consul keys in my cluster programmatically... at my own pace, for a controlled restart too :)

Writing to Consul KV apps/${NOMAD_JOB_NAME} will restart all tasks in the job.
Writing to Consul KV apps/${NOMAD_JOB_NAME}/${NOMAD_TASK_NAME} will restart all instances of that task within the job.
Writing to Consul KV apps/${NOMAD_JOB_NAME}/${NOMAD_TASK_NAME}/${NOMAD_ALLOC_INDEX} will restart one specific task index within the job.
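
With template stanzas like that in place, a restart can be triggered from the shell by touching the matching Consul key. A minimal sketch (the key paths, job name, and task name are only examples):

# restart every task in the job
consul kv put apps/my-job "$(date)"
# restart every instance of one task within the job
consul kv put apps/my-job/my-task "$(date)"
# restart only allocation index 3 of that task
consul kv put apps/my-job/my-task/3 "$(date)"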


@ashald ashald commented Jan 26, 2017

@jippi that's super smart! Thanks, I guess I'll use that for the time being. :)

But that level of control is something that would be great to see in Nomad's native API.

P.S.: That reminds me of my hack/workaround for securing any resource in Nginx (e.g., the Nomad API) with Consul ACL tokens, using auth_request against some read-only API endpoints. :D


@pznamensky pznamensky commented Aug 29, 2017

Would be useful for us too.


@dansteen dansteen commented Sep 6, 2017

This would also be useful for the new deployment stuff. The ability to re-trigger a deployment would be great.


@JewelPengin JewelPengin commented Sep 6, 2017

Throwing in my +1, but also my non-Consul-based brute-force way:

export NOMAD_ADDR=http://[server-ip]:[admin-port]

curl $NOMAD_ADDR/v1/job/:jobId | jq '.TaskGroups[0].Count = 0 | {"Job": .}' | curl -X POST -d @- $NOMAD_ADDR/v1/job/:jobId

sleep 5

curl $NOMAD_ADDR/v1/job/:jobId | jq '.TaskGroups[0].Count = 1 | {"Job": .}' | curl -X POST -d @- $NOMAD_ADDR/v1/job/:jobId

It requires the jq binary to be installed (which I would highly recommend anyway), but it will first grab the job, set the task group count to 0, post it back to trigger an update, and then do the same again with the count back at 1 (or whatever number is needed).

Again, kinda brute force and not as elegant as @jippi's, but it works if I need to get something done quickly.


@danielwpz danielwpz commented Sep 14, 2017

Really useful feature! Please do it :D


@sullivanchan sullivanchan commented Sep 19, 2017

I have done some verification following @jippi's suggestion, with data = "{{ key apps/app1/app1/${NOMAD_ALLOC_INDEX} }}" in the template stanza, but the job start is always pending. It seems environment variables are only available via {{ env "ENV_VAR" }} (https://www.nomadproject.io/docs/job-specification/template.html#inline-template). I want to know how to interpolate an environment variable into the key string; does anybody have the same question?


@mildred mildred commented Sep 19, 2017

This is standard Go template syntax:

          {{keyOrDefault (printf "apps/app1/app1/%s" (env "NOMAD_ALLOC_INDEX")) ""}}

@mildred mildred commented Sep 19, 2017

I suggest you use keyOrDefault instead of just key; key will prevent your service from starting unless the key exists in Consul.


@vtorhonen vtorhonen commented Feb 22, 2018

As a workaround I've been using Nomad's meta stanza to control restarts. Meta keys are populated as environment variables in tasks, so whenever a meta block is changed all related tasks (or task groups) are restarted. Meta blocks can be defined at the top level of the job, per task group, or per task.

For example, to restart all tasks in all task groups you could run this:

$ nomad inspect some-job | \
jq --arg d "$(date)" '.Job.Meta={restarted_at: $d}' | \
curl -X POST -d @- nomad.service.consul:4646/v1/jobs

This honors the update stanza as well.
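
A variant of the same trick scoped to a single task group rather than the whole job; this is just a sketch, with the group index, job name, and address being examples:

$ nomad inspect some-job | \
jq --arg d "$(date)" '.Job.TaskGroups[0].Meta={restarted_at: $d}' | \
curl -X POST -d @- nomad.service.consul:4646/v1/jobs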


@maihde maihde commented Mar 2, 2018

I have made a first pass at implementing this; you can find my changes here.

Basically, I've added a -restart flag to nomad run. For example:

nomad run -restart myjob.nomad

When the -restart flag is applied it triggers an update, the same as if you had changed the meta block, so you get the benefits of canaries and rolling restarts without having to actually change the job file.

If there is agreement that this implementation is going down the right path, I will go to the trouble of writing tests and making sure it works for the system scheduler, parameterized jobs, etc.


@jovandeginste jovandeginste commented Mar 2, 2018

Why not implement this without the need for a plan? Basically, nomad restart myjobname (which should use the current plan)

As a sysop, I sometimes need to force a restart of a job, but I don't have the plan (and don't want to go through nomad inspect | parse)


@rkettelerij rkettelerij commented Mar 2, 2018

Agreeing with @jovandeginste here. A restart shouldn't need a job definition in my opinion, since the job definition is already known inside Nomad.


@jovandeginste jovandeginste commented Mar 2, 2018

I do see the case for re-submitting an existing job with a plan that may or may not have changed, while always forcing a restart (of the whole job) on submission. So both are interesting options.


@marcosnils marcosnils commented Aug 17, 2018

It's not a bad idea! In the meantime, if you just want to restart the job you can stop it and then run it again.

@dadgar Is there a way to do this without incurring downtime? Stopping and running the job won't honor the update stanza.


@maihde maihde commented Aug 17, 2018

@marcosnils the workaround I've used is placing something in the meta stanza that can be changed as described in this post.

#698 (comment)

Of course this is kinda annoying, hence the reason I made the pull request that adds the restart behavior directly.


@upccup upccup commented May 16, 2019

Hope it's coming soon.


@camerondavison camerondavison commented Jun 6, 2019

looks like #5502 is out for allocs 🎉


@tgross tgross commented Nov 11, 2019

Doing some issue cleanup: this was released in Nomad 0.9.2. https://github.com/hashicorp/nomad/blob/master/CHANGELOG.md#092-june-5-2019
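
For reference, a minimal example of the allocation-level restart that shipped in 0.9.2 (the allocation ID and task name are placeholders):

# restart all tasks of one allocation in place
nomad alloc restart c1b2a3d4
# restart only a single task within that allocation
nomad alloc restart c1b2a3d4 my-task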

@tgross tgross closed this Nov 11, 2019

@multani multani commented Nov 11, 2019

@tgross Actually ... not exactly (unless I missed something!)

Although it's cool to be able to restart specific allocations (which was in 0.9.2), it would be very cool if there was a simple way to restart all the allocations, while ensuring the restart/upgrade properties of the job.

In our case, I think almost every time we had to force restart a particular allocation, it was because all of them were actually stuck in some kind of buggy behavior and we ended up restarting all of them anyway. It can definitely be scripted, but it would also make sense (IMO!) in terms of "UI" (web or CLI) to have something simple to restart the whole job 👍
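
One way it can be scripted today on top of the alloc-level restart from 0.9.2; this is only a sketch (the job name is an example, and the fixed sleep is a crude stand-in for the update stanza's health checks):

# rolling restart: restart each running allocation of the job, one at a time
nomad job status my-job | grep -E 'run\s+running' | awk '{print $1}' | \
  while read -r alloc_id; do
    nomad alloc restart "$alloc_id"
    sleep 30   # crude pacing between allocations
  done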


@rkettelerij rkettelerij commented Nov 11, 2019

In our case, I think almost every time we had to force restart a particular allocation, it was because all of them were actually stuck in some kind of buggy behavior and we ended up restarting all of them anyway. It can definitely be scripted, but it would also make sense (IMO!) in terms of "UI" (web or CLI) to have something simple to restart the whole job 👍

I second that.


@tgross tgross commented Nov 11, 2019

Fair enough. I'll re-open. And although re-running nomad job run ends up restarting all the allocations, it isn't quite the same as it reschedules them as well.

@tgross tgross reopened this Nov 11, 2019

@joec4i joec4i commented Jan 3, 2020

Fair enough. I'll re-open. And although re-running nomad job run ends up restarting all the allocations, it isn't quite the same as it reschedules them as well.

I just want to mention that nomad job run would only re-deploy canaries if canary deployments are enabled and there are canary allocations. It'd be great if a job-level restart were supported.


@analytically analytically commented Jan 7, 2020

I'd second this - I run (stateful) Airflow as Docker containers (web, workers, scheduler) where the DAG files are mounted as volumes (using the artifact stanza), and we'd like to restart all allocations from our CI upon a git push.


@taiidani taiidani commented Apr 25, 2020

I ran into this problem because I'm using the "exec" driver and SSHing subsequent updates to my binary. Sending another run won't restart the process because the job specification hasn't changed.

Would love a run -restart option so that I don't need to build 2 separate workflows for initial provision + subsequent code deploys!


@sbrl sbrl commented Aug 12, 2020

Just ran into this issue too. My use case is that I want to restart jobs/tasks in order to update to a newer version of a Docker container.

For context, I'm attempting to set up the following workflow (a sketch follows the list):

  1. Check to see if the container needs rebuilding
  2. If necessary, rebuild the Docker container
  3. If the Docker container was rebuilt, restart all dependent Nomad jobs/tasks
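
A sketch of that workflow as a CI step, assuming Nomad >= 0.9.2 for nomad alloc restart; the image name, job list, and rebuild check are hypothetical placeholders:

#!/usr/bin/env bash
set -euo pipefail

if ./needs-rebuild.sh my-app; then                      # hypothetical check for step 1
  docker build -t registry.example.com/my-app:latest .  # step 2: rebuild the image
  docker push registry.example.com/my-app:latest
  for job in my-app-web my-app-worker; do               # step 3: bounce dependent jobs
    nomad job status "$job" | grep -E 'run\s+running' | awk '{print $1}' | \
      xargs -n 1 nomad alloc restart
  done
fi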

@yishan-lin yishan-lin commented Aug 31, 2020

On our radar - thanks all for the input!


@scorsi scorsi commented Oct 30, 2020

Can we hope to see this feature implemented in Nomad 1.0? :)


@mxab mxab commented Nov 9, 2020

I currently do:

nomad job status my-job-name | grep -E 'run\s+running' | awk '{print $1}' | xargs -t -n 1 nomad alloc restart

Use ... | xargs -P 5 ... to run 5 restarts in parallel.


@geokollias geokollias commented Nov 30, 2020

Any update on this issue would be great! Thank you!


@Oloremo Oloremo commented Dec 1, 2020

looking forward to this as well


@datadexer datadexer commented Dec 2, 2020

same here!


@OmarQunsul OmarQunsul commented Jan 7, 2021

I am also surprised this feature doesn't exist. In Docker Swarm, for example, there is docker service update --force SERVICE_NAME.
I was expecting something under the job command, like nomad job restart, that restarts each alloc without downtime for the whole job.

@tgross tgross added this to Needs Roadmapping in Nomad - Community Issues Triage Feb 12, 2021

@jpasichnyk jpasichnyk commented Feb 24, 2021

+1 for this feature. We just moved to Nomad 1.x and are trying to move to the built-in Nomad UI (from HashiUI - https://github.com/jippi/hashi-ui), and having the ability to restart a job from there would be great. Sometimes we have application instances that go unhealthy from a system perspective but are still running fine in Docker. In this case we don't want to force restart them, since depending on the reason they are unhealthy they may not be able to safely restart. Restarting the whole job via a rolling restart is a great way to fix this state, but there is no way for us to do it other than building a new container version and promoting a new job over the existing one (even if the bits being deployed are identical). HashiUI can restart via a rolling restart or a stop/start. The Nomad UI and CLI should support this as well.

@tgross tgross removed this from Needs Roadmapping in Nomad - Community Issues Triage Mar 3, 2021

@thatsk thatsk commented Apr 27, 2021

Is this added in the Nomad UI, or still in the planning phase?


@stupidlamo stupidlamo commented May 28, 2021

+1 to this feature. We really need to shut down hashi-ui and use only the native Nomad UI, but we can't due to the unavailability of rolling restarts.


@kunalsingthakur kunalsingthakur commented Jul 29, 2021

Yeah @tgross, there is a situation where a container depends on Consul key-value data: if we update a key's value in Consul and then restart our service, it will populate the new values into our container. So we really think this needs to be added in the Nomad UI so we can get rid of HashiUI and don't need to maintain two UIs for Nomad.


@kunalsingthakur kunalsingthakur commented Jul 29, 2021

Should we assume this is on the roadmap?
