
scheduler(system): Fix potential panic in deployment handling.#27571

Merged
jrasell merged 2 commits into main from b-NMD-1266
Mar 4, 2026

Conversation

@jrasell
Member

@jrasell jrasell commented Feb 23, 2026

When a system job deployment is successful and a task group has no feasible candidate nodes, the task group's deployment state is set to nil in the mapping. Because the job has multiple task groups and at least one results in successful placements, the deployment is persisted to state; a subsequent evaluation, likely triggered by a recovering node, will then look up the previous deployment state. It was at this point that the scheduler did not correctly handle the nil object.

Another option could be to ensure the state write never includes a nil deployment object for a task group. It would require more work in upsert handling, so I feel this is the right approach as it's defensive and will always work.
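To illustrate the defensive approach described above, here is a minimal sketch of guarding a per-task-group deployment state lookup against both a missing key and an explicitly nil entry. The type and field names here are illustrative stand-ins, not Nomad's actual structs.

```go
package main

import "fmt"

// DeploymentState is a simplified stand-in for Nomad's per-task-group
// deployment state; the fields are illustrative only.
type DeploymentState struct {
	DesiredTotal int
	PlacedAllocs int
}

// Deployment maps task group names to their (possibly nil) state.
type Deployment struct {
	TaskGroups map[string]*DeploymentState
}

// lookupState returns the previous deployment state for a task group,
// treating a missing key and an explicit nil entry the same way. The nil
// entry is the case described in this PR: a task group with no feasible
// candidate nodes has its state persisted as nil.
func lookupState(d *Deployment, tg string) (*DeploymentState, bool) {
	if d == nil {
		return nil, false
	}
	state, ok := d.TaskGroups[tg]
	if !ok || state == nil {
		return nil, false
	}
	return state, true
}

func main() {
	d := &Deployment{TaskGroups: map[string]*DeploymentState{
		"web":   {DesiredTotal: 3, PlacedAllocs: 3},
		"batch": nil, // no feasible nodes: state was persisted as nil
	}}
	for _, tg := range []string{"web", "batch", "missing"} {
		if s, ok := lookupState(d, tg); ok {
			fmt.Printf("%s: desired=%d\n", tg, s.DesiredTotal)
		} else {
			fmt.Printf("%s: no prior deployment state\n", tg)
		}
	}
}
```

The point of the guard is that callers never have to distinguish "never deployed" from "deployed with no feasible nodes"; both read as absent state.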

Links

Jira: https://hashicorp.atlassian.net/browse/NMD-1266
Closes: #27567

Contributor Checklist

  • Changelog Entry If this PR changes user-facing behavior, please generate and add a
    changelog entry using the make cl command.
  • Testing Please add tests to cover any new functionality or to demonstrate bug fixes and
    ensure regressions will be caught.
  • Documentation If the change impacts user-facing functionality such as the CLI, API, UI,
    and job configuration, please update the Nomad product documentation, which is stored in the
    web-unified-docs repo. Refer to the web-unified-docs contributor guide for docs guidelines.
    Please also consider whether the change requires notes within the upgrade guide. If you
    would like help with the docs, tag the nomad-docs team in this PR.

Reviewer Checklist

  • Backport Labels Please add the correct backport labels as described by the internal
    backporting document.
  • Commit Type Ensure the correct merge method is selected which should be "squash and merge"
    in the majority of situations. The main exceptions are long-lived feature branches or merges where
    history should be preserved.
  • Enterprise PRs If this is an enterprise only PR, please add any required changelog entry
    within the public repository.

When a system job deployment is successful and a task group has
no feasible candidate nodes, the task group's deployment state is
set to nil in the mapping. If the deployment is persisted to state,
because the job has multiple task groups and at least one results
in successful placements, a subsequent evaluation likely triggered
by a recovering node will look up the previous deployment state.
It was at this point that the scheduler was not correctly handling
the nil object.
@jrasell jrasell self-assigned this Feb 23, 2026
@jrasell jrasell added the backport/1.11.x backport to 1.11.x release line label Feb 23, 2026
@jrasell jrasell requested review from schmichael and tgross February 23, 2026 11:58
@jrasell jrasell marked this pull request as ready for review February 23, 2026 11:58
@jrasell jrasell requested review from a team as code owners February 23, 2026 11:58
Member

@tgross tgross left a comment


This patch seems fine but this same code path came up in #27382 as well. It seems to me that we shouldn't be touching the deployment at all if it's terminal, and that this is where the underlying bug is. I've been looking at that block of code the last week or so and don't quite understand why we're doing that, unfortunately.

LGTM but let's chat about whether there's a more comprehensive fix too.
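The reviewer's suggestion above (not touching a terminal deployment at all) could look something like the following sketch. The type, status strings, and function name are hypothetical, loosely modeled on Nomad's successful/failed/cancelled deployment statuses, and are not the project's actual code.

```go
package main

import "fmt"

// Deployment is a simplified stand-in; real deployments carry much more state.
type Deployment struct {
	Status string
}

// terminalStatuses is an illustrative set; the exact values in Nomad may differ.
var terminalStatuses = map[string]bool{
	"successful": true,
	"failed":     true,
	"cancelled":  true,
}

// shouldTouchDeployment sketches the guard: never mutate a deployment once it
// has reached a terminal status, and treat a nil deployment as untouchable too.
func shouldTouchDeployment(d *Deployment) bool {
	return d != nil && !terminalStatuses[d.Status]
}

func main() {
	fmt.Println(shouldTouchDeployment(&Deployment{Status: "running"}))    // true
	fmt.Println(shouldTouchDeployment(&Deployment{Status: "successful"})) // false
	fmt.Println(shouldTouchDeployment(nil))                               // false
}
```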

@jrasell jrasell marked this pull request as draft February 23, 2026 15:08
tgross added a commit that referenced this pull request Feb 26, 2026
In #27533 we added write skew protection against the scheduler overwriting
deployment state written by the clients. But there's a potential edge case where
if a deployment were to drop the task group dstate entirely, we would overwrite the
existing one. It doesn't appear possible to hit this case in the scheduler
without hitting the panic described in #27571 but this is belt-and-suspenders.

Ref: #27533
Ref: #27382
Ref: #27571
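The belt-and-suspenders guard this commit describes (not letting a scheduler write erase existing client-written task group state) can be sketched as a merge that keeps the existing entry whenever the update drops or nils it. This is a hypothetical helper with illustrative types, not Nomad's actual upsert code.

```go
package main

import "fmt"

// DeploymentState is a simplified stand-in for Nomad's per-task-group
// deployment state; the field is illustrative only.
type DeploymentState struct {
	DesiredTotal int
}

// preserveTaskGroupStates merges an incoming update over existing state,
// but if the update is missing a task group's state (or carries nil for it),
// the existing state survives rather than being overwritten.
func preserveTaskGroupStates(existing, update map[string]*DeploymentState) map[string]*DeploymentState {
	merged := make(map[string]*DeploymentState, len(update))
	for tg, s := range update {
		merged[tg] = s
	}
	for tg, s := range existing {
		if cur, ok := merged[tg]; !ok || cur == nil {
			merged[tg] = s
		}
	}
	return merged
}

func main() {
	existing := map[string]*DeploymentState{"web": {DesiredTotal: 3}}
	update := map[string]*DeploymentState{"web": nil, "batch": {DesiredTotal: 1}}
	merged := preserveTaskGroupStates(existing, update)
	// "web" keeps its existing state; "batch" takes the updated one.
	fmt.Println(merged["web"].DesiredTotal, merged["batch"].DesiredTotal)
}
```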
@tgross
Member

tgross commented Feb 26, 2026

@jrasell I think #27605 and #27604 cover the remaining concerns we had about this code path, so this should be good-to-go.

tgross added a commit that referenced this pull request Feb 26, 2026
…27608)

In #27533 we added write skew protection against the scheduler overwriting
deployment state written by the clients. But there's a potential edge case where
if a deployment were to drop the task group dstate entirely, we would overwrite the
existing one. It doesn't appear possible to hit this case in the scheduler
without hitting the panic described in #27571 but this is belt-and-suspenders.

Ref: #27533
Ref: #27382
Ref: #27571

Co-authored-by: Tim Gross <tgross@hashicorp.com>
@jrasell jrasell marked this pull request as ready for review March 4, 2026 16:07
@jrasell jrasell merged commit f953716 into main Mar 4, 2026
42 checks passed
@jrasell jrasell deleted the b-NMD-1266 branch March 4, 2026 16:09

Labels

backport/1.11.x backport to 1.11.x release line


Development

Successfully merging this pull request may close these issues.

nomad 1.11.2 processing eval panicked scheduler

2 participants