
Enforce number of concurrent Jobs running at the same time #1004

Open
dgarros opened this issue Oct 15, 2021 · 6 comments
Labels
type: feature Introduction of substantial new functionality to the application

Comments

@dgarros
Contributor

dgarros commented Oct 15, 2021

Environment

  • Python version: 3.7
  • Nautobot version: 1.1.3

Proposed Functionality

As Austin, the network automation engineer, I want to be able to create a Nautobot Job that can be executed only once at a time.
If a second execution of the same Job is requested while it is already running, the new request should be queued until the first one finishes.

There are a few other options that could be interesting to explore, though I'm not sure what is possible within the existing framework: terminate the existing Job early and start the new one, or allow N concurrent executions.

Use Case

Some Jobs access shared resources, and if multiple instances run at the same time it could result in inconsistencies.

Database Changes

Probably not

External Dependencies

No

@glennmatthews
Contributor

Not just Job-specific - we should probably have similar enforcement for Git repository sync/refresh as well.

@glennmatthews glennmatthews added type: feature Introduction of substantial new functionality to the application group: automation labels Oct 15, 2021
@itdependsnetworks
Contributor

Not to feature creep here, but job isolation tied to inventory would be great as well.

@jathanism
Contributor

This might be possible with a simple Redis lock and a check on task start.

Also worth looking at for inspiration: https://github.com/steinitzu/celery-singleton
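A minimal sketch of that "Redis lock plus a check on task start" idea, assuming a redis-py style client. The helper names and the `FakeRedis` stand-in are illustrative only, not Nautobot or celery-singleton API:

```python
import uuid

def acquire_job_lock(redis_client, job_name, timeout=4 * 3600):
    """Try to take a per-job lock; return a token if acquired, else None."""
    token = str(uuid.uuid4())
    # SET key value NX EX: only succeeds if the key is absent, and the key
    # expires automatically so a crashed worker cannot wedge the lock forever.
    if redis_client.set(f"job-lock:{job_name}", token, nx=True, ex=timeout):
        return token
    return None

def release_job_lock(redis_client, job_name, token):
    """Release the lock only if this worker still owns it."""
    key = f"job-lock:{job_name}"
    if redis_client.get(key) == token:
        redis_client.delete(key)

class FakeRedis:
    """Tiny in-memory stand-in so the sketch can run without a Redis server."""
    def __init__(self):
        self.store = {}
    def set(self, key, value, nx=False, ex=None):
        if nx and key in self.store:
            return None
        self.store[key] = value
        return True
    def get(self, key):
        return self.store.get(key)
    def delete(self, key):
        self.store.pop(key, None)

r = FakeRedis()
token = acquire_job_lock(r, "device-sync")
assert token is not None                           # first run acquires the lock
assert acquire_job_lock(r, "device-sync") is None  # second run is refused
release_job_lock(r, "device-sync", token)
assert acquire_job_lock(r, "device-sync") is not None  # lock is free again
```

The expiry on the key matters: without it, a worker dying mid-run would block the Job forever.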

@jathanism
Contributor

Digging into this is tricky because of how we rely on the run_job Celery task to be the central way that Jobs are executed. We're not using a custom BaseTask subclass to dictate any behaviors. The celery-singleton package provides its own Singleton base task class, but that's too far in the other direction. There isn't a way to dynamically select the base task class used on a decorated task function, since this has to be decided at registration time.

Ideas:

  • We might be able to take advantage of subtasks here?
  • Subclass the Singleton task class to make it only "dynamically" assert uniqueness when a certain flag is set?
  • Call a different task other than run_job somehow?
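One reading of the second idea above — assert uniqueness only when a flag is set — sketched here without Celery, as a plain decorator over an in-memory registry. All names are hypothetical; this is not the celery-singleton API, just the shape of the flag-gated check:

```python
import threading

_running = set()
_guard = threading.Lock()

def singleton_when_flagged(func):
    """Enforce 'only one at a time' for func, but only when the caller
    passes is_singleton=True; unflagged calls behave exactly as before."""
    def wrapper(*args, is_singleton=False, **kwargs):
        if not is_singleton:
            return func(*args, **kwargs)
        with _guard:
            if func.__name__ in _running:
                raise RuntimeError(f"{func.__name__} is already running")
            _running.add(func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            with _guard:
                _running.discard(func.__name__)
    return wrapper

@singleton_when_flagged
def run_job(data):
    return f"ran with {data}"

assert run_job("a", is_singleton=True) == "ran with a"  # lock taken and released
assert run_job("b") == "ran with b"                     # unflagged: no check
```

In a real worker fleet the `_running` set would have to live in Redis rather than process memory, which is where this meets the lock sketch above.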

@Kircheneer
Contributor

This is something that could be really useful for longer-running SSoT jobs. We have been facing issues where those have been accidentally run twice - best case, one of them errors out; worst case, the outcome differs from what a single run would have produced.

@bryanculver bryanculver modified the milestones: v2.0.0, v2.2.0 Nov 3, 2022
@bryanculver bryanculver modified the milestones: v2.2.0, v2.1.0 May 9, 2023
@lampwins lampwins removed this from the v2.1.0 milestone Nov 3, 2023
@Kircheneer
Contributor

My implementation idea would be different from what Jathan had in mind when he first tackled this:

  • stick a Meta class option on the job called is_singleton or similar
  • stick a job_singleton_timeout variable into the config, default it to something like 4 hours
  • at the beginning of run, if this is True, read or set a redis key parametrized by the job name/slug/natural key
    • if already set, error out the job
    • if not set, set it
  • at the end of run / the job execution (have to look into how this is handled for 2.x), unset the redis key
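The steps above could look roughly like this. A sketch only: `Meta.is_singleton` and `JOB_SINGLETON_TIMEOUT` follow the naming in the comment, a dict-backed stand-in replaces the real Redis client, and `execute()` stands in for the actual Job body:

```python
JOB_SINGLETON_TIMEOUT = 4 * 3600  # config default suggested above: 4 hours

class FakeRedis:
    """Dict-backed stand-in for a redis-py client, for illustration only."""
    def __init__(self):
        self.data = {}
    def set(self, key, value, nx=False, ex=None):
        if nx and key in self.data:
            return None
        self.data[key] = value
        return True
    def delete(self, key):
        self.data.pop(key, None)

class SingletonJob:
    class Meta:
        is_singleton = True  # the proposed Meta class option

    def __init__(self, redis_client):
        self.redis = redis_client

    def run(self):
        key = f"singleton:{type(self).__name__}"  # parametrized by job name
        acquired = False
        if getattr(self.Meta, "is_singleton", False):
            # Already set -> another instance is running: error out the job.
            if not self.redis.set(key, "1", nx=True, ex=JOB_SINGLETON_TIMEOUT):
                raise RuntimeError(f"{type(self).__name__} is already running")
            acquired = True
        try:
            return self.execute()
        finally:
            if acquired:            # unset the key at the end of execution
                self.redis.delete(key)

    def execute(self):
        return "done"

r = FakeRedis()
job = SingletonJob(r)
assert job.run() == "done"
assert job.run() == "done"  # key was released between sequential runs
```

The `ex=` expiry is what makes the `job_singleton_timeout` setting matter: if a worker dies before the `finally` block runs, the key still goes away on its own.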

