jobstore table not purged when remove_all_jobs() is called by a new run of the scheduler #151
After some tests, it appears that even when print_jobs() returns nothing, the remaining jobs from old scheduler runs will still execute! So it seems that remove_all_jobs() does not work, and APScheduler behaves like option #2, except that the old jobs don't appear even though they are going to be executed, and they cannot be removed with remove_all_jobs().
Are you sure you're starting the scheduler before running either method?
Thank you for your answer. I use a blocking scheduler, so I cannot execute code after starting the scheduler. Are you saying that I cannot remove remaining jobs before starting the scheduler? So yes, I call both methods before starting the scheduler. Isn't that how it should be done?
Both methods only affect pending jobs then, as the job stores haven't been started yet.
Understood: the scheduler is configured but only connects to the jobstore when it starts. In case of failure, I would like to be able to drop all jobs from the job store before starting it; otherwise a job may start immediately, which I don't want. Is there a way to do this? I use Heroku and am compelled to use the blocking scheduler, so after I start it there's no way to execute code, other than starting a job which itself deletes jobs from the jobstore. A workaround would be to drop the table directly with SQLAlchemy, but I'm sure there's a more clever way.
Could you not use a background scheduler, start it paused, work with the job stores, and then resume processing on the scheduler and go into a sleep loop?
Yes definitely, will try that. |
Dear all,
I have noticed the following behaviour: the PostgreSQL jobstore table is not emptied when scheduler.remove_all_jobs() is called by a new run of the scheduler. Yet the list of jobs to run is empty: calling print_jobs() with the same jobstore confirms there is no job to run.
So what I understand from this: if my scheduler dies for any reason, I lose the jobs that have not been run yet, but the number of rows in the apscheduler table keeps growing, no longer reflecting the current state of the scheduler.
I would expect one of the two following behaviours:
either the apscheduler table is a reflection of the jobs that remain to be run, and does not keep remnants of old runs of the scheduler;
or, as we have a persistent jobstore, restarting the scheduler on that jobstore restores the jobs that have not been run yet, and adds new jobs to this list of already existing jobs.
The current behaviour fills the table with both runnable and non-runnable jobs, and if the scheduler is restarted many times, this leads to a table full of unrunnable jobs that does not reflect the real state of the scheduler, with an ever-growing number of rows.
What do you think about this ?
Best regards