The old instructions to delete jobs in order to allow new jobs to run can cause all sorts of issues. Notably, deleting a job can delete a Kingfisher Process collection or Pelican dataset while work is still queued in RabbitMQ, causing hundreds of thousands of errors to be reported to Sentry.
Instead, we can add a process for manually scheduling a job, and delete those instructions. There are two scenarios to consider:
Last job is complete, but we want to run a new job sooner than scheduled.
We can check that the collection isn't already `is_out_of_date` (to avoid duplicate scheduling by `manageprocess`, e.g. if an admin unfreezes a publication and is in a rush to schedule it rather than waiting 5 minutes), and then create the job. (This scenario also checks that no job is running.)
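A minimal sketch of the guard described above. The `Collection`, `Job`, `is_out_of_date`, and `schedule_job_manually` names are hypothetical, not the project's actual models or API:

```python
from dataclasses import dataclass, field

# Hypothetical models standing in for the project's real ones.
@dataclass
class Job:
    is_running: bool = False

@dataclass
class Collection:
    is_out_of_date: bool = False
    jobs: list = field(default_factory=list)

def schedule_job_manually(collection):
    """Create a new job for the collection, guarding against duplication."""
    # If the collection is already out of date, the periodic scheduler
    # (manageprocess) will create a job anyway; creating one here would
    # duplicate it.
    if collection.is_out_of_date:
        return None
    # Never schedule while a job is running: its Kingfisher Process
    # collection or Pelican dataset may still have work queued in RabbitMQ.
    if any(job.is_running for job in collection.jobs):
        return None
    job = Job(is_running=True)
    collection.jobs.append(job)
    return job
```

The point of the sketch is that both checks happen before the job is created, so nothing is ever deleted out from under queued work.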
- Update the docs that refer to this issue (#350) and surrounding text.
- Add relevant tests first (#128).
Blocked by #352