Job never requeued after raising unhandled error with until_and_while_executing? #322
Unless you specify `retry: false`, Sidekiq will retry the job, and a duplicate running alongside that retry is a situation you would want to avoid, so the locks need to stay in the case of an error. Most people do not disable retries (since it needs to be done explicitly). If you want to clear the lock when the job dies, please see: https://github.com/mhenrixon/sidekiq-unique-jobs#cleanup-dead-locks
As you can see, I did specify `retry: false`. The case I'm concerned about is when a job throws an exception (due to external reasons), then the external reasons are fixed, and the job is resubmitted, but rejected (due to a lock?). I don't think the lock ought to be held if the job throws an exception in this case, because the implication of `retry: false` is that the job will never be retried automatically.

I understand that the concern is that the lock should be kept if Sidekiq automatically retries the job; however, with `retry: false` there is no automatic retry.

Also, this is how v5 worked, so I consider this change a regression. Here's the behavior with 5.0.10:

```ruby
class TestWorker
  include Sidekiq::Worker

  sidekiq_options retry: false,
                  unique: :until_and_while_executing, # <-- switched back to `unique`
                  log_duplicate_payload: true

  def perform(args)
    warn "executed #{jid} with #{args}"
    raise "oops: #{jid}"
  end
end
```
Edit 2: Back on 6.0.6, I tried adding a death handler to manually clean up the job per your recommendation; however, it does not seem to be called when the job dies. (Should this be a separate GH issue?)

Here's my initializer, which I have confirmed is otherwise working. I expect to see something in the logs or to hit my breakpoint, but nothing seems to happen.
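For reference, the death-handler approach from the README section linked earlier looks roughly like this. This is a sketch, not the reporter's actual initializer, and it assumes the v6-era `unique_digest` job key and `SidekiqUniqueJobs::Digests.delete_by_digest` API:

```ruby
# config/initializers/sidekiq.rb (hypothetical placement)
Sidekiq.configure_server do |config|
  # Death handlers run when a job exhausts its retries (or has retry: false)
  # and is moved to the dead set.
  config.death_handlers << ->(job, _exception) do
    digest = job['unique_digest'] # key name assumed from the v6 README
    SidekiqUniqueJobs::Digests.delete_by_digest(digest) if digest
  end
end
```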
@aharpervc it is not a regression, it is a bug that got fixed. If you read my initial reply, you will see that the only reason your scenario worked the way you wanted in v5 was that any number of duplicates was allowed in those same scenarios. The lock was ALWAYS removed, even when the job was retried, meaning an unlimited number of copies of the same job could potentially execute simultaneously. Surely you can't mean fixing that bug is a regression?
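The two behaviors under discussion can be modelled in a few lines of plain Ruby. This is a deliberately simplified sketch of the lock lifecycle; the class and method names are illustrative, not sidekiq-unique-jobs internals:

```ruby
# Simplified model of "until_and_while_executing" style locking: the lock is
# taken at enqueue time and, in the fixed (v6) behavior, only released after
# a successful run. A failed run keeps the lock, so a resubmitted duplicate
# is rejected until the lock is cleaned up (e.g. by a death handler).
class LockedQueue
  def initialize
    @locks = {} # digest => true while a job with that digest holds the lock
    @runs  = [] # record of jobs that actually executed
  end

  attr_reader :runs

  # Returns true if the job was accepted, false if rejected as a duplicate.
  def push(digest)
    return false if @locks[digest]
    @locks[digest] = true
    true
  end

  # Runs the job body; the lock is released only on success.
  def perform(digest)
    @runs << digest
    yield
    @locks.delete(digest)
  rescue StandardError
    # lock intentionally kept on failure (the v6 behavior)
  end

  # What a death handler would do: clear the stale lock explicitly.
  def delete_lock(digest)
    @locks.delete(digest)
  end
end

queue = LockedQueue.new
queue.push("job-1")                     # accepted
queue.perform("job-1") { raise "oops" } # job runs and fails; lock is kept
p queue.push("job-1")                   # => false, duplicate rejected
queue.delete_lock("job-1")              # manual cleanup, as a death handler would
p queue.push("job-1")                   # => true, accepted again
```

The v5 behavior the maintainer describes corresponds to releasing the lock in the rescue branch as well, which is exactly what allows unlimited simultaneous duplicates.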
Ha, I'm never going to be mad about fixed bugs. I meant that in v6 [...]. I have a proposal [...].
This is with sidekiq 5.1.3 & sidekiq-unique-jobs 6.0.6
**Describe the bug**

It appears that if a worker throws an exception with `lock: :until_and_while_executing`, it won't ever be re-queued.

**Expected behavior**

I'm not 100% sure of my expectations, but I think I would expect the lock to be released if the job fails, so it can be re-run later.

**Current behavior**

`TestWorker.perform_async` initially is queued, runs, and throws an exception, but subsequent calls do not run the job.

**Worker class**
**Additional context**

In a `rails c` with a clean Redis instance and the above job, run TestWorker twice (precise timing doesn't matter). Sidekiq log:
Note how the first attempt (correctly) throws the exception, but the second attempt never gets that far.
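Spelled out as console input, the reproduction is just two enqueues. This is a sketch of the session; it assumes the `TestWorker` class posted in the comment above, and the behavior noted in the comments is the reported one, not verified output:

```ruby
# In `rails c`, against a clean Redis instance:
TestWorker.perform_async # first call: job executes and raises "oops: <jid>"
TestWorker.perform_async # second call: reported as never executing again
```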