Prevent score removal jobs from overlapping #11162
Conversation
from what I gather it's more like the lock is just held forever unless an expiry is set. Also, the "release" is more like sending it back to the queue?

If the lock hasn't expired yet, other workers will still run the job, immediately complete it, and remove it from the reserved queue, and it doesn't get picked up again. If the original worker gets killed by OOM and doesn't fire the error handler, the job is gone.

where did you get that? That's not what the handle function says, at least

oh, it also needs the `InteractsWithQueue` trait. It's still not great: it might be passable, but it's also still a roundabout way of having another setting that acts like `retry_after`.

it's probably still better than adding a different queue just to have a different timeout
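The trade-off above is between per-job lock settings and the queue-wide `retry_after` option. For context, `retry_after` lives in the queue connection config; this is an illustrative fragment (values are hypothetical, not osu-web's actual config), assuming the Redis queue driver:

```php
<?php
// config/queue.php (illustrative): retry_after is per-connection, so raising
// it to cover one long-running job affects every job on the connection --
// which is why a per-job lock via middleware is being discussed instead.
return [
    'connections' => [
        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => env('REDIS_QUEUE', 'default'),
            // Seconds before a reserved (in-progress) job is considered
            // stalled and handed to another worker. Must exceed the
            // longest-running job's timeout, or workers re-run jobs that
            // are still executing elsewhere.
            'retry_after' => 90,
            'block_for' => null,
        ],
    ],
];
```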
```php
public function middleware(): array
{
    return [new WithoutOverlapping($this->beatmapset->getKey(), $this->timeout, $this->timeout)];
}
```
string key?
Use `WithoutOverlapping` middleware to lock the job. There will be a bogus try added (which also gets counted as a successful run) if the job takes longer than `retry_after` to run and the job gets moved to a `:delayed` queue. `InteractsWithQueue` is needed for the actual `$job->release()` behaviour of the queue. There's still the problem that if something kills the worker before it can run any fail handlers, the job won't be retried until the timeout/lock expires, but that's no different than if `retry_after` is changed.

The issue is that workers will automatically retry reserved jobs after `retry_after` has expired even if they're still running, so the same job starts being run by multiple workers simultaneously and their attempt counts get incremented, eventually causing `MaxAttemptsExceededException` to be thrown (while the job is still running on the first worker). If the job does fail now, it won't be retried anymore.
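Putting the comment together, a minimal sketch of the pattern being described might look like the following. This is not the actual osu-web job; the class name, property names, and timeout value are illustrative. It relies on Laravel's `WithoutOverlapping` constructor, whose second and third arguments are the release delay and the lock expiry in seconds:

```php
<?php
// Sketch only: a queued job locked with WithoutOverlapping, assuming
// Laravel's queue component. Names below are hypothetical.

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class RemoveBeatmapsetScores implements ShouldQueue
{
    // InteractsWithQueue provides $this->release(); WithoutOverlapping uses
    // it to push an overlapping job back onto the queue instead of running
    // two copies concurrently.
    use InteractsWithQueue, Queueable;

    public int $timeout = 3600; // illustrative

    public function __construct(private int $beatmapsetId)
    {
    }

    public function middleware(): array
    {
        return [
            // key: one lock per beatmapset.
            // releaseAfter ($timeout): how long a released duplicate waits
            //   before being retried.
            // expiresAfter ($timeout): when the lock self-expires, covering
            //   a worker that dies without releasing it (e.g. OOM-killed).
            new WithoutOverlapping($this->beatmapsetId, $this->timeout, $this->timeout),
        ];
    }

    public function handle(): void
    {
        // ... score removal work ...
    }
}
```

The lock expiry is what bounds the worst case from the comment above: if the worker is killed before any fail handler runs, the job stays blocked only until the lock expires rather than forever.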