Force index for reserved_at field on db queue driver #449
Conversation
Please add a line to the CHANGELOG. Thanks.
👍
I'm getting a few deadlocks since this PR was released.
@gustavovendramini, I'm not sure that this deadlock is related to the changes introduced in this PR. It only forces the DB to use an index; it doesn't introduce any additional queries or touch any columns beyond those that were used before. Can you check whether the deadlocks disappear with the downgraded version (2.3.3)?
@erickskrauch I'll check it: I'll downgrade in production, wait a few days to see how it behaves, and then report back here.
@erickskrauch I'm late, but I did the downgrade to 2.3.3 and the deadlocks have stopped. So it's related either to this improvement or to my infrastructure. I'm running the queue via crontab, and my job has methods to retry failed jobs; the sketch below shows roughly what that setup looks like.
@gustavovendramini @samdark
@nadar Do you keep finished jobs in the queue?
I have to admit, @rob006, I don't know. I don't think so. We just use the queue as the example configuration provides. The driver is the DB queue, and the table no longer contains those jobs after processing. So I believe: no. Or could you give me an example of what it would look like if we had finished jobs in the queue?
I'm not sure why adding an index hint would cause deadlocks.
Hi, we have downgraded as well, but the error keeps appearing every 15 to 30 minutes.
In the production environment, we encountered a situation where the worker didn't run for a while, but jobs kept being pushed to the queue. By the time the problem was fixed, there were already over 300k jobs in the queue. We started the queue, but it was very slow. With the help of a profiler, we discovered that the query that releases unfinished jobs was taking the longest time, and since it runs inside the mutex lock, it prevented us from parallelizing the workers. After investigating the query's behavior on MariaDB, I came up with the solution of adding a simple pre-filter that forces the database to apply it first, and only then apply the more complex conditions to the remaining set.
We tested this solution in production, and now everything works for large data volumes.