First of all, great job with the project. It's awesome.
Now, to get to my problem. I'm using RQ in a high-throughput system, with hundreds of thousands of jobs/day. Even though we have a `result_ttl` set up and the keys for the jobs themselves get cleared out, the `rq:finished:low` ZSET which contains the ids of finished jobs does not seem to be cleared. As a result, since we put the system in production (4 days ago), it has grown to ~3M entries and keeps growing. This is starting to use a lot of memory and has a visible effect on performance.
You can clean this up by calling `finished_job_registry.cleanup()`. We should also create an `rq cleanup` management script that can be called periodically to run cleanup jobs on all registries.
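To illustrate what `cleanup()` does, here is a minimal in-memory sketch of the `rq:finished:<queue>` registry. In Redis it is a ZSET whose score is each job's expiry timestamp, and cleanup effectively removes every entry scored at or before "now" (the real call would look roughly like `FinishedJobRegistry(queue=queue).cleanup()`; the dict-based model and the `cleanup` helper below are assumptions for illustration, not RQ's actual implementation):

```python
import time

def cleanup(registry: dict, now: float) -> dict:
    """Model of the registry sweep: drop job ids whose expiry score
    is at or before `now`, the way RQ trims rq:finished:<queue>."""
    return {job_id: expiry for job_id, expiry in registry.items()
            if expiry > now}

now = time.time()
registry = {
    "job-1": now - 60,    # result_ttl already elapsed: should be removed
    "job-2": now + 3600,  # still within its result_ttl: should be kept
}
registry = cleanup(registry, now)
print(sorted(registry))  # only the unexpired job id remains
```

Running a sweep like this periodically (e.g. from a cron job over every queue's registries) keeps the ZSET bounded instead of growing with every finished job.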