Do keys generated by bull expire? #261
Hi guys,
I'm using Bull in a personal project. After running for a while, I noticed Bull generated a lot of keys and Redis started hogging lots of RAM. Will these keys expire?
Thanks!

Comments
At the moment there are no keys that require any sort of expiration. It is true that Bull will quickly accumulate keys in Redis if you don't regularly clean jobs. Are you using Queue##clean at all?
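For reference, a minimal sketch of what such a cleanup call can look like (the queue name and grace period here are placeholders):

```js
const Queue = require('bull');

// Placeholder queue name; reuse your existing queue instance in practice.
const queue = new Queue('my-queue');

// Remove completed jobs older than one hour (the grace period is in milliseconds).
queue.clean(60 * 60 * 1000, 'completed').then(function (removed) {
  console.log('Removed', removed.length, 'completed jobs');
});
```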
It may be worth implementing an expire option for completed (and failed) jobs, so the user is free to configure whether they want automatic key expiration after a certain optional time, or no expiration at all.
That shouldn't be too difficult. I'll get back into it and see if I can do this cleanly.
Nice!
Any update here? I saw no expire option in the latest reference, and stale keys in Redis are still eating my RAM even with Queue##clean.
@CCharlieLi why don't you use the option
Thanks for the reminder @manast, I guess it's the same as
@CCharlieLi you should be able to instantiate the old queue and then call clean on it.
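A rough sketch of that approach (the queue name and Redis URL are placeholders):

```js
const Queue = require('bull');

// Re-create a queue object pointing at the old queue's name, just to clean it up.
const oldQueue = new Queue('old-queue-name', 'redis://127.0.0.1:6379');

Promise.all([
  oldQueue.clean(0, 'completed'),
  oldQueue.clean(0, 'failed'),
  oldQueue.clean(0, 'wait'),
  oldQueue.clean(0, 'delayed'),
]).then(function () {
  return oldQueue.close();
});
```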
It would be very useful to be able to set a TTL, for example in an environment where nodes go up and down all the time, with unique queue names used for inter-server communication. For example: two AWS spot instances communicate with each other, each instance using a uniquely named queue based on its local instance id. Both instances get shut down at the same time by AWS. The remaining data in Redis never expires, unless you manually keep track of instance (or queue) names for cleanup. (edited for a better example)
In my use case removeOnComplete isn't an ideal option, because I would like to keep track of which records have completed within a given time interval. For example: I remove all completed jobs after 2 weeks, because by then I have seen and acknowledged that they completed. It brings peace of mind to know jobs are being completed, rather than purging them immediately after they complete.
See my implementation here:
Several issues:
https://redis.io/topics/notifications#timing-of-expired-events
Implementation to expire keys for a job's Redis key (see the implementation linked above).
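As a rough illustration of that idea (not the linked implementation): give a completed job's hash key a TTL through the queue's Redis client. This assumes Bull's `queue.client` and `queue.toKey()` helpers and an arbitrary one-hour retention:

```js
queue.on('completed', function (job) {
  // queue.toKey(job.id) builds the "bull:<queueName>:<jobId>" key for the job hash.
  // One hour is an arbitrary retention period for illustration.
  queue.client.expire(queue.toKey(job.id), 60 * 60);
});
```

Note that this only expires the job hash itself; the job id still sits in Bull's completed set, which is the limitation discussed further down in the thread.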
@nidhhoggr glancing at your implementation, I have immediate concerns about jobId being expected and cast to a number/integer; my understanding is that jobId can also be a string.
Disregard that, I misread it.
@ritter Disregard what? These are both valid concerns; I was unaware there was a possibility of jobId being a string with non-numeric keys. Having a string as the key means the jobId must be confirmed to be an actual job entry of type hash, because bull has other keys following the same convention.

```js
client.psubscribe(_this.toKey('expired'), function(err, result) {
```

The event handler also needs error checking as well. In conclusion, my implementation isn't PR-ready, but feel free to provide suggestions and/or take from it what you will to submit your own PR.
Actually, regarding the expired-keys event listener, my example above would not work, because the keyspace event looks like this:

```
"pmessage","__key*__:*","__keyevent@0__:expired","bull:updateCauseTotalSavings:stalled-check"
```

When I attempted to listen with `client.subscribe("__keyevent@0__:expired")` I wasn't getting anything, so I changed it to the following with success:

```js
client.psubscribe("*:expired")
```
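For completeness, a small sketch of such a listener using ioredis (the Redis driver Bull uses); it assumes keyspace notifications are enabled (`notify-keyspace-events` including `Ex`) and uses a dedicated connection, since a connection in subscriber mode cannot run other commands:

```js
const Redis = require('ioredis');

// Dedicated connection for subscriptions.
const sub = new Redis('redis://127.0.0.1:6379');

// Match "expired" keyevent notifications on any database.
sub.psubscribe('__keyevent@*__:expired');

sub.on('pmessage', function (pattern, channel, expiredKey) {
  // expiredKey looks like "bull:<queueName>:<jobId>"; filter out Bull's other
  // keys (e.g. "bull:<queueName>:stalled-check") before doing any cleanup.
  if (expiredKey.startsWith('bull:')) {
    console.log('Bull key expired:', expiredKey);
  }
});
```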
This feature is really missing. For now, the only option to automate cleaning up the queue is to:
This is a tedious task. I'd suggest adding an option to
@sarneeh I will try to improve this in the following days.
@manast But don't feel like you have to. It's open source; if I needed it badly I'd do it myself. It's just a suggestion if you'd like to improve it somehow 😃 Your work already saves a dozen hours of building such a solution.
I need it myself in some project :).
It is not really expiring all the keys, though; that would be a completely different story. We may implement that in the future for ephemeral queues.
https://github.com/OptimalBits/bull/blob/develop/REFERENCE.md#queueadd

```js
realtimeCheckTaskQueue.add({}, {
  // keep the latest 1000 jobs
  removeOnComplete: 1000
})
```
I also believe that this is a legitimate use case. It happened to us as well: we use Bull to avoid storing jobs locally when running on AWS nano instances, where memory is limited anyhow. Jobs are only for internal instance use, so when the application is scaled up and down by AWS, and the queue name is derived from the instance, there are entries that remain forever in Redis, not linked to anything. I believe an expires option makes sense; however, it might not be the most optimal solution for the cleanup either. We already use removeOnComplete; however, there might still be some jobs left in Redis in this use case.
In my case, I use Bull to run CronJob agents. In some cases, I may have too many agents being created for the number of tasks Bull can handle simultaneously. In that case, I would like the jobs that have been in the queue for too long to simply be deleted. Normally you can do this easily with Redis by playing with key expiration (EXPIRE), and I would like to be able to write something like:

```js
realtimeCheckTaskQueue.add({}, {
  // automatically remove the job after 10 minutes (600000 ms)
  expire: 600000
})
```

Edit: to work around my problem, I added this code to my Express app:

```js
const cleanInterval = setInterval(async () => {
  await Promise.all([
    realtimeCheckTaskQueue.clean(5 * 60 * 1000, 'wait'),
    realtimeCheckTaskQueue.clean(5 * 60 * 1000, 'delayed'),
  ]);
}, 60000);
```

Every minute, I call Bull to clean the waiting and delayed jobs older than 5 minutes.
@throrin19 that is basically what would be needed by Bull internally, since there is no way to automatically expire the jobs given that they live in different sets and lists.
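To see this concretely, here is a quick diagnostic sketch that lists every structure Bull keeps for one queue (it assumes `queue.client` and `queue.toKey()` as above; KEYS is fine for a quick look, but avoid it in production):

```js
// Lists every key belonging to this queue: job hashes plus the wait/active/
// completed/failed/delayed lists and sets that also reference job ids.
queue.client.keys(queue.toKey('*')).then(function (keys) {
  console.log(keys);
});
```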
I got it working as well via this method. However, when adding the option on the queue itself it doesn't work. What I did was specify
I also got this issue.
See further discussion and the actual fix in #2265.