
waitUntilFinished hang if removeOnComplete is set without TTL #85

Closed
Embraser01 opened this issue Dec 11, 2019 · 5 comments · Fixed by #746

@Embraser01
Contributor

When using the removeOnComplete option on a job, calling waitUntilFinished has undefined behavior (unless a ttl has been set).

If it is called before the job is cleared from the queue, it works as intended, but if the job has already been removed, it just hangs indefinitely.

I think waitUntilFinished should at least warn the user when removeOnComplete is set, since the returned promise may never resolve.
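Roughly, the setup looks like this (a minimal sketch; the queue name and Redis connection are placeholders, not from this issue):

```ts
import { Queue, QueueEvents, Worker } from 'bullmq';

const connection = { host: 'localhost', port: 6379 }; // placeholder local Redis

const queue = new Queue('issue-85', { connection });
const queueEvents = new QueueEvents('issue-85', { connection });
// The worker completes jobs immediately, so removeOnComplete deletes them right away.
new Worker('issue-85', async () => 'done', { connection });

async function main() {
  await queueEvents.waitUntilReady();
  const job = await queue.add('noop', {}, { removeOnComplete: true });

  // If the worker finishes and removes the job before this call observes the
  // completion, the promise never settles (no ttl argument is given here).
  await job.waitUntilFinished(queueEvents);
  console.log('finished'); // may never be reached
}

main();
```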

@manast
Contributor

manast commented Dec 12, 2019

ok, seems like a bug to me. Would you mind providing a small snippet that reproduces the issue, which we can use as a test case?

@Embraser01
Contributor Author

Here is a reproduction repo I made to check: https://github.com/Embraser01/bullmq-issue-85

It reports the percentage of jobs that hit the TTL with and without removeOnComplete (I got 4% failed jobs with it enabled).

@antoniusostermann

Just found this. @Embraser01, did you find any way to work around this? If you set removeOnComplete to false, how do you avoid exceeding your Redis instance's memory?

job.waitUntilFinished() is super helpful in some cases imho, but this really seems like an issue to me.

@Embraser01
Contributor Author

> Just found this. @Embraser01, did you find any way to work around this? If you set removeOnComplete to false, how do you avoid exceeding your Redis instance's memory?
>
> job.waitUntilFinished() is super helpful in some cases imho, but this really seems like an issue to me.

I added a recurring job that just cleans up jobs that have been completed for more than X seconds (e.g. 15-30s).
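Something along these lines (the interval, grace period and queue name are just examples; if I remember the signature correctly, queue.clean(grace, limit, type) removes up to limit jobs in the given state older than grace milliseconds):

```ts
import { Queue } from 'bullmq';

const queue = new Queue('issue-85', { connection: { host: 'localhost', port: 6379 } });

// Every 15 seconds, remove up to 1000 jobs that have been in the 'completed'
// state for more than 30 seconds.
setInterval(() => {
  queue.clean(30_000, 1000, 'completed').catch(console.error);
}, 15_000);
```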

github-actions bot pushed a commit that referenced this issue Sep 12, 2021
## [1.46.5](v1.46.4...v1.46.5) (2021-09-12)

### Bug Fixes

* **is-finished:** reject when missing job key ([#746](#746)) fixes [#85](#85) ([bd49bd2](bd49bd2))
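For anyone stuck on an older version, a rough application-level approximation of the fixed behavior (my own sketch, not the code from #746; a small race window remains between the check and the wait, which the in-library fix avoids):

```ts
import { Job, Queue, QueueEvents } from 'bullmq';

// Reject up front when the job's data is already gone instead of waiting forever.
async function waitUntilFinishedOrReject(
  queue: Queue,
  queueEvents: QueueEvents,
  job: Job,
  ttl?: number,
) {
  const stillExists = await queue.getJob(job.id!); // undefined once removeOnComplete has deleted it
  if (!stillExists) {
    throw new Error(`Missing key for job ${job.id}; it may already have completed and been removed`);
  }
  return job.waitUntilFinished(queueEvents, ttl);
}
```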
@github-actions

🎉 This issue has been resolved in version 1.46.5 🎉

The release is available on:

Your semantic-release bot 📦🚀

manast added a commit that referenced this issue Jan 14, 2022