task_timeout wrong functionality #98
Comments
Hi. You are right, this is a bug. I did some digging: in the timeout handler I don't yet account for apply calls, only map. In the map case, all workers are supposed to be killed and the pool shuts down. In the apply case, only the task that times out should be interrupted while the rest continue as normal. I will work on a solution shortly. |
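To make the two intended semantics concrete, here is a minimal sketch assuming mpire's documented `WorkerPool` API with the `task_timeout` parameter. The task durations, worker counts, and the helper name `slow_task` are illustrative assumptions, not code from this thread:

```python
import time


def slow_task(seconds: float) -> float:
    """Sleep for the given duration and return it (stand-in for real work)."""
    time.sleep(seconds)
    return seconds


if __name__ == "__main__":
    from mpire import WorkerPool  # assumes mpire is installed

    # map case: if any task exceeds task_timeout, all workers are killed
    # and the pool shuts down.
    with WorkerPool(n_jobs=4) as pool:
        try:
            pool.map(slow_task, [0.1, 0.1, 30, 0.1], task_timeout=1)
        except TimeoutError:
            print("map: pool shut down after a task timed out")

    # apply case: only the task that exceeds task_timeout should be
    # interrupted; tasks on the other workers continue as normal.
    with WorkerPool(n_jobs=4) as pool:
        results = [pool.apply_async(slow_task, (s,), task_timeout=1)
                   for s in (0.1, 30, 0.1)]
        for result in results:
            try:
                result.get()
            except TimeoutError:
                print("apply_async: one task timed out")
```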
Hi @sybrenjansen, Is there any update on this issue? |
I've planned to work on this at the end of this week or early next week. I expect it will be done shortly after.
FYI. I'm going to work on it this Friday. Expecting a new release in the following week |
* Fixed a bug where starting multiple `apply_async` tasks with a task timeout didn't interrupt all tasks when the timeout was reached. Fixes #98 --------- Co-authored-by: sybrenjansen <sybren.jansen@gmail.com>
Released in v2.8.1 |
@sybrenjansen It works great, thank you! |
You're welcome :) |
Hi. Recently I've been encountering defunct processes while using Pebble (https://github.com/noxdafox/pebble), so I began investigating other libraries.
I utilized the following test code (although it may look messy, it effectively highlights the issue):
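The original snippet was not preserved in this thread. A hypothetical reconstruction of the setup it describes (several `apply_async` calls with a `task_timeout`, each backed by a long `time.sleep(30)`) might look like the following; the helper name `wait_and_return`, the durations, and the worker count are all assumptions:

```python
import time


def wait_and_return(worker_id: int, seconds: float) -> str:
    """Simulate a long-running task: sleep, then report which worker ran."""
    time.sleep(seconds)
    return f"Worker-{worker_id} finished after {seconds}s"


if __name__ == "__main__":
    from mpire import WorkerPool  # assumes mpire is installed

    with WorkerPool(n_jobs=4) as pool:
        # Every task sleeps for 30 seconds, but the timeout is 5 seconds,
        # so each task should be interrupted once its timeout fires.
        futures = [pool.apply_async(wait_and_return, (i, 30), task_timeout=5)
                   for i in range(4)]
        for i, future in enumerate(futures):
            try:
                print(future.get())
            except TimeoutError:
                print(f"Task {i} timed out")
```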
The output that I get:
After the first timeout, one of the workers was terminated, but the remaining workers (which should also have been terminated) were not, and they continued to wait until the end of the function (time.sleep(30)).
I would expect the following: if only the timed-out worker (`Worker-0` in this case) was terminated, `mpire` should have initiated the creation of a new worker. What am I missing here?
Another simpler example:
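This simpler example was also not preserved. A hypothetical minimal version, using a single blocking `apply` call whose `task_timeout` expires (the helper name `long_task` and the durations are assumptions), could be:

```python
import time


def long_task(seconds: float = 30.0) -> None:
    """Stand-in for a task that does not finish within the timeout."""
    time.sleep(seconds)


if __name__ == "__main__":
    from mpire import WorkerPool  # assumes mpire is installed

    with WorkerPool(n_jobs=2) as pool:
        try:
            # apply() blocks until the result is ready or task_timeout expires
            pool.apply(long_task, task_timeout=2)
        except TimeoutError:
            print("task timed out")
```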