Task was destroyed but it is pending! #85

Open
sentry-io bot opened this issue Apr 24, 2024 · 3 comments
Labels
bug Something isn't working

sentry-io bot commented Apr 24, 2024

Sentry Issue: VOIIO-SAM-9

Task was destroyed but it is pending!
task: <Task pending name='Task-4668' coro=<AsyncioListenerRunner.run.<locals>.run_ack_function_asynchronously() done, defined at /app/.heroku/python/lib/python3.11/site-packages/slack_bolt/listener/asyncio_runner.py:111> wait_for=<Future pending cb=[Task.task_wakeup()]>>
sentry-io bot added the bug label Apr 24, 2024
codingjoe (Member) commented

Graceful shutdowns or restarts are an issue. If there is an ongoing inference run, a SIGTERM will leave it hanging, which can lead to errors if the run is pending function execution. Users will not be able to add new messages to the thread or start another run for 10 minutes (the run timeout).
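
For context, the warning itself fires when the event loop gets torn down while listener tasks are still pending. A minimal sketch of one way to drain them on SIGTERM, separate from the resume question (the grace period and the placement around app startup are my assumptions, not our actual code; Heroku sends SIGTERM and follows up with SIGKILL about 30 seconds later, so whatever we do has to fit in that window):

```python
import asyncio
import signal


async def main() -> None:
    loop = asyncio.get_running_loop()
    stop = asyncio.Event()

    # Turn SIGTERM/SIGINT into an event instead of letting the process tear
    # the loop down while listener tasks are still running.
    for sig in (signal.SIGTERM, signal.SIGINT):
        loop.add_signal_handler(sig, stop.set)

    # ... start the Slack app / socket mode handler here ...

    await stop.wait()

    # Drain in-flight handler tasks before the loop closes; closing the loop
    # with tasks still pending is what produces
    # "Task was destroyed but it is pending!".
    pending = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
    if pending:
        await asyncio.wait(pending, timeout=25)


asyncio.run(main())
```

That only buys in-flight handlers a little time to finish; it does not help with runs that outlive the process.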

amureki (Member) commented Apr 26, 2024

> Graceful shutdowns or restarts are an issue. If there is an ongoing inference run, a SIGTERM will leave it hanging, which can lead to errors if the run is pending function execution. Users will not be able to add new messages to the thread or start another run for 10 minutes (the run timeout).

So, what I could imagine: on a graceful shutdown, we should keep the run in the queue and come back to it as soon as the server is back up.
Or what would you expect here?
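
To make the "come back to it" part concrete, I picture roughly this shape on startup. Both helpers are placeholders for whatever store we end up choosing, not existing code, and I am assuming a run can be identified by a (thread_id, run_id) pair:

```python
import asyncio


async def load_pending_runs() -> list[tuple[str, str]]:
    """Placeholder: return the (thread_id, run_id) pairs that were still in
    flight when the previous process shut down."""
    return []


async def poll_run(thread_id: str, run_id: str) -> None:
    """Placeholder: resume polling a single run until it completes."""


async def resume_interrupted_runs() -> None:
    # Called once on startup: re-attach a poller to every interrupted run
    # instead of letting it block its thread for the full 10 minute run timeout.
    runs = await load_pending_runs()
    await asyncio.gather(*(poll_run(thread_id, run_id) for thread_id, run_id in runs))
```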

codingjoe (Member) commented

That's a great idea. How do we know whether a run is still in the queue, though? Or rather, how do we remember which runs we should poll again? Listing the runs for every thread in Redis seems like overkill. We could store runs in a queue and acknowledge them upon completion.
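
Roughly what I have in mind, sketched with redis-py; the key names, the thread_id:run_id encoding, and the LMOVE-based claim/acknowledge flow are all assumptions on my side (LMOVE needs Redis 6.2+):

```python
import redis

r = redis.Redis(decode_responses=True)

PENDING = "runs:pending"        # runs that still need polling
PROCESSING = "runs:processing"  # runs a worker has claimed


def enqueue_run(thread_id: str, run_id: str) -> None:
    # Record the run the moment it is created.
    r.lpush(PENDING, f"{thread_id}:{run_id}")


def claim_run() -> str | None:
    # Atomically move the oldest pending entry to the processing list, so a
    # crash between claiming and completing leaves a visible trace.
    return r.lmove(PENDING, PROCESSING, "RIGHT", "LEFT")


def ack_run(entry: str) -> None:
    # Acknowledge only once the run has actually completed.
    r.lrem(PROCESSING, 1, entry)


def requeue_interrupted_runs() -> None:
    # On startup, anything still in "processing" was cut off by a restart;
    # push it back so it gets polled again.
    while r.rpoplpush(PROCESSING, PENDING) is not None:
        pass
```

claim_run/ack_run give us the "acknowledge upon completion" semantics, and requeue_interrupted_runs answers "which runs should we poll again" after a restart without listing runs for every thread.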

2 participants