Propagate listener task exceptions to the main loop. #3378
Conversation
I'm actually going to re-structure this a bit. Will ping for review when ready again.
- Raise exceptions in the message listener task if not silenced. Check if the task is completed and whether an exception was set when polling for results in the caches, and raise the exception if one was set.
- Refactor some of the logic. When disconnecting persistent connection providers, shut down the listener task before closing the connection to avoid errors while the task is still trying to read from the closing connection.
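The check described above can be sketched in plain asyncio. This is a minimal illustration, not web3.py's actual code: the function and variable names are assumptions. The key calls are ``task.done()`` and ``task.exception()``, which let the polling coroutine surface a failure from the background listener in its own context.

```python
import asyncio


async def listener(queue: asyncio.Queue) -> None:
    # Simulate a listener that fails while reading from a connection.
    raise ConnectionError("websocket closed unexpectedly")


async def poll_for_result(queue: asyncio.Queue, task: asyncio.Task):
    # If the listener task finished with an exception, re-raise it here so
    # it propagates to the caller instead of being silently lost.
    if task.done() and task.exception() is not None:
        raise task.exception()
    return await queue.get()


async def main() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(listener(queue))
    await asyncio.sleep(0)  # yield once so the listener runs and fails
    try:
        await poll_for_result(queue, task)
        return "no error"
    except ConnectionError as e:
        return f"listener error surfaced: {e}"


surfaced = asyncio.run(main())
print(surfaced)
```

Without the ``task.done()`` check, the exception would sit unretrieved on the task while the poll awaits a queue that will never be filled.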
- Define a base pattern for the listener task on the base ``PersistentConnectionProvider`` class. Define provider-specific methods that can be configured on the implementation classes in order to handle the provider-specific logic and error logging.
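The pattern in this commit message can be sketched with an abstract base class. The method names below are illustrative assumptions, not the actual web3.py API: the shared listener loop lives on the base class, and each provider implements (or overrides) the provider-specific hooks.

```python
import asyncio
from abc import ABC, abstractmethod


class PersistentConnectionProvider(ABC):
    async def _message_listener(self) -> None:
        # Common listener loop shared by all persistent providers.
        while True:
            raw = await self._provider_specific_receive()
            await self._provider_specific_handle(raw)

    @abstractmethod
    async def _provider_specific_receive(self) -> str: ...

    @abstractmethod
    async def _provider_specific_handle(self, raw: str) -> None: ...


class EchoProvider(PersistentConnectionProvider):
    """Toy subclass standing in for e.g. a websocket or IPC provider."""

    def __init__(self, messages: list) -> None:
        self._messages = list(messages)
        self.handled: list = []

    async def _provider_specific_receive(self) -> str:
        if not self._messages:
            raise asyncio.CancelledError  # end the toy loop
        return self._messages.pop(0)

    async def _provider_specific_handle(self, raw: str) -> None:
        self.handled.append(raw.upper())


async def main() -> list:
    provider = EchoProvider(["ping", "pong"])
    try:
        await provider._message_listener()
    except asyncio.CancelledError:
        pass
    return provider.handled


handled = asyncio.run(main())
print(handled)
```

The base class owns the loop and error handling once; subclasses only supply transport-specific receive and dispatch logic.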
Just kidding this is still in progress 😔
- Simultaneously check for subscription messages while checking whether the listener task is done, raising any exceptions that occurred in the listener task.
- Add tests for subscriptions with the iterator pattern (these would've failed before this commit).
- This adds a bit of overhead to the subscription message stream. Change INFO logs to DEBUG when the message stream is out of sync with the websocket connection. These aren't super useful to the end user and can be noisy now.
- Instead of using ``asyncio.wait()`` to poll which task finishes first, simply push a ``None`` value to the sub queue as part of the callback when the listener task finishes. This tells any message stream that the listener task has finished, pops the iterator out of polling the queue, and allows it to address any listener task exceptions that may have occurred.
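The sentinel approach described above can be sketched with a done-callback. All names here are illustrative: when the listener task completes for any reason, the callback pushes ``None`` into the subscription queue, waking any iterator blocked on ``queue.get()`` so it can inspect the finished task.

```python
import asyncio


async def listener(queue: asyncio.Queue) -> None:
    await queue.put("message-1")
    raise RuntimeError("connection dropped")


async def main() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    task = asyncio.create_task(listener(queue))
    # When the task finishes (for any reason), push None to unblock readers.
    task.add_done_callback(lambda t: queue.put_nowait(None))

    received = []
    while True:
        msg = await queue.get()
        if msg is None:  # sentinel: the listener task has finished
            exc = task.exception()
            if exc is not None:
                received.append(f"raised: {exc}")
            break
        received.append(msg)
    return ", ".join(received)


result = asyncio.run(main())
print(result)
```

Compared to polling with ``asyncio.wait()``, this keeps the message stream on a single ``await queue.get()`` and pays no per-message overhead.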
Nice tests and nice refactor! This looks good to me overall. Pushing ``None`` to the queue is the only thing that stands out as something that might be a little foot-gun-y. In my head, it seems like it would be more descriptive and maybe less error-prone if we could raise a custom exception there, but I can't quite see the path to how that could work. So I'm good leaving as-is, but wanted to flag in case you saw a nice solution off the bat.
I think you're absolutely right. I don't see
- More gracefully and explicitly handle when a queue relying on a task that is not running is being awaited on for a result. This involves raising a new ``TaskNotRunning`` exception when a task is not running which is pushed to the queue in the reliant task's completion callback.
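A minimal sketch of the ``TaskNotRunning`` idea follows. The class and method names are assumptions for illustration, not web3.py's actual implementation: the completion callback pushes an exception instance into the queue, and the queue re-raises it on ``get()`` instead of handing consumers a bare ``None`` sentinel.

```python
import asyncio


class TaskNotRunning(Exception):
    """Raised when the task a queue relies on is no longer running."""


class TaskReliantQueue(asyncio.Queue):
    async def get(self):
        item = await super().get()
        if isinstance(item, TaskNotRunning):
            raise item  # surface the dead-task condition to the consumer
        return item


async def listener(queue: TaskReliantQueue) -> None:
    await queue.put("sub-message")
    # The listener ends here, normally or via an error.


async def main() -> str:
    queue = TaskReliantQueue()
    task = asyncio.create_task(listener(queue))
    # Completion callback pushes the exception instance into the queue.
    task.add_done_callback(lambda t: queue.put_nowait(TaskNotRunning()))

    out = []
    try:
        while True:
            out.append(await queue.get())
    except TaskNotRunning:
        out.append("listener stopped; handle or re-raise here")
    return "; ".join(out)


result = asyncio.run(main())
print(result)
```

This is more descriptive than a ``None`` sentinel: the dead-task condition arrives as a typed exception the subscription-polling code can catch explicitly.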
@kclowes I came up with a more elegant solution for the queue to operate correctly as long as the listener task is running, taking this as inspiration. lmk what you think: a4790d6. Other comments addressed in 8ed6df9
super elegant 🤩 🚀
What was wrong?
Closes #3375 (along with #3387)
How was it fixed?
As I understand it, each asyncio task operates under its own context. Surprisingly, directly communicating with another context and immediately propagating an exception to the main loop does not seem to be straightforward. For better or worse, I think when we poll for messages we should check whether the listener task is done and, if it is, whether an exception was set. If an exception was set, we can raise it in the main loop where we are polling for messages.
We also shouldn't keep awaiting the queue once the listener task is done, since no messages will be coming in and we would hang indefinitely. Push a new exception, ``TaskNotRunning``, to the queue in the listener task's completion callback. Revamp the queue to raise when this is pushed to it, and handle this exception in the subscription polling. I added some nice logging that really shouldn't be too noisy unless the connection is reconnecting all the time (which I forced for the example here).
Bonus:
I've been meaning to do this for some time and I think this is an appropriate time. Refactor the commonality of the listener task down to the base ``PersistentConnectionProvider`` class and define provider-specific methods that either need to be implemented or can be overridden to fine-tune logic specific to each persistent connection provider. This keeps things quite a bit DRYer.
Todo:
Cute Animal Picture