Fix a never-ending subtask in BasePeerPool #1310

Merged
2 commits merged on Sep 21, 2018

Conversation

gsalgado (Collaborator) commented:

The handle_peer_count_requests() coroutine runs as a background task but
did not exit when the pool's cancel token was triggered.

Also introduce a custom log formatter for trinity that uses the last
element of the logger's name instead of the module name; otherwise it is
not clear where log messages are coming from (especially those generated
by BaseService subclasses).
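
As a rough illustration of the first point, the background task can be tied to the pool's cancel token by wrapping the stream loop in self.wait(); this is a minimal sketch, not the exact diff of this PR:

async def handle_peer_count_requests(self) -> None:
    async def f() -> None:
        async for req in self.event_bus.stream(PeerCountRequest):
            ...  # answer the request
    # self.wait() ties the inner coroutine to the service's cancel token,
    # so the subtask exits once the pool is cancelled.
    await self.wait(f())

For the second point, a formatter that shows only the last element of the logger's name can be built on the standard library; the class name and format string below are illustrative, not the formatter actually added in this PR:

import logging

class ShortNameFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # e.g. "p2p.peer.ETHPeer" -> "ETHPeer"
        record.shortname = record.name.rsplit('.', 1)[-1]
        return super().format(record)

handler = logging.StreamHandler()
handler.setFormatter(ShortNameFormatter('%(levelname)s  %(shortname)s  %(message)s'))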

Review thread on this diff excerpt:

async def f() -> None:
    # FIXME: There must be a way to cancel event_bus.stream() when our token is triggered,
    # but for the time being we just wrap everything in self.wait().
    async for req in self.event_bus.stream(PeerCountRequest):
gsalgado (Collaborator, Author):

@cburgdorf is there a way to do this already?

cburgdorf (Contributor):

Oh, so one thing you can do with the stream API is to break out, e.g.

async for req in self.event_bus.stream(PeerCountRequest):
    if not self.is_operational:
        break

Breaking out of the loop like this takes care of the unsubscribing on the lahja side:

https://github.com/ethereum/lahja/blob/4c2bb9c6f038f81566f7852ef9abd0d3928d0ca5/lahja/endpoint.py#L191-L192

except GeneratorExit:
    self._queues[event_type].remove(queue)

However, as I'm writing this, I realize this is probably not what you want, because it only lets you break out at the moment when another event comes through.

I think what we really want is to be able to pass a CancelToken, as in:

async for req in self.event_bus.stream(PeerCountRequest, cancel_token=token):
    ...

Does that make sense?
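
For illustration only, here is one way a stream could be raced against a CancelToken at the call site today, using plain asyncio; the helper name is hypothetical, and the token.wait()/token.triggered calls are assumptions based on the cancel-token library, not lahja's actual API:

import asyncio

async def stream_until_cancelled(stream, token):
    # Consume an async iterator, but stop as soon as the CancelToken fires,
    # by racing each next item against token.wait().
    iterator = stream.__aiter__()
    while not token.triggered:
        get_next = asyncio.ensure_future(iterator.__anext__())
        cancelled = asyncio.ensure_future(token.wait())
        done, pending = await asyncio.wait(
            {get_next, cancelled}, return_when=asyncio.FIRST_COMPLETED)
        for fut in pending:
            fut.cancel()
        if get_next not in done:
            return
        try:
            yield get_next.result()
        except StopAsyncIteration:
            return

# Usage sketch (self.cancel_token assumed to come from BaseService):
#     async for req in stream_until_cancelled(
#             self.event_bus.stream(PeerCountRequest), self.cancel_token):
#         ...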

gsalgado (Collaborator, Author):

Yep, passing a CancelToken is what I had in mind. Also, as you pointed out, doing the is_operational check and breaking out of the loop is not going to fix it.

cburgdorf (Contributor) commented on Sep 20, 2018:

Alright, there are a bunch of improvements that I'd like to get into the next release. If we can live with the stopgap approach of this PR for a few days, I'll cut a new release soon that includes this API change.

I created an issue for this particular problem: ethereum/lahja#9

Second commit:

We now use the TRACE level for most discovery/kademlia logs, so we
no longer need to force their level to INFO in trinity.

Closes: ethereum#1243
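
TRACE is not a standard logging level; registering one with the standard library looks roughly like the sketch below. The numeric value, format string, and logger name are illustrative, not necessarily what trinity does:

import logging

TRACE = 5  # numerically below DEBUG (10)
logging.addLevelName(TRACE, 'TRACE')
logging.basicConfig(level=TRACE, format='%(levelname)s  %(name)s  %(message)s')

logger = logging.getLogger('p2p.discovery')
logger.log(TRACE, 'ping from %s', 'some-node')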