
Add PreferredNodePeerPool #744

Conversation

pipermerriam (Member) commented May 21, 2018

link: #741

What was wrong?

  • Running trinity --light with a naive peer pool and discovery will rarely find an LES peer to connect to.
  • Using HardCodedNodesPeerPool eliminates discovery, but also locks Trinity into only being able to connect to a few nodes.
  • We will be implementing APIs for Trinity to support reserving slots for certain nodes and trusted nodes.

How was it fixed?

Implement a new PeerPool class which maintains a list of preferred nodes.

  • Any time the peer pool is looking for a new node to connect to, it will first try to use one of the preferred nodes.
  • Each preferred node may only be used once every 300 seconds.
  • If there are no preferred nodes which have not been used recently, it falls back to the discovery protocol.
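The selection behavior described above can be sketched as a simplified standalone class. The name `PreferredNodeSelector` and its methods are illustrative only, not Trinity's actual `PeerPool` API:

```python
import random
import time
from typing import Dict, List, Optional, Sequence

RECYCLE_TIME = 300  # seconds each preferred node must "rest" between uses

# Simplified standalone sketch of the behavior described above; the
# class and method names are hypothetical, not Trinity's real API.
class PreferredNodeSelector:
    def __init__(self, preferred_nodes: Sequence[str]) -> None:
        self.preferred_nodes = preferred_nodes
        self._last_used: Dict[str, float] = {}

    def _eligible(self) -> List[str]:
        now = time.time()
        return [
            node for node in self.preferred_nodes
            if now - self._last_used.get(node, 0.0) > RECYCLE_TIME
        ]

    def next_node(self) -> Optional[str]:
        # Try a preferred node first; None signals "fall back to discovery".
        options = self._eligible()
        if not options:
            return None
        node = random.choice(options)
        self._last_used[node] = time.time()
        return node
```

A node that was just handed out is ineligible for the next 300 seconds, so repeated calls quickly exhaust the preferred list and hand control to discovery.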

Cute Animal Picture


@pipermerriam pipermerriam force-pushed the piper/extend-HardCodedNodePeerPool-to-use-discovery branch 2 times, most recently from 04ae1c4 to 8bcdf3f Compare May 22, 2018 20:05
@pipermerriam pipermerriam force-pushed the piper/extend-HardCodedNodePeerPool-to-use-discovery branch from 8bcdf3f to da52306 Compare May 22, 2018 20:14
@pipermerriam pipermerriam changed the title WIP: Add PreferredNodePeerPool Add PreferredNodePeerPool May 22, 2018
p2p/peer.py Outdated

```python
if options:
    yield random.choice(options)
else:
    yield None
```
Contributor:
Will the caller gracefully handle a None result here? Maybe it should just throw an exception if no bootnodes are set.

Member Author:

No, this is a mistake, it should just not have the else clause and return an empty generator.
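A minimal illustration of that fix, as a generic standalone sketch rather than the actual p2p/peer.py code: with the else clause removed, an empty input produces an empty generator instead of yielding None.

```python
import random
from typing import Iterator, Sequence, TypeVar

T = TypeVar("T")

def pick_random(options: Sequence[T]) -> Iterator[T]:
    # With no else clause, an empty `options` simply produces an
    # empty generator rather than yielding a None value.
    if options:
        yield random.choice(options)
```

Callers can then iterate the result directly, with no None check needed.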

p2p/peer.py Outdated

```python
"""
Returns up to `min_peers` nodes, preferring nodes from the preferred list.
"""
preferred_nodes = self._get_eligible_preferred_nodes()[:self.min_peers]
```
Contributor:

If I'm reading this right, every trinity install will try to connect to the first min_peers preferred nodes in the list? Seems like that would cause a usage imbalance.

One alternative direction:

_get_eligible_preferred_nodes() could be generated from a randomized ordering of self.preferred_nodes, and _get_random_preferred_node would just return the first result from that generator. Then theoretically we could get rid of the overhead of generating the whole tuple every time in _get_eligible_preferred_nodes(), and use it as a raw generator, switching to:

```python
preferred_nodes = take(self.min_peers, self._get_eligible_preferred_nodes())
```

(of course that only has a performance impact if the preferred nodes list gets big)
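A rough sketch of that alternative, with hypothetical names and `itertools.islice` standing in for the `take` helper mentioned above:

```python
import random
import time
from itertools import islice
from typing import Dict, Iterator, Sequence

def eligible_preferred_nodes(
        preferred_nodes: Sequence[str],
        last_used: Dict[str, float],
        recycle_time: float = 300.0) -> Iterator[str]:
    # Lazily yield eligible nodes in a randomized order, so callers can
    # stop as soon as they have enough (no full tuple is built up front).
    shuffled = list(preferred_nodes)
    random.shuffle(shuffled)
    now = time.time()
    for node in shuffled:
        if now - last_used.get(node, 0.0) > recycle_time:
            yield node

# Taking the first `min_peers` results, as in the comment above:
min_peers = 2
preferred = list(islice(eligible_preferred_nodes(["a", "b", "c"], {}), min_peers))
```

Because the order is randomized per call, different installs would spread their connections across the preferred list instead of all picking the same first few entries.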

Member Author:

I think there is a lot of room for improvement to this class, including something like you suggest here, but for now I'd like to call this good enough.

The bootnodes will not allow additional connections if all of their peer slots are full, and Trinity will then fall back to discovery. It should only take a moment to exhaust the pre-defined list of preferred nodes, after which discovery will take over.

@pipermerriam pipermerriam force-pushed the piper/extend-HardCodedNodePeerPool-to-use-discovery branch from 31e3447 to 69c8fc0 Compare May 23, 2018 03:00
pipermerriam (Member Author):

@carver Can you double-check the body of run_lightnode_process? I had to handle a merge conflict and I want to be sure everything still looks right.

@pipermerriam pipermerriam force-pushed the piper/extend-HardCodedNodePeerPool-to-use-discovery branch from 69c8fc0 to 51ff716 Compare May 23, 2018 03:02
@pipermerriam pipermerriam force-pushed the piper/extend-HardCodedNodePeerPool-to-use-discovery branch from 51ff716 to 60d9c31 Compare May 23, 2018 03:05
```
@@ -17,9 +18,10 @@
    Generator,
    Iterator,
    List,
    Sequence,
    TYPE_CHECKING,
```
Contributor:

Nitpick: this seems to screw up alphabetical ordering.

Member Author:

I think isort, which is set up in one of our repos (maybe eth-account?), is the programmatic solution for this.
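For reference, isort can enforce import ordering from project config. A minimal hypothetical `setup.cfg` section matching the one-import-per-line, trailing-comma style seen in the diff (the repo's actual settings may differ):

```ini
[isort]
# vertical hanging indent, one import per line
multi_line_output = 3
include_trailing_comma = true
line_length = 100
```

Running `isort` over `p2p/peer.py` would then normalize the import block automatically.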

p2p/peer.py Outdated

```python
@to_tuple
def _get_eligible_preferred_nodes(self) -> Generator[Node, None, None]:
    """
    Returns nodes from the preferred_nodes which have not been used within
```
Contributor:

nitpick: Style guide prefers imperative, present tense.

Return nodes from the preferred_nodes which have not been used within the last preferred_node_recycle_time

p2p/peer.py Outdated

```python
def _get_random_preferred_node(self) -> Node:
    """
    Returns a random node from the preferred list.
```
Contributor:

see above

p2p/peer.py Outdated

```python
def _get_random_bootnode(self) -> Generator[Node, None, None]:
    """
    Returns a single node to bootstrap, preferring nodes from the preferred list.
```
Contributor:

see above

p2p/peer.py Outdated

```python
def get_nodes_to_connect(self) -> Generator[Node, None, None]:
    """
    Returns up to `min_peers` nodes, preferring nodes from the preferred list.
```
Contributor:

see above

"""
preferred_nodes: Sequence[Node] = None
preferred_node_recycle_time: int = 300
_preferred_node_tracker: Dict[Node, float] = None
Contributor:

Not as part of this PR, but would it make sense to maintain the list of preferred nodes in the db or in some updated configuration file, so that it becomes something well maintained (with newly discovered peers added and stale peers removed) and we don't fall back to the hardcoded nodes whenever there's a restart?

Member Author:

Yes, this is planned (discussed verbally). Would you like to open an issue for this? Metrics we've talked about are:

  • latency
  • correct responses (doesn't send unexpected data)
  • num success/failed requests

The initial idea is to have some sort of PeerSubscriber API which would be given a view into peer requests and responses and would aggregate that data into some accessible form. Then a FancyPeerPool class could use this data to inform which nodes/peers it returns.
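The aggregation part of that idea might look roughly like the sketch below. All names here (`PeerMetrics`, `record_response`, `score`) are hypothetical illustrations of the described metrics, not Trinity's actual PeerSubscriber API:

```python
import time
from collections import defaultdict
from typing import Dict, List

# Hypothetical sketch: observe request/response outcomes per peer and
# aggregate the metrics mentioned above (latency, success/failure counts)
# into a form a smarter peer pool could consult.
class PeerMetrics:
    def __init__(self) -> None:
        self.latencies: Dict[str, List[float]] = defaultdict(list)
        self.successes: Dict[str, int] = defaultdict(int)
        self.failures: Dict[str, int] = defaultdict(int)

    def record_response(self, peer_id: str, sent_at: float, ok: bool) -> None:
        # `ok` covers "correct responses" (no unexpected data).
        self.latencies[peer_id].append(time.time() - sent_at)
        if ok:
            self.successes[peer_id] += 1
        else:
            self.failures[peer_id] += 1

    def score(self, peer_id: str) -> float:
        # Fraction of successful requests; 0.0 for unknown peers.
        total = self.successes[peer_id] + self.failures[peer_id]
        if total == 0:
            return 0.0
        return self.successes[peer_id] / total
```

A pool could then sort candidate peers by `score` (and latency) when deciding which connections to keep or prefer.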

@pipermerriam pipermerriam merged commit 88b6ddb into ethereum:master May 23, 2018
@pipermerriam pipermerriam deleted the piper/extend-HardCodedNodePeerPool-to-use-discovery branch May 23, 2018 18:48

3 participants