kad: Allow connecting to more than one DHT network #473
Conversation
I would like to release this as v0.12.1 (because the public API hasn't changed) and include it in the litep2p upgrade PR paritytech/polkadot-sdk#9685 in polkadot-sdk. UPD: or we can merge the litep2p upgrade PR now and just bump the version later.
```rust
let Some(actions) = self.pending_dials.remove(&peer) else {
entry.insert(PeerContext::new());
// Note that we do not add peer entry if we don't have any pending actions.
// This is done to not populate `self.peers` with peers that don't support
```
dq: Won't this affect networks with a single DHT started? Maybe we could add a trace log here in case issues pop up in the future?
Thinking out loud: if that's the case, we could maybe add a new builder method on `KadConfig` to signal that we run in a multi-DHT world, and return `None` only in the multi-DHT case?
This won't affect a single DHT network, because the entry is always inserted anyway when a substream is opened. Strictly speaking, it doesn't make sense to consider transport-level connected peers as connected, because they might not speak the Kademlia protocol (even in the single DHT case). We are interested in substreams over a specific protocol.
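To illustrate the point above, here is a minimal, hypothetical sketch (simplified names and types, not litep2p's actual API): on a bare transport-level connection, a peer entry is created only if we already had pending actions for that peer; a Kademlia substream, by contrast, always creates an entry, because it proves the peer speaks our protocol.

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real types; peers are keyed by a plain u64.
#[derive(Default)]
struct PeerContext {
    pending_actions: Vec<&'static str>,
}

#[derive(Default)]
struct Kademlia {
    peers: HashMap<u64, PeerContext>,
    pending_dials: HashMap<u64, Vec<&'static str>>,
}

impl Kademlia {
    /// Transport-level connection established: only track the peer if we
    /// had queued actions for it (i.e., we dialed it for this DHT).
    fn on_connection_established(&mut self, peer: u64) {
        if let Some(actions) = self.pending_dials.remove(&peer) {
            self.peers
                .insert(peer, PeerContext { pending_actions: actions });
        }
        // Otherwise: no entry is inserted. The peer may belong to a
        // different DHT network and never open our Kademlia substream.
    }

    /// Kademlia substream opened: the peer provably speaks our protocol,
    /// so an entry is always inserted.
    fn on_substream_opened(&mut self, peer: u64) {
        self.peers.entry(peer).or_default();
    }
}
```

With this split, `self.peers` is never populated by peers that merely share a transport connection with us via another DHT network.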
> Thinking out loud: if that's the case, we could maybe add a new builder method on `KadConfig` to signal that we run in a multi-DHT world, and return `None` only in the multi-DHT case?
The logic shouldn't be different for a single DHT versus multi-DHT cases.
```rust
let pending_action = &mut self
    .peers
    .get_mut(&peer)
// If we opened an outbound substream, we must have pending actions for the peer.
```
Couldn't we get into a race condition here, with the following timeline?
- T0: pending to open an outbound substream
- T1: outbound substream opened and queued for reporting
- T2: `disconnect_peer` - the peer is disconnected and reported
- T3: outbound substream opened is reported now
This might happen when there was an error reading an inbound request at the same time as we sent an outbound request. The worst that can happen is that we won't process pending actions for a peer, but that is not much different from an error during an outbound request.
As a side note, this PR doesn't change the way this race can happen.
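A tolerant way to handle the T0-T3 race above can be sketched as follows (hypothetical, simplified signature, not litep2p's actual code): by the time the "outbound substream opened" event is processed, `disconnect_peer` may already have removed the peer's entry, so the lookup is allowed to fail and the stale event is dropped instead of panicking.

```rust
use std::collections::HashMap;

/// Handle a late "outbound substream opened" event. Returns the pending
/// action to execute, or `None` if the peer was already disconnected and
/// its state removed (the T2-before-T3 ordering from the timeline above).
fn on_outbound_substream_opened(
    peers: &mut HashMap<u64, Vec<&'static str>>,
    peer: u64,
) -> Option<&'static str> {
    // The peer may have been removed by `disconnect_peer` in the meantime;
    // in that case, silently drop the stale event.
    let pending = peers.get_mut(&peer)?;
    pending.pop()
}
```

The cost of dropping the event is exactly the "worst case" described above: the pending actions for that peer are not processed, the same outcome as an outbound request error.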
Needed for IPFS support in Substrate, because we need to connect to the Polkadot & IPFS DHTs simultaneously.
Closes #471.