Conversation

@dmitry-markin
Collaborator

Needed for IPFS support in Substrate, because we need to connect to the Polkadot and IPFS DHTs simultaneously.

Closes #471.
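
For context, a conceptual sketch of what the feature enables: each DHT network gets its own Kademlia instance with its own protocol name, sharing the node's transports, and each instance only tracks peers for its own protocol. The types and protocol names below are illustrative placeholders, not litep2p's actual API.

    use std::collections::HashMap;

    // Placeholder types for illustration only; the real litep2p types differ.
    type PeerId = u64;

    /// One Kademlia state machine per DHT network, keyed by protocol name.
    struct KademliaInstance {
        protocol_name: String,
        // Peers known to speak *this* protocol; a peer connected for the
        // other DHT does not automatically show up here.
        peers: HashMap<PeerId, ()>,
    }

    fn main() {
        // Hypothetical protocol names: a chain-specific one for the Polkadot
        // DHT and the conventional IPFS Kademlia protocol name.
        let polkadot_dht = KademliaInstance {
            protocol_name: "/dot/kad".to_string(),
            peers: HashMap::new(),
        };
        let ipfs_dht = KademliaInstance {
            protocol_name: "/ipfs/kad/1.0.0".to_string(),
            peers: HashMap::new(),
        };

        // Both instances run side by side over the same set of connections.
        for dht in [&polkadot_dht, &ipfs_dht] {
            println!("running DHT for {}", dht.protocol_name);
        }
    }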

@dmitry-markin
Collaborator Author

dmitry-markin commented Nov 13, 2025

I would like to release this as v0.12.1 (because the public API hasn't changed) and include it in the litep2p upgrade PR paritytech/polkadot-sdk#9685 in polkadot-sdk. UPD: alternatively, we can merge the litep2p upgrade PR now and later just bump litep2p with `cargo update` in polkadot-sdk.

let Some(actions) = self.pending_dials.remove(&peer) else {
entry.insert(PeerContext::new());
// Note that we do not add peer entry if we don't have any pending actions.
// This is done to not populate `self.peers` with peers that don't support

Collaborator

dq: This won't affect networks that start only a single DHT, right? Maybe we could add a trace log here in case some issues pop up in the future?

Collaborator

Thinking out loud: if that's the case, we could maybe add a new builder method on `KadConfig` to signal that we run in a multi-DHT setup, and return `None` only in the multi-DHT case?

Collaborator Author

This won't affect a single DHT network, as the entry is always inserted anyway when a substream is opened. Strictly speaking, it doesn't make sense to consider transport-level connected peers as connected, because they might not speak the Kademlia protocol (even in the single-DHT case). We are interested in substreams over a specific protocol.
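
A minimal sketch of the pattern, using illustrative placeholder types rather than the actual litep2p code: a transport-level connection only produces a peer entry if this DHT has pending actions for the peer, while an opened Kademlia substream always does, because at that point the peer has proven it speaks this particular protocol.

    use std::collections::{hash_map::Entry, HashMap};

    // Illustrative types; the real litep2p definitions differ.
    type PeerId = u64;
    type PeerAction = String;

    #[derive(Default)]
    struct PeerContext {
        pending_actions: Vec<PeerAction>,
    }

    #[derive(Default)]
    struct Kademlia {
        peers: HashMap<PeerId, PeerContext>,
        pending_dials: HashMap<PeerId, Vec<PeerAction>>,
    }

    impl Kademlia {
        /// Transport-level connection established: only start tracking the
        /// peer if this DHT actually dialed it, i.e. has pending actions.
        fn on_connection_established(&mut self, peer: PeerId) {
            if let Entry::Vacant(entry) = self.peers.entry(peer) {
                if let Some(actions) = self.pending_dials.remove(&peer) {
                    entry.insert(PeerContext { pending_actions: actions });
                }
                // No pending actions: the connection may belong to another
                // DHT (or to a peer that doesn't speak this Kademlia protocol
                // at all), so no entry is added.
            }
        }

        /// Kademlia substream opened: the peer demonstrably speaks this
        /// protocol, so an entry is always inserted.
        fn on_substream_opened(&mut self, peer: PeerId) {
            self.peers.entry(peer).or_default();
        }
    }

    fn main() {
        let mut kad = Kademlia::default();
        kad.pending_dials.insert(1, vec!["FIND_NODE".into()]);
        kad.on_connection_established(1); // tracked: we dialed this peer
        kad.on_connection_established(2); // ignored: likely another protocol
        kad.on_substream_opened(2); // tracked: substream proves support
        assert_eq!(kad.peers.len(), 2);
    }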

Collaborator Author

> Thinking out loud: if that's the case, we could maybe add a new builder method on `KadConfig` to signal that we run in a multi-DHT setup, and return `None` only in the multi-DHT case?

The logic shouldn't be different between the single-DHT and multi-DHT cases.

let pending_action = &mut self
    .peers
    .get_mut(&peer)
    // If we opened an outbound substream, we must have pending actions for the peer.

@lexnv
Collaborator

lexnv commented Nov 13, 2025

Couldn't we get into a race here between the following timelines?

  • T0: Pending to open an outbound substream
  • T1: Outbound substream opened and queued for reporting
  • T2: `disconnect_peer`: the peer is disconnected and reported
  • T3: Outbound substream opened is reported only now

Collaborator Author

This might happen when there was an error reading an inbound request at the same time as we sent an outbound request. The worst that can happen is that we won't process the pending actions for a peer, but this is not much different from an error during an outbound request.

As a side note, the PR doesn't change the way this race can happen.
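
To make the timeline concrete, here is a rough sketch, again with placeholder types rather than the real implementation, of one way to handle the late "outbound substream opened" event defensively: if the peer entry was already removed at T2, the handler bails out, which is no worse than the outbound request simply failing.

    use std::collections::HashMap;

    // Illustrative types; the real litep2p definitions differ.
    type PeerId = u64;
    type PeerAction = String;

    struct PeerContext {
        pending_actions: Vec<PeerAction>,
    }

    struct Kademlia {
        peers: HashMap<PeerId, PeerContext>,
    }

    impl Kademlia {
        /// T3 in the timeline above: the "outbound substream opened" event is
        /// processed only now. If `disconnect_peer` already removed the peer
        /// at T2, there is nothing left to do; the pending actions are gone,
        /// which is equivalent to an error during the outbound request.
        fn on_outbound_substream_opened(&mut self, peer: PeerId) {
            let Some(context) = self.peers.get_mut(&peer) else {
                // Peer disconnected between T1 and T3; drop the event.
                return;
            };

            while let Some(action) = context.pending_actions.pop() {
                // ... send the pending request over the new substream ...
                let _ = action;
            }
        }
    }

    fn main() {
        let mut kad = Kademlia { peers: HashMap::new() };
        // The event arrives after the peer was already disconnected: the
        // handler returns without assuming the entry still exists.
        kad.on_outbound_substream_opened(42);
    }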



Development

Successfully merging this pull request may close these issues.

kad: Allow connecting to more than one DHT network simultaneously
