
protocols/gossipsub: Improve bandwidth #2327

Merged: 18 commits, Dec 21, 2021

Conversation

AgeManning
Contributor

@AgeManning AgeManning commented Nov 5, 2021

This PR adds some bandwidth improvements to gossipsub.

After some inspection of live networks, a number of improvements were identified that can help reduce unnecessary bandwidth usage on gossipsub networks. This PR introduces the following:

  • A 1:1 tracking of all in-flight IWANT requests. This ensures that every IWANT request is answered and that peers are penalized accordingly, and gossipsub will no longer send multiple IWANT requests for the same message to multiple peers. Previously, gossipsub sampled the in-flight IWANT requests to penalize unresponsive peers, relying on a high probability of detecting non-responsive nodes. Further, it was possible to re-request message ids that were already being requested, causing duplicate messages and unnecessary IWANT control messages. This PR changes the logic to only request message ids that we are not already requesting from other peers (see the sketch after this list).
  • Triangle routing naturally gives rise to unnecessary duplicates. Consider a mesh of 4 fully interconnected peers. Peer 1 sends a new message to 2, 3 and 4; 2 forwards it to 3 and 4, 3 forwards it to 2 and 4, and 4 forwards it to 2 and 3. Peer 3 therefore receives the message 3 times. If we keep track of the peers that have sent us each message, then when publishing or forwarding we can skip peers that have already sent us a duplicate, eliminating one of the sends in the scenario above. This only applies when message validation is asynchronous, however. This PR adds this logic to remove some of these triangle-routing duplicates.
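
Below is a minimal, self-contained sketch of these two ideas (a set of pending IWANT ids plus per-message tracking of senders). The field name pending_iwant_msgs follows the diff quoted later in this thread; everything else (GossipsubSketch, received_from, the String-based ids) is simplified and illustrative, not the crate's actual API.

```rust
use std::collections::{HashMap, HashSet};

// Illustrative stand-ins for the real gossipsub types.
type MessageId = String;
type PeerId = String;

#[derive(Default)]
struct GossipsubSketch {
    /// Message ids we have already asked a peer for via IWANT and whose
    /// response is still outstanding.
    pending_iwant_msgs: HashSet<MessageId>,
    /// For each message id, the peers that have already sent us that message
    /// (the first sender plus any duplicate senders).
    received_from: HashMap<MessageId, HashSet<PeerId>>,
}

impl GossipsubSketch {
    /// Handle an incoming IHAVE: keep only the ids we are not already
    /// requesting, so a message is never asked for twice concurrently.
    fn handle_ihave(&mut self, advertised: Vec<MessageId>) -> Vec<MessageId> {
        advertised
            .into_iter()
            // `insert` returns false if the id was already pending.
            .filter(|id| self.pending_iwant_msgs.insert(id.clone()))
            .collect()
    }

    /// Record that `peer` delivered `msg_id` (first delivery or duplicate).
    fn register_delivery(&mut self, msg_id: &MessageId, peer: &PeerId) {
        self.pending_iwant_msgs.remove(msg_id);
        self.received_from
            .entry(msg_id.clone())
            .or_default()
            .insert(peer.clone());
    }

    /// When forwarding, skip every mesh peer that already sent us this
    /// message; this trims the triangle-routing duplicates described above.
    fn forward_targets(&self, msg_id: &MessageId, mesh: &HashSet<PeerId>) -> Vec<PeerId> {
        let already_sent = self.received_from.get(msg_id);
        mesh.iter()
            .filter(|p| already_sent.map_or(true, |s| !s.contains(*p)))
            .cloned()
            .collect()
    }
}
```

In the scenario above, once peer 3 has registered deliveries from peers 1 and 2, a call to forward_targets would skip both of them and only return peer 4, saving one duplicate send.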

protocols/gossipsub/src/behaviour/tests.rs (outdated, resolved)
if let Some((peer_score, ..)) = &mut self.peer_score {
peer_score.duplicated_message(propagation_source, &msg_id, &raw_message.topic);
// Report the duplicate
if self.message_is_valid(&msg_id, &mut raw_message, propagation_source) {
Contributor

Why just report if the message is valid now?

Contributor Author

I think this was a bug in the previous code. In principle we shouldn't see invalid messages among the duplicates, and if we do, we penalize the peer but don't register the message as a duplicate (for scoring). Registering the duplicate would benefit the peer, and we don't want to improve the scores of peers that are sending us invalid messages.
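
To make the intended ordering concrete, here is a minimal, self-contained sketch. The PeerScore and RawMessage types and the function bodies are illustrative stubs; only the control flow (validity check before any duplicate credit) mirrors the change discussed here.

```rust
// Illustrative stubs, not the crate's actual API.
struct PeerScore;

struct RawMessage {
    topic: String,
}

impl PeerScore {
    fn duplicated_message(&mut self, _peer: &str, _msg_id: &str, _topic: &str) {
        // Credit the peer for delivering a (valid) duplicate.
    }
}

/// Validity check; in the real code an invalid message also penalizes the
/// sender as a side effect (stubbed out here).
fn message_is_valid(_msg_id: &str, _msg: &mut RawMessage, _peer: &str) -> bool {
    true
}

/// Only a valid duplicate is reported for scoring; an invalid message is
/// penalized inside `message_is_valid` and never registered as a duplicate.
fn report_duplicate(score: &mut PeerScore, msg_id: &str, msg: &mut RawMessage, peer: &str) {
    if message_is_valid(msg_id, msg, peer) {
        score.duplicated_message(peer, msg_id, &msg.topic);
    }
}
```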

AgeManning and others added 2 commits November 11, 2021 09:49
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
Member

@mxinden mxinden left a comment

Looks good to me. Anything else left from your side?

protocols/gossipsub/CHANGELOG.md (outdated, resolved)
protocols/gossipsub/src/mcache.rs (outdated, resolved)
@AgeManning
Contributor Author

Yep. Just want to run it a bit longer with the new metrics to quantify the gains here. Will report back once tests are complete.

AgeManning and others added 3 commits November 16, 2021 13:14
Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Max Inden <mail@max-inden.de>
@AgeManning
Contributor Author

@mxinden - Have confirmed this has a net positive effect and works as expected.
Could you review it so we can get this merged?

Member

@mxinden mxinden left a comment

Just two small comments. Otherwise looks good to me.

@@ -4,7 +4,11 @@

- Migrate to Rust edition 2021 (see [PR 2339]).

- Improve bandwidth performance by tracking IWANTs and reducing duplicate sends
(see PR 2327).
Member

Suggested change
(see PR 2327).
(see [PR 2327]).

Comment on lines 1245 to 1252
let message_ids = iwant_ids_vec
    .into_iter()
    .map(|id| {
        // Add all messages to the pending list
        self.pending_iwant_msgs.insert(id.clone());
        id.clone()
    })
    .collect::<Vec<_>>();
Member

Nitpick: a side effect (self.pending_iwant_msgs) inside a map is a bit surprising to me. Would an imperative for msg in message_ids loop be OK as well? I would guess this has no impact on performance.

Contributor Author

I've rewritten it with a for loop; it just makes the message_ids vec mutable. I had just wanted to do the clone and the copy in one iteration :p
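
For reference, a sketch (not the PR's actual code) of what the imperative version could look like, with the surrounding behaviour struct simplified away into a free function over a HashSet:

```rust
use std::collections::HashSet;

// Illustrative only: the quoted closure rewritten as a plain for loop, as
// suggested above. `pending_iwant_msgs` and `iwant_ids_vec` follow the names
// in the quoted snippet; the surrounding context is simplified.
fn collect_pending_iwants(
    pending_iwant_msgs: &mut HashSet<String>,
    iwant_ids_vec: Vec<String>,
) -> Vec<String> {
    let message_ids = iwant_ids_vec;
    // Add all requested message ids to the pending list before building the IWANT.
    for id in &message_ids {
        pending_iwant_msgs.insert(id.clone());
    }
    message_ids
}
```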

@AgeManning
Contributor Author

Hopefully this guy is ready now :)

Member

@mxinden mxinden left a comment

Thanks for the follow-ups!

@mxinden mxinden changed the title Improve Gossipsub Bandwidth protocols/gossipsub: Improve bandwidth Dec 21, 2021
@mxinden mxinden merged commit 379001a into libp2p:master Dec 21, 2021