
Bitswap Improvement Plan #5723

Closed
6 of 9 tasks
hannahhoward opened this issue Nov 2, 2018 · 11 comments
Labels
topic/bitswap Topic bitswap topic/meta Topic meta topic/perf Performance

Comments

@hannahhoward
Contributor

hannahhoward commented Nov 2, 2018

Goals

This is a meta issue to track and discuss improvements to the Bitswap implementation.

Our goal is to measure the current behavior of the various implementations and to improve them. All of them suffer from duplicate blocks to differing degrees.

Out of scope:

  • Any intelligence about what we are fetching other than the existence of a Session (i.e. DAG smarts and/or GraphSync)

Tasks

Things we want to do to improve performance:

  • Merge fixes to sessions and additional tests from Jeromy's PR referenced above
  • Diagnose why Jeromy's PR is sometimes not working in real world scenarios
  • Write benchmarks that effectively reproduce the issues that prevent Jeromy's PR from working in real world scenarios
  • Bitswap Refactor For Clarity
  • Experiment with various alternate approaches to smart sessions (see comments)
  • Measure real-ish world transfer speeds by driving full IPFS instances (possibly interop)
  • Improve the experimental version of Bitswap sessions
    • Initial reordering of peers by last block received
    • More complex refactors to optimize for many scenarios
@hannahhoward
Contributor Author

Reasons we think @whyrusleeping 's PR is not working with Wikipedia:

  1. First, the dupl factor does not increase much overall. However, it rises to 2 or 3 near the beginning, and it's not clear that is a good idea
  2. Wikipedia has up to around 2000 peers. This presents significant challenges to the current implementation which picks peers at random
  3. Because peers are picked at random, even with a dupl factor of 3, there are common scenarios in which the picked peers are actually dead peers who don't respond, causing a broadcast
  4. Multiple broadcasts of wants to 2000 peers produces a ton of duplicates.

Stats gathered on a run of LS (incomplete, but a long way in):
2279 Peers, 44440 Fetches, 5560 Broadcasts -- the fetch-to-broadcast ratio is about 8:1, but 5560 broadcasts × 2000 peers ≈ 11.1 million want messages

@djdv djdv added topic/bitswap Topic bitswap topic/meta Topic meta labels Nov 2, 2018
@magik6k magik6k added the topic/perf Performance label Nov 4, 2018
@whyrusleeping
Member

@hannahhoward one point, the peers we are selecting from here should not ever be dead peers.

@whyrusleeping
Member

It may be worth the extra effort to select peers based on how many blocks they have sent us so far, instead of just random selection. The random selection stuff works pretty well when there are a small number of peers in the potential set, but 2000 is absurd, and we need to be smarter (we should never broadcast to 2000 peers...)

@whyrusleeping
Member

Also, over 2000 peers feels sketchy. Do we have even 2000 connections? Maybe something is up there...

@hannahhoward
Contributor Author

@whyrusleeping

yea I now think the problem is not dead peers but slow peers vs fast peers.

I'm going to post a PR to bitswap with a test that replicates the issue

2279 is the length of session.activePeersArray.... I dunno if maybe we're not checking for uniqueness or something? seems unlikely though.

@whyrusleeping
Member

Yeah, 2279 feels very wrong. We shouldn't even have that many connections at all in general, let alone connections to peers that have the data we're interested in.

@eingenito
Contributor

Supersedes #2111

@hannahhoward
Contributor Author

Just want to say this issue is still open, and the things that are not checked off need to be done.

@ghost

ghost commented May 31, 2019

Would a possible workaround be to limit the Wants containing each block request to a single node per block? Initially, each node would receive a Want containing a unique block which is not sent to other nodes unless a given timeout period expires. When a node fails to provide the requested block, the block is added to the Want list sent to a different node, with the previously failed node blacklisted for the duration of the operation, until all the nodes in the DHT have been tried.

This would significantly reduce the overhead of asking every node that holds a block to send it, where all but the first download to finish is wasted duplication. In effect it would stripe block requests across the available nodes.

This has the obvious downside that if a large number of nodes advertise a block which they refuse, or fail, to provide, the client could end up waiting up to the timeout period multiplied by the number of bad nodes before it reaches a node which can and will satisfy the request. This could be mitigated by introducing a sliding window: after a configurable number of failed requests, the client includes the block in Wants sent to multiple nodes, with that number increasing over time. For example, the client could send the block to only one unique node for 3 attempts, then to 2 nodes for 3 attempts, then to 4 nodes, and so on, until it has either excluded every node as unable to provide the block or is including the block in a Want sent to every node.
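
The escalation schedule described in the comment above can be sketched as a fanout function: 1 peer per block for the first few attempts, doubling every few failures up to a cap. The function name, parameters, and the doubling policy are illustrative assumptions, not an implemented Bitswap policy:

```go
package main

import "fmt"

// wantFanout returns how many peers a want for one block should go to,
// given how many attempts have already failed: 1 peer for the first
// attemptsPerTier attempts, then 2, then 4, doubling each tier until
// maxPeers is reached.
func wantFanout(failedAttempts, attemptsPerTier, maxPeers int) int {
	fanout := 1
	for tier := failedAttempts / attemptsPerTier; tier > 0; tier-- {
		fanout *= 2
		if fanout >= maxPeers {
			return maxPeers
		}
	}
	return fanout
}

func main() {
	// With 3 attempts per tier: attempts 0-2 -> 1 peer, 3-5 -> 2 peers,
	// 6-8 -> 4 peers, then capped at 8 peers.
	for _, attempts := range []int{0, 3, 6, 9, 30} {
		fmt.Println(attempts, wantFanout(attempts, 3, 8))
	}
}
```

The cap matters for the thread's main complaint: even in the worst case, a block's want never fans out to all ~2000 peers at once.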

@Stebalien
Member

That's almost exactly what we now do.

@Stebalien
Member

Closing as this sprint has finished.


6 participants