Saving previously seen nodes for later bootstrapping #3926
Comments
Yes! Definitely. The blocker here is making sure we store peerstore data to a persistent datastore (right now all peer info is kept in memory).
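As a rough sketch of what a datastore-backed peerstore could look like, assuming the pstoreds package from go-libp2p-peerstore (since deprecated in newer go-libp2p releases) and a LevelDB datastore; newPersistentPeerstore is a made-up helper, and the exact import paths have moved between go-libp2p versions:

```go
package main

import (
	"context"

	leveldb "github.com/ipfs/go-ds-leveldb"
	"github.com/libp2p/go-libp2p-core/peerstore"
	pstoreds "github.com/libp2p/go-libp2p-peerstore/pstoreds"
)

// newPersistentPeerstore backs the peerstore with an on-disk LevelDB
// datastore so peer records survive restarts (by default everything
// lives in memory).
func newPersistentPeerstore(ctx context.Context, path string) (peerstore.Peerstore, error) {
	store, err := leveldb.NewDatastore(path, nil)
	if err != nil {
		return nil, err
	}
	return pstoreds.NewPeerstore(ctx, store, pstoreds.DefaultOpts())
}
```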
Related anecdote: not sure what approach would be good for ipfs in terms of storing peers and managing stored peers for bootstrap, or what other systems have tried.
@djdv that sounds like a great approach. Especially tracking failures for removal.
So did this happen? Thanks.
@Kubuxu @whyrusleeping
@bigs is working on storing the peerstore on disk (libp2p/go-libp2p-peerstore#28). After that, it'll be a matter of remembering which peers tend to reliably be online.
Yup. We'll also need to consider how we re-initialize our TTLs after rebooting, but this is coming down the pike.
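One hedged sketch of TTL re-initialization, assuming the standard AddrBook methods (PeersWithAddrs, Addrs, SetAddrs); reinitTTLs is an invented name and the choice of TTL is just one option:

```go
package main

import (
	"time"

	"github.com/libp2p/go-libp2p-core/peerstore"
)

// reinitTTLs re-stamps every persisted address with a fresh TTL after a
// reboot, instead of trusting whatever expiry survived on disk.
func reinitTTLs(ps peerstore.Peerstore, ttl time.Duration) {
	for _, p := range ps.PeersWithAddrs() {
		ps.SetAddrs(p, ps.Addrs(p), ttl)
	}
}

// e.g. reinitTTLs(ps, peerstore.RecentlyConnectedAddrTTL)
```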
Ideas being discussed in libp2p/go-libp2p-kad-dht#254
GFW blocked all default bootstrap nodes.
@whyrusleeping Do you know the status of this issue?
@Geo25rey there are a number of issues (and some PRs) in the DHT repo around this that you can check out. The short version is that there's interest in doing this, and some good proposals, but the plan is not to do it until we've landed higher-priority work on performing smooth upgrades to the DHT protocol (e.g. libp2p/go-libp2p-kad-dht#616).
@aschmahmann What do you mean by that?
Two years later, neither libp2p/go-libp2p-kad-dht#616 nor libp2p/go-libp2p-kad-dht#254 has happened. Meanwhile, various countries and companies can cripple connectivity by blocking the well-known list of hardcoded bootstrappers. I propose we do something rather than keep the current broken state. Help wanted: if someone opens a PR that preserves currently connected peers. MVP: …
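For illustration only, a minimal "preserve currently connected peers" hook over the go-libp2p host API might look like the sketch below; savePeers, the file path, and the JSON-array format are invented here, not taken from any PR:

```go
package main

import (
	"encoding/json"
	"os"

	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/peer"
)

// savePeers snapshots the peers we are currently connected to so a later
// boot can load them as extra bootstrappers; peer.AddrInfo already knows
// how to marshal itself to JSON.
func savePeers(h host.Host, path string) error {
	infos := make([]peer.AddrInfo, 0)
	for _, p := range h.Network().Peers() {
		infos = append(infos, h.Peerstore().PeerInfo(p))
	}
	data, err := json.Marshal(infos)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600)
}
```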
Per @lidel, taking this one since no one in the community has tackled it yet (if someone is still interested, I can be the guide and reviewer, but they should say so now).
Actively working on this. It's taking some time to do it right; will push something by EOW.
Beautiful people of the ipfsphere, I have a very WIP PR with an initial implementation of this in #8856. Any feedback will be very useful to better understand the use cases we should support.
One thing that might be useful to consider for this (and potentially other cases) is generalizing a bit to have bootstrapping peers grouped. If I'm shipping an application, it might make sense to have three groups of bootstrap nodes: the default nodes that connect to the broader decentralized IPFS ecosystem, a set of nodes that the application shipper runs to enable closer connections to other users of the app (and to take some load off the default nodes), and previously seen nodes.

I could see this being useful with a combination of priority levels and a percentage of nodes to connect to within each group (e.g. 100% of application-shipper nodes at priority 1, 25% of default nodes at priority 1 to reduce the burden on them, and 33% of previously connected nodes at priority 2). I wouldn't want to sample randomly among all nodes in the bootstrap list; I would want to sample within these groups to maintain better performance of my application.

I don't think this improvement should hold up the initial code, but it would be a more general approach that would solve this PR and more broadly help IPFS in other use cases too. Additionally, having nodes grouped could provide useful feedback in the future (e.g. if all nodes in the default bootstrap list fail to connect, maybe there's an issue occurring there, or they're blocked by the GFW or a corporate firewall), and this could trigger further actions.
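To make the grouping idea concrete, here is a rough sketch; BootstrapGroup, Priority, Fraction, and sample are invented names, not anything in go-ipfs or go-libp2p:

```go
package main

import (
	"math/rand"

	"github.com/libp2p/go-libp2p-core/peer"
)

// BootstrapGroup is a hypothetical config shape for grouped bootstrapping.
type BootstrapGroup struct {
	Name     string
	Priority int     // lower priority number dials first
	Fraction float64 // portion of the group to dial, e.g. 0.25
	Peers    []peer.AddrInfo
}

// sample picks Fraction of a group's peers at random, so dial load is
// spread within each group rather than across the whole bootstrap list.
func sample(g BootstrapGroup) []peer.AddrInfo {
	n := int(float64(len(g.Peers)) * g.Fraction)
	if n < 1 && len(g.Peers) > 0 {
		n = 1
	}
	if n > len(g.Peers) {
		n = len(g.Peers)
	}
	out := make([]peer.AddrInfo, 0, n)
	for _, i := range rand.Perm(len(g.Peers))[:n] {
		out = append(out, g.Peers[i])
	}
	return out
}
```

Groups would then be dialed in ascending Priority order, falling back to the next tier (e.g. previously seen peers) only when a whole group fails.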
Version information:
N/A
Type:
Enhancement
Severity:
Medium
Description:
The bootstrap nodes built into the ipfs client don't seem to work for me at all; I had to find and add some manually.
Generally, having such a centralized bootstrap system for an otherwise decentralized network is a weakness:
#3908
Should ipfs automatically save all the nodes it has seen, along with the time each was last seen, so the list can be cleaned out later?
That should make bootstrapping more reliable and truly decentralized.
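A minimal sketch of such a record, combining last-seen times with the failure tracking suggested above; seenPeer, maxAge, and maxFailures are illustrative names:

```go
package main

import "time"

// seenPeer records when a peer was last seen and how many recent dial
// attempts have failed.
type seenPeer struct {
	LastSeen time.Time
	Failures int
}

// prune cleans out the list later, as proposed: drop peers not seen
// within maxAge or with too many consecutive dial failures.
func prune(seen map[string]seenPeer, maxAge time.Duration, maxFailures int) {
	now := time.Now()
	for id, rec := range seen {
		if now.Sub(rec.LastSeen) > maxAge || rec.Failures > maxFailures {
			delete(seen, id)
		}
	}
}
```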