Store peerstore data to disk #2848

Open
whyrusleeping opened this issue Jun 13, 2016 · 12 comments
Labels
exp/expert Having worked on the specific codebase is important status/blocked Unable to be worked further until needs are met

Comments

@whyrusleeping
Member

This is very similar to #2847.

Information about peers (their addresses, the protocols they support, their public keys) is currently stored in memory and needs to be written to disk to avoid consuming excess memory.

I haven't started this one yet and would appreciate some help getting this done.
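
For context, an illustrative sketch (not the actual go-libp2p-peerstore types; all names are invented) of the kind of per-peer data the in-memory peerstore holds, all of which would need to be persisted:

```go
package peerstore

import "time"

// peerRecord is an assumed, illustrative shape of what the in-memory
// peerstore keeps for each peer and what would need to be written to disk.
type peerRecord struct {
	Addrs     map[string]time.Time // multiaddr -> expiry (TTL)
	Protocols []string             // protocol IDs the peer has advertised
	PubKey    []byte               // serialized public key
}

// store maps a peer ID (string form) to its record; today this lives
// entirely in memory for every peer the node has ever heard about.
type store map[string]*peerRecord
```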

@whyrusleeping whyrusleeping added help wanted Seeking public contribution on this issue exp/expert Having worked on the specific codebase is important labels Jun 13, 2016
@whyrusleeping whyrusleeping added this to the Ipfs 0.4.3 milestone Jun 21, 2016
@Kubuxu
Member

Kubuxu commented Jul 6, 2016

@whyrusleeping and I decided to bump this off the 0.4.3 milestone; it uses only 6 MiB of RAM on long-running nodes.

@Kubuxu Kubuxu removed this from the ipfs-0.4.3 milestone Jul 6, 2016
@csasarak
Contributor

I know this was removed from the milestone, but I wouldn't mind digging a little deeper into the structures IPFS uses if you still want the help, @whyrusleeping.

@whyrusleeping
Member Author

@csasarak that would be very helpful! I'd love to have some help with this :)

@csasarak
Contributor

csasarak commented Aug 4, 2016

Cool, I'm actually traveling starting Saturday through to next Friday, so unfortunately I won't be able to help until then. I've been thinking about this, though, and there might be some way to make a reified caching-strategy object (if one doesn't exist already) or interface. That way things could get a bit smarter than simply writing entries to disk once the finger table grows past a certain point.
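
For illustration, a hypothetical shape such a reified caching-strategy interface might take (names are invented for this sketch, not an existing go-ipfs type):

```go
package peerstore

// EvictionStrategy is a hypothetical interface for the "reified caching
// strategy" idea above: it decides which peer records stay in memory and
// which get flushed to disk, instead of hard-coding a size threshold.
type EvictionStrategy interface {
	// RecordAccess notes that a peer's record was just read or written.
	RecordAccess(peerID string)
	// Evictable returns the peer IDs whose records should be written to
	// disk and dropped from memory now.
	Evictable() []string
}
```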

Also, crazy idea, but would it be possible or desirable to store peerstore data in IPFS itself? I'm wondering if that might be an efficient way to pass around peerstores or parts of peerstores to other nodes that request them?

@whyrusleeping
Member Author

The thought of storing the information in IPFS itself is very appealing, but it has a few challenges off the bat. The first is that the peerstore data is very mutable in nature: addresses change quickly, new peers come online and leave, and our knowledge of things changes with nearly every RPC made (at an extreme). IPFS objects are immutable, so every change would end up creating a new object (unless optimized similarly to how the MFS bubbling-up happens).

The second concern is privacy. It's not necessarily a problem, but it's something that needs to be thought through: if you put all the peerstore data in IPFS, it becomes accessible to everyone.

Anyways, safe travels, and I'll hopefully hear from you after next Friday :)

@csasarak
Contributor

csasarak commented Aug 4, 2016

I figured there'd be something wrong with it, and it's probably a spec question rather than an issue for go-ipfs anyway. I will get back to you - have a good week!

@jbenet
Member

jbenet commented Aug 4, 2016

  • the peerstore data in IPFS makes sense, like the pinset records (private, not distributed)
  • with a lazy write (i.e. no need to flush on every write, because it's not consistency-critical)
  • would enable the in-memory peerstore to become an LRU cache
  • step towards a peer-discovery protocol based on previously seen nodes
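
A rough sketch of the lazy-write idea from the bullets above (all names and the flush interval are invented for illustration, not an existing API):

```go
package peerstore

import (
	"sync"
	"time"
)

// lazyBook is a hypothetical write-behind store: writes land in memory
// immediately and are flushed to disk in the background, since the data
// is not consistency-critical and losing a few records on crash is fine.
type lazyBook struct {
	mu    sync.Mutex
	dirty map[string][]byte // peer ID -> serialized record awaiting flush
}

func (b *lazyBook) Put(peerID string, rec []byte) {
	b.mu.Lock()
	b.dirty[peerID] = rec // no disk write on the hot path
	b.mu.Unlock()
}

// flushLoop periodically persists dirty records using the supplied persist
// function (e.g. a write into a go-datastore).
func (b *lazyBook) flushLoop(persist func(id string, rec []byte)) {
	for range time.Tick(30 * time.Second) {
		b.mu.Lock()
		for id, rec := range b.dirty {
			persist(id, rec)
			delete(b.dirty, id)
		}
		b.mu.Unlock()
	}
}
```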

@csasarak
Contributor

@whyrusleeping I've returned. @jbenet Are you saying that we should try to implement it this way? I'd assume that storing it in IPFS would achieve the goal of writing it to disk.
Or should we open a discussion elsewhere about storing data in IPFS and go with the more naive implementation for now?

@whyrusleeping
Member Author

@csasarak for now let's not store the data in IPFS; we need to figure a lot of things out first.

Let's just put this data in leveldb for now. But before we get to that, we need to make go-libp2p-peerstore support writing into a go-datastore. You should go open an issue there to start discussing the design/changes.
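
A rough sketch of what "writing into a go-datastore" could look like with a leveldb backend. The key layout and serialized value are made-up placeholders, and this assumes a recent go-datastore API where operations take a context:

```go
package main

import (
	"context"
	"fmt"

	ds "github.com/ipfs/go-datastore"
	leveldb "github.com/ipfs/go-ds-leveldb"
)

func main() {
	// Open a leveldb-backed go-datastore; the path is illustrative.
	store, err := leveldb.NewDatastore("/tmp/peerstore-ldb", nil)
	if err != nil {
		panic(err)
	}
	defer store.Close()

	ctx := context.Background()

	// Assumed key layout: one key per peer under a /peers/addrs/ prefix,
	// with the value being a serialized address record.
	key := ds.NewKey("/peers/addrs/QmExamplePeerID")
	if err := store.Put(ctx, key, []byte("serialized-addr-record")); err != nil {
		panic(err)
	}

	val, err := store.Get(ctx, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("stored %d bytes for peer\n", len(val))
}
```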

@ghost

ghost commented Nov 15, 2016

I took a first stab at it in libp2p/go-libp2p-peerstore/pull/10, storing peer addresses in a datastore.

Anybody care to take a look?

@whyrusleeping whyrusleeping added the status/ready Ready to be worked label Nov 28, 2016
@Stebalien Stebalien added status/deferred Conscious decision to pause or backlog need/review Needs a review status/blocked Unable to be worked further until needs are met and removed status/ready Ready to be worked status/deferred Conscious decision to pause or backlog need/review Needs a review labels Dec 18, 2018
@Stebalien
Member

This has now been implemented but is blocked on work being done in libp2p to make it performant (e.g., libp2p/go-libp2p#1702).
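
For reference, a minimal sketch of constructing the datastore-backed peerstore, assuming the pstoreds package as it existed in go-libp2p-peerstore around this time (the package has since moved into go-libp2p and the exact API may differ):

```go
package main

import (
	"context"

	leveldb "github.com/ipfs/go-ds-leveldb"
	pstoreds "github.com/libp2p/go-libp2p-peerstore/pstoreds"
)

func main() {
	ctx := context.Background()

	// Back the peerstore with a leveldb go-datastore on disk (path illustrative).
	store, err := leveldb.NewDatastore("/tmp/peerstore-ldb", nil)
	if err != nil {
		panic(err)
	}
	defer store.Close()

	// Construct the datastore-backed peerstore with default options.
	ps, err := pstoreds.NewPeerstore(ctx, store, pstoreds.DefaultOpts())
	if err != nil {
		panic(err)
	}
	defer ps.Close()
	// ps can then be handed to the libp2p host constructor (e.g. via the
	// libp2p.Peerstore option) so peer data survives restarts.
}
```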

@melroy89

IPFS is actually using the most memory over time of all my server apps.
