
Clean up old peers from libp2p address book (don't retry) #831

Closed
holmesworcester opened this issue May 18, 2021 · 10 comments


The current proposal is to remove a peer from the libp2p address book on the first connection error.

holmesworcester created this issue from a note in Zbay (Backlog) May 18, 2021

holmesworcester commented May 18, 2021

We should ask the js-ipfs folks about this if it would be helpful.

holmesworcester changed the title from "Clean up old peers from libp2p address book" to "Clean up old peers from libp2p address book (don't retry)" May 18, 2021

EmiM commented May 19, 2021

I found a related topic on the libp2p forum and asked a question there: https://discuss.libp2p.io/t/peerstore-pruning/41/4

Conclusion: the js implementation doesn't have a mechanism for clearing old peers, but it can be done from the application layer: turn off autoDial and attempt to dial peers manually.
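
A minimal sketch of that approach, assuming the project's transports, muxers, encryption and discovery modules are already collected in a `baseOptions` object (a name used here only for illustration); the point is just the `autoDial: false` flag, which stops libp2p from dialing every discovered peer so the application can decide which peers to dial and which stale entries to drop:

```js
const Libp2p = require('libp2p')

// Sketch only: `baseOptions` stands for whatever modules waggle already configures.
async function createNode (baseOptions) {
  return Libp2p.create({
    ...baseOptions,
    config: {
      ...baseOptions.config,
      peerDiscovery: {
        ...(baseOptions.config || {}).peerDiscovery,
        autoDial: false // do not dial discovered peers automatically
      }
    }
  })
}
```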


EmiM commented May 19, 2021

I hooked into the peer:discovery event and printed the multiaddresses of each discovered peer. Example output for one peer:

cbafzbeicejdwbbosal5hsbmibvd2t527yalahwoetpz2qbysyr6zxbph7ma peer DISCOVERY
[
  <Multiaddr 363e32736836367873756f717862697165756478366f7577743366766a70627065777a743566377462726f636179723670326f773679626861642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/2sh66xsuoqxbiqeudx6ouwt3fvjpbpewzt5f7tbrocayr6p2ow6ybhad.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e326462796961726771727669786f6f666b657069326c727468707a6f6d75766e73676e6b6f357735637978626861746533777a78647969642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/2dbyiargqrvixoofkepi2lrthpzomuvnsgnko5w5cyxbhate3wzxdyid.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e326b646e636233797262746175666a32746a676d3375657037757066726d746f357a66746f6868766775787a3536643377326a716e6469642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/2kdncb3yrbtaufj2tjgm3uep7upfrmto5zftohhvguxz56d3w2jqndid.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e756d6569356536613632616e7a6277707a77353368646e76346e6261623733746f66636665697168373337736b376b6e65337172333261642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/umei5e6a62anzbwpzw53hdnv4nbab73tofcfeiqh737sk7kne3qr32ad.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e346a77776862626c61693266786e6c7a736c63366836763273367679787632686c766c367936707a366463623471357234706c70707579642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/4jwwhbblai2fxnlzslc6h6v2s6vyxv2hlvl6y6pz6dcb4q5r4plppuyd.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e777177636f6275796f6364666776796d706236786f3464713768656674636b7475736173756467377170797978707176783776716c7871642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/wqwcobuyocdfgvympb6xo4dq7heftcktusasudg7qpyyxpqvx7vqlxqd.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e7671336c6e3579753467727435706c3374676575726970796f78356866786f64707569717076737369636c7472766c6f766672776d7761642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/vq3ln5yu4grt5pl3tgeuripyox5hfxodpuiqpvssicltrvlovfrwmwad.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e737668686a787034737335726c3477776a696b6335767669376f763468366d3569346f7367686c66656263683364656a34617377337371642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/svhhjxp4ss5rl4wwjikc5vvi7ov4h6m5i4osghlfebch3dej4asw3sqd.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e7533373674786a656e677a6267696f66366c666c32666579686e68783532756f7374677a706d6a726b6268356c6d6c6b676d3670743661642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/u376txjengzbgiof6lfl2feyhnhx52uostgzpmjrkbh5lmlkgm6pt6ad.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e71776d7934627a66643733356a7565716d32666a357a626b6c776867766b667561723276706377766f6269363336356e6a677577347179642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/qwmy4bzfd735jueqm2fj5zbklwhgvkfuar2vpcwvobi6365njguw4qyd.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e667a6a746c376461326b6f726b7371676161756162636e69627677327335746978673570716161613476796233646c7577646d61777979642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/fzjtl7da2korksqgaauabcnibvw2s5tixg5pqaaa4vyb3dluwdmawyyd.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e333332346b74786d327735716f327371677265636978713274696c626c676374686e78796e6f6e6f767735646e3775756a72757a6f7771642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/3324ktxm2w5qo2sqgrecixq2tilblgcthnxynonovw5dn7uujruzowqd.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e326d746e346335697362336c676d73646b746f323279727a767a78696a6a7478686874777378376e766567746b646d7a73376468717579642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/2mtn4c5isb3lgmsdkto22yrzvzxijjtxhhtwsx7nvegtkdmzs7dhquyd.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e77736b746870673770677437636f376b746d343233326477747675356c65623667733465636874786c73616a633376626869706f7a6469642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/wskthpg7pgt7co7ktm4232dwtvu5leb6gs4echtxlsajc3vbhipozdid.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e6a6472786264626836656b717936746f3777366137747875626a357a6a6b737a6161666f6b336b6e6a3673347268636e6f6e6777753679642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/jdrxbdbh6ekqy6to7w6a7txubj5zjkszaafok3knj6s4rhcnongwu6yd.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e3379717935697467346e666c6d6269343674616b6f757773676e6861327377717662796937667a32357770756762776178786e356c6e69642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/3yqy5itg4nflmbi46takouwsgnha2swqvbyi7fz25wpugbwaxxn5lnid.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>,
  <Multiaddr 363e736b326e717a7079666c6474696d736f7468683376736f3573353734797461686272686a726437357168686d366e6737636967676a7869642e6f6e696f6e061e6cdd03a5032212204448ec10ba405f4f20b101a8f53eebf802c07b38937e7500e2588fb370bcff60 - /dns4/sk2nqzpyfldtimsothh3vso5s574ytahbrhjrd75qhhm6ng7ciggjxid.onion/tcp/7788/ws/p2p/QmSwCveGLa7nLFGRNENsfB89xK7K3xjutoQTnB82pB9M3V>
]

7788 is the port used by the entry node. In this case the number of multiaddresses comes from running the entry node locally (for tests and the like), which creates a new .onion address on each run while preserving the peerId. Libp2p keeps all of the node's previous addresses.

I am dialing all of them, but dial doesn't throw an error regardless of whether the address is current or not. It also always returns undefined. Perhaps I am missing something.
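
For reference, a rough sketch of the logging described above (illustrative, not the exact waggle code), assuming a js-libp2p version whose address book exposes `getMultiaddrsForPeer`:

```js
// Print each discovered peer's id and every multiaddress the address book
// currently holds for it, then attempt a plain dial.
node.on('peer:discovery', async (peerId) => {
  console.log(peerId.toB58String(), 'peer DISCOVERY')
  console.log(node.peerStore.addressBook.getMultiaddrsForPeer(peerId))

  const connection = await node.dial(peerId)
  console.log('dial returned:', connection)
})
```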


holmesworcester commented May 19, 2021 via email


EmiM commented May 19, 2021

Never mind, the undefined was my mistake; we DO get a connection object when a connection is established.
By "entry node" I mean the code that also runs as the entry node on AWS (so basically just waggle), but we sometimes run it locally, and then multiple addresses can be generated.
In our case only the entry node code will generate many addresses, because when we run zbaylite we have both a static peerId and a static onion address.


EmiM commented May 19, 2021

What I am doing:

  • on each peer:discovery I try to dial the discovered peer (just libp2p.dial(peerId), no extra options)
  • if the dial fails (it returns undefined), I remove the peerId from my books (libp2p.peerStore.delete); a sketch of this is below

After each deletion I log the current state of the addressBook to make sure the peer is really deleted. At first some peers come back to the list, but I think that's because other peers are still being discovered and they bring their own lists, which include peers already removed from mine.
When discovery stops, I am left with a few active peers in the addressBook.
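
A minimal sketch of that pruning loop, assuming autoDial is already disabled as described earlier; the names are illustrative rather than the actual waggle implementation:

```js
node.on('peer:discovery', async (peerId) => {
  try {
    // Plain dial, no extra options.
    await node.dial(peerId)
  } catch (err) {
    // The dial failed, so treat the peer as stale and drop its addresses,
    // keys and metadata from the peer store.
    node.peerStore.delete(peerId)
    console.log('Removed stale peer', peerId.toB58String(), '-', err.message)
    console.log('Address book now holds', node.peerStore.peers.size, 'peers')
  }
})
```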

Some stats from 20 minutes of running zbaylite:

  • received peer:discovery over 100 times
  • the addressBook usually started with about 15 peers and, after deletions, shrank to 3-4 (depending on the currently active peers)


EmiM commented May 19, 2021

I was performing the previous tests on a branch without DMs. After switching to the current develop branch I see many more peer:discovery events. In fact, one of the app launches went into an indefinite loop of the following actions: a peer is discovered, the address book grows, and inactive peers are deleted from the book.
However, if we had the cleaning mechanism in all active waggles, it wouldn't be a problem, I suppose.


holmesworcester commented May 19, 2021 via email

EmiM moved this from Backlog to In progress in Zbay May 20, 2021

EmiM commented May 20, 2021

I don't see a limit on the number of peers.

Ultimately we will have a certificate for each user in OrbitDB; the certificate will contain the onion address and peerId, so based on that we could construct the full address and dial the user (peer).
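
A hedged sketch of that idea: `onionAddress` and `peerId` are assumed fields of the certificate data (not an existing waggle API), and 7788 is the entry-node port from the dump above:

```js
const multiaddr = require('multiaddr')

// Illustrative only: build the full address from certificate data and dial it.
async function dialUserFromCertificate (node, certificate) {
  const { onionAddress, peerId } = certificate // assumed fields
  const address = multiaddr(`/dns4/${onionAddress}/tcp/7788/ws/p2p/${peerId}`)
  return node.dial(address)
}
```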


holmesworcester commented May 20, 2021 via email

EmiM moved this from In progress to Ready for QA in Zbay Jun 8, 2021
kingalg added this to Ready for QA in Quiet Jun 15, 2021
kingalg removed this from Ready for QA in Zbay Jun 15, 2021
vinkabuki moved this from Ready for QA to Done / Approved in Quiet Nov 2, 2021