Currently (in the cypress DHT branch) we refresh up to the bucket whose common prefix length matches the peer in our routing table that shares the most bits with us. This could be a problem, as described in libp2p/go-libp2p-kbucket#71 (comment), because we could get unlucky and end up querying far more buckets than is reasonable for our network size.
This is not yet an issue in practice: because of limitations in our DHT RPCs we cannot query an arbitrary KadID to fill our buckets, so we have precalculated random ID prefixes for only the first 15 buckets and will therefore query at most 15 buckets.
Once that limitation is fixed and the cap of refreshing the first 15 buckets is dropped, we could run into problems.
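To illustrate the unlucky-peer scenario, here is a minimal Go sketch (not the actual go-libp2p-kbucket code; `commonPrefixLen`, `bucketsToRefresh`, and the peer IDs are hypothetical) of how refreshing up to the deepest peer's common prefix length interacts with the 15-bucket cap:

```go
package main

import (
	"fmt"
	"math/bits"
)

// maxRefreshBuckets models the current cap: only the first 15 buckets have
// precomputed random ID prefixes, so at most 15 buckets are ever refreshed.
const maxRefreshBuckets = 15

// commonPrefixLen returns the number of leading bits shared by two
// equal-length Kademlia IDs.
func commonPrefixLen(a, b []byte) int {
	for i := range a {
		if x := a[i] ^ b[i]; x != 0 {
			return i*8 + bits.LeadingZeros8(x)
		}
	}
	return len(a) * 8
}

// bucketsToRefresh returns how many buckets a refresh would walk: one per
// common-prefix length from 0 up to the deepest peer in the routing table,
// capped by maxRefreshBuckets.
func bucketsToRefresh(self []byte, peers [][]byte) int {
	maxCpl := 0
	for _, p := range peers {
		if cpl := commonPrefixLen(self, p); cpl > maxCpl {
			maxCpl = cpl
		}
	}
	n := maxCpl + 1 // refresh buckets 0..maxCpl
	if n > maxRefreshBuckets {
		n = maxRefreshBuckets
	}
	return n
}

func main() {
	self := []byte{0x00, 0x00}
	// A single unlucky peer sharing 12 bits with us forces a walk over
	// 13 buckets, regardless of how small the network actually is.
	peers := [][]byte{{0x00, 0x08}, {0x80, 0x00}}
	fmt.Println(bucketsToRefresh(self, peers)) // prints 13
}
```

With the cap removed, the number of buckets refreshed is bounded only by the deepest shared prefix, which a single unrepresentative peer can inflate well beyond what the network size justifies.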