Just came out of a standup with @alanshaw and here is what we discussed.
In order to achieve the goal of "be ~1 hop away from every other node in the DHT", we need one evenly distributed PeerID for every 20 nodes in the network, so that we land in everyone else's first k-bucket.
The formula is quite simple:
Number of total DHT nodes in the network / 20 = number of sybils to spawn (to be ~1 hop away)
The current network size is ~20K, so applying the formula we get that we need 1000 sybils to meet this goal.
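For clarity, here is the calculation spelled out as a tiny Go snippet (the function name is just for illustration, and the bucket size of 20 is the Kademlia k we assume above):

```go
package main

import "fmt"

// sybilsNeeded returns how many evenly distributed PeerIDs we need so that
// one of them lands in every other node's first k-bucket.
func sybilsNeeded(networkSize int) int {
	const k = 20 // k-bucket size
	return (networkSize + k - 1) / k // round up
}

func main() {
	// Current estimated network size from above: ~20,000 DHT nodes.
	fmt.Println(sybilsNeeded(20000)) // => 1000
}
```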
We have a limit of 200 sybils per hydra node, but we can spawn multiple hydras. Because we want the PeerIDs to be evenly distributed even across hydra nodes, we need to split the PeerID generation logic out into a separate service that is shared by the hydras.
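Roughly, that shared service could look something like the minimal sketch below, assuming an HTTP interface. The endpoint path, port and the balancing placeholder are made up for illustration, not the actual hydra-booster API:

```go
package main

import (
	"crypto/rand"
	"encoding/json"
	"log"
	"net/http"

	"github.com/libp2p/go-libp2p-core/crypto"
	"github.com/libp2p/go-libp2p-core/peer"
)

// nextBalancedKey stands in for the real balancing logic: it would track the
// PeerIDs already handed out across all hydras and generate keys until the
// new ID falls in the least-populated region of the keyspace. Here we just
// generate a random Ed25519 key as a placeholder.
func nextBalancedKey() (crypto.PrivKey, peer.ID, error) {
	priv, pub, err := crypto.GenerateEd25519Key(rand.Reader)
	if err != nil {
		return nil, "", err
	}
	pid, err := peer.IDFromPublicKey(pub)
	return priv, pid, err
}

func main() {
	// Hypothetical endpoint: a hydra calls this when it needs a new head ID.
	http.HandleFunc("/idgen/add", func(w http.ResponseWriter, r *http.Request) {
		priv, pid, err := nextBalancedKey()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		raw, err := crypto.MarshalPrivateKey(priv)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		json.NewEncoder(w).Encode(map[string]interface{}{
			"peer_id": pid.String(),
			"key":     raw, // base64-encoded by encoding/json
		})
	})
	log.Fatal(http.ListenAndServe(":7779", nil))
}
```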
Additionally, we want to adjust the scale of the hydras to the number of nodes in the network. For that, we want to run a cronjob that readjusts it every week (or every day).
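The scaling job itself is just the formula above plus the 200-sybils-per-hydra limit. A rough sketch, assuming the real cronjob would get the network size from a crawler / network-size estimator rather than hard-coding it:

```go
package main

import "fmt"

const sybilsPerHydra = 200 // operational limit per hydra node

// hydrasNeeded converts a target sybil count into a number of hydra nodes.
func hydrasNeeded(sybils int) int {
	return (sybils + sybilsPerHydra - 1) / sybilsPerHydra // round up
}

func main() {
	// 1000 sybils (from the ~20K-node estimate) => 5 hydra nodes.
	fmt.Println(hydrasNeeded(1000))
}
```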
One extra step: as we auto-scale up and down, we don't want to lose the work already done harvesting records. So instead of each hydra having its own belly (record store), we want a shared record store across hydras (using a DB like Postgres).
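A minimal sketch of what the shared belly could look like on Postgres; the table and column names are made up for illustration, and a real implementation would presumably satisfy the go-datastore interface so the hydras can plug it in directly:

```go
package main

import (
	"context"
	"database/sql"

	_ "github.com/lib/pq" // Postgres driver
)

// SharedBelly is a shared record store that any hydra can read and write.
type SharedBelly struct {
	db *sql.DB
}

func NewSharedBelly(connStr string) (*SharedBelly, error) {
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		return nil, err
	}
	return &SharedBelly{db: db}, nil
}

// Put upserts a record so that hydras don't clobber each other's writes.
func (b *SharedBelly) Put(ctx context.Context, key string, value []byte) error {
	_, err := b.db.ExecContext(ctx,
		`INSERT INTO records (key, value) VALUES ($1, $2)
		 ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value`,
		key, value)
	return err
}

// Get reads a record written by any hydra; returns nil if not found.
func (b *SharedBelly) Get(ctx context.Context, key string) ([]byte, error) {
	var value []byte
	err := b.db.QueryRowContext(ctx,
		`SELECT value FROM records WHERE key = $1`, key).Scan(&value)
	if err == sql.ErrNoRows {
		return nil, nil
	}
	return value, err
}
```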
Tasks:
- Separate the PeerID generation into a networked service
- Set up a cronjob to autoscale the number of sybils
- Implement the shared datastore with Postgres