
[Feat.Req] Enable relay network with pubsub sharding #295

Closed · 3 tasks

hopeyen opened this issue Oct 24, 2023 · 1 comment · Fixed by #297
Labels: p1 High priority · size:medium Medium · type:tracking Tracking issues with related scope

Comments

hopeyen (Collaborator) commented Oct 24, 2023

Problem statement

Currently the radio nodes use the filter and lightpush gossip protocols and rely on waku nodes hosted by us. The hosted nodes are solely responsible for relaying messages between all the radio nodes, and the traffic is becoming too heavy for them to handle. We need a way for the network to scale without being limited by our hosted nodes.

Expectation proposal

Currently the pubsub topic is split into Graphcast mainnet and testnet, while content topics follow the format radio_name/version/identifier, where the identifier is usually a subgraph deployment hash.
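
As a reference, here is a minimal sketch of that content topic format; the helper name and example values are hypothetical, not the radio's actual code.

```rust
/// Build a content topic in the radio_name/version/identifier format described above.
fn content_topic(radio_name: &str, version: u32, identifier: &str) -> String {
    format!("{radio_name}/{version}/{identifier}")
}

fn main() {
    // e.g. "poi-radio/0/QmExampleDeploymentHash" (example deployment hash)
    println!("{}", content_topic("poi-radio", 0, "QmExampleDeploymentHash"));
}
```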

Required

  • Enable the relay protocol on all radio nodes and move away from filter and lightpush participation in the network.
  • Without the filter protocol, the radio must add an additional check on content_topic so that it only handles the messages the specific radio is interested in (see the sketch after this list).
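
Below is a minimal sketch of that content_topic check, assuming the radio keeps a set of content topics it cares about; the function and topic names are illustrative, not the actual graphcast-sdk API.

```rust
use std::collections::HashSet;

/// Returns true if the radio should handle a message arriving on `content_topic`.
fn should_handle(content_topic: &str, interested_topics: &HashSet<String>) -> bool {
    interested_topics.contains(content_topic)
}

fn main() {
    let mut interested_topics = HashSet::new();
    // radio_name/version/identifier, where the identifier is a subgraph deployment hash.
    interested_topics.insert("poi-radio/0/QmExampleDeploymentHash".to_string());

    // With relay (and no server-side filter), every message on the pubsub topic is
    // received, so the radio drops anything whose content topic it does not track.
    let incoming = "poi-radio/0/QmOtherDeploymentHash";
    if should_handle(incoming, &interested_topics) {
        println!("handle message on {incoming}");
    } else {
        println!("skip message on {incoming}");
    }
}
```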

Expansion (can move to a separate issue focused on sharding)

  • To reduce the traffic each participating node must relay, we can use pubsub topic sharding. Given a fixed number of shards, every possible content topic maps to exactly one shard, and each radio node only participates in the shards containing its topics (see the sketch after this list).
    • Example sharding: if we split the network into 9 shards, then for each deployment the corresponding shard is graphcast_mainnet_[deployment.bytes32().to_int() % 9].
    • When a radio generates its content topics, it should also derive the corresponding shards and subscribe to them.
    • When a radio sends a message, it must publish on the correct shard.
    • Hosted nodes should each take 3 pubsub topics as their shards, but we should expect them to be used only for bootstrapping.
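
A minimal sketch of the deployment-to-shard mapping above, assuming the deployment is available as its 32-byte representation; the helper names are illustrative, not the radio's actual code. The bytes are folded into the modulus byte by byte, which is equivalent to interpreting them as one big-endian integer and taking it modulo the shard count.

```rust
const NUM_SHARDS: u64 = 9;

/// Interpret the 32 bytes as a big-endian integer reduced modulo NUM_SHARDS,
/// folding byte by byte so no big-integer library is needed.
fn shard_index(bytes32: &[u8; 32]) -> u64 {
    bytes32
        .iter()
        .fold(0u64, |acc, &b| (acc * 256 + b as u64) % NUM_SHARDS)
}

/// Build the shard pubsub topic for a deployment, e.g. "graphcast_mainnet_4".
fn shard_pubsub_topic(network: &str, bytes32: &[u8; 32]) -> String {
    format!("graphcast_{}_{}", network, shard_index(bytes32))
}

fn main() {
    // Hypothetical deployment hash bytes; a real radio would decode them from the
    // subgraph deployment's bytes32 form.
    let deployment: [u8; 32] = [0xab; 32];
    // The radio subscribes to this shard and publishes on it, so only nodes
    // sharing the shard relay that traffic.
    println!("{}", shard_pubsub_topic("mainnet", &deployment));
}
```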

Alternative considerations

  • RLN-relay requires a registration smart contract.
  • Autosharding is under development by the Waku team.

Additional context
Network sharding: https://rfc.vac.dev/spec/51/

hopeyen added the size:medium Medium, p1 High priority, and type:tracking Tracking issues with related scope labels Oct 24, 2023
vpavlin commented Oct 25, 2023

The described approach sounds reasonable! It is definitely more scalable than radios relying only on Lightpush & Filter.

Keep in mind that if you ever decide to change the number of shards, the contentTopic -> shard mapping will change. I don't think that is a huge problem for you, but be aware that while the change is being rolled out to the network, some messages may travel on 2 different pubsub topics, so delivery might be unreliable until the whole network is upgraded.

We are trying to solve this by introducing a generation number in the content topic, so that if we need to add shards, they will be on gen+1 and the old topic-to-shard mapping will still apply.
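
A rough sketch of that idea, assuming the shard count is looked up per generation; the generation table and function names are assumptions for illustration, not the Waku implementation.

```rust
/// Hypothetical table: generation 0 keeps the original 9 shards, a later
/// generation can use an expanded shard count without breaking old topics.
fn shards_for_generation(generation: u32) -> u64 {
    match generation {
        0 => 9,
        _ => 18,
    }
}

/// Same big-endian fold as before, but with the shard count chosen by the
/// generation carried in the content topic.
fn shard_index(bytes32: &[u8; 32], generation: u32) -> u64 {
    let num_shards = shards_for_generation(generation);
    bytes32
        .iter()
        .fold(0u64, |acc, &b| (acc * 256 + b as u64) % num_shards)
}

fn main() {
    let deployment: [u8; 32] = [0xab; 32];
    // Content topics tagged with generation 0 keep resolving to the old 9-shard
    // mapping, while generation 1 topics use the expanded shard count.
    println!("gen 0 shard: {}", shard_index(&deployment, 0));
    println!("gen 1 shard: {}", shard_index(&deployment, 1));
}
```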
