fix: make rln rate limit spec compliant #2294

Merged
alrevuelta merged 3 commits into master from fix-rln-epoch on Dec 15, 2023

Conversation

alrevuelta
Contributor

fix: make rln rate limit spec compliant
closes #2289


github-actions bot commented Dec 14, 2023

You can find the image built from this PR at

quay.io/wakuorg/nwaku-pr:2294

Built from 041d67e

@alrevuelta
Contributor Author

Note that the changes in this PR might be backwards incompatible in some cases, when nodes send more than 1 msg per 10 seconds:

  • Old nodes will Reject these messages.
  • New nodes will Accept these messages.

This could fork the network, but only when RLN memberships send messages at a rate greater than 1 msg/10 sec.
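
A minimal sketch of why the epoch length matters (illustrative Nim only, not nwaku's code; the epochOf helper and the timestamps are assumptions): two messages 3 seconds apart share an epoch under a 10-second epoch unit but fall into different epochs under the spec's 1-second unit, so old nodes reject the second message while new nodes accept it.

```nim
# Illustrative sketch: how the epoch unit decides whether two messages
# collide in the same rate-limit slot. `epochOf` is a hypothetical helper.
proc epochOf(timestampSec: float64, epochUnitSeconds: float64): uint64 =
  ## Messages mapping to the same epoch index count against the same
  ## 1-message-per-epoch RLN rate limit.
  uint64(timestampSec / epochUnitSeconds)

when isMainModule:
  let t1 = 100.0
  let t2 = 103.0 # 3 s later: more than 1 msg/10 s, but at most 1 msg/s
  # Old behaviour (10 s epoch unit): same epoch, so the second message is rejected.
  assert epochOf(t1, 10.0) == epochOf(t2, 10.0)
  # New, spec-compliant behaviour (1 s epoch unit): different epochs, both accepted.
  assert epochOf(t1, 1.0) != epochOf(t2, 1.0)
```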

@alrevuelta alrevuelta left a comment
Contributor Author

@rymnc It looks like this change triggered some edge cases. Mind checking my questions?

@@ -445,7 +445,7 @@ procSuite "WakuNode - RLN relay":
proofAdded1 = node1.wakuRlnRelay.appendRLNProof(wm1, time)
# another message in the same epoch as wm1, it will break the messaging rate limit
wm2 = WakuMessage(payload: "message 2".toBytes(), contentTopic: contentTopic)
proofAdded2 = node1.wakuRlnRelay.appendRLNProof(wm2, time + EpochUnitSeconds)
proofAdded2 = node1.wakuRlnRelay.appendRLNProof(wm2, time)
alrevuelta
Contributor Author

@rymnc I'm unsure why wm2 had time + EpochUnitSeconds if it's expected to "break the messaging rate limit". Isn't it better to set the same time?

rymnc
Contributor

doesn't matter too much, just showing that 2 messages were generated within the same epoch window.

# mount the relay handler for node2
node2.subscribe((kind: PubsubSub, topic: DefaultPubsubTopic), some(relayHandler))
await sleepAsync(2000.millis)

await node1.publish(some(DefaultPubsubTopic), wm1)
await sleepAsync(10.seconds)
await node1.publish(some(DefaultPubsubTopic), wm2)
alrevuelta
Contributor Author

@rymnc Mind clarifying why this sleep was needed? The epoch is hardcoded above, so perhaps it's not needed?

rymnc
Contributor

It was for message propagation, I believe, due to some race condition we had earlier.

res1 == true
res2 == false
res3 == true
node2.wakuRlnRelay.nullifierLog.len() == 2
alrevuelta
Contributor Author

@rymnc The change from 10 to 1 in EpochUnitSeconds triggered this edge case (I believe). Mind clarifying what would be expected here?

rymnc
Contributor

this is correct, thanks for the fix 😁
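
As a reading aid for the nullifierLog assertion above, here is a hypothetical sketch of an epoch-keyed nullifier log (the NullifierLog type, the record/clearOlderThan helpers, and the clearing rule are assumptions, not nwaku's implementation): with 1-second epochs, messages published 10 seconds apart land in different epochs, and clearing epochs outside the validity window leaves a single stored entry.

```nim
import std/tables

# Illustrative sketch only (not nwaku's nullifierLog): an epoch-keyed log that
# records proof nullifiers per epoch and prunes epochs that have expired.
type NullifierLog = Table[uint64, seq[string]] # epoch -> nullifiers seen

proc record(log: var NullifierLog, epoch: uint64, nullifier: string) =
  log.mgetOrPut(epoch, @[]).add(nullifier)

proc clearOlderThan(log: var NullifierLog, epoch: uint64) =
  ## Drop entries for epochs that have fallen out of the validity window.
  var stale: seq[uint64]
  for e in log.keys:
    if e < epoch: stale.add(e)
  for e in stale: log.del(e)

when isMainModule:
  var log: NullifierLog
  log.record(100'u64, "n1") # first message
  log.record(110'u64, "n3") # a later message, 10 s later under 1 s epochs
  log.clearOlderThan(110'u64)
  assert log.len == 1       # after clearing, only one epoch entry is stored
```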

(res1 and res2 and res3) == true # all 3 are valid
node2.wakuRlnRelay.nullifierLog.len() == 1 # after clearing, only 1 is stored
res1 == true
res2 == false
alrevuelta
Contributor Author

@rymnc Not sure why res2 was expected to complete. As stated above, the second message "will break the messaging rate limit", which implies the handler won't be triggered. So res2 == false would be expected, wouldn't it?

rymnc
Contributor

agree, oversight on my part
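
A hypothetical sketch of the reasoning in this exchange (ValidationResult, validate, and the res1/res2 flags here are illustrative, not the nwaku or libp2p validator API): a second proof for the same membership in an already-used epoch is classified as spam and rejected, so the subscriber's handler never runs and the flag set by that handler stays false.

```nim
# Hypothetical sketch: a rate-limit validator in front of the relay handler.
type ValidationResult = enum Accept, Reject

var seenEpochs: seq[uint64] # epochs already used by this membership

proc validate(epoch: uint64): ValidationResult =
  ## A second proof for an already-seen epoch breaks the 1 msg/epoch limit.
  if epoch in seenEpochs:
    return Reject
  seenEpochs.add(epoch)
  return Accept

when isMainModule:
  var res1, res2 = false
  # wm1: first message in this epoch, accepted, so the handler runs.
  if validate(100'u64) == Accept: res1 = true
  # wm2: same epoch, rejected as spam, so the handler is never invoked.
  if validate(100'u64) == Accept: res2 = true
  assert res1 == true and res2 == false
```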

@rymnc rymnc left a comment
Contributor

LGTM, thanks for the fixes in the edge cases!

alrevuelta merged commit 5847f49 into master on Dec 15, 2023
9 of 10 checks passed
alrevuelta deleted the fix-rln-epoch branch on December 15, 2023 09:26
Successfully merging this pull request may close these issues.

bug: RLN messages are dropped (#2289)