
fix(mempool)!: limit mempool gossip rate on a per-peer basis #787

Merged: 10 commits into v0.14-dev on May 20, 2024

Conversation

@lklimek (Collaborator) commented on May 13, 2024

Issue being fixed or feature implemented

The tx receive rate and tx send rate limits were global and shared by all peers. This is incorrect and not what we intended.

What was done?

  • Removed the TxRecvRatePunishPeer option.
  • Moved the rate-limit logic to the p2p client.
  • Implemented rate limits on a per-peer level, with a garbage collector: peers not used for the last 60 s have their rate limiters removed (a sketch of this scheme follows below).
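
For illustration, here is a minimal sketch of the per-peer rate-limiting scheme described above. All type and method names (PeerRateLimiters, Allow, GC) are hypothetical, not Tenderdash's actual API; it assumes a token bucket from golang.org/x/time/rate and the 60 s idle cutoff mentioned above:

```go
package mempool

import (
	"sync"
	"time"

	"golang.org/x/time/rate"
)

// peerLimiter tracks one peer's rate limiter plus the time it was last used,
// so that limiters for idle peers can be garbage-collected.
type peerLimiter struct {
	limiter  *rate.Limiter
	lastUsed time.Time
}

// PeerRateLimiters keeps an independent token bucket per peer.
type PeerRateLimiters struct {
	mtx     sync.Mutex
	limit   rate.Limit    // per-peer limit in tx/s, e.g. txRecvRateLimit
	peers   map[string]*peerLimiter
	maxIdle time.Duration // peers idle longer than this are dropped (60 s in this PR)
}

func NewPeerRateLimiters(limit float64, maxIdle time.Duration) *PeerRateLimiters {
	return &PeerRateLimiters{
		limit:   rate.Limit(limit),
		peers:   make(map[string]*peerLimiter),
		maxIdle: maxIdle,
	}
}

// Allow reports whether peerID may send/receive one more tx right now,
// creating a fresh limiter for previously unseen (or collected) peers.
func (p *PeerRateLimiters) Allow(peerID string) bool {
	p.mtx.Lock()
	defer p.mtx.Unlock()

	pl, ok := p.peers[peerID]
	if !ok {
		// Burst equal to one second's worth of txs.
		pl = &peerLimiter{limiter: rate.NewLimiter(p.limit, int(p.limit))}
		p.peers[peerID] = pl
	}
	pl.lastUsed = time.Now()
	return pl.limiter.Allow()
}

// GC removes limiters for peers that have been idle longer than maxIdle.
func (p *PeerRateLimiters) GC() {
	p.mtx.Lock()
	defer p.mtx.Unlock()

	cutoff := time.Now().Add(-p.maxIdle)
	for id, pl := range p.peers {
		if pl.lastUsed.Before(cutoff) {
			delete(p.peers, id)
		}
	}
}
```

A peer whose limiter was garbage-collected simply gets a fresh one on its next message; calling GC() periodically (e.g. from a ticker) keeps the map bounded by the number of recently active peers.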

How Has This Been Tested?

  1. Added unit tests for sending and receiving messages.
  2. Tested on a local devnet with 3 nodes; Core was stopped on local_1 during the test so that it would not consume txs.

Configuration:

local_1:

yarn dashmate config set --config local_1 platform.drive.tenderdash.mempool.txRecvRateLimit 12
yarn dashmate config set --config local_2 platform.drive.tenderdash.mempool.txSendRateLimit 10
yarn dashmate config set --config local_3 platform.drive.tenderdash.mempool.txSendRateLimit 10

yarn dashmate config set --config local_1 platform.drive.tenderdash.metrics.enabled true
yarn dashmate config set --config local_2 platform.drive.tenderdash.metrics.enabled true
yarn dashmate config set --config local_3 platform.drive.tenderdash.metrics.enabled true

yarn dashmate config set --config local_1 platform.drive.tenderdash.metrics.host 0.0.0.0
yarn dashmate config set --config local_2 platform.drive.tenderdash.metrics.host 0.0.0.0
yarn dashmate config set --config local_3 platform.drive.tenderdash.metrics.host 0.0.0.0

yarn dashmate config render --config local_1
yarn dashmate config render --config local_2
yarn dashmate config render --config local_3

yarn dashmate restart --config local_1 --platform -v
yarn dashmate restart --config local_2 --platform -v
yarn dashmate restart --config local_3 --platform -v

Result: 300 txs in 15 s (20 tx/s) were gossiped and added to the mempool of local_1. This is the correct result: the receive limit now applies per peer, so local_2 and local_3, each capped at sending 10 tx/s, both stay under local_1's 12 tx/s per-peer receive limit.

After changing the recv limit to 5:

yarn dashmate config set --config local_1 platform.drive.tenderdash.mempool.txRecvRateLimit 5

Result: 150 txs in 15 s = 10 tx/s, as expected: each of the two peers is now throttled to 5 tx/s on receive.

Breaking Changes

The tx-recv-rate-punish-peer config option was removed.

Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have added or updated relevant unit/integration/functional/e2e tests
  • I have made corresponding changes to the documentation

For repository code-owners and collaborators only

  • I have assigned this pull request to a milestone

@lklimek changed the title from fix(mempool)!: remove TxRecvRatePunishPeer option to fix(mempool)!: limit mempool gossip rate on a per-peer basis on May 15, 2024
@shumkov (Member) left a comment:


👍

@lklimek merged commit 50f8db6 into v0.14-dev on May 20, 2024
16 checks passed
@lklimek deleted the fix/remove-punish-peer branch on May 20, 2024 at 13:36