QBFT: Mitigate OOM attack by spamming future messages #524

Closed
corverroos opened this issue May 13, 2022 · 6 comments
corverroos commented May 13, 2022

Problem to be solved

The current implementation of QBFT buffers all future messages from all peers. An attacker can exploit this by sending an effectively infinite stream of valid future-round messages, causing an out-of-memory (OOM) crash.

Proposed solution

  • Only buffer future messages of the "highest" round received from each peer, in addition to buffering messages of the current round.
  • Keep a highestRound map[process]round per peer.
  • When receiving a message, check this map and drop it if its round is lower than that peer's highest round.
  • If its round is higher, update the map and trim that peer's buffered messages of lower rounds.
  • Before starting to implement this, double-check the GoQuorum implementation to see how they handle this problem.
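The per-peer highest-round filter described above could be sketched roughly as follows. Note this is a hypothetical illustration, not the actual QBFT code: the process, round, and msg types and the receive method are assumed names, and the sketch omits the current-round exception for brevity.

```go
package main

// process and round are assumed identifiers mirroring the
// map[process]round proposed above.
type process int64
type round int64

type msg struct {
	Source process
	Round  round
}

type buffer struct {
	highestRound map[process]round // Highest round seen per peer.
	msgs         map[process][]msg // Buffered messages per peer.
}

// receive drops messages below the peer's highest seen round; on a
// higher round it updates the map and trims that peer's buffered
// messages of lower rounds before appending.
func (b *buffer) receive(m msg) {
	highest, ok := b.highestRound[m.Source]
	if ok && m.Round < highest {
		return // Drop messages from lower rounds.
	}
	if !ok || m.Round > highest {
		b.highestRound[m.Source] = m.Round
		// Trim buffered messages of lower rounds for this peer.
		var kept []msg
		for _, old := range b.msgs[m.Source] {
			if old.Round >= m.Round {
				kept = append(kept, old)
			}
		}
		b.msgs[m.Source] = kept
	}
	b.msgs[m.Source] = append(b.msgs[m.Source], m)
}
```

This bounds memory per peer to roughly one round's worth of messages, which is what later comments point out can hurt liveness.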
@corverroos corverroos added the bug Something isn't working label May 13, 2022
leolara commented May 14, 2022

Is this a bug, or more of a security feature?

corverroos commented May 15, 2022

On second thought, buffering only a single future round per peer can negatively affect liveness, since we might throw away valid messages that could drive consensus in earlier rounds.

We could strike a balance between liveness and safety by buffering only a "fixed amount of messages per peer", e.g. 100.

Buffering at most 100 messages per peer should provide ample future-message buffering to ensure liveness in 99.9% of valid cases, while guaranteeing that OOM is not possible.

100 msgs/peer == 25 rounds (4 messages per peer per round).

We could also make this configurable by adding PeerMsgLimit int to Definition.

leolara commented May 15, 2022

@corverroos But what is the eviction rule? Are the oldest messages evicted?

What if we retain the last X rounds? I think that would be easier if we keep the messages in a map[dedupKey]Msg instead of a []msg.

@corverroos
Actually, I think we can remove the current trimBuffer AND dedupMsgs and just do FIFO-max-100 buffering, since we dedup when counting and filtering in any case.

So maybe buffer can be var buffer map[int64][]Msg
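A minimal sketch of that simplified design, assuming the buffer is keyed by peer process ID with oldest-first eviction (the bufferMsg helper and Msg fields here are hypothetical names, not the actual implementation):

```go
package main

// peerMsgLimit caps buffered messages per peer, e.g. 100
// (~25 future rounds at 4 messages per peer per round).
const peerMsgLimit = 100

// Msg stands in for a QBFT message; only the fields the buffer
// needs are shown.
type Msg struct {
	Peer  int64
	Round int64
}

// buffer holds a FIFO of messages per peer, as in the proposed
// var buffer map[int64][]Msg.
var buffer = map[int64][]Msg{}

// bufferMsg appends to the peer's FIFO, evicting the oldest
// message once the per-peer limit is exceeded. No dedup here:
// that happens later when counting and filtering.
func bufferMsg(m Msg) {
	q := append(buffer[m.Peer], m)
	if len(q) > peerMsgLimit {
		q = q[1:] // Evict oldest (FIFO).
	}
	buffer[m.Peer] = q
}
```

Because the cap is per peer, total memory is bounded by peers × peerMsgLimit regardless of how many future-round messages an attacker sends.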

@corverroos
something like this ^^. Feels like it actually simplifies the algo...?

@corverroos corverroos self-assigned this May 15, 2022
obol-bulldozer bot pushed a commit that referenced this issue May 15, 2022
Refactors buffering to FIFO per peer to prevent DDoS of future round messages.

category: refactor
ticket: #524
leolara commented May 15, 2022

You mean with the key as process? I think it is better than before.
