
Zebra should limit the number of addresses it uses from a single Addrs response, to avoid address book takeover #1869

Closed
1 task
teor2345 opened this issue Mar 9, 2021 · 3 comments · Fixed by #7952
Assignees: teor2345
Labels: A-network (Area: Network protocol updates or fixes), C-security (Category: Security issues), I-hang (A Zebra component stops responding to requests), I-invalid-data (Zebra relies on invalid or untrusted data, or sends invalid data), I-remote-node-overload (Zebra can overload other nodes on the network)

Comments


teor2345 commented Mar 9, 2021

Motivation

Zebra accepts up to 1000 addresses from each Addrs response. But the Zcash mainnet only has around 200 peers, and peerset_initial_target_size defaults to 75.

These limits will be particularly important once we limit the size of the address book (#1873), so that peers can't flood or replace our entire address book.

Solution

Zebra should limit the number of peers it accepts in response to each peer request.

Zebra should choose peers from each peer request using the following algorithm:

  • take peerset_initial_target_size / 2 peers at random, and leave the rest in the connection peer cache (see the sketch below)
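
A minimal sketch of this selection step in Rust (Zebra's implementation language), assuming a hypothetical PeerAddr type and a plain Vec for the decoded response; Zebra's real address type and cache handling will differ:

```rust
use rand::seq::SliceRandom;
use std::net::SocketAddr;

/// Hypothetical stand-in for Zebra's peer address entries.
#[derive(Clone, Debug)]
struct PeerAddr(SocketAddr);

/// Pick up to `peerset_initial_target_size / 2` addresses at random from a
/// single `Addrs` response, returning `(selected, rest)`, where `rest` would
/// stay in the connection peer cache.
fn select_from_addrs_response(
    mut response: Vec<PeerAddr>,
    peerset_initial_target_size: usize,
) -> (Vec<PeerAddr>, Vec<PeerAddr>) {
    let limit = peerset_initial_target_size / 2;

    // Shuffle first, so the selection can't be biased by the untrusted
    // ordering of addresses in the remote peer's response.
    response.shuffle(&mut rand::thread_rng());

    // Split off everything past the limit; if the response has fewer
    // addresses than the limit, `rest` is empty and we keep them all.
    let rest = response.split_off(limit.min(response.len()));
    (response, rest)
}
```

Shuffling before taking the first limit entries means the outcome can't be biased by how the remote peer orders its response; a deterministic rule like "newest first" would let a peer craft responses that always win, as the list below explains.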

These designs have security implications, so we shouldn't do them:

  • remove addresses that are already in the address book: this can be exploited to fill the address book with new malicious addresses
  • sort peers by their untrusted_last_seen_time, newest first: choosing peers based on untrusted data makes it easier to add new malicious addresses
  • increase the default crawl_and_dial interval to compensate for the peers we're discarding: we already crawl in response to unmet demand, so this isn't needed

This algorithm makes it much less likely that a majority of Zebra's peers will come from a single peer request, while also limiting the number of extra peer requests that Zebra makes.

Context

zcashd does not have this issue.

@teor2345 teor2345 added the C-bug (Category: This is a bug), A-rust (Area: Updates to Rust code), S-needs-triage (Status: A bug report needs triage), P-High, C-security (Category: Security issues), I-hang (A Zebra component stops responding to requests), I-slow (Problems with performance or responsiveness), I-unbounded-growth (Zebra keeps using resources, without any limit), and I-invalid-data (Zebra relies on invalid or untrusted data, or sends invalid data) labels Mar 9, 2021
@teor2345 teor2345 added this to No Estimate in Effort Affinity Grouping via automation Mar 9, 2021
@teor2345 teor2345 added this to To Do in 🦓 via automation Mar 9, 2021
@teor2345 teor2345 moved this from No Estimate to XS - 2 in Effort Affinity Grouping Mar 9, 2021
@mpguerra mpguerra removed the S-needs-triage (Status: A bug report needs triage) label Mar 10, 2021
@teor2345 teor2345 moved this from XS - 2 to S - 3 in Effort Affinity Grouping Mar 15, 2021
@teor2345 teor2345 added the I-remote-node-overload (Zebra can overload other nodes on the network) and P-Critical labels, and removed the P-High label Mar 16, 2021
@teor2345 teor2345 removed this from the 2021 Sprint 9 milestone Mar 24, 2021
@teor2345

This doesn't affect remote nodes, so it's not a critical security issue.

@teor2345 teor2345 changed the title from "Zebra should limit the number of addresses it uses from a single node" to "Zebra should limit the number of addresses it uses from a single Addrs response" Apr 12, 2021
@teor2345 teor2345 changed the title from "Zebra should limit the number of addresses it uses from a single Addrs response" to "Zebra should limit the number of addresses it uses from a single Addrs response, to avoid address book takeover" Oct 25, 2021
@mpguerra mpguerra added this to the 2021 Sprint 23 milestone Oct 26, 2021
@teor2345 teor2345 removed this from the 2021 Sprint 23 milestone Nov 8, 2021

teor2345 commented Nov 8, 2021

zcashd has a very strict rate limit on addr responses, so we can't implement this ticket: we might discard useful addresses and never get them back. This is a particular issue on testnet, but it could also happen to mainnet nodes with a small number of initial peers.

Instead, we could eventually implement the DNS seeder retries in #1892.

For the moment, our current peer prioritisation should be good enough to avoid most of these issues, particularly after we limit the address book size in #1873.

@teor2345 teor2345 closed this as completed Nov 8, 2021
🦓 automation moved this from To Do to Done Nov 8, 2021
Effort Affinity Grouping automation moved this from S - 3 to Done Nov 8, 2021
@mpguerra mpguerra closed this as not planned May 18, 2023

teor2345 commented Nov 9, 2023

Might be needed for #7824

@teor2345 teor2345 reopened this Nov 9, 2023
🦓 automation moved this from Done to In progress Nov 9, 2023
@teor2345 teor2345 added the P-Medium ⚡ and A-network (Area: Network protocol updates or fixes) labels, and removed the A-rust (Area: Updates to Rust code), P-High 🔥, I-slow (Problems with performance or responsiveness), I-unbounded-growth (Zebra keeps using resources, without any limit), and C-bug (Category: This is a bug) labels Nov 16, 2023
@teor2345 teor2345 self-assigned this Nov 16, 2023
@mergify mergify bot closed this as completed in #7952 Nov 19, 2023
🦓 automation moved this from In progress to Done Nov 19, 2023