Describe the feature
I propose implementing a stricter backpressure mechanism in the HeadersDownloader to prevent unbounded memory growth during the initial header synchronization phase.
Currently, under high latency or with slow peers, the downloader may queue requests without an effective hard limit, leading to excessive memory pressure (potential OOM) as the network_download_headers_buffer_size grows indefinitely.
Motivation
In Rust/Tokio applications, managing Streams without explicit backpressure is a common source of memory instability. During the initial sync, if the gap between the last confirmed header and the highest pending request becomes too large, the node consumes excessive RAM buffering these pending futures.
This improvement would enhance node stability, especially in resource-constrained environments or when syncing from peers with poor performance.
Proposed Solution
- Hard Limit on Pending Requests: Implement a configurable threshold for the HeaderSync stage. If the difference between the last confirmed header and the highest pending request exceeds this limit, the downloader should pause new requests until the buffer drains.
- Peer Reputation Penalty: Automatically deprioritize or ban peers that consistently return empty, delayed, or partial header batches. This prevents slow peers from congesting the sync pipeline.
- Metrics Exposure: Ensure network_download_headers_buffer_size is accurately tracked and exposed via metrics so the effectiveness of the backpressure can be monitored.
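To make the hard-limit idea concrete, here is a minimal sketch of the gating logic. All names here (`SyncGate`, `max_pending_gap`, the callback methods) are hypothetical illustrations, not existing reth APIs:

```rust
/// Hypothetical sketch of the proposed pending-request gate.
/// `SyncGate` and `max_pending_gap` are illustrative names only.
#[derive(Debug)]
struct SyncGate {
    /// Highest header confirmed (validated and buffered in order).
    last_confirmed: u64,
    /// Highest block number covered by an in-flight request.
    highest_pending: u64,
    /// Configurable hard limit on the confirmed-to-pending gap.
    max_pending_gap: u64,
}

impl SyncGate {
    fn new(max_pending_gap: u64) -> Self {
        Self { last_confirmed: 0, highest_pending: 0, max_pending_gap }
    }

    /// The downloader may only issue a new request while the gap
    /// between confirmed and pending headers stays under the limit.
    fn can_request(&self) -> bool {
        self.highest_pending.saturating_sub(self.last_confirmed) < self.max_pending_gap
    }

    fn on_request_issued(&mut self, up_to: u64) {
        self.highest_pending = self.highest_pending.max(up_to);
    }

    fn on_headers_confirmed(&mut self, up_to: u64) {
        self.last_confirmed = self.last_confirmed.max(up_to);
    }
}

fn main() {
    let mut gate = SyncGate::new(1_000);
    gate.on_request_issued(999);
    assert!(gate.can_request()); // gap 999 is under the limit
    gate.on_request_issued(1_500);
    assert!(!gate.can_request()); // gap 1500 exceeds the limit: pause
    gate.on_headers_confirmed(600);
    assert!(gate.can_request()); // buffer drained, gap back to 900
}
```

The same counter that drives `can_request` could back the network_download_headers_buffer_size gauge, so the metric and the gating decision never drift apart.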
Assignment / Reviewers
@mattsse @gakonst @rkrasiuk
(Note: Tagging key maintainers involved in network/sync logic)
Additional context
Offer to Validate (Stress Test):
I have an isolated environment (Docker/Ubuntu) ready to reproduce this behavior. I can run a stress test on the latest main branch to:
- Confirm if RAM consumption grows linearly during sync with simulated slow peers.
- Provide memory profiles and logs to validate the need for this hard limit.
- Test the proposed backpressure logic if a prototype branch is available.
Would the team be interested in these stress test results? I can share the data here to help prioritize this implementation.
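For transparency, this is roughly the sampling loop I would use to capture the memory profile, a generic /proc-based sketch rather than any reth tooling (the process match string and interval are placeholders):

```shell
#!/bin/sh
# Hypothetical harness: sample the node's resident memory (VmRSS) every
# 10 s while it runs, for later plotting against sync progress.
PID=$(pgrep -f 'reth node' | head -n1)
while kill -0 "$PID" 2>/dev/null; do
  # VmRSS in kB from /proc/<pid>/status; one timestamped sample per line
  rss_kb=$(awk '/VmRSS/ {print $2}' /proc/"$PID"/status)
  echo "$(date +%s) $rss_kb" >> rss_samples.log
  sleep 10
done
```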