
sync block pruning#536

Merged
g11tech merged 7 commits into main from syncing
Feb 3, 2026

Conversation

@anshalshukla
Collaborator

closes #528

Comment on lines +615 to +624
self.logger.info("peer {s}{} is ahead (peer_finalized_slot={d} > our_head_slot={d}), initiating sync by requesting head block 0x{s}", .{
    status_ctx.peer_id,
    self.node_registry.getNodeNameFromPeerId(status_ctx.peer_id),
    status_resp.finalized_slot,
    info.head_slot,
    std.fmt.fmtSliceHexLower(&status_resp.head_root),
});
const roots = [_]types.Root{status_resp.head_root};
self.fetchBlockByRoots(&roots, 0) catch |err| {
    self.logger.warn("failed to initiate sync by fetching head block from peer {s}{}: {any}", .{
Member
Up here the comparison is peer_finalized_slot > our_finalized_slot, but the log says "our_head_slot" and prints info.head_slot.

self.chain.forkChoice.fcStore.latest_finalized.slot, // ← use finalized, not head

Collaborator Author

syncStatus compares our current finalized slot against the highest finalized slot among the peers. I don't exactly understand what modification you expect here.

Member

The log says our_head_slot but we're comparing finalized slots.
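Putting the two comments together, a log line consistent with the finalized-slot comparison might look like this (a sketch that combines the excerpt above with the reviewer's suggested field; not the final committed code):

```zig
// Sketch: log the finalized slots actually being compared
// (field names taken from the excerpts in this thread).
self.logger.info("peer {s}{} is ahead (peer_finalized_slot={d} > our_finalized_slot={d}), initiating sync by requesting head block 0x{s}", .{
    status_ctx.peer_id,
    self.node_registry.getNodeNameFromPeerId(status_ctx.peer_id),
    status_resp.finalized_slot,
    self.chain.forkChoice.fcStore.latest_finalized.slot,
    std.fmt.fmtSliceHexLower(&status_resp.head_root),
});
```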

Comment on lines +437 to +443
if (self.network.fetched_blocks.count() >= constants.MAX_CACHED_BLOCKS) {
    self.logger.warn("Cache full ({d} blocks), rejecting block 0x{s} at slot {d}", .{
        self.network.fetched_blocks.count(),
        std.fmt.fmtSliceHexLower(block_root[0..]),
        block_slot,
    });
    return CacheBlockError.CachingFailed;
Member

Is there a reason we are doing a hard rejection when the cache structures are full? Couldn't we do slot-based cache eviction, where the cached block farthest from the current canonical tip is evicted?
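For reference, the eviction this comment describes could be sketched roughly as below. This is a hypothetical helper, not the merged code; it assumes fetched_blocks maps a types.Root to a cached entry carrying a slot field, which is not shown in the diff:

```zig
// Hypothetical sketch of slot-distance eviction (NOT the merged code).
// Assumes fetched_blocks: root -> cached entry with a `slot` field.
fn evictFarthestFromTip(self: *Self, tip_slot: u64) void {
    var victim: ?types.Root = null;
    var max_distance: u64 = 0;
    var it = self.network.fetched_blocks.iterator();
    while (it.next()) |entry| {
        const slot = entry.value_ptr.slot;
        const distance = if (slot > tip_slot) slot - tip_slot else tip_slot - slot;
        if (distance >= max_distance) {
            max_distance = distance;
            victim = entry.key_ptr.*;
        }
    }
    if (victim) |root| {
        _ = self.network.fetched_blocks.remove(root);
    }
}
```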

Collaborator Author

I had suggested this in the issue, but I cannot think of a situation that leads us there unless we stop having finalization. Even when finalization is delayed, we have a good buffer of 1024 blocks. On finalization, pruning occurs and removes all blocks behind the latest finalized slot.
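The finalization-time pruning described here might look roughly like the sketch below (hypothetical names, assumed for illustration; the actual pruning code is not shown in this thread):

```zig
// Hypothetical sketch of finalization-time pruning (NOT the merged code):
// drop every cached block at or behind the latest finalized slot.
fn pruneBehindFinalized(self: *Self, finalized_slot: u64) !void {
    var to_remove = std.ArrayList(types.Root).init(self.allocator);
    defer to_remove.deinit();
    var it = self.network.fetched_blocks.iterator();
    while (it.next()) |entry| {
        if (entry.value_ptr.slot <= finalized_slot) {
            try to_remove.append(entry.key_ptr.*);
        }
    }
    for (to_remove.items) |root| {
        _ = self.network.fetched_blocks.remove(root);
    }
}
```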

Member

My concern is a peer spamming orphan blocks to fill the cache; these won't be pruned by finalization since they're not connected to the canonical chain.

Collaborator Author

validateBlock should reject most blocks from a peer doing that. I intentionally avoided signature verification here so the validation stays inexpensive. We have additional checks in cacheBlockAndFetchParent to prevent spam.
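The kind of inexpensive, signature-free checks described here might be sketched as below (hypothetical names; the actual validateBlock is not shown in this thread, only the latest_finalized field appears in the excerpts above):

```zig
// Hypothetical sketch of cheap structural pre-cache checks (NOT the merged code).
fn cheapValidate(self: *Self, slot: u64, root: types.Root) bool {
    // Blocks at or behind finality can never extend the canonical chain.
    if (slot <= self.chain.forkChoice.fcStore.latest_finalized.slot) return false;
    // Duplicates are already cached; no need to refetch or recache.
    if (self.network.fetched_blocks.contains(root)) return false;
    return true;
}
```

Checks like these are cheap per block, so a spamming peer pays more (in bandwidth) than the node does to reject its blocks; the expensive signature verification is deferred until the block is actually processed.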

@g11tech g11tech changed the title Syncing sync block pruning Feb 3, 2026
@g11tech g11tech merged commit 03fa4e3 into main Feb 3, 2026
12 checks passed
@g11tech g11tech deleted the syncing branch February 3, 2026 07:23


Development

Successfully merging this pull request may close these issues.

Chain syncing improvements

3 participants