Conversation


@wemeetagain wemeetagain commented Jan 23, 2026

Motivation

Description

  • Make writeBlockInputToDb async with block import/head update
  • Add a job queue to trigger writes (one at a time)
  • For serving unfinalized blocks, check the block input cache first, before checking hot db
  • For serving unfinalized block blob sidecars, check the block input cache first, before checking hot db
    • see new chain.getBlobSidecars and chain.getSerializedBlobSidecars -- note: these serve either all sidecars or none
    • new chain methods used in API and reqresp
    • note: old db method still used in by_range
  • For serving unfinalized block data column sidecars, check the block input cache first, before checking hot db
    • see new chain.getDataColumnSidecars and chain.getSerializedDataColumnSidecars
  • Let the writeBlockInputToDb process prune the block input cache after its run
  • Remove the eagerPersistBlock option, since it's now irrelevant
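The queue-based flow described above could be sketched roughly as follows. This is a minimal illustration with hypothetical names, not Lodestar's actual JobItemQueue: writes are enqueued and drained strictly one at a time by a single worker loop.

```typescript
// Illustrative block-input type; not the actual Lodestar type
type BlockInput = {slot: number; root: string};

class WriteQueue {
  private queue: BlockInput[] = [];
  private running = false;
  readonly written: BlockInput[] = [];

  // Enqueue a write and kick the worker; writes run strictly one at a time
  push(input: BlockInput): void {
    this.queue.push(input);
    void this.run();
  }

  private async run(): Promise<void> {
    if (this.running) return; // only one worker loop at a time
    this.running = true;
    while (this.queue.length > 0) {
      const input = this.queue.shift()!;
      await this.writeBlockInputToDb(input);
    }
    this.running = false;
  }

  // Stand-in for the real async db write
  private async writeBlockInputToDb(input: BlockInput): Promise<void> {
    await new Promise((resolve) => setTimeout(resolve, 0));
    this.written.push(input);
  }
}
```

Because block import no longer awaits the write, recently imported blocks must be served from the block input cache until the queue has persisted them.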

@wemeetagain wemeetagain requested a review from a team as a code owner January 23, 2026 16:27
@gemini-code-assist
Contributor

Summary of Changes

Hello @wemeetagain, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the block persistence mechanism by introducing an asynchronous job queue for writing unfinalized block data to the database. This change aims to improve performance during block import by offloading I/O operations and ensuring a more controlled and sequential persistence process. It also optimizes block retrieval by prioritizing cached data and centralizes cache management within the persistence logic.

Highlights

  • Asynchronous Block Persistence: The process of writing block input data to the database has been decoupled from the main block import flow, allowing it to happen asynchronously.
  • Job Queue for Database Writes: A new JobItemQueue named unfinalizedBlockWrites has been introduced to manage and serialize database write operations for unfinalized blocks, ensuring they are processed one at a time.
  • Optimized Block Retrieval: When retrieving unfinalized blocks, the system now first checks a seenBlockInputCache before resorting to the hot database, potentially speeding up access to recently processed blocks.
  • Centralized Cache Pruning: The writeBlockInputToDb function is now responsible for pruning the seenBlockInputCache after successfully persisting block data, ensuring cache consistency and reducing redundant pruning calls.
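The cache-first retrieval described in these highlights might look roughly like this. The cache and store interfaces are illustrative, not Lodestar's actual types:

```typescript
// Illustrative hot-db interface; not the actual Lodestar repository API
interface BlockStore {
  get(rootHex: string): Promise<Uint8Array | null>;
}

class CacheFirstBlockSource {
  constructor(
    private readonly cache: Map<string, Uint8Array>,
    private readonly hotDb: BlockStore
  ) {}

  async getBlock(rootHex: string): Promise<Uint8Array | null> {
    // Recently imported blocks may not be persisted yet, so the
    // in-memory cache must be consulted before the hot db
    const cached = this.cache.get(rootHex);
    if (cached !== undefined) return cached;
    return this.hotDb.get(rootHex);
  }
}
```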



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces asynchronous persistence for block inputs, moving the "write block input to db" step to an asynchronous job queue. This change aims to improve performance by decoupling block import from immediate database writes. Key modifications include the introduction of a JobItemQueue for unfinalized block writes, updates to importBlock and verifyBlock to utilize this new queue, and a refactoring of writeBlockInputToDb to integrate with the async mechanism and ensure proper cache pruning. Additionally, the BeaconChain now checks the seenBlockInputCache for unfinalized blocks before querying the hot database, and a filterInPlace method was added to LinkedList to support job queue filtering. However, there is a critical bug in the writeBlockInputToDb function regarding how promises are collected and awaited.
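The filterInPlace helper mentioned above could be sketched as follows. This is an illustrative singly linked list, not Lodestar's actual LinkedList implementation; the idea is dropping items that fail a predicate without reallocating the list:

```typescript
class ListNode<T> {
  next: ListNode<T> | null = null;
  constructor(readonly value: T) {}
}

class LinkedList<T> {
  private head: ListNode<T> | null = null;
  private tail: ListNode<T> | null = null;
  length = 0;

  push(value: T): void {
    const node = new ListNode(value);
    if (this.tail === null) this.head = node;
    else this.tail.next = node;
    this.tail = node;
    this.length++;
  }

  // Drop all items failing the predicate by relinking nodes in place
  filterInPlace(keep: (value: T) => boolean): void {
    // Skip dropped items at the head first
    while (this.head !== null && !keep(this.head.value)) {
      this.head = this.head.next;
      this.length--;
    }
    let prev = this.head;
    while (prev !== null && prev.next !== null) {
      if (!keep(prev.next.value)) {
        prev.next = prev.next.next;
        this.length--;
      } else {
        prev = prev.next;
      }
    }
    this.tail = prev;
  }

  toArray(): T[] {
    const out: T[] = [];
    for (let n = this.head; n !== null; n = n.next) out.push(n.value);
    return out;
  }
}
```

A job queue can use such a helper to discard queued writes that have become obsolete without rebuilding the whole queue.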

Comment on lines 94 to 97:

    await Promise.all(fnPromises);
    this.logger.debug("Persisted blocksInput to db", {
      blocksInput: blocksInputs.length,
      slots: prettyPrintIndices(slots),
Contributor


critical

The fnPromises array is declared once outside the for loop (line 16), and promises for each blockInput are pushed into it. However, await Promise.all(fnPromises); is called inside the loop (lines 94-97). This means that for each blockInput in blocksInputs, Promise.all will be called on the entire fnPromises array, which contains promises from all previous blockInputs processed so far. This leads to redundant awaiting and incorrect behavior, as promises from earlier blockInputs will be awaited multiple times. To fix this, these lines should be moved outside the for loop, after line 98, to ensure all promises are collected before being awaited once.
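The suggested fix can be illustrated with a minimal sketch. The names here (persistBlock, BlockInput, writeBlockInputsToDb) are stand-ins, not the actual code; the point is that the single await over all collected promises belongs after the loop:

```typescript
type BlockInput = {slot: number};

// Stand-in for the real async per-block persistence work
async function persistBlock(input: BlockInput, log: number[]): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 0));
  log.push(input.slot);
}

async function writeBlockInputsToDb(blockInputs: BlockInput[], log: number[]): Promise<void> {
  const fnPromises: Promise<void>[] = [];
  for (const blockInput of blockInputs) {
    fnPromises.push(persistBlock(blockInput, log));
    // Bug being flagged: awaiting Promise.all(fnPromises) here would
    // re-await promises from earlier iterations on every pass
  }
  // Correct placement: a single await over all collected promises
  await Promise.all(fnPromises);
}
```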

Member Author


I suppose this is correct, but not really the point of this PR

Member


Depends on whether we want to write block inputs sequentially or in parallel, so we should either move fnPromises inside the for-loop or move await Promise.all(fnPromises) outside. From the logs it seems sequential writes were intended; maybe @matthewkeil remembers what the intent was. I don't think it matters much since we usually write just one block input per slot when following head.
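The sequential alternative discussed here, awaiting each write inside the loop rather than batching promises, could be sketched as follows (illustrative names, not the actual code):

```typescript
type BlockInput = {slot: number};

async function writeSequentially(
  blockInputs: BlockInput[],
  write: (b: BlockInput) => Promise<void>
): Promise<void> {
  for (const blockInput of blockInputs) {
    // Each write fully completes before the next one starts
    await write(blockInput);
  }
}
```

Sequential writes preserve ordering even when individual writes take varying amounts of time, at the cost of losing any I/O overlap.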


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 8744f60b03


Member

@nflaig nflaig left a comment


Part of this needs to ensure we can still serve by-root requests, which is now uncertain since we might update head before the db write is done.

I think we can use the block input for this: either serve the data from memory, or fall back to a db query.

@github-actions
Contributor

github-actions bot commented Jan 23, 2026

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 7386c13 Previous: 3715a82 Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 987.51 us/op 1.0553 ms/op 0.94
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 37.195 us/op 35.364 us/op 1.05
BLS verify - blst 849.97 us/op 877.98 us/op 0.97
BLS verifyMultipleSignatures 3 - blst 1.3687 ms/op 1.3403 ms/op 1.02
BLS verifyMultipleSignatures 8 - blst 2.0982 ms/op 2.1056 ms/op 1.00
BLS verifyMultipleSignatures 32 - blst 5.2800 ms/op 4.3739 ms/op 1.21
BLS verifyMultipleSignatures 64 - blst 8.6691 ms/op 8.6236 ms/op 1.01
BLS verifyMultipleSignatures 128 - blst 16.914 ms/op 16.293 ms/op 1.04
BLS deserializing 10000 signatures 641.19 ms/op 629.38 ms/op 1.02
BLS deserializing 100000 signatures 6.4111 s/op 6.2516 s/op 1.03
BLS verifyMultipleSignatures - same message - 3 - blst 854.91 us/op 890.90 us/op 0.96
BLS verifyMultipleSignatures - same message - 8 - blst 1.1026 ms/op 1.4616 ms/op 0.75
BLS verifyMultipleSignatures - same message - 32 - blst 1.6857 ms/op 1.6345 ms/op 1.03
BLS verifyMultipleSignatures - same message - 64 - blst 2.5887 ms/op 2.5809 ms/op 1.00
BLS verifyMultipleSignatures - same message - 128 - blst 4.3224 ms/op 4.2449 ms/op 1.02
BLS aggregatePubkeys 32 - blst 17.995 us/op 17.735 us/op 1.01
BLS aggregatePubkeys 128 - blst 63.771 us/op 62.767 us/op 1.02
getSlashingsAndExits - default max 56.027 us/op 42.663 us/op 1.31
getSlashingsAndExits - 2k 394.96 us/op 318.48 us/op 1.24
isKnown best case - 1 super set check 417.00 ns/op 404.00 ns/op 1.03
isKnown normal case - 2 super set checks 413.00 ns/op 391.00 ns/op 1.06
isKnown worse case - 16 super set checks 401.00 ns/op 395.00 ns/op 1.02
InMemoryCheckpointStateCache - add get delete 2.0750 us/op 2.1350 us/op 0.97
validate api signedAggregateAndProof - struct 1.5527 ms/op 1.5161 ms/op 1.02
validate gossip signedAggregateAndProof - struct 1.5054 ms/op 1.5244 ms/op 0.99
batch validate gossip attestation - vc 640000 - chunk 32 109.94 us/op 115.53 us/op 0.95
batch validate gossip attestation - vc 640000 - chunk 64 116.38 us/op 100.99 us/op 1.15
batch validate gossip attestation - vc 640000 - chunk 128 92.908 us/op 94.386 us/op 0.98
batch validate gossip attestation - vc 640000 - chunk 256 87.188 us/op 87.175 us/op 1.00
bytes32 toHexString 515.00 ns/op 502.00 ns/op 1.03
bytes32 Buffer.toString(hex) 427.00 ns/op 409.00 ns/op 1.04
bytes32 Buffer.toString(hex) from Uint8Array 510.00 ns/op 490.00 ns/op 1.04
bytes32 Buffer.toString(hex) + 0x 483.00 ns/op 402.00 ns/op 1.20
Return object 10000 times 0.23950 ns/op 0.23120 ns/op 1.04
Throw Error 10000 times 3.2924 us/op 3.3038 us/op 1.00
toHex 99.908 ns/op 106.27 ns/op 0.94
Buffer.from 92.639 ns/op 96.682 ns/op 0.96
shared Buffer 65.759 ns/op 64.641 ns/op 1.02
fastMsgIdFn sha256 / 200 bytes 1.7090 us/op 1.6990 us/op 1.01
fastMsgIdFn h32 xxhash / 200 bytes 491.00 ns/op 374.00 ns/op 1.31
fastMsgIdFn h64 xxhash / 200 bytes 515.00 ns/op 437.00 ns/op 1.18
fastMsgIdFn sha256 / 1000 bytes 4.9930 us/op 5.0960 us/op 0.98
fastMsgIdFn h32 xxhash / 1000 bytes 472.00 ns/op 476.00 ns/op 0.99
fastMsgIdFn h64 xxhash / 1000 bytes 481.00 ns/op 492.00 ns/op 0.98
fastMsgIdFn sha256 / 10000 bytes 41.929 us/op 42.924 us/op 0.98
fastMsgIdFn h32 xxhash / 10000 bytes 1.4650 us/op 1.5230 us/op 0.96
fastMsgIdFn h64 xxhash / 10000 bytes 1.0510 us/op 1.0840 us/op 0.97
send data - 1000 256B messages 12.455 ms/op 11.758 ms/op 1.06
send data - 1000 512B messages 15.461 ms/op 13.890 ms/op 1.11
send data - 1000 1024B messages 22.660 ms/op 18.546 ms/op 1.22
send data - 1000 1200B messages 21.833 ms/op 18.737 ms/op 1.17
send data - 1000 2048B messages 18.428 ms/op 17.399 ms/op 1.06
send data - 1000 4096B messages 22.307 ms/op 21.863 ms/op 1.02
send data - 1000 16384B messages 135.26 ms/op 114.41 ms/op 1.18
send data - 1000 65536B messages 422.31 ms/op 273.09 ms/op 1.55
enrSubnets - fastDeserialize 64 bits 1.9340 us/op 1.0110 us/op 1.91
enrSubnets - ssz BitVector 64 bits 584.00 ns/op 539.00 ns/op 1.08
enrSubnets - fastDeserialize 4 bits 320.00 ns/op 366.00 ns/op 0.87
enrSubnets - ssz BitVector 4 bits 623.00 ns/op 535.00 ns/op 1.16
prioritizePeers score -10:0 att 32-0.1 sync 2-0 208.29 us/op 224.96 us/op 0.93
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 314.40 us/op 283.24 us/op 1.11
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 408.98 us/op 360.03 us/op 1.14
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 624.25 us/op 729.27 us/op 0.86
prioritizePeers score 0:0 att 64-1 sync 4-1 878.95 us/op 711.70 us/op 1.24
array of 16000 items push then shift 1.2385 us/op 1.2623 us/op 0.98
LinkedList of 16000 items push then shift 8.2870 ns/op 8.0380 ns/op 1.03
array of 16000 items push then pop 68.758 ns/op 78.243 ns/op 0.88
LinkedList of 16000 items push then pop 6.3850 ns/op 6.5510 ns/op 0.97
array of 24000 items push then shift 1.8374 us/op 1.9527 us/op 0.94
LinkedList of 24000 items push then shift 7.9360 ns/op 7.5460 ns/op 1.05
array of 24000 items push then pop 95.590 ns/op 98.584 ns/op 0.97
LinkedList of 24000 items push then pop 6.4210 ns/op 6.7650 ns/op 0.95
intersect bitArray bitLen 8 4.7200 ns/op 5.0080 ns/op 0.94
intersect array and set length 8 29.002 ns/op 30.638 ns/op 0.95
intersect bitArray bitLen 128 25.813 ns/op 26.824 ns/op 0.96
intersect array and set length 128 485.77 ns/op 521.98 ns/op 0.93
bitArray.getTrueBitIndexes() bitLen 128 1.2690 us/op 1.2240 us/op 1.04
bitArray.getTrueBitIndexes() bitLen 248 1.9590 us/op 1.9710 us/op 0.99
bitArray.getTrueBitIndexes() bitLen 512 4.4230 us/op 4.0040 us/op 1.10
Full columns - reconstruct all 6 blobs 269.64 us/op 290.03 us/op 0.93
Full columns - reconstruct half of the blobs out of 6 105.85 us/op 109.79 us/op 0.96
Full columns - reconstruct single blob out of 6 31.918 us/op 29.797 us/op 1.07
Half columns - reconstruct all 6 blobs 266.30 ms/op 255.89 ms/op 1.04
Half columns - reconstruct half of the blobs out of 6 135.97 ms/op 128.41 ms/op 1.06
Half columns - reconstruct single blob out of 6 58.040 ms/op 47.293 ms/op 1.23
Full columns - reconstruct all 10 blobs 337.12 us/op 288.93 us/op 1.17
Full columns - reconstruct half of the blobs out of 10 151.20 us/op 165.92 us/op 0.91
Full columns - reconstruct single blob out of 10 30.054 us/op 29.015 us/op 1.04
Half columns - reconstruct all 10 blobs 473.22 ms/op 410.39 ms/op 1.15
Half columns - reconstruct half of the blobs out of 10 312.33 ms/op 218.37 ms/op 1.43
Half columns - reconstruct single blob out of 10 54.948 ms/op 46.669 ms/op 1.18
Full columns - reconstruct all 20 blobs 821.57 us/op 631.20 us/op 1.30
Full columns - reconstruct half of the blobs out of 20 327.55 us/op 303.41 us/op 1.08
Full columns - reconstruct single blob out of 20 32.370 us/op 31.790 us/op 1.02
Half columns - reconstruct all 20 blobs 874.23 ms/op 832.41 ms/op 1.05
Half columns - reconstruct half of the blobs out of 20 417.66 ms/op 421.53 ms/op 0.99
Half columns - reconstruct single blob out of 20 49.559 ms/op 46.826 ms/op 1.06
Set add up to 64 items then delete first 1.6387 us/op 1.5902 us/op 1.03
OrderedSet add up to 64 items then delete first 2.4708 us/op 2.4567 us/op 1.01
Set add up to 64 items then delete last 1.8520 us/op 1.8283 us/op 1.01
OrderedSet add up to 64 items then delete last 2.8480 us/op 2.7121 us/op 1.05
Set add up to 64 items then delete middle 1.9249 us/op 1.7980 us/op 1.07
OrderedSet add up to 64 items then delete middle 4.4953 us/op 4.2928 us/op 1.05
Set add up to 128 items then delete first 3.7147 us/op 3.7804 us/op 0.98
OrderedSet add up to 128 items then delete first 5.5242 us/op 5.8215 us/op 0.95
Set add up to 128 items then delete last 3.7601 us/op 3.7014 us/op 1.02
OrderedSet add up to 128 items then delete last 5.6760 us/op 5.8416 us/op 0.97
Set add up to 128 items then delete middle 3.8916 us/op 3.8275 us/op 1.02
OrderedSet add up to 128 items then delete middle 12.285 us/op 11.589 us/op 1.06
Set add up to 256 items then delete first 8.0845 us/op 7.5143 us/op 1.08
OrderedSet add up to 256 items then delete first 11.806 us/op 11.955 us/op 0.99
Set add up to 256 items then delete last 8.3762 us/op 7.7839 us/op 1.08
OrderedSet add up to 256 items then delete last 12.074 us/op 11.324 us/op 1.07
Set add up to 256 items then delete middle 7.4443 us/op 7.1310 us/op 1.04
OrderedSet add up to 256 items then delete middle 35.627 us/op 34.837 us/op 1.02
pass gossip attestations to forkchoice per slot 2.1187 ms/op 2.1986 ms/op 0.96
forkChoice updateHead vc 100000 bc 64 eq 0 402.96 us/op 396.61 us/op 1.02
forkChoice updateHead vc 600000 bc 64 eq 0 2.2225 ms/op 2.3906 ms/op 0.93
forkChoice updateHead vc 1000000 bc 64 eq 0 3.6993 ms/op 3.7955 ms/op 0.97
forkChoice updateHead vc 600000 bc 320 eq 0 2.2920 ms/op 2.3390 ms/op 0.98
forkChoice updateHead vc 600000 bc 1200 eq 0 2.2688 ms/op 2.5150 ms/op 0.90
forkChoice updateHead vc 600000 bc 7200 eq 0 2.8012 ms/op 2.7092 ms/op 1.03
forkChoice updateHead vc 600000 bc 64 eq 1000 3.0828 ms/op 2.8699 ms/op 1.07
forkChoice updateHead vc 600000 bc 64 eq 10000 3.1250 ms/op 3.1379 ms/op 1.00
forkChoice updateHead vc 600000 bc 64 eq 300000 7.5532 ms/op 7.4064 ms/op 1.02
computeDeltas 1400000 validators 0% inactive 13.151 ms/op 12.467 ms/op 1.05
computeDeltas 1400000 validators 10% inactive 12.516 ms/op 11.773 ms/op 1.06
computeDeltas 1400000 validators 20% inactive 11.484 ms/op 10.724 ms/op 1.07
computeDeltas 1400000 validators 50% inactive 8.6039 ms/op 8.1570 ms/op 1.05
computeDeltas 2100000 validators 0% inactive 19.034 ms/op 19.298 ms/op 0.99
computeDeltas 2100000 validators 10% inactive 18.293 ms/op 17.554 ms/op 1.04
computeDeltas 2100000 validators 20% inactive 16.649 ms/op 19.577 ms/op 0.85
computeDeltas 2100000 validators 50% inactive 12.573 ms/op 14.212 ms/op 0.88
altair processAttestation - 250000 vs - 7PWei normalcase 1.6312 ms/op 2.0877 ms/op 0.78
altair processAttestation - 250000 vs - 7PWei worstcase 2.5386 ms/op 3.2720 ms/op 0.78
altair processAttestation - setStatus - 1/6 committees join 90.571 us/op 90.217 us/op 1.00
altair processAttestation - setStatus - 1/3 committees join 175.78 us/op 170.72 us/op 1.03
altair processAttestation - setStatus - 1/2 committees join 260.75 us/op 290.21 us/op 0.90
altair processAttestation - setStatus - 2/3 committees join 350.24 us/op 367.36 us/op 0.95
altair processAttestation - setStatus - 4/5 committees join 471.63 us/op 507.00 us/op 0.93
altair processAttestation - setStatus - 100% committees join 580.54 us/op 625.36 us/op 0.93
altair processBlock - 250000 vs - 7PWei normalcase 3.9637 ms/op 3.4758 ms/op 1.14
altair processBlock - 250000 vs - 7PWei normalcase hashState 14.772 ms/op 16.829 ms/op 0.88
altair processBlock - 250000 vs - 7PWei worstcase 22.458 ms/op 25.930 ms/op 0.87
altair processBlock - 250000 vs - 7PWei worstcase hashState 45.423 ms/op 58.847 ms/op 0.77
phase0 processBlock - 250000 vs - 7PWei normalcase 1.2216 ms/op 1.5331 ms/op 0.80
phase0 processBlock - 250000 vs - 7PWei worstcase 19.606 ms/op 21.359 ms/op 0.92
altair processEth1Data - 250000 vs - 7PWei normalcase 298.40 us/op 296.82 us/op 1.01
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:16 7.6210 us/op 6.5730 us/op 1.16
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:220 38.972 us/op 34.613 us/op 1.13
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:43 9.3800 us/op 10.734 us/op 0.87
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:19 5.8250 us/op 4.5380 us/op 1.28
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1021 147.36 us/op 151.14 us/op 0.98
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11778 1.3193 ms/op 1.3744 ms/op 0.96
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 1.6944 ms/op 1.8663 ms/op 0.91
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 1.7298 ms/op 1.7529 ms/op 0.99
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 3.6728 ms/op 3.7817 ms/op 0.97
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 1.9410 ms/op 2.0835 ms/op 0.93
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 3.9963 ms/op 4.0246 ms/op 0.99
Tree 40 250000 create 312.13 ms/op 293.86 ms/op 1.06
Tree 40 250000 get(125000) 99.821 ns/op 92.450 ns/op 1.08
Tree 40 250000 set(125000) 940.74 ns/op 970.48 ns/op 0.97
Tree 40 250000 toArray() 12.666 ms/op 14.228 ms/op 0.89
Tree 40 250000 iterate all - toArray() + loop 13.554 ms/op 14.821 ms/op 0.91
Tree 40 250000 iterate all - get(i) 36.455 ms/op 38.102 ms/op 0.96
Array 250000 create 2.0428 ms/op 2.2158 ms/op 0.92
Array 250000 clone - spread 621.78 us/op 665.02 us/op 0.93
Array 250000 get(125000) 0.48900 ns/op 0.50400 ns/op 0.97
Array 250000 set(125000) 0.48100 ns/op 0.50900 ns/op 0.94
Array 250000 iterate all - loop 56.427 us/op 58.380 us/op 0.97
phase0 afterProcessEpoch - 250000 vs - 7PWei 37.987 ms/op 38.836 ms/op 0.98
Array.fill - length 1000000 2.2766 ms/op 2.2793 ms/op 1.00
Array push - length 1000000 11.370 ms/op 7.4772 ms/op 1.52
Array.get 0.20044 ns/op 0.20678 ns/op 0.97
Uint8Array.get 0.20022 ns/op 0.20975 ns/op 0.95
phase0 beforeProcessEpoch - 250000 vs - 7PWei 15.661 ms/op 11.110 ms/op 1.41
altair processEpoch - mainnet_e81889 187.64 ms/op 201.94 ms/op 0.93
mainnet_e81889 - altair beforeProcessEpoch 14.064 ms/op 14.935 ms/op 0.94
mainnet_e81889 - altair processJustificationAndFinalization 5.0610 us/op 5.0540 us/op 1.00
mainnet_e81889 - altair processInactivityUpdates 3.1291 ms/op 3.4501 ms/op 0.91
mainnet_e81889 - altair processRewardsAndPenalties 17.088 ms/op 15.877 ms/op 1.08
mainnet_e81889 - altair processRegistryUpdates 793.00 ns/op 813.00 ns/op 0.98
mainnet_e81889 - altair processSlashings 377.00 ns/op 395.00 ns/op 0.95
mainnet_e81889 - altair processEth1DataReset 373.00 ns/op 386.00 ns/op 0.97
mainnet_e81889 - altair processEffectiveBalanceUpdates 2.0547 ms/op 1.1792 ms/op 1.74
mainnet_e81889 - altair processSlashingsReset 1.1120 us/op 945.00 ns/op 1.18
mainnet_e81889 - altair processRandaoMixesReset 1.3090 us/op 1.1450 us/op 1.14
mainnet_e81889 - altair processHistoricalRootsUpdate 389.00 ns/op 382.00 ns/op 1.02
mainnet_e81889 - altair processParticipationFlagUpdates 826.00 ns/op 684.00 ns/op 1.21
mainnet_e81889 - altair processSyncCommitteeUpdates 345.00 ns/op 374.00 ns/op 0.92
mainnet_e81889 - altair afterProcessEpoch 40.532 ms/op 41.805 ms/op 0.97
capella processEpoch - mainnet_e217614 715.80 ms/op 861.53 ms/op 0.83
mainnet_e217614 - capella beforeProcessEpoch 54.947 ms/op 65.230 ms/op 0.84
mainnet_e217614 - capella processJustificationAndFinalization 4.8100 us/op 5.3710 us/op 0.90
mainnet_e217614 - capella processInactivityUpdates 13.442 ms/op 13.373 ms/op 1.01
mainnet_e217614 - capella processRewardsAndPenalties 89.203 ms/op 89.123 ms/op 1.00
mainnet_e217614 - capella processRegistryUpdates 4.5670 us/op 4.8600 us/op 0.94
mainnet_e217614 - capella processSlashings 387.00 ns/op 384.00 ns/op 1.01
mainnet_e217614 - capella processEth1DataReset 387.00 ns/op 374.00 ns/op 1.03
mainnet_e217614 - capella processEffectiveBalanceUpdates 10.948 ms/op 14.461 ms/op 0.76
mainnet_e217614 - capella processSlashingsReset 1.1300 us/op 930.00 ns/op 1.22
mainnet_e217614 - capella processRandaoMixesReset 1.1920 us/op 1.3110 us/op 0.91
mainnet_e217614 - capella processHistoricalRootsUpdate 390.00 ns/op 374.00 ns/op 1.04
mainnet_e217614 - capella processParticipationFlagUpdates 700.00 ns/op 709.00 ns/op 0.99
mainnet_e217614 - capella afterProcessEpoch 106.02 ms/op 110.05 ms/op 0.96
phase0 processEpoch - mainnet_e58758 223.91 ms/op 193.47 ms/op 1.16
mainnet_e58758 - phase0 beforeProcessEpoch 46.454 ms/op 36.582 ms/op 1.27
mainnet_e58758 - phase0 processJustificationAndFinalization 5.3410 us/op 4.5290 us/op 1.18
mainnet_e58758 - phase0 processRewardsAndPenalties 17.003 ms/op 18.010 ms/op 0.94
mainnet_e58758 - phase0 processRegistryUpdates 4.0230 us/op 2.5200 us/op 1.60
mainnet_e58758 - phase0 processSlashings 404.00 ns/op 380.00 ns/op 1.06
mainnet_e58758 - phase0 processEth1DataReset 388.00 ns/op 374.00 ns/op 1.04
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 825.79 us/op 957.03 us/op 0.86
mainnet_e58758 - phase0 processSlashingsReset 1.0720 us/op 1.1510 us/op 0.93
mainnet_e58758 - phase0 processRandaoMixesReset 1.1710 us/op 1.1940 us/op 0.98
mainnet_e58758 - phase0 processHistoricalRootsUpdate 385.00 ns/op 394.00 ns/op 0.98
mainnet_e58758 - phase0 processParticipationRecordUpdates 1.1440 us/op 1.1720 us/op 0.98
mainnet_e58758 - phase0 afterProcessEpoch 32.470 ms/op 33.333 ms/op 0.97
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.8408 ms/op 1.1880 ms/op 1.55
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 1.2697 ms/op 1.5935 ms/op 0.80
altair processInactivityUpdates - 250000 normalcase 10.360 ms/op 10.590 ms/op 0.98
altair processInactivityUpdates - 250000 worstcase 10.280 ms/op 10.661 ms/op 0.96
phase0 processRegistryUpdates - 250000 normalcase 4.8680 us/op 2.5870 us/op 1.88
phase0 processRegistryUpdates - 250000 badcase_full_deposits 239.92 us/op 243.24 us/op 0.99
phase0 processRegistryUpdates - 250000 worstcase 0.5 55.904 ms/op 119.47 ms/op 0.47
altair processRewardsAndPenalties - 250000 normalcase 13.856 ms/op 15.610 ms/op 0.89
altair processRewardsAndPenalties - 250000 worstcase 18.383 ms/op 14.295 ms/op 1.29
phase0 getAttestationDeltas - 250000 normalcase 4.7801 ms/op 5.0124 ms/op 0.95
phase0 getAttestationDeltas - 250000 worstcase 4.6370 ms/op 5.0352 ms/op 0.92
phase0 processSlashings - 250000 worstcase 92.560 us/op 95.786 us/op 0.97
altair processSyncCommitteeUpdates - 250000 9.4033 ms/op 10.184 ms/op 0.92
BeaconState.hashTreeRoot - No change 390.00 ns/op 437.00 ns/op 0.89
BeaconState.hashTreeRoot - 1 full validator 72.116 us/op 68.095 us/op 1.06
BeaconState.hashTreeRoot - 32 full validator 815.43 us/op 658.38 us/op 1.24
BeaconState.hashTreeRoot - 512 full validator 8.5155 ms/op 6.2604 ms/op 1.36
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 98.847 us/op 73.218 us/op 1.35
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.3324 ms/op 1.1594 ms/op 1.15
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 21.732 ms/op 13.349 ms/op 1.63
BeaconState.hashTreeRoot - 1 balances 90.568 us/op 64.797 us/op 1.40
BeaconState.hashTreeRoot - 32 balances 827.70 us/op 640.43 us/op 1.29
BeaconState.hashTreeRoot - 512 balances 5.1357 ms/op 5.1918 ms/op 0.99
BeaconState.hashTreeRoot - 250000 balances 144.86 ms/op 139.05 ms/op 1.04
aggregationBits - 2048 els - zipIndexesInBitList 18.446 us/op 20.137 us/op 0.92
regular array get 100000 times 21.289 us/op 23.484 us/op 0.91
wrappedArray get 100000 times 21.323 us/op 23.471 us/op 0.91
arrayWithProxy get 100000 times 9.2478 ms/op 11.279 ms/op 0.82
ssz.Root.equals 19.958 ns/op 22.094 ns/op 0.90
byteArrayEquals 19.554 ns/op 21.523 ns/op 0.91
Buffer.compare 8.3690 ns/op 9.3160 ns/op 0.90
processSlot - 1 slots 9.2490 us/op 10.107 us/op 0.92
processSlot - 32 slots 1.8087 ms/op 1.9182 ms/op 0.94
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 3.7647 ms/op 5.0670 ms/op 0.74
getCommitteeAssignments - req 1 vs - 250000 vc 1.5616 ms/op 1.7118 ms/op 0.91
getCommitteeAssignments - req 100 vs - 250000 vc 3.1343 ms/op 3.4559 ms/op 0.91
getCommitteeAssignments - req 1000 vs - 250000 vc 3.3775 ms/op 3.7292 ms/op 0.91
findModifiedValidators - 10000 modified validators 681.49 ms/op 532.34 ms/op 1.28
findModifiedValidators - 1000 modified validators 368.64 ms/op 414.20 ms/op 0.89
findModifiedValidators - 100 modified validators 199.96 ms/op 281.57 ms/op 0.71
findModifiedValidators - 10 modified validators 186.79 ms/op 172.51 ms/op 1.08
findModifiedValidators - 1 modified validators 108.55 ms/op 123.84 ms/op 0.88
findModifiedValidators - no difference 126.15 ms/op 127.45 ms/op 0.99
migrate state 1500000 validators, 3400 modified, 2000 new 905.81 ms/op 939.75 ms/op 0.96
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 5.4300 ns/op 5.8900 ns/op 0.92
state getBlockRootAtSlot - 250000 vs - 7PWei 463.88 ns/op 479.14 ns/op 0.97
computeProposerIndex 100000 validators 1.3191 ms/op 1.3374 ms/op 0.99
getNextSyncCommitteeIndices 1000 validators 98.507 ms/op 102.05 ms/op 0.97
getNextSyncCommitteeIndices 10000 validators 98.312 ms/op 99.498 ms/op 0.99
getNextSyncCommitteeIndices 100000 validators 98.722 ms/op 102.06 ms/op 0.97
computeProposers - vc 250000 575.20 us/op 572.15 us/op 1.01
computeEpochShuffling - vc 250000 37.147 ms/op 38.270 ms/op 0.97
getNextSyncCommittee - vc 250000 9.2784 ms/op 9.5939 ms/op 0.97
nodejs block root to RootHex using toHex 110.53 ns/op 109.33 ns/op 1.01
nodejs block root to RootHex using toRootHex 75.574 ns/op 68.397 ns/op 1.10
nodejs fromHex(blob) 169.63 us/op 387.83 us/op 0.44
nodejs fromHexInto(blob) 597.99 us/op 630.36 us/op 0.95
nodejs block root to RootHex using the deprecated toHexString 192.49 ns/op 511.54 ns/op 0.38
browser block root to RootHex using toHex 137.34 ns/op 247.52 ns/op 0.55
browser block root to RootHex using toRootHex 124.42 ns/op 130.86 ns/op 0.95
browser fromHex(blob) 1.0532 ms/op 963.17 us/op 1.09
browser fromHexInto(blob) 635.57 us/op 627.02 us/op 1.01
browser block root to RootHex using the deprecated toHexString 525.73 ns/op 346.64 ns/op 1.52

by benchmarkbot/action

@wemeetagain
Member Author

Added blob sidecar serving. Maybe some of this logic could be moved into a chain method, like getBlockByRoot.

@wemeetagain
Member Author

ready for re-review

@wemeetagain
Member Author

Added queue metrics to BeaconChain dashboard

[Screenshot: queue metrics on the BeaconChain dashboard, 2026-01-27]

@nflaig nflaig added this to the v1.40.0 milestone Jan 27, 2026
      throw new Error(`Expected block input to have columns: ${blockSlot} ${blockRootHex}`);
    }
    if (indices === undefined) {
      return blockInput.getAllColumns();
Contributor


Do we have to have this fallback?
Instead of that, just throw an error to avoid unnecessary overhead.

Member Author


Yes, this is used in the beacon API. The intention of nullable indices is to provide a way to say "give me all stored columns for that block".
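The nullable-indices semantics can be sketched like this. The ColumnSidecar shape and the getColumns helper are illustrative, not the actual implementation: undefined means "return all stored columns", while an explicit list returns only those columns and throws on a miss.

```typescript
type ColumnSidecar = {index: number; data: string};

function getColumns(
  stored: Map<number, ColumnSidecar>,
  indices?: number[]
): ColumnSidecar[] {
  if (indices === undefined) {
    // Beacon API path: "give me all stored columns for this block"
    return [...stored.values()];
  }
  return indices.map((i) => {
    const col = stored.get(i);
    if (col === undefined) throw new Error(`Missing column ${i}`);
    return col;
  });
}
```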

@nflaig
Member

nflaig commented Jan 29, 2026

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant performance improvement by making block persistence asynchronous. The use of a job queue for database writes decouples block processing from I/O operations, which should improve chain head advancement times. The refactoring of data access methods to use a unified interface on the BeaconChain object is a great step towards better code organization and abstraction. My review includes a suggestion to update a vulnerable dependency and a fix for a method signature mismatch.

Member

@nflaig nflaig left a comment


looks pretty good to me

@twoeths twoeths merged commit 1c71f42 into unstable Jan 30, 2026
36 of 42 checks passed
@twoeths twoeths deleted the cayman/unfinalized-block-write-queue branch January 30, 2026 02:47
@codecov

codecov bot commented Jan 30, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 52.33%. Comparing base (98e4089) to head (5c6e69e).
⚠️ Report is 30 commits behind head on unstable.

Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #8784      +/-   ##
============================================
+ Coverage     52.04%   52.33%   +0.28%     
============================================
  Files           848      848              
  Lines         64584    63784     -800     
  Branches       4762     4718      -44     
============================================
- Hits          33612    33379     -233     
+ Misses        30903    30336     -567     
  Partials         69       69              
