
refactor: revert dual-state cache architecture from ePBS#9215

Merged
ensi321 merged 2 commits into unstable from nc/revert-epbs-dual-state-cache on Apr 16, 2026

Conversation

Contributor

@ensi321 ensi321 commented Apr 14, 2026

To prepare state cache for ethereum/consensus-specs#5094.

Original epbs state cache PR: #8868

Summary

  • Removes payloadPresent tracking from CheckpointStateCache, its datastore layer, and FIFOBlockStateCache
  • Reverts epochIndex from bitmask-based Map<RootHex, number> back to Set<RootHex>
  • Simplifies cache key format from {rootHex}_{epoch}_{payloadPresent} to {rootHex}_{epoch}
  • Removes upgradeToGloas() / upgradeForGloas() block state cache upgrade path
  • Removes PayloadAvailability enum and dual-variant iteration in persist/prune/reload paths
  • Drops payloadPresent suffix from datastore checkpoint keys
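
The epochIndex revert listed above can be sketched as follows. This is a hypothetical, simplified model (the Map<Epoch, Set<RootHex>> shape and the addToEpochIndex helper are illustrative; the exact Lodestar types may differ):

```typescript
type RootHex = string;
type Epoch = number;

// Pre-revert, each epoch mapped roots to a number used as a bitmask
// tracking which payload variants (block state / payload state) were
// cached. After the revert, an epoch simply indexes the set of roots:
const epochIndex = new Map<Epoch, Set<RootHex>>();

function addToEpochIndex(epoch: Epoch, rootHex: RootHex): void {
  let roots = epochIndex.get(epoch);
  if (roots === undefined) {
    roots = new Set();
    epochIndex.set(epoch, roots);
  }
  roots.add(rootHex);
}

addToEpochIndex(10, "0xaa");
addToEpochIndex(10, "0xbb");
console.log(epochIndex.get(10)?.size); // 2
```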

This reverts the dual-state architecture introduced in #8868, where each Gloas block produced both a block state and a payload state. The consensus specs have since moved to a single-state model (consensus-specs PR #5094), making this complexity unnecessary.

The regen interface still carries payloadPresent parameters as a pass-through — these are ignored by the cache and will be cleaned up in a follow-up PR.

This PR was written with AI assistance (Claude Code) for code generation and codebase understanding.

Test plan

  • pnpm check-types passes
  • pnpm lint passes
  • persistentCheckpointsCache.test.ts unit tests pass
  • regen.test.ts unit tests pass

🤖 Generated with Claude Code

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request simplifies the checkpoint state cache by removing the payloadPresent flag and rolling back the dual-state logic previously introduced for the Gloas fork. The changes refactor several core components, including the state regenerator and datastore layers, to use a unified CheckpointHex representation. One improvement opportunity: PersistentCheckpointStateCache should remove empty sets from the epochIndex map during pruning to prevent unnecessary memory accumulation.

Comment thread on packages/beacon-node/src/chain/stateCache/persistentCheckpointsCache.ts (outdated)
Contributor Author

ensi321 commented Apr 14, 2026

@lodekeeper review please

Contributor

github-actions bot commented Apr 14, 2026

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: fe58128 Previous: 6641fd7 Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 889.82 us/op 908.25 us/op 0.98
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 39.200 us/op 39.193 us/op 1.00
BLS verify - blst 744.02 us/op 661.36 us/op 1.12
BLS verifyMultipleSignatures 3 - blst 1.3641 ms/op 1.3751 ms/op 0.99
BLS verifyMultipleSignatures 8 - blst 2.1346 ms/op 2.1827 ms/op 0.98
BLS verifyMultipleSignatures 32 - blst 6.8148 ms/op 6.8730 ms/op 0.99
BLS verifyMultipleSignatures 64 - blst 13.125 ms/op 13.255 ms/op 0.99
BLS verifyMultipleSignatures 128 - blst 25.716 ms/op 26.010 ms/op 0.99
BLS deserializing 10000 signatures 647.27 ms/op 636.08 ms/op 1.02
BLS deserializing 100000 signatures 6.3792 s/op 6.4062 s/op 1.00
BLS verifyMultipleSignatures - same message - 3 - blst 712.88 us/op 812.00 us/op 0.88
BLS verifyMultipleSignatures - same message - 8 - blst 839.55 us/op 897.88 us/op 0.94
BLS verifyMultipleSignatures - same message - 32 - blst 1.5220 ms/op 1.5174 ms/op 1.00
BLS verifyMultipleSignatures - same message - 64 - blst 2.3889 ms/op 2.3763 ms/op 1.01
BLS verifyMultipleSignatures - same message - 128 - blst 4.0858 ms/op 3.9731 ms/op 1.03
BLS aggregatePubkeys 32 - blst 17.729 us/op 17.428 us/op 1.02
BLS aggregatePubkeys 128 - blst 63.316 us/op 61.561 us/op 1.03
getSlashingsAndExits - default max 48.172 us/op 48.979 us/op 0.98
getSlashingsAndExits - 2k 411.60 us/op 333.88 us/op 1.23
proposeBlockBody type=full, size=empty 689.72 us/op 1.1832 ms/op 0.58
isKnown best case - 1 super set check 173.00 ns/op 165.00 ns/op 1.05
isKnown normal case - 2 super set checks 166.00 ns/op 167.00 ns/op 0.99
isKnown worse case - 16 super set checks 170.00 ns/op 166.00 ns/op 1.02
validate api signedAggregateAndProof - struct 1.5152 ms/op 1.5420 ms/op 0.98
validate gossip signedAggregateAndProof - struct 1.5003 ms/op 1.5254 ms/op 0.98
batch validate gossip attestation - vc 640000 - chunk 32 109.44 us/op 108.01 us/op 1.01
batch validate gossip attestation - vc 640000 - chunk 64 95.126 us/op 94.903 us/op 1.00
batch validate gossip attestation - vc 640000 - chunk 128 92.298 us/op 87.833 us/op 1.05
batch validate gossip attestation - vc 640000 - chunk 256 86.510 us/op 83.374 us/op 1.04
bytes32 toHexString 293.00 ns/op 287.00 ns/op 1.02
bytes32 Buffer.toString(hex) 189.00 ns/op 177.00 ns/op 1.07
bytes32 Buffer.toString(hex) from Uint8Array 267.00 ns/op 254.00 ns/op 1.05
bytes32 Buffer.toString(hex) + 0x 192.00 ns/op 178.00 ns/op 1.08
Return object 10000 times 0.21490 ns/op 0.21220 ns/op 1.01
Throw Error 10000 times 3.2686 us/op 3.2854 us/op 0.99
toHex 103.01 ns/op 97.083 ns/op 1.06
Buffer.from 86.304 ns/op 88.631 ns/op 0.97
shared Buffer 59.276 ns/op 61.960 ns/op 0.96
fastMsgIdFn sha256 / 200 bytes 1.4780 us/op 1.4840 us/op 1.00
fastMsgIdFn h32 xxhash / 200 bytes 167.00 ns/op 154.00 ns/op 1.08
fastMsgIdFn h64 xxhash / 200 bytes 207.00 ns/op 201.00 ns/op 1.03
fastMsgIdFn sha256 / 1000 bytes 4.7140 us/op 4.7290 us/op 1.00
fastMsgIdFn h32 xxhash / 1000 bytes 252.00 ns/op 237.00 ns/op 1.06
fastMsgIdFn h64 xxhash / 1000 bytes 258.00 ns/op 244.00 ns/op 1.06
fastMsgIdFn sha256 / 10000 bytes 41.780 us/op 41.315 us/op 1.01
fastMsgIdFn h32 xxhash / 10000 bytes 1.2750 us/op 1.2610 us/op 1.01
fastMsgIdFn h64 xxhash / 10000 bytes 838.00 ns/op 814.00 ns/op 1.03
send data - 1000 256B messages 5.2162 ms/op 4.0464 ms/op 1.29
send data - 1000 512B messages 5.3315 ms/op 4.2455 ms/op 1.26
send data - 1000 1024B messages 6.3705 ms/op 4.1498 ms/op 1.54
send data - 1000 1200B messages 7.2768 ms/op 4.8256 ms/op 1.51
send data - 1000 2048B messages 5.9704 ms/op 4.8659 ms/op 1.23
send data - 1000 4096B messages 6.9374 ms/op 5.7162 ms/op 1.21
send data - 1000 16384B messages 16.872 ms/op 12.359 ms/op 1.37
send data - 1000 65536B messages 326.35 ms/op 197.20 ms/op 1.65
enrSubnets - fastDeserialize 64 bits 818.00 ns/op 733.00 ns/op 1.12
enrSubnets - ssz BitVector 64 bits 266.00 ns/op 272.00 ns/op 0.98
enrSubnets - fastDeserialize 4 bits 109.00 ns/op 102.00 ns/op 1.07
enrSubnets - ssz BitVector 4 bits 427.00 ns/op 274.00 ns/op 1.56
prioritizePeers score -10:0 att 32-0.1 sync 2-0 226.62 us/op 211.86 us/op 1.07
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 263.29 us/op 236.85 us/op 1.11
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 407.73 us/op 354.66 us/op 1.15
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 656.81 us/op 611.04 us/op 1.07
prioritizePeers score 0:0 att 64-1 sync 4-1 743.24 us/op 721.13 us/op 1.03
array of 16000 items push then shift 1.2701 us/op 1.2543 us/op 1.01
LinkedList of 16000 items push then shift 8.5390 ns/op 7.1950 ns/op 1.19
array of 16000 items push then pop 70.822 ns/op 70.000 ns/op 1.01
LinkedList of 16000 items push then pop 6.2410 ns/op 5.8130 ns/op 1.07
array of 24000 items push then shift 1.8783 us/op 1.8354 us/op 1.02
LinkedList of 24000 items push then shift 8.4230 ns/op 6.7370 ns/op 1.25
array of 24000 items push then pop 101.52 ns/op 99.072 ns/op 1.02
LinkedList of 24000 items push then pop 6.3590 ns/op 5.8590 ns/op 1.09
intersect bitArray bitLen 8 4.8150 ns/op 4.6080 ns/op 1.04
intersect array and set length 8 29.390 ns/op 28.394 ns/op 1.04
intersect bitArray bitLen 128 24.020 ns/op 23.261 ns/op 1.03
intersect array and set length 128 499.55 ns/op 483.35 ns/op 1.03
bitArray.getTrueBitIndexes() bitLen 128 1.0110 us/op 1.0630 us/op 0.95
bitArray.getTrueBitIndexes() bitLen 248 1.8420 us/op 1.7910 us/op 1.03
bitArray.getTrueBitIndexes() bitLen 512 3.7040 us/op 3.6020 us/op 1.03
Full columns - reconstruct all 6 blobs 164.66 us/op 140.44 us/op 1.17
Full columns - reconstruct half of the blobs out of 6 91.979 us/op 61.502 us/op 1.50
Full columns - reconstruct single blob out of 6 34.368 us/op 32.261 us/op 1.07
Half columns - reconstruct all 6 blobs 396.19 ms/op 390.35 ms/op 1.01
Half columns - reconstruct half of the blobs out of 6 198.52 ms/op 195.79 ms/op 1.01
Half columns - reconstruct single blob out of 6 70.954 ms/op 70.375 ms/op 1.01
Full columns - reconstruct all 10 blobs 238.42 us/op 177.35 us/op 1.34
Full columns - reconstruct half of the blobs out of 10 132.05 us/op 95.272 us/op 1.39
Full columns - reconstruct single blob out of 10 32.355 us/op 29.849 us/op 1.08
Half columns - reconstruct all 10 blobs 642.94 ms/op 641.78 ms/op 1.00
Half columns - reconstruct half of the blobs out of 10 325.65 ms/op 324.84 ms/op 1.00
Half columns - reconstruct single blob out of 10 70.202 ms/op 69.443 ms/op 1.01
Full columns - reconstruct all 20 blobs 596.17 us/op 1.7621 ms/op 0.34
Full columns - reconstruct half of the blobs out of 20 199.06 us/op 171.55 us/op 1.16
Full columns - reconstruct single blob out of 20 29.510 us/op 29.995 us/op 0.98
Half columns - reconstruct all 20 blobs 1.2855 s/op 1.2905 s/op 1.00
Half columns - reconstruct half of the blobs out of 20 637.83 ms/op 647.26 ms/op 0.99
Half columns - reconstruct single blob out of 20 68.435 ms/op 71.162 ms/op 0.96
Set add up to 64 items then delete first 2.4686 us/op 2.1448 us/op 1.15
OrderedSet add up to 64 items then delete first 3.2252 us/op 3.3600 us/op 0.96
Set add up to 64 items then delete last 2.2936 us/op 2.3879 us/op 0.96
OrderedSet add up to 64 items then delete last 3.2468 us/op 3.3236 us/op 0.98
Set add up to 64 items then delete middle 2.0869 us/op 2.0192 us/op 1.03
OrderedSet add up to 64 items then delete middle 4.9246 us/op 4.7933 us/op 1.03
Set add up to 128 items then delete first 4.2661 us/op 4.3357 us/op 0.98
OrderedSet add up to 128 items then delete first 6.8627 us/op 6.5838 us/op 1.04
Set add up to 128 items then delete last 4.7922 us/op 4.4648 us/op 1.07
OrderedSet add up to 128 items then delete last 6.8365 us/op 6.2343 us/op 1.10
Set add up to 128 items then delete middle 4.6737 us/op 3.9347 us/op 1.19
OrderedSet add up to 128 items then delete middle 13.710 us/op 11.791 us/op 1.16
Set add up to 256 items then delete first 9.6468 us/op 7.9298 us/op 1.22
OrderedSet add up to 256 items then delete first 13.981 us/op 12.230 us/op 1.14
Set add up to 256 items then delete last 8.7977 us/op 7.7036 us/op 1.14
OrderedSet add up to 256 items then delete last 14.338 us/op 11.531 us/op 1.24
Set add up to 256 items then delete middle 9.7710 us/op 7.6591 us/op 1.28
OrderedSet add up to 256 items then delete middle 39.321 us/op 34.992 us/op 1.12
pass gossip attestations to forkchoice per slot 2.6304 ms/op 2.4887 ms/op 1.06
forkChoice updateHead vc 100000 bc 64 eq 0 387.32 us/op 380.30 us/op 1.02
forkChoice updateHead vc 600000 bc 64 eq 0 2.3656 ms/op 2.2816 ms/op 1.04
forkChoice updateHead vc 1000000 bc 64 eq 0 3.9257 ms/op 3.7842 ms/op 1.04
forkChoice updateHead vc 600000 bc 320 eq 0 2.3799 ms/op 2.2857 ms/op 1.04
forkChoice updateHead vc 600000 bc 1200 eq 0 2.4794 ms/op 2.3551 ms/op 1.05
forkChoice updateHead vc 600000 bc 7200 eq 0 3.3829 ms/op 2.6437 ms/op 1.28
forkChoice updateHead vc 600000 bc 64 eq 1000 2.8949 ms/op 2.8387 ms/op 1.02
forkChoice updateHead vc 600000 bc 64 eq 10000 3.0008 ms/op 2.9148 ms/op 1.03
forkChoice updateHead vc 600000 bc 64 eq 300000 6.8970 ms/op 6.6623 ms/op 1.04
computeDeltas 1400000 validators 0% inactive 12.450 ms/op 12.303 ms/op 1.01
computeDeltas 1400000 validators 10% inactive 11.593 ms/op 11.396 ms/op 1.02
computeDeltas 1400000 validators 20% inactive 10.586 ms/op 10.481 ms/op 1.01
computeDeltas 1400000 validators 50% inactive 8.1021 ms/op 8.0171 ms/op 1.01
computeDeltas 2100000 validators 0% inactive 18.736 ms/op 18.340 ms/op 1.02
computeDeltas 2100000 validators 10% inactive 17.330 ms/op 17.081 ms/op 1.01
computeDeltas 2100000 validators 20% inactive 15.936 ms/op 15.552 ms/op 1.02
computeDeltas 2100000 validators 50% inactive 9.2335 ms/op 9.1480 ms/op 1.01
altair processAttestation - 250000 vs - 7PWei normalcase 2.5770 ms/op 2.0293 ms/op 1.27
altair processAttestation - 250000 vs - 7PWei worstcase 3.3785 ms/op 2.4264 ms/op 1.39
altair processAttestation - setStatus - 1/6 committees join 118.08 us/op 101.67 us/op 1.16
altair processAttestation - setStatus - 1/3 committees join 203.47 us/op 212.17 us/op 0.96
altair processAttestation - setStatus - 1/2 committees join 292.74 us/op 284.24 us/op 1.03
altair processAttestation - setStatus - 2/3 committees join 366.68 us/op 374.98 us/op 0.98
altair processAttestation - setStatus - 4/5 committees join 521.28 us/op 511.43 us/op 1.02
altair processAttestation - setStatus - 100% committees join 643.22 us/op 609.25 us/op 1.06
altair processBlock - 250000 vs - 7PWei normalcase 5.1305 ms/op 3.3482 ms/op 1.53
altair processBlock - 250000 vs - 7PWei normalcase hashState 18.076 ms/op 14.500 ms/op 1.25
altair processBlock - 250000 vs - 7PWei worstcase 23.999 ms/op 21.305 ms/op 1.13
altair processBlock - 250000 vs - 7PWei worstcase hashState 47.073 ms/op 39.251 ms/op 1.20
phase0 processBlock - 250000 vs - 7PWei normalcase 1.4961 ms/op 1.6307 ms/op 0.92
phase0 processBlock - 250000 vs - 7PWei worstcase 18.964 ms/op 17.754 ms/op 1.07
altair processEth1Data - 250000 vs - 7PWei normalcase 299.17 us/op 292.72 us/op 1.02
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:16 5.5150 us/op 3.5460 us/op 1.56
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:220 21.383 us/op 20.693 us/op 1.03
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:43 7.1670 us/op 5.9990 us/op 1.19
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:19 3.7470 us/op 4.1440 us/op 0.90
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1021 92.606 us/op 91.593 us/op 1.01
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11778 1.4362 ms/op 1.5820 ms/op 0.91
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 2.2189 ms/op 1.8029 ms/op 1.23
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 2.3342 ms/op 1.7971 ms/op 1.30
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 5.2651 ms/op 3.7992 ms/op 1.39
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.3930 ms/op 2.0951 ms/op 1.14
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 6.3206 ms/op 4.0810 ms/op 1.55
Tree 40 250000 create 384.38 ms/op 349.78 ms/op 1.10
Tree 40 250000 get(125000) 91.924 ns/op 94.801 ns/op 0.97
Tree 40 250000 set(125000) 1.0659 us/op 1.0247 us/op 1.04
Tree 40 250000 toArray() 20.334 ms/op 9.2300 ms/op 2.20
Tree 40 250000 iterate all - toArray() + loop 23.541 ms/op 9.5867 ms/op 2.46
Tree 40 250000 iterate all - get(i) 42.321 ms/op 34.132 ms/op 1.24
Array 250000 create 2.4069 ms/op 2.1641 ms/op 1.11
Array 250000 clone - spread 739.59 us/op 697.58 us/op 1.06
Array 250000 get(125000) 0.30700 ns/op 0.30000 ns/op 1.02
Array 250000 set(125000) 0.30600 ns/op 0.30300 ns/op 1.01
Array 250000 iterate all - loop 57.502 us/op 58.469 us/op 0.98
phase0 afterProcessEpoch - 250000 vs - 7PWei 40.894 ms/op 55.245 ms/op 0.74
Array.fill - length 1000000 2.3319 ms/op 2.2904 ms/op 1.02
Array push - length 1000000 9.9816 ms/op 9.7492 ms/op 1.02
Array.get 0.20693 ns/op 0.20258 ns/op 1.02
Uint8Array.get 0.25304 ns/op 0.23984 ns/op 1.06
phase0 beforeProcessEpoch - 250000 vs - 7PWei 17.231 ms/op 15.253 ms/op 1.13
altair processEpoch - mainnet_e81889 279.84 ms/op 320.58 ms/op 0.87
mainnet_e81889 - altair beforeProcessEpoch 38.088 ms/op 35.932 ms/op 1.06
mainnet_e81889 - altair processJustificationAndFinalization 6.6500 us/op 5.7300 us/op 1.16
mainnet_e81889 - altair processInactivityUpdates 5.8504 ms/op 3.6228 ms/op 1.61
mainnet_e81889 - altair processRewardsAndPenalties 20.228 ms/op 20.407 ms/op 0.99
mainnet_e81889 - altair processRegistryUpdates 541.00 ns/op 550.00 ns/op 0.98
mainnet_e81889 - altair processSlashings 136.00 ns/op 139.00 ns/op 0.98
mainnet_e81889 - altair processEth1DataReset 131.00 ns/op 128.00 ns/op 1.02
mainnet_e81889 - altair processEffectiveBalanceUpdates 7.4068 ms/op 3.6970 ms/op 2.00
mainnet_e81889 - altair processSlashingsReset 735.00 ns/op 695.00 ns/op 1.06
mainnet_e81889 - altair processRandaoMixesReset 1.4770 us/op 1.5390 us/op 0.96
mainnet_e81889 - altair processHistoricalRootsUpdate 133.00 ns/op 131.00 ns/op 1.02
mainnet_e81889 - altair processParticipationFlagUpdates 476.00 ns/op 445.00 ns/op 1.07
mainnet_e81889 - altair processSyncCommitteeUpdates 110.00 ns/op 111.00 ns/op 0.99
mainnet_e81889 - altair afterProcessEpoch 41.853 ms/op 42.005 ms/op 1.00
capella processEpoch - mainnet_e217614 784.58 ms/op 944.45 ms/op 0.83
mainnet_e217614 - capella beforeProcessEpoch 59.475 ms/op 57.692 ms/op 1.03
mainnet_e217614 - capella processJustificationAndFinalization 6.6500 us/op 7.1040 us/op 0.94
mainnet_e217614 - capella processInactivityUpdates 16.264 ms/op 16.327 ms/op 1.00
mainnet_e217614 - capella processRewardsAndPenalties 91.361 ms/op 98.797 ms/op 0.92
mainnet_e217614 - capella processRegistryUpdates 4.4940 us/op 4.6250 us/op 0.97
mainnet_e217614 - capella processSlashings 135.00 ns/op 142.00 ns/op 0.95
mainnet_e217614 - capella processEth1DataReset 126.00 ns/op 136.00 ns/op 0.93
mainnet_e217614 - capella processEffectiveBalanceUpdates 17.932 ms/op 6.0193 ms/op 2.98
mainnet_e217614 - capella processSlashingsReset 683.00 ns/op 694.00 ns/op 0.98
mainnet_e217614 - capella processRandaoMixesReset 1.3380 us/op 1.3980 us/op 0.96
mainnet_e217614 - capella processHistoricalRootsUpdate 131.00 ns/op 134.00 ns/op 0.98
mainnet_e217614 - capella processParticipationFlagUpdates 452.00 ns/op 431.00 ns/op 1.05
mainnet_e217614 - capella afterProcessEpoch 106.96 ms/op 112.08 ms/op 0.95
phase0 processEpoch - mainnet_e58758 330.11 ms/op 352.63 ms/op 0.94
mainnet_e58758 - phase0 beforeProcessEpoch 83.330 ms/op 74.649 ms/op 1.12
mainnet_e58758 - phase0 processJustificationAndFinalization 6.6960 us/op 7.6330 us/op 0.88
mainnet_e58758 - phase0 processRewardsAndPenalties 15.833 ms/op 17.135 ms/op 0.92
mainnet_e58758 - phase0 processRegistryUpdates 2.2850 us/op 2.2740 us/op 1.00
mainnet_e58758 - phase0 processSlashings 135.00 ns/op 138.00 ns/op 0.98
mainnet_e58758 - phase0 processEth1DataReset 129.00 ns/op 142.00 ns/op 0.91
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.2596 ms/op 1.0571 ms/op 1.19
mainnet_e58758 - phase0 processSlashingsReset 893.00 ns/op 913.00 ns/op 0.98
mainnet_e58758 - phase0 processRandaoMixesReset 1.2750 us/op 1.4670 us/op 0.87
mainnet_e58758 - phase0 processHistoricalRootsUpdate 135.00 ns/op 143.00 ns/op 0.94
mainnet_e58758 - phase0 processParticipationRecordUpdates 963.00 ns/op 1.2560 us/op 0.77
mainnet_e58758 - phase0 afterProcessEpoch 33.016 ms/op 33.400 ms/op 0.99
phase0 processEffectiveBalanceUpdates - 250000 normalcase 970.85 us/op 1.0346 ms/op 0.94
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 1.5050 ms/op 1.6172 ms/op 0.93
altair processInactivityUpdates - 250000 normalcase 10.357 ms/op 10.671 ms/op 0.97
altair processInactivityUpdates - 250000 worstcase 11.141 ms/op 10.624 ms/op 1.05
phase0 processRegistryUpdates - 250000 normalcase 2.1720 us/op 2.7270 us/op 0.80
phase0 processRegistryUpdates - 250000 badcase_full_deposits 137.68 us/op 145.36 us/op 0.95
phase0 processRegistryUpdates - 250000 worstcase 0.5 59.040 ms/op 61.207 ms/op 0.96
altair processRewardsAndPenalties - 250000 normalcase 15.351 ms/op 16.432 ms/op 0.93
altair processRewardsAndPenalties - 250000 worstcase 15.301 ms/op 16.050 ms/op 0.95
phase0 getAttestationDeltas - 250000 normalcase 5.3297 ms/op 5.4007 ms/op 0.99
phase0 getAttestationDeltas - 250000 worstcase 5.2896 ms/op 5.5772 ms/op 0.95
phase0 processSlashings - 250000 worstcase 61.430 us/op 62.729 us/op 0.98
altair processSyncCommitteeUpdates - 250000 10.269 ms/op 10.298 ms/op 1.00
BeaconState.hashTreeRoot - No change 172.00 ns/op 172.00 ns/op 1.00
BeaconState.hashTreeRoot - 1 full validator 77.127 us/op 99.767 us/op 0.77
BeaconState.hashTreeRoot - 32 full validator 898.63 us/op 1.1776 ms/op 0.76
BeaconState.hashTreeRoot - 512 full validator 8.5132 ms/op 10.068 ms/op 0.85
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 103.39 us/op 118.88 us/op 0.87
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.4702 ms/op 1.5528 ms/op 0.95
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 21.408 ms/op 27.833 ms/op 0.77
BeaconState.hashTreeRoot - 1 balances 85.525 us/op 102.08 us/op 0.84
BeaconState.hashTreeRoot - 32 balances 762.44 us/op 865.60 us/op 0.88
BeaconState.hashTreeRoot - 512 balances 6.1511 ms/op 8.0146 ms/op 0.77
BeaconState.hashTreeRoot - 250000 balances 136.08 ms/op 205.72 ms/op 0.66
aggregationBits - 2048 els - zipIndexesInBitList 19.889 us/op 19.567 us/op 1.02
regular array get 100000 times 22.992 us/op 23.345 us/op 0.98
wrappedArray get 100000 times 22.960 us/op 23.395 us/op 0.98
arrayWithProxy get 100000 times 9.7539 ms/op 10.240 ms/op 0.95
ssz.Root.equals 21.430 ns/op 21.909 ns/op 0.98
byteArrayEquals 21.181 ns/op 21.721 ns/op 0.98
Buffer.compare 8.7150 ns/op 9.0960 ns/op 0.96
processSlot - 1 slots 9.1860 us/op 10.140 us/op 0.91
processSlot - 32 slots 2.0709 ms/op 2.4631 ms/op 0.84
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 6.4127 ms/op 4.0644 ms/op 1.58
getCommitteeAssignments - req 1 vs - 250000 vc 1.6293 ms/op 1.6848 ms/op 0.97
getCommitteeAssignments - req 100 vs - 250000 vc 3.3740 ms/op 3.4735 ms/op 0.97
getCommitteeAssignments - req 1000 vs - 250000 vc 3.5987 ms/op 3.7456 ms/op 0.96
findModifiedValidators - 10000 modified validators 740.64 ms/op 705.31 ms/op 1.05
findModifiedValidators - 1000 modified validators 605.85 ms/op 482.14 ms/op 1.26
findModifiedValidators - 100 modified validators 318.59 ms/op 283.85 ms/op 1.12
findModifiedValidators - 10 modified validators 249.43 ms/op 271.85 ms/op 0.92
findModifiedValidators - 1 modified validators 145.00 ms/op 180.26 ms/op 0.80
findModifiedValidators - no difference 191.93 ms/op 148.64 ms/op 1.29
migrate state 1500000 validators, 3400 modified, 2000 new 3.0815 s/op 3.8385 s/op 0.80
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 3.5400 ns/op 3.8600 ns/op 0.92
state getBlockRootAtSlot - 250000 vs - 7PWei 405.76 ns/op 391.21 ns/op 1.04
computeProposerIndex 100000 validators 1.3160 ms/op 1.3773 ms/op 0.96
getNextSyncCommitteeIndices 1000 validators 2.7773 ms/op 2.9088 ms/op 0.95
getNextSyncCommitteeIndices 10000 validators 24.641 ms/op 25.640 ms/op 0.96
getNextSyncCommitteeIndices 100000 validators 89.279 ms/op 87.156 ms/op 1.02
computeProposers - vc 250000 594.93 us/op 565.37 us/op 1.05
computeEpochShuffling - vc 250000 38.765 ms/op 41.167 ms/op 0.94
getNextSyncCommittee - vc 250000 9.8627 ms/op 9.7602 ms/op 1.01
nodejs block root to RootHex using toHex 103.57 ns/op 115.16 ns/op 0.90
nodejs block root to RootHex using toRootHex 65.284 ns/op 70.198 ns/op 0.93
nodejs fromHex(blob) 710.32 us/op 805.07 us/op 0.88
nodejs fromHexInto(blob) 616.49 us/op 636.46 us/op 0.97
nodejs block root to RootHex using the deprecated toHexString 348.22 ns/op 495.31 ns/op 0.70
nodejs byteArrayEquals 32 bytes (block root) 25.123 ns/op 26.277 ns/op 0.96
nodejs byteArrayEquals 48 bytes (pubkey) 36.653 ns/op 37.807 ns/op 0.97
nodejs byteArrayEquals 96 bytes (signature) 31.449 ns/op 36.325 ns/op 0.87
nodejs byteArrayEquals 1024 bytes 37.816 ns/op 43.684 ns/op 0.87
nodejs byteArrayEquals 131072 bytes (blob) 1.7271 us/op 1.7854 us/op 0.97
browser block root to RootHex using toHex 139.66 ns/op 146.04 ns/op 0.96
browser block root to RootHex using toRootHex 127.09 ns/op 131.98 ns/op 0.96
browser fromHex(blob) 1.5593 ms/op 1.6270 ms/op 0.96
browser fromHexInto(blob) 623.07 us/op 615.50 us/op 1.01
browser block root to RootHex using the deprecated toHexString 455.49 ns/op 323.94 ns/op 1.41
browser byteArrayEquals 32 bytes (block root) 27.579 ns/op 27.503 ns/op 1.00
browser byteArrayEquals 48 bytes (pubkey) 39.048 ns/op 38.760 ns/op 1.01
browser byteArrayEquals 96 bytes (signature) 72.903 ns/op 72.869 ns/op 1.00
browser byteArrayEquals 1024 bytes 751.76 ns/op 740.73 ns/op 1.01
browser byteArrayEquals 131072 bytes (blob) 94.608 us/op 93.212 us/op 1.01

by benchmarkbot/action

Contributor

@lodekeeper lodekeeper left a comment


Nice cleanup — this is a significant simplification now that the specs have moved to a single-state model. A few notes:

1. epochIndex empty-set cleanup (agree with Gemini)

In processState() at line ~747, after this.epochIndex.get(epoch)?.delete(rootHex), the empty Set should be removed from the map. The old removeFromEpochIndex did this. Without it, empty sets accumulate over epochs and add overhead to the epoch sorting in getLatest/getOrReloadLatest/findSeedStateToReload. Suggested fix:

const rootSet = this.epochIndex.get(epoch);
if (rootSet) {
  rootSet.delete(rootHex);
  if (rootSet.size === 0) {
    this.epochIndex.delete(epoch);
  }
}

2. Datastore key backward compatibility

Old persisted keys are 41 bytes (Checkpoint + 1-byte payloadPresent suffix). After this PR, datastoreKeyToCheckpoint calls ssz.phase0.Checkpoint.deserialize(key) directly, which expects exactly 40 bytes. This means:

  • Old DB-persisted states become unreachable (new 40-byte lookups won't match old 41-byte keys)
  • getLatestSafeDatastoreKey iterating old keys would hit deserialization errors on the 41-byte entries

If this is intentional (i.e., nodes must checkpoint-sync after upgrade), worth documenting. If not, a migration path that strips the trailing byte from existing keys might be needed.
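
The size mismatch can be illustrated with a small sketch: a serialized phase0.Checkpoint is 40 bytes (8-byte uint64 epoch plus 32-byte root), while the dual-state format appended a 1-byte payloadPresent suffix. The normalizeLegacyKey helper below is hypothetical, one possible shape for the migration path suggested above:

```typescript
// 40 bytes: uint64 epoch (8) + Bytes32 root (32)
const CHECKPOINT_KEY_SIZE = 40;

// Hypothetical migration helper: strip the trailing payloadPresent
// byte from a pre-revert (41-byte) key, pass new keys through as-is.
function normalizeLegacyKey(key: Uint8Array): Uint8Array {
  if (key.length === CHECKPOINT_KEY_SIZE) return key;
  if (key.length === CHECKPOINT_KEY_SIZE + 1) {
    return key.subarray(0, CHECKPOINT_KEY_SIZE);
  }
  throw new Error(`unexpected checkpoint key length ${key.length}`);
}

console.log(normalizeLegacyKey(new Uint8Array(41)).length); // 40
```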

3. Minor: FileCPStateDatastore file name length

CHECKPOINT_FILE_NAME_LENGTH changed from 84 to 82, which correctly matches the new 40-byte key. Same point as above applies — old 84-char files on disk won't be picked up.

Everything else looks good — the bitmask removal, cache key simplification, regen cleanup, and test updates are all straightforward. The _payloadPresent pass-throughs and remaining CheckpointHexPayload type are fine to clean up in the follow-up PR.

Contributor Author

ensi321 commented Apr 14, 2026

Datastore key backward compatibility

Regarding this: I think we remove all existing keys (41-byte) on startup, so there shouldn't be any 41-byte keys left. @lodekeeper

// all checkpoint states from the last run are not trusted, remove them
// otherwise if we have a bad checkpoint state from the last run, the node get stucked
// this was found during mekong devnet, see https://github.com/ChainSafe/lodestar/pull/7255
await Promise.all(persistedKeys.map((key) => this.datastore.remove(key)));

@ensi321 ensi321 marked this pull request as ready for review April 14, 2026 05:41
@ensi321 ensi321 requested a review from a team as a code owner April 14, 2026 05:41

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

.filter((fileName) => fileName.startsWith("0x") && fileName.length === CHECKPOINT_FILE_NAME_LENGTH)

P2: Keep reading legacy checkpoint-state filenames

The filename filter now only accepts 82-character checkpoint keys, so previously persisted files that include the payload-status suffix (84 characters) are no longer returned by readKeys(). In file-datastore mode this prevents init()/readLatestSafe() from seeing existing checkpoint states after upgrade, which both drops recoverable data and leaves stale files undeleted.


Before:

function toCacheKey(cp: CheckpointHexPayload): CacheKey {
  return `${cp.rootHex}_${cp.epoch}_${cp.payloadPresent}`;
}

After:

function toCacheKey(cp: CheckpointHex): CacheKey {
  return `${cp.rootHex}_${cp.epoch}`;
}


P1: Preserve payload status in checkpoint cache keys

Dropping payloadPresent from the cache key collapses distinct Gloas checkpoint variants into one entry, but the node still produces both variants (block-path checkpoint state during block import and payload-path checkpoint state during payload import for the same root/epoch). With this key shape, whichever variant is written last overwrites the other, so regen can no longer select the state that matches ProtoBlock.payloadStatus, and a PENDING branch can be replayed from a FULL checkpoint state.


Contributor


This concern doesn't apply. The spec moved to a single-state model (consensus-specs#5094) — store.payloads is now Set[Root], not a per-variant map. The node no longer produces distinct FULL/EMPTY checkpoint variants for the same root/epoch, so collapsing the cache key is correct by design. That's the whole point of this PR.

Contributor

@twoeths twoeths left a comment


How do we make sure this is the same as pre-#8868?
I think we want a git revert commit to make sure there is no issue?
Approved if it's already a git revert.

Contributor

@lodekeeper lodekeeper left a comment


@twoeths This is not a `git revert` — it's a manual rollback. A clean `git revert` of #8868 wouldn't apply since `unstable` has evolved significantly since that merge (the touched files have had many other changes layered on top).

The PR manually removes the dual-state additions (#8868 introduced `PayloadAvailability` bitmask, `payloadPresent` in cache keys/datastore, `upgradeToGloas`, etc.) while keeping the current codebase shape intact. From my review of all 14 files, the result correctly matches the pre-#8868 semantics — single checkpoint per root/epoch, no payload variant tracking.

The main areas I'd verify are the ones I flagged: epochIndex empty-set cleanup (missing from the manual rollback but was present pre-#8868) and datastore key backward compat (old 41-byte keys become orphaned).

@lodekeeper
Contributor

Re: Codex review body (legacy filenames) — this is the same backward compatibility concern I raised in my review (points #2 and #3). Old 41-byte DB keys and 84-char file-datastore filenames become orphaned after this change. Either a migration path is needed or it should be documented that checkpoint-sync is required after upgrade.

Contributor

twoeths commented Apr 15, 2026

Re: Codex review body (legacy filenames) — this is the same backward compatibility concern I raised in my review (points #2 and #3). Old 41-byte DB keys and 84-char file-datastore filenames become orphaned after this change. Either a migration path is needed or it should be documented that checkpoint-sync is required after upgrade.

@lodekeeper all of them are gone after startup, this PR is safe to merge

this.logger.info("Removed persisted checkpoint states from the last run", {

@ensi321 ensi321 mentioned this pull request Apr 15, 2026
@lodekeeper
Contributor

@twoeths You're right — init() does a full wipe of all persisted checkpoint states on startup (L201-204). For the DB datastore (production), readKeys() returns all keys from the bucket regardless of size, so old 41-byte entries get cleaned up.

Minor note: the file datastore's readKeys() filters by fileName.length === 82, so old 84-char debug files would be orphaned — but that's the debug-only path, not a merge blocker.
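
The filename arithmetic behind the 82/84 split is simple: a checkpoint file name is the "0x" prefix plus two hex characters per key byte. A quick sketch (the helper name is illustrative, not a real Lodestar function):

```typescript
// "0x" prefix (2 chars) + 2 hex chars per byte of the serialized key
const keyFileNameLength = (keyBytes: number): number => 2 + keyBytes * 2;

console.log(keyFileNameLength(40)); // 82: new CHECKPOINT_FILE_NAME_LENGTH
console.log(keyFileNameLength(41)); // 84: legacy dual-state filenames
```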

Backward-compat concern is resolved. The only remaining item from my review is the epochIndex empty-set cleanup after delete() — still a valid fix needed to avoid empty-set buildup.

Contributor

twoeths commented Apr 16, 2026

Backward-compat concern is resolved. The only remaining item from my review is the epochIndex empty-set cleanup after delete() — still a valid fix needed to avoid empty-set buildup.

@lodekeeper this was resolved, see 37c688c

@lodekeeper
Contributor

@twoeths Confirmed — commit 37c688c addresses the epochIndex empty-set cleanup exactly as suggested. All review items are resolved. LGTM.

@ensi321 ensi321 merged commit ade910f into unstable Apr 16, 2026
20 checks passed
@ensi321 ensi321 deleted the nc/revert-epbs-dual-state-cache branch April 16, 2026 06:50

codecov bot commented Apr 16, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 52.53%. Comparing base (6641fd7) to head (37c688c).
⚠️ Report is 3 commits behind head on unstable.

Additional details and impacted files
@@            Coverage Diff            @@
##           unstable    #9215   +/-   ##
=========================================
  Coverage     52.53%   52.53%           
=========================================
  Files           848      848           
  Lines         61424    61423    -1     
  Branches       4528     4527    -1     
=========================================
  Hits          32269    32269           
+ Misses        29090    29089    -1     
  Partials         65       65           
