fix: use fork choice for parent payload status in block production #9209

Merged
nflaig merged 19 commits into unstable from nflaig/forkchoice-parent-payload-status
Apr 21, 2026

Conversation

@nflaig
Member

@nflaig nflaig commented Apr 13, 2026

Related to changes in ethereum/consensus-specs#5094, we want to avoid using is_parent_block_full

@nflaig nflaig requested a review from a team as a code owner April 13, 2026 12:03
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request updates the block production logic to determine if a parent block is full by querying the fork choice instead of using a state-transition utility. This change involves passing the forkChoice object through the prepareExecutionPayload and preparePayloadAttributes functions. A review comment suggests optimizing the check for a full parent block by moving the lookup inside the conditional statement to leverage short-circuit evaluation, ensuring the lookup only occurs for post-Gloas states.
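The short-circuit suggestion from the review can be sketched as follows. This is a minimal illustrative sketch, not the PR's actual code: `ForkChoiceLike`, `isParentBlockFull`, the `payloadStatus` field, and the fork-sequence parameters are all assumed names for illustration.

```typescript
// Illustrative sketch of the suggested short-circuit: because `&&` evaluates
// left to right, the fork choice lookup only runs for post-Gloas states.
// All names here are assumptions, not the PR's actual identifiers.
interface ForkChoiceLike {
  getBlock(root: string): {payloadStatus: "FULL" | "EMPTY"} | null;
}

function isParentBlockFull(
  forkChoice: ForkChoiceLike,
  parentBlockRoot: string,
  forkSeq: number,
  gloasForkSeq: number
): boolean {
  // Pre-Gloas: the left operand is false, so fork choice is never queried.
  return forkSeq >= gloasForkSeq && forkChoice.getBlock(parentBlockRoot)?.payloadStatus === "FULL";
}
```

The design point is simply that hot block-production paths should not pay for a fork choice lookup on forks where the answer is irrelevant.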

Comment thread packages/beacon-node/src/chain/produceBlock/produceBlockBody.ts Outdated

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3e578ab475

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread packages/beacon-node/src/chain/produceBlock/produceBlockBody.ts Outdated
@github-actions
Contributor

github-actions Bot commented Apr 13, 2026

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 48b907a Previous: e341cdc Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 1.0205 ms/op 898.83 us/op 1.14
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 39.929 us/op 40.755 us/op 0.98
BLS verify - blst 763.91 us/op 752.97 us/op 1.01
BLS verifyMultipleSignatures 3 - blst 1.3961 ms/op 1.3816 ms/op 1.01
BLS verifyMultipleSignatures 8 - blst 2.2150 ms/op 2.2163 ms/op 1.00
BLS verifyMultipleSignatures 32 - blst 6.9331 ms/op 6.9482 ms/op 1.00
BLS verifyMultipleSignatures 64 - blst 13.516 ms/op 13.890 ms/op 0.97
BLS verifyMultipleSignatures 128 - blst 26.233 ms/op 25.994 ms/op 1.01
BLS deserializing 10000 signatures 658.14 ms/op 649.36 ms/op 1.01
BLS deserializing 100000 signatures 6.5817 s/op 6.4134 s/op 1.03
BLS verifyMultipleSignatures - same message - 3 - blst 784.00 us/op 840.33 us/op 0.93
BLS verifyMultipleSignatures - same message - 8 - blst 925.69 us/op 958.91 us/op 0.97
BLS verifyMultipleSignatures - same message - 32 - blst 1.5822 ms/op 1.5147 ms/op 1.04
BLS verifyMultipleSignatures - same message - 64 - blst 2.3951 ms/op 2.3340 ms/op 1.03
BLS verifyMultipleSignatures - same message - 128 - blst 4.1257 ms/op 3.9720 ms/op 1.04
BLS aggregatePubkeys 32 - blst 17.819 us/op 17.655 us/op 1.01
BLS aggregatePubkeys 128 - blst 63.384 us/op 63.448 us/op 1.00
getSlashingsAndExits - default max 49.308 us/op 46.473 us/op 1.06
getSlashingsAndExits - 2k 337.42 us/op 331.71 us/op 1.02
proposeBlockBody type=full, size=empty 788.62 us/op 1.4129 ms/op 0.56
isKnown best case - 1 super set check 167.00 ns/op 168.00 ns/op 0.99
isKnown normal case - 2 super set checks 164.00 ns/op 164.00 ns/op 1.00
isKnown worse case - 16 super set checks 163.00 ns/op 165.00 ns/op 0.99
validate api signedAggregateAndProof - struct 1.5302 ms/op 1.5492 ms/op 0.99
validate gossip signedAggregateAndProof - struct 1.5318 ms/op 1.5402 ms/op 0.99
batch validate gossip attestation - vc 640000 - chunk 32 107.97 us/op 104.81 us/op 1.03
batch validate gossip attestation - vc 640000 - chunk 64 93.443 us/op 92.422 us/op 1.01
batch validate gossip attestation - vc 640000 - chunk 128 86.672 us/op 86.588 us/op 1.00
batch validate gossip attestation - vc 640000 - chunk 256 84.416 us/op 82.245 us/op 1.03
bytes32 toHexString 280.00 ns/op 298.00 ns/op 0.94
bytes32 Buffer.toString(hex) 181.00 ns/op 178.00 ns/op 1.02
bytes32 Buffer.toString(hex) from Uint8Array 245.00 ns/op 243.00 ns/op 1.01
bytes32 Buffer.toString(hex) + 0x 178.00 ns/op 177.00 ns/op 1.01
Return object 10000 times 0.21000 ns/op 0.21070 ns/op 1.00
Throw Error 10000 times 3.2747 us/op 3.2942 us/op 0.99
toHex 104.14 ns/op 106.73 ns/op 0.98
Buffer.from 85.580 ns/op 91.473 ns/op 0.94
shared Buffer 56.209 ns/op 60.952 ns/op 0.92
fastMsgIdFn sha256 / 200 bytes 1.4550 us/op 1.5030 us/op 0.97
fastMsgIdFn h32 xxhash / 200 bytes 149.00 ns/op 152.00 ns/op 0.98
fastMsgIdFn h64 xxhash / 200 bytes 213.00 ns/op 204.00 ns/op 1.04
fastMsgIdFn sha256 / 1000 bytes 4.7440 us/op 4.8430 us/op 0.98
fastMsgIdFn h32 xxhash / 1000 bytes 244.00 ns/op 251.00 ns/op 0.97
fastMsgIdFn h64 xxhash / 1000 bytes 256.00 ns/op 255.00 ns/op 1.00
fastMsgIdFn sha256 / 10000 bytes 40.406 us/op 43.060 us/op 0.94
fastMsgIdFn h32 xxhash / 10000 bytes 1.2300 us/op 1.2670 us/op 0.97
fastMsgIdFn h64 xxhash / 10000 bytes 827.00 ns/op 820.00 ns/op 1.01
send data - 1000 256B messages 4.2142 ms/op 3.7177 ms/op 1.13
send data - 1000 512B messages 4.2846 ms/op 3.6796 ms/op 1.16
send data - 1000 1024B messages 4.2609 ms/op 4.2670 ms/op 1.00
send data - 1000 1200B messages 4.5340 ms/op 4.5154 ms/op 1.00
send data - 1000 2048B messages 4.7029 ms/op 4.5184 ms/op 1.04
send data - 1000 4096B messages 5.5835 ms/op 4.9165 ms/op 1.14
send data - 1000 16384B messages 12.501 ms/op 29.837 ms/op 0.42
send data - 1000 65536B messages 142.05 ms/op 225.95 ms/op 0.63
enrSubnets - fastDeserialize 64 bits 745.00 ns/op 742.00 ns/op 1.00
enrSubnets - ssz BitVector 64 bits 275.00 ns/op 272.00 ns/op 1.01
enrSubnets - fastDeserialize 4 bits 104.00 ns/op 103.00 ns/op 1.01
enrSubnets - ssz BitVector 4 bits 272.00 ns/op 278.00 ns/op 0.98
prioritizePeers score -10:0 att 32-0.1 sync 2-0 206.13 us/op 214.74 us/op 0.96
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 240.21 us/op 237.76 us/op 1.01
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 360.26 us/op 345.29 us/op 1.04
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 627.03 us/op 612.74 us/op 1.02
prioritizePeers score 0:0 att 64-1 sync 4-1 729.83 us/op 716.40 us/op 1.02
array of 16000 items push then shift 1.2634 us/op 1.3180 us/op 0.96
LinkedList of 16000 items push then shift 7.3430 ns/op 6.8410 ns/op 1.07
array of 16000 items push then pop 67.996 ns/op 66.041 ns/op 1.03
LinkedList of 16000 items push then pop 6.0120 ns/op 6.2580 ns/op 0.96
array of 24000 items push then shift 1.8502 us/op 1.9602 us/op 0.94
LinkedList of 24000 items push then shift 6.9470 ns/op 6.4390 ns/op 1.08
array of 24000 items push then pop 96.655 ns/op 93.124 ns/op 1.04
LinkedList of 24000 items push then pop 5.8470 ns/op 6.2270 ns/op 0.94
intersect bitArray bitLen 8 4.5980 ns/op 4.8170 ns/op 0.95
intersect array and set length 8 28.255 ns/op 30.624 ns/op 0.92
intersect bitArray bitLen 128 23.138 ns/op 24.319 ns/op 0.95
intersect array and set length 128 479.82 ns/op 517.70 ns/op 0.93
bitArray.getTrueBitIndexes() bitLen 128 979.00 ns/op 990.00 ns/op 0.99
bitArray.getTrueBitIndexes() bitLen 248 1.7430 us/op 1.7500 us/op 1.00
bitArray.getTrueBitIndexes() bitLen 512 3.5100 us/op 3.5720 us/op 0.98
Full columns - reconstruct all 6 blobs 169.66 us/op 181.56 us/op 0.93
Full columns - reconstruct half of the blobs out of 6 63.049 us/op 68.478 us/op 0.92
Full columns - reconstruct single blob out of 6 33.913 us/op 33.570 us/op 1.01
Half columns - reconstruct all 6 blobs 404.35 ms/op 395.57 ms/op 1.02
Half columns - reconstruct half of the blobs out of 6 197.74 ms/op 197.21 ms/op 1.00
Half columns - reconstruct single blob out of 6 72.400 ms/op 68.226 ms/op 1.06
Full columns - reconstruct all 10 blobs 200.33 us/op 273.18 us/op 0.73
Full columns - reconstruct half of the blobs out of 10 94.814 us/op 132.49 us/op 0.72
Full columns - reconstruct single blob out of 10 29.444 us/op 34.939 us/op 0.84
Half columns - reconstruct all 10 blobs 646.46 ms/op 644.80 ms/op 1.00
Half columns - reconstruct half of the blobs out of 10 325.11 ms/op 325.51 ms/op 1.00
Half columns - reconstruct single blob out of 10 71.081 ms/op 68.699 ms/op 1.03
Full columns - reconstruct all 20 blobs 1.6329 ms/op 2.0670 ms/op 0.79
Full columns - reconstruct half of the blobs out of 20 238.06 us/op 301.07 us/op 0.79
Full columns - reconstruct single blob out of 20 30.063 us/op 36.604 us/op 0.82
Half columns - reconstruct all 20 blobs 1.2878 s/op 1.2821 s/op 1.00
Half columns - reconstruct half of the blobs out of 20 646.23 ms/op 635.77 ms/op 1.02
Half columns - reconstruct single blob out of 20 69.995 ms/op 67.235 ms/op 1.04
Set add up to 64 items then delete first 2.4008 us/op 2.6578 us/op 0.90
OrderedSet add up to 64 items then delete first 3.1750 us/op 3.4624 us/op 0.92
Set add up to 64 items then delete last 2.2235 us/op 2.4351 us/op 0.91
OrderedSet add up to 64 items then delete last 3.1553 us/op 3.5617 us/op 0.89
Set add up to 64 items then delete middle 2.0011 us/op 2.2293 us/op 0.90
OrderedSet add up to 64 items then delete middle 4.5160 us/op 5.0736 us/op 0.89
Set add up to 128 items then delete first 3.9789 us/op 4.3514 us/op 0.91
OrderedSet add up to 128 items then delete first 6.2909 us/op 6.5374 us/op 0.96
Set add up to 128 items then delete last 3.6748 us/op 4.3313 us/op 0.85
OrderedSet add up to 128 items then delete last 5.9224 us/op 6.4182 us/op 0.92
Set add up to 128 items then delete middle 3.6498 us/op 4.0585 us/op 0.90
OrderedSet add up to 128 items then delete middle 11.529 us/op 12.407 us/op 0.93
Set add up to 256 items then delete first 7.4591 us/op 8.0843 us/op 0.92
OrderedSet add up to 256 items then delete first 12.368 us/op 12.009 us/op 1.03
Set add up to 256 items then delete last 7.2539 us/op 7.9950 us/op 0.91
OrderedSet add up to 256 items then delete last 11.504 us/op 12.531 us/op 0.92
Set add up to 256 items then delete middle 7.2679 us/op 7.9831 us/op 0.91
OrderedSet add up to 256 items then delete middle 34.097 us/op 36.534 us/op 0.93
pass gossip attestations to forkchoice per slot 2.5333 ms/op 2.5308 ms/op 1.00
forkChoice updateHead vc 100000 bc 64 eq 0 383.60 us/op 471.64 us/op 0.81
forkChoice updateHead vc 600000 bc 64 eq 0 2.2914 ms/op 2.8241 ms/op 0.81
forkChoice updateHead vc 1000000 bc 64 eq 0 3.7013 ms/op 4.7062 ms/op 0.79
forkChoice updateHead vc 600000 bc 320 eq 0 2.1982 ms/op 2.8365 ms/op 0.77
forkChoice updateHead vc 600000 bc 1200 eq 0 2.2689 ms/op 2.9105 ms/op 0.78
forkChoice updateHead vc 600000 bc 7200 eq 0 2.9856 ms/op 3.1415 ms/op 0.95
forkChoice updateHead vc 600000 bc 64 eq 1000 3.1287 ms/op 3.5395 ms/op 0.88
forkChoice updateHead vc 600000 bc 64 eq 10000 3.3215 ms/op 3.6458 ms/op 0.91
forkChoice updateHead vc 600000 bc 64 eq 300000 7.1610 ms/op 7.7886 ms/op 0.92
computeDeltas 1400000 validators 0% inactive 13.587 ms/op 13.934 ms/op 0.98
computeDeltas 1400000 validators 10% inactive 12.794 ms/op 12.833 ms/op 1.00
computeDeltas 1400000 validators 20% inactive 11.666 ms/op 12.018 ms/op 0.97
computeDeltas 1400000 validators 50% inactive 8.8790 ms/op 9.4268 ms/op 0.94
computeDeltas 2100000 validators 0% inactive 20.449 ms/op 20.660 ms/op 0.99
computeDeltas 2100000 validators 10% inactive 19.169 ms/op 19.370 ms/op 0.99
computeDeltas 2100000 validators 20% inactive 17.561 ms/op 18.186 ms/op 0.97
computeDeltas 2100000 validators 50% inactive 13.367 ms/op 14.160 ms/op 0.94
altair processAttestation - 250000 vs - 7PWei normalcase 2.0311 ms/op 1.7794 ms/op 1.14
altair processAttestation - 250000 vs - 7PWei worstcase 2.9564 ms/op 2.4651 ms/op 1.20
altair processAttestation - setStatus - 1/6 committees join 96.916 us/op 106.54 us/op 0.91
altair processAttestation - setStatus - 1/3 committees join 194.94 us/op 200.57 us/op 0.97
altair processAttestation - setStatus - 1/2 committees join 277.26 us/op 288.79 us/op 0.96
altair processAttestation - setStatus - 2/3 committees join 357.32 us/op 366.82 us/op 0.97
altair processAttestation - setStatus - 4/5 committees join 506.21 us/op 524.17 us/op 0.97
altair processAttestation - setStatus - 100% committees join 572.12 us/op 632.11 us/op 0.91
altair processBlock - 250000 vs - 7PWei normalcase 3.0738 ms/op 3.2399 ms/op 0.95
altair processBlock - 250000 vs - 7PWei normalcase hashState 18.863 ms/op 15.459 ms/op 1.22
altair processBlock - 250000 vs - 7PWei worstcase 20.693 ms/op 20.938 ms/op 0.99
altair processBlock - 250000 vs - 7PWei worstcase hashState 46.823 ms/op 40.015 ms/op 1.17
phase0 processBlock - 250000 vs - 7PWei normalcase 1.5204 ms/op 1.2982 ms/op 1.17
phase0 processBlock - 250000 vs - 7PWei worstcase 16.979 ms/op 17.552 ms/op 0.97
altair processEth1Data - 250000 vs - 7PWei normalcase 285.15 us/op 308.75 us/op 0.92
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:16 3.1360 us/op 6.1570 us/op 0.51
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:220 21.252 us/op 26.399 us/op 0.81
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:43 5.7290 us/op 6.2150 us/op 0.92
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:19 3.4530 us/op 5.3390 us/op 0.65
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1021 92.717 us/op 101.75 us/op 0.91
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11778 1.3986 ms/op 1.4450 ms/op 0.97
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 1.8044 ms/op 1.9058 ms/op 0.95
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 1.7495 ms/op 1.8595 ms/op 0.94
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 3.6394 ms/op 3.7360 ms/op 0.97
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.0917 ms/op 2.1361 ms/op 0.98
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 4.1311 ms/op 4.1163 ms/op 1.00
Tree 40 250000 create 353.18 ms/op 324.77 ms/op 1.09
Tree 40 250000 get(125000) 91.569 ns/op 103.66 ns/op 0.88
Tree 40 250000 set(125000) 998.72 ns/op 1.0938 us/op 0.91
Tree 40 250000 toArray() 18.243 ms/op 9.8617 ms/op 1.85
Tree 40 250000 iterate all - toArray() + loop 17.009 ms/op 10.352 ms/op 1.64
Tree 40 250000 iterate all - get(i) 40.798 ms/op 41.281 ms/op 0.99
Array 250000 create 2.1276 ms/op 2.1001 ms/op 1.01
Array 250000 clone - spread 691.74 us/op 659.53 us/op 1.05
Array 250000 get(125000) 0.30300 ns/op 0.29800 ns/op 1.02
Array 250000 set(125000) 0.30000 ns/op 0.30300 ns/op 0.99
Array 250000 iterate all - loop 58.024 us/op 57.707 us/op 1.01
phase0 afterProcessEpoch - 250000 vs - 7PWei 40.730 ms/op 50.453 ms/op 0.81
Array.fill - length 1000000 2.3892 ms/op 2.2517 ms/op 1.06
Array push - length 1000000 8.7352 ms/op 8.0223 ms/op 1.09
Array.get 0.20964 ns/op 0.20985 ns/op 1.00
Uint8Array.get 0.24609 ns/op 0.25268 ns/op 0.97
phase0 beforeProcessEpoch - 250000 vs - 7PWei 15.793 ms/op 14.090 ms/op 1.12
altair processEpoch - mainnet_e81889 339.64 ms/op 242.95 ms/op 1.40
mainnet_e81889 - altair beforeProcessEpoch 34.791 ms/op 14.664 ms/op 2.37
mainnet_e81889 - altair processJustificationAndFinalization 6.2940 us/op 4.6840 us/op 1.34
mainnet_e81889 - altair processInactivityUpdates 3.6327 ms/op 3.7728 ms/op 0.96
mainnet_e81889 - altair processRewardsAndPenalties 19.076 ms/op 17.676 ms/op 1.08
mainnet_e81889 - altair processRegistryUpdates 533.00 ns/op 514.00 ns/op 1.04
mainnet_e81889 - altair processSlashings 132.00 ns/op 132.00 ns/op 1.00
mainnet_e81889 - altair processEth1DataReset 130.00 ns/op 133.00 ns/op 0.98
mainnet_e81889 - altair processEffectiveBalanceUpdates 3.5762 ms/op 1.3259 ms/op 2.70
mainnet_e81889 - altair processSlashingsReset 698.00 ns/op 688.00 ns/op 1.01
mainnet_e81889 - altair processRandaoMixesReset 1.3120 us/op 1.0100 us/op 1.30
mainnet_e81889 - altair processHistoricalRootsUpdate 132.00 ns/op 131.00 ns/op 1.01
mainnet_e81889 - altair processParticipationFlagUpdates 439.00 ns/op 420.00 ns/op 1.05
mainnet_e81889 - altair processSyncCommitteeUpdates 107.00 ns/op 107.00 ns/op 1.00
mainnet_e81889 - altair afterProcessEpoch 42.726 ms/op 41.668 ms/op 1.03
capella processEpoch - mainnet_e217614 990.34 ms/op 767.72 ms/op 1.29
mainnet_e217614 - capella beforeProcessEpoch 62.255 ms/op 58.611 ms/op 1.06
mainnet_e217614 - capella processJustificationAndFinalization 9.2290 us/op 4.9010 us/op 1.88
mainnet_e217614 - capella processInactivityUpdates 17.702 ms/op 12.833 ms/op 1.38
mainnet_e217614 - capella processRewardsAndPenalties 107.16 ms/op 93.788 ms/op 1.14
mainnet_e217614 - capella processRegistryUpdates 4.4590 us/op 4.5130 us/op 0.99
mainnet_e217614 - capella processSlashings 135.00 ns/op 139.00 ns/op 0.97
mainnet_e217614 - capella processEth1DataReset 127.00 ns/op 125.00 ns/op 1.02
mainnet_e217614 - capella processEffectiveBalanceUpdates 6.6770 ms/op 5.6838 ms/op 1.17
mainnet_e217614 - capella processSlashingsReset 692.00 ns/op 672.00 ns/op 1.03
mainnet_e217614 - capella processRandaoMixesReset 1.4390 us/op 1.0370 us/op 1.39
mainnet_e217614 - capella processHistoricalRootsUpdate 132.00 ns/op 129.00 ns/op 1.02
mainnet_e217614 - capella processParticipationFlagUpdates 475.00 ns/op 421.00 ns/op 1.13
mainnet_e217614 - capella afterProcessEpoch 113.97 ms/op 109.17 ms/op 1.04
phase0 processEpoch - mainnet_e58758 329.51 ms/op 296.79 ms/op 1.11
mainnet_e58758 - phase0 beforeProcessEpoch 65.470 ms/op 56.821 ms/op 1.15
mainnet_e58758 - phase0 processJustificationAndFinalization 5.7820 us/op 5.5530 us/op 1.04
mainnet_e58758 - phase0 processRewardsAndPenalties 15.435 ms/op 15.270 ms/op 1.01
mainnet_e58758 - phase0 processRegistryUpdates 2.3460 us/op 2.2650 us/op 1.04
mainnet_e58758 - phase0 processSlashings 128.00 ns/op 133.00 ns/op 0.96
mainnet_e58758 - phase0 processEth1DataReset 187.00 ns/op 127.00 ns/op 1.47
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 821.23 us/op 1.1411 ms/op 0.72
mainnet_e58758 - phase0 processSlashingsReset 906.00 ns/op 813.00 ns/op 1.11
mainnet_e58758 - phase0 processRandaoMixesReset 1.4640 us/op 1.0840 us/op 1.35
mainnet_e58758 - phase0 processHistoricalRootsUpdate 137.00 ns/op 133.00 ns/op 1.03
mainnet_e58758 - phase0 processParticipationRecordUpdates 1.2210 us/op 990.00 ns/op 1.23
mainnet_e58758 - phase0 afterProcessEpoch 34.997 ms/op 34.728 ms/op 1.01
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.0254 ms/op 1.1306 ms/op 0.91
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 1.5322 ms/op 1.8202 ms/op 0.84
altair processInactivityUpdates - 250000 normalcase 11.026 ms/op 10.637 ms/op 1.04
altair processInactivityUpdates - 250000 worstcase 13.439 ms/op 10.607 ms/op 1.27
phase0 processRegistryUpdates - 250000 normalcase 3.1770 us/op 2.2450 us/op 1.42
phase0 processRegistryUpdates - 250000 badcase_full_deposits 162.20 us/op 151.10 us/op 1.07
phase0 processRegistryUpdates - 250000 worstcase 0.5 77.300 ms/op 57.646 ms/op 1.34
altair processRewardsAndPenalties - 250000 normalcase 17.536 ms/op 13.533 ms/op 1.30
altair processRewardsAndPenalties - 250000 worstcase 16.085 ms/op 12.996 ms/op 1.24
phase0 getAttestationDeltas - 250000 normalcase 5.4969 ms/op 5.2939 ms/op 1.04
phase0 getAttestationDeltas - 250000 worstcase 5.5753 ms/op 5.3651 ms/op 1.04
phase0 processSlashings - 250000 worstcase 71.432 us/op 61.323 us/op 1.16
altair processSyncCommitteeUpdates - 250000 12.475 ms/op 10.202 ms/op 1.22
BeaconState.hashTreeRoot - No change 166.00 ns/op 170.00 ns/op 0.98
BeaconState.hashTreeRoot - 1 full validator 110.48 us/op 57.637 us/op 1.92
BeaconState.hashTreeRoot - 32 full validator 1.3408 ms/op 657.14 us/op 2.04
BeaconState.hashTreeRoot - 512 full validator 8.8487 ms/op 6.1004 ms/op 1.45
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 119.84 us/op 71.276 us/op 1.68
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 2.8797 ms/op 1.0493 ms/op 2.74
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 30.997 ms/op 13.225 ms/op 2.34
BeaconState.hashTreeRoot - 1 balances 106.60 us/op 57.284 us/op 1.86
BeaconState.hashTreeRoot - 32 balances 908.26 us/op 645.01 us/op 1.41
BeaconState.hashTreeRoot - 512 balances 6.3507 ms/op 4.7525 ms/op 1.34
BeaconState.hashTreeRoot - 250000 balances 195.53 ms/op 95.710 ms/op 2.04
aggregationBits - 2048 els - zipIndexesInBitList 19.825 us/op 20.323 us/op 0.98
regular array get 100000 times 22.375 us/op 23.260 us/op 0.96
wrappedArray get 100000 times 22.278 us/op 23.164 us/op 0.96
arrayWithProxy get 100000 times 13.842 ms/op 10.250 ms/op 1.35
ssz.Root.equals 21.020 ns/op 21.744 ns/op 0.97
byteArrayEquals 20.845 ns/op 21.572 ns/op 0.97
Buffer.compare 8.5910 ns/op 9.4400 ns/op 0.91
processSlot - 1 slots 10.531 us/op 7.9950 us/op 1.32
processSlot - 32 slots 2.0227 ms/op 1.5360 ms/op 1.32
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 4.7778 ms/op 3.4434 ms/op 1.39
getCommitteeAssignments - req 1 vs - 250000 vc 1.6089 ms/op 1.7049 ms/op 0.94
getCommitteeAssignments - req 100 vs - 250000 vc 3.2811 ms/op 3.4811 ms/op 0.94
getCommitteeAssignments - req 1000 vs - 250000 vc 3.5530 ms/op 3.7410 ms/op 0.95
findModifiedValidators - 10000 modified validators 823.54 ms/op 824.74 ms/op 1.00
findModifiedValidators - 1000 modified validators 493.42 ms/op 442.16 ms/op 1.12
findModifiedValidators - 100 modified validators 263.54 ms/op 302.18 ms/op 0.87
findModifiedValidators - 10 modified validators 147.13 ms/op 237.04 ms/op 0.62
findModifiedValidators - 1 modified validators 160.54 ms/op 164.00 ms/op 0.98
findModifiedValidators - no difference 186.72 ms/op 153.31 ms/op 1.22
migrate state 1500000 validators, 3400 modified, 2000 new 3.5725 s/op 2.7488 s/op 1.30
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 3.7400 ns/op 3.7000 ns/op 1.01
state getBlockRootAtSlot - 250000 vs - 7PWei 420.37 ns/op 289.35 ns/op 1.45
computeProposerIndex 100000 validators 1.3291 ms/op 1.3365 ms/op 0.99
getNextSyncCommitteeIndices 1000 validators 2.8688 ms/op 2.8963 ms/op 0.99
getNextSyncCommitteeIndices 10000 validators 25.298 ms/op 25.320 ms/op 1.00
getNextSyncCommitteeIndices 100000 validators 85.008 ms/op 85.323 ms/op 1.00
computeProposers - vc 250000 547.21 us/op 551.07 us/op 0.99
computeEpochShuffling - vc 250000 39.547 ms/op 39.208 ms/op 1.01
getNextSyncCommittee - vc 250000 9.4382 ms/op 9.5553 ms/op 0.99
nodejs block root to RootHex using toHex 102.79 ns/op 102.67 ns/op 1.00
nodejs block root to RootHex using toRootHex 64.410 ns/op 65.511 ns/op 0.98
nodejs fromHex(blob) 810.88 us/op 810.22 us/op 1.00
nodejs fromHexInto(blob) 591.17 us/op 670.78 us/op 0.88
nodejs block root to RootHex using the deprecated toHexString 434.32 ns/op 503.24 ns/op 0.86
nodejs byteArrayEquals 32 bytes (block root) 24.488 ns/op 26.293 ns/op 0.93
nodejs byteArrayEquals 48 bytes (pubkey) 35.567 ns/op 37.987 ns/op 0.94
nodejs byteArrayEquals 96 bytes (signature) 31.622 ns/op 41.696 ns/op 0.76
nodejs byteArrayEquals 1024 bytes 39.082 ns/op 44.116 ns/op 0.89
nodejs byteArrayEquals 131072 bytes (blob) 1.6462 us/op 1.8044 us/op 0.91
browser block root to RootHex using toHex 135.21 ns/op 148.91 ns/op 0.91
browser block root to RootHex using toRootHex 124.49 ns/op 133.13 ns/op 0.94
browser fromHex(blob) 1.5413 ms/op 1.6034 ms/op 0.96
browser fromHexInto(blob) 589.61 us/op 666.91 us/op 0.88
browser block root to RootHex using the deprecated toHexString 431.61 ns/op 355.60 ns/op 1.21
browser byteArrayEquals 32 bytes (block root) 26.439 ns/op 28.261 ns/op 0.94
browser byteArrayEquals 48 bytes (pubkey) 37.385 ns/op 40.080 ns/op 0.93
browser byteArrayEquals 96 bytes (signature) 69.660 ns/op 74.997 ns/op 0.93
browser byteArrayEquals 1024 bytes 705.35 ns/op 758.11 ns/op 0.93
browser byteArrayEquals 131072 bytes (blob) 90.137 us/op 95.698 us/op 0.94

by benchmarkbot/action

Comment thread packages/beacon-node/src/chain/produceBlock/produceBlockBody.ts Outdated
@nflaig nflaig marked this pull request as draft April 13, 2026 17:46
@nflaig
Member Author

nflaig commented Apr 13, 2026

converted to draft, let's wait for the spec to be finalized

@nflaig nflaig marked this pull request as ready for review April 13, 2026 21:57
twoeths
twoeths previously approved these changes Apr 14, 2026
@nflaig nflaig marked this pull request as draft April 14, 2026 09:30
@nflaig
Member Author

nflaig commented Apr 14, 2026

@twoeths thanks for reviewing, keeping this as draft; still debating how to define this on the spec side

@nflaig nflaig mentioned this pull request Apr 15, 2026
5 tasks
Contributor

@twoeths twoeths left a comment


Just want to point out that the call to prepareExecutionPayload in prepareNextSlot will not work in PROD because payload is available at the time, so chain.forkChoice.shouldExtendPayload will always be false

@nflaig
Member Author

nflaig commented Apr 17, 2026

payload is available at the time

did you mean not available?

@nflaig nflaig marked this pull request as ready for review April 17, 2026 19:37

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ae611fa3d1


Comment thread packages/beacon-node/src/chain/produceBlock/produceBlockBody.ts Outdated
@lodekeeper
Contributor

I think twoeths' statement is correct as written ("payload is available"), but the conclusion is inverted.

Tracing the prepareNextSlot path: it runs ~2/3 into slot N to prepare for slot N+1, with parentBlockRoot = slotN. At that point:

  • proposerBoostRoot is typically slot N's block root (same as blockRoot)
  • shouldExtendPayload hits condition 3: proposerBoostBlock.parentRoot !== blockRoot → slot N's parent (slot N-1) ≠ slot N → true
  • So shouldExtendPayload returns true in PROD, not false

This means !shouldExtendPayload is always false in the preparePayloadAttributes guard, so prepareNextSlot always takes the normal FULL-parent withdrawal path (fresh getExpectedWithdrawals()). That's correct behavior for the common case.

The gap is: prepareNextSlot can never pre-prepare the EMPTY-parent path (reuse payloadExpectedWithdrawals), but that's an edge case that would be caught at actual block production time anyway. Not a PROD bug — just a limitation of the pre-preparation optimization.
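The trace above can be condensed into a sketch. These are assumed shapes, not the actual Lodestar implementation; the condition numbering follows the comment, and `ProtoBlockLike` is a hypothetical type.

```typescript
// Sketch of the shouldExtendPayload reasoning above, under assumed shapes.
// ProtoBlockLike and the exact condition set are illustrative assumptions.
type ProtoBlockLike = {blockRoot: string; parentRoot: string};

function shouldExtendPayload(
  blockRoot: string,
  proposerBoostRoot: string | null,
  getBlock: (root: string) => ProtoBlockLike | null
): boolean {
  // Conditions 1-2 (assumed): no boosted block, or boosted block unknown.
  if (proposerBoostRoot === null) return true;
  const boostedBlock = getBlock(proposerBoostRoot);
  if (boostedBlock === null) return true;
  // Condition 3: the boosted block does not build on blockRoot.
  return boostedBlock.parentRoot !== blockRoot;
}

// prepareNextSlot case: blockRoot is slot N's root and the boosted block IS
// slot N's block, so its parent (slot N-1) !== blockRoot and this returns true.
```

A usage check of the two cases discussed: with the boosted block being slot N itself, the function returns true (normal FULL-parent path); only when the boosted block builds directly on blockRoot does condition 3 yield false.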

@nflaig nflaig marked this pull request as draft April 17, 2026 20:36
@nflaig nflaig added the spec-gloas Issues targeting the Glamsterdam spec version label Apr 20, 2026
@codecov

codecov Bot commented Apr 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 52.53%. Comparing base (e341cdc) to head (cc02104).
⚠️ Report is 1 commit behind head on unstable.

Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #9209      +/-   ##
============================================
- Coverage     52.53%   52.53%   -0.01%     
============================================
  Files           848      848              
  Lines         61408    61402       -6     
  Branches       4525     4525              
============================================
- Hits          32262    32256       -6     
  Misses        29081    29081              
  Partials         65       65              

@nflaig nflaig marked this pull request as ready for review April 20, 2026 16:42

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 52866ebbff


Comment thread packages/beacon-node/src/chain/forkChoice/index.ts
Comment thread packages/beacon-node/src/node/nodejs.ts
Comment thread packages/state-transition/src/stateView/beaconStateView.ts
@nflaig
Member Author

nflaig commented Apr 20, 2026

@twoeths I am not changing any timings as part of this PR, leaving this up to #9164

  if (isEpochTransition) {
    this.metrics?.precomputeNextEpochTransition.count.inc({result: "success"}, 1);
-   const previousHits = this.chain.regen.updatePreComputedCheckpoint(headRoot, nextEpoch);
+   const previousHits = this.chain.regen.updatePreComputedCheckpoint(updatedHeadRoot, nextEpoch);
Member Author

@nflaig nflaig Apr 20, 2026


This seems like a bug we have on unstable; I don't see a reason why we would pre-compute for the head root, which in certain cases we predict will be re-orged due to a proposer boost reorg

Contributor


Yes, agreed — this looks like a latent bug on unstable, not a gloas-specific regression.

History check: the updatePreComputedCheckpoint(headRoot, nextEpoch) call was introduced in 5fe87f8e44 (#6924 fix: prepareNextEpoch metric) after the proposer-boost-reorg logic had already been added in f6d3bce802 (#6652). At that point an inner updatedHeadRoot already existed inside the if (feeRecipient) block, but the new call used the outer headRoot. That reads like an oversight rather than an intentional choice — there's no load-bearing reason I can find for pre-computing the checkpoint on the root we ourselves predict will be reorged away; the next block is going to build on updatedHeadRoot, so that's also the root whose next-epoch state the rest of the system will actually request.

The fix here is also strictly safe:

  • When no proposer-boost-reorg is predicted, updatedHeadRoot === headRoot (it's initialized to headRoot at the top of the loop and only overwritten on a predicted reorg), so behavior is unchanged.
  • When a proposer-boost-reorg is predicted and happens, we now precompute the right checkpoint instead of wasting it.
  • When a proposer-boost-reorg is predicted but doesn't happen, we lose the precompute — but we would have lost it anyway under the old code in any case where the real head ends up somewhere other than headRoot, so net no worse.

Happy either way on scope: keep it bundled here (it's a 2-line change and thematically adjacent to the parent-payload-status work) or split into a tiny standalone fix against unstable if you'd rather have it land independently of the gloas changes. Marginal preference for splitting, but no strong view.
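The three safety cases above hinge on one invariant: updatedHeadRoot is initialized to headRoot and only rebound when a proposer-boost reorg is predicted. A minimal TypeScript sketch of that pattern (hypothetical names, not the actual prepareNextSlot code):

```typescript
// Simplified model of the prepareNextSlot pattern discussed above.
type RootHex = string;

function resolveUpdatedHeadRoot(
  headRoot: RootHex,
  predictedReorgParentRoot: RootHex | null // null = no reorg predicted
): RootHex {
  let updatedHeadRoot = headRoot; // initialized to headRoot at the top of the loop
  if (predictedReorgParentRoot !== null) {
    // only overwritten when a proposer-boost reorg is predicted
    updatedHeadRoot = predictedReorgParentRoot;
  }
  return updatedHeadRoot;
}

// No predicted reorg: both roots are identical, so the fix is a no-op
console.log(resolveUpdatedHeadRoot("0xaa", null)); // "0xaa"
// Predicted reorg: the parent of the weak head is used instead
console.log(resolveUpdatedHeadRoot("0xaa", "0xbb")); // "0xbb"
```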

Contributor

The proposer boost reorg will not happen at an epoch boundary,
so headRoot == updatedHeadRoot in that case.
Leaving updatedHeadRoot is misleading.
I prefer keeping headRoot with this comment for later reference.

Contributor

You're right, and my earlier reply was wrong on this — I missed the epoch-boundary suppression inside predictProposerHead.

Traced it: predictProposerHead → shouldOverrideForkChoiceUpdate → getPreliminaryProposerHead, which at forkChoice.ts:1852-1857 short-circuits with NotReorgedReason.NotShufflingStable whenever slot % SLOTS_PER_EPOCH === 0:

// No reorg if we are at epoch boundary where proposer shuffling could change
const isShufflingStable = slot % SLOTS_PER_EPOCH !== 0;
if (!isShufflingStable) {
  return {prelimProposerHead, prelimNotReorgedReason: NotReorgedReason.NotShufflingStable};
}

So predictProposerHead() returns the original headBlock at epoch boundaries → updatedHeadRoot === headRoot at the point updatePreComputedCheckpoint runs (since that branch is gated on isEpochTransition). The change in ddf56e2d76 is functionally a no-op, and using updatedHeadRoot there reads as if the two could differ when they can't.

+1 on reverting that one line back to headRoot and keeping @nflaig's "this seems like a bug we have on unstable…" note as a doc-only comment for future reference. Sorry for the noisy earlier confirmation.
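The epoch-boundary suppression can be modeled in a few lines (a hypothetical simplified function, not the real predictProposerHead), showing why the two roots cannot differ at the point the precompute runs:

```typescript
// Simplified model of the shuffling-stable short-circuit discussed above.
const SLOTS_PER_EPOCH = 32;

function predictHead(slot: number, headRoot: string, weakHeadParentRoot: string): string {
  // No reorg attempt at epoch boundaries: proposer shuffling could change
  const isShufflingStable = slot % SLOTS_PER_EPOCH !== 0;
  if (!isShufflingStable) {
    return headRoot; // epoch boundary: original head retained
  }
  return weakHeadParentRoot; // mid-epoch: predicted head can differ
}

// slot 64 is an epoch boundary (64 % 32 === 0) -> original head retained
console.log(predictHead(64, "0xhead", "0xparent")); // "0xhead"
// slot 65 is mid-epoch -> the predicted head can differ
console.log(predictHead(65, "0xhead", "0xparent")); // "0xparent"
```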

Comment thread packages/beacon-node/src/chain/prepareNextSlot.ts Outdated
prepareState: updatedPrepareState,
prepareSlot,
parentBlockRoot: fromHex(headRoot),
parentBlockRoot: fromHex(updatedHeadRoot),
Member Author

I believe this was wrong previously, @twoeths. Can you double check before we merge, just to be sure?

Contributor

Traced this too — I do think this one is a real latent bug on unstable, not a cosmetic tweak, and it's a different story from the updatePreComputedCheckpoint line above:

On unstable the SSE block currently does:

const data = getPayloadAttributesForSSE(fork as ForkPostBellatrix, this.chain, {
  prepareState: updatedPrepareState,              // may be the reorged-parent state
  prepareSlot,
  parentBlockRoot: fromHex(headRoot),             // original head
  ...
});

When a proposer-boost-reorg is predicted and we take the branch that re-regens updatedPrepareState from the parent of the weak head, the state being emitted is built on updatedHeadRoot, but the event reports parentBlockRoot = headRoot. So the SSE {prepareState, parentBlockRoot} pair is internally inconsistent for a proposer-boost-reorg prediction.

For symmetry, the same-block prepareExecutionPayload(...) call in the feeRecipient branch already uses fromHex(updatedHeadRoot) as its parentBeaconBlockRoot, so that path is self-consistent — only the SSE emit diverged.

And unlike the updatePreComputedCheckpoint path, this block is not gated on isEpochTransition, so predictProposerHead can genuinely return a different root here (the shuffling-stable short-circuit at forkChoice.ts:1852 only kicks in at epoch boundary). So updatedHeadRoot !== headRoot is a reachable case here.

Change to parentBlockRoot: fromHex(updatedHeadRoot) aligns the SSE payload with both prepareState and the prepareExecutionPayload parent root, and is strictly safe when no reorg is predicted because updatedHeadRoot still initializes to headRoot. Reads correct to me pending @twoeths's double-check.
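The invariant being restored here can be sketched as a consistency check (hypothetical event shape and field names, not Lodestar's actual SSE types): the root the prepareState was regened on and the parentBlockRoot reported in the event must match.

```typescript
// Hypothetical model of the SSE pairing discussed above.
interface PayloadAttributesEvent {
  stateParentRoot: string; // root the prepareState was built on
  parentBlockRoot: string; // root reported in the SSE event
}

function isConsistent(ev: PayloadAttributesEvent): boolean {
  return ev.stateParentRoot === ev.parentBlockRoot;
}

// Old behavior on a predicted reorg: state built on updatedHeadRoot,
// but the event reports the weak head -> internally inconsistent
console.log(isConsistent({stateParentRoot: "0xupdated", parentBlockRoot: "0xhead"})); // false
// Fixed behavior: both sides use updatedHeadRoot
console.log(isConsistent({stateParentRoot: "0xupdated", parentBlockRoot: "0xupdated"})); // true
```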

Contributor

I'm not so sure about this one.
Why do we have to emit routes.events.EventType.payloadAttributes for every slot? The value we emit may not be the correct one.
I think we should only do it if we're the next proposer, i.e. do it inside if (feeRecipient) {,
and use the value of the head root we're supposed to propose.

Member Author

why do we have to emit routes.events.EventType.payloadAttributes for every slot?

Yes, for builders.

Contributor

+1 to @nflaig on keeping the per-slot emit. The payload_attributes topic on /eth/v1/events is specified as fired on every head-advance / slot transition precisely so external block builders (MEV relays, local builder clients) can observe attributes they'll be asked to build against, regardless of whether this node is the next proposer. Restricting to if (feeRecipient) would make the event fire only on the subset of slots we happen to know we're proposing, which wouldn't serve the relay use case.

On the value we emit in the non-proposer case: updatedHeadRoot === headRoot there (we only rebind updatedHeadRoot inside the if (feeRecipient) / proposer-boost-reorg branch), and updatedPrepareState === prepareState, so the emit degenerates to "here's the current head's payload attributes" — which is exactly what builders want in that case.

The parentBlockRoot → updatedHeadRoot change here is orthogonal to the larger emit-gating question: it only aligns parentBlockRoot with the prepareState the SSE already emits (and with the parentBeaconBlockRoot the sibling prepareExecutionPayload(..., fromHex(updatedHeadRoot), ...) call uses). Without this change, when we are the proposer and a proposer-boost-reorg is predicted, the SSE event pairs prepareState: updatedPrepareState (regened from the reorged-to parent) with parentBlockRoot: headRoot (the weak head), which is an internal inconsistency a relay would struggle to make sense of.

So my read is the same as before: keep this PR's one-line change on the SSE emit, leave the restructure-or-keep-emitting-every-slot decision out of scope.

Comment thread packages/beacon-node/src/chain/prepareNextSlot.ts Outdated
@nflaig nflaig merged commit 9fa9f08 into unstable Apr 21, 2026
19 checks passed
@nflaig nflaig deleted the nflaig/forkchoice-parent-payload-status branch April 21, 2026 10:12
lodekeeper added a commit to lodekeeper/lodestar that referenced this pull request Apr 21, 2026
Per bot review feedback on PR: removing the `onExecutionPayload(anchor, ...)`
call alongside the PTC vote override broke block production after restart from
a post-Gloas anchor. `shouldExtendPayload(anchorRoot)` (used in ChainSafe#9209) short-
circuits to false via `!hasPayload()`, so `produceBlockBody`/`prepareNextSlot`
pick `latestExecutionPayloadBid.parentBlockHash` instead of `.blockHash`,
producing bids that fail `processExecutionPayloadBid`'s parent-hash check.

Upstream consensus-specs `c7a0a8527` removes only the PTC vote seeding
(`payload_timeliness_vote={}`, `payload_data_availability_vote={}`) — it does
not remove the FULL-payload semantics of the anchor. The anchor state's
`latestBlockHash` is the executed payload hash, so `onExecutionPayload(anchor)`
still matches the spec's intent of `payload_states = {anchor_root: ...}`.

Keep the PTC vote override removal (the actual subject of this PR); restore
the FULL-variant seeding with a focused comment.

🤖 Generated with AI assistance
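The failure mode this commit message describes can be sketched as follows (hypothetical simplified names; the real shouldExtendPayload and payload-state tracking live in Lodestar's fork choice): when the anchor is never seeded as FULL, hasPayload() is false, the extend check short-circuits, and the produced bid uses the pre-anchor parent hash.

```typescript
// Hypothetical model of the anchor-seeding failure discussed above.
const payloadStates = new Set<string>(); // roots known to have a FULL payload

function hasPayload(root: string): boolean {
  return payloadStates.has(root);
}

function pickParentBlockHash(
  parentRoot: string,
  bid: {blockHash: string; parentBlockHash: string}
): string {
  // shouldExtendPayload-style check: extend only if the parent payload is known FULL
  return hasPayload(parentRoot) ? bid.blockHash : bid.parentBlockHash;
}

const anchor = "0xanchor";
const bid = {blockHash: "0xexecuted", parentBlockHash: "0xpre"};

// Without seeding: wrong parent hash, fails the downstream parent-hash check
console.log(pickParentBlockHash(anchor, bid)); // "0xpre"

// After onExecutionPayload(anchor, ...) seeds the anchor as FULL:
payloadStates.add(anchor);
console.log(pickParentBlockHash(anchor, bid)); // "0xexecuted"
```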

Labels

spec-gloas Issues targeting the Glamsterdam spec version

3 participants