
Implement support for validator next-epoch proposer duties #3782

Merged 29 commits into master on May 16, 2022

Conversation

dadepo
Contributor

@dadepo dadepo commented Feb 22, 2022

Motivation

Description

Provides the ability to request block proposers one epoch in the future. See the corresponding issue #3746 for more background information

Closes #3746

Steps to test or reproduce

If the current epoch is N:

Make a request to get the proposer duties for epoch N+1:

curl -X GET "http://beacon-node/eth/v1/validator/duties/proposer/N+1" -H "accept: application/json"

Take note of the returned validator indices.

Using a chain explorer like https://prater.beaconcha.in/, wait until epoch N+1 becomes the current epoch. Then confirm that the validator indices now listed for epoch N+1 match those returned by the earlier curl request to the beacon endpoint.
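The steps above can be sketched as a small script. `BEACON_NODE` and `SLOT` are placeholders you would fill in from your own node (the head slot is available from `/eth/v1/beacon/headers/head`); 32 slots per epoch is the mainnet/Prater value.

```shell
# Placeholders: point BEACON_NODE at your node, take SLOT from the head header.
BEACON_NODE="http://localhost:9596"
SLOT=123456

# Current epoch N and the next epoch N+1 (32 slots per epoch).
EPOCH=$((SLOT / 32))
NEXT_EPOCH=$((EPOCH + 1))

# URL to request next-epoch proposer duties from.
echo "${BEACON_NODE}/eth/v1/validator/duties/proposer/${NEXT_EPOCH}"
```

Compare the `validator_index` values in the response against the explorer once epoch N+1 becomes current.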

@codecov

codecov bot commented Feb 22, 2022

Codecov Report

Merging #3782 (9a51aa4) into master (c754818) will increase coverage by 0.22%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master    #3782      +/-   ##
==========================================
+ Coverage   36.26%   36.49%   +0.22%     
==========================================
  Files         324      326       +2     
  Lines        9099     9886     +787     
  Branches     1465     1693     +228     
==========================================
+ Hits         3300     3608     +308     
- Misses       5626     6072     +446     
- Partials      173      206      +33     

@github-actions
Contributor

github-actions bot commented Feb 22, 2022

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 2277882 Previous: 7ddd9f5 Ratio
BeaconState.hashTreeRoot - No change 512.00 ns/op 392.00 ns/op 1.31
BeaconState.hashTreeRoot - 1 full validator 69.615 us/op 50.514 us/op 1.38
BeaconState.hashTreeRoot - 32 full validator 684.17 us/op 479.31 us/op 1.43
BeaconState.hashTreeRoot - 512 full validator 7.3546 ms/op 5.4464 ms/op 1.35
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 89.908 us/op 61.627 us/op 1.46
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.2287 ms/op 868.08 us/op 1.42
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 16.087 ms/op 12.372 ms/op 1.30
BeaconState.hashTreeRoot - 1 balances 67.044 us/op 47.948 us/op 1.40
BeaconState.hashTreeRoot - 32 balances 632.74 us/op 409.24 us/op 1.55
BeaconState.hashTreeRoot - 512 balances 5.8139 ms/op 4.1889 ms/op 1.39
BeaconState.hashTreeRoot - 250000 balances 124.11 ms/op 97.027 ms/op 1.28
processSlot - 1 slots 14.550 us/op 8.6990 us/op 1.67
processSlot - 32 slots 2.1927 ms/op 1.4801 ms/op 1.48
getCommitteeAssignments - req 1 vs - 250000 vc 5.3411 ms/op 4.6673 ms/op 1.14
getCommitteeAssignments - req 100 vs - 250000 vc 7.6421 ms/op 6.4828 ms/op 1.18
getCommitteeAssignments - req 1000 vs - 250000 vc 8.1341 ms/op 6.9473 ms/op 1.17
computeProposers - vc 250000 19.542 ms/op 15.510 ms/op 1.26
computeEpochShuffling - vc 250000 159.92 ms/op 144.48 ms/op 1.11
getNextSyncCommittee - vc 250000 315.73 ms/op 256.40 ms/op 1.23
altair processAttestation - 250000 vs - 7PWei normalcase 4.4269 ms/op 3.7136 ms/op 1.19
altair processAttestation - 250000 vs - 7PWei worstcase 6.2661 ms/op 5.1948 ms/op 1.21
altair processAttestation - setStatus - 1/6 committees join 232.29 us/op 177.39 us/op 1.31
altair processAttestation - setStatus - 1/3 committees join 411.44 us/op 342.51 us/op 1.20
altair processAttestation - setStatus - 1/2 committees join 554.54 us/op 470.16 us/op 1.18
altair processAttestation - setStatus - 2/3 committees join 722.96 us/op 604.41 us/op 1.20
altair processAttestation - setStatus - 4/5 committees join 1.0197 ms/op 835.79 us/op 1.22
altair processAttestation - setStatus - 100% committees join 1.2266 ms/op 990.52 us/op 1.24
altair processBlock - 250000 vs - 7PWei normalcase 29.300 ms/op 28.708 ms/op 1.02
altair processBlock - 250000 vs - 7PWei normalcase hashState 40.973 ms/op 36.481 ms/op 1.12
altair processBlock - 250000 vs - 7PWei worstcase 89.131 ms/op 76.735 ms/op 1.16
altair processBlock - 250000 vs - 7PWei worstcase hashState 124.66 ms/op 93.155 ms/op 1.34
altair processEth1Data - 250000 vs - 7PWei normalcase 1.1061 ms/op 913.62 us/op 1.21
altair processEpoch - mainnet_e81889 601.58 ms/op 551.05 ms/op 1.09
mainnet_e81889 - altair beforeProcessEpoch 154.66 ms/op 145.97 ms/op 1.06
mainnet_e81889 - altair processJustificationAndFinalization 76.896 us/op 25.791 us/op 2.98
mainnet_e81889 - altair processInactivityUpdates 11.262 ms/op 10.160 ms/op 1.11
mainnet_e81889 - altair processRewardsAndPenalties 142.46 ms/op 137.51 ms/op 1.04
mainnet_e81889 - altair processRegistryUpdates 18.820 us/op 3.6620 us/op 5.14
mainnet_e81889 - altair processSlashings 6.3770 us/op 965.00 ns/op 6.61
mainnet_e81889 - altair processEth1DataReset 8.3630 us/op 1.6300 us/op 5.13
mainnet_e81889 - altair processEffectiveBalanceUpdates 7.4884 ms/op 6.7390 ms/op 1.11
mainnet_e81889 - altair processSlashingsReset 26.316 us/op 6.2970 us/op 4.18
mainnet_e81889 - altair processRandaoMixesReset 29.759 us/op 8.0190 us/op 3.71
mainnet_e81889 - altair processHistoricalRootsUpdate 8.7650 us/op 1.3120 us/op 6.68
mainnet_e81889 - altair processParticipationFlagUpdates 17.499 us/op 3.4320 us/op 5.10
mainnet_e81889 - altair processSyncCommitteeUpdates 6.5010 us/op 1.2600 us/op 5.16
mainnet_e81889 - altair afterProcessEpoch 182.44 ms/op 165.40 ms/op 1.10
altair processInactivityUpdates - 250000 normalcase 39.384 ms/op 35.032 ms/op 1.12
altair processInactivityUpdates - 250000 worstcase 41.075 ms/op 29.061 ms/op 1.41
altair processRewardsAndPenalties - 250000 normalcase 130.78 ms/op 83.025 ms/op 1.58
altair processRewardsAndPenalties - 250000 worstcase 88.782 ms/op 114.11 ms/op 0.78
altair processSyncCommitteeUpdates - 250000 331.97 ms/op 263.40 ms/op 1.26
Tree 40 250000 create 955.73 ms/op 751.79 ms/op 1.27
Tree 40 250000 get(125000) 319.97 ns/op 255.78 ns/op 1.25
Tree 40 250000 set(125000) 3.4173 us/op 2.1836 us/op 1.57
Tree 40 250000 toArray() 35.296 ms/op 29.410 ms/op 1.20
Tree 40 250000 iterate all - toArray() + loop 35.139 ms/op 29.751 ms/op 1.18
Tree 40 250000 iterate all - get(i) 144.01 ms/op 99.894 ms/op 1.44
MutableVector 250000 create 14.596 ms/op 14.614 ms/op 1.00
MutableVector 250000 get(125000) 13.347 ns/op 11.595 ns/op 1.15
MutableVector 250000 set(125000) 724.82 ns/op 618.16 ns/op 1.17
MutableVector 250000 toArray() 5.9604 ms/op 6.4828 ms/op 0.92
MutableVector 250000 iterate all - toArray() + loop 6.3668 ms/op 6.8520 ms/op 0.93
MutableVector 250000 iterate all - get(i) 3.5800 ms/op 3.0423 ms/op 1.18
Array 250000 create 5.4784 ms/op 6.4719 ms/op 0.85
Array 250000 clone - spread 2.7080 ms/op 3.4849 ms/op 0.78
Array 250000 get(125000) 1.1930 ns/op 1.4480 ns/op 0.82
Array 250000 set(125000) 1.2590 ns/op 1.4690 ns/op 0.86
Array 250000 iterate all - loop 135.16 us/op 148.26 us/op 0.91
effectiveBalanceIncrements clone Uint8Array 300000 98.721 us/op 231.62 us/op 0.43
effectiveBalanceIncrements clone MutableVector 300000 837.00 ns/op 607.00 ns/op 1.38
effectiveBalanceIncrements rw all Uint8Array 300000 179.64 us/op 267.52 us/op 0.67
effectiveBalanceIncrements rw all MutableVector 300000 225.03 ms/op 154.05 ms/op 1.46
aggregationBits - 2048 els - zipIndexesInBitList 30.460 us/op 23.744 us/op 1.28
regular array get 100000 times 52.662 us/op 59.510 us/op 0.88
wrappedArray get 100000 times 54.332 us/op 59.720 us/op 0.91
arrayWithProxy get 100000 times 33.147 ms/op 30.885 ms/op 1.07
ssz.Root.equals 487.00 ns/op 425.00 ns/op 1.15
byteArrayEquals 445.00 ns/op 404.00 ns/op 1.10
phase0 processBlock - 250000 vs - 7PWei normalcase 4.2477 ms/op 3.2026 ms/op 1.33
phase0 processBlock - 250000 vs - 7PWei worstcase 63.916 ms/op 41.952 ms/op 1.52
phase0 afterProcessEpoch - 250000 vs - 7PWei 173.00 ms/op 156.99 ms/op 1.10
phase0 beforeProcessEpoch - 250000 vs - 7PWei 96.346 ms/op 105.20 ms/op 0.92
phase0 processEpoch - mainnet_e58758 580.90 ms/op 472.22 ms/op 1.23
mainnet_e58758 - phase0 beforeProcessEpoch 283.23 ms/op 222.59 ms/op 1.27
mainnet_e58758 - phase0 processJustificationAndFinalization 71.046 us/op 25.382 us/op 2.80
mainnet_e58758 - phase0 processRewardsAndPenalties 131.16 ms/op 71.276 ms/op 1.84
mainnet_e58758 - phase0 processRegistryUpdates 40.051 us/op 12.899 us/op 3.10
mainnet_e58758 - phase0 processSlashings 8.7990 us/op 1.4060 us/op 6.26
mainnet_e58758 - phase0 processEth1DataReset 7.3080 us/op 1.7380 us/op 4.20
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 5.8406 ms/op 4.8259 ms/op 1.21
mainnet_e58758 - phase0 processSlashingsReset 24.912 us/op 7.1220 us/op 3.50
mainnet_e58758 - phase0 processRandaoMixesReset 29.971 us/op 7.8460 us/op 3.82
mainnet_e58758 - phase0 processHistoricalRootsUpdate 9.3900 us/op 1.5980 us/op 5.88
mainnet_e58758 - phase0 processParticipationRecordUpdates 26.403 us/op 6.9420 us/op 3.80
mainnet_e58758 - phase0 afterProcessEpoch 147.54 ms/op 139.88 ms/op 1.05
phase0 processEffectiveBalanceUpdates - 250000 normalcase 6.6336 ms/op 5.6028 ms/op 1.18
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 7.2933 ms/op 6.0906 ms/op 1.20
phase0 processRegistryUpdates - 250000 normalcase 35.512 us/op 9.9810 us/op 3.56
phase0 processRegistryUpdates - 250000 badcase_full_deposits 505.20 us/op 369.46 us/op 1.37
phase0 processRegistryUpdates - 250000 worstcase 0.5 241.45 ms/op 229.32 ms/op 1.05
phase0 getAttestationDeltas - 250000 normalcase 16.901 ms/op 14.806 ms/op 1.14
phase0 getAttestationDeltas - 250000 worstcase 17.930 ms/op 14.926 ms/op 1.20
phase0 processSlashings - 250000 worstcase 7.0877 ms/op 4.9005 ms/op 1.45
shuffle list - 16384 els 10.985 ms/op 9.4326 ms/op 1.16
shuffle list - 250000 els 151.34 ms/op 136.71 ms/op 1.11
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 1.0394 ms/op 858.26 us/op 1.21
pass gossip attestations to forkchoice per slot 3.9292 ms/op 3.1819 ms/op 1.23
computeDeltas 4.2263 ms/op 3.1013 ms/op 1.36
computeProposerBoostScoreFromBalances 450.86 us/op 445.95 us/op 1.01
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 2.3551 ms/op 2.2607 ms/op 1.04
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 85.490 us/op 67.894 us/op 1.26
BLS verify - blst-native 2.5482 ms/op 1.6391 ms/op 1.55
BLS verifyMultipleSignatures 3 - blst-native 5.1839 ms/op 3.3763 ms/op 1.54
BLS verifyMultipleSignatures 8 - blst-native 11.448 ms/op 7.2232 ms/op 1.58
BLS verifyMultipleSignatures 32 - blst-native 40.187 ms/op 26.179 ms/op 1.54
BLS aggregatePubkeys 32 - blst-native 55.548 us/op 35.242 us/op 1.58
BLS aggregatePubkeys 128 - blst-native 220.30 us/op 135.63 us/op 1.62
getAttestationsForBlock 57.708 ms/op 58.228 ms/op 0.99
CheckpointStateCache - add get delete 12.434 us/op 9.8050 us/op 1.27
validate gossip signedAggregateAndProof - struct 5.8031 ms/op 3.7654 ms/op 1.54
validate gossip attestation - struct 2.7855 ms/op 1.8090 ms/op 1.54
altair verifyImport mainnet_s3766816:31 8.1413 s/op 5.5597 s/op 1.46
pickEth1Vote - no votes 2.5602 ms/op 2.1400 ms/op 1.20
pickEth1Vote - max votes 27.968 ms/op 26.402 ms/op 1.06
pickEth1Vote - Eth1Data hashTreeRoot value x2048 14.228 ms/op 11.320 ms/op 1.26
pickEth1Vote - Eth1Data hashTreeRoot tree x2048 26.532 ms/op 23.025 ms/op 1.15
pickEth1Vote - Eth1Data fastSerialize value x2048 1.7941 ms/op 1.5377 ms/op 1.17
pickEth1Vote - Eth1Data fastSerialize tree x2048 18.623 ms/op 18.166 ms/op 1.03
bytes32 toHexString 1.2100 us/op 998.00 ns/op 1.21
bytes32 Buffer.toString(hex) 734.00 ns/op 677.00 ns/op 1.08
bytes32 Buffer.toString(hex) from Uint8Array 1.2680 us/op 898.00 ns/op 1.41
bytes32 Buffer.toString(hex) + 0x 717.00 ns/op 703.00 ns/op 1.02
Object access 1 prop 0.35300 ns/op 0.34300 ns/op 1.03
Map access 1 prop 0.32300 ns/op 0.26600 ns/op 1.21
Object get x1000 16.103 ns/op 15.596 ns/op 1.03
Map get x1000 0.95900 ns/op 0.89200 ns/op 1.08
Object set x1000 99.787 ns/op 113.96 ns/op 0.88
Map set x1000 69.532 ns/op 69.172 ns/op 1.01
Return object 10000 times 0.39460 ns/op 0.33210 ns/op 1.19
Throw Error 10000 times 6.8497 us/op 5.2093 us/op 1.31
enrSubnets - fastDeserialize 64 bits 2.9000 us/op 2.7690 us/op 1.05
enrSubnets - ssz BitVector 64 bits 850.00 ns/op 767.00 ns/op 1.11
enrSubnets - fastDeserialize 4 bits 427.00 ns/op 412.00 ns/op 1.04
enrSubnets - ssz BitVector 4 bits 828.00 ns/op 757.00 ns/op 1.09
prioritizePeers score -10:0 att 32-0.1 sync 2-0 100.59 us/op 89.670 us/op 1.12
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 141.84 us/op 109.67 us/op 1.29
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 247.39 us/op 202.92 us/op 1.22
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 548.93 us/op 430.02 us/op 1.28
prioritizePeers score 0:0 att 64-1 sync 4-1 596.30 us/op 405.03 us/op 1.47
RateTracker 1000000 limit, 1 obj count per request 194.23 ns/op 184.99 ns/op 1.05
RateTracker 1000000 limit, 2 obj count per request 137.56 ns/op 140.06 ns/op 0.98
RateTracker 1000000 limit, 4 obj count per request 113.39 ns/op 108.77 ns/op 1.04
RateTracker 1000000 limit, 8 obj count per request 103.19 ns/op 103.35 ns/op 1.00
RateTracker with prune 4.3320 us/op 4.5090 us/op 0.96
array of 16000 items push then shift 5.0572 us/op 2.7867 us/op 1.81
LinkedList of 16000 items push then shift 25.912 ns/op 23.590 ns/op 1.10
array of 16000 items push then pop 226.75 ns/op 207.32 ns/op 1.09
LinkedList of 16000 items push then pop 22.086 ns/op 18.918 ns/op 1.17
array of 24000 items push then shift 7.7808 us/op 3.9963 us/op 1.95
LinkedList of 24000 items push then shift 26.463 ns/op 24.269 ns/op 1.09
array of 24000 items push then pop 210.30 ns/op 177.98 ns/op 1.18
LinkedList of 24000 items push then pop 21.918 ns/op 19.005 ns/op 1.15

by benchmarkbot/action

@@ -58,6 +59,7 @@ const SYNC_TOLERANCE_EPOCHS = 1;
*/
export function getValidatorApi({chain, config, logger, metrics, network, sync}: ApiModules): routes.validator.Api {
let genesisBlockRoot: Root | null = null;
const nextEpochProposerDutyCache = new Map<Epoch, ValidatorIndex[]>();
Contributor

Keeping caches here is not a good separation of concerns. We must keep all caches attached to the BeaconChain class; these functions should be stateless.

twoeths
twoeths previously approved these changes Mar 1, 2022
Contributor

@dapplion dapplion left a comment

I don't think this is the best architecture. The futureProposers result is dependent on that specific state. If you just cache by epoch you are committing to a "head" that may change later. Instead, this future proposers result should be cached in the epoch context and computed lazily on demand. @tuyennhv What do you think?

@wemeetagain
Member

The futureProposers result is dependent on that specific state. If you just cache by epoch you are committing to a "head" that may change later. Instead, this future proposers result should be cached in the epoch context and computed lazily on demand.

I agree. We shouldn't attach the cache to the chain directly, because a reorg can make the proposer cache invalid, and that's currently not handled here. Caching at the level of the beacon state (in our epoch-long cache object, the "epoch context") would be the way to go 👍

@twoeths
Contributor

twoeths commented Mar 7, 2022

Since the result of the next-epoch proposer duties computation is not final at all, and it's only consumed by Rocket Pool, I initially didn't want to cache it in beacon-state-transition, because it may be confusing to cache both the current and next epoch's proposers.

@dadepo if the direction is to cache in beacon-state-transition, I guess we just need to name it carefully and add a comment that it's not the final result of the next epoch's proposers; at each epoch transition we still need to compute the proposers for the current epoch again.

wemeetagain
wemeetagain previously approved these changes Mar 7, 2022
@@ -400,6 +417,7 @@ export class EpochContext {
const nextEpoch = currEpoch + 1;
this.nextShuffling = computeEpochShuffling(state, epochProcess.nextEpochShufflingActiveValidatorIndices, nextEpoch);
this.proposers = computeProposers(state, this.currentShuffling, this.effectiveBalanceIncrements);
this.nextEpochProposers = computeProposers(state, this.nextShuffling, this.effectiveBalanceIncrements);
Contributor

I think we should only calculate this on demand (when the API is called) and cache it, instead of on every epoch transition.

Contributor Author

I actually thought about that too, but decided to go with this implementation due to its simplicity: there is no need for code that computes the value only after the first call, nor code that purges stale values when a new epoch is reached. It also does not introduce any performance regression, which is why I went with it.

Contributor

I actually thought about that also, but decided to go with this implementation due to its simplicity

It should not be complicated at all to implement. Just type the field as T | null, and compute it when it is requested while still null.

It also does not introduce any performance regression, that is why I went with it.

Our performance CI only rejects ~3x function run-time changes. computeProposers() costs ~100ms on top of a state transition that takes ~1000ms, so roughly a 10% increase.

Contributor Author

Should not be complicated at all to implement. Just type as T | null and compute if requested and null.

The complexity is not about the typing though. It's more about passing CachedBeaconStateAllForks from the HTTP requests to computeProposers, something that is already handled in the transition phase. There is also the need to add a prune call to remove past caches, something that is not needed with this approach, since the current and next proposers are always refreshed on state transition.

Our performance CI only rejects x3 fn run time changes. computeProposers() has a cost of ~100ms over the state transition which takes 1000ms, so a ballpark 10% increase aprox.

Okay. Avoiding a ~10% performance hit is not a bad thing. I'll update to have it computed on demand.

Contributor Author

@dadepo dadepo Mar 14, 2022

Together with @tuyennhv we looked at how to get the state needed by computeProposers, which makes it possible to implement the compute-on-demand suggestion, as can be seen in this commit here.

A couple of observations where I think this approach makes the code less straightforward:

  • getNextEpochBeaconProposer has to be moved to BeaconStateContext in order to have access to the allForks.BeaconState needed for computeProposers (while the other similar method getBeaconProposer remains on EpochContext)
  • A getState method has to be added to BeaconStateContext to get the value of allForks.BeaconState
  • Even though getNextEpochBeaconProposer is on BeaconStateContext, it still has to reach into EpochContext to update the cache
  • The getBeaconProposer method now needs to know about the previous proposer cache, since the previous nextEpochProposers must be cleared from the cache when the current proposers are requested

I guess at the end of the day it boils down to whether this is okay and better than incurring the ~10% decrease in performance, which I am fine with not incurring.

Also open to other suggestions that could simplify the compute-on-demand approach.

@dapplion
Contributor

@g11tech the "Clone nethermind interop branch" step of the sim merge tests failed, can you take a look?

@g11tech
Contributor

g11tech commented Mar 14, 2022

@g11tech the "Clone nethermind interop branch" step of the sim merge tests failed, can you take a look?

@dadepo can you rebase your branch and push again?

@@ -389,7 +418,10 @@ export class EpochContext {
const nextEpoch = currEpoch + 1;

this.nextShuffling = computeEpochShuffling(state, epochProcess.nextEpochShufflingActiveValidatorIndices, nextEpoch);
this.proposers = computeProposers(state, this.currentShuffling, this.effectiveBalanceIncrements);
this.currentProposerSeed = getSeed(state, this.currentShuffling.epoch, DOMAIN_BEACON_PROPOSER);
this.nextProposerSeed = getSeed(state, this.nextShuffling.epoch, DOMAIN_BEACON_PROPOSER);
Contributor

we only need to call getSeed() once, for this.nextProposerSeed; on the epoch transition this.currentProposerSeed = this.nextProposerSeed
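The suggestion above can be sketched as a small rotation helper. This is illustrative only: rotateProposerSeeds and the getSeed callback are hypothetical stand-ins for the real Lodestar epoch-transition code and seed derivation.

```typescript
interface SeedCache {
  currentProposerSeed: Uint8Array;
  nextProposerSeed: Uint8Array;
}

// On each epoch transition, yesterday's "next" seed becomes today's
// "current" seed, so getSeed() only needs to run once per transition.
function rotateProposerSeeds(
  ctx: SeedCache,
  getSeed: (epoch: number) => Uint8Array,
  nextEpoch: number
): void {
  // Reuse the seed already computed during the previous transition.
  ctx.currentProposerSeed = ctx.nextProposerSeed;
  // The single getSeed() call for this transition.
  ctx.nextProposerSeed = getSeed(nextEpoch);
}
```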

Contributor

The JSDoc for nextProposerSeed must be placed at the variable declaration (here); in this position it has no effect. You can duplicate the comment here too if appropriate, but use regular // comment syntax. Also try to fit the comment within 120 characters for consistent formatting.

Contributor

Also, here you can add a // comment noting that the cost of getSeed is negligible, provided this path runs once per epoch.

@philknows philknows removed the status-blocked This is blocked by another issue that requires resolving first. label May 2, 2022
@dapplion
Contributor

dapplion commented May 4, 2022

packages/beacon-state-transition/src/cache/cachedBeaconState.ts is an empty file

@@ -150,6 +163,9 @@ export class EpochContext {
epoch: Epoch;
syncPeriod: SyncPeriod;

currentProposerSeed: Uint8Array;
Contributor

Why does this need to be persisted? What's the advantage of keeping it around?

@@ -150,6 +163,9 @@ export class EpochContext {
epoch: Epoch;
syncPeriod: SyncPeriod;

currentProposerSeed: Uint8Array;
nextProposerSeed: Uint8Array;
Contributor

Same here, what's the advantage of keeping this around? Why not compute on demand and discard after caching nextEpochProposers?

Contributor Author

The issue with that is related to what you observed in the comment here:

getNextEpochBeaconProposer should be in the epoch context and use only data available there, immutable during an epoch

Computing the seed via the getSeed method requires the state, which makes it impossible to compute using only data available in the context.

Contributor

Then please add comments 🙏 all non-obvious decisions must be rationalized in committed comments

* balances are sampled to adjust the probability of the next selection (32 per epoch on average). So to invalidate
* the prediction, the effective balance of one of those 32 samples should change and flip the random_byte inequality.
*/
getBeaconProposersNextEpoch(): ValidatorIndex[] {
Contributor

Added a rationale on when and why the proposer prediction stands or is invalidated

this.effectiveBalanceIncrements
);
this.proposersNextEpoch = {computed: true, indexes};
}
Contributor

Now computation of proposers is deferred AND cached when computed

Contributor Author

@dapplion this won't cache though: on every request to get the next proposers, the this.proposersNextEpoch.computed check here always returns false, so the if branch always runs.

You can confirm this by running the application and triggering the endpoint multiple times; the if branch will always run.

I noticed this while working on the implementation, and I think the reason is how chain.getHeadStateAtCurrentEpoch works here. I did not pursue caching once I realised that the request to the endpoint itself was actually pretty fast, so there seemed to be no need to cache.

Contributor

@dapplion dapplion May 18, 2022

I noticed this while working on the implementation, and I think the reason is how chain.getHeadStateAtCurrentEpoch works here

If this is true, then it's a big issue! Can you open a dedicated issue for it? It should be investigated later

I did not go ahead with caching when I realised that the request to the endpoint itself was actually pretty fast and no need to even cache.

This code runs on the main thread and, according to benchmarks, takes ~100ms. That's still a significant amount of time to block the main thread, and repeated computations must be avoided if possible.

Member

And please link back to this context after opening a new dedicated issue. :)


@dapplion dapplion merged commit 4827b29 into master May 16, 2022
@dapplion dapplion deleted the dadepo/next_epoch_proposer_lookahead branch May 16, 2022 11:11
dapplion added a commit that referenced this pull request May 30, 2022
* New metric filtering missed blocks (#3927)

* Log block delay second

* Add elappsedTimeTillBecomeHead metric

* Add 'till become head' metric to dashboard

* chore: correct the metric name to elapsedTimeTillBecomeHead

* Add and use secFromSlot to clock

* Track block source

* Revert "Track block source"

This reverts commit 5fe6220.

* Update bucket values

* Limit how old blocks are tracked in elapsedTimeTillBecomeHead

* Simplify secFromSlot

Co-authored-by: dapplion <35266934+dapplion@users.noreply.github.com>

* Fix the terminal validations of the merge block (#3984)

* Fix the terminal validations of the merge block

* activate merge transition block spec tests

* some comments to explain the merge block validations movement

* Extend error messages when voluntary exit errors because of present of lockfile (#3935)

* Extend error and Clean up

* Only showing the message to use --force to override in case of voluntary exit

* Simplify gitData and version guessing (#3992)

Don't print double slash in version string

Dont add git-data.json to NPM releases

Write git-data.json only in from source docker build

Remove numCommits

Test git-data.json generation from within the test

Move comment

Revert "Dont add git-data.json to NPM releases"

This reverts commit 5fe2d38.

Simplify gitData and version guessing

Run cmd

* Activate ex-ante fork-choice spec tests (#4003)

* Prepare custom version on next release (#3990)

* Prepare custom version on next release

* Test in branch

* Don't set version in advance

* Remove --canary flag

* Change and commit version

* Setup git config

* Revert temp changes

* Lightclient e2e: increase validator client (#4006)

* Bump to v0.37.0 nightly builds (#4013)

* Guarantee full spec tests coverage (#4012)

* Ensure all spec tests are run

* Fix general bls tests

* Improve docs of specTestIterator

* Fix fork_choice tests

* Remove Check spec tests step

* Add merge transition/finalization banners (#3963)

* Add merge transition/finalization banners

* fix signatures

* Benchmark initial sync (#3995)

* Basic range sync perf test

* Benchmark initial sync

* Add INFURA_ETH2_CREDENTIALS to benchmark GA

* Download test cache file from alternative source

* Re-org beforeValue and testCase helpers

* Break light-client - state-transition test dependency

* Revert adding downloadTestCacheFile

* Download files from a Github release

* Clarify #3977 with unbounded uint issue (#4018)

* Update mainnet-shadow-5 configs (#4021)

* Bump moment from 2.29.1 to 2.29.2 (#3901)

Bumps [moment](https://github.com/moment/moment) from 2.29.1 to 2.29.2.
- [Release notes](https://github.com/moment/moment/releases)
- [Changelog](https://github.com/moment/moment/blob/develop/CHANGELOG.md)
- [Commits](moment/moment@2.29.1...2.29.2)

---
updated-dependencies:
- dependency-name: moment
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Implement support for validator next-epoch proposer duties (#3782)

* Implementation to be able to get block proposer an epoch ahead - still need optimization

* revert changes made to waitForSlot

* caching the results of computing future proposers. Also extended test

* using effectiveBalanceIncrements from state instead of recomputing it

* fix lint errors

* revert check not needed in getBeaconProposer

* Update tests to include assertion messages

* Move caching of next proposer duties to BeaconChain class

* Delete the block proposer previously cached when next proposer was requested at current epoch

* moved next epoch proposers from the chain to the state

* Compute next proposer on demand and cache

* Fix lint errors

* update implementation to work with changes from master

* caching epoch seed in context so that getNextEpochBeaconProposer can be independent of state

* Revert "caching epoch seed in context so that getNextEpochBeaconProposer can be independent of state"

This reverts commit 02a722a.

* caching epoch seed in context so that getNextEpochBeaconProposer can be independent of state

* removing the need to delete from nextEpochProposers in call to getBeaconProposer

* no need to recompute currrentProposerSeed again

* Revert "no need to recompute currrentProposerSeed again"

This reverts commit b6b1b8c.

* removed empty file left after fixing merge conflicts

* remove some unnecessary variable from the epoch context.

* add some comments

* Fix lint

* import from the right location

* Review PR

* Merge imports

* Delete get proposers api impl test

* Remove duplicated comment

Co-authored-by: dapplion <35266934+dapplion@users.noreply.github.com>

* Extend timeout for gitData unit test (#4026)

* Fix readAndGetGitData (#4025)

* Ensure light client update is in a single period (#4029)

* Handle merge block fetch error (#4016)

* Handle merge block fetch error

* Log errors on fetch errors for terminal pow

* docs: Update nodeJS minimum requirement (#4037)

* Remove child_process call in gitData before step (#4033)

* Oppool aggregates use BitArray only for set logic (#4034)

* Use BitArrays for aggregate merging

* Test intersectUint8Arrays

* Review PR

* Update tests

* Remove un-used code
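
The BitArray set logic referenced above relies on byte-wise intersection of aggregation bitfields. A minimal sketch of such a helper (assuming equal-length inputs; not necessarily the library's exact signature):

```typescript
// Byte-wise AND of two equal-length Uint8Arrays; a non-zero byte in the result
// means the two aggregation bitfields share at least one set bit in that byte.
function intersectUint8Arrays(a: Uint8Array, b: Uint8Array): Uint8Array {
  if (a.length !== b.length) throw new Error("length mismatch");
  const out = new Uint8Array(a.length);
  for (let i = 0; i < a.length; i++) {
    out[i] = a[i] & b[i];
  }
  return out;
}
```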

* Modify gossipsub params following consensus spec v1.1.10 (#4011)

* Modify gossipsub params following consensus spec v1.1.10

* Specify GOSSIPSUB_HEARTBEAT_INTERVAL as a constant

* Throw a more informative error on invalid keystore (#4022)

* Throw a more informative error on invalid keystore

* Make error more descriptive

* Use template string

* Update keys.ts

* Update keys.ts

Co-authored-by: Lion - dapplion <35266934+dapplion@users.noreply.github.com>

* Ignore gossip AggregateAndProof if aggregate is seen (#4019)

* Ignore gossip AggregateAndProof if aggregate is seen

* Check for non-strict superset of seen attestation data

* Fix validateGossipAggregateAndProof benchmark test

* Fix import

* Utilize intersectUint8Arrays()

* Implement SeenContributionAndProof.participantsKnown

* Add metrics to seen cache

* Add perf tests

* Change method name to isSuperSetOrEqual()

* Refactor metric names
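
The seen-cache check above boils down to a superset test on aggregation bits: if the attesters of a new aggregate are all already covered by a seen one, the gossip message can be ignored. A sketch of such a predicate (assumed shape, operating on raw bitfield bytes):

```typescript
// Returns true if every bit set in `sub` is also set in `sup`, i.e. the new
// aggregate contributes no unseen attesters and can be ignored.
function isSuperSetOrEqual(sup: Uint8Array, sub: Uint8Array): boolean {
  if (sup.length !== sub.length) throw new Error("length mismatch");
  for (let i = 0; i < sup.length; i++) {
    if ((sup[i] & sub[i]) !== sub[i]) return false;
  }
  return true;
}
```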

* Specify lerna exact version for release-nightly workflow (#4049)

* Add ropsten network (#4051)

* Force all packages to be versioned for exact (#4052)

* Update discv5 to v0.7.1 (#4044)

* Add ability to update the fee recipient for execution via beacon and/or validator defaults (#3958)

* Add and use a default fee recipient for a validator process

* transfer the proposer cache to beacon chain

* mock chain fixes

* test and perf fixes

* fee recipient validation change

* track and use fee recipient as string instead of ExecutionAddress

* fix unit test

* fix merge test

* use dummy address

* refactor and add proposer cache pruning

* tests for beacon proposer cache

* merge interop fee recipient check

* fix the optional

* feeRecipient confirmation and small refac

* add the missing map

* add flag to enable strict fee recipient check
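
The fee-recipient defaulting described in these commits can be summarized as: prefer the per-validator setting if one was registered, otherwise fall back to the process-wide default. A sketch under assumed names and shapes (this is not Lodestar's actual API):

```typescript
// Pick the execution fee recipient for a proposal: the validator-specific
// address when registered, otherwise the process-wide default.
function selectFeeRecipient(
  perValidator: Map<number, string>,
  validatorIndex: number,
  defaultFeeRecipient: string
): string {
  return perValidator.get(validatorIndex) ?? defaultFeeRecipient;
}
```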

* Small refactor to setup merge for ropsten using baked in configs (#4053)

* Issue advance fcU for building the EL block (#3965)

rebasing to the refactored prepare beacon proposer

refactor payload id cache as a separate class and add pruning

issue payload fcUs if synced

rename issueNext.. to maybeIssueNext...

* Simplify release process (#4030)

* Simplify release process

* Remove old postrelease script

* Add lerna version check

* Tweak RELEASE.md

* Add force-publish to lerna version command

* Update the proposer boost percentage to 40% (#4055)
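
For context on the 40% figure: in the consensus fork-choice spec, the timely current-slot proposal gets an extra score of `committee_weight * PROPOSER_SCORE_BOOST // 100`. A sketch of that computation (helper name illustrative):

```typescript
// Extra fork-choice score granted to the timely current-slot proposal,
// per the consensus spec: committee_weight * PROPOSER_SCORE_BOOST // 100.
function computeProposerBoost(committeeWeight: number, proposerScoreBoost = 40): number {
  return Math.floor((committeeWeight * proposerScoreBoost) / 100);
}
```

This PR updates the boost percentage (the `proposerScoreBoost` value above) to 40.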

* ESM Support (#3978)

* ESM changes

* Fix root lodestar script

* Fix some linter errors

* try directly re-exporting under an alias from the networks module

* Fix types exports

* Fix more linter errors

* Fix spec test download

* Update bls to 7.1.0

* Fix spec tests

* temp reverting eslint parser option to 10 and disabling the check of .js file extension. Should fix lint errors

* temp commented out file-extension-in-import

* Disable readme checks

* Fix check-build

* Fix params e2e tests

* Bump @chainsafe/threads

* Bump bls to v7.1.1

* Add timeouts after node initialization but before sim test run

* Tweak timeouts

* Tweak timeout

* Tweak sim merge timeout

* Tweak sim merge timeout

* Tweak sim merge timeout

* Tweak sim merge timeout

* Add more timeouts

* Add another timeout

* Fix linter errors

* Fix some tests

* Fix some linter errors and spec tests

* Fix benchmarks

* Fix linter errors

* Update each bls dependency

* Tweak timeouts

* Add another timeout

* More timeouts

* Fix bls pool size

* Set root package.json to ESM

* Remove old linter comment

* Revert "Set root package.json to ESM"

This reverts commit 347b0fd.

* Remove stray file (probably old)

* Undo unnecessary diff

* Add comment on __dirname replacement

* Import type @chainsafe/bls/types

* Use lodestar path imports

* Revert multifork to lodestar package

* Format .mocharc.yaml

* Use same @chainsafe/as-sha256 version

* Fix lodash path imports

* Use src instead of lib

* Load db metrics

* Remove experimental-specifier-resolution

* Remove lodestar/chain export

* Add stray missing file extension

* Revert ValidatorDir changes

* Fix stray missing file extensions

* Fix check-types

Co-authored-by: Dadepo Aderemi <dadepo@gmail.com>
Co-authored-by: dapplion <35266934+dapplion@users.noreply.github.com>

* chore(release): v0.37.0-beta.0

* Bump to v0.37.0

Co-authored-by: tuyennhv <vutuyen2636@gmail.com>
Co-authored-by: g11tech <76567250+g11tech@users.noreply.github.com>
Co-authored-by: dadepo <dadepo@gmail.com>
Co-authored-by: Cayman <caymannava@gmail.com>
Co-authored-by: Phil Ngo <58080811+philknows@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: g11tech <gajinder@g11.in>
Development

Successfully merging this pull request may close these issues.

Support validator next-epoch proposer duties