
Migrate to libp2p > 0.35.8 and gossipsub 0.12.2 #3661

Merged — 5 commits merged into master on Mar 10, 2022
Conversation

@twoeths (Contributor) commented Jan 24, 2022

Motivation

We want to migrate to the latest versions of libp2p and gossipsub, which contain improvements that should be applied to lodestar.

Description

Closes #3548

TODO:

  • Wait for libp2p to be released; the latest master fixed the typing issue
  • Use datastore-core 7.0.1
  • Test on a node

codeclimate bot commented Jan 24, 2022

Code Climate has analyzed commit 5040e37 and detected 0 issues on this pull request.


codecov bot commented Jan 24, 2022

Codecov Report

Merging #3661 (058096f) into master (69ce81e) will decrease coverage by 0.29%.
The diff coverage is n/a.

❗ Current head 058096f differs from pull request most recent head 20c7356. Consider uploading reports for the commit 20c7356 to get more accurate results

@@            Coverage Diff             @@
##           master    #3661      +/-   ##
==========================================
- Coverage   36.41%   36.12%   -0.30%     
==========================================
  Files         324      325       +1     
  Lines        8952     9041      +89     
  Branches     1403     1419      +16     
==========================================
+ Hits         3260     3266       +6     
- Misses       5549     5632      +83     
  Partials      143      143              

github-actions bot commented Jan 24, 2022

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 058096f Previous: 69ce81e Ratio
BeaconState.hashTreeRoot - No change 643.00 ns/op 653.00 ns/op 0.98
BeaconState.hashTreeRoot - 1 full validator 164.15 us/op 127.58 us/op 1.29
BeaconState.hashTreeRoot - 32 full validator 2.3989 ms/op 1.9146 ms/op 1.25
BeaconState.hashTreeRoot - 512 full validator 32.287 ms/op 25.103 ms/op 1.29
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 167.71 us/op 128.30 us/op 1.31
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 3.0363 ms/op 2.2717 ms/op 1.34
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 34.384 ms/op 27.797 ms/op 1.24
BeaconState.hashTreeRoot - 1 balances 116.07 us/op 90.021 us/op 1.29
BeaconState.hashTreeRoot - 32 balances 1.0120 ms/op 761.28 us/op 1.33
BeaconState.hashTreeRoot - 512 balances 9.1357 ms/op 7.2186 ms/op 1.27
BeaconState.hashTreeRoot - 250000 balances 169.85 ms/op 133.39 ms/op 1.27
processSlot - 1 slots 69.336 us/op 47.085 us/op 1.47
processSlot - 32 slots 4.0291 ms/op 2.8913 ms/op 1.39
getCommitteeAssignments - req 1 vs - 250000 vc 5.4576 ms/op 5.2958 ms/op 1.03
getCommitteeAssignments - req 100 vs - 250000 vc 7.5838 ms/op 7.3609 ms/op 1.03
getCommitteeAssignments - req 1000 vs - 250000 vc 8.1141 ms/op 7.8921 ms/op 1.03
computeProposers - vc 250000 25.602 ms/op 20.354 ms/op 1.26
computeEpochShuffling - vc 250000 198.14 ms/op 188.59 ms/op 1.05
getNextSyncCommittee - vc 250000 415.28 ms/op 334.12 ms/op 1.24
altair processAttestation - 250000 vs - 7PWei normalcase 41.293 ms/op 34.217 ms/op 1.21
altair processAttestation - 250000 vs - 7PWei worstcase 38.689 ms/op 32.236 ms/op 1.20
altair processAttestation - setStatus - 1/6 committees join 14.265 ms/op 9.9666 ms/op 1.43
altair processAttestation - setStatus - 1/3 committees join 30.169 ms/op 21.967 ms/op 1.37
altair processAttestation - setStatus - 1/2 committees join 44.755 ms/op 33.216 ms/op 1.35
altair processAttestation - setStatus - 2/3 committees join 61.703 ms/op 43.671 ms/op 1.41
altair processAttestation - setStatus - 4/5 committees join 72.729 ms/op 53.559 ms/op 1.36
altair processAttestation - setStatus - 100% committees join 89.013 ms/op 67.946 ms/op 1.31
altair processAttestation - updateEpochParticipants - 1/6 committees join 14.806 ms/op 11.445 ms/op 1.29
altair processAttestation - updateEpochParticipants - 1/3 committees join 32.120 ms/op 23.044 ms/op 1.39
altair processAttestation - updateEpochParticipants - 1/2 committees join 21.950 ms/op 20.279 ms/op 1.08
altair processAttestation - updateEpochParticipants - 2/3 committees join 23.544 ms/op 24.871 ms/op 0.95
altair processAttestation - updateEpochParticipants - 4/5 committees join 27.915 ms/op 22.539 ms/op 1.24
altair processAttestation - updateEpochParticipants - 100% committees join 25.176 ms/op 24.393 ms/op 1.03
altair processAttestation - updateAllStatus 20.723 ms/op 19.320 ms/op 1.07
altair processBlock - 250000 vs - 7PWei normalcase 37.447 ms/op 35.874 ms/op 1.04
altair processBlock - 250000 vs - 7PWei worstcase 133.04 ms/op 100.38 ms/op 1.33
altair processEpoch - mainnet_e81889 886.69 ms/op 805.48 ms/op 1.10
mainnet_e81889 - altair beforeProcessEpoch 366.56 ms/op 337.48 ms/op 1.09
mainnet_e81889 - altair processJustificationAndFinalization 124.28 us/op 93.479 us/op 1.33
mainnet_e81889 - altair processInactivityUpdates 17.887 ms/op 18.011 ms/op 0.99
mainnet_e81889 - altair processRewardsAndPenalties 108.89 ms/op 98.414 ms/op 1.11
mainnet_e81889 - altair processRegistryUpdates 23.768 us/op 10.079 us/op 2.36
mainnet_e81889 - altair processSlashings 6.9450 us/op 1.9070 us/op 3.64
mainnet_e81889 - altair processEth1DataReset 6.6510 us/op 1.9690 us/op 3.38
mainnet_e81889 - altair processEffectiveBalanceUpdates 7.0469 ms/op 6.2828 ms/op 1.12
mainnet_e81889 - altair processSlashingsReset 39.812 us/op 9.3400 us/op 4.26
mainnet_e81889 - altair processRandaoMixesReset 47.547 us/op 23.671 us/op 2.01
mainnet_e81889 - altair processHistoricalRootsUpdate 8.0730 us/op 2.5970 us/op 3.11
mainnet_e81889 - altair processParticipationFlagUpdates 75.048 ms/op 70.255 ms/op 1.07
mainnet_e81889 - altair processSyncCommitteeUpdates 5.8860 us/op 1.9540 us/op 3.01
mainnet_e81889 - altair afterProcessEpoch 238.03 ms/op 222.35 ms/op 1.07
altair processInactivityUpdates - 250000 normalcase 88.612 ms/op 67.137 ms/op 1.32
altair processInactivityUpdates - 250000 worstcase 84.594 ms/op 68.252 ms/op 1.24
altair processParticipationFlagUpdates - 250000 anycase 66.458 ms/op 66.378 ms/op 1.00
altair processRewardsAndPenalties - 250000 normalcase 108.77 ms/op 91.863 ms/op 1.18
altair processRewardsAndPenalties - 250000 worstcase 108.08 ms/op 95.623 ms/op 1.13
altair processSyncCommitteeUpdates - 250000 420.26 ms/op 341.43 ms/op 1.23
Tree 40 250000 create 840.14 ms/op 605.60 ms/op 1.39
Tree 40 250000 get(125000) 340.24 ns/op 324.09 ns/op 1.05
Tree 40 250000 set(125000) 2.7769 us/op 1.9445 us/op 1.43
Tree 40 250000 toArray() 48.401 ms/op 41.782 ms/op 1.16
Tree 40 250000 iterate all - toArray() + loop 46.280 ms/op 41.673 ms/op 1.11
Tree 40 250000 iterate all - get(i) 132.76 ms/op 122.59 ms/op 1.08
MutableVector 250000 create 24.280 ms/op 19.768 ms/op 1.23
MutableVector 250000 get(125000) 15.622 ns/op 13.162 ns/op 1.19
MutableVector 250000 set(125000) 827.62 ns/op 525.01 ns/op 1.58
MutableVector 250000 toArray() 9.5273 ms/op 8.4599 ms/op 1.13
MutableVector 250000 iterate all - toArray() + loop 9.4888 ms/op 8.6415 ms/op 1.10
MutableVector 250000 iterate all - get(i) 3.7244 ms/op 3.3253 ms/op 1.12
Array 250000 create 6.0763 ms/op 5.1810 ms/op 1.17
Array 250000 clone - spread 2.7854 ms/op 2.2644 ms/op 1.23
Array 250000 get(125000) 1.3040 ns/op 1.0490 ns/op 1.24
Array 250000 set(125000) 1.3010 ns/op 1.0560 ns/op 1.23
Array 250000 iterate all - loop 139.47 us/op 167.81 us/op 0.83
effectiveBalanceIncrements clone Uint8Array 300000 233.21 us/op 62.470 us/op 3.73
effectiveBalanceIncrements clone MutableVector 300000 529.00 ns/op 678.00 ns/op 0.78
effectiveBalanceIncrements rw all Uint8Array 300000 186.27 us/op 302.33 us/op 0.62
effectiveBalanceIncrements rw all MutableVector 300000 228.49 ms/op 179.88 ms/op 1.27
aggregationBits - 2048 els - readonlyValues 209.63 us/op 181.27 us/op 1.16
aggregationBits - 2048 els - zipIndexesInBitList 35.788 us/op 37.023 us/op 0.97
regular array get 100000 times 56.202 us/op 67.411 us/op 0.83
wrappedArray get 100000 times 57.049 us/op 67.400 us/op 0.85
arrayWithProxy get 100000 times 39.047 ms/op 28.815 ms/op 1.36
ssz.Root.equals 1.3060 us/op 1.0710 us/op 1.22
ssz.Root.equals with valueOf() 1.4830 us/op 1.2940 us/op 1.15
byteArrayEquals with valueOf() 1.4530 us/op 1.2600 us/op 1.15
phase0 processBlock - 250000 vs - 7PWei normalcase 10.662 ms/op 7.9045 ms/op 1.35
phase0 processBlock - 250000 vs - 7PWei worstcase 100.55 ms/op 73.795 ms/op 1.36
phase0 afterProcessEpoch - 250000 vs - 7PWei 225.05 ms/op 207.46 ms/op 1.08
phase0 beforeProcessEpoch - 250000 vs - 7PWei 737.99 ms/op 601.78 ms/op 1.23
phase0 processEpoch - mainnet_e58758 970.59 ms/op 774.67 ms/op 1.25
mainnet_e58758 - phase0 beforeProcessEpoch 575.42 ms/op 430.79 ms/op 1.34
mainnet_e58758 - phase0 processJustificationAndFinalization 119.50 us/op 94.739 us/op 1.26
mainnet_e58758 - phase0 processRewardsAndPenalties 109.45 ms/op 114.16 ms/op 0.96
mainnet_e58758 - phase0 processRegistryUpdates 90.895 us/op 56.364 us/op 1.61
mainnet_e58758 - phase0 processSlashings 6.5910 us/op 1.7300 us/op 3.81
mainnet_e58758 - phase0 processEth1DataReset 5.9590 us/op 1.8080 us/op 3.30
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 5.6969 ms/op 5.4153 ms/op 1.05
mainnet_e58758 - phase0 processSlashingsReset 33.061 us/op 13.211 us/op 2.50
mainnet_e58758 - phase0 processRandaoMixesReset 44.692 us/op 17.211 us/op 2.60
mainnet_e58758 - phase0 processHistoricalRootsUpdate 8.1990 us/op 2.3310 us/op 3.52
mainnet_e58758 - phase0 processParticipationRecordUpdates 32.001 us/op 13.438 us/op 2.38
mainnet_e58758 - phase0 afterProcessEpoch 202.52 ms/op 179.85 ms/op 1.13
phase0 processEffectiveBalanceUpdates - 250000 normalcase 6.5451 ms/op 5.9800 ms/op 1.09
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 6.9767 ms/op 6.6577 ms/op 1.05
phase0 processRegistryUpdates - 250000 normalcase 95.401 us/op 66.411 us/op 1.44
phase0 processRegistryUpdates - 250000 badcase_full_deposits 4.0169 ms/op 3.1472 ms/op 1.28
phase0 processRegistryUpdates - 250000 worstcase 0.5 2.2293 s/op 1.6028 s/op 1.39
phase0 getAttestationDeltas - 250000 normalcase 14.784 ms/op 13.955 ms/op 1.06
phase0 getAttestationDeltas - 250000 worstcase 14.591 ms/op 13.025 ms/op 1.12
phase0 processSlashings - 250000 worstcase 47.545 ms/op 35.046 ms/op 1.36
shuffle list - 16384 els 14.152 ms/op 12.970 ms/op 1.09
shuffle list - 250000 els 207.54 ms/op 187.05 ms/op 1.11
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 1.2083 ms/op 459.06 us/op 2.63
pass gossip attestations to forkchoice per slot 19.588 ms/op 14.255 ms/op 1.37
computeDeltas 3.4968 ms/op 3.4677 ms/op 1.01
computeProposerBoostScoreFromBalances 475.21 us/op 502.65 us/op 0.95
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 2.4134 ms/op 2.3386 ms/op 1.03
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 802.96 us/op 684.63 us/op 1.17
BLS verify - blst-native 2.6863 ms/op 1.8597 ms/op 1.44
BLS verifyMultipleSignatures 3 - blst-native 5.5836 ms/op 3.8150 ms/op 1.46
BLS verifyMultipleSignatures 8 - blst-native 11.402 ms/op 8.2235 ms/op 1.39
BLS verifyMultipleSignatures 32 - blst-native 41.888 ms/op 29.837 ms/op 1.40
BLS aggregatePubkeys 32 - blst-native 57.416 us/op 39.282 us/op 1.46
BLS aggregatePubkeys 128 - blst-native 226.21 us/op 153.19 us/op 1.48
getAttestationsForBlock 65.886 ms/op 57.338 ms/op 1.15
CheckpointStateCache - add get delete 22.174 us/op 17.256 us/op 1.29
validate gossip signedAggregateAndProof - struct 6.5977 ms/op 4.4429 ms/op 1.48
validate gossip signedAggregateAndProof - treeBacked 6.3376 ms/op 4.4108 ms/op 1.44
validate gossip attestation - struct 3.0498 ms/op 2.0910 ms/op 1.46
validate gossip attestation - treeBacked 3.0369 ms/op 2.1193 ms/op 1.43
pickEth1Vote - no votes 9.4948 ms/op 8.1255 ms/op 1.17
pickEth1Vote - max votes 58.962 ms/op 47.827 ms/op 1.23
pickEth1Vote - Eth1Data hashTreeRoot value x2048 30.619 ms/op 23.933 ms/op 1.28
pickEth1Vote - Eth1Data hashTreeRoot tree x2048 11.219 ms/op 9.4762 ms/op 1.18
pickEth1Vote - Eth1Data fastSerialize value x2048 5.9584 ms/op 4.9982 ms/op 1.19
pickEth1Vote - Eth1Data fastSerialize tree x2048 30.347 ms/op 21.774 ms/op 1.39
bytes32 toHexString 2.1000 us/op 1.5890 us/op 1.32
bytes32 Buffer.toString(hex) 781.00 ns/op 667.00 ns/op 1.17
bytes32 Buffer.toString(hex) from Uint8Array 1.0720 us/op 927.00 ns/op 1.16
bytes32 Buffer.toString(hex) + 0x 770.00 ns/op 661.00 ns/op 1.16
Object access 1 prop 0.40900 ns/op 0.31600 ns/op 1.29
Map access 1 prop 0.35400 ns/op 0.29500 ns/op 1.20
Object get x1000 16.290 ns/op 17.187 ns/op 0.95
Map get x1000 0.95000 ns/op 1.0510 ns/op 0.90
Object set x1000 106.19 ns/op 99.359 ns/op 1.07
Map set x1000 75.784 ns/op 60.037 ns/op 1.26
Return object 10000 times 0.41020 ns/op 0.36750 ns/op 1.12
Throw Error 10000 times 6.7547 us/op 5.7882 us/op 1.17
enrSubnets - fastDeserialize 64 bits 1.5540 us/op 1.2000 us/op 1.29
enrSubnets - ssz BitVector 64 bits 18.819 us/op 16.471 us/op 1.14
enrSubnets - fastDeserialize 4 bits 555.00 ns/op 436.00 ns/op 1.27
enrSubnets - ssz BitVector 4 bits 3.3400 us/op 2.8220 us/op 1.18
RateTracker 1000000 limit, 1 obj count per request 205.09 ns/op 171.70 ns/op 1.19
RateTracker 1000000 limit, 2 obj count per request 157.66 ns/op 127.42 ns/op 1.24
RateTracker 1000000 limit, 4 obj count per request 124.22 ns/op 105.66 ns/op 1.18
RateTracker 1000000 limit, 8 obj count per request 113.63 ns/op 95.070 ns/op 1.20
RateTracker with prune 4.8980 us/op 3.6760 us/op 1.33
array of 16000 items push then shift 5.0952 us/op 3.1589 us/op 1.61
LinkedList of 16000 items push then shift 19.031 ns/op 17.319 ns/op 1.10
array of 16000 items push then pop 222.01 ns/op 204.87 ns/op 1.08
LinkedList of 16000 items push then pop 17.736 ns/op 16.666 ns/op 1.06
array of 24000 items push then shift 7.6218 us/op 4.5532 us/op 1.67
LinkedList of 24000 items push then shift 19.062 ns/op 19.940 ns/op 0.96
array of 24000 items push then pop 207.41 ns/op 177.65 ns/op 1.17
LinkedList of 24000 items push then pop 18.070 ns/op 18.505 ns/op 0.98

by benchmarkbot/action

@wemeetagain (Member) commented

js-libp2p v0.36.0 just released

@dapplion (Contributor) previously approved these changes Feb 8, 2022 and left a comment:

Looks good overall! Some minor comments

@twoeths (Contributor, Author) commented Feb 8, 2022

Warning: we'll get the error below when starting the node if we use the old peerstore dir:

Feb-08 11:46:06.834 [API]              info: Started REST api server address=http://0.0.0.0:9596, namespaces=["beacon","config","debug","events","lightclient","lodestar","node","validator"]
 ✖ Error: Unable to decode multibase string "addrs", base32 decoder only supports inputs prefixed with b
    at Decoder.decode (/root/lodestar/node_modules/libp2p/node_modules/multiformats/cjs/src/bases/base.js:35:17)
    at Codec.decode (/root/lodestar/node_modules/libp2p/node_modules/multiformats/cjs/src/bases/base.js:84:25)
    at PersistentStore.all (/root/lodestar/node_modules/libp2p/src/peer-store/store.js:256:26)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at DefaultPeerStore.getPeers (/root/lodestar/node_modules/libp2p/src/peer-store/index.js:52:24)
    at NodejsNode._onDidStart (/root/lodestar/node_modules/libp2p/src/index.js:698:22)
    at NodejsNode.start (/root/lodestar/node_modules/libp2p/src/index.js:389:7)
    at Network.start (/root/lodestar/packages/lodestar/src/network/network.ts:134:5)
    at Function.init (/root/lodestar/packages/lodestar/src/node/nodejs.ts:210:5)
    at Object.beaconHandler [as handler] (/root/lodestar/packages/cli/src/cmds/beacon/handler.ts:84:18)

Removing it or pointing to a new peerstore dir should get us past the error.
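The manual workaround described above could be sketched as a small startup helper. Note this is illustrative only: the `peerstore` subdirectory name and its location under the beacon data dir are assumptions, not Lodestar's actual layout.

```typescript
// Hypothetical sketch of the workaround: delete the legacy peerstore directory
// before libp2p starts, so it is recreated in the new 0.36.x format.
// The "peerstore" directory name is an assumption, not Lodestar's real layout.
import * as fs from "node:fs";
import * as path from "node:path";

function removeLegacyPeerstore(beaconDir: string): boolean {
  const peerstoreDir = path.join(beaconDir, "peerstore");
  if (!fs.existsSync(peerstoreDir)) return false;
  // Remove the whole directory; libp2p will recreate it on next start
  fs.rmSync(peerstoreDir, {recursive: true, force: true});
  return true;
}
```

Pointing the node at a fresh datadir achieves the same effect without deleting anything.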

@dapplion (Contributor) commented Feb 8, 2022

Removing it or pointing to a new peerstore dir should get us past the error

Which keys changed encoding? Should there be a migration routine?

@dapplion (Contributor) commented Feb 8, 2022

I suggest just changing the data key and abandoning the old data. No need to prune it.

@twoeths (Contributor, Author) commented Feb 9, 2022

Removing it or pointing to a new peerstore dir should get us past the error

Which keys changed encoding? Should there be a migration routine?

0.36.x stores all peer data in a single common namespace: https://github.com/libp2p/js-libp2p/blob/v0.36.0/src/peer-store/store.js#L50
0.32.x stores different kinds of peer data in separate namespaces: https://github.com/libp2p/js-libp2p/blob/v0.32.0/src/peer-store/persistent/index.js#L14

So I don't see how we can migrate easily. Is the peer store data valuable enough to warrant a migration?
@wemeetagain do you have any ideas regarding the data migration?
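The layout difference can be illustrated with a small key classifier. The exact namespace names below (`addrs`, `keys`, `metadata`, `proto-book`) are assumptions inferred from the linked sources and the error log, not an exact reproduction of either libp2p version:

```typescript
// Rough sketch of the two key layouts described above (namespace names are
// assumptions, not verified against the libp2p sources):
//   0.32.x: /peers/addrs/<peer-id>, /peers/keys/<peer-id>, ...
//   0.36.x: /peers/<peer-id>   (one common namespace)
const LEGACY_NAMESPACES = new Set(["addrs", "keys", "metadata", "proto-book"]);

function isLegacyPeerstoreKey(key: string): boolean {
  const parts = key.split("/").filter((p) => p.length > 0);
  // Legacy keys have three segments; the new layout has two
  return parts.length === 3 && parts[0] === "peers" && LEGACY_NAMESPACES.has(parts[1]);
}
```

This would also explain the error quoted earlier: the 0.36.x loader appears to treat the segment after `/peers/` as a multibase-encoded peer id, and the legacy namespace name `addrs` is not valid base32 input.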

@wemeetagain (Member) commented

I think we should do something like:

  • On startup, check for old peer store. If found, destroy the db.

@dapplion (Contributor) commented

I think we should do something like:

* On startup, check for old peer store. If found, destroy the db.

How do you differentiate between the old peerstore and the new peerstore? With your idea, Lodestar now has to maintain this legacy migration code forever, which is annoying.

@twoeths (Contributor, Author) commented Feb 10, 2022

I think we should do something like:

* On startup, check for old peer store. If found, destroy the db.

How do you differentiate between the old peerstore and the new peerstore? With your idea, Lodestar now has to maintain this legacy migration code forever, which is annoying.

The other thing is we don't want to maintain old libp2p peer-data store/load logic in lodestar code.

Can we do this migration step manually? For each release, if there is a migration guide, we should note it somewhere and include it in the release.

I don't see it as critical to keep the existing peer store; I'd go with removing the peerstore folder manually before we deploy the new version.

@dapplion (Contributor) commented

What's the problem with abandoning the previous data? Can't we do that?

@wemeetagain (Member) commented

The other thing is we don't want to maintain old libp2p peer-data store/load logic in lodestar code

Right, I was thinking we would use some heuristic to determine if it's the old version. E.g., before startup, fetch a few keys and attempt a pattern match.

Can we do this migration step manually? For each release, if there is a migration guide, we should note it somewhere and include it in the release.
I don't see it as critical to keep the existing peer store; I'd go with removing the peerstore folder manually before we deploy the new version.

Definitely not critical to keep the existing peer store. It's just a matter of what we expect the UX of running and upgrading lodestar to be.
I had thought that we want folks to be able to just npm install / docker pull the next version and have it "just continue to work", to the best of our abilities.
If we go with manual peerstore removal, we will be breaking this workflow. Maybe that's not a problem if we just write a migration guide and publish it alongside the release. But it will probably waste a few people's time.

What's the problem with abandoning the previous data? Can't we do that?

Doing nothing results in the fatal error @tuyennhv posted above. Some intervention is needed to get lodestar to run.
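The heuristic floated above (sample a few keys, pattern-match, wipe if legacy) could look roughly like the sketch below. `MiniDatastore` is a deliberately minimal stand-in for illustration; the real datastore-core interface is async and richer, and the prefixes are assumptions:

```typescript
// Hedged sketch of the startup heuristic: sample a few keys, pattern-match
// for the legacy per-namespace layout, and wipe the store if any match.
// `MiniDatastore` is a stand-in, not the real datastore-core API.
interface MiniDatastore {
  keys(limit: number): string[];
  clear(): void;
}

const LEGACY_PREFIXES = ["/peers/addrs/", "/peers/keys/", "/peers/metadata/", "/peers/proto-book/"];

function wipeIfLegacyPeerstore(store: MiniDatastore, sampleSize = 10): boolean {
  const sampled = store.keys(sampleSize);
  const isLegacy = sampled.some((k) => LEGACY_PREFIXES.some((p) => k.startsWith(p)));
  if (isLegacy) store.clear(); // destroy the db, as suggested above
  return isLegacy;
}
```

Because the legacy keys fail multibase decoding in 0.36.x, the check must run before libp2p reads the store.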

@philknows (Member) commented

From a UX perspective, breaking this workflow will just require some good communication through the release notes/blog/etc., and it's better to do this now while most of our current users are still generally technical people. I'm OK with either method: a migration guide, or @wemeetagain's script to detect and delete the old peerstore. We should get moving on this because it's a blocker for some of the other problems we are seeing.

@twoeths twoeths marked this pull request as draft February 14, 2022 04:36
@twoeths twoeths force-pushed the tuyen/libp2p-0.35 branch 2 times, most recently from 88c20e8 to 515e687 Compare February 16, 2022 01:45
@twoeths twoeths marked this pull request as ready for review February 16, 2022 10:04
@twoeths (Contributor, Author) commented Feb 16, 2022

I have tested this branch on contabo-19; it shows good results for the "Gossip Block Processed Delay" metric, and no Unknown Block sync happened.
[Screenshot: Gossip Block Processed Delay, 2022-02-16 17:04]

Other nodes (contabo-5, same number of validators connected) need a lot of help from UnknownBlock sync.
[Screenshot: 2022-02-16 17:07]

Memory is stable (ranging from 2 GB to 2.7 GB).

@twoeths twoeths force-pushed the tuyen/libp2p-0.35 branch 2 times, most recently from 2dc3fb5 to 97e9114 Compare February 17, 2022 04:06
@twoeths twoeths marked this pull request as draft February 17, 2022 11:12
@twoeths twoeths force-pushed the tuyen/libp2p-0.35 branch 2 times, most recently from 2268356 to 9be8df8 Compare March 2, 2022 06:56
@wemeetagain wemeetagain marked this pull request as ready for review March 7, 2022 17:02
@wemeetagain (Member) previously approved these changes Mar 7, 2022 and left a comment:

Approving but not merging

@dapplion (Contributor) left a comment:

I really dislike having to convert so many functions to async due to the peerStore. We keep a lot of stuff in memory: the whole operations pool, state caches, etc. And now we are introducing eager writes and reads for as little data as the peerstore holds. It doesn't make sense when gossip keeps all its scores in memory.

  • score: It's just a number; you can keep thousands without causing a memory issue. Prune after X time, e.g. 1 hour after disconnection.
  • addressBook: Only relevant while the peer is connected; prune after disconnection, so bounded by peer count.
  • metadata: Same as above, only relevant for connected peers.
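The in-memory approach suggested in this review could be sketched as below. The class name, API, and the 1-hour window are illustrative choices, not Lodestar's implementation:

```typescript
// Hedged sketch of in-memory peer scores with prune-after-disconnect,
// per the review suggestion above. All names here are hypothetical.
const PRUNE_AFTER_MS = 60 * 60 * 1000; // 1 hour after disconnection, as suggested

interface PeerEntry {
  score: number;
  disconnectedAt: number | null; // null while the peer is connected
}

class InMemoryPeerScores {
  private readonly peers = new Map<string, PeerEntry>();

  setScore(peerId: string, score: number): void {
    const entry = this.peers.get(peerId) ?? {score: 0, disconnectedAt: null};
    entry.score = score;
    this.peers.set(peerId, entry);
  }

  getScore(peerId: string): number | undefined {
    return this.peers.get(peerId)?.score;
  }

  onDisconnect(peerId: string, now: number): void {
    const entry = this.peers.get(peerId);
    if (entry) entry.disconnectedAt = now;
  }

  // Drop peers disconnected longer than PRUNE_AFTER_MS; memory stays
  // bounded by connected peers plus recently disconnected ones.
  prune(now: number): void {
    for (const [id, entry] of this.peers) {
      if (entry.disconnectedAt !== null && now - entry.disconnectedAt > PRUNE_AFTER_MS) {
        this.peers.delete(id);
      }
    }
  }
}
```

The addressBook and metadata books could follow the same pattern, pruning immediately on disconnect since they are only relevant for connected peers.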

@dapplion (Contributor) left a comment:

I'm down to merge as is. @tuyennhv can you complete the fixes for in-memory db on a future PR?

@dapplion dapplion merged commit e30eca0 into master Mar 10, 2022
@dapplion dapplion deleted the tuyen/libp2p-0.35 branch March 10, 2022 17:41
Successfully merging this pull request may close these issues.

Migrate to libp2p-gossipsub and libp2p