[Merged by Bors] - sync: use randomized peers from the set of 20 peers with good latency #5263
Conversation
Force-pushed from 4c8bdbb to 0c85be5 (Compare)
Codecov Report
Additional details and impacted files

@@           Coverage Diff            @@
##           develop    #5263   +/-   ##
=======================================
  Coverage     78.0%    78.0%
=======================================
  Files          266      266
  Lines        31977    31985     +8
=======================================
+ Hits         24964    24978    +14
+ Misses        5499     5493     -6
  Partials      1514     1514

☔ View full report in Codecov by Sentry.
bors merge
…#5263)

This change improves the peer selection logic to split the load across 20 peers with good latency. It also avoids suboptimal behavior where a node sticks to the same peer when it would benefit from switching to another:
- the peer had the best latency in the past but is temporarily faulty because it is overloaded
- the peer fails to negotiate protocols, but we haven't collected latency info yet to prioritize peers

In the second case we should disconnect such a peer, but our software registers protocols asynchronously, so we may actually drop an honest peer; therefore we only work around the problem.
Pull request successfully merged into develop. Build succeeded.
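For illustration only, here is a minimal Go sketch of the selection strategy described above; it is not the actual go-spacemesh code, and the names (peerInfo, selectPeer, bestPeersCount) are hypothetical. It shows the core idea: instead of always syncing from the single lowest-latency peer, keep the 20 lowest-latency peers and pick one of them uniformly at random, which spreads sync load and avoids pinning to a peer that was fast in the past but is currently overloaded.

```go
// Minimal sketch of randomized selection among the best-latency peers.
// All identifiers are hypothetical, not go-spacemesh APIs.
package main

import (
	"fmt"
	"math/rand"
	"sort"
	"time"
)

// peerInfo pairs a peer ID with its most recently measured latency.
type peerInfo struct {
	id      string
	latency time.Duration
}

// bestPeersCount is the size of the candidate set to randomize over.
const bestPeersCount = 20

// selectPeer sorts the known peers by latency, keeps at most bestPeersCount
// of the fastest ones, and returns a uniformly random member of that set.
func selectPeer(peers []peerInfo) (peerInfo, bool) {
	if len(peers) == 0 {
		return peerInfo{}, false
	}
	candidates := make([]peerInfo, len(peers))
	copy(candidates, peers)
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].latency < candidates[j].latency
	})
	n := len(candidates)
	if n > bestPeersCount {
		n = bestPeersCount
	}
	return candidates[rand.Intn(n)], true
}

func main() {
	peers := []peerInfo{
		{id: "peer-a", latency: 40 * time.Millisecond},
		{id: "peer-b", latency: 15 * time.Millisecond},
		{id: "peer-c", latency: 250 * time.Millisecond},
	}
	if p, ok := selectPeer(peers); ok {
		fmt.Printf("syncing from %s (latency %v)\n", p.id, p.latency)
	}
}
```

Randomizing over a small set of good candidates (rather than a strict argmin over latency) is what lets a temporarily overloaded "best" peer fall out of rotation naturally, since other low-latency peers are picked with equal probability on each request.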