go/worker/storage: Transition storage sync to P2P #4459
Conversation
Force-pushed from cea6119 to 70776d6
Codecov Report
@@ Coverage Diff @@
## master #4459 +/- ##
==========================================
- Coverage 68.83% 68.80% -0.04%
==========================================
Files 415 423 +8
Lines 46799 47241 +442
==========================================
+ Hits 32214 32502 +288
- Misses 10625 10740 +115
- Partials 3960 3999 +39
Continue to review full report at Codecov.
return nil, fmt.Errorf("call failed on all peers")
}

func (c *client) CallMulti(
So this doesn't propagate errors? Does it need to?
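To make the question concrete, here is a minimal sketch of a best-effort multi-peer call that drops per-peer errors instead of propagating them. This is not the actual oasis-core code; the `Peer` interface and every name below are hypothetical:

```go
// Package client sketches a best-effort multi-peer call pattern.
package client

import "context"

// Peer is a hypothetical stand-in for a libp2p peer handle.
type Peer interface {
	Call(ctx context.Context, method string, body []byte) ([]byte, error)
}

// CallMulti fans the request out to every peer and returns whatever
// succeeded. Per-peer errors are intentionally dropped: a caller that
// needs N responses checks len(rsps) itself, and failures would feed
// back into peer scoring rather than abort the whole call.
func CallMulti(ctx context.Context, peers []Peer, method string, body []byte) [][]byte {
	var rsps [][]byte
	for _, p := range peers {
		rsp, err := p.Call(ctx, method, body)
		if err != nil {
			continue // swallowed; could be recorded against the peer's score
		}
		rsps = append(rsps, rsp)
	}
	return rsps
}
```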
The more I think about this, the more I am incredibly against exposing the actual private key to handle libp2p's garbage bullshit. Instead I propose something like this:
When initializing the file signer, load (or generate) cryptographic entropy and persist it to […]

Agreed, that sounds like a better solution, especially given that the only use case for this is to generate the QUIC reset key.

We are pretty good about separating concerns and using different keys for basically everything, but deriving stuff like that off a signing key makes my tinfoil hat crinkle, even if the derivation procedure looks "ok".
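A sketch of the proposed scheme, under the assumption that the persisted entropy is a 32-byte seed stored in a hypothetical `entropy.bin` and that use-specific sub-keys (such as the QUIC stateless reset key) are derived from it with HKDF. None of the names below are from oasis-core:

```go
// Package entropy sketches a static entropy provider: a dedicated,
// persisted seed that sub-keys are derived from, so the signing
// private key is never exposed to libp2p.
package entropy

import (
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"io"
	"os"
	"path/filepath"

	"golang.org/x/crypto/hkdf"
)

const entropySize = 32 // assumed size of the persisted seed

// LoadOrGenerate returns the node's static entropy, creating and
// persisting it on first use.
func LoadOrGenerate(dataDir string) ([]byte, error) {
	fn := filepath.Join(dataDir, "entropy.bin") // hypothetical file name
	if b, err := os.ReadFile(fn); err == nil && len(b) == entropySize {
		return b, nil
	}
	b := make([]byte, entropySize)
	if _, err := rand.Read(b); err != nil {
		return nil, fmt.Errorf("entropy: failed to generate: %w", err)
	}
	if err := os.WriteFile(fn, b, 0o600); err != nil {
		return nil, fmt.Errorf("entropy: failed to persist: %w", err)
	}
	return b, nil
}

// DeriveKey expands the static entropy into a use-specific key; the
// context string provides domain separation, e.g. "quic-stateless-reset".
func DeriveKey(entropy []byte, context string, size int) ([]byte, error) {
	key := make([]byte, size)
	r := hkdf.New(sha256.New, entropy, nil, []byte(context))
	if _, err := io.ReadFull(r, key); err != nil {
		return nil, err
	}
	return key, nil
}
```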
Force-pushed from 70776d6 to f6ad97f
@Yawning I just pushed a version that uses the static entropy provider, please take a look.
Force-pushed from f6ad97f to 68bd9a3
if ps.successes+ps.failures > 0 {
	// We have some history for this peer.
	failRate := float64(ps.failures) / float64(ps.failures+ps.successes)
	return float64(ps.avgRequestLatency) + failRate*float64(avgRequestLatency)
So a peer failing all requests quickly will have a similar (slightly worse) score to an average node returning all correct results? Hm, I guess that does make sense: if the peer fails quickly, not a lot of time is wasted on it, so it's not really all that bad.
Maybe the penalty part for failing could have an additional constant factor? But I have no reason to suggest anything other than 1.
That looks about like how I would do it.
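For reference, the scoring rule from the diff above, rewritten as a standalone sketch with the suggested constant penalty factor spelled out (kept at 1, as agreed in the thread). The `peerStats` type and all names here are assumptions, not the actual oasis-core implementation:

```go
// Package scoring sketches the peer scoring rule under discussion.
// Lower scores are better: a peer pays its own average latency plus a
// failure-rate-weighted multiple of the global average latency, so a
// peer that fails all requests quickly scores only slightly worse than
// an average peer that succeeds.
package scoring

import "time"

// failPenaltyFactor scales the failure term; the thread suggests 1.
const failPenaltyFactor = 1.0

type peerStats struct {
	successes         uint64
	failures          uint64
	avgRequestLatency time.Duration // this peer's average latency
}

// score ranks a peer given the average request latency across all peers.
func score(ps peerStats, globalAvgLatency time.Duration) float64 {
	if ps.successes+ps.failures == 0 {
		// No history: fall back to the global average so new peers
		// sort together with averagely performing known peers.
		return float64(globalAvgLatency)
	}
	failRate := float64(ps.failures) / float64(ps.failures+ps.successes)
	return float64(ps.avgRequestLatency) + failPenaltyFactor*failRate*float64(globalAvgLatency)
}
```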
Force-pushed from 68bd9a3 to 2edb626
Force-pushed from 2edb626 to eef3409
Force-pushed from eef3409 to eaf491f