
go/worker/storage: Transition storage sync to P2P #4459

Merged (5 commits, Feb 9, 2022)

Conversation

kostko (Member) commented Feb 3, 2022

TODO

  • Go over storage sync metrics and consider adding some more metrics.

kostko added the c:p2p (Category: P2P) and c:breaking/runtime (Category: breaking runtime changes) labels on Feb 3, 2022
kostko (Member Author) commented Feb 3, 2022

codecov bot commented Feb 3, 2022

Codecov Report

Merging #4459 (eaf491f) into master (cb18563) will decrease coverage by 0.03%.
The diff coverage is 74.23%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #4459      +/-   ##
==========================================
- Coverage   68.83%   68.80%   -0.04%     
==========================================
  Files         415      423       +8     
  Lines       46799    47241     +442     
==========================================
+ Hits        32214    32502     +288     
- Misses      10625    10740     +115     
- Partials     3960     3999      +39     
Impacted Files Coverage Δ
go/registry/api/api.go 58.48% <ø> (ø)
go/worker/common/p2p/rpc/types.go 0.00% <0.00%> (ø)
go/worker/storage/committee/utils.go 92.30% <ø> (ø)
...n/crypto/signature/signers/memory/memory_signer.go 57.14% <40.00%> (-4.68%) ⬇️
...ommon/crypto/signature/signers/file/file_signer.go 60.74% <65.21%> (+3.59%) ⬆️
go/worker/storage/committee/checkpoint_sync.go 65.27% <65.62%> (-11.16%) ⬇️
go/worker/common/p2p/peermgmt.go 78.29% <68.00%> (-5.13%) ⬇️
go/worker/common/p2p/p2p.go 67.00% <69.69%> (-1.54%) ⬇️
go/worker/storage/committee/node.go 76.49% <72.72%> (+0.65%) ⬆️
go/worker/common/p2p/rpc/client.go 75.78% <75.78%> (ø)
... and 47 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5041b59...eaf491f.

go/worker/common/p2p/crypto.go (review thread; outdated, resolved)
```go
return nil, fmt.Errorf("call failed on all peers")
}

func (c *client) CallMulti(
```
Contributor commented:
So this doesn't propagate errors? Does it need to?

go/common/crypto/signature/signers/file/file_signer.go (review thread; outdated, resolved)
Yawning (Contributor) commented Feb 4, 2022

The more I think about this, the more I am incredibly against exposing the actual private key to handle libp2p's garbage bullshit. Instead I propose something like this:

```go
// In the signature package.

// StaticEntropyProvider is the interface implemented by signers that support
// providing persistent static entropy that is entirely independent from a keypair.
type StaticEntropyProvider interface {
	// StaticEntropy returns PrivateKeySize bytes of cryptographic entropy that
	// is independent from the Signer's private key.  The value of this entropy
	// is constant for the lifespan of the signer's underlying key pair.
	StaticEntropy() []byte
}
```

When initializing the file signer, load (or generate) cryptographic entropy and persist it to libp2p_is_dumb.pem or something, and use that instead of exposing the private key.

kostko (Member Author) commented Feb 4, 2022

Agreed, that sounds like a better solution, especially given that the only use case for this is to generate the QUIC reset key.

Yawning (Contributor) commented Feb 4, 2022

> Agreed, that sounds like a better solution, especially given that the only use case for this is to generate the QUIC reset key.

We are pretty good about separating concerns and using different keys for basically everything, but deriving stuff like that off a signing key makes my tinfoil hat crinkle, even if the derivation procedure looks "ok".

go/worker/storage/p2p/protocol.go (review thread; outdated, resolved)
go/worker/common/p2p/rpc/client.go (two review threads; outdated, resolved)
go/worker/common/p2p/rpc/peermgmt.go (three review threads; outdated, resolved)
kostko (Member Author) commented Feb 4, 2022

@Yawning I just pushed a version that uses the static entropy provider, please take a look.

kostko marked this pull request as ready for review on February 4, 2022 12:43
```go
if ps.successes+ps.failures > 0 {
	// We have some history for this peer.
	failRate := float64(ps.failures) / float64(ps.failures+ps.successes)
	return float64(ps.avgRequestLatency) + failRate*float64(avgRequestLatency)
```
Member commented:
So a peer failing all requests quickly will have a similar (slightly worse) score than an average node returning all correct results? Hm, I guess that does make sense: if the peer fails quickly, that's not really all that bad, since not a lot of time is spent on it.

Maybe the penalty part for failing could have an additional constant factor? But I have no reason to suggest anything other than 1.

go/worker/common/p2p/rpc/peermgmt.go (review thread; resolved)
go/worker/storage/committee/checkpoint_sync.go (review thread; outdated, resolved)
Yawning (Contributor) commented Feb 4, 2022

> @Yawning I just pushed a version that uses the static entropy provider, please take a look.

That looks about like how I would do it.

go/worker/common/p2p/rpc/client.go (review thread; outdated, resolved)
Labels
c:breaking/runtime Category: breaking runtime changes c:p2p Category: P2P

3 participants