refactor: tbs #4009
Conversation
Force-pushed from a30c8d6 to 0955a87.
Codecov Report

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #4009      +/-   ##
==========================================
+ Coverage   58.35%   58.82%   +0.47%
==========================================
  Files         194      192       -2
  Lines       42581    42117     -464
==========================================
- Hits        24847    24777      -70
+ Misses      17734    17340     -394

View full report in Codecov by Sentry.
Force-pushed from 0955a87 to ac33b5c.
I think removing some of the type-level complexity that I introduced in an overzealous attempt at avoiding code duplication is good, but we should not expose functions that are easy to misuse. The tbs crate should be kinda idiot-proof. We might not be able to avoid exposing some of the lower-level stuff for the DKG, which we might not be able to move into the tbs crate (but it would be worth a try), but the mint module should not need to know that it cannot accumulate more than a given number of notes.
crypto/tbs/src/lib.rs
Outdated
.map(|(lm, y)| lm * y.0)
.reduce(|a, b| a + b)
Please use more descriptive names, this ain't a math paper 😜, e.g. lagrange_m instead of lm (whatever m stands for xD).
IMO if the scope of a symbol is tiny enough, it's kind of OK to go with supa-short names, as it helps readability. Also, reduce(Add::add) works and avoids naming anything. And isn't the whole thing just a .sum()?
Agree for add (why doesn't .sum work btw?), but I had to stop and think about what lm meant.
sum does not work because for an empty iterator it returns the additive identity (zero), but we want to panic in that case.
.zip(shares.values())
.map(|(lm, y)| lm * y.0)
.reduce(|a, b| a + b)
.expect("We have at least one share")
How do we handle the single-sig case? Probably elsewhere?
crypto/tbs/src/lib.rs
Outdated
lagrange_multiplier(shares.keys().cloned().map(Scalar::from).collect())
    .into_iter()
    .zip(shares.values())
    .map(|(lm, y)| lm * y.0)
    .reduce(|a, b| a + b)
    .expect("We have at least one share")
    .to_affine(),
Feels like parts of this could be refactored out and de-duplicated xD but I guess your goal was to simplify and accept redundancy.
crypto/tbs/src/poly.rs
Outdated
pub fn coefficients(&self) -> Iter<G> {
    self.coefficients.iter()
}

pub fn random_scalar_coefficients(degree: usize, rng: &mut impl RngCore) -> Vec<Scalar> {
Technically, we probably also want to require CryptoRng to be implemented by the rng.
crypto/tbs/src/poly.rs
Outdated
        _pd: PhantomData,
    }
}

pub fn evaluate_polynomial_scalar(coefficients: &[Scalar], x: &Scalar) -> Scalar {
Documentation needed: what are the assumptions about the inputs (at least one coefficient, …)
Force-pushed from b8a1fc6 to 38b8464.
I think there are two further conceptual issues with the tbs crate that relate to some of the comments here:

1. The aggregation of public key shares: after the DKG, every peer obtains the same polynomial, which it uses to calculate the public key shares for every peer. The shares are then stored in the MintConsensusConfig and later aggregated (in two different places, even) into the public keys with the same Lagrange interpolation as the blind signatures, which causes the code duplication @elsirion commented on. However, the public key is just the polynomial obtained from the DKG evaluated at zero, so the information is trivially available when creating the MintConsensusConfig. Therefore the aggregated public keys should really be stored in the MintConsensusConfig; aggregating the pk shares is just weird. The main question here, it seems, is how to do it, not if.

2. The amalgamation of key generation with tbs. Both tbs and the threshold point encryption for the ng ln module are quite simple; the design space is small. However, both of them require the same DKG, while the DKG is comparatively complicated and has a larger design space as well. That's why I think it makes sense to split the DKG from tbs and tpe; then we would also not have to expose low-level functions like evaluate_polynomial in the tbs crate. The dealer_keygen in tbs would be private, for testing purposes only. Though this would lead to some code duplication of the helper functions, like dealer_keygen and evaluate_polynomial_scalar, it would allow for a clean conceptual separation between the crates. This could be done with this PR; the question is whether we want to do it at all.
Force-pushed from 797a224 to 76889e1.
crypto/tbs/benches/tbs.rs
Outdated
-    bencher.iter(move || combine_valid_shares(shares.clone(), 4));
+    bencher.iter(move || aggregate_signature_shares(&shares.clone()));
Both & and .clone() look unnecessary, but I am probably missing something.
Maybe we want to save the polynomial in the cfg instead of the key shares then, and calculate both the aggregated pub key and the key shares on startup? That should be much more straightforward (and is also more in line with the design of the old threshold crypto crate).

As long as we don't have redundancy in the config I'm ok with it.
I'm a bit confused: if the
Yes, that would be a clean way to do it. However, we would have to write a db migration that recovers the original polynomial coefficients from the evaluations. Though this is possible, we would have to implement new crypto code to do it. Therefore I would suggest to just add the aggregated pk, as we already have the code for the db migration this would require. Though this technically adds redundancy to the config, please keep in mind that the config already has redundancy by saving the public key shares for every peer instead of just the polynomial.
Force-pushed from 0e86892 to 36fa9b6.
I think a common poly crate is overkill, as the shared polynomial evaluation code is just a few lines. Right now I opted for duplication of the poly eval code and also duplicated the dealer_keygen function, such that tbs actually only exposes threshold blind signing, as the name implies. In the future we may create a keygen crypto crate for dealer_keygen and all cryptographic primitives for the DKG, such as Feldman or Pedersen commitments and polynomial evaluation. Then we could remove the duplicated code again. However, as I do not consider the keygen code as stable as tbs, I would hold off on this.
Force-pushed from bac8105 to e247f03.
let keys = (1_u64..)
    .zip(keys.into_iter().cloned())
    .take(cfg.consensus.peer_tbs_pks.threshold())
    .collect();

(amt, aggregate_public_key_shares(&keys))
More a critique of my existing code: the key index being implicit seems like a bad idea.
Force-pushed from 20449d0 to f6b8767.
/// **IMPORTANT**: `from_bytes` includes a tag in the hash, this doesn't
pub fn from_hash(hash: impl Digest<OutputSize = U32>) -> Message {
    Message(hash_to_curve::<G1Projective, _>(hash).to_affine())
}
I guess we never used this?
    .enumerate()
    .collect::<Vec<_>>();
}))
.take(server_cfgs.len() - ((server_cfgs.len() - 1) / 3))
On the one hand, it might be good to independently verify that our thresholds are correct; on the other hand, we already have a function to calculate the threshold.
Force-pushed from f6b8767 to 2eeb143.
Dev call: Maybe add a reference to "D. L. Vo, F. Zhang, and K. Kim, “A new threshold blind signature scheme from pairings,” 2003."
Force-pushed from 2eeb143 to 2155907.
LGTM, but my expertise here is shallow.
We remove generics to make the code more explicit and adhere to a consistent functional style.