
Implement Try and Increment when converting hash to bigint #128

Merged
merged 5 commits into master from try-and-increment on Aug 26, 2021

Conversation

@elichai elichai commented Aug 8, 2021

This algorithm should improve performance in the regular case (it avoids converting the hash to a big int and then serializing it back), and should also improve security on curves where 2^n - q > ε.

Note that trying to convert a small hash into a bigger scalar will now panic.
We could return an error instead, but if this happens it is a programmer error that requires fixing the code; you can't really "handle" an error like that, so I think a panic is the right thing.
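The try-and-increment loop can be sketched with std-only stand-ins: std's 64-bit DefaultHasher takes the place of Blake, and a toy 64-bit "group order" Q (2^64 - 59, the largest 64-bit prime) takes the place of the curve order. Both names are illustrative assumptions, not the library's actual types; the loop shape is what matters.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical toy modulus standing in for the curve's group order q.
const Q: u64 = 0xFFFF_FFFF_FFFF_FFC5; // 2^64 - 59, largest 64-bit prime

// Stand-in for hashing `data || i` with a real hash function.
fn hash_with_counter(data: &[u8], i: u32) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    i.hash(&mut h);
    h.finish()
}

/// Try-and-increment: hash `data || i` for i = 0, 1, 2, ... and accept the
/// first digest that, read as an integer, is already below the group order.
/// No modular reduction takes place, so the result has no reduction bias.
fn hash_to_scalar(data: &[u8]) -> u64 {
    for i in 0u32.. {
        let candidate = hash_with_counter(data, i);
        if candidate < Q {
            return candidate;
        }
        // otherwise increment the counter and retry
    }
    unreachable!("a digest below Q appears with overwhelming probability")
}
```

Because 2^64 - Q is tiny here, the loop almost always succeeds on the first iteration; the closer q is to 2^n, the cheaper the expected cost.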

@@ -38,8 +42,22 @@ impl Blake {
 }

 pub fn result_scalar<E: Curve>(&self) -> Scalar<E> {
-    let n = self.result_bigint();
-    Scalar::from_bigint(&n)
+    let scalar_len = E::Scalar::SCALAR_LENGTH;
@elichai (Contributor, Author) commented:
It sucks that we need this code duplication.
I think we should remove these implementations and make the code that requires hashes generic over Digest, so that users can choose their own hash function (RustCrypto already has a blake2b implementation with SIMD acceleration: https://github.com/RustCrypto/hashes/tree/master/blake2/src).
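The generic-over-Digest design can be sketched with std only by declaring a minimal stand-in trait; real code would bound on `digest::Digest` from the RustCrypto `digest` crate instead, and `XorDigest`, `commit`, and the domain separator below are all hypothetical names for illustration.

```rust
// Minimal stand-in for the RustCrypto `Digest` interface.
trait Digest {
    fn update(&mut self, data: &[u8]);
    fn finalize(self) -> Vec<u8>;
}

// Toy digest (XOR-fold), purely illustrative -- not a real hash.
#[derive(Clone, Default)]
struct XorDigest(u8);

impl Digest for XorDigest {
    fn update(&mut self, data: &[u8]) {
        for b in data {
            self.0 ^= b;
        }
    }
    fn finalize(self) -> Vec<u8> {
        vec![self.0]
    }
}

// Code that needs hashing takes any D: Digest + Default, so the caller
// picks the hash function (e.g. a SIMD-accelerated blake2b) and the
// library needs no per-hash implementations.
fn commit<D: Digest + Default>(msg: &[u8]) -> Vec<u8> {
    let mut d = D::default();
    d.update(b"domain-sep"); // hypothetical domain separator
    d.update(msg);
    d.finalize()
}
```

A caller then writes `commit::<XorDigest>(b"msg")`, swapping in any other `Digest` implementation without touching the library code.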

);
// Try and increment.
for i in 0u32.. {
let starting_state = self.clone();
@survived (Contributor) commented:
I noticed that we have to add + Clone in a lot of places because of this .clone(). It's not really critical, but we could eliminate it by finding the first H(data || 0x1 || 0x2 || ... || i) instead of H(data || i). Are there any downsides to this approach?
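The two counter strategies can be contrasted with std's DefaultHasher standing in for the real hash (the function names below are hypothetical, chosen just to illustrate the two shapes):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Variant A (what the PR does): clone the hasher state over `data` and
// append the counter fresh each iteration, computing H(data || i).
// This is why the hasher must be Clone.
fn candidate_clone(base: &DefaultHasher, i: u32) -> u64 {
    let mut h = base.clone();
    h.write(&i.to_be_bytes());
    h.finish()
}

// Variant B (the suggestion): keep feeding counters into one running
// state, computing H(data || 0 || 1 || ... || i). No Clone needed, but
// each candidate now depends on all previously tried counters.
fn candidate_incremental(h: &mut DefaultHasher, i: u32) -> u64 {
    h.write(&i.to_be_bytes());
    h.finish() // Hasher::finish takes &self, so the state keeps accumulating
}
```

For i = 0 the two variants hash identical input; from i = 1 on, variant B's input stream diverges because the earlier counters stay in the state.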

@elichai (Contributor, Author) commented:

Good question, I'll try to think about whether this can introduce any attacks.
(Small note: AFAIK all hash functions implement Clone.)

@survived (Contributor) commented:

Yeah, all hashes must implement Clone. My suggestion is only about saving a few keystrokes. If you think it might introduce any attacks, let's leave it as is.

@survived (Contributor) commented Aug 10, 2021:

Btw, do we even need to specify the + Clone constraint? According to the documentation, the Digest trait is implemented for any D where D: Clone + FixedOutput + .... Can't Rust deduce that any Digest implements Clone? See https://docs.rs/digest/0.9.0/digest/trait.Digest.html#impl-Digest

@elichai (Contributor, Author) commented:

It's implemented for Clone types via a blanket impl, it doesn't inherit Clone as a supertrait :/
I can instead write D: FixedOutput + Update + Clone, but sadly I can't remove the Clone bound.
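The distinction can be sketched with toy traits mirroring the shape of the `digest` crate (the real crate's traits have different signatures; `Toy` and `hash_twice` here are hypothetical):

```rust
trait FixedOutput {
    fn output(&self) -> u64;
}

// `Digest` has no `Clone` supertrait; it is only *blanket-implemented*
// for types that happen to be Clone + FixedOutput.
trait Digest {
    fn digest(&self) -> u64;
}

impl<T: Clone + FixedOutput> Digest for T {
    fn digest(&self) -> u64 {
        self.output()
    }
}

#[derive(Clone)]
struct Toy(u64);

impl FixedOutput for Toy {
    fn output(&self) -> u64 {
        self.0
    }
}

// The explicit Clone bound is required: from `D: Digest` alone rustc
// cannot "invert" the blanket impl to conclude `D: Clone`. With the
// bounds below, `D: Digest` follows automatically from the blanket impl.
fn hash_twice<D: FixedOutput + Clone>(d: &D) -> (u64, u64) {
    let copy = d.clone();
    (d.digest(), copy.digest())
}
```

Dropping `Clone` from `hash_twice`'s bounds would fail to compile at `d.clone()`, even though every type that implements `Digest` via the blanket impl is in fact Clone.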

elichai commented Aug 26, 2021

@survived Your changes look great :)

@survived survived merged commit fce3eab into master Aug 26, 2021
@elichai elichai deleted the try-and-increment branch August 27, 2021 15:36