
Reduce allocations in group::Wnaf #33

Merged
str4d merged 3 commits into main from wnaf-reduce-allocations on Sep 8, 2022
Conversation

str4d (Member) commented on Sep 7, 2022:

No description provided.

For a scalar with a 32-byte representation, this reduces the number of
allocations in `Wnaf::new().scalar(..).base(..)` from 9 to 3. The
amortized allocation cost (for a reused `Wnaf::new()`) remains at 1.
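For reference, a minimal sketch of the two usage patterns the commit message above measures, following the `Wnaf` API as documented in the `group` crate; the helper functions are illustrative, not part of this PR:

```rust
use group::{Group, Wnaf};

// One-shot: the chain named in the commit message. After this change it
// performs 3 allocations for a scalar with a 32-byte representation.
fn mul_once<G: Group>(base: G, scalar: &G::Scalar) -> G {
    Wnaf::new().scalar(scalar).base(base)
}

// Reused context: binding the scalar once keeps the internal buffers
// alive across calls, which is where the amortized cost of 1 allocation
// comes from.
fn mul_many<G: Group>(bases: &[G], scalar: &G::Scalar) -> Vec<G> {
    let mut wnaf = Wnaf::new();
    let mut wnaf_scalar = wnaf.scalar(scalar);
    bases.iter().map(|&base| wnaf_scalar.base(base)).collect()
}
```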
daira (Contributor) left a comment:

utACK.

I notice that there is no public API to do the precomputations for the scalar and for the base separately. There should be no obstacle to allowing that because the precomputations are independent. That would be precisely what we need to avoid excess computation and allocation when doing scalar muls for the cross product of a set of scalars and a set of bases. (See zcash/librustzcash#593.)

Filed as #34.
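To make the duplicated work concrete, here is a sketch (not code from this PR; `cross_product` is a hypothetical helper) of that workload under the current API, which only lets one side of the product be bound to the reused context:

```rust
use group::{Group, Wnaf};

// Cross product of scalars and bases with the existing API. Binding the
// bases means each base's window table is built once, but every scalar is
// re-recoded into wNAF digits once per base: the excess computation that
// a split precomputation API (#34) would avoid.
fn cross_product<G: Group>(scalars: &[G::Scalar], bases: &[G]) -> Vec<G> {
    let mut wnaf = Wnaf::new();
    let mut results = Vec::with_capacity(scalars.len() * bases.len());
    for &base in bases {
        // Window table for `base`: computed once, reused for all scalars.
        let mut wnaf_base = wnaf.base(base, scalars.len());
        for scalar in scalars {
            // Recodes `scalar` afresh on every outer iteration.
            results.push(wnaf_base.scalar(scalar));
        }
    }
    results
}
```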

This also removes the `byteorder` dependency, as we can depend solely on
APIs available in our MSRV.
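The actual diff lives in src/wnaf.rs; as an illustrative assumption, the substitution is of this general shape, since the `from_le_bytes` conversions have been available in core since Rust 1.32 (`read_le_u64` is a hypothetical helper):

```rust
use core::convert::TryInto;

// Reads a little-endian u64 using core APIs only. Previously this kind of
// read would have gone through `byteorder::LittleEndian::read_u64`.
fn read_le_u64(bytes: &[u8]) -> u64 {
    u64::from_le_bytes(bytes[..8].try_into().expect("at least 8 bytes"))
}
```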
(Review thread on src/wnaf.rs, resolved.)
daira (Contributor) left a comment:

utACK

Co-authored-by: Daira Hopwood <daira@jacaranda.org>
str4d merged commit de0767a into main on Sep 8, 2022.
str4d deleted the wnaf-reduce-allocations branch on Sep 8, 2022 at 11:31.