
Carry along an updateable universal_prover during synthesis and execution #2145

Open · wants to merge 13 commits into base: reduce_pk_size_2
Conversation

@vicsn (Contributor) commented Nov 2, 2023

Motivation

Howard noticed that in the previous PR certain tests would end up loading the SRS twice: first during synthesis and then again during execution. To avoid this, this PR proposes carrying along the trimmed SRS (`UniversalProver::committer_key`) and updating it as needed.

Notes:

  • trim(...) is renamed to update(...), reflecting that we can shrink or grow the trimmed SRS as needed. The current implementation of update is not atomic: if it fails halfway through, we may be left with an inconsistent UniversalProver. It seems acceptable to me to always require a successful update before creating a new proof, though this might not play nicely with parallelization.
  • To avoid generating the UniversalProver every time we synthesize proving keys and every time we prove an execution, I turned the UniversalProver into a global object in Process. Individual threads can acquire a write lock to use it for their specific proof. Depending on the context, this might incur a significant slowdown if we want to synthesize and prove in parallel. However, I reckon parallel proving within a single Process is not a use case we aim to support at the moment anyway.
  • We take one power of two less for the SRS; the extra power turned out to be superfluous.
  • For a large circuit (say, 2^25), reduce_pk_size_{2,3} reduce proving time by ~8% on a 16 CPU, 128GB RAM machine.

Test Plan

I manually ran test_multiple_deployments_and_multiple_executions to check that we load the expected number of powers.

Use a naïve write lock to acquire universal_prover for the entire duration of proving.
We previously requested max_degree + 1. Given that max_degree was rounded up to the nearest power of two, we would subsequently load an entire extra power of two. With a few minor tweaks, max_degree alone gives us enough powers.
```rust
let lowest_shift_degree = max_degree - new_highest_degree_bound;
let highest_shift_degree = max_degree + 1 - highest_degree_bound;
let mut new_powers = srs.powers_of_beta_g(lowest_shift_degree, highest_shift_degree)?;
new_powers.append(&mut shifted_powers_of_beta_g);
```
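To illustrate the off-by-one described above, here is a tiny self-contained check, assuming the SRS loader rounds the requested degree up to the next power of two (the rounding itself is an assumption for illustration):

```rust
fn main() {
    let max_degree = 1usize << 20;

    // Old behavior: request max_degree + 1, which sits just past the
    // power-of-two boundary and rounds up to 2^21 — doubling the powers loaded.
    assert_eq!((max_degree + 1).next_power_of_two(), 1 << 21);

    // New behavior: request max_degree, which is already a power of two,
    // so no extra power is loaded.
    assert_eq!(max_degree.next_power_of_two(), 1 << 20);
}
```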
@ljedrz (Contributor) commented Nov 3, 2023

Suggested change:

```diff
-new_powers.append(&mut shifted_powers_of_beta_g);
+new_powers.extend_from_slice(&shifted_powers_of_beta_g);
```

since we overwrite shifted_powers_of_beta_g right afterwards, there is no need to clear that vector; unless those elements are moved, in which case feel free to ignore this suggestion

vicsn (Author):

My rationale for using append was indeed that it moves the elements, so I figured it would be faster and, of course, consume less memory.

ljedrz (Contributor):

as long as the elements are not Copy, this is correct 👍

vicsn (Author):

A good point. I just read that moves may be copies under the hood anyway, so here you go: f52cbd3

ljedrz (Contributor):

nice; also, a small addition to what I said before: even if the values were not Copy, append would not consume less memory, because the vector would still retain its allocated space unless shrink_to_fit was called afterwards
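The append vs. extend_from_slice behavior discussed in this thread can be checked with a small self-contained sketch (u64 elements here are illustrative stand-ins for the group elements):

```rust
fn main() {
    // `append` moves the elements out of `src`, leaving it empty — but `src`
    // retains its allocated capacity unless shrink_to_fit is called.
    let mut dst = vec![1u64, 2];
    let mut src = vec![3u64, 4];
    dst.append(&mut src);
    assert_eq!(dst, vec![1, 2, 3, 4]);
    assert!(src.is_empty());
    assert!(src.capacity() >= 2); // memory is not released by append alone

    // `extend_from_slice` clones the elements; the source keeps its contents.
    let mut dst2 = vec![1u64, 2];
    let src2 = vec![3u64, 4];
    dst2.extend_from_slice(&src2);
    assert_eq!(dst2, vec![1, 2, 3, 4]);
    assert_eq!(src2, vec![3, 4]);
}
```

This matches the observation above: even when elements are moved, append does not return the source vector's allocation, so the memory saving is limited to avoiding clones of non-Copy elements.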

@ljedrz (Contributor) left a review:
Reviewed 👌.
