
Multiproof optimization: Move squared norm equality check to a separate FLP #53

Open
cjpatton opened this issue Jan 14, 2024 · 1 comment

cjpatton commented Jan 14, 2024

The FLP circuit evaluation is the most expensive part of PINE. Since we might generate multiple proofs, we want to de-duplicate any redundant computation that we can.

Our current FLP circuit is composed of three sub-circuits, which produce five intermediate outputs:

        # Bit checks: consumes the joint randomness r_bit_check.
        bit_checks_result = self.eval_bit_checks(r_bit_check,
                                                 bit_checked,
                                                 shares_inv)

        # Wraparound checks: wr_checks_result consumes the joint randomness
        # r_wr_check; wr_success_count_check_result does not.
        (wr_checks_result, wr_success_count_check_result) = \
            self.eval_wr_checks(r_wr_check,
                                wr_check_v_bits,
                                wr_check_g,
                                wr_check_results,
                                shares_inv)

        # Norm checks: neither output depends on the joint randomness.
        (sq_norm_equality_check_result, sq_norm_range_check_result) = \
            self.eval_norm_check(encoded_gradient,
                                 sq_norm_v_bits,
                                 sq_norm_u_bits,
                                 shares_inv)

Consider the computation of each intermediate output:

  • bit_checks_result
  • wr_checks_result
  • wr_success_count_check_result
  • sq_norm_equality_check_result
  • sq_norm_range_check_result

Observation 1: We can't avoid repeating any computation that depends on the joint randomness, as the joint randomness needs to be fresh for each proof. Only bit_checks_result and wr_checks_result depend on the joint randomness; the others need only be computed once.
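
To illustrate, here is a minimal sketch (toy helper and parameter names, not the actual PINE/VDAF code) of why the joint randomness can't be shared: it is derived freshly for each proof, so any circuit output that consumes it has to be re-evaluated per proof.

    # Sketch only: toy stand-ins, just to illustrate that joint randomness is
    # derived per proof and therefore cannot be shared across proofs.
    import hashlib

    def derive_joint_rand(seed: bytes, nonce: bytes, proof_index: int) -> bytes:
        # Toy derivation; the actual VDAF expands an XOF so that each proof
        # gets its own fresh joint randomness.
        return hashlib.sha256(seed + nonce + bytes([proof_index])).digest()

    NUM_PROOFS = 2
    joint_rands = [derive_joint_rand(b"seed", b"nonce", i)
                   for i in range(NUM_PROOFS)]

    # Each proof sees different joint randomness, so bit_checks_result and
    # wr_checks_result must be re-evaluated per proof; joint-randomness-free
    # outputs could in principle be evaluated once.
    assert joint_rands[0] != joint_rands[1]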

Observation 2: Re-computing an intermediate output is only expensive if the computation is non-linear, as this requires calling the gadget, which increases the size of the proof and verifier. Linear computations do not have this side-effect. Only bit_checks_result, wr_checks_result, and sq_norm_equality_check_result are non-linear.
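
For a rough sense of the cost, here is a minimal sketch of the squared norm equality check (plain modular arithmetic standing in for field and gadget operations; the modulus and function name are illustrative only). Each squaring is a non-linear operation, i.e. one gadget call in the FLP, so the number of gadget calls grows with the gradient dimension, while recombining the claimed norm from its bits is linear.

    # Sketch only: plain modular arithmetic standing in for field/gadget
    # operations; the modulus and names are illustrative.
    MODULUS = 2**31 - 1

    def sq_norm_equality_check(encoded_gradient, sq_norm_bits):
        """Return 0 iff the squared norm of the gradient equals the value
        encoded (little-endian) by sq_norm_bits."""
        sq_norm = 0
        for x in encoded_gradient:
            # One non-linear operation (a gadget call in the FLP) per entry,
            # so the cost scales with the gradient dimension.
            sq_norm = (sq_norm + x * x) % MODULUS
        # Recombining the bits and subtracting is linear, hence cheap.
        claimed = sum(b << i for i, b in enumerate(sq_norm_bits)) % MODULUS
        return (sq_norm - claimed) % MODULUS

    # Example: gradient (1, 2, 3, 4) has squared norm 30 = 0b11110.
    assert sq_norm_equality_check([1, 2, 3, 4], [0, 1, 1, 1, 1]) == 0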

Idea: We can avoid re-computing sq_norm_equality_check_result since it does not depend on the joint randomness. Doing so is worthwhile because this check is non-linear (and particularly expensive, since its number of gadget calls depends on the dimension).

Concretely, we can compute the squared norm equality check with a separate FLP that is evaluated once and does not take joint randomness. During VDAF preparation, the Aggregators would need to verify two FLPs: one for sq_norm_equality_check_result and another that computes the remaining intermediate outputs and performs the reduction.
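
As a rough sketch of the preparation-side structure (hypothetical names; only the Flp.decide interface is assumed), the Aggregators would run two decision steps:

    # Sketch only: names are hypothetical. `norm_flp` is the new FLP that
    # checks sq_norm_equality_check_result once with no joint randomness;
    # `main_flp` is the existing circuit minus that check, proved once per
    # proof.
    def decide_both(norm_flp, main_flp, norm_verifier, main_verifiers):
        # A single verifier suffices for the squared norm equality check,
        # since its circuit takes no joint randomness.
        if not norm_flp.decide(norm_verifier):
            return False
        # One verifier per proof for the bit checks, wraparound checks,
        # squared norm range check, and the final reduction.
        return all(main_flp.decide(verifier) for verifier in main_verifiers)

The trade-off, presumably, is encoding one extra (smaller) proof in the report in exchange for not repeating the dimension-dependent gadget calls across proofs.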

@cjpatton

Question (raised by @junyechen1996 on the 2024/1/16 sync): Is the soundness error for the squared norm FLP good enough if we don't do multiproof? Perhaps, since we don't lose any soundness in this part of the circuit and it is not vulnerable to pre-computation attacks. More analysis is needed.

@junyechen1996 self-assigned this Apr 9, 2024