The FLP circuit evaluation is the most expensive part of PINE. Since we might generate multiple proofs, we want to de-duplicate any redundant computation that we can.
Our current FLP circuit is composed of three sub-circuits, which produce five intermediate outputs:

- `bit_checks_result`
- `wr_checks_result`
- `wr_success_count_check_result`
- `sq_norm_equality_check_result`
- `sq_norm_range_check_result`

Consider the computation of each intermediate output:
Observation 1: We can't avoid repeating any computation that depends on the joint randomness, as the joint randomness needs to be fresh for each proof. Only `bit_checks_result` and `wr_checks_result` depend on the joint randomness; the others need only be computed once.
Observation 2: Re-computing an intermediate output is only expensive if the computation is non-linear, as this requires calling the gadget, which increases the size of the proof and verifier. Linear computations do not have this side-effect. Only `bit_checks_result`, `wr_checks_result`, and `sq_norm_equality_check_result` are non-linear.
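To make the cost asymmetry concrete, here is a toy cost model (not the PINE codebase; all names are illustrative) in which only non-linear operations count as gadget calls. A squared-norm check makes one gadget call per coordinate, while a linear reduction makes none:

```python
# Toy cost model: in an FLP, only gadget (non-linear) calls grow the
# proof and verifier; linear combinations are free in this sense.
class GadgetCounter:
    def __init__(self):
        self.calls = 0

    def mul(self, a, b):
        # Non-linear operation: one gadget call.
        self.calls += 1
        return a * b

def sq_norm(gadget, xs):
    # Non-linear: makes len(xs) gadget calls, so the cost scales
    # with the dimension of the measurement.
    return sum(gadget.mul(x, x) for x in xs)

def linear_reduce(coeffs, vals):
    # Linear: no gadget calls, so re-computing this per proof is cheap.
    return sum(c * v for c, v in zip(coeffs, vals))

gadget = GadgetCounter()
result = sq_norm(gadget, [1, 2, 3])   # 1 + 4 + 9 = 14
print(result, gadget.calls)           # 3 gadget calls for dimension 3
```

Under this model, re-running the squared-norm check for each of `k` proofs would multiply its dimension-many gadget calls by `k`, whereas re-running a linear reduction costs nothing in proof size.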
Idea: We can avoid re-computing `sq_norm_equality_check_result` since it does not depend on the joint randomness. Plus, doing so is worth it because it is non-linear (and particularly expensive because the number of gadget calls depends on the dimension).
Concretely, we can compute the squared norm check with a separate FLP that is evaluated once and does not take joint randomness. During VDAF preparation, the Aggregators would need to verify two FLPs: one for `sq_norm_equality_check_result`, and another that computes the remaining intermediate outputs and does the final reduction.
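The proof-generation flow described above might be sketched as follows. This is a hypothetical outline, not the PINE implementation; the prover functions are stand-in stubs, and the real FLPs would operate over a finite field:

```python
import secrets

# --- Hypothetical stubs standing in for the two real FLP provers. ---
def prove_sq_norm_equality(measurement):
    # Stand-in for the joint-randomness-free squared-norm FLP.
    return sum(x * x for x in measurement)

def prove_main_circuit(measurement, joint_rand):
    # Stand-in for the FLP covering the remaining checks, which
    # consumes joint randomness.
    return sum(x * r for x, r in zip(measurement, joint_rand))

def prove_multi(measurement, num_proofs):
    # Evaluated ONCE: the squared-norm equality check takes no joint
    # randomness, so a single proof serves every verification round.
    sq_norm_proof = prove_sq_norm_equality(measurement)

    # Evaluated PER PROOF: checks that depend on joint randomness must
    # be re-proven with fresh randomness each time.
    main_proofs = []
    for _ in range(num_proofs):
        joint_rand = [secrets.randbelow(2**64) for _ in measurement]
        main_proofs.append(prove_main_circuit(measurement, joint_rand))

    return sq_norm_proof, main_proofs
```

The point of the split is visible in the loop: only the randomness-dependent circuit is inside it, so the dimension-dependent squared-norm work is paid once regardless of `num_proofs`.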
Question (raised by @junyechen1996 on 2024/1/16 sync): Is the soundness error for the squared norm FLP good enough if we don't do multiproof? Perhaps, since we don't lose any soundness in the circuit and this part is not vulnerable to pre-computation attacks. More analysis is needed.