Problem
Right now we are using ark-groth16 as the native prover. It is written in Rust and simplifies a lot of things. At the same time, rapidsnark is more mature, and with recent optimizations for mobile it might be more performant.
We want to understand the performance difference better, understand where each prover makes sense, and possibly expose this as a feature (i.e. being able to switch between provers).
Details
See https://github.com/iden3/rapidsnark
Note that currently witness calculation (wasm) is the bottleneck, not the prover, so this issue isn't high on the priority list right now. It may become more relevant in the future, especially if prover time becomes the bottleneck and has a big impact on e.g. older Android devices.
Polygon Identity might also do some benchmarking of Keccak256 using their stack, so we get something roughly comparable (albeit perhaps on different devices).
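For illustration, here is a minimal sketch of what a switchable prover backend could look like behind a Cargo feature. All names below (the `Prover` trait, `ProofBytes`, the `rapidsnark` feature flag) are hypothetical and not part of the current codebase; a real integration would go through rapidsnark's C++ FFI.

```rust
// Hypothetical sketch of a backend-agnostic prover interface, so that app
// code does not care whether ark-groth16 or rapidsnark produces the proof.

/// Serialized Groth16 proof; a real implementation would use concrete types.
pub struct ProofBytes(pub Vec<u8>);

pub trait Prover {
    /// Takes a serialized witness and returns a serialized proof.
    fn prove(&self, witness: &[u8]) -> Result<ProofBytes, String>;
}

/// Default backend: ark-groth16, as used today.
pub struct ArkGroth16Prover;

impl Prover for ArkGroth16Prover {
    fn prove(&self, _witness: &[u8]) -> Result<ProofBytes, String> {
        // ... call into ark-groth16 here ...
        Ok(ProofBytes(Vec::new()))
    }
}

/// Optional backend, only compiled with `--features rapidsnark`.
#[cfg(feature = "rapidsnark")]
pub struct RapidsnarkProver;

#[cfg(feature = "rapidsnark")]
impl Prover for RapidsnarkProver {
    fn prove(&self, _witness: &[u8]) -> Result<ProofBytes, String> {
        // ... FFI call into the rapidsnark C++ library here ...
        Ok(ProofBytes(Vec::new()))
    }
}

/// Backend selection at build time; a runtime switch would also be possible.
#[cfg(feature = "rapidsnark")]
pub fn default_prover() -> Box<dyn Prover> {
    Box::new(RapidsnarkProver)
}

#[cfg(not(feature = "rapidsnark"))]
pub fn default_prover() -> Box<dyn Prover> {
    Box::new(ArkGroth16Prover)
}
```

A build-time feature keeps binary size unchanged for users who stick with ark-groth16; a runtime switch would instead mean shipping both backends.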
Acceptance criteria
Better data on the performance difference between rapidsnark and ark-groth16.
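To get comparable numbers, a simple timing harness along these lines could be run against both backends with the same circuit and witness on the same device (the `bench_prover` helper and the sleep placeholders below are purely illustrative, not an existing API):

```rust
use std::time::{Duration, Instant};

/// Run one full proof `runs` times and report min / median / max wall-clock
/// time. `prove` is any closure wrapping a single prover call, e.g. around
/// ark-groth16 or rapidsnark.
fn bench_prover<F: FnMut()>(label: &str, runs: usize, mut prove: F) {
    let mut samples: Vec<Duration> = Vec::with_capacity(runs);
    for _ in 0..runs {
        let start = Instant::now();
        prove();
        samples.push(start.elapsed());
    }
    samples.sort();
    println!(
        "{label}: min {:?}, median {:?}, max {:?} over {runs} runs",
        samples[0],
        samples[samples.len() / 2],
        samples[samples.len() - 1],
    );
}

fn main() {
    // Placeholder workloads; swap in real calls to each backend, and ideally
    // time witness generation and proving separately.
    bench_prover("ark-groth16", 5, || {
        std::thread::sleep(Duration::from_millis(10));
    });
    bench_prover("rapidsnark", 5, || {
        std::thread::sleep(Duration::from_millis(10));
    });
}
```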
oskarth changed the title from "Benchmark and compare with other prover" to "Benchmark and compare with rapidsnark prover" on Dec 5, 2023.
This may be relevant now that a circuit with 1.6-1.7M constraints takes ~20s of raw prover time (excluding witness generation). If that could be cut roughly in half, the additional complexity might be worth it.
It'd be useful to have some intuition for whether this is likely or not. Right now I'm about 50/50.
Still not high-prio compared to other perf-related issues.
That native witness generation is faster isn't surprising (and we'd expect this to be faster with circom-witness-rs, once mature enough). I'm a little bit surprised that arkworks is (marginally) faster than rapidsnark.