Use Bn256/Grumpkin curves as a default setting #1039

Merged: 27 commits merged into main on Jan 26, 2024
Conversation

@storojs72 (Member) commented Jan 11, 2024

Partially fixes #1018

I think the PR is finally ready for review.

LurkField is switched to Bn256 in the majority of the unit tests, where it could be useful for the solidity-verifier to produce test vectors. The field has not been changed in the benches and examples, since those are more client-facing programs that people may already rely on; that is left for a future PR (or PRs).
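
For illustration, here is a minimal sketch of the kind of default-field switch described above; the concrete type paths and aliases are assumptions, not copied from the diff:

```rust
// Hedged sketch: the exact aliases used in Lurk's tests are assumptions.
// Previously, field-generic tests were instantiated with the Pallas scalar field:
// type TestField = pasta_curves::pallas::Scalar;
// After this PR, they default to the BN256 scalar field of the Bn256/Grumpkin cycle:
use halo2curves::bn256::Fr as TestField;

fn main() {
    // Any field-generic test helper is now driven by the BN256 scalar field.
    let x = TestField::from(42u64);
    println!("{x:?}");
}
```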

Special thanks to @gabriel-barrett !

@storojs72 changed the title from "[WIP] Use Bn256/Grumpkin curves as a default setting" to "Use Bn256/Grumpkin curves as a default setting" on Jan 24, 2024
@huitseeker (Member) commented:

!gpu-benchmark

Contributor commented:

Benchmark for 8046da0

| Test | Base | PR | % |
| --- | --- | --- | --- |
| LEM Fibonacci Prove - rc = 100/fib/num-100 | 4.6±0.01s | 4.5±0.05s | -2.17% |
| LEM Fibonacci Prove - rc = 100/fib/num-200 | 9.1±0.05s | 9.0±0.03s | -1.10% |
| LEM Fibonacci Prove - rc = 600/fib/num-100 | 4.0±0.08s | 3.9±0.08s | -2.50% |
| LEM Fibonacci Prove - rc = 600/fib/num-200 | 6.8±0.11s | 6.5±0.10s | -4.41% |

@huitseeker (Member) left a comment:


Thanks a lot! I think this is a great improvement, but I would like two issues opened before we merge this, just to track this milestone relative to future work:

  1. a list of the pending items that still need to be converted from Pasta to Grumpkin,
  2. a specific issue to shift us from our PastaEngine (which implicitly comes with IPA on each side) to Bn256EngineKZG (which has MLKZG on Bn256 and IPA on Grumpkin), and to bench proof compression specifically for that PR (we don't benchmark it in general, just recursion).

Indeed, we've checked that the Grumpkin cycle performs strictly better if we also switch the PCS, but that may not hold if we stay on IPA. Does that sound fair?

@storojs72 (Member, Author) commented:

@huitseeker , I have created an issue for the first item (#1078), but could you elaborate more on the second one?

@huitseeker (Member) commented Jan 25, 2024

@storojs72 we have an issue (#1018) for switching fields (Pasta to Grumpkin). I would like us to be conscious about switching PCSes (IPA to MLKZG), which in Nova terms means switching from:
https://github.com/lurk-lab/arecibo/blob/5da330e6ed2c39679f2cf591edf25543d7a0399b/src/provider/mod.rs#L39-L40
to:
https://github.com/lurk-lab/arecibo/blob/5da330e6ed2c39679f2cf591edf25543d7a0399b/src/provider/mod.rs#L79-L81

and that might mean merging this PR, but tracking the Engine switch in an issue, because I expect that switch to change the following code:

```rust
impl CurveCycleEquipped for Bn256Scalar {
    type EE1 = nova::provider::ipa_pc::EvaluationEngine<Self::E1>;
    type EE2 = nova::provider::ipa_pc::EvaluationEngine<Self::E2>;
    type E1 = Bn256Engine;
    type E2 = GrumpkinEngine;
}
```
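
For reference, a hedged sketch of how that impl might look after the Engine/PCS switch; the KZG evaluation-engine module path is an assumption rather than something taken from the Arecibo API at this commit:

```rust
// Hypothetical post-switch configuration (sketch only, paths assumed):
impl CurveCycleEquipped for Bn256Scalar {
    // MLKZG-based evaluation engine on the Bn256 side (assumed module path).
    type EE1 = nova::provider::mlkzg::EvaluationEngine<Self::E1>;
    // IPA remains on the Grumpkin side of the cycle.
    type EE2 = nova::provider::ipa_pc::EvaluationEngine<Self::E2>;
    type E1 = Bn256EngineKZG;
    type E2 = GrumpkinEngine;
}
```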

@storojs72 (Member, Author) commented:

@huitseeker, I have created an issue for point 2: #1079

@arthurpaulino (Member) commented:

Why do the benchmarks show an improvement if no performance-sensitive part was touched (or was it?)?

@huitseeker (Member) commented:

@arthurpaulino all our computations, including benches, depend on the speed of the implementation of the underlying field arithmetic. This is what changes here.

@huitseeker added this pull request to the merge queue on Jan 26, 2024
@arthurpaulino (Member) commented:

@huitseeker but the benches aren't changed, so I believe they are still using the Pasta curves.

@huitseeker (Member) commented:

@arthurpaulino ah, yes, I forgot (#1078 / #1079 should address that)!
It seems this underlying Arecibo commit intervened (see the bench at the bottom): lurk-lab/arecibo@5da330e#comments
I'm fascinated by how much of an impact it has (and from which change?!), in case @winston-h-zhang, @samuelburnham, or @porcuquine have an idea.

@huitseeker (Member) commented:

@arthurpaulino the bench comment at 8b4c10e shows no perf improvement, which confirms that the source of the bench result above is Arecibo.

Merged via the queue into main with commit 8b4c10e Jan 26, 2024
11 checks passed
@huitseeker deleted the issue-1018 branch on January 26, 2024 at 14:56
Merging this pull request may close: [LUR-22] Replace pallas::Scalar as the default test field with BN256