MultiExp bench seems a bit "optimistic" #205

Closed
0x0ece opened this issue Jun 3, 2022 · 4 comments · Fixed by #206 or #237

Comments

@0x0ece

0x0ece commented Jun 3, 2022

See this line:
https://github.com/ConsenSys/gnark-crypto/blob/master/ecc/bls12-377/multiexp_test.go#L209

All points are equal, so the MSM algorithm ends up doing a ton of doublings, which in extended Jacobian coordinates (ExtJac) are considerably faster than additions.
As a result, the numbers you get from running go test -bench are extremely optimistic.

I'd recommend generating a somewhat random set of points.
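For concreteness, here is roughly what I have in mind. This is just a sketch using the public gnark-crypto API as I understand it (Generators, G1Jac.ScalarMultiplication, G1Affine.FromJacobian); the actual benchmark helpers may differ, and this approach is slow for large sizes.

```go
package main

import (
	"crypto/rand"
	"fmt"

	bls12377 "github.com/consensys/gnark-crypto/ecc/bls12-377"
	"github.com/consensys/gnark-crypto/ecc/bls12-377/fr"
)

// randomBases returns n pseudo-random G1 points by multiplying the generator
// with independent random scalars, removing the "all bases equal" bias from
// the benchmark input.
func randomBases(n int) []bls12377.G1Affine {
	g1Gen, _, _, _ := bls12377.Generators()
	bases := make([]bls12377.G1Affine, n)
	for i := range bases {
		s, err := rand.Int(rand.Reader, fr.Modulus()) // random scalar in [0, r)
		if err != nil {
			panic(err)
		}
		var p bls12377.G1Jac
		p.ScalarMultiplication(&g1Gen, s) // p = s * G
		bases[i].FromJacobian(&p)
	}
	return bases
}

func main() {
	fmt.Println(len(randomBases(4)))
}
```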

@gbotrel
Collaborator

gbotrel commented Jun 3, 2022

No, the MSM is not doing a ton of dbl; the ExtJac arithmetic happens on the buckets, not on the points. The buckets are filled depending on the scalars, which are "reasonably" mixed (not random, but still representative).

You can try the benchmarks yourself with randomized points; I doubt the result will vary (it doesn't on my machine for the reference 1 << 20 size).

edit: not randomizing may indeed trigger more doublings, for the first operation on each bucket index. That's a delta of roughly 80ns * len(buckets) * nbChunks, which may bias the result within a ~2% margin (eyeballing) for 1 << 20 points and c = 16.
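For what it's worth, that estimate worked out as a back-of-the-envelope sketch, assuming 2^(c-1) buckets per chunk and ceil(253/c) chunks for the ~253-bit BLS12-377 scalar field (not a measurement):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const c = 16           // window size
	const scalarBits = 253 // BLS12-377 scalar field bit size

	nbChunks := (scalarBits + c - 1) / c // 16 chunks
	nbBuckets := 1 << (c - 1)            // 32768 buckets per chunk
	perBucket := 80 * time.Nanosecond    // the ~80ns-per-bucket figure from above

	delta := time.Duration(nbChunks*nbBuckets) * perBucket
	fmt.Println(delta) // ~42ms
}
```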

@0x0ece
Author

0x0ece commented Jun 4, 2022

I did try: with random points it's almost 50% to 2x slower on my Mac. I ran both your parallel code and a single-threaded version of the same algorithm.

When you add to the buckets:

  • for the first point you add, the bucket is empty, so you're doing a bucket.Set(point)
  • for the second point you add to any given bucket, it's a dbl if all the points are the same

I couldn't test on AMD yet, and maybe the bias is only 2% like you say, but I thought it would be fairer to just use random points.
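To illustrate, here is a simplified sketch only: the real code works on an internal extended-Jacobian bucket type, so G1Jac and AddMixed are stand-ins here, and digit is a hypothetical helper returning the current chunk's c-bit window for scalar i.

```go
package msmsketch

import (
	bls12377 "github.com/consensys/gnark-crypto/ecc/bls12-377"
)

// accumulate shows the two cases described above for one chunk: an empty
// bucket is simply set, a non-empty bucket gets a mixed add, and that mixed
// add degenerates into a doubling whenever the incoming point equals the
// bucket content, which is every time if all the benchmark points are identical.
func accumulate(points []bls12377.G1Affine, digit func(i int) int, nbBuckets int) []bls12377.G1Jac {
	buckets := make([]bls12377.G1Jac, nbBuckets)
	for i := range points {
		d := digit(i)
		if d == 0 {
			continue // zero digit: nothing to accumulate
		}
		if buckets[d-1].Z.IsZero() {
			// first point for this bucket: a plain set
			buckets[d-1].FromAffine(&points[i])
		} else {
			// subsequent points: add (or double, if equal to the bucket)
			buckets[d-1].AddMixed(&points[i])
		}
	}
	return buckets
}
```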

@gbotrel
Collaborator

gbotrel commented Jun 4, 2022

You're right, I stand corrected :-). On Mac (arm) the difference is ~40%, on x86 ~33%; definitely worth correcting that so we don't measure a best-case scenario.

Thanks for raising the issue 👍

@0x0ece
Author

0x0ece commented Jun 5, 2022

🙌

@gbotrel gbotrel linked a pull request Jun 6, 2022 that will close this issue
gbotrel added a commit that referenced this issue Jun 6, 2022
* test: fix #205 - msm bench with different bases

* style: factorize some benchmarking code

* test: faster bench data vector generation (not on curve)
@gbotrel gbotrel closed this as completed Jun 6, 2022
@gbotrel gbotrel mentioned this issue Aug 3, 2022
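The "not on curve" trick from the commit above is worth spelling out: the addition formulas evaluate the same way whether or not the inputs satisfy the curve equation, so benchmark bases can be produced with cheap field operations instead of one scalar multiplication per point. A rough sketch of the idea, with illustrative names rather than the helper that actually landed in #206:

```go
package msmsketch

import (
	bls12377 "github.com/consensys/gnark-crypto/ecc/bls12-377"
	"github.com/consensys/gnark-crypto/ecc/bls12-377/fp"
)

// fillBenchBases fills the slice with distinct pseudo-random affine
// coordinates. The points are (almost surely) not on the curve, which is
// fine for timing the bucket arithmetic, and generation costs only O(n)
// field additions instead of n scalar multiplications.
func fillBenchBases(samplePoints []bls12377.G1Affine) {
	if len(samplePoints) == 0 {
		return
	}
	var one fp.Element
	one.SetOne()
	samplePoints[0].X.SetRandom()
	samplePoints[0].Y.SetRandom()
	for i := 1; i < len(samplePoints); i++ {
		// cheap per-point variation: increment coordinates rather than re-sampling
		samplePoints[i].X.Add(&samplePoints[i-1].X, &one)
		samplePoints[i].Y.Sub(&samplePoints[i-1].Y, &one)
	}
}
```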