
Optimized Reed-Solomon code recovery based on danksharding #132

Merged (6 commits) on Nov 21, 2022

Conversation

@qizhou (Contributor) commented Oct 24, 2022

The major optimization is that, given a sample size of 16, we can reduce the recovery problem of size 8192 to 16 problems of size 512 (= 8192/16) with proper pre-processing. This means that costs such as computing Z(x) and Q2(x) can be amortized over the 16 size-512 recoveries.
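The coset structure behind this reduction can be sketched as follows. This is a toy illustration with my own small parameters (N = 32, S = 4, a 257-element field), not the PR's code or API: with sample size S over a size-N domain, a missing sample occupies the indices {j + (N/S)·t}, i.e. the coset w^j · H of the order-S subgroup H, and its vanishing polynomial collapses to x^S − w^(S·j), so the combined Z(x) over all missing samples is a polynomial in x^S — a size-(N/S) problem rather than a size-N one.

```python
# Toy parameters (mine, for illustration): 8192 -> 32, 16 -> 4.
P = 257                       # small prime field for the demo
N = 32                        # evaluation domain size (8192 in the PR)
S = 4                         # sample size (16 in the PR)
W = pow(3, (P - 1) // N, P)   # primitive N-th root of unity (3 is a
                              # primitive root mod 257)

def poly_mul(a, b):
    """Multiply polynomials (coefficients in ascending order) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for k, bk in enumerate(b):
            out[i + k] = (out[i + k] + ai * bk) % P
    return out

def vanishing_naive(points):
    """prod (x - p) over the given points, the size-N way."""
    z = [1]
    for pt in points:
        z = poly_mul(z, [(-pt) % P, 1])
    return z

j = 5                                                   # a missing sample
coset = [pow(W, j + (N // S) * t, P) for t in range(S)]  # its domain points

# Closed form x^S - w^(S*j) matches the naive product over the coset,
# so Z(x) only needs one factor (in x^S) per missing sample.
closed_form = [(-pow(W, S * j, P)) % P] + [0] * (S - 1) + [1]
assert vanishing_naive(coset) == closed_form
```

The assertion checks that both forms agree coefficient by coefficient; the closed form is what makes the per-sample cost independent of the sample's S individual points.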

Some benchmark numbers on my MacBook (8192 recovery):

  • Original: 1.07s
  • Optimized zpoly: 0.500s
  • Optimized RS: 0.4019s

Review thread on mimc_stark/test_recovery.py (resolved)
@dankrad (Collaborator) commented Nov 6, 2022

Shall we make a new folder for this stuff? mimc_stark does not seem appropriate. (There is also polynomial_reconstruction which is my own implementation of this stuff, so putting your version somewhere there would make sense)

@qizhou (Contributor, Author) commented Nov 7, 2022

> Shall we make a new folder for this stuff? mimc_stark does not seem appropriate. (There is also polynomial_reconstruction which is my own implementation of this stuff, so putting your version somewhere there would make sense)

Great suggestion! I will migrate the code there. Thanks!

@qizhou (Contributor, Author) commented Nov 7, 2022

> Shall we make a new folder for this stuff? mimc_stark does not seem appropriate. (There is also polynomial_reconstruction which is my own implementation of this stuff, so putting your version somewhere there would make sense)

I have moved the code to polynomial_reconstruction, based on the reconstruct_polynomial_from_samples() function. The performance numbers for 8192 data points with sample size 16 are now (on my MacBook):

  • Original: total 1.4794s with 0.79s for zpoly
  • Optimized zpoly: total 0.7755s with 0.06s for zpoly
  • Optimized recovery: total 0.3202s with 0.04s for preparation of zpoly/shifted_zpoly/etc.
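The drop in zpoly time reflects the amortization, which can be sketched like this. Again a toy illustration with my own small parameters, not the repo's actual helpers: since each missing sample j contributes the factor x^S − w^(S·j), Z(x) = z(x^S) for a small polynomial z, so its values on the full size-N domain repeat with period N/S, and one pass over the size-(N/S) subdomain yields all N evaluations.

```python
# Toy parameters (mine): N = 32 for 8192, S = 4 for sample size 16.
P = 257
N, S = 32, 4
W = pow(3, (P - 1) // N, P)   # primitive N-th root of unity

def poly_mul(a, b):
    """Multiply polynomials (ascending coefficients) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for k, bk in enumerate(b):
            out[i + k] = (out[i + k] + ai * bk) % P
    return out

def poly_eval(coeffs, x):
    """Horner evaluation mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

missing = [1, 5, 6]           # indices of missing samples (hypothetical)
z = [1]                       # z(y) = prod (y - w^(S*j)), one factor per sample
for j in missing:
    z = poly_mul(z, [(-pow(W, S * j, P)) % P, 1])

V = pow(W, S, P)              # generator of the order-(N/S) subdomain
small = [poly_eval(z, pow(V, i, P)) for i in range(N // S)]
full = [poly_eval(z, pow(W, S * i, P)) for i in range(N)]   # Z(w^i) = z(w^(S*i))
assert full == small * S      # the size-N table is S copies of the small one
```

So one size-(N/S) evaluation of z serves all S cosets, which is why the Z(x)/shifted-Z preparation shrinks to a small fraction of the total.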

Please let me know if you have any questions. Thank you!

@dankrad dankrad merged commit a84fab2 into ethereum:master Nov 21, 2022