
Replicating Daedaluzz Benchmarks #424

Open
cvick32 opened this issue Feb 1, 2024 · 2 comments

Comments

cvick32 commented Feb 1, 2024

Hey, I was wondering if there are any scripts for replicating the Daedaluzz and VeriSmart benchmark results. I've tried running ityfuzz on one of the Daedaluzz benchmarks, but I get an error saying there is nothing to fuzz. I've also added the Fuzzland bug() definition and placed it at some of the bug locations in the Daedaluzz benchmarks, but I get the same error.

Any ideas on what to do?

Thanks

shouc (Contributor) commented Feb 1, 2024

Running Daedaluzz should be straightforward; you can directly use their script: https://github.com/Consensys/daedaluzz/blob/master/run-campaigns.py

python3 run-campaigns.py --fuzzer-name ityfuzz
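For context, a minimal end-to-end sketch of the steps above (cloning the Daedaluzz repository and invoking its campaign script against ityfuzz). The clone URL and the `--fuzzer-name` flag come from the comment above; the assumption here is simply that the script is run from the repository root with `python3` and ityfuzz on `PATH`:

```shell
# Hedged sketch: fetch Daedaluzz and drive ityfuzz via its campaign script.
# Assumes python3 and the ityfuzz binary are already installed and on PATH.
git clone https://github.com/Consensys/daedaluzz.git
cd daedaluzz

# --fuzzer-name selects which fuzzer the campaign script runs (ityfuzz here),
# as shown in the command from the comment above.
python3 run-campaigns.py --fuzzer-name ityfuzz
```

This avoids hand-placing a bug() oracle in the benchmark contracts, since the script sets up the campaigns the way the Daedaluzz authors intended.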

cvick32 (Author) commented Feb 5, 2024

Cool, thanks so much! Is there a similar script for running the VeriSmart benchmarks?
