Investigating performance #41
Hi Dario,
I made a mistake and there are actually 124 module entries per iteration :| Could that explain the slowdown? I have confirmed that the target is NOT restarting every iteration, so we should be OK on that front.
Can you paste your command line (feel free to censor anything target-specific)?
Still want me to run litecov?
When running litecov, I get a bunch of "Target function returned normally" messages. Here is the output when I set the number of iterations to 100.
Cool, thanks! I don't see anything out of the ordinary in the command line. Does the performance from litecov match what you get from fuzzing? (At ~10 iterations per second, the 100 iterations above should take about 10 seconds to complete.) Just to make sure it's not a problem on the fuzzer side. 124 entries per iteration would be noticeable, but I'm still thinking even with that the performance drop seems excessive. Can you rerun litecov with
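The sanity check above is simple arithmetic; a minimal sketch (the figures are the ones from this thread, nothing here is a Jackalope API):

```python
def expected_runtime_s(iterations: int, execs_per_second: float) -> float:
    """Rough wall-clock estimate for a litecov or fuzzing run."""
    return iterations / execs_per_second

# ~10 execs/s was the rate observed while fuzzing in this thread.
print(expected_runtime_s(100, 10))  # -> 10.0 seconds for the 100-iteration run
```

If the litecov run finishes much faster than this estimate, the slowdown is on the fuzzer side rather than in the instrumentation.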
I see a big difference when running with litecov. I am testing with 10,000 iterations for both the fuzzer and litecov. I get 870 executions per second with litecov.
Looking at the jackalope logs, it seems that it doesn't always complete the expected number of iterations. When configuring it for 1000 iterations, I rarely see it do that many. I am using the "process exit" log line to judge when it is restarting the process, and I count each "persistence method ended" as an iteration. I am using "trace_debug_events" but I can't see why the process exits. No target crash is logged.
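The iteration counting described above can be automated; a hypothetical sketch that counts "persistence method ended" lines between "process exit" markers (the log line text is taken from this comment; the exact Jackalope log format may differ):

```python
def iterations_per_run(log_lines):
    """Count 'persistence method ended' lines between process exits."""
    runs, count = [], 0
    for line in log_lines:
        if "persistence method ended" in line:
            count += 1
        elif "process exit" in line:
            runs.append(count)  # one target lifetime completed
            count = 0
    if count:
        runs.append(count)  # trailing run with no exit seen yet
    return runs

log = (["persistence method ended"] * 3 + ["process exit"]
       + ["persistence method ended"] * 2)
print(iterations_per_run(log))  # -> [3, 2]
```

Runs that end well short of the configured iteration count would then stand out immediately.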
Ok cool, we are getting somewhere! Note that Jackalope sometimes restarts the target for reasons other than crashes and hangs. Specifically, it's going to restart the target when new coverage is detected (in order to make sure the new coverage it saw is sample-dependent). This overhead goes away as the corpus becomes more stable over time, but you can still disable it via clean_target_on_coverage=0. If that doesn't work, could you paste a section of Jackalope's output? Above you mentioned "Only 5 module entries". Does this mean you are getting a different number of entries when fuzzing vs running under litecov? (This would mean mutated samples are somehow causing additional entries?)
Ok, I will try with clean_target_on_coverage=0. Yes, I am getting a different number of module entries when fuzzing, and that number is not constant. When fuzzing with winafl, I would get ~80-90% stability. Is that related?
With clean_target_on_coverage=0, I always get 1000 iterations before a process exit, so things seem fine on that front.
Did that help with the performance?
Here's the output from a run with clean_target_on_coverage=0
When resuming a session, does jackalope need to re-run all the samples first?
Thanks!
Oh, I guess in your case extra iterations could be caused by the minimizer. Since your samples are pretty large, it can take the minimizer some time to handle them (if they can be minimized to a small size). You can disable the minimizer via
One other performance idea: does your harness support sample delivery via shared memory, or does it need the sample to be a file on disk? See https://github.com/googleprojectzero/Jackalope/blob/main/test.cpp for a shared-memory target example. I'm wondering if file IO is to blame for at least some of the performance loss in fuzzing vs litecov.
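For readers unfamiliar with the shared-memory delivery idea: test.cpp uses the Windows shared-memory API, but the concept can be sketched portably. This is an illustrative Python analogue, not Jackalope code:

```python
# Sketch: deliver a sample via shared memory instead of a file on disk.
# multiprocessing.shared_memory is a portable analogue of the
# CreateFileMapping/MapViewOfFile pattern used in Jackalope's test.cpp.
from multiprocessing import shared_memory

sample = b"A" * (60 * 1024)  # ~60 KB, like the inputs in this thread

# Fuzzer side: write the sample into a named shared-memory region.
shm = shared_memory.SharedMemory(create=True, size=len(sample))
shm.buf[:len(sample)] = sample

# Target side: attach by name and read the sample with no file IO at all.
target_view = shared_memory.SharedMemory(name=shm.name)
received = bytes(target_view.buf[:len(sample)])
print(received == sample)  # -> True

target_view.close()
shm.close()
shm.unlink()
```

The point of the sketch is simply that the target reads the mutated sample straight out of memory, so the per-iteration cost of creating, writing, and reopening a 50-70 KB file disappears.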
Do you know what could cause this? Is my harness using I/O and slowing down the fuzzing while doing so?
Hmm, strange. I just grepped the Jackalope/TinyInst source and I don't see where the empty warning could come from (there are only several places where "Warning" appears in titlecase, and nowhere is it empty). Is it printed by the harness/target perhaps? For the sample that runs at ~21 execs/s in litecov,
Yes it is printed by the target. Sorry about that! Does jackalope support an option to suppress the target's output perhaps?
Output suppression: there's code for it, but it hasn't been enabled/tested properly. You can try switching https://github.com/googleprojectzero/TinyInst/blob/master/Windows/debugger.cpp#L1829 to true. Does
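As a general workaround while that code path is untested, a noisy target's output can also be suppressed at the process level when launching it by hand; a minimal sketch of the standard technique (this is not a Jackalope feature):

```python
import subprocess
import sys

# Run a chatty child process with its stdout/stderr discarded, so the
# target's own print statements never reach the console.
result = subprocess.run(
    [sys.executable, "-c", "print('Warning:')"],  # stand-in for the noisy target
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print(result.returncode)  # -> 0, and nothing from the child is shown
```

The same redirection idea (handing the child a null handle for stdout/stderr) is what the untested debugger.cpp code path would do on the Windows side.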
Thanks, I'll try that out.
Hmm, I suspect the low performance is a combination of the number of entrypoints and slow file IO, but again, without actually looking at the target it's difficult to tell and suggest fixes...
Sent you more info via email.
Hello,
I am trying out Jackalope and seeing a big performance difference compared to winafl with the same harness: ~250 execs/s with winafl vs ~10-12/s with jackalope.
I saw that "module entries" can have a big performance impact in tinyinst. I see ~14 module entries per iteration. Is that a lot?
If not, any tips on how to figure out the cause of this big difference?
My input files are 50-70 KB.
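One way to sanity-check whether file IO contributes to the gap is to time repeated writes of a sample-sized file outside the fuzzer; a rough sketch (60 KB matches the input sizes mentioned above; the numbers are machine-dependent, and this is not a Jackalope tool):

```python
import os
import tempfile
import time

sample = os.urandom(60 * 1024)  # ~60 KB, matching the inputs described above
iterations = 200

path = os.path.join(tempfile.gettempdir(), "io_probe.bin")
start = time.perf_counter()
for _ in range(iterations):
    with open(path, "wb") as f:
        f.write(sample)  # the per-iteration cost of file-based sample delivery
elapsed = time.perf_counter() - start
os.remove(path)

print(f"~{iterations / elapsed:.0f} sample writes/s")
```

If the measured write rate is far above the ~250 execs/s winafl achieves, plain file writes alone can't explain the gap; if it's in the same ballpark, file-based delivery is a plausible bottleneck.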