
fix timeout problem #1544

Closed · wants to merge 5 commits

Conversation

@Mem2019 (Contributor) commented Oct 6, 2022

No description provided.

@vanhauser-thc changed the base branch from stable to dev on October 6, 2022 at 06:14

@vanhauser-thc (Member)

First, please always direct PRs to the dev branch, not stable; I fixed that for you.

IMHO this overall impacts fuzzing performance negatively, although not to a large degree. If there is an outlier because of something happening on the system (e.g. plugging in hardware), a single execution can take much longer than it should.

The -t milliseconds command-line parameter is for cases where the target has long execution times.

Why do you think the -t option doesn't help for your target?

@vanhauser-thc (Member)

Ah sorry, it is too early and I didn't have a coffee yet ;-)
If the execution requires a fork(), then the timeout might not be enough; that is why you sent this PR ...

The thing is, a new fork() only has to be performed on a timeout, on a crash, or when the number of execs for the loop is reached. This is a very rare occurrence, and yes, a timeout can happen there. afl-fuzz knows when this is the case on most occasions (because of the first execution ever, a crash, or a timeout), and could then use max_us instead.

Could you add code to implement this?
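A minimal sketch of that idea in C (illustrative only; expect_new_fork, base_timeout_ms, fork_overhead_ms, and next_exec_timeout are hypothetical names, not AFL++ identifiers):

```c
/* Illustrative sketch only -- not AFL++ code. The idea: widen the timeout
 * only for executions that are known to involve a fresh fork(). */

#include <stdbool.h>
#include <stdint.h>

static uint32_t base_timeout_ms;   /* timeout derived from normal executions  */
static uint32_t fork_overhead_ms;  /* extra slack for runs that need a fork() */
static bool     expect_new_fork;   /* set after a crash, timeout, or loop end */

/* Pick the timeout for the next execution. */
static uint32_t next_exec_timeout(void) {

  uint32_t t = base_timeout_ms;
  if (expect_new_fork) t += fork_overhead_ms;  /* only the rare fork() runs pay extra */
  expect_new_fork = false;
  return t;

}
```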

@Mem2019 (Contributor, Author) commented Oct 6, 2022

Yes, -t can solve the problem without this PR, but if there is no -t, or -t ms+ is used, the problem occurs, because the timeout is then calculated from the execution times of the initial corpus.
The main problem I address here is that max_us is not actually the maximum execution time of the individual executions in the dry run of the initial corpus; instead, it is the maximum of the per-seed average execution times, which cannot reflect the time needed to execute the fork() run.
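To illustrate the distinction (illustrative C only; seed_avg_us, seed_max_us, and N_RUNS are hypothetical names, not AFL++ code): a per-seed average dilutes the single slow execution that includes the fork(), while a per-execution maximum would capture it.

```c
#include <stdint.h>

#define N_RUNS 8  /* hypothetical number of calibration runs per seed */

/* Per-seed average, as described above: one slow fork() run is diluted. */
static uint64_t seed_avg_us(const uint64_t exec_us[N_RUNS]) {
  uint64_t sum = 0;
  for (int i = 0; i < N_RUNS; i++) sum += exec_us[i];
  return sum / N_RUNS;
}

/* Per-execution maximum would reflect the fork() run directly. */
static uint64_t seed_max_us(const uint64_t exec_us[N_RUNS]) {
  uint64_t max = 0;
  for (int i = 0; i < N_RUNS; i++)
    if (exec_us[i] > max) max = exec_us[i];
  return max;
}
```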

@vanhauser-thc (Member)

Thinking about this, I don't think this is the right approach to the fix, as it has side effects.
How about measuring the time of the very first run, which requires the fork, and using that delta for known fork server launches (after timeouts and crashes)?
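A rough sketch of that measurement in C (illustrative only; now_us, run_one, and measure_fork_delta_us are hypothetical stand-ins, not AFL++ functions):

```c
#include <stdint.h>
#include <sys/time.h>

/* Current wall-clock time in microseconds. */
static uint64_t now_us(void) {
  struct timeval tv;
  gettimeofday(&tv, NULL);
  return (uint64_t)tv.tv_sec * 1000000 + tv.tv_usec;
}

/* run_one() stands in for a single target execution. */
extern void run_one(void);

/* Difference between the first run (which includes the fork) and a warm run;
 * this delta could then be added to the timeout after timeouts and crashes. */
static uint64_t measure_fork_delta_us(void) {
  uint64_t t0 = now_us();
  run_one();                       /* first run: includes the fork()      */
  uint64_t first_us = now_us() - t0;

  t0 = now_us();
  run_one();                       /* warm run: fork server already alive */
  uint64_t warm_us = now_us() - t0;

  return first_us > warm_us ? first_us - warm_us : 0;
}
```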

@Mem2019 (Contributor, Author) commented Oct 7, 2022

> Thinking about this, I don't think this is the right approach to the fix, as it has side effects. How about measuring the time of the very first run, which requires the fork, and using that delta for known fork server launches (after timeouts and crashes)?

That way of fixing it indeed makes more sense, but it also looks more complicated. I will try it later if I have time. Thanks.

@Mem2019 closed this Oct 7, 2022