Crashers in seed corpus? #168

Closed
alex opened this issue Dec 11, 2016 · 16 comments

alex (Contributor) commented Dec 11, 2016

I've got a case where there's a crasher in the seed corpus (because we're using another project's seed corpus) -- but ClusterFuzz doesn't appear to be finding it, even though just starting the fuzzer with the corpus in my local environment triggers it reliably.

Is this intentional?
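
For context, this kind of crash reproduces locally with nothing but the libFuzzer target and the seed directory. A minimal sketch, assuming a target binary named `x509_parser_fuzzer` and a `seed_corpus/` directory (both placeholder names):

```sh
# Start the fuzzer with the seed corpus; libFuzzer executes every seed input
# before mutating, so a crashing seed triggers a report almost immediately.
./x509_parser_fuzzer ./seed_corpus/
# To only replay the seeds without fuzzing further, add -runs=0.
```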

inferno-chromium (Collaborator) commented:

We should be finding this. What is the fuzz target name? Can you also try running `python infra/helper.py run_fuzzer $PROJECT_NAME <fuzz_target>`, since that runs the target against the seed corpus. If that crashes, I can check why ClusterFuzz is not finding it. If the crash stack is not sensitive, add it here; otherwise, you can file it in the protected tracker - https://bugs.chromium.org/p/oss-fuzz/issues/list

alex (Contributor) commented Dec 11, 2016

This is for the gnutls ASAN x509_parser fuzzer. (Trying with helper.py now.)

alex (Contributor) commented Dec 11, 2016

Yes, running `python infra/helper.py run_fuzzer gnutls gnutls_x509_parser_fuzzer` exits immediately with the crash.

alex (Contributor) commented Dec 11, 2016

Is it possible this was caused by the seed corpus being changed after the fuzzer was originally added?

inferno-chromium (Collaborator) commented:

OK, so the bug seems to be that we are not archiving the seed corpus.

aarya@clusterfuzz-external-linux-0498:/mnt/scratch0/clusterfuzz/slave-bot/builds/clusterfuzz-builds_gnutls_7679aa0e59b24ed63ab362aea60b8fc3a34a955a/revisions$ ls
gnutls_client_fuzzer gnutls_x509_parser_fuzzer
gnutls_client_fuzzer_seed_corpus.zip REVISION

And the reason for that is that we declare the build as failed if there is a crashing input in the seed corpus.
https://oss-fuzz-build-logs.storage.googleapis.com/build_logs/gnutls/latest.txt

@mikea @kcc - should we do this? Let's discuss in tomorrow's meeting. ClusterFuzz can handle crashing inputs in a corpus: quarantine them, create testcases, etc. So maybe we can relax this requirement.
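
For reference, seed corpora reach ClusterFuzz as `<fuzz_target>_seed_corpus.zip` archives sitting next to the target binaries, as in the listing above. A rough sketch of how a project's build.sh might package one, assuming the standard OSS-Fuzz build environment and a placeholder path for the seed files:

```sh
# Hypothetical build.sh fragment: bundle seed inputs for one fuzz target.
# $OUT and $SRC are the usual OSS-Fuzz build variables; "seed-corpus" is a
# placeholder, not gnutls's actual layout.
zip -j "$OUT/gnutls_x509_parser_fuzzer_seed_corpus.zip" "$SRC"/seed-corpus/*
```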

alex (Contributor) commented Dec 11, 2016

👍 good find!

Dor1s (Contributor) commented Dec 12, 2016

I'm afraid we cannot quarantine testcases from the seed corpus, so the requirement makes sense. I believe such cases should be fixed ASAP, and having a "broken" build is a good motivator for maintainers to fix them, isn't it?

inferno-chromium (Collaborator) commented:

@Dor1s - We do quarantine crashing testcases from the seed corpus during the corpus pruning task. I think it would be great if we could detect this and file a bug for it in ClusterFuzz. The build step shouldn't fail for this.

Dor1s (Contributor) commented Dec 12, 2016

Do you mean the seed corpus stored in the repository, or the seed corpus downloaded from the _static GCS subdirectory?

I'm speaking about the first one. Even if we detect a crashing input and remove it during pruning, we don't remove it from the repository, so we'll crash on it again in the next fuzzing session, since the seed corpus is taken from the .*_seed_corpus.zip archive, which is read-only from the ClusterFuzz side.
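
One way to handle that at the source, sketched here assuming a local build of the target and a `seed_corpus/` directory holding the repository's seeds (placeholder path), is to test each seed on its own and remove the crashers from the repository before they get zipped:

```sh
# Run each seed input individually; a libFuzzer target given a file argument
# executes it once and exits, so a non-zero exit pinpoints the crashing files.
for f in seed_corpus/*; do
  ./gnutls_x509_parser_fuzzer "$f" > /dev/null 2>&1 || echo "crashing input: $f"
done
```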

alex (Contributor) commented Dec 12, 2016

Definitely working to get the bug fixed in gnutls.

I think it'd be useful if a crasher in the seed corpus didn't break the build, because right now we use dynamic corpuses: we take OpenSSL's and BoringSSL's corpuses, and if either of them finds a crasher, suddenly we'll stop building, which is unfortunate.

inferno-chromium (Collaborator) commented:

@alex - yes, @mikea plans to fix this so that the build can continue and ClusterFuzz can then catch it. But just an FYI: since this issue is in the seed corpus, we will keep hitting this crash and won't find new ones. So it's good to get this fixed once you see the ClusterFuzz report.

mikea closed this as completed in 598c8ba on Dec 13, 2016
alex (Contributor) commented Dec 13, 2016

Awesome, thanks for the quick turnaround!

inferno-chromium (Collaborator) commented:

Just an FYI @alex - there are many leaks in the seed corpus, so all of those testcases have already been created and are hit first, before the fuzzer can reach the one you are looking for. This can be checked with ASAN_OPTIONS=detect_leaks=1.

We need to fix the run_fuzzer command to use the exact ASAN_OPTIONS etc. that ClusterFuzz uses. So, reopening before we forget.
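
In the meantime, a local approximation of what ClusterFuzz sees, assuming the target binary is run directly rather than through helper.py, with `seed_corpus/` as a placeholder for the unpacked seeds:

```sh
# Enable LeakSanitizer explicitly while replaying the seed corpus so leak
# reports show up locally; -runs=0 replays the seeds without further fuzzing.
ASAN_OPTIONS=detect_leaks=1 ./gnutls_x509_parser_fuzzer seed_corpus/ -runs=0
```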

mikea (Contributor) commented Dec 13, 2016

Some details still need to be figured out, but the gnutls build is uploading and ClusterFuzz has successfully filed an issue.

alex (Contributor) commented Dec 13, 2016

This seems to be working well now -- working on getting the issues actually resolved :-)

inferno-chromium (Collaborator) commented:

All gnutls leaks were filed previously by ClusterFuzz.

https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=285 is the bug you are looking for. The corpus pruning task, which runs once a day, tests every testcase independently, so it found the bug. The autofiler would have filed this within an hour, but I couldn't wait due to pure excitement!

Closing this; leak detection in run_fuzzer is tracked via #9.
