Fuzzing report for pnm #868

Closed · HeroicKatora opened this issue Feb 12, 2019 · 6 comments · Fixed by #869

Comments

HeroicKatora (Member) commented Feb 12, 2019

Updated the (autogenerated) fuzzer harness to a reasonable state, limiting the image size to 4MB. After 2 hours of running, 5 unique crashes were reported, all within the first two minutes. The format is simple enough that near-full path coverage seems plausible (more than an hour has passed since the last unique path), even though the initial seed consists only of a single pbm image already used in other tests.
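
For context, a cargo-afl harness for this kind of target might look roughly like the sketch below. This is illustrative only and not the harness from the repository: the afl crate's fuzz! macro and image's load_from_memory_with_format are real APIs, but the format variant name (it differs between image versions) and the way the 4MB limit is applied here are assumptions.

    // Illustrative sketch only, not the actual harness in this repository.
    fn main() {
        afl::fuzz!(|data: &[u8]| {
            // Reject oversized inputs up front; the real harness may instead
            // bound the decoded image dimensions.
            if data.len() > 4 * 1024 * 1024 {
                return;
            }
            // A decode error (Err) is expected for malformed input; only
            // panics and other crashes count as fuzzing findings.
            let _ = image::load_from_memory_with_format(data, image::ImageFormat::Pnm);
        });
    }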

Concrete example

Quite embarrassing crash:

P7<two null bytes>

All the others follow the same construction, an unfinished P7 header. The fix is therefore hopefully straightforward (I'll assign this to myself).

Thanks @Shnatsel for inspiring this; you may be interested in the effort as well.

Running

Note that with current LLVM you'll likely hit linker errors; use RUSTFLAGS="-Clink-arg=-fuse-ld=gold" cargo afl build as a workaround.

HeroicKatora (Member, Author) commented Feb 12, 2019

Found it already, I think:

https://github.com/PistonDevelopers/image/blob/c142a61df80a3dba8be626289488d37b2dadeeb1/src/pnm/decoder.rs#L284-L285

This line does not validate that any data was actually read. The other test cases apparently reach the same line through different paths.
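
The general shape of the fix is to check that the header read actually produced bytes before using them. A rough sketch of the pattern follows, with hypothetical helper names rather than the decoder's real ones:

    // Hypothetical helper, not the actual decoder code: the bug class is
    // consuming a header token without verifying that anything was read.
    fn read_header_token(reader: &mut impl std::io::BufRead) -> std::io::Result<Vec<u8>> {
        let mut buf = Vec::new();
        reader.read_until(b'\n', &mut buf)?;
        if buf.is_empty() {
            // Without this check, later code may index into an empty buffer
            // and panic on truncated input such as "P7" followed by EOF.
            return Err(std::io::Error::new(
                std::io::ErrorKind::UnexpectedEof,
                "truncated PNM header",
            ));
        }
        Ok(buf)
    }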

HeroicKatora (Member, Author) commented Feb 12, 2019

That was it: no more crashes after the update. I'll tidy up the fuzzing harness changes and then go ahead with merging.

Shnatsel (Contributor) commented:

For future reference, cargo afl fuzz mostly runs in deterministic mode. The design of afl assumes one deterministic process and many non-deterministic ones running in parallel, so a practical setup should look something like:

cargo afl fuzz -M master, with -i, -o, and other options as usual,
and then a bunch of
cargo afl fuzz -S slave1 ...
cargo afl fuzz -S slave2 ...
etc.

This makes the master instance run deterministic checks while the rest try inputs at random. The latter is actually very good for quick-and-dirty checks, such as CI. You can also try cargo afl fuzz -d to run a single non-deterministic instance.

Shnatsel (Contributor) commented:

Is that a panic or a different kind of crash? Anyway, nice find! You get a spot in https://github.com/rust-fuzz/trophy-case - please open a PR!

HeroicKatora (Member, Author) commented:

@Shnatsel

cargo afl fuzz -M master, with -i, -o, and other options as usual,
and then a bunch of
cargo afl fuzz -S slave1 ...
cargo afl fuzz -S slave2 ...
etc.

Ah yes, now the warning about 25% CPU utilization also makes more sense. For CI purposes, do you happen to know a way to automatically scale to the given number of cores, or would you just run one random instance?

Is that a panic or a different kind of crash? Anyway, nice find! You get a spot in https://github.com/rust-fuzz/trophy-case - please open a PR!

Just a panic, when the invalid out-of-bounds index is accessed. A trophy for finding a bug in my own code? How sweet 🚀. I'll open it after finishing the PR here.

Shnatsel (Contributor) commented:

Well, I'd just roll my own shell script that reads /proc/cpuinfo and spawns a bunch of random instances accordingly. Just don't forget to put the already discovered test files in there and use them as a starting point. You can get the minimal set of files that exercise all discovered paths with cargo afl cmin.
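
For what it's worth, such a core-scaling launcher does not have to be a shell script. Below is a rough sketch of the same idea written in Rust instead (it relies on std::thread::available_parallelism from newer Rust; the input/output directories and the target binary path are hypothetical):

    // Spawn one deterministic master and (cores - 1) random secondary instances.
    use std::process::Command;

    fn main() -> std::io::Result<()> {
        let cores = std::thread::available_parallelism()
            .map(|n| n.get())
            .unwrap_or(1);
        let mut children = Vec::new();
        for i in 0..cores {
            let name = if i == 0 { "master".to_string() } else { format!("slave{}", i) };
            let mode = if i == 0 { "-M" } else { "-S" };
            children.push(
                Command::new("cargo")
                    .args(["afl", "fuzz", mode, name.as_str(), "-i", "in", "-o", "out"])
                    .arg("target/debug/pnm_fuzz_target") // hypothetical binary path
                    .spawn()?,
            );
        }
        for mut child in children {
            child.wait()?;
        }
        Ok(())
    }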
