
Fix unbound allocation in fuzzer_script_exr #1781

Merged (3 commits), Sep 7, 2022

Conversation

anfedotoff (Contributor):
Fix for #1778

Comment on lines 30 to 31:

    let decoded_size = usize::try_from(decoder.total_bytes()).unwrap();
    if total_size - current_pos < decoded_size as u64 {
Contributor:

I don't think this is the logic we want. The EXR format supports compression, so the decoded size is usually going to be larger than the file size. Instead, for fuzzing purposes we could just impose a maximum size of say 256 MB.

(As a smaller point, we should probably check decoded_size before calling usize::try_from(...).unwrap() so that we don't fail there on 32-bit systems)
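A minimal sketch of what this suggestion could look like, assuming a 256 MB cap (the constant and function name here are illustrative, not from the PR):

```rust
// Illustrative sketch of the reviewer's suggestion: bound the decoded size
// while it is still a u64, so usize::try_from cannot panic on 32-bit targets.
const MAX_DECODED_BYTES: u64 = 256 * 1024 * 1024; // assumed 256 MB fuzzing cap

fn bounded_decoded_size(total_bytes: u64) -> Result<usize, &'static str> {
    if total_bytes > MAX_DECODED_BYTES {
        return Err("decoded size exceeds fuzzing cap");
    }
    // 256 MB fits in usize even on 32-bit systems, so this cannot fail now.
    usize::try_from(total_bytes).map_err(|_| "size does not fit in usize")
}
```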

Member:

If we upper-bound the decoded size against anything, then comparing against the limits configured for the reader would also make sense (i.e. `Limits::max_alloc`). We are doing another immediate allocation, after all.

Member:

If it's unclear, I'm suggesting that properly supporting <OpenExrDecoder as ImageDecoder>::set_limits and then using that function to configure the limit would be a much more thorough fix. That way, we get better feedback if all allocations are accounted for or if the fuzzer manages to find additional ones inside the decoder.
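The flow being suggested might look roughly like this; `Limits`, `Decoder`, and `alloc_output` are simplified stand-ins so the sketch stays self-contained, not the image crate's actual types:

```rust
// Simplified stand-ins (assumptions, not the real image-rs API) showing the
// suggested flow: the harness configures a limit once via set_limits, and the
// decoder checks every allocation against it instead of the harness sizing
// things by hand.
struct Limits {
    max_alloc: Option<u64>,
}

struct Decoder {
    limits: Limits,
}

impl Decoder {
    fn set_limits(&mut self, limits: Limits) {
        self.limits = limits;
    }

    fn alloc_output(&self, needed: u64) -> Result<Vec<u8>, &'static str> {
        // If the fuzzer finds an allocation that is not routed through this
        // check, that is exactly the extra feedback the reviewer describes.
        match self.limits.max_alloc {
            Some(max) if needed > max => Err("allocation exceeds max_alloc"),
            _ => Ok(vec![0u8; needed as usize]),
        }
    }
}
```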

Contributor Author:

> I don't think this is the logic we want. The EXR format supports compression, so the decoded size is usually going to be larger than the file size. Instead, for fuzzing purposes we could just impose a maximum size of say 256 MB.
>
> (As a smaller point, we should probably check decoded_size before calling usize::try_from(...).unwrap() so that we don't fail there on 32-bit systems)

Thanks, I didn't know there was compression :). I've changed the fix to what you suggested.

Contributor Author:

> If it's unclear, I'm suggesting that properly supporting <OpenExrDecoder as ImageDecoder>::set_limits and then using that function to configure the limit would be a much more thorough fix. That way, we get better feedback if all allocations are accounted for or if the fuzzer manages to find additional ones inside the decoder.

Sounds good. I need a little more time to look at the code, and maybe find some examples of this.

Contributor Author (@anfedotoff), Sep 2, 2022:

I've provided an implementation of the OpenExrDecoder::set_limits function. In it I check some common cases as well as the total_bytes() case. Maybe we need to store the Limits in the decoder and check them at certain points, but I'm not familiar with EXR decoding yet :). What do you think?
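A rough, self-contained sketch of the kind of checks described; the field and function names are assumptions mirroring the shape of the image crate's Limits, not its actual code:

```rust
// Assumed shape of the limits being checked; a stand-in for illustration,
// not the real image-rs type.
struct Limits {
    max_image_width: Option<u32>,
    max_image_height: Option<u32>,
    max_alloc: Option<u64>,
}

// The "common cases" (image dimensions) plus the total_bytes() case the
// comment mentions, checked when limits are applied to the decoder.
fn check_limits(
    limits: &Limits,
    width: u32,
    height: u32,
    total_bytes: u64,
) -> Result<(), &'static str> {
    if limits.max_image_width.map_or(false, |m| width > m) {
        return Err("width exceeds max_image_width");
    }
    if limits.max_image_height.map_or(false, |m| height > m) {
        return Err("height exceeds max_image_height");
    }
    if limits.max_alloc.map_or(false, |m| total_bytes > m) {
        return Err("decoded size exceeds max_alloc");
    }
    Ok(())
}
```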

Contributor:

@HeroicKatora, the unbounded allocation being fixed here is actually in the fuzz harness itself, which set_limits isn't designed to address.

Files with resolved (outdated) review threads:
- src/codecs/openexr.rs
- fuzz/fuzzers/fuzzer_script_exr.rs