The fuzzer ignores too many inputs #1079

Closed
setharnold opened this issue Feb 14, 2018 · 5 comments

Comments

Projects
None yet
2 participants
@setharnold
Copy link
Contributor

commented Feb 14, 2018

Hello. There are many places in openjpeg where multiplications are performed without checking that the results will not overflow. I've found them in memory allocations and loop bounds, and there are likely others that are less obvious.

Sadly, the fuzzer itself has integer overflow checks, as well as draconian input size restrictions, that practically guarantee oss-fuzz cannot reach many of the inputs that may cause openjpeg to behave unsafely.

Please consider removing most or all of these checks, so that the library is confronted with inputs that are actually hostile -- larger than 65k pixels in both dimensions, or in just one, etc. -- so that UBSAN can point out the unsafe multiplications.

Thanks

    // Reject too big images since that will require allocating a lot of
    // memory
    if (width != 0 && psImage->numcomps != 0 &&
            (width > INT_MAX / psImage->numcomps ||
             height > INT_MAX / (width * psImage->numcomps * sizeof(OPJ_UINT32)))) {
        opj_stream_destroy(pStream);
        opj_destroy_codec(pCodec);
        opj_image_destroy(psImage);

        return 0;
    }

    // Also reject too big tiles.
    // TODO: remove this limitation when subtile decoding no longer implies
    // allocating memory for the whole tile
    opj_codestream_info_v2_t* pCodeStreamInfo = opj_get_cstr_info(pCodec);
    OPJ_UINT32 nTileW, nTileH;
    nTileW = pCodeStreamInfo->tdx;
    nTileH = pCodeStreamInfo->tdy;
    opj_destroy_cstr_info(&pCodeStreamInfo);
    if (nTileW > 2048 || nTileH > 2048) {
        opj_stream_destroy(pStream);
        opj_destroy_codec(pCodec);
        opj_image_destroy(psImage);

        return 0;
    }

    OPJ_UINT32 width_to_read = width;
    if (width_to_read > 1024) {
        width_to_read = 1024;
    }
    OPJ_UINT32 height_to_read = height;
    if (height_to_read > 1024) {
        height_to_read = 1024;
    }
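The guard in the fuzzer excerpt above can be expressed as a reusable helper; the sketch below shows the kind of overflow-checked multiplication the report asks the library itself to perform before allocations and loop bounds (`checked_mul_size` is a hypothetical name, not an existing openjpeg function):

```c
#include <stddef.h>
#include <stdint.h>

/* Multiply two size_t values without silently wrapping.
 * Returns 1 and stores the product in *out on success;
 * returns 0 when a * b would overflow size_t. */
static int checked_mul_size(size_t a, size_t b, size_t *out)
{
    if (a != 0 && b > SIZE_MAX / a) {
        return 0; /* a * b would exceed SIZE_MAX */
    }
    *out = a * b;
    return 1;
}
```

An allocation such as `malloc(width * height * numcomps)` would then chain calls to the helper and bail out cleanly on failure, instead of allocating a wrapped-around, too-small buffer.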
@rouault (Collaborator) commented Feb 25, 2018

oss-fuzz doesn't like allocations of more than 2GB even when the library properly handles them, so for now those checks must remain, until we manage to make sure that all memory allocations in the library are of "reasonable" size, which is a non-trivial effort.

@setharnold (Contributor, author) commented Feb 27, 2018

Hello Even,

This is unfortunate. I can appreciate the desire to keep the tests to
something very small and very quick.

Based on my reading of the oss-fuzz FAQ, I don't think this would actually
create a huge flood of new bugs:

https://github.com/google/oss-fuzz/blob/master/docs/faq.md#how-do-you-handle-timeouts-and-ooms

> So, we report only one timeout and only one OOM bug per fuzz target.
> Once that bug is fixed, we will file another one, and so on.

Thus I think these checks should be relaxed. If the fuzzer created an
input that is 40,000 x 40,000 pixels, it'll trip the two gigabyte limit
and die. You'll get a useless bug report about it, and ignore it. If
you never fix it, it should be the last you see.

But an input that is 66,000 x 66,000 pixels would succeed in creating
an allocation roughly one megabyte in size, and then trip ASAN checks.
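(For illustration of the wraparound, a minimal sketch assuming 32-bit unsigned arithmetic; this is not openjpeg code:)

```c
#include <stdint.h>

/* Pixel count of a w x h image, computed two ways: in 32-bit
 * arithmetic, which wraps modulo 2^32, and in 64-bit arithmetic,
 * which holds the true product. */
static uint32_t pixel_count_32(uint32_t w, uint32_t h)
{
    return w * h; /* unsigned wraparound: well-defined in C, but wrong */
}

static uint64_t pixel_count_64(uint64_t w, uint64_t h)
{
    return w * h;
}
```

For 66,000 x 66,000, the true product is 4,356,000,000 pixels, while the 32-bit computation wraps to 61,032,704; the exact truncated size of the eventual allocation depends on which additional factors (component count, bytes per sample) enter the multiplication.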

It may not be the most efficient use of Google's infrastructure to run
tests that we know will trip the OOM checks, but I believe it's within
the spirit of the offering if it helps to uncover inputs that would
cause real trouble in the wild.

Thanks

@setharnold (Contributor, author) commented Apr 16, 2018

Hello Even, all,

I did a quick look through the past year's worth of oss-fuzz reports for libreoffice. There are roughly 260 "out of memory" reports out of roughly 800 filed issues, and as far as I know there haven't been any nastygrams from Google.

Please consider removing these constraints in the fuzzer so that the similar flaws in the deployed openjpeg library code can be exposed.

Thanks

@setharnold (Contributor, author) commented Jun 14, 2019

Hello, while discussing this issue with Michael Catanzaro, he mentioned that I could have been more clear. So please allow me to restate my suggestions:

First, the fuzzer has security checks that the library is missing. Because of these security checks, the fuzzer is prematurely throwing away inputs that would expose probably exploitable memory errors.

Second, because the library is missing these security checks, malicious inputs can almost certainly allow remote code execution.

I strongly recommend moving the integer overflow checks from the fuzzer to the library.
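A sketch of what that could look like on the library side (hypothetical wrapper with assumed names, not current openjpeg API): validate every factor before allocating, and return NULL instead of a wrapped-around short buffer:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical replacement for a raw
 * malloc(width * height * numcomps * elem_size):
 * each multiplication is checked against SIZE_MAX, so a hostile
 * header can at worst make the call fail, never under-allocate. */
static void *alloc_image_buffer(size_t width, size_t height,
                                size_t numcomps, size_t elem_size)
{
    size_t n;

    if (width != 0 && height > SIZE_MAX / width) {
        return NULL;
    }
    n = width * height;
    if (numcomps != 0 && n > SIZE_MAX / numcomps) {
        return NULL;
    }
    n *= numcomps;
    if (elem_size != 0 && n > SIZE_MAX / elem_size) {
        return NULL;
    }
    return malloc(n * elem_size);
}
```

With the check living in the library, the fuzzer no longer needs its own pre-filter, and oversized inputs exercise the library's real error path.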

I also recommend removing the image size constraints in the fuzzer. The nice people at oss-fuzz have never once complained to the LibreOffice team about the handful of "out of memory" reports. I can't speak for the oss-fuzz team, but I suspect they would much rather the fuzzer be looking for actual issues than discarding the inputs most likely to expose crashes.

Thanks

@rouault closed this in 8db9d25 on Jun 15, 2019

@setharnold (Contributor, author) commented Jun 15, 2019