File upload is limited to 1GB when using MultipartBody #10666

Closed
loicmathieu opened this issue Mar 29, 2024 · 4 comments

@loicmathieu

Expected Behavior

We can upload files of any size when using MultipartBody.

Actual Behaviour

I have an endpoint that handles multipart requests with multiple parts of unknown names; some are files, some are not.
The only way to deal with such an endpoint is to bind the body to @Body MultipartBody body and then do something like this:

Flux.from(body)
            .subscribeOn(Schedulers.boundedElastic())
            .map(part -> {
                try {
                    if (part instanceof CompletedFileUpload fileUpload) {
                        File tempFile = File.createTempFile(fileUpload.getFilename() + "_", ".upl");
                        try (var inputStream = fileUpload.getInputStream();
                             var outputStream = new FileOutputStream(tempFile)) {
                            long transferredBytes = inputStream.transferTo(outputStream);
                            System.out.println(fileUpload.getSize()); // 1681355024 => 1.7GB: OK
                            System.out.println(transferredBytes); // 1073708906 => 1GB !!!
                        }
                        return new AbstractMap.SimpleEntry<>(
                            fileUpload.getFilename(),
                            tempFile.getName()
                        );
                    } else {
                        // non-file parts (CompletedPart) are read as plain strings
                        return new AbstractMap.SimpleEntry<>(
                            part.getName(),
                            new String(part.getBytes())
                        );
                    }
                } catch (IOException e) {
                    // map() cannot throw checked exceptions, so wrap IO failures
                    throw new UncheckedIOException(e);
                }
            })
            .collectMap(AbstractMap.SimpleEntry::getKey, AbstractMap.SimpleEntry::getValue)
            .block();

However, when I create the file, it only stores 1GB whereas the file in the part is approximately 1.7GB.
fileUpload.getSize() reports the correct size (1.7GB in my case), but the generated file is only 1GB long; it seems to be the underlying InputStream that stops at 1GB.
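
For reference, the binding described above corresponds to a controller along these lines (a minimal sketch, assuming the server-side io.micronaut.http.server.multipart.MultipartBody; the route, class name, and return type are illustrative):

    import io.micronaut.http.MediaType;
    import io.micronaut.http.annotation.Body;
    import io.micronaut.http.annotation.Controller;
    import io.micronaut.http.annotation.Post;
    import io.micronaut.http.server.multipart.MultipartBody;
    import reactor.core.publisher.Mono;

    import java.util.Map;

    @Controller("/upload")
    public class UploadController {

        // The whole multipart request is bound reactively; each emitted item is a
        // CompletedPart (CompletedFileUpload for file parts).
        @Post(value = "/file1", consumes = MediaType.MULTIPART_FORM_DATA)
        public Mono<Map<String, String>> upload(@Body MultipartBody body) {
            // process the parts as in the snippet above
            return Mono.just(Map.of());
        }
    }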

Steps To Reproduce

No response

Environment Information

No response

Example Application

No response

Version

4.3.4

@yawkat
Member

yawkat commented Apr 2, 2024

probably a bug in MicronautHttpData

sdelamo added the type: bug label Apr 19, 2024
@loicmathieu
Author

I have a reproducer. It happens only with micronaut.server.multipart.disk=true or micronaut.server.multipart.mixed=true; with the default settings it works, but the request is very slow to complete (multiple minutes, whereas it takes a few seconds with disk or mixed).
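
For reference, the relevant settings look like this (an application.properties sketch; per the above, enabling either property is enough to trigger the problem):

    micronaut.server.multipart.disk=true
    # or alternatively:
    # micronaut.server.multipart.mixed=true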

This is the reproducer project:
issue-multipart-body.zip

It can be reproduced using the following cURL:

curl -XPOST -F "files=@1536m.txt;filename=file1" -F "string1=value1" http://localhost:8080/upload/file1
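
A test file of the right size can be generated with something like this (assuming a Unix shell; any file larger than 1GB will do):

    dd if=/dev/zero of=1536m.txt bs=1M count=1536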

@loicmathieu
Author

Just checked Micronaut Platform 4.4.1 and the issue still exists.

yawkat added a commit that referenced this issue May 3, 2024
The code path was not covered by tests and was broken.

I also replaced the ReentrantLock by a NonReentrantLock that is based on a Semaphore. This is because there are some semantic checks in Chunk that might not work with a reentrant lock (e.g. repeated deallocate calls could lead to ref count errors). But this is unrelated to the bug.

Fixes #10666
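
The locking change described above amounts to roughly the following (a minimal sketch of the idea only, assuming a single-permit Semaphore; it is not the actual NonReentrantLock in http-server-netty):

    import java.util.concurrent.Semaphore;

    // Sketch of a non-reentrant lock backed by a single-permit Semaphore.
    // Unlike ReentrantLock, a thread that already holds the lock blocks if it
    // tries to acquire it again, so accidental reentrancy cannot silently succeed.
    final class NonReentrantLockSketch {
        private final Semaphore permit = new Semaphore(1);

        void lock() {
            permit.acquireUninterruptibly();
        }

        void unlock() {
            permit.release();
        }
    }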
sdelamo closed this as completed in aeaec05 May 9, 2024
sdelamo added a commit that referenced this issue May 9, 2024
* Fix file uploads beyond MMAP_SEGMENT_SIZE
The code path was not covered by tests and was broken.

I also replaced the ReentrantLock by a NonReentrantLock that is based on a Semaphore. This is because there are some semantic checks in Chunk that might not work with a reentrant lock (e.g. repeated deallocate calls could lead to ref count errors). But this is unrelated to the bug.

Fixes #10666

* Update http-server-netty/src/main/java/io/micronaut/http/server/netty/NonReentrantLock.java

Co-authored-by: Sergio del Amo <sergio.delamo@softamo.com>

* make badssl more resilient to timeouts

* make badssl more resilient to timeouts

---------

Co-authored-by: Sergio del Amo <sergio.delamo@softamo.com>
@yawkat
Member

yawkat commented May 13, 2024

Thanks for the report!
