HttpPostMultipartRequestDecoder may not add content to an existing upload after being offered data #11143
@jameskleeh It tries to find out the […] So I see nothing wrong there.
@fredericBregier I don't have confidence I could create something outside of Micronaut to reproduce the issue. I'm not sure what you mean by "It is quite too heavy to check in this huge git". I clone and run the tests in that codebase every day, so surely you can as well. I don't understand the importance of CRLF/LF in this context, so I really can't comment on whether it makes sense or not. All I can say is that it worked correctly prior to this change.
I verified that LF is found at position 0, however your claim that "nothing can be added (the delimiter could be just after in the next chunk)" seems invalid to me. Can the contents of a file not contain line feeds? How can you assume the delimiter is coming after a LF? Is that in the RFC for multipart requests somewhere?
@jameskleeh Hi, your code is quite heavy, so it is difficult to search for the reason behind this and to be able to reproduce and fix it. That is why, if you have a reproducer with simple code (one client and one server), we can extract a test case and fix it. On the RFC side, in multipart, each part is separated by a CRLF or LF followed by the delimiter string. Of course a part can contain a CRLF/LF. But when one is found, there is a risk that the next bytes are the delimiter but are not there yet (due to chunk-by-chunk HTTP reception). Maybe your issue is that the upload begins with a CRLF/LF but does not fit within one HTTP chunk, so the delimiter will come in a later chunk. If that is the case, then once all the chunks concerning this upload arrive on the server side, it will find the delimiter preceded by CRLF/LF, and therefore end the part (the upload). I believe we could try to seek the "last" CRLF/LF, not the first one, when no delimiter is found. It might be better from a memory point of view, but it will not change the logic.
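The ambiguity described above (a line break near the end of a chunk may be the start of the next delimiter, which will only arrive in a later chunk) can be sketched as follows. This is a simplified, stdlib-only illustration of the "search for the last CRLF/LF" idea, not Netty's actual implementation; the name `safeToFlush` is made up for this sketch:

```java
public class MultipartChunkSketch {
    // Given a chunk in which the delimiter was NOT found, return how many
    // leading bytes can safely be added to the current upload. Bytes from the
    // last line break onward must be held back, because the delimiter might
    // follow that line break in the next chunk.
    static int safeToFlush(byte[] chunk) {
        // Scan backwards for the last LF (a CRLF also ends with LF).
        for (int i = chunk.length - 1; i >= 0; i--) {
            if (chunk[i] == '\n') {
                // Hold back the line break itself, including a preceding CR.
                return (i > 0 && chunk[i - 1] == '\r') ? i - 1 : i;
            }
        }
        return chunk.length; // no line break: the whole chunk is safe
    }
}
```

Note that with a "first CRLF/LF" search instead, a chunk starting with a line break would flush nothing at all, which matches the symptom reported in this issue.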
The idea would be to change line 1163 in 1529ef1 to something that will search for the "last" CRLF/LF (not yet implemented).
We might even check whether the new posDelimiter leaves more space than the delimiter size, in order to ensure that the delimiter is not split across 2 HTTP chunks.
The delimiter is then found fully and the file is filled with a 0-length buffer. Currently, on a first step, it gives: […] But note that this will not change the behaviour: if there is never a CRLF/LF + delimiter, then the part never ends. The difference is that, if there is a CRLF/LF in the middle (or at the start) of the file, it will still populate the file behind it, while continuing to wait for the CRLF/LF + delimiter.
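One possible reading of the refinement suggested above (only a delimiter-sized tail of the chunk can hide a split delimiter, so a line break further from the end never needs to be held back) can be sketched like this. Again this is an illustration under that assumption, not Netty's code; `delimiterLength + 2` accounts for the CRLF preceding the delimiter:

```java
public class TailWindowSketch {
    // Returns how many leading bytes of `chunk` are safe to flush when the
    // delimiter was not found. Only a line break within the last
    // delimiterLength + 2 bytes ("\r\n" plus a partially received delimiter)
    // can be the start of a boundary split across chunks.
    static int safeToFlush(byte[] chunk, int delimiterLength) {
        int windowStart = Math.max(0, chunk.length - (delimiterLength + 2));
        for (int i = windowStart; i < chunk.length; i++) {
            if (chunk[i] == '\n') {
                // Hold back from the line break (and its preceding CR) onward.
                return (i > 0 && chunk[i - 1] == '\r') ? i - 1 : i;
            }
        }
        return chunk.length; // no line break near the end: whole chunk is safe
    }
}
```

With this rule, a line break in the middle of a large chunk no longer prevents the earlier bytes from being added to the upload.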
@fredericBregier I'd be happy to test any local branch with the changes |
OK, I will try to make it, though I cannot be sure it will change anything, since I do not have a simple reproducer test to check it against. In other words, it seems your upload does not end with a CRLF/LF + delimiter...? But as I don't know the code, I cannot say for sure, of course.
@fredericBregier I'm not quite sure, but it's not getting to the end of the file. I'm uploading 15 megabytes and I'm seeing this behavior on the second chunk of data being received. I'm fairly confident all of the data is in the buffer when […]
If you'd like to do a screen share I'd be glad to show you |
Well, could you at least point me directly to the code?
Of course, I could download your whole project and follow your original guidance, but perhaps I could read the code itself first? As for the screen sharing, it might be useful next, if I don't get it ;-)
OK, I think I got it. So I believe that maybe https://github.com/micronaut-projects/micronaut-core/blob/e67ca50cf2a778cb6c7354b1ecd2c7fe7d2910ed/http-server-netty/src/main/java/io/micronaut/http/server/netty/FormDataHttpContentProcessor.java#L134 is wrong. Current […] For instance, let's say the last chunk ended like this:
Such that the next Part from the multipart is not yet available, so we cannot know what it is. Another example:
Then the second one […] I don't know why you are adding it like this. I'm on my way to try to improve and optimize the decoder part, but I feel like you are not using the decoder correctly.
Because we need to notify the downstream that new data is available on the upload. Users can read and release pieces of uploads as they become available. It is often the case that we don't want to buffer any data beyond a single chunk.
If it's null it wouldn't be passed through, so I don't think it's wrong. This code has been in place for some time and working well.
It's possible this is a valid case and we need to handle it, however I don't think it's relevant to this issue.
You can checkout the 2.4.x branch of micronaut-core to see the difference in behavior. In Netty 4.1.59 the partial upload is populated with a buffer on the second invocation and with 4.1.60+ it is not. |
Yes, I understood. So the "API behavior" changed. But what I'm saying is that the goal of the decoder is not to give you a "partial HttpData" but a full one when ready. You have made assumptions about internal operations. I will try to make the current API "look like" what it was before, but my feeling, IMHO, is that you rely too much on the underlying implementation and not on the API contract (which is not the implementation).
If that is the case then why is there a method to retrieve the partial data? |
Good point ;-) However, I was able to reproduce this bug (the changed behavior) easily by setting the first bytes of the new file to be CRLF. Note that from a Disk-based […] I also fix the default behavior such that when a Memory-based […]
@jameskleeh You can check out my proposal to fix the behaviour. I believe it will run as expected now, even if it is not really a bug. |
Yeah I’ll give it a go today
…On Thu, Apr 8, 2021 at 7:56 AM Frédéric Brégier ***@***.***> wrote:
@jameskleeh Could you try https://github.com/fredericBregier/netty/tree/improveMultipartAddingBuffer ?
@fredericBregier This is better, but still not ideal. The very first time the decoder is offered data, the upload is created with a composite buffer whose first component is empty and whose second contains the data. This is a deviation from previous behavior, where it would be a non-composite buffer. The buffer being a composite would not necessarily be a problem; having 2 components is, because I'm relying on a chunk of data being offered to the decoder resulting in only a single chunk of data being added to any single upload.
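The single-component behavior being requested here can be sketched with a plain-Java accumulator: when the only component held so far is the empty initial placeholder, the incoming chunk replaces it instead of being appended alongside it, so the first real chunk yields a single, non-composite buffer. This is an illustration of the rule, with made-up names, not Netty's `ByteBuf` machinery:

```java
import java.util.ArrayList;
import java.util.List;

public class AccumulatorSketch {
    // Each element stands in for one ByteBuf component of the upload.
    final List<byte[]> components = new ArrayList<>();

    void addContent(byte[] chunk) {
        // If the only component so far is the empty initial placeholder,
        // drop it and let the new chunk become the whole buffer, instead of
        // building a two-component composite.
        if (components.size() == 1 && components.get(0).length == 0) {
            components.clear();
        }
        components.add(chunk);
    }

    int componentCount() {
        return components.size();
    }
}
```

Under this rule, one chunk offered to the decoder maps to exactly one component added to the upload.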
Hmm, I see. I will try to optimize one more time. |
@fredericBregier Another step in the right direction, however I'm now finding that the upload never completes. |
@jameskleeh Very strange, since the only difference from the previous version is the following:
I will check, but I don't understand. Do you have a trace (error log)? It might be the release of the first empty buffer, but that would be very strange.
Moreover, all JUnit tests are passing... so I don't get the reason. I double-checked, and I see nothing wrong.
Just in case, to check that I understand your code well: https://github.com/micronaut-projects/micronaut-core/blob/e67ca50cf2a778cb6c7354b1ecd2c7fe7d2910ed/http-server-netty/src/main/java/io/micronaut/http/server/netty/FormDataHttpContentProcessor.java#L111
In 1) you are checking if the FileUpload is completed, and if so, adding it to the messages list. Could that be an issue?
@fredericBregier That isn't an issue. I believe the check at 1) is actually redundant, given that only completed items get passed through the iterable of the decoder. Basically, if you put a breakpoint at […] you will find that […]
@fredericBregier I've done some debugging and found this to be the issue: the readable bytes in this case was 6, but for some reason it is subtracting the delimiter length. That doesn't make sense to me, since the delimiter wasn't found. The 6 bytes should be added to the upload, yes?
@fredericBregier I changed this specific section of code back to how it is on the netty repo and my test is green: line 1161 in 6724786
@jameskleeh Thanks! I will check and update ;-)
@fredericBregier Sorry, I forgot to mention I did change it to […]
@jameskleeh Could you give me the changes you've made? The reason for subtracting the delimiter size is the following:
I may have made an overly strong assumption, so I've now changed it in loadDataMultipartOptimized:
Indeed, if the lastPosition is < 0, it means the buffer holds less than the delimiter length, so we can afford to check for CRLF/LF from relative position 0.
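The fallback described here can be written out as a small sketch (illustrative, not Netty's actual code): the CRLF/LF scan normally starts delimiter-length bytes before the end of the buffer, but when the buffer is shorter than the delimiter, that start position would go negative, so it falls back to 0. This is exactly the 6-byte case reported above, where subtracting the delimiter length made the search start invalid:

```java
public class SearchStartSketch {
    // Compute where the CRLF/LF search should begin within the buffer,
    // clamping to 0 when the buffer is shorter than the delimiter.
    static int searchStart(int readableBytes, int delimiterLength) {
        int lastPosition = readableBytes - delimiterLength;
        return lastPosition < 0 ? 0 : lastPosition;
    }
}
```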
@jameskleeh With this change, I've got the following:
@fredericBregier I'll try with your updated branch now |
@jameskleeh Use the next one ;-) Bad commit |
@fredericBregier All tests are green on my end. I had to make a couple of tweaks to handle no data being added to the upload, but that was probably something that should have been done regardless.
Note that there are 2 other issues we've found during testing, but I haven't gotten to the bottom of them yet and they aren't related to this. |
@jameskleeh OK, thanks for your feedback, and thank you for your help! It was really helpful!
@fredericBregier I have the other issues resolved. I appreciate your help getting this resolved. I hope this change can go into 4.1.64 |
@jameskleeh Great ! We will close this one as soon as the merge is done after review. |
@jameskleeh The current review introduces some changes. When you have time, maybe you can try again to ensure it does not change the behavior again (it should not)?
@fredericBregier I tried with the latest commit and it's still good
@jameskleeh Thanks a lot! |
Motivation:
When a Memory-based Factory is used, if the first chunk starts with a line break, the HttpData is not filled with the currently available buffer when the delimiter is not found yet, while it could add some. Fix JavaDoc to note potentially wrong usage of content() or getByteBuf() if the HttpData has a huge content, with the risk of an Out Of Memory Exception. Fix JavaDoc to explain how to release the Factory properly, whatever mode it is in (Memory, Disk or Mixed). Fixes issue netty#11143.

Modifications:
First, when the delimiter is not found, instead of searching for a line break from readerIndex(), we should search from readerIndex() + readableBytes() - delimiter size, since this is the only part where a useful line break could be searched for, except if readableBytes is less than the delimiter size (then we search from readerIndex). Second, when a Memory HttpData is created, it should be assigned an empty buffer, to be consistent with the other implementations (Disk or Mixed mode). We cannot change the default behavior of content() or getByteBuf() of the Memory-based HttpData, since the ByteBuf is supposed to be null when released, but not empty. When a new ByteBuf is added, one more check verifies whether the current ByteBuf is empty; if so, it is released and replaced by the new one, without creating a new CompositeByteBuf.

Result:
In the test testBIgFileUploadDelimiterInMiddleChunkDecoderMemoryFactory and the related tests for the other modes, the buffers start with a CRLF. When we offer only the prefix part of the multipart (no data at all), the current partial HttpData has an empty buffer. The first time we offer the data starting with CRLF to the decoder, it now has a correct current partial HttpData with a non-empty buffer. The benchmark was re-run against this new version.

Old:
Benchmark                                                                           Mode  Cnt  Score   Error  Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel    thrpt   6  4,037 ± 0,358  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel    thrpt   6  4,226 ± 0,471  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel    thrpt   6  0,875 ± 0,029  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel      thrpt   6  4,346 ± 0,275  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel   thrpt   6  2,044 ± 0,020  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel   thrpt   6  2,278 ± 0,159  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel   thrpt   6  0,174 ± 0,004  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel     thrpt   6  2,370 ± 0,065  ops/ms

New:
Benchmark                                                                           Mode  Cnt  Score   Error  Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel    thrpt   6  5,604 ± 0,415  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel    thrpt   6  6,058 ± 0,111  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel    thrpt   6  0,914 ± 0,031  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel      thrpt   6  6,053 ± 0,051  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel   thrpt   6  2,636 ± 0,141  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel   thrpt   6  3,033 ± 0,181  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel   thrpt   6  0,178 ± 0,006  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel     thrpt   6  2,859 ± 0,189  ops/ms

So a +20 to +40% improvement, due to not searching for CRLF/LF in the full buffer when no delimiter is found, but only from the end, over delimiter size + 2 (CRLF) bytes.
Expected behavior
Once a file upload object exists in the multipart request decoder but is not finished, offering more data to the decoder should populate the buffer of the file object
Actual behavior
The buffer is not created/updated
Steps to reproduce
git clone https://github.com/micronaut-projects/micronaut-core
git checkout upgrade-netty
./gradlew test-suite:test --tests "io.micronaut.upload.StreamUploadSpec.test the file is not corrupted with transferTo"
Netty version
4.1.60+, due to #11001
JVM version (java -version)
openjdk version "1.8.0_282"
OpenJDK Runtime Environment (AdoptOpenJDK) (build 1.8.0_282-b08)
OpenJDK 64-Bit Server VM (AdoptOpenJDK) (build 25.282-b08, mixed mode)
OS version (uname -a)
Darwin MacBook-Pro.local 20.3.0 Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64 x86_64
This issue is a blocker for Micronaut to upgrade Netty. With the functionality as it is, it is impossible to read a chunk of the file and release it immediately, because new buffers are not set on the underlying file upload object.
This line is the culprit: https://github.com/fredericBregier/netty/blob/6daeb0cc51d8689805c1a657e61d395450afec47/codec-http/src/main/java/io/netty/handler/codec/http/multipart/HttpPostMultipartRequestDecoder.java#L1187
In my case posDelimiter is 0, so the content is never added to the upload.