Requests with Large POST Bodies Immediately Go into Delayed Cancellation Right After the Server Receives the Request #460
Comments
Hi @raphaelNguyen, thanks for the report. Can you give a more high-level description of the problem? What are you trying to achieve, with which code, and what happens in the error case?

The difference between the cases is that in the error case, the data comes in multiple chunks. There is no guarantee on how data is split up in the entity data stream. On localhost and without TLS, it is far more likely that data is received in bigger chunks than over an actual connection with higher latency and possibly chunking caused by TLS and other networking components. In any case, as said before, there's no guarantee on how the data is split up. Making a guess, the problem here is that some component expects data to be split up in a certain way (or not at all). Any previous solution might have been accidental, based on lucky timing.

Your logs show [...]
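The point about chunk boundaries can be illustrated with a small self-contained sketch (plain Scala, no pekko dependencies; the object and method names here are mine, not from the issue): the same payload may arrive as one chunk or as many, and a consumer must only rely on the concatenation, never on where the boundaries fall.

```scala
// Sketch: the entity data stream may deliver the same payload split into
// arbitrary chunks; correct consumers reassemble before interpreting.
object ChunkingDemo {
  // Reassemble chunks in order; correct regardless of how the bytes were split.
  def assemble(chunks: Seq[String]): String = chunks.foldLeft("")(_ + _)

  def main(args: Array[String]): Unit = {
    val payload = "data=" + "0" * 100

    val oneChunk   = Seq(payload)             // localhost: often a single chunk
    val manyChunks = payload.grouped(7).toSeq // via nginx/TLS: many small chunks

    // Both arrivals carry the same payload once reassembled.
    println(assemble(oneChunk) == assemble(manyChunks)) // true
    println(assemble(manyChunks).length)                // 105
  }
}
```

A consumer that keys its behavior off individual chunk sizes (instead of the assembled result) only works by coincidence of timing, which matches the guess above.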
This seems to be an [...]
I can somewhat reproduce your issue, assuming that you cancel the request stream after having read all bytes (which is somewhat expected based on the [...]).

Here's the test, to be put into [...]:

```scala
"round-trip with HTTP/1.0 POST request with Default entity (entity received in bits) and Connection: close" in assertAllStagesStopped(new TestSetup {
  send(
    """POST / HTTP/1.0
      |Host: example.com
      |Connection: close
      |Content-Length: 12
      |
      |abcdef""")
  val entity =
    inside(expectRequest()) {
      case HttpRequest(POST, _, _, entity, _) =>
        entity.contentLengthOption shouldEqual Some(12)
        entity
    }
  requests.request(1) // emulate that the server will pull in more requests early
  val dataProbe = ByteStringSinkProbe()
  entity.dataBytes.runWith(dataProbe.sink)
  dataProbe.expectUtf8EncodedString("abcdef")
  send("ghijkl")
  dataProbe.expectUtf8EncodedString("ghijkl")
  dataProbe.cancel()
  // proper user behavior instead of cancel makes the test pass:
  // dataProbe.request(1)
  // dataProbe.expectComplete()
  // give the server some time to propagate messages / prepare the response
  Thread.sleep(2000)
  responses.sendNext(HttpResponse(protocol = HttpProtocols.`HTTP/1.0`))
  expectResponseWithWipedDate(
    """HTTP/1.0 200 OK
      |Server: pekko-http/test
      |Date: XXXX
      |Content-Length: 0
      |
      |""")
  shutdownBlueprint()
})
```

Somewhat similar previous instance: akka/akka-http#3458

(The above explanation about why the localhost version works is still valid: the request is received fully -> HttpEntity.Strict, the data is now fully detached from the http connection, so whatever you do has no consequence on the actual connection.)

I can remember a previous issue of [...]
Hi @jrudolph. Thank you for spending the time diving into this issue. I'm learning a lot here. I'll try to answer your questions to the best of my abilities, but let me preface all this by stating that I'm using [...]
After upgrading to [...]

Turning on all debug logging shows me the debug message from [...]

I can reproduce the issue with a bare-boned [...]

After posting this issue, I was also able to reproduce the problem in [...]
Monitoring the TCP/TLS communication, I can see that when directly hitting localhost, the data is being sent in 1 chunk. However, in the case where I sent an 18356-byte request via nginx (1 byte less than the error case), the TCP/TLS communication shows the same traffic pattern as the error case, but my breakpoint logging shows that it's also being received in 1 chunk and did not error out, which I find interesting.
To the best of my knowledge, these are not based on my code, as I do not interact with [...]
As far as I know, my nginx is performing the same as before I upgraded to [...]
Potentially new play behaviour, as I don't touch the stream directly. I will dig into the play code a bit and maybe take this problem to them as well. Thank you again for all your assistance thus far in diagnosing this issue. If I find any more information, I'll continue to update you here.
After some tweaking, I was able to reproduce the issue in my minimal play application without going through nginx.
I can reproduce this in Play; it's because we have HTTP pipelining enabled by default, see [...]
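For context, a minimal sketch of turning pipelining off at the server level, assuming the standard pekko-http server settings (the key below mirrors akka-http's `akka.http.server.pipelining-limit`; the Play-specific default mentioned above is wired separately and is not shown here):

```hocon
# application.conf — sketch, assuming standard pekko-http server settings.
# With a limit of 1 the server reads at most one request at a time per
# connection, effectively disabling HTTP pipelining.
pekko.http.server.pipelining-limit = 1
```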
@mkurz does this work with Akka HTTP, i.e. have we broken this in Pekko HTTP?
@pjfanning Behaviour in Akka HTTP 10.2.x and Pekko HTTP v1 is the same, so you did not break anything.
Thank you @jrudolph and @mkurz and others for looking into this issue. This issue has been resolved for me by playframework/playframework#12351
Environments

- `pekko-http:1.0.0`
- `pekko:1.0.1`
- `play-framework:3.0.0`

Issue also observed with the following dependencies:

- `akka-http:10.2.10`
- `akka:2.6.21`
- `play-framework:2.9.0`

Issue description
I use `pekko` and `pekko-http` via `play-framework`, and the following issue seems to only be reproducible when I receive the request via my nginx. If I hit my server through `localhost:<server-port>` with the same large request, the request does not suffer the same issue.

When the server receives a request with a large POST body (~18400 bytes or more) from nginx, the request immediately goes into delayed cancellation after the request was received, with the below debug log. Some debugging shows that this happens before the request is routed to the corresponding play `Action` and gets processed. If the server takes longer than the `linger-timeout` (default of 1 minute) to process the request, the connection is cancelled by the server.

In the past, I've encountered these symptoms when using `play-framework:2.8.x` (`akka:2.6.x` and `akka-http:10.1.x`). At the time, I found that upgrading `akka-http` to `10.2.1` fixed the issue. However, after upgrading to `playframework:3.0.0` and adopting `pekko`, the same symptoms can be observed again. I've confirmed that the issue also happens on `playframework:2.9.0` and its `akka` versions (listed above).

I'm not sure if this issue has been reintroduced in `akka-http:10.2.10` and `pekko` has inherited it, and would like to ask for your assistance.

Debug Information
While I do not have a good minimal reproducible case to report, I've obtained some debug information that I hope can help in figuring out the issue.
These logs contain the information that would have been printed by the following lines...
https://github.com/apache/incubator-pekko/blob/cfff9c53df859bb0f4407caf4821e7831dabeb19/stream/src/main/scala/org/apache/pekko/stream/impl/fusing/GraphInterpreter.scala#L551-L553
https://github.com/apache/incubator-pekko/blob/cfff9c53df859bb0f4407caf4821e7831dabeb19/stream/src/main/scala/org/apache/pekko/stream/impl/fusing/GraphInterpreter.scala#L563-L564
https://github.com/apache/incubator-pekko/blob/cfff9c53df859bb0f4407caf4821e7831dabeb19/stream/src/main/scala/org/apache/pekko/stream/impl/fusing/GraphInterpreter.scala#L517-L519
... and the `cause` for cancellation at https://github.com/apache/incubator-pekko/blob/cfff9c53df859bb0f4407caf4821e7831dabeb19/stream/src/main/scala/org/apache/pekko/stream/impl/fusing/GraphInterpreter.scala#L522
The POST body sent is a `text/plain` body with the content `data=0000000000[...]`, with the number of zeros specified by me to control the size of the body. The log contains all `PUSH`, `PULL` and `CANCEL` events up to the time when the request gets to the play `Controller`.

Thank you very much for your time, and if you require more information, please do not hesitate to ask.
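The body construction described above can be sketched as a small helper (plain Scala; the object and method names are hypothetical, not from the issue), which makes it easy to probe the ~18400-byte failure threshold by varying the zero count:

```scala
// Sketch: build the text/plain POST body "data=000...0" with a configurable
// number of zeros, as described in the issue, to control the body size.
object BodyGen {
  def body(zeros: Int): String = "data=" + "0" * zeros

  def main(args: Array[String]): Unit = {
    val b = body(18400)
    println(b.length)   // "data=" (5 chars) + 18400 zeros = 18405
    println(b.take(10)) // data=00000
  }
}
```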