Missing final empty chunk when closing connection #805
Comments
It is very strange that Jetty is using a chunked response if there is a Connection: close. There is no need to chunk if the message is EOF terminated. Are you somehow forcing chunking in the response?
I am using a …. I would say that chunking and connection keep-alive are orthogonal issues.
Jetty should not pick chunking if it knows that the response is Connection: close, because that means it is EOF framed and thus there is no need to go to the effort of chunking. Can you send the headers of your request and response, and I will see if I can reproduce.
The request is setting the Connection: close header. I am able to reproduce with a simple GET request setting only the Connection: close header.
A request with Connection: close will cause a response to be created also with Connection: close. A response with Connection: close should not be chunked. I've created a simple server to demonstrate this by sending two requests, one with Connection: close and one without. The output shows the first response is correctly chunked (and terminated), while the second response is not chunked and does not need termination. So I cannot see how you are creating a response that is both closed and chunked.
In an earlier comment, you noted that the request will be one of EOF, Content-Length, Chunking. However, this is not an accurate classification: there is no EOF in HTTP. The choice is only content-length vs chunking. I can find no mention of any relationship between non-persistent connections and restrictions on chunking in the spec, and as I noted above, there are definitely valid use cases for chunking a non-persistent connection. I'm not sure what's forcing the chunking. I'll dig a bit to see if I can find the responsible configuration/filter to assist in reproducing.
@cconroy let me assure you that EOF is and has always been a valid message framing in HTTP. One of the first issues I encountered writing Jetty 20 years ago was that Java 0.9 did not have an explicit socket close API! So trust me on this one! Or if not, then at least check the RFCs. But it is also true that it is not illegal to send a chunked response AND Connection: close; it is just that, left to its own devices, Jetty will never do that. So unless I can reproduce it, I can't see why we may be incorrectly leaving out the end chunk. Are you explicitly setting the Transfer-Encoding? If so, that's a bad thing to do, as HTTP 0.9, 1.0 and 2.0 all do not support chunking, so your application should not be messing with those headers.
If I read RFC 7230 correctly, the presence of a chunked transfer encoding (point 3) takes precedence over the possibility of the server closing the connection (point 7). In general, it seems bad to use a server close of the connection to terminate the response: the client cannot distinguish between the server intentionally closing the connection and a network or intermediate proxy terminating the connection. So, even if it were technically valid to close the connection in this case, it is strictly better to first send the final chunk (5 bytes) so that the client can be sure it has received the entire response. FWIW, this application only supports HTTP/1.1 clients. I am not explicitly setting the transfer-encoding anywhere in the application. I'll ping this thread once I figure out what is triggering the chunking.
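The integrity argument above can be illustrated with a small client-side check. This is a hypothetical sketch (the class name and API are invented for illustration, not part of Jetty or any HTTP client library): it scans a chunked-encoded body and reports whether the terminating 0-length chunk arrived before the data ended.

```java
// Hypothetical client-side validator (invented for illustration; not a
// Jetty or JDK API). A body that lacks the last-chunk is treated as truncated.
public class ChunkedValidator {

    // Returns true only if the chunked body ends with a proper
    // last-chunk ("0\r\n\r\n"); false means the stream was truncated.
    public static boolean isComplete(byte[] body) {
        int pos = 0;
        while (true) {
            int eol = indexOfCrlf(body, pos);
            if (eol < 0)
                return false;                       // EOF before a chunk-size line
            int size;
            try {
                size = Integer.parseInt(new String(body, pos, eol - pos).trim(), 16);
            } catch (NumberFormatException e) {
                return false;                       // garbage where a size belongs
            }
            pos = eol + 2;                          // skip the CRLF
            if (size == 0)                          // last-chunk: require final CRLF
                return pos + 1 < body.length
                        && body[pos] == '\r' && body[pos + 1] == '\n';
            pos += size + 2;                        // skip chunk data + its CRLF
            if (pos > body.length)
                return false;                       // truncated mid-chunk
        }
    }

    private static int indexOfCrlf(byte[] b, int from) {
        for (int i = from; i + 1 < b.length; i++)
            if (b[i] == '\r' && b[i + 1] == '\n')
                return i;
        return -1;
    }
}
```

With a check like this, a connection that closes without the final `0\r\n\r\n` is reported as truncated rather than silently accepted, which is exactly the distinction curl-style lenient clients miss.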
@cconroy I'm not disagreeing that chunking is a better framing mechanism than Connection: close. Nor am I disagreeing that if chunking and Connection: close are used together, the end chunk must be sent. But in this case, the client has requested Connection: close, so Jetty validly chooses not to chunk, as was the norm for many years before HTTP/1.1 was available. So there may well be a bug in Jetty here, but you have to throw me a bone and tell me how you are creating the situation that causes it. From what you have told me, Jetty should not even be chunking, so there is something you are not telling me. How about taking my simple server pasted above and updating it so you can reproduce the problem?
@gregw I tried to reproduce with your example but was unable to. So far I haven't been able to find the culprit in the stack of my application. The use of chunking does appear to be related to size: smaller responses are getting a Content-Length, while larger ones are chunked.
So smaller responses fit within Jetty's aggregation buffer; once the output stream is closed or the response completed, Jetty can look at that buffer and set a Content-Length. If the response is committed because the aggregate buffer overflows (or from an explicit flush), then Jetty has to commit the headers before it has seen all the content and thus cannot determine a Content-Length. So that part sounds like normal Jetty behaviour, but for some reason, in both cases, it is acting like there is no Connection: close. Can you capture the headers of the request and response that are causing the problem on your server?
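The decision just described can be sketched as a simplified decision table. This is an illustration of the behaviour explained in the comment, not Jetty's actual code: if the whole response fits in the aggregation buffer, the size is known at commit time; otherwise the server chooses between chunking and EOF framing based on whether the connection is persistent.

```java
// Illustrative decision table (not Jetty's implementation) for framing an
// HTTP/1.1 response, following the behaviour described in the comment above.
public class Framing {
    public enum Mode { CONTENT_LENGTH, CHUNKED, EOF_TERMINATED }

    // fitsInAggregateBuffer: the response completed before the buffer
    //   overflowed, so the total size is known when headers are committed.
    // connectionClose: the exchange carries Connection: close.
    public static Mode choose(boolean fitsInAggregateBuffer, boolean connectionClose) {
        if (fitsInAggregateBuffer)
            return Mode.CONTENT_LENGTH;  // size known: set Content-Length
        if (connectionClose)
            return Mode.EOF_TERMINATED;  // non-persistent: EOF framing suffices
        return Mode.CHUNKED;             // persistent + unknown size: must chunk
    }
}
```

The bug report, in these terms, is that responses were observed in a state the table above should never produce: chunked framing together with Connection: close, but without the terminating chunk.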
@cconroy any progress on working out how to reproduce this? I've tried explicitly setting the transfer-encoding and forcing Connection: close when the client does not ask for it, but I cannot get Jetty to send a chunked response with Connection: close, at least not with 9.3.x. So I'm closing for now; please reopen if you can reproduce.
@gregw I was unable to get the simple server to reproduce, but unfortunately I also could not track down a root cause in my application. Thank you for looking into this. I will try a few more things and hopefully can track down the culprit.
Is your Jetty being fronted by something? A proxy? A load balancer? Nginx? HAProxy? Just curious if something else is adding the Connection: close header.
@gregw Perhaps this is entering new-ticket territory, but I'd like to ask, since we were bitten by this. As @cconroy mentioned, a client that cares about data integrity and wants to validate that it received a complete (non-truncated) response needs either a Content-Length header or proper HTTP chunking terminated by the final empty last-chunk (a logical end-of-transmission). TCP connection closure alone, without one of those two, does not offer completeness guarantees. This is independent of keep-alive behavior.

Would it not make sense for Jetty to adopt the safer behavior? If a client declares it's HTTP/1.1 (and so supports chunking), then streamed responses (no known content-length) should be chunked, irrespective of keep-alive behavior.
@minaguib you are misinterpreting the HTTP specs, both the older RFC2616 and the updates in RFC7230. Some relevant sections in RFC2616: https://tools.ietf.org/html/rfc2616#section-3.6
https://tools.ietf.org/html/rfc2616#section-4.4
https://tools.ietf.org/html/rfc2616#section-8.1
Some relevant sections in RFC7230: https://tools.ietf.org/html/rfc7230#section-3.3.1
https://tools.ietf.org/html/rfc7230#section-3.4
https://tools.ietf.org/html/rfc7230#section-9.6
In short, if the client or the server indicates Connection: close, the connection is not persistent. This style of response, a raw body with connection close as the signal that the content is complete, isn't unique to Jetty, or to HTTP/1.1. Chunked Transfer-Encoding is a concept that only exists in HTTP/1.1, and only for persistent connections. If you need to validate that the content is actually complete, you have to do it in your User-Agent, per the recommendations in the specs.
This only states that chunked transfer coding is an HTTP/1.1 feature. What @minaguib is trying to say is that whether chunked encoding may be used depends on the HTTP version, not on the presence of the Connection: close header.
FWIW, Nginx will happily return a chunked response even when "Connection: close" is specified on the request. Others seem to be encountering this in the wild as well: hyperium/hyper#547 I agree that "Connection: close" shouldn't imply no chunking when content-length isn't available. |
To address @minaguib and his statement above: just because you are using chunking does not mean that the entire response content was received and/or sent. The signals:

- the final 0-length chunk of a chunked response, and
- the closing of the connection

mean the same thing: that the server will send no more response body content on this specific HTTP exchange. If this is a persistent connection, then the User-Agent's HTTP parser will note that the exchange is complete and that the next content it receives is for a different HTTP exchange. It does not mean that the entire intended body content was received; if the User-Agent cares about that, it has to validate completeness itself. This is either done with knowledge of the content (like if the final closing element of the XML, or the final block close of the JSON, is present, or the image file can be fully decoded, etc.), or with some information in the metadata (response headers) that helps the User-Agent validate that the entire response content was received. If an error (servlet exception, threading exception, IO exception, runtime exception, throwable, etc.) occurs during the operations on the server side to write the response (blocking write, async I/O WriteListener, AsyncContext, non-connection timeouts, etc.), the stream simply ends. Using responses without a Content-Length means the response body content is essentially a stream. A stream that can just end, normally or in error. A normal end is just a stream that stops, not that it's "complete" or "finished" or any other meaning you are associating with message integrity.
@joakime What you're saying "Chunked and final 0 block and Closed Connection mean the same thing" seems to contradict the following section of RFC7230:
Why would a server choose to send a "chunked 0 final block" if an error occurs? |
@joakime Let me reiterate a point I made above: even if it were technically valid to close the connection where we have a Connection:close header and are using chunked, it is strictly better to first send the final 0 byte chunk (5 bytes) so that the client can be sure it has received the entire response.
Indeed: if a chunked stream encounters an unexpected error, then it should close the connection as that is the only way it can signal that something went wrong. However, if it finishes successfully it should send the terminating 0 chunk and then close so that the client can distinguish between these two cases. |
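The success/error distinction argued for above can be sketched with a hypothetical helper (ChunkedWriter is invented for illustration, not a Jetty API): on success the writer emits the terminating 0-chunk before the connection is closed; on error the caller skips that step, and the absent `0\r\n\r\n` tells the client the response was truncated.

```java
// Hypothetical helper (not a Jetty API) that frames body data as HTTP/1.1
// chunks. A StringBuilder stands in for the socket so the sketch is runnable.
public class ChunkedWriter {
    private final StringBuilder wire = new StringBuilder();

    // Frame one chunk: hex size, CRLF, data, CRLF.
    public void writeChunk(String data) {
        wire.append(Integer.toHexString(data.length()))
            .append("\r\n").append(data).append("\r\n");
    }

    // Call ONLY when the whole body was produced successfully: emits the
    // terminating last-chunk. On an error the caller skips this and closes
    // the connection, so the missing "0\r\n\r\n" signals truncation.
    public void finish() {
        wire.append("0\r\n\r\n");
    }

    public String wireBytes() { return wire.toString(); }
}
```

The point of the design is that the error path requires no extra bytes at all: simply not calling finish() before closing is itself the error signal.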
The following is what we had been hoping would be possible using Jetty: "If no Content-Length header field is present and Streaming Miss is in effect, Fastly will stream the content back to the client. However, if while streaming the response body Fastly determines that the object exceeds the maximum object size, it will terminate the client connection abruptly. The client will detect a protocol violation, because it will see its connection close without a properly terminating 0-length chunk." It would be unfortunate to have to add an application-specific stream terminator when HTTP chunked encoding should support differentiating the two cases. |
@joakime I think the most relevant portion of RFC7230 is 3.3.3:
I read this as stating that this behavior is acceptable but not encouraged. |
@tokenrove that's for network failure, which is only a small fraction of the error conditions that can cause a response stream to finish. Application errors, for example, have to be handled by the application, and since there is no facility in the Servlet spec to close or terminate an active (committed) response, you are stuck with returning from the Servlet, which closes the stream normally. The only situation where the servlet container can safely determine that the response is invalid is if the Content-Length was specified and that amount of content was not sent, allowing the server to forcibly close the connection.
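The one determination described above can be expressed as a tiny predicate. This is hypothetical code, not part of the Servlet API: the container knows the response is incomplete only when a declared Content-Length was not satisfied.

```java
// Hypothetical predicate (not a Servlet API): the only case where a container
// can safely decide a response is incomplete and must hard-close the connection.
public class CompletionCheck {
    // declaredContentLength < 0 means no Content-Length header was set.
    public static boolean mustAbortConnection(long declaredContentLength, long bytesWritten) {
        return declaredContentLength >= 0 && bytesWritten < declaredContentLength;
    }
}
```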
@cconroy regarding the statement above: this kind of mixing of framing signals is confusing. As Greg pointed out, once you have Connection: close, the response is EOF-framed and chunking is unnecessary.
@joakime I tried throwing a RuntimeException after writing a bunch of data on the ServletOutputStream and I don't see a "chunked 0 final block" written.
|
I have a related question: I want to abort a chunked output stream. I have an application that does dynamic streaming using Transfer-Encoding: chunked.
Thanks |
Throwing is probably the most portable way: the Servlet container should attempt to generate an error page, find that the response is already committed, and hard-close the connection.
Hi Simone, is there anything in the Servlet 3.1 spec that says a container "should" hard-close the connection in this case? Jetty and Tomcat do, but I know of one vendor's Servlet 3.1 container that does not, and I would really like to pin them to the spec and get it fixed.
When Jetty is doing chunked encoding and the client sets the request header Connection: close, Jetty writes out the response data and terminates the connection, but it fails to write the final empty chunk to signal the termination of the chunked response. The missing data in hex is 30 0d 0a 0d 0a for the terminating 0\r\n\r\n.
Some clients (e.g. curl) appear to be okay with this and simply assume that whatever content they have received until the closed connection is the complete response. However, a client should not accept such a response: it has no way of knowing if the response completed successfully or if the connection was prematurely terminated.
I discovered this issue on Jetty 9.2.15.v20160210