
Expect continue #204

Open
pschrammel opened this issue Apr 15, 2023 · 10 comments

@pschrammel
Contributor

Hi,
curl and also browsers send POST requests with an Expect: 100-continue header if the body is big enough. This is a nice feature, but it is hard to implement correctly in Rack. As far as I understand it, the server first has to deliver the interim 100 Continue response, then the sender continues the upload, AND the server still sends a full (final) response afterwards.
Puma handles this quite hackishly: https://github.com/puma/puma/blob/87c052f514488286a9ee70855db8a265c90a4dbb/lib/puma/client.rb#L340
and defeats the purpose of expect-continue (the server should be able to respond with a non-100 status and reject the POST if it's too big or has other issues - but that should be the app's decision, not the server's).

Is there a clean way to respond to expect-continue and decide how to continue (100 or something else)?
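
To make it concrete, here is a minimal sketch of the kind of decision I'd like the app to be able to make, assuming the server passes the header through as HTTP_EXPECT (CGI-style) and hasn't already sent 100 Continue on its own (the middleware name and size limit are made up):

# Hypothetical middleware: reject oversized uploads before the body is read.
class UploadGate
  MAX_BODY = 10 * 1024 * 1024  # arbitrary example limit

  def initialize(app)
    @app = app
  end

  def call(env)
    if env["HTTP_EXPECT"].to_s.casecmp("100-continue").zero? &&
       env["CONTENT_LENGTH"].to_i > MAX_BODY
      # Never touch env["rack.input"], so the body is never pulled in.
      return [417, { "content-type" => "text/plain" }, ["Expectation Failed"]]
    end
    @app.call(env)
  end
end

The point is that the 100 vs. non-100 decision would sit in the app, not in the server.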

@nateberkopec

I'm interested in @ioquatix's thoughts here, so that any Puma implementation doesn't get too far out of whack with what other servers might want to do.

@pschrammel
Contributor Author

I have an example hack here: https://github.com/socketry/falcon/pull/206/files#diff-32e79c935c64dde33bd327b3b8c530bc4b842262ec53092b82830f7bb0177184 - it's not clean, but it works. My example still has other issues, though: under load it doesn't work (several uploads fail). I didn't have time for a thorough debugging.

@ioquatix
Member

ioquatix commented Aug 15, 2023

I think most people think of HTTP as a request/response protocol.

Intermediate non-final responses break 99% of the interfaces people actually code against, e.g.

response = client.request("GET", "/")

# and

def server(request)
  return [200, ...]
end

The only solution I have to this is to consider some kind of response chaining, e.g.:

response = client.request("GET", "/")
response.status # 100
while !response.final?
  response = response.next
end
response.status # 200 or something else

(I'm not even sure how you'd do this with a post body - waiting for 100 continue on the client and then posting the body as a 2nd request??).
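
As far as I can tell, on the wire it wouldn't be a second request: the client writes the header block, waits briefly for an interim response, and then either streams the body on the same connection or gives up. Roughly like this hand-rolled sketch (host and payload made up):

require "socket"

body = "x" * 1_000_000
sock = TCPSocket.new("example.com", 80)
sock.write(
  "POST /upload HTTP/1.1\r\n" \
  "Host: example.com\r\n" \
  "Content-Length: #{body.bytesize}\r\n" \
  "Expect: 100-continue\r\n" \
  "\r\n"
)

# Read one response head (status line plus headers, up to the blank line).
read_head = lambda do |io|
  head = io.readline
  head << io.readline until head.end_with?("\r\n\r\n")
  head
end

status = read_head.call(sock)
if status.start_with?("HTTP/1.1 100")
  sock.write(body)              # server said go ahead: now send the body
  status = read_head.call(sock) # and read the real, final response
end
# otherwise the first response was already final (e.g. 413) and no body was sent
puts status[/\A[^\r\n]+/]

A real client would also put a timeout on that wait, since it's allowed to give up waiting and send the body anyway.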

I don't think the benefits of non-final responses are commensurate with the interface complexity they introduce. That's my current opinion, but I'd be willing to change it if there were some awesome new use case I'm not familiar with.

In addition, HTTP/2 already has stream cancellation for this kind of problem, and there is nothing wrong with cancelling a stream mid-flight, even for HTTP/1 - it's not as efficient as HTTP/2+ but it has a consistent semantic.

So, in summary: I think provisional responses introduce significant complexity to the interface on both the client AND the server, and I don't think the value they add in a few small corner cases is worth that complexity. Remember that every step along the way, including proxies etc., has to handle them correctly.

The question in my mind is, do we want request/response or request/response/response/response/response?

@dentarg

dentarg commented Aug 15, 2023

My motivation for opening the issue on Puma was puma/puma#3188 (comment): redirecting where an upload should go (e.g. my client POSTs to my app, the app responds with a redirect to some cloud storage – the app has generated the (temporary) URL for that, so my clients don't need to know how to auth with the cloud storage, just with my app).
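
Roughly, the app side of that could be as small as this (just a sketch; the presigned-URL helper is hypothetical, and which 3xx status to use is its own discussion):

# Sketch: never read the upload body, just point the client at storage.
class UploadRedirector
  def call(env)
    if env["REQUEST_METHOD"] == "POST" && env["PATH_INFO"] == "/upload"
      url = presigned_upload_url(ttl: 300)  # hypothetical helper for the storage URL
      # 307 preserves the method and body; the whole point of 100-continue
      # here is that the client ideally hasn't sent the body to *this* app yet.
      [307, { "location" => url }, []]
    else
      [404, {}, []]
    end
  end
end

The value of 100-continue in this flow is that the redirect can arrive before the client has uploaded anything, which only works if the server lets the app answer before it sends 100 Continue itself.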

The similar(?) feature 103 Early Hints was implemented in Puma, and also in Falcon, many years ago. A few years later rack/rack#1692 was opened, and also rack/rack#1831, which started to discuss 100-continue too. It looks like the goal was to include this in Rack 3, but obviously that didn't happen. :)

Not sure where we go from here; I still have much to read up on regarding Rack and HTTP/2 and so on.

@dentarg

dentarg commented Aug 15, 2023

@ioquatix here's my idea for Puma: puma/puma#3188 (comment)

@ioquatix
Member

@dentarg That's an interesting use case.

Are there any clients that actually support the redirect mechanism as outlined?

@dentarg

dentarg commented Aug 15, 2023

@ioquatix
Member

Any others?

@dhavalsingh

dhavalsingh commented Aug 16, 2023

akka/akka#15799 (comment)
This is a pretty good discussion of this issue - how clients should (and actually do) handle the 100-continue header.

@ioquatix
Member

ioquatix commented Aug 16, 2023

That's a great link. One part that stands out to me:

It is amazing how many people got it wrong.

There be dragons?

I think my interpretation most closely aligns with akka/akka#15799 (comment)

However, I appreciate that this might be possible to implement purely as an internal detail. If that's true, what's the advantage?

This is not 100% correct. 100-Continue is only an optimization technique, not a protocol constraint.

This gives me some hope that maybe we don't need to expose it to the user... but it's followed by this:

This compounds the implementation complexity on the client-side: In the presence of an Expect: 100-continue request header the client must be prepared to either see 1 or 2 responses for the request, depending on the status code of the first response.

The level of complexity seems pretty high to me... and it's followed by this:

Many clients got the specs wrong, and that's why many servers actually always force Connection:close on any error response when expect-continue was in use. If you still have doubts, consult this answer from Roy Fielding: https://lists.w3.org/Archives/Public/ietf-http-wg/2015JulSep/0324.html

Which makes me think the whole thing is not worth pursuing, unless you enjoy suffering through the implementation and all the compatibility issues... Even in the best case - rejecting the incoming body - according to Dr Fielding we have to close the connection, so isn't that going to be worse for performance? In other words, isn't it easier to just close the connection if you want to reject the body? Not only that, but latency can be introduced by the client waiting for the 100 Continue status... And I don't know if the original use case of redirecting the POST body is even valid, because apparently the client can just ignore the missing 100 Continue and start sending the body anyway?
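
To spell out that rejection path: per that answer, the server basically has to answer the expectation with a final error, advertise Connection: close, and drop the socket without reading the body - something like this purely illustrative bit of socket handling:

# Illustrative only: real servers do this inside their protocol layer.
def reject_expectation(sock, status_line = "HTTP/1.1 413 Content Too Large")
  sock.write(
    "#{status_line}\r\n" \
    "Connection: close\r\n" \
    "Content-Length: 0\r\n" \
    "\r\n"
  )
  sock.close # the connection can't be reused, which is the performance cost
end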

Finally, probably the biggest bias I have is that this problem is already solved in HTTP/2+, since closing a stream is so easy... Maybe for HTTP/1 it kind of sucks, but for HTTP/2+ I feel like this is a non-issue.

I'm still intrigued and interested in where this discussion goes, but I'm not sure I have the patience to actually do the implementation...
