
HTTP/2 request processing optionally #2211

Open
splitice opened this issue Jul 29, 2023 · 21 comments

@splitice

PR #2174 removes support for HTTP/2 request body processing.

I think this should still be optionally supported; it is, after all, perfectly serviceable in an HTTP/2 request/response flow. Perhaps a timeout could be added to error out on gRPC (rather than rejecting the request outright)?

@oowl
Contributor

oowl commented Jul 29, 2023

The problem is not limited to gRPC requests; any HTTP/2 traffic may involve a long-lived stream. You can see my analysis in #2172 (comment).

@oowl
Contributor

oowl commented Jul 29, 2023

HTTP/2 is a stream-based protocol; reading the whole body is theoretically impossible, and it does not make sense.

@splitice
Author

The way nginx does it, potentially ignoring Content-Length, seems a bit strange. However, most requests/responses do have finite bodies...

I think a timeout (a maximum time spent reading request data) would be a good idea for sure.
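To make the proposal concrete, a timeout option could hypothetically look like the sketch below. Note that the `read_timeout` parameter and the `ok, err` return values do not exist in lua-nginx-module today; `ngx.req.read_body()` currently takes no arguments. This only illustrates the idea being discussed:

```nginx
location /api {
    content_by_lua_block {
        -- HYPOTHETICAL: imagines an overall cap on the time spent
        -- reading the request body, independent of client_body_timeout
        local ok, err = ngx.req.read_body({ read_timeout = 5000 })  -- ms
        if not ok then
            ngx.log(ngx.ERR, "failed to read body in time: ", err)
            return ngx.exit(ngx.HTTP_REQUEST_TIMEOUT)
        end
        ngx.say(#(ngx.req.get_body_data() or ""))
    }
}
```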

@oowl
Contributor

oowl commented Jul 29, 2023

> I think a timeout would be a good idea (maximum time spent reading request data) for sure.

I agree 😆 cc @zhuizhuhaomeng

@zhuizhuhaomeng
Contributor

zhuizhuhaomeng commented Jul 29, 2023

@oowl I think we need to revert the previous PR and disable this for gRPC only. cc @splitice

> HTTP/2 is a stream-based protocol; reading the whole body is theoretically impossible, and it does not make sense.

Will a large HTTP request body on an HTTP/2 connection block other HTTP requests on the same connection? @oowl

@splitice
Author

It will block to the same extent as HTTP/1.1. That should be acceptable and consistent in behaviour.

@oowl
Contributor

oowl commented Jul 31, 2023

> Will a large HTTP request body on an HTTP/2 connection block other HTTP requests on the same connection? @oowl

No. Nginx processes HTTP/2 with one ngx_http_request_t per stream, so every stream is isolated; a traffic spike on one stream cannot influence other HTTP/2 streams.

I have chatted with @zhuizhuhaomeng on IM about #2174. It cannot simply be reverted, since OpenResty cannot support reading the HTTP/2 request body right now. So maybe we need to treat this as a feature (read_body in HTTP/2), and I think a read_body timeout is a good idea for HTTP/2 traffic. @zhuizhuhaomeng @splitice

It's also blocked in test-nginx: https://github.com/openresty/test-nginx/blob/master/lib/Test/Nginx/Util.pm#L2910

@swananan
Contributor

swananan commented Sep 5, 2023

Hi, HTTP/3 also faces similar issues in OpenResty. I have been thinking about how to address this problem. For achieving full-duplex communication in both HTTP/2 and HTTP/3, we can consider extending the functionalities of ngx.req.socket to enable handling the traffic interaction within a single stream in OpenResty's Lua module. This is because there are still several limitations when it comes to the read_body API, even with timeout support.

In my opinion, we could tackle this issue with a two-step solution:

Step 1: Revert the previous pull request and introduce support for a timeout option in read_body.

Step 2: Check whether ngx.req.socket works properly under HTTP/2 and HTTP/3; if not, enhance its capabilities to accommodate both.

I am willing to take on these tasks to address this issue.
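For context, the kind of per-stream handling Step 2 is aiming for can already be written for HTTP/1.1 with ngx.req.socket; whether this path works (or can be made to work) under HTTP/2 and HTTP/3 is exactly what needs checking. A minimal sketch, assuming an OpenResty build with the Lua module:

```nginx
location /stream {
    content_by_lua_block {
        -- Read the request body incrementally instead of buffering it
        -- all at once with ngx.req.read_body().
        local sock, err = ngx.req.socket()
        if not sock then
            ngx.log(ngx.ERR, "failed to get request socket: ", err)
            return ngx.exit(ngx.HTTP_BAD_REQUEST)
        end
        sock:settimeout(5000)  -- per-receive timeout, in milliseconds

        while true do
            -- receive up to 4 KiB; at end of body the remaining
            -- partial data comes back as the third return value
            local chunk, err, partial = sock:receive(4096)
            local data = chunk or partial
            if data and #data > 0 then
                -- process the chunk here (echoed back in this sketch)
                ngx.print(data)
            end
            if not chunk then
                if err ~= "closed" then
                    ngx.log(ngx.WARN, "body read error: ", err)
                end
                break
            end
        end
    }
}
```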

@splitice
Author

splitice commented Sep 5, 2023

BTW, I've reverted this for two different client applications and continue to see no real problems. We don't have the concern of spiked requests, though.

The problems are really limited to certain use cases, and those could do with a line in the documentation.

@oowl
Contributor

oowl commented Sep 7, 2023

> Hi, HTTP/3 also faces similar issues in OpenResty. I have been thinking about how to address this problem. For achieving full-duplex communication in both HTTP/2 and HTTP/3, we can consider extending the functionalities of ngx.req.socket to enable handling the traffic interaction within a single stream in OpenResty's Lua module. This is because there are still several limitations when it comes to the read_body API, even with timeout support.
>
> In my opinion, we could tackle this issue with a two-step solution:
>
> Step 1: Revert the previous pull request and introduce support for a timeout option in read_body.
>
> Step 2: Check whether ngx.req.socket works properly under HTTP/2 and HTTP/3; if not, enhance its capabilities to accommodate both.
>
> I am willing to take on these tasks to address this issue.

I agree, so we need to know what the OpenResty option is. cc @zhuizhuhaomeng
If you need any help, please feel free to ping me, or I can contribute something.

@zhuizhuhaomeng
Contributor

@splitice Would you please share the config for your use case?

@splitice
Author

What do you want to see, @zhuizhuhaomeng?

@dndx
Member

dndx commented Feb 27, 2024

We recently got affected by this change as well. IMO we should not add too many restrictions to the ngx.req.read_body() API for HTTP/2, as that severely limits the usefulness of the call. client_body_timeout should already handle the timeout issue, and we don't need to invent another parameter for such a rare occurrence.

ngx.req.read_body() should be a thin wrapper around ngx_http_read_client_request_body() only; the potential for an endless stream is already handled by client_max_body_size and client_body_timeout. We should not invent more limitations inside the OpenResty API but leave the freedom to the end user instead.
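For readers following along, the two directives mentioned here are plain nginx configuration and already bound both the size of a request body and the idle time between two successive reads of it. A sketch with illustrative values, assuming an OpenResty build with the Lua module:

```nginx
http {
    client_max_body_size 8m;    # larger bodies are rejected with 413
    client_body_timeout  10s;   # max idle time between two body reads; 408 on expiry

    server {
        listen 443 ssl http2;

        location /upload {
            content_by_lua_block {
                -- thin wrapper around ngx_http_read_client_request_body(),
                -- bounded by the directives above
                ngx.req.read_body()
                local body = ngx.req.get_body_data()
                ngx.say("received ", body and #body or 0, " bytes")
            }
        }
    }
}
```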

@splitice
Author

@dndx I 100% agree with your points.

I think the root of the actual problem is that client_body_timeout applies between reads, which the nginx folks think is OK (I agree) and the OpenResty folks do not. This is definitely not the way it should be fixed.

For immediate relief, fork and revert the commit. That's what I'll be doing for the foreseeable future.

@dndx
Member

dndx commented Feb 27, 2024

> I think the root of the actual problem is that client_body_timeout applies between reads, which the nginx folks think is OK (I agree) and the OpenResty folks do not.

This should not be an issue. Even with a Content-Length provided in the request, the user can send 1 byte every 60 seconds and avoid triggering the timeout for a long time. If this hasn't caused a problem in HTTP/1 then it certainly won't be an issue in HTTP/2 either. Why treat HTTP/2 differently?

@zhuizhuhaomeng
Contributor

If there is no problem getting the HTTP request body, then this restriction should be lifted.
If there are some cases that will time out, then we should add them to the documentation; otherwise, this will break backward compatibility.
@swananan

@swananan
Contributor

> I think the root of the actual problem is that client_body_timeout applies between reads, which the nginx folks think is OK (I agree) and the OpenResty folks do not.
>
> This should not be an issue. Even with a Content-Length provided in the request, the user can send 1 byte every 60 seconds and avoid triggering the timeout for a long time. If this hasn't caused a problem in HTTP/1 then it certainly won't be an issue in HTTP/2 either. Why treat HTTP/2 differently?

Hi, #2174 disabled the ngx.req.read_body API in HTTP/2 due to #2172, and #2237 tried to reduce the limitations by allowing requests with a Content-Length to use the read_body API. I was putting all my focus into preventing #2172 from happening again. Currently, that doesn't seem like a good idea; as @dndx said, we should let the end user make their own choice, especially since OpenResty already supported ngx.req.read_body in HTTP/2.

@zhuizhuhaomeng So #2174 and #2237 should be reverted, and we need to enhance the documentation about the #2172 scenario and give suggestions such as using the HTTP/2 END_STREAM flag and client_body_timeout.

@oowl
Contributor

oowl commented Feb 27, 2024

I agree with reverting #2174. Sacrificing the majority of users' usage for the sake of a technically perfect solution does not seem reasonable in some cases. Apologies for this; I was thinking too much about the technical side before.

@oowl
Contributor

oowl commented Feb 27, 2024

But I want to emphasize that client_body_timeout does not look like a complete solution: it makes ngx.req.read_body() return no content and lets Nginx return a 408 status code directly, without any logs. It is more like a last-resort strategy.
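One partial mitigation for this silent failure mode, when the Lua handler does get to run, is to inspect what ngx.req.read_body() actually produced and log it. A sketch (note that when nginx finalizes the request with 408 itself, the handler may never resume, so this does not cover every case):

```nginx
location /debug-body {
    content_by_lua_block {
        ngx.req.read_body()
        local data = ngx.req.get_body_data()
        if not data then
            -- large bodies may have been buffered to a temp file instead
            local file = ngx.req.get_body_file()
            if file then
                ngx.log(ngx.INFO, "request body buffered to file: ", file)
            else
                ngx.log(ngx.WARN, "no request body was read")
            end
        end
    }
}
```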

@oowl
Contributor

oowl commented Feb 28, 2024

Hi, I have submitted a PR, #2286, to revert the commits above; please help me review it.

@oowl
Contributor

oowl commented Mar 1, 2024

8dec675
6e29c1a
e0d19f7

These PRs have been merged into the master branch, so can we close this issue? Sorry for causing trouble for everyone.


5 participants