
native support in Netty for HTTP/1 persistent connections and pipelining #12311

Open
MCLiTian opened this issue Apr 19, 2022 · 3 comments

@MCLiTian

1. See if others agree that the current support is lacking.
2. Solicit feedback and ideas on how to better support it (both in terms of implementation and how it could be broken down into multiple PRs).

HTTP/2 drastically overhauled the concept of reusable connections in the form of multiplexing and stream IDs. Netty provides excellent support for this feature via its Http2MultiplexHandler, where every request/response transaction is given its own Channel.

Strongly associating a single transaction with a single Channel has many benefits, especially regarding things like the Channel's AttributeMap or stateful pipeline handlers.
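To make the benefit concrete, here is a small stdlib-only Java sketch of the idea behind channel-per-stream (all class names are illustrative, not Netty API): frames from different streams can interleave on one connection, but each stream is routed to its own stateful handler instance, so per-transaction state never leaks between concurrent requests.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of what Http2MultiplexHandler provides: each stream id
// gets its own "child" handler, analogous to a per-stream child Channel.
final class PerStreamDemux {
    // Stand-in for a stateful pipeline handler (e.g. one accumulating a body).
    static final class StreamHandler {
        final StringBuilder body = new StringBuilder();
        void onData(String chunk) { body.append(chunk); }
    }

    private final Map<Integer, StreamHandler> children = new HashMap<>();

    // Frames for different streams may interleave freely; each one is routed
    // to the handler owned by its stream id.
    void onFrame(int streamId, String data) {
        children.computeIfAbsent(streamId, id -> new StreamHandler()).onData(data);
    }

    String bodyOf(int streamId) { return children.get(streamId).body.toString(); }
}
```

Because each stream owns its handler, the handler can keep mutable state (attributes, partial bodies) without any cross-request synchronization, which is exactly what makes stateful pipeline handlers safe in the HTTP/2 multiplexed model.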

@hyperxpro
Contributor

HTTP/1.1 is a simple plain-text protocol. Compared to HTTP/2, HTTP/1.1 does not have any state or stream identification, which makes it difficult to track and monitor. To make HTTP/1.1 pipelining work, we would need a handler to buffer HTTP/1.1 requests and process them one by one. We would have to mark each HTTP/1.1 request with some value (or nonce) to make it identifiable at the connection level, and doing this would increase the response time of HTTP/1.1 over the same connection.
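The nonce bookkeeping described above can be sketched in a few lines of plain Java (all names are illustrative): each request is tagged when written, and since HTTP/1.1 carries no stream id, FIFO order is the only way to correlate a response back to its request.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of connection-level request/response correlation for pipelining.
final class PipelineCorrelator {
    private final Queue<Long> inFlight = new ArrayDeque<>();
    private long nextNonce = 0;

    // Called when a request is written; returns the nonce tagged onto it.
    long onRequestWritten() {
        long nonce = nextNonce++;
        inFlight.add(nonce);
        return nonce;
    }

    // Called when a response arrives; HTTP/1.1 offers no stream id, so the
    // oldest outstanding nonce is assumed to be the match (strict FIFO).
    long onResponseReceived() {
        Long nonce = inFlight.poll();
        if (nonce == null) throw new IllegalStateException("response without request");
        return nonce;
    }
}
```

Note the cost this models: every response must wait behind the responses of all earlier in-flight requests, which is the head-of-line blocking that stream IDs in HTTP/2 were designed to avoid.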

Coming to the Netty part, this would mean creating new channels every time a new request is received on the same connection.

@hyperxpro
Contributor

@slandelle Your input will be appreciated! :)

@slandelle
Contributor

On the client side, HTTP/1 pipelining is pretty simple to implement:
you write several requests on the channel without flushing, and only flush once a set limit is reached.
Responses must be received in the same order as the requests were written.
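The write-without-flush batching described above can be sketched in plain Java (the channel abstraction here is a stand-in, not Netty's): requests accumulate in a pending buffer and only hit the wire in one burst when the batch limit is reached.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of client-side pipelined batching: write() buffers, flush()
// pushes everything to the wire at once.
final class BatchingClient {
    private final List<String> pending = new ArrayList<>();
    private final List<String> wire = new ArrayList<>(); // stands in for the socket
    private final int flushLimit;

    BatchingClient(int flushLimit) { this.flushLimit = flushLimit; }

    void write(String request) {
        pending.add(request);
        if (pending.size() >= flushLimit) flush();
    }

    void flush() {
        wire.addAll(pending); // one burst instead of one write per request
        pending.clear();
    }

    List<String> wireContents() { return wire; }
}
```

In real Netty terms this corresponds to the distinction between `write()` and `flush()` on a Channel; the sketch only models the buffering policy, not the I/O.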

On the server side, you have to make sure you write the responses in the same order as the requests were read.
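The server-side ordering constraint can also be sketched in stdlib Java (all names are illustrative): responses may complete out of order, for example when handlers finish at different speeds, but each must be parked until every earlier response has been written.

```java
import java.util.HashMap;
import java.util.Map;

// Model of server-side pipelining: responses are written strictly in the
// order their requests were read, regardless of completion order.
final class OrderedResponseWriter {
    private final Map<Integer, String> parked = new HashMap<>();
    private final StringBuilder wire = new StringBuilder();
    private int nextToWrite = 0;

    // seq is the position of the request in the order it was read.
    void onResponseReady(int seq, String response) {
        parked.put(seq, response);
        // Drain every parked response whose turn has now arrived.
        String next;
        while ((next = parked.remove(nextToWrite)) != null) {
            wire.append(next);
            nextToWrite++;
        }
    }

    String wireContents() { return wire.toString(); }
}
```

A slow request thus delays every response behind it, which is one reason pipelining proved fragile in practice.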

It's a kind of exotic use case. Browsers stopped doing that a long time ago (too many broken server implementations).
