
HCM supports configuring keep-alive headers #33593

Open
johnlanni opened this issue Apr 17, 2024 · 5 comments · May be fixed by #33838
Labels
area/http enhancement Feature requests. Not bugs or questions. stale stalebot believes this issue/PR has not been touched recently

Comments

@johnlanni
Contributor

johnlanni commented Apr 17, 2024

Title: HCM supports configuring keep-alive headers

Description:


Doc link: https://hc.apache.org/httpcomponents-client-4.5.x/current/tutorial/html/connmgmt.html

When using a lower version (<5.2) of the Java Apache HTTP client, the client determines how long a connection can be reused based on the Keep-Alive header in the backend response. If the Keep-Alive header is not present in the response, HttpClient assumes the connection can be kept alive indefinitely. If Envoy removes this Keep-Alive response header, the client may attempt to reuse a closed connection, resulting in I/O errors.

This is due to the client's incorrect implementation of the connection reuse mechanism, but in our scenario such clients are present on numerous old devices and cannot be upgraded. We therefore hope that Envoy can support a configuration option to not remove this response header. This would help us migrate a large number of Nginx gateways to Envoy gateways.
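To make the client behavior concrete, here is an illustrative sketch (in Python, not the actual Apache HttpClient source) of how a pre-5.2 HttpClient-style keep-alive strategy derives the reuse window from the Keep-Alive response header, including the problematic "no header means reuse forever" assumption:

```python
def keep_alive_duration_ms(headers: dict) -> float:
    """Return how long (in ms) the connection may be reused.

    Mirrors the behavior described in the HttpClient tutorial linked
    above: parse 'timeout=<seconds>' from the Keep-Alive header; if the
    header is missing or malformed, assume the connection can be kept
    alive indefinitely (the buggy assumption this issue works around).
    """
    value = headers.get("Keep-Alive", "")
    for part in value.split(","):
        name, _, val = part.strip().partition("=")
        if name.lower() == "timeout" and val.isdigit():
            return int(val) * 1000
    return float("inf")  # no Keep-Alive header -> reuse "forever"

print(keep_alive_duration_ms({"Keep-Alive": "timeout=60, max=100"}))  # 60000
print(keep_alive_duration_ms({}))                                     # inf
```

When Envoy strips the hop-by-hop Keep-Alive header, such a client falls into the second case and may reuse a connection the proxy has already closed.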

(The following content was added in a later edit.)

I realize that simply not clearing the hop-by-hop headers is not an accurate way to solve this issue. What we actually need is for the Envoy proxy layer to set the Connection and Keep-Alive headers itself, in order to accommodate these special clients, rather than passing the headers through from upstream. However, when response headers are mutated, Envoy currently does not distinguish whether these headers were added through response_headers_to_add in the route config or returned by the upstream, which prevents us from setting them.

A reasonable solution I have considered is modeled on the keepalive_timeout directive in Nginx: when handling HTTP/1, reflect the idle_timeout in a Keep-Alive response header, but do not set it for HTTP/2 or HTTP/3.

@johnlanni johnlanni added enhancement Feature requests. Not bugs or questions. triage Issue requires triage labels Apr 17, 2024
@johnlanni johnlanni changed the title HCM supports configuring whether to enable clear_hop_by_hop_headers HCM supports configuring whether to clear hop by hop response headers Apr 19, 2024
@adisuissa adisuissa added area/http and removed triage Issue requires triage labels Apr 19, 2024
@adisuissa
Contributor

cc @alyssawilk @RyanTheOptimist IIRC you worked on the previous hop-by-hop headers removal.

@alyssawilk
Contributor

When using a lower version (<5.2) of the Java Apache HTTP client, the client determines how long a connection can be reused based on the Keep-Alive header in the backend response. If the Keep-Alive header is not present in the response, HttpClient assumes the connection can be kept alive indefinitely. If Envoy removes this Keep-Alive response header, the client may attempt to reuse a closed connection, resulting in I/O errors.

I think there's a misunderstanding of how keep-alive works.

If you have upstream ---- Envoy --- client
the upstream keep-alive header tells Envoy how long the connection between the upstream and Envoy can be used. There's no correlation between the lifetime of that connection and the Envoy-client connection.

Imagine the client sending request 1 over connection 1. This causes Envoy to fetch connection A upstream. In the response on connection A, the upstream sends a Keep-Alive header. This does not apply to connection 1. If the client reuses connection 1 for request 2, Envoy may use a completely different upstream connection. In this case, forwarding the keep-alive header is going to result in problematic behavior.
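The decoupling described above can be modeled with a toy sketch (hypothetical names; it assumes a simple alternating upstream pool purely for illustration): the same downstream connection is served by different upstream connections on successive requests, so an upstream Keep-Alive value says nothing about the downstream connection's lifetime.

```python
import itertools

class ToyProxy:
    """Toy model of a proxy's upstream connection pool (illustration only)."""

    def __init__(self):
        # Pretend two pooled upstream connections are handed out in turn.
        self._pool = itertools.cycle(["upstream-conn-A", "upstream-conn-B"])

    def proxy(self, downstream_conn: str) -> tuple:
        # Each request is routed over whichever pooled upstream connection
        # is available; here we simply alternate to make the point.
        return (downstream_conn, next(self._pool))

proxy = ToyProxy()
print(proxy.proxy("client-conn-1"))  # ('client-conn-1', 'upstream-conn-A')
print(proxy.proxy("client-conn-1"))  # ('client-conn-1', 'upstream-conn-B')
```

The same client-conn-1 is paired with a different upstream connection on each request, which is why blindly passing through the upstream's Keep-Alive header misleads the client.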

@johnlanni johnlanni changed the title HCM supports configuring whether to clear hop by hop response headers HCM supports configuring keep-alive headers May 1, 2024
@johnlanni
Contributor Author

johnlanni commented May 1, 2024

@alyssawilk You are right; I realize that simply not clearing the hop-by-hop headers is not an accurate way to solve this issue. Sorry for not updating my understanding of the solution in this issue earlier (it has been updated now).
In this PR, my implementation supports setting the keep-alive state for the connection between the downstream and Envoy, instead of directly returning the upstream's keep-alive header.

@johnlanni
Contributor Author

https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout

This nginx directive supports the following configuration:
keepalive_timeout 90s 60s;

This configuration means that the idle timeout for connections between Nginx and the downstream is 90 seconds, and Nginx will return Connection: keep-alive and Keep-Alive: timeout=60 in the response to each HTTP/1 request.

Because Envoy already configures idle_timeout via http_protocol_options, here we add a setting for the keep-alive header's timeout value to HCM's configuration options.
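As an illustration only, the proposed option might look something like the following in an HCM config. Note that keep_alive_header_timeout is a hypothetical field name standing in for whatever this issue's PR introduces; it is not an existing Envoy setting, while common_http_protocol_options.idle_timeout is the real downstream idle-timeout knob:

```yaml
http_connection_manager:
  common_http_protocol_options:
    idle_timeout: 90s              # Envoy <-> downstream idle timeout (real field)
  keep_alive_header_timeout: 60s   # hypothetical: emit "Connection: keep-alive"
                                   # and "Keep-Alive: timeout=60" on HTTP/1
                                   # responses, like nginx's second
                                   # keepalive_timeout argument
```

This mirrors the nginx `keepalive_timeout 90s 60s;` example above: the first value bounds the downstream connection, the second is what gets advertised to the client.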


This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale stalebot believes this issue/PR has not been touched recently label May 31, 2024