HTTP idleTimeout only apply to Clusters, not to the upstream service's HTTP Filters #40619
Comments
I think we have applied this value to the inbound cluster, but it seems we did not apply it to the inbound listener. It may be related to the pod belonging to multiple services, each with a different DestinationRule bound to it. @ramaraochavali @howardjohn Do you know exactly?
Even if the pod were in multiple services, the listener is still per pod, so I would expect a single service per pod, which also maps nicely to a listener/http_connection_manager?
For the inbound listener, you have to set it based on node metadata.
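The node-metadata approach mentioned above can be sketched as a pod annotation. The exact key (`ISTIO_META_IDLE_TIMEOUT`, mapping to the `IDLE_TIMEOUT` node-metadata field pilot reads) and whether your Istio version still honors it are assumptions here; verify against the release you run:

```yaml
# Hypothetical sketch: raise the inbound sidecar's HCM idle timeout
# via proxy metadata (pod name and image below are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: my-app                     # hypothetical name
  annotations:
    proxy.istio.io/config: |
      proxyMetadata:
        ISTIO_META_IDLE_TIMEOUT: "4h"
spec:
  containers:
  - name: app
    image: my-app:latest           # hypothetical image
```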
What is the reason why this wouldn't happen for the client? If we want to make the sidecar "transparent", it should happen both for the client AND the server IMO. If not, we start getting into weird different timeout issues on the mesh (which is what we are seeing).
This would happen at the client: i.e. with service A making a call to service B, the idle timeout of that connection is applied on the client side (service A). It can also be set on the server side, but with a different config, as I mentioned. From the API perspective, the DR comes into the picture when the client is making a call. Brief history on why it was implemented like that: https://github.com/istio/istio/pull/13515/files#r277489637
Thanks @hzxuzhonghu, that's exactly what we are seeing. IMO the timeout should be consistent on the whole path for a service, so it should also apply on the destination sidecar (sidecar B). It is a bit more tricky on the source sidecar's listener (sidecar A), as that listener might be shared by multiple destinations.
Yes, it is tricky for the shared listener.
🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2022-08-27. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions. Created by the issue and PR lifecycle manager.
Bug Description
I'm trying to increase the HTTP idle timeout for a specific service to more than one hour (the default) by adding the following DestinationRule (set to 4 hours / 14400s in this example):
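The original manifest was not captured in this page; a minimal sketch of such a DestinationRule (the resource name and host below are hypothetical) would look like:

```yaml
# Hypothetical sketch of the DestinationRule described above:
# idleTimeout lives under trafficPolicy.connectionPool.http.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-service-idle-timeout              # hypothetical name
spec:
  host: my-service.my-ns.svc.cluster.local   # hypothetical host
  trafficPolicy:
    connectionPool:
      http:
        idleTimeout: 14400s
```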
As a result, I see all the upstream clusters correctly adjusted all over the mesh, with `idle_timeout` correctly set to 14400s in the Envoy cluster configuration. However, when I look at the sidecar of the upstream service, I don't see the `idle_timeout` option configured on the HTTP connection manager, which means the default idle timeout of 1 hour is still applied to the connections coming from the gateways. So if my understanding is correct, for the Gateway <-> Istio-Sidecar HTTP connections, the gateway correctly adjusts its timeout to 4 hours in its cluster configuration, while the istio-sidecar does not adjust that option on its HTTP connection manager, resulting in those connections being destroyed by the sidecar after 1 hour.
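The Envoy config dumps referenced above were not captured here. For illustration only, hypothetical excerpts (cluster name and stat prefix are placeholders; field paths follow Envoy's v3 API) of what the two sides would show:

```yaml
# Hypothetical excerpt of the client-side cluster config (timeout applied),
# e.g. from `istioctl proxy-config cluster <pod> -o json`, shown as YAML:
name: outbound|8080||my-service.my-ns.svc.cluster.local   # hypothetical
common_http_protocol_options:
  idle_timeout: 14400s

# Hypothetical excerpt of the server sidecar's http_connection_manager,
# where no idle_timeout override is present, so Envoy's 1h default applies:
# "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
# stat_prefix: inbound_0.0.0.0_8080                       # hypothetical
# (common_http_protocol_options.idle_timeout absent)
```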
So it seems that the `idleTimeout` option on the DestinationRule works fine when it is less than one hour (decreasing it from the default), as it only needs to be configured on one side of the connection. However, if we want to increase it to more than one hour, both sides of the connection need to have that option adjusted in Envoy.

Version
Additional Information
No response