Mirror DestinationRule connection pool configuration on Sidecar #2961
Conversation
…ound connections to Envoy sidecars
😊 Welcome @ZackButcher! This is either your first contribution to the Istio api repo, or it's been a while since your last contribution. You can learn more about the Istio working groups, Code of Conduct, and contribution guidelines. Thanks for contributing! Courtesy of your friendly welcome wagon.
I'll get release notes and all the other bits in order once we're agreed on the bigger stuff like the field names, behavior of the API, and the documentation on it.
@ZackButcher It cannot express a per-port connection pool.
What the DestinationRule does today, which I do in the implementation PR, is configure the connection pool on the inbound cluster that's generated for the pod. Again, this is exactly how the per-port connection pool settings configured by a DestinationRule behave today.
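For reference, this is roughly how per-port connection pool settings are expressed on a DestinationRule today; a minimal sketch, where the resource name, host, and port number are placeholders:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ratings-pool                          # hypothetical name
spec:
  host: ratings.default.svc.cluster.local     # placeholder host
  trafficPolicy:
    connectionPool:                           # default for all ports
      tcp:
        maxConnections: 100
    portLevelSettings:                        # PortTrafficPolicy.connectionPool
    - port:
        number: 8080
      connectionPool:
        tcp:
          maxConnections: 50
```

Today Istiod applies these settings to both the client-side and the server-side (inbound) clusters; this PR's Sidecar fields let the server side be configured independently.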
// This setting overrides the top-level default `inboundConnectionPool` to configure
// specific settings for this port. This configuration mirrors the `DestinationRule`'s
// [`PortTrafficPolicy.connectionPool`](https://istio.io/latest/docs/reference/config/networking/destination-rule/#TrafficPolicy-PortTrafficPolicy) field.
ConnectionPoolSettings connection_pool = 8;
Interesting, didn't consider we would want it at 2 levels. Part of me thinks we could do
ingress:
- connectionPool: ...
(no port)
for this case. But I think that is probably too complex.
An ingress without a port is not allowed by validation today, hence the top level vs the per-port level.
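Put together, the two levels look roughly like this on a Sidecar; a sketch assuming the field names from this PR (`inboundConnectionPool` and the per-ingress `connectionPool`), with placeholder selector, ports, and endpoints:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: ratings                  # hypothetical name
spec:
  workloadSelector:
    labels:
      app: ratings               # placeholder selector
  inboundConnectionPool:         # top-level default for all inbound ports
    http:
      http1MaxPendingRequests: 1024
      http2MaxRequests: 1024
  ingress:
  - port:                        # every ingress entry must carry a port (per validation)
      number: 8080
      protocol: HTTP
      name: http
    defaultEndpoint: 127.0.0.1:8080
    connectionPool:              # per-port override of inboundConnectionPool
      http:
        http1MaxPendingRequests: 64
```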
… destinationrule. Add a release note for the new field.
@howardjohn PTAL at the updated comments.
FWIW there's an argument we could support more than just connection pool settings here.

Of course, some fields (like load balancing settings) wouldn't make sense in the sidecar inbound context. So then the whole discussion comes up of whether you use the same message or a trimmed-down one. As @keithmattix pointed out, there seems to be a lot of similar configuration.
// http:
//   http1MaxPendingRequests: 1024
//   http2MaxRequests: 1024
//   maxRequestsPerConnection: 1024
//   maxRetries: 100
Would be good to show overrides here? Or is it that the http connection pool is the same, but the user still has to list it here because the tcp setting is different?
Correct, I repeated it mainly to highlight that the user has to repeat the list because the settings are not merged.

The UX is probably improved if we merge the fields, but today for DestinationRule we do not merge at that level: we either replace the entire `ConnectionPoolSettings` or not, but we do not look inside it. I'm trying to keep the behavior as close to DestinationRule as possible (principle of least surprise).
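To illustrate the replace-not-merge semantics, a hedged sketch (port and endpoint are placeholders): a per-port `connectionPool` replaces the entire top-level `inboundConnectionPool`, so any top-level settings the user wants to keep for that port must be repeated:

```yaml
spec:
  inboundConnectionPool:
    tcp:
      maxConnections: 1000
    http:
      http1MaxPendingRequests: 1024
  ingress:
  - port:
      number: 9090               # placeholder port
      protocol: HTTP
      name: http-alt
    defaultEndpoint: 127.0.0.1:9090
    connectionPool:              # replaces the top-level block wholesale
      tcp:
        maxConnections: 100      # changed for this port
      http:
        http1MaxPendingRequests: 1024   # unchanged, but must be repeated or it is lost
```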
@@ -576,6 +679,15 @@ message IstioIngressListener {
// sidecar for requests originating from outside the mesh.
// Currently supports only SIMPLE and MUTUAL TLS modes.
ServerTLSSettings tls = 7;

// Settings controlling the volume of connections Envoy will accept from the network.
// This setting overrides the top-level default `inboundConnectionPool` to configure
Would be good to clarify whether only the overrides need to be specified (i.e. whether we merge with the top-level config or not), or the entire connection pool config.
We should probably have the conversation on merging: I think it would likely improve UX, but today we do not do it for DestinationRule. Here's the code that merges per-port and default DestinationRule config. Merging wouldn't be super hard to do, but it would be a behavioral change compared to what we do today. In any case, whatever we do w.r.t. merging here, I think we need to match how DestinationRule merges its top-level and per-port settings, or the API gets harder for users to understand.
Prefer not to merge personally. Doesn't seem terribly compelling and adds a lot of complexity
+1, reduce merging
// Settings controlling the volume of connections Envoy will accept from the network.
// This setting overrides the top-level default `inboundConnectionPool` to configure
// specific settings for this port. This configuration mirrors the `DestinationRule`'s
This comment seems redundant, as the `ConnectionPoolSettings` schema is actually the same one.
Exactly -- I want to call that out explicitly for folks so they don't have to think as hard about it (rather than having them realize it only because the page links to the same place). Happy to cut this if we want to keep the docs more concise.
So, the takeaway is no merging inside of `ConnectionPoolSettings` in this PR, which I agree with. If we want to take up that issue (and, more generally, merging fields inside of messages like this), we can do it as a follow-up. With that in mind, are there updates to the examples/docs that we should make to be more explicit than I already am that merging is not happening?
…onfigured via the Sidecar API (see istio/api#2961). This PR implements the following precedence for picking connection pool config (highest to lowest):

1. per-port `ConnectionPool` from the Sidecar
2. top-level `InboundConnectionPool` from the Sidecar
3. per-port `TrafficPolicy.ConnectionPool` from the DestinationRule
4. top-level `TrafficPolicy.ConnectionPool` from the DestinationRule

(3 and 4 are unchanged from before this PR.) This PR does not change how client-side connection pools are configured (which remains DestinationRule only).
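As a sketch of that precedence (resource names, hosts, and ports are placeholders): given both resources below, the workload's inbound side would get `maxConnections: 10` on port 8080 (rule 1) and `maxConnections: 500` on every other inbound port (rule 2); the DestinationRule values would apply on the server side only if the Sidecar set no connection pool at all:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: ratings                               # hypothetical name
spec:
  inboundConnectionPool:                      # precedence 2
    tcp:
      maxConnections: 500
  ingress:
  - port:
      number: 8080
      protocol: HTTP
      name: http
    defaultEndpoint: 127.0.0.1:8080
    connectionPool:                           # precedence 1: wins for port 8080
      tcp:
        maxConnections: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ratings                               # hypothetical name
spec:
  host: ratings.default.svc.cluster.local     # placeholder host
  trafficPolicy:
    connectionPool:                           # precedence 4
      tcp:
        maxConnections: 100
    portLevelSettings:
    - port:
        number: 8080
      connectionPool:                         # precedence 3
        tcp:
          maxConnections: 50
```

Client-side (outbound) clusters continue to use only the DestinationRule settings.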
…ters via the Sidecar API (#47313)

* update API to HEAD to get new connection pool fields -- note we'll need to remove the replace directive before merge
* Implement support for `ConnectionPoolSettings` on the inbound cluster configured via the Sidecar API (see istio/api#2961), with the precedence described above; client-side connection pool configuration is unchanged (DestinationRule only)
* point to latest revision of the istio/api change
* remove one useless test case, and format a few things
* simplify if statement logic
* remove the replace directive and update to the 1.20 release of the API repo; restore test file I messed up during rebase
* Add release notes that mirror those from istio/api; fix linter error
* roll back go mod changes as they weren't needed anyway -- yay caches 🙃
* include port name+number in the warning about HTTP settings on TCP ports
Have there also been discussions about doing this for egress? In particular, we have applications that want to limit their exposure to specific dependency services, and e.g. setting up concurrency circuit breakers on the source side would be beneficial. The configuration would be specific to the source application (e.g. the level of concurrency), so it would make sense to put it in the Sidecar config.
Better docs around this would help, a lot.
But that wouldn't help source service owners if they also want to retain the destination service's DR configuration, except for the specific circuit breakers that are necessary on the source side, right? Ideally there should be something similar to this, where source owners can have specific overrides for the stuff they care about.
Revives #1754 to support the same `ConnectionPoolSettings` we use for DestinationRule on the inbound cluster via configuring the `Sidecar`.

Today developers can configure the inbound cluster's connection pool via `DestinationRule` -- Istiod programs both the client and server to use the same connection pool settings. However, when multiple services listen on the same port it can be challenging (impossible?) for a user to ensure the correct DestinationRule is applying connection pooling settings on the server side. This API allows the user to set their inbound connection pool explicitly (and differently than the DestinationRule they configure for callers), both at the top level of the sidecar as well as per-port.

There's a paired implementation PR in istio/istio that I'll link shortly. In that PR, I've implemented it so that the connection pool is configured for the server-side sidecar with the following precedence (highest to lowest):

1. per-port `ConnectionPool` setting from Sidecar
2. top-level `InboundConnectionPool` setting from Sidecar
3. per-port `TrafficPolicy.ConnectionPool` setting from DestinationRule
4. top-level `TrafficPolicy.ConnectionPool` setting from DestinationRule

Client-side connection pools are unchanged: per-port then top-level configuration from the DestinationRule still apply.