Support routing to Service Cluster IP #1900
two ways to solve this
ptal @envoyproxy/gateway-maintainers
chatted with @kflynn the other day about this, and we both feel adding it to the load balancing API is more suitable
I am +1 on the load balancing API. If we set this in EnvoyProxy, IMHO, that is at the GC (GatewayClass) level and works for all managed Gateways, but if people just want this to work in specific Gateways, a Policy is more suitable.
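For illustration, here is a minimal sketch of the GatewayClass-level placement being discussed, assuming a hypothetical routingType field on the EnvoyProxy resource (the field name and values are assumptions, not an agreed API):

```yaml
# Hypothetical sketch: a switch on the EnvoyProxy resource referenced by the
# GatewayClass, which would apply to every Gateway managed by that class.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  # Assumed field: route to the Kubernetes Service cluster IP instead of
  # the individual endpoint (pod) IPs.
  routingType: Service
```

A Policy-shaped alternative, scoped to specific Gateways or routes, is what the BackendTrafficPolicy example further down sketches.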
This issue has been automatically marked as stale because it has not had activity in the last 30 days.
this new
Hi @arkodg, I think this use case can be achieved by adding a field, e.g.:

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  namespace: default
  name: policy-for-route1
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: httproute-1
    namespace: default
  loadBalancer:
    type: LeastRequest
    slowStart:
      window: 300s
    fallbackToClusterIP: true # Enables fallback to the service's cluster IP
```
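As a variation on the sketch above (still assuming the hypothetical fallbackToClusterIP field), the same policy shape could target a whole Gateway instead of a single HTTPRoute, which matches the "specific Gateways" case mentioned earlier:

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  namespace: default
  name: policy-for-gateway    # hypothetical policy name
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg-gateway          # hypothetical Gateway; the setting would cover all its routes
  loadBalancer:
    fallbackToClusterIP: true # assumed field from the proposal above
```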
@yeedove, I personally prefer
Turns out that to get Envoy Gateway working well with Linkerd, we either need this or we need an
I'd personally vote for
please assign to me
I like @kflynn's suggestion of a dedicated field
rethinking this one, if the field lives in
I guess this depends on whether we need per-service configuration for the choice of cluster IP vs pod IP?
One reason to select Service routing is for the mesh case (a sidecar beside the gateway) to work seamlessly: it will work for the xRoute backends (with sidecars), but not for ext auth / ext proc / OTel backends (with sidecars), because that traffic won't get intercepted properly by the gateway sidecar.
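To make the interception point concrete, here is a minimal sketch (the Gateway and Service names are made up): the backendRef below is a meshed Service, and the question is whether Envoy dials the Service's cluster IP, which the mesh sidecar beside the gateway can intercept, or the individual pod IPs, which bypass it.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httproute-1
  namespace: default
spec:
  parentRefs:
    - name: eg-gateway          # hypothetical Gateway
  rules:
    - backendRefs:
        - name: backend-svc     # hypothetical meshed Service (sidecar-injected pods)
          port: 8080
      # With endpoint routing, Envoy load-balances across the pod IPs directly,
      # so the sidecar beside the gateway never sees a Service-addressed connection.
      # With Service (cluster IP) routing, the upstream connection targets the
      # Service VIP and can be intercepted (and mTLS'd) by that sidecar.
```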
This issue has been automatically marked as stale because it has not had activity in the last 30 days.
Any progress on this? I'm looking to deploy EG alongside an Istio mesh, and from what I'm hearing here and in the related ticket, I would need this feature to make it work?
@kflynn still planning on working on this one?
@kflynn by setting the
@arkodg discussed the possibility of running an idle Linkerd proxy sidecar on the gateway, which would handle mTLS, but then it seems like we'd lose Linkerd multi-cluster service discovery.
I think if a service mesh is being used, the metrics are likely going to be more useful from the mesh than from the ingress, because the mesh will handle service -> service as well as envoy -> service in the same format and namespace.
I think @lnattrass has the right of it: part of the point of the mesh is that it should be handling both metrics and routing for you in this case.
@arkodg I'd still like to, yes! So here's a question for you. There already is an
@kflynn this should work as expected; the gatewayapi test should prove this easily. If the flag is set, the clusterIP will get set in the IR.
@lnattrass we're not necessarily interested in Linkerd for routing on our ingress, but more so for cross-cluster service discovery and mTLS, so we'd prefer to keep the load balancing capabilities and metrics as close to our proxy as possible. In the past, the CPU requirements of running 4k RPS+ on an ingress pod with Linkerd more than doubled our usage, and since we've always used headless services we haven't relied on Linkerd's load balancing. I suppose this is more of a niche use case, but it does limit us on implementing multi-cluster Linkerd. @kflynn would Linkerd be open to mirroring
Hi @kflynn, I'm sorry to take this over from you, but we internally needed a fix earlier :) feel free to check out and comment on my PR, though!
@evacchi No apology needed! As you can see, I've been pulled onto other things. Thank you for making this happen!! 🙂