
datapath: Add support to delegate routing to other plugins, such as AWS-CNI #29111

Merged
merged 1 commit into cilium:main from AWar_aws_vlan_routing on Nov 27, 2023

Conversation

Alex-Waring
Contributor

@Alex-Waring Alex-Waring commented Nov 10, 2023

AWS uses VLAN interfaces for SG routing, and intentionally avoids upserting a route
so that traffic flows through the VPC and SG rules can be evaluated. Cilium will
upsert the route when an endpoint is created, and this causes traffic to be dropped
due to asymmetric routing.

This exposes a wider issue: when Cilium is not in charge of routing traffic,
there is no way to avoid creating the route (InstallEndpointRoute) while still
configuring the BPF programs (RequireEgressProg). This PR implements a config option
to do just that, meaning we allow the host CNI to configure routing but still enable
network policy for those endpoints (Pods).

This flag is automatically set to true when running with aws-cni, as this is required
due to the issue mentioned above.
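
For illustration, here is a minimal Helm values sketch of the intended deployment, assuming the cni.externalRouting value proposed later in this thread; it is enabled automatically for aws-cni chaining, so it normally does not need to be set by hand:

cni:
  chainingMode: aws-cni     # chain Cilium behind the AWS VPC CNI plugin
  # externalRouting: true   # implied by chainingMode=aws-cni; the AWS CNI owns the routes
endpointRoutes:
  enabled: true             # still required so the BPF policy programs attach per endpoint (Pod)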

I would like this backported to 1.13; however, the way the CNI chaining mode is accessed seems to have changed, so the backport may need to be done manually.

Fixes: #27152

Please ensure your pull request adheres to the following guidelines:

  • For first time contributors, read Submitting a pull request
  • All code is covered by unit and/or runtime tests where feasible.
  • All commits contain a well written commit description including a title,
    description and a Fixes: #XXX line if the commit addresses a particular
    GitHub issue.
  • If your commit description contains a Fixes: <commit-id> tag, then
    please add the commit author[s] as reviewer[s] to this issue.
  • All commits are signed off. See the section Developer’s Certificate of Origin
  • Provide a title or release-note blurb suitable for the release notes.
  • Are you a user of Cilium? Please add yourself to the Users doc
  • Thanks for contributing!

Fixes: #27152

Fix routing delegation to AWS-VPC-CNI when using the security groups feature. 

@Alex-Waring Alex-Waring requested a review from a team as a code owner November 10, 2023 17:10
@maintainer-s-little-helper maintainer-s-little-helper bot added the dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. label Nov 10, 2023
@github-actions github-actions bot added the kind/community-contribution This was a contribution made by a community member. label Nov 10, 2023
@Alex-Waring
Contributor Author

@aditighag would you be able to trigger the tests please?

@joestringer
Member

Did you consider whether to just deploy Cilium with endpoint-routes mode disabled as an alternative to this approach? It seems like there's a discrepancy here where on one hand Cilium is deployed with endpoint-routes enabled, but on the other hand it should not be configured this way in order to properly configure connectivity between Pods on the nodes.

Something else that's worth keeping in mind here is that depending on how traffic is routed towards Pods on the node, there's a risk that disabling endpoint routes mode may cause endpoint ingress policy to break. Though that said, CI or the cilium-cli connectivity tests should be able to pick up such misconfigurations if that is an issue with this particular deployment configuration.

@joestringer
Member

/ci-awscni

@Alex-Waring
Contributor Author

Hi @joestringer, thanks for taking a look. We did consider it, but because it appears to break L7 policy I wanted to implement a more specific resolution. In my view you shouldn't rely on a config option to fix something that only occasionally happens but can be fully reasoned out (this happens when the pod is on the same node on a vlan, so we know the route it is trying to upsert).

Having said that, I don't fully understand the implication of this on traffic policy, so I'm happy to gate this behind a feature flag, and would also like to see if it causes any issues in the tests. I also guess a test is needed to pick up this edge case; I would assume there is no current test in CI that checks pod-to-pod traffic on the same node with aws-cni chaining and one SG.

@aditighag
Member

My first thought was: why not disable endpoint routes? (Joe already raised this question.)

We did consider it, but because it appears to break L7 policy

Can you point to an existing issue if it exists, or create a new one with your findings?

@Alex-Waring
Contributor Author

Yes, the issue linked in the PR covers it: #27152

@joestringer
Member

We did consider it, but because it appears to break L7 policy I wanted to implement a more specific resolution. In my view you shouldn't rely on a config option to fix something that only occasionally happens but can be fully reasoned out (this happens when the pod is on the same node on a vlan, so we know the route it is trying to upsert).

Having said that, I don't fully understand the implication of this on traffic policy, so I'm happy to gate this behind a feature flag, and would also like to see if it causes any issues in the tests. I also guess a test is needed to pick up this edge case; I would assume there is no current test in CI that checks pod-to-pod traffic on the same node with aws-cni chaining and one SG.

I'm a bit surprised, but the awscni tests pass with this patch. I am not sure if that's because the patch is ineffective, or because it's effective and everything works with this change. To check, I would suggest running ip route | grep <pod-ip> to confirm that there are no matches for the target Pod IP. With endpoint routes enabled, there would be a route for each specific endpoint IP. With the feature disabled, there would be routes for prefixes instead of individual IPs.

That said, configuring Cilium with endpoint-routes mode disabled is functionally equivalent to this proposal as-is. I can see that this proposal only changes these settings in the case where aws-cni is in use, but if you just disable endpoint routes mode in environments with aws-cni then Cilium will configure the datapath for the endpoints in the same way. Whatever bugs or incompatibilities may impact you with that configuration will also impact you with this patch. The main issue I have with this patch is that it is attempting to make a special case to override the cilium-agent global configuration for this very specific case, but you can achieve the exact same thing with no code changes by just configuring the global flag.

@aditighag
Member

That said, configuring Cilium with endpoint-routes mode disabled is functionally equivalent to this proposal as-is.

This was my first thought as well. However, it's possible that there are some more ifdefs in the datapath that pertain to disabling the endpoint-routes config, as opposed to what this PR proposes (skipping installation of per-endpoint routes).

The main issue I have with this patch is that it is attempting to make a special case to override the cilium-agent global configuration for this very specific case

+1

Yes, the issue linked in the PR covers it: #27152

I don't see any details about why the L7 policy datapath is broken. I only see this:

Problem happens regardless of whether one or both of the pods have Cilium Network Policy applicable to them.
Setting the value endpointRoutes.enabled=false fixes problem when using L4-based policies but completely breaks L7 policies.

@Alex-Waring
Contributor Author

Alex-Waring commented Nov 17, 2023

To check, I would suggest running ip route | grep <pod-ip> to confirm that there are no matches for the target Pod IP.

Yes, that's the exact behaviour we see.

That said, configuring Cilium with endpoint-routes mode disabled is functionally equivalent to this proposal as-is.

I'm not necessarily sure this is true. As @aditighag says, surely pods on the same node aren't the only endpoints? And as for L7, the issue is correct: I can no longer see traffic on the Envoy sidecar for these pods for things like filtering on HTTP method. This is the reason I applied this patch: it means that I lose this functionality for fewer pods.

Having said that, if you don't think this is a sensible approach then I will close the PR. Unfortunately I don't understand the datapath well enough to fix the route issue here, but that is probably one for the AWS CNI team.

@joestringer
Member

I took a closer look at the changes given the feedback from @aditighag and @Alex-Waring, and I realized that I was wrong in my previous assessment. I now better understand exactly how this applies and why the core change here addresses the reported issue. We can see in the config check around the code that changes here that a few options are being enabled:

	if option.Config.EnableEndpointRoutes {
...
		epTemplate.DatapathConfiguration.InstallEndpointRoute = true
...
		epTemplate.DatapathConfiguration.RequireEgressProg = true
...

Broadly speaking this comes down to the EnableEndpointRoutes flag enabling a couple of features:

  1. Install one route per endpoint into the route table
  2. Configure BPF programs at receive to the endpoint rather than implementing the receive policy at the transmit point for another device. This ensures that network policy is applied correctly in this mode.
    (There's another, but I don't think it's relevant to this discussion)

Evidently, aws-cni wants to configure the routing table. If an external agent configures the routing table, then Cilium doesn't need to configure routes for this endpoint (point 1). However, we do need to configure the BPF programs at receive (point 2). That's why the change makes a difference.

Based on the above, I think it would be more generic if we exposed this as a flag and autodetected that aws-cni matches this case. Here's a patch (untested) that should achieve the same result, while also allowing other plugins that are responsible for routing to be configured in this mode in the future:

diff --git a/Documentation/cmdref/cilium-agent.md b/Documentation/cmdref/cilium-agent.md
index 7633d727ce40..7fe9976d05c8 100644
--- a/Documentation/cmdref/cilium-agent.md
+++ b/Documentation/cmdref/cilium-agent.md
@@ -67,6 +67,7 @@ cilium-agent [flags]
       --cni-chaining-mode string                                  Enable CNI chaining with the specified plugin (default "none")
       --cni-chaining-target string                                CNI network name into which to insert the Cilium chained configuration. Use '*' to select any network.
       --cni-exclusive                                             Whether to remove other CNI configurations
+      --cni-external-routing                                      Whether the other CNI handles routing on the node
       --cni-log-file string                                       Path where the CNI plugin should write logs (default "/var/run/cilium/cilium-cni.log")
       --config string                                             Configuration file (default "$HOME/ciliumd.yaml")
       --config-dir string                                         Configuration directory that contains a file for each option
diff --git a/Documentation/cmdref/cilium-agent_hive.md b/Documentation/cmdref/cilium-agent_hive.md
index a1a573fd4377..b95343fd54da 100644
--- a/Documentation/cmdref/cilium-agent_hive.md
+++ b/Documentation/cmdref/cilium-agent_hive.md
@@ -21,6 +21,7 @@ cilium-agent hive [flags]
       --cni-chaining-mode string                                  Enable CNI chaining with the specified plugin (default "none")
       --cni-chaining-target string                                CNI network name into which to insert the Cilium chained configuration. Use '*' to select any network.
       --cni-exclusive                                             Whether to remove other CNI configurations
+      --cni-external-routing                                      Whether the other CNI handles routing on the node
       --cni-log-file string                                       Path where the CNI plugin should write logs (default "/var/run/cilium/cilium-cni.log")
       --controller-group-metrics strings                          List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions.
       --disable-iptables-feeder-rules strings                     Chains to ignore when installing feeder rules.
diff --git a/Documentation/cmdref/cilium-agent_hive_dot-graph.md b/Documentation/cmdref/cilium-agent_hive_dot-graph.md
index 79ac0df24c17..9033e8c3bb7a 100644
--- a/Documentation/cmdref/cilium-agent_hive_dot-graph.md
+++ b/Documentation/cmdref/cilium-agent_hive_dot-graph.md
@@ -27,6 +27,7 @@ cilium-agent hive dot-graph [flags]
       --cni-chaining-mode string                                  Enable CNI chaining with the specified plugin (default "none")
       --cni-chaining-target string                                CNI network name into which to insert the Cilium chained configuration. Use '*' to select any network.
       --cni-exclusive                                             Whether to remove other CNI configurations
+      --cni-external-routing                                      Whether the other CNI handles routing on the node
       --cni-log-file string                                       Path where the CNI plugin should write logs (default "/var/run/cilium/cilium-cni.log")
       --controller-group-metrics strings                          List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions.
       --disable-iptables-feeder-rules strings                     Chains to ignore when installing feeder rules.
diff --git a/daemon/cmd/cni/cell.go b/daemon/cmd/cni/cell.go
index 8fa5565faf17..283b8e3737ca 100644
--- a/daemon/cmd/cni/cell.go
+++ b/daemon/cmd/cni/cell.go
@@ -33,6 +33,7 @@ type Config struct {
        CNILogFile            string
        CNIExclusive          bool
        CNIChainingTarget     string
+       CNIExternalRouting    bool
 }

 type CNIConfigManager interface {
@@ -44,6 +45,10 @@ type CNIConfigManager interface {
        GetChainingMode() string

        GetCustomNetConf() *cnitypes.NetConf
+
+       // ExternalRoutingEnabled returns true if the chained plugin implements
+       // routing for Endpoints (Pods).
+       ExternalRoutingEnabled() bool
 }

 var defaultConfig = Config{
@@ -58,6 +63,7 @@ func (cfg Config) Flags(flags *pflag.FlagSet) {
        flags.String(option.CNILogFile, defaultConfig.CNILogFile, "Path where the CNI plugin should write logs")
        flags.String(option.CNIChainingTarget, defaultConfig.CNIChainingTarget, "CNI network name into which to insert the Cilium chained configuration. Use '*' to select any network.")
        flags.Bool(option.CNIExclusive, defaultConfig.CNIExclusive, "Whether to remove other CNI configurations")
+       flags.Bool(option.CNIExternalRouting, defaultConfig.CNIExternalRouting, "Whether the other CNI handles routing on the node")
 }

 func enableConfigManager(lc hive.Lifecycle, log logrus.FieldLogger, cfg Config, dcfg *option.DaemonConfig /*only for .Debug*/) CNIConfigManager {
@@ -69,6 +75,7 @@ func enableConfigManager(lc hive.Lifecycle, log logrus.FieldLogger, cfg Config,
 func newConfigManager(log logrus.FieldLogger, cfg Config, debug bool) *cniConfigManager {
        if cfg.CNIChainingMode == "aws-cni" && cfg.CNIChainingTarget == "" {
                cfg.CNIChainingTarget = "aws-cni"
+               cfg.CNIExternalRouting = true
        }

        if cfg.CNIChainingTarget != "" && cfg.CNIChainingMode == "" {
diff --git a/daemon/cmd/cni/config.go b/daemon/cmd/cni/config.go
index 2589a5203530..e4a3c4c3c684 100644
--- a/daemon/cmd/cni/config.go
+++ b/daemon/cmd/cni/config.go
@@ -59,6 +59,12 @@ func (c *cniConfigManager) GetChainingMode() string {
        return c.config.CNIChainingMode
 }

+// ExternalRoutingEnabled returns true if the chained plugin implements routing
+// for Endpoints (Pods).
+func (c *cniConfigManager) ExternalRoutingEnabled() bool {
+       return c.config.CNIExternalRouting
+}
+
 // GetCustomNetConf returns the parsed custom CNI configuration, if provided
 // (In other words, the value to --read-cni-conf).
 // Otherwise, returns nil.
diff --git a/daemon/cmd/endpoint.go b/daemon/cmd/endpoint.go
index c9c34555469b..3d6d793973c8 100644
--- a/daemon/cmd/endpoint.go
+++ b/daemon/cmd/endpoint.go
@@ -10,7 +10,6 @@ import (
        "net"
        "net/http"
        "runtime"
-       "strings"
        "sync"

        "github.com/go-openapi/runtime/middleware"
@@ -340,9 +339,15 @@ func (d *Daemon) createEndpoint(ctx context.Context, owner regeneration.Owner, e
                // via cilium_host interface
                epTemplate.DatapathConfiguration.InstallEndpointRoute = true

-               // Skip creating an endpoint route if we are in AWS using chaining mode and the endpoint is a vlan
-               // This happens when the endpoint has an SG attached, and the endpoint route causes asymetric routing
-               if d.cniConfigManager.GetChainingMode() == "aws-cni" && strings.HasPrefix(epTemplate.InterfaceName, "vlan") {
+               // EndpointRoutes mode enables two features:
+               // - Install one route per endpoint into the route table
+               // - Configure BPF programs at receive to the endpoint rather
+               //   than implementing the receive policy at the transmit point
+               //   for another device.
+               // If an external agent configures the routing table, then we
+               // don't need to configure routes for this endpoint. However,
+               // we *do* need to configure the BPF programs at receive.
+               if d.cniConfigManager.ExternalRoutingEnabled() {
                        epTemplate.DatapathConfiguration.InstallEndpointRoute = false
                }

diff --git a/install/kubernetes/cilium/templates/cilium-configmap.yaml b/install/kubernetes/cilium/templates/cilium-configmap.yaml
index e3c70d00c453..4804837ea6f1 100644
--- a/install/kubernetes/cilium/templates/cilium-configmap.yaml
+++ b/install/kubernetes/cilium/templates/cilium-configmap.yaml
@@ -807,6 +807,9 @@ data:
 {{- if (not (kindIs "invalid" .Values.cni.chainingTarget)) }}
   cni-chaining-target: {{ .Values.cni.chainingTarget | quote }}
 {{- end}}
+{{- if (not (kindIs "invalid" .Values.cni.externalRouting)) }}
+  cni-external-routing: {{ .Values.cni.externalRouting | quote }}
+{{- end}}
 {{- if .Values.kubeConfigPath }}
   k8s-kubeconfig-path: {{ .Values.kubeConfigPath | quote }}
 {{- end }}
diff --git a/pkg/option/config.go b/pkg/option/config.go
index ff2d05ade215..df05b45e510c 100644
--- a/pkg/option/config.go
+++ b/pkg/option/config.go
@@ -1288,6 +1288,9 @@ const (
        // CNIExclusive tells the agent to remove other CNI configuration files
        CNIExclusive = "cni-exclusive"

+       // CNIExternalRouting delegates endpoint routing to the other CNI.
+       CNIExternalRouting = "cni-external-routing"
+
        // CNILogFile is the path to a log file (on the host) for the CNI plugin
        // binary to use for logging.
        CNILogFile = "cni-log-file"

I think the (incremental) proposal above makes it a bit more obvious how the change works, and the new CLI argument makes it future-proof. If we agree this is the path forward, you're welcome to add these changes on top of your patch and add these tags to the commit message to indicate the partial origin of the patch:

Co-authored-by: Joe Stringer <joe@cilium.io>
Signed-off-by: Joe Stringer <joe@cilium.io>

@Alex-Waring
Contributor Author

Alex-Waring commented Nov 21, 2023

Thank you @joestringer, I've applied the patch to my commit after confirming that it resolves the issue locally. I still think the backport will need a manual change due to a difference in how the CNI config is retrieved, and I'm not sure how to go about that. Any tips greatly appreciated!

Also, hubble-relay seems to be unhappy in CI, but this change doesn't impact it, so I'm unsure what's going on there.

@Alex-Waring Alex-Waring force-pushed the AWar_aws_vlan_routing branch 2 times, most recently from f2dc873 to 020d187 on November 21, 2023 11:01
@joestringer
Member

joestringer commented Nov 21, 2023

From the summary page of one of the CI failures (link), there's a sysdump. Downloading and opening it up, there is a logs-hubble-relay-*-prev.log file which seems mostly healthy; it just shows that hubble-relay was requested to shut down:

2023-11-21T11:12:15.984196300Z level=info msg="Starting gRPC server..." options="{peerTarget:hubble-peer.kube-system.svc.cluster.local:443 dialTimeout:5000000000 retryTimeout:30000000000 listenAddress::4245 metricsListenAddress: log:0xc000550070 serverTLSConfig:<nil> insecureServer:true clientTLSConfig:0xc000aa0300 clusterName:kind-chart-testing insecureClient:false observerOptions:[0x1ef32a0 0x1ef3380] grpcMetrics:<nil> grpcUnaryInterceptors:[] grpcStreamInterceptors:[]}" subsys=hubble-relay
2023-11-21T11:12:15.996491574Z level=debug msg="Client mtls handshake" config=tls-to-hubble keypair-sn=b1b47042dad411e7a6f647d1b95157f3 subsys=hubble-relay
2023-11-21T11:12:15.999167994Z level=info msg="Received peer change notification" change notification="name:\"kind-chart-testing/chart-testing-worker\" address:\"172.18.0.2\" type:PEER_ADDED tls:{server_name:\"chart-testing-worker.kind-chart-testing.hubble-grpc.cilium.io\"}" subsys=hubble-relay
2023-11-21T11:12:15.999258094Z level=info msg="Received peer change notification" change notification="name:\"kind-chart-testing/chart-testing-control-plane\" address:\"172.18.0.3\" type:PEER_ADDED tls:{server_name:\"chart-testing-control-plane.kind-chart-testing.hubble-grpc.cilium.io\"}" subsys=hubble-relay
2023-11-21T11:12:15.999266851Z level=info msg=Connecting address="172.18.0.3:4244" hubble-tls=true peer=kind-chart-testing/chart-testing-control-plane subsys=hubble-relay
2023-11-21T11:12:15.999277211Z level=info msg=Connecting address="172.18.0.2:4244" hubble-tls=true peer=kind-chart-testing/chart-testing-worker subsys=hubble-relay
2023-11-21T11:12:16.002696547Z level=debug msg="Client mtls handshake" config=tls-to-hubble keypair-sn=b1b47042dad411e7a6f647d1b95157f3 subsys=hubble-relay
2023-11-21T11:12:16.004911458Z level=debug msg="Client mtls handshake" config=tls-to-hubble keypair-sn=b1b47042dad411e7a6f647d1b95157f3 subsys=hubble-relay
2023-11-21T11:12:16.005179379Z level=info msg=Connected address="172.18.0.3:4244" hubble-tls=true peer=kind-chart-testing/chart-testing-control-plane subsys=hubble-relay
2023-11-21T11:12:16.006904580Z level=info msg=Connected address="172.18.0.2:4244" hubble-tls=true peer=kind-chart-testing/chart-testing-worker subsys=hubble-relay
2023-11-21T11:13:18.733151654Z level=info msg="Stopping server..." subsys=hubble-relay

The k8s-events*.html file shows why it was killed:

11:10:09Z (x20)	kube-system	kubelet	chart-testing-worker	hubble-relay-694cdd7cb9-x7q97	Unhealthy	Startup probe failed: timeout: failed to connect service "10.244.1.19:4222" within 3s: context deadline exceeded
11:10:09Z	kube-system	kubelet	chart-testing-worker	hubble-relay-694cdd7cb9-x7q97	Killing	Container hubble-relay failed startup probe, will be restarted

Searching for hubble-relay in the pull requests view shows that this PR, which touched this area, was recently merged. There might be a regression in the main branch? I agree that this doesn't look related to this change.

#28765

@joestringer joestringer added the release-note/minor This PR changes functionality that users may find relevant to operating Cilium. label Nov 21, 2023
@maintainer-s-little-helper maintainer-s-little-helper bot removed the dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. label Nov 21, 2023
@joestringer
Member

joestringer commented Nov 21, 2023

Given that the current failures seem unrelated to the PR, I'll trigger the full testsuite for further feedback. I would expect that only the ci-awscni job has meaningful feedback for this changeset, unless there's some sort of linting / test mocking failure somewhere else in the tree.

Member

@sayboras sayboras left a comment


Thanks 🍒

@aanm aanm added this pull request to the merge queue Nov 27, 2023
@maintainer-s-little-helper maintainer-s-little-helper bot added the ready-to-merge This PR has passed all tests and received consensus from code owners to merge. label Nov 27, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Nov 27, 2023
@aanm aanm added this pull request to the merge queue Nov 27, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Nov 27, 2023
@aanm aanm merged commit 95a7d12 into cilium:main Nov 27, 2023
61 checks passed
@joestringer joestringer changed the title from "datapath: Avoid Upserting AWS SG vlans" to "datapath: Add support to delegate routing to other plugins, such as AWS-CNI" Nov 27, 2023
@joestringer joestringer added affects/v1.13 This issue affects v1.13 branch affects/v1.14 This issue affects v1.14 branch labels Nov 27, 2023
@GolubevV

GolubevV commented Dec 4, 2023

Thanks for fixing #27152, will test the fix once released.

Maybe we need to consider updating the documentation describing the Cilium + SGFP setup, where the flag endpointRoutes.enabled is explicitly mentioned as needing to be set to true for the chaining scenario, which seems not to be correct?

@joestringer
Member

@GolubevV as discussed above, it's a bit more nuanced, but the short answer is that endpointRoutes.enabled still needs to be enabled in this scenario.
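
To make the resulting combination concrete, here is a sketch of the relevant cilium-config ConfigMap entries for this scenario (key names follow the agent flags and the patch above; values are illustrative):

cni-chaining-mode: "aws-cni"      # Cilium chained behind the AWS VPC CNI
enable-endpoint-routes: "true"    # endpoint-routes mode stays on so per-endpoint BPF policy is applied
cni-external-routing: "true"      # optional here: the agent sets this automatically when chaining with aws-cni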

@joestringer joestringer added needs-backport/1.14 This PR / issue needs backporting to the v1.14 branch and removed affects/v1.14 This issue affects v1.14 branch labels Dec 4, 2023
@maintainer-s-little-helper maintainer-s-little-helper bot added this to Needs backport from main in 1.14.5 Dec 4, 2023
@joestringer joestringer added affects/v1.14 This issue affects v1.14 branch release-note/bug This PR fixes an issue in a previous release of Cilium. and removed release-note/minor This PR changes functionality that users may find relevant to operating Cilium. labels Dec 4, 2023
@joestringer
Member

I've nominated this PR for backport to v1.14 based on it being a bugfix (criteria) for a documented feature per this page.

@nbusseneau nbusseneau mentioned this pull request Dec 5, 2023
10 tasks
@nbusseneau nbusseneau added backport-pending/1.14 The backport for Cilium 1.14.x for this PR is in progress. and removed needs-backport/1.14 This PR / issue needs backporting to the v1.14 branch labels Dec 5, 2023
@maintainer-s-little-helper maintainer-s-little-helper bot moved this from Needs backport from main to Backport pending to v1.14 in 1.14.5 Dec 5, 2023
@github-actions github-actions bot added backport-done/1.14 The backport for Cilium 1.14.x for this PR is done. and removed backport-pending/1.14 The backport for Cilium 1.14.x for this PR is in progress. labels Dec 6, 2023
@maintainer-s-little-helper maintainer-s-little-helper bot moved this from Backport pending to v1.14 to Backport done to v1.14 in 1.14.5 Dec 6, 2023
@maintainer-s-little-helper maintainer-s-little-helper bot removed this from Backport done to v1.14 in 1.14.5 Dec 6, 2023
@GolubevV

@GolubevV as discussed above, it's a bit more nuanced, but the short answer is that endpointRoutes.enabled still needs to be enabled in this scenario.

Got it, thanks! I missed the point that the newly introduced flag is activated automatically when in chaining mode, meaning there is no need to explicitly configure it or disable the previous flag. Very user-friendly implementation!
