
iptables: Chain already exists. #11276

Closed
tgraf opened this issue Apr 30, 2020 · 5 comments
Labels
kind/bug This is a bug in the Cilium logic. need-more-info More information is required to further debug or fix the issue.

tgraf commented Apr 30, 2020

Agent fails to start repeatedly with the following error:

ERROR [2020-04-29T19:29:58.05957853Z] cilium.io/agent: "Command execution failed" cmd="[iptables -w 5 -t filter -N CILIUM_TRANSIENT_FORWARD]" error="exit status 1" subsys=iptables
WARN  [2020-04-29T19:29:58.059649147Z] cilium.io/agent: "iptables: Chain already exists." subsys=iptables
ERROR [2020-04-29T19:29:58.059671388Z] cilium.io/agent: "Error while initializing daemon" error="cannot add custom chain CILIUM_TRANSIENT_FORWARD: exit status 1" subsys=daemon
ERROR [2020-04-29T19:29:58.059689483Z] cilium.io/agent: "Error while creating daemon" error="cannot add custom chain CILIUM_TRANSIENT_FORWARD: exit status 1" subsys=daemon
@tgraf tgraf added kind/bug This is a bug in the Cilium logic. priority/high This is considered vital to an upcoming release. labels Apr 30, 2020
qmonnet commented May 1, 2020

From what I understand, the issue happens multiple times consecutively, preventing the agent from restarting.

I had a look at the code and I fail to understand what can be happening here. From what I understand, chain CILIUM_TRANSIENT_FORWARD is only added/removed by TransientRulesStart() and TransientRulesEnd() (defined just below the former). But TransientRulesStart() begins by calling TransientRulesEnd(), so the chain should always be deleted before we try to create it again.
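
For illustration, here is a minimal standalone sketch of that ordering: clean up any leftover chain first, then create it. The chain and table names come from the logs above; the helper and everything else are illustrative assumptions, not the actual Cilium code.

package main

import (
	"fmt"
	"os/exec"
)

const chain = "CILIUM_TRANSIENT_FORWARD"

// run shells out to iptables the way the agent does and surfaces the
// combined output on failure.
func run(args ...string) error {
	out, err := exec.Command("iptables", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("iptables %v: %v (%s)", args, err, out)
	}
	return nil
}

func main() {
	// Cleanup first, errors ignored: this mirrors the quiet
	// TransientRulesEnd() call at the top of TransientRulesStart().
	_ = run("-w", "5", "-t", "filter", "-F", chain) // flush rules
	_ = run("-w", "5", "-t", "filter", "-X", chain) // delete chain
	// Then creation; this is the step failing with "Chain already exists".
	if err := run("-w", "5", "-t", "filter", "-N", chain); err != nil {
		fmt.Println("cannot add custom chain:", err)
	}
}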

We could imagine that the agent crashed for whatever reason while those transient rules were up, and that we then failed to delete the chain on the next restart attempt. Failing to delete the chain might happen for a number of reasons; the ones I see (2. and 3. are demonstrated by hand in the sketch after the list):

  1. We error and exit before trying to delete the chain
  2. The chain is not empty
  3. There are rules in other chains referring to this chain
  4. The /run/xtables.lock is taken
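
A quick way to see reasons 2. and 3. in action by hand, assuming a stock iptables and a hypothetical TEST_CHAIN (run as root; illustration only):

package main

import (
	"fmt"
	"os/exec"
)

// try runs one iptables command and prints the result, so each failure
// mode below is visible in the output.
func try(args ...string) {
	out, err := exec.Command("iptables", args...).CombinedOutput()
	fmt.Printf("iptables %v -> err=%v out=%s\n", args, err, out)
}

func main() {
	try("-N", "TEST_CHAIN")
	try("-A", "TEST_CHAIN", "-j", "ACCEPT")  // make the chain non-empty
	try("-X", "TEST_CHAIN")                  // reason 2: fails, chain not empty
	try("-F", "TEST_CHAIN")                  // flush it
	try("-A", "FORWARD", "-j", "TEST_CHAIN") // reference it from FORWARD
	try("-X", "TEST_CHAIN")                  // reason 3: fails, still referenced
	try("-D", "FORWARD", "-j", "TEST_CHAIN") // drop the reference
	try("-X", "TEST_CHAIN")                  // now succeeds
}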

For 1., TransientRulesEnd() is defined as follows:

// TransientRulesEnd removes Cilium related rules installed from TransientRulesStart.
func (m *IptablesManager) TransientRulesEnd(quiet bool) {
	if option.Config.EnableIPv4 {
		m.removeCiliumRules("filter", "iptables", ciliumTransientForwardChain)
		transientChain.remove(m.waitArgs, quiet)
	}
}

And we do not exit if m.removeCiliumRules() fails. Then transientChain.remove() attempts only two things, a flush and a delete; see the next point.

For 2., we try to flush the chain before deleting it. Flushing might fail too, but as I understand from the case where we observed the issue, the chain was empty, so I don't understand why flushing would fail (any more than I understand why deleting would fail). Note that trying to flush an empty chain does not make iptables return an error.

Point 3. is not supposed to happen, and as I understand the current case, it didn't.

For 4., the lock might be held by some other application, but in that case we would see a different error message stating that the lock is held. This would happen before iptables realises that the chain we are trying to create already exists (I checked this on my setup). Even if the lock was held just when we attempted to remove the chain, subsequent attempts (when trying to restart the Cilium agent again and again) would be very unlikely to hit the same contention every time.

So at this point I have no idea why this chain is still present in the table. Note that we silence the logs when trying to delete it while TransientRulesEnd() is called from TransientRulesStart(), so we have no indication of what failed. If possible, trying to remove the chain by hand might provide some information.

Not sure how to debug further at this point if we cannot find a reproducer. Talking with @joestringer, one of the best paths forward might be to check the error message we get when creating the chain fails, and to avoid erroring out if the failure is because the chain is already there. I have no idea whether the chain would be in a "normal" state and could still receive the transient rules if we failed to delete it; but the transient rules are there to avoid packet loss during restart anyway, and no restart is probably worse than no transient rules in that regard.
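
A minimal sketch of that mitigation, assuming we simply match on the iptables output (standalone illustration, not the eventual patch):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("iptables", "-w", "5", "-t", "filter",
		"-N", "CILIUM_TRANSIENT_FORWARD").CombinedOutput()
	if err != nil && strings.Contains(string(out), "Chain already exists") {
		// The chain is already there: warn and carry on instead of
		// aborting daemon initialization.
		fmt.Println("warning: transient chain already exists, continuing")
		err = nil
	}
	if err != nil {
		fmt.Printf("cannot add custom chain: %v (%s)\n", err, out)
	}
}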

joestringer commented
I say let's prepare the PR to add those logs; if it helps to gather more information next time, then that's at least a step forward.

joestringer commented May 6, 2020

I also note that, while the issue description shows this log at ERROR level:

ERROR [2020-04-29T19:29:58.059689483Z] cilium.io/agent: "Error while creating daemon" error="cannot add custom chain CILIUM_TRANSIENT_FORWARD: exit status 1" subsys=daemon

The corresponding code calls Fatal():

log.WithError(err).Fatal("Error while creating daemon")
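
For context, Cilium logs through logrus, and logrus's Fatal() emits the entry at level=fatal and then exits, so an unmodified binary should not produce this message at ERROR level. A minimal demonstration:

package main

import (
	"errors"

	log "github.com/sirupsen/logrus"
)

func main() {
	err := errors.New("cannot add custom chain CILIUM_TRANSIENT_FORWARD: exit status 1")
	// Emits the entry at level=fatal, then calls os.Exit(1).
	log.WithError(err).Fatal("Error while creating daemon")
}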

qmonnet added a commit that referenced this issue Jun 10, 2020
When Reinitialize()-ing the datapath, transient iptables rules are set
up to avoid dropping packets while Cilium's rules are not in place.

On rare occasions, a failure to add those rules has been observed (see
issue #11276), leading to an early exit from Reinitialize() and a
failure to set up the daemon.

But those transient rules are only there to lend a hand and keep packets
flowing for a very small window of time: it does not actually matter much
if we fail to install them, and it should not stop the reinitialization
of the daemon. Let's simply log a warning and carry on if we fail to add
those rules.

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
qmonnet commented Jun 10, 2020

@joestringer I could not find any version in our history where this log would be at error level, so that would suggest local modifications? Maybe those could be linked to the current issue, too?

Anyway, I finally made a PR so we can get more info from iptables if the issue occurs again.

qmonnet added a commit that referenced this issue Jun 10, 2020
When Cilium reinitialises its daemon, transient iptables rules are set
up to avoid dropping packets while the regular rules are not in place.

On rare occasions, setting up those transient rules has been found to
fail for an unknown reason (see issue #11276). The error message states
"Chain already exists", even though we try to flush and remove any
leftover from previous transient rules before adding the new ones. It
sounds likely that removing the leftovers is failing, but we were not
able to understand why, because we quieten the function to avoid
spurious warnings the first time we try to remove them (since none
exists yet).

It would be helpful to get more information to understand what happens
in those rare occasions where setting up transient rules fails. Let's
find a way to get more logs, without making too much noise.

We cannot warn unconditionally in remove() since we want removal in the
normal case to remain quiet. What we can do is log when the "quiet"
flag is not passed, _or_ when the error is different from the chain
being not found, i.e. different from the one we want to silence on
start-up. This means matching on the error message returned by
ip(6)tables. It looks fragile, but at least this message has not changed
since 2009, so it should be relatively stable and pretty much the same
on all supported systems. Since remove() is also used for chains other
than the one for transient rules, we additionally match on the chain
name to make sure we are dealing with transient rules before ignoring
the "quiet" flag.

This additional logging could be removed once we reproduce and fix the
issue.

Alternative approaches could be:

- Uncoupling the remove() function for transient rules and regular
  rules, to avoid matching on the chain name (but that sounds worse).
- Logging on failure for all rules even when the "quiet" flag is passed,
  but at "info" level instead of "warning". This would still require a
  modified version of runProg(), along with a modified version of
  CombinedOutput() in package "exec". Here I chose to limit the number
  of logs and keep the changes local.
- Listing the chain first before trying to remove it, so we only try to
  remove it if it exists, but this would likely add unnecessary
  complexity and latency.

Should help with (but does not solve): #11276
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
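
A sketch of the condition this commit message describes, assuming a helper that shells out to iptables; "No chain/target/match by that name." is the string ip(6)tables prints for a missing chain (the message the commit says has been stable since 2009). Illustration only, not the merged patch.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// removeChain deletes a chain, staying silent only when "quiet" is set
// and the failure is anything other than an unexpected error on the
// transient chain.
func removeChain(name string, quiet bool) {
	out, err := exec.Command("iptables", "-w", "5", "-t", "filter", "-X", name).CombinedOutput()
	if err == nil {
		return
	}
	notFound := strings.Contains(string(out), "No chain/target/match by that name")
	transient := name == "CILIUM_TRANSIENT_FORWARD"
	// Warn if the caller did not ask for quiet, or if this is the
	// transient chain failing for a reason other than "not found".
	if !quiet || (transient && !notFound) {
		fmt.Printf("warning: failed to delete chain %s: %v (%s)\n", name, err, out)
	}
}

func main() {
	removeChain("CILIUM_TRANSIENT_FORWARD", true)
}
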
qmonnet added a commit that referenced this issue Jun 10, 2020
In an attempt to catch #11276 in CI, let's add any message about a
failure to flush or delete the transient rules chain to the list of
badLogMessages we want to catch.

We need to filter on the name of the chain for transient rules to avoid
false positives, which requires exporting that name.

We also need to modify the logged error message, to avoid adding four
distinct logs to the list (combinations of iptables/ip6tables and
flush/delete).

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
aanm pushed commits that referenced this issue Jun 11, 2020
joestringer pushed commits that referenced this issue Jun 11, 2020
joestringer pushed commits that referenced this issue Jun 12, 2020
aanm pushed commits that referenced this issue Jun 12, 2020
@joestringer joestringer added need-more-info More information is required to further debug or fix the issue. and removed priority/high This is considered vital to an upcoming release. labels Jul 11, 2020
joestringer commented
We don't seem to have heard anything on this topic for almost a year, despite adding logging specifically to try to help investigate.

PR #16391 is actually looking to remove such logging along with the transient rules 'feature', so I doubt we'll take this issue any further.

If you see an issue like this in future, please file a brand new issue with the symptoms and details.
